[widget id="surstudio-translator-revolution-3"]

Developing or procuring AI? Here’s what you need to know

10 July 2025
Adrian Chotar, Partner, Sydney Dudley Kneller, Partner, Melbourne Sinead Lynch, Partner, Sydney Antoine Pace, Partner, Melbourne Mitchell Wright, Partner, Canberra Raisa Blanco, Special Counsel, Melbourne

AI tools are bringing paradigm-shifting opportunities that will transform the way we work and interact.

In a recent McKinsey survey, 78% of respondents indicated that their organisation uses at least some AI tools – an increase from only 50% prior to the generative AI boom of late 2022, spurred on by the public release of OpenAI’s ChatGPT. Generative AI use has also skyrocketed, with 71% of respondents now using this technology in some capacity, compared to only 33% just two years ago.

At the risk of prematurely pushing you into the ‘trough of disillusionment’, as defined in Gartner’s famed ‘hype cycle’, we believe there’s a bit more to the puzzle.

In this article, we set out key technical, legal and ethical risks that can arise depending on your architecture and integration decisions when developing or procuring AI tools.  With that in mind, we walk through internal guardrails and vendor management tips to help you proactively mitigate those risks, ensuring you responsibly develop and procure AI tools in line with community expectations.

Risks

We see risks cropping up in two key areas of your AI workflow – architecture risks related to the type of system you choose, and integration risks related to the way in which you implement that system.

Architecture risks

The Federal Government’s voluntary AI safety standard identifies that AI tools can be either General purpose AI systems (GPAI) or ‘narrow’ AI systems (Narrow AI).  Although voluntary, this is a helpful framework for identifying broad categories of AI tools, which each bring unique risks.

GPAI systems are more flexible, trained to handle a broad range of tasks (like OpenAI’s ChatGPT, Google’s Gemini or Microsoft’s Copilot).  However, this breadth and flexibility may pose a number of challenges, including in respect of:

  1. Precision – The tool may provide helpful generic information but perform poorly on specialised tasks;
  2. Hallucination – The tool may generate plausible but inaccurate information, given it relies on non-specific training data, and often generates outputs based on the probability of a string of words being helpful rather than being factually correct; and
  3. Integration – Broadly integrated tools can create new avenues for malicious activity, including via ‘prompt injection’ which can cause GPAI tools to behave in unexpected ways and release confidential information.

By contrast Narrow AI systems are more targeted, trained to perform specific tasks (like a tool that generates a risk rating or employment suitability decision, based on certain datapoints).  These tools carry unique risks, including regarding:

  1. Overfitting – The tool may accurately handle specific tasks but not generalise to all your intended use-cases;
  2. Fragility – The tool’s utility may degrade where inputs and integrations vary even slightly from training data or environments; and
  3. Context – The tool’s narrow capabilities and training data may mean it lacks awareness of your commercial, operational and legal context.

Integration risks

Regardless of whether a system is better classified as GPAI or Narrow AI, the way that AI system is integrated into your workflow can also pose a number of risks, including in respect of:

RiskExplanation
Accountability and black boxesOpaque algorithmic decision-making may yield helpful outputs but is not readily explainable to stakeholders and obscure liability in complex or outsourced supply chains. This gives rise to regulatory non-compliance risks (including in respect of critical infrastructure and essential services obligations).
BiasBiased models and data may skew outputs, raising accuracy, fairness and discrimination risks. This could also include risks of contravening prohibitions against unfair practices under the Australian Consumer Law, such as misleading and deceptive conduct, unconscionable conduct or breach of statutory guarantees.
CybersecurityAI tools may not be sufficiently secure, raising regulatory non-compliance risks relating to specific laws and regulations that impose prescriptive risk management and cybersecurity obligations.
EnvironmentAI tools require significant energy and water to function, raising sustainability risks and contractual non-compliance in respect of any net zero or decarbonisation obligations.
Intellectual PropertyComplex licensing arrangements may threaten ownership of models, input data, and generated outputs. Further complexity will arise to the extent that input data and output data involve personal information.
PrivacyAI tools may process and handle individuals’ personal information, raising privacy and data breach risks. The nature of certain AI tools may run contrary to the data minimisation principle and transparency requirements under the Privacy Act 1988 (Cth).

Given these risks, we expect that the government and regulators will consider introducing specific laws and regulations in respect of the development and implementation of AI tools. This is aligned with community expectations, noting that a recent survey in the UK showed that 75% of the public believed that the government and regulators should oversee AI safety and 88% of the public considered it appropriate that the government and regulators should have the power to stop the use of an AI tool if it is deemed to pose a risk of serious harm to individuals.

In the meantime, it is critical that organisations keep these risks in the forefront of decision-making. Importantly, although these risks arise where AI tools are implemented, responsibility for managing them doesn’t just fall to the organisation using the relevant tool. Rather, these risks pervade the AI ecosystem. For example:

  1. When developing AI – your customers will turn to you to provide a tool that meets their needs. This includes providing a tool that;
    • empowers them to adequately manage the ethical risks of fairness, bias and accountability;
    • assists them in complying with their regulatory and contractual obligations, including in respect of privacy (or at least does not cause them to breach those obligations);
    • protects their valuable IP and confidential information;
    • is reasonably secure; and
    • supports sustainability goals.
  1. When procuring AI – you will need to juggle (often competing) interests from your organisation and customers/clients. This means you would need to identify tools that:
    • are capable of meeting commercial goals, while treating stakeholders fairly and without bias;
    • ensure clear accountability for business decisions;
    • support your organisation’s compliance with its regulatory and contractual obligations;
    • protect customers/clients’ privacy, and your organisation’s IP and confidential information;
    • are reasonably secure; and
    • align with your organisation’s sustainability commitments.

Mitigation strategies

Leading the charge on AI gives you the opportunity to proactively mitigate risks of developing and adopting AI tools. We believe this is best done with a combination of strong internal guardrails and vendor management strategies.

Internal guardrails

Internal guardrails set the direction for how your organisation develops or procures AI tools. A unified approach will ensure all internal stakeholders are alive to key risks, and that they are managed in a consolidated way.

We recommend:

  1. Completing a privacy review – Proactively identify and mitigate privacy risks. Conduct privacy impact assessments early in the development or deployment process to understand how your proposed AI tool will affect the collection, use, and disclosure of personal information. Pay close attention to risky overseas transfers – such as to overseas data centres.
  2. Consolidating IP licences – Protect your IP, and control licensing to stakeholders. Update relevant agreements with customers, staff and suppliers to ensure ownership of models, input data and generated outputs is managed as intended.  This is particularly critical for public-facing tools that collect or share information with third parties.
  3. Preparing ethical frameworks and guardrails – Reflect on stakeholder expectations and target your approach. Develop a holistic AI policy to address stakeholders’ varied expectations as to how you manage AI-related risks – such as ensuring automated decisions are reviewable and unbiased, and that data is secure. Ensure all staff understand their legal, social and environmental obligations, and that strong accountability mechanisms are in place.

Vendor management

You will engage a number of vendors regardless of whether you are developing or procuring an AI system. For example, developers may engage third party cloud service providers, datacentres and developers to ensure you build a system can function as expected. Organisations procuring AI will generally engage the developing organisation itself, as well as others such as consultants (or lawyers!) to assist in transitioning key systems.

Vendors of each kind should be managed closely to mitigate risks, particularly through:

  1. Due diligence checklists – Assess if vendors are up to scratch during procurement. Develop a checklist to assess how your vendors comply with key legal and ethical requirements, including your own AI policy and key privacy laws.  Implement this checklist early on to proactively identify risks, regardless of whether the vendor supplies AI tools, or staff to assist in development.
  2. Security questionnaires – Complete a targeted review of vendors’ security practices. Design a targeted questionnaire to assess vendors’ security controls, data protection measures, incident response plans, and compliance certifications (e.g. ISO 27001).  Tailor questions to your proposed AI tool’s risk profile, and the kinds of data the AI tool (or developers) may handle.
  3. Dynamic oversight – Regularly reassess vendors as your needs change. AI tools are shiny and new, but it is critical to apply a commercial lens to any procurement decisions.  Regularly reassess your goals, and vendors’ compliance, to reflect your organisation’s changing needs, changes in applicable legislation, and evolving technology use patterns.

Where to next?

Managing an emerging technology is, of course, an ongoing challenge. We’ll be preparing more useful tips and tricks going forward to help set you up for AI-powered success but, to get you started, we’ve synthesised our key recommendations in a handy reference document here. 

In the meantime, please do not hesitate to contact our team if you would like any further assistance, or to explore these ideas further.

If you found this insight article useful and you would like to subscribe to Gadens’ updates, click here.


Authored by:
Dudley Kneller, Partner
Raisa Blanco, Special Counsel
Chris Girardi, Associate

This update does not constitute legal advice and should not be relied upon as such. It is intended only to provide a summary and general overview on matters of interest and it is not intended to be comprehensive. You should seek legal or other professional advice before acting or relying on any of the content.

Get in touch