
Contracting for AI calls for a nuanced approach

10 September 2025
Sinead Lynch, Partner, Sydney

Taking a lead from the immortal words of Wet Wet Wet’s Marti Pellow, it is not ‘love’ but AI that is currently ‘all around us… it’s everywhere [we] go’!

From the hyped introduction of gen AI in 2023, through the wave of experimentation that gripped 2024, experts are now calling FY25/26 the year of accelerated enterprise AI adoption and implementation.[1]

McKinsey research estimates that new gen AI use cases worldwide are expected to unleash between $240 billion and $460 billion in impact in high tech.[2] And for Australia’s nascent AI industry, the Albanese Government’s recent recognition of ‘economic security and sovereign capability’ appears to be an encouraging sign for home-grown innovation and resilience.

Many in Australia have already moved to seize the moment and are adopting gen AI-related solutions in their organisations at scale.[3] Others are still cautiously experimenting, immobilised at the pilot or back-end trial stage, waiting for regulatory and ethical certainty to unfold. This reluctance is not unfounded. Generative AI models, in particular, represent a greenfield area rife with technological change and regulatory uncertainty. Whilst the Australian Government is ‘considering options’ to impose mandatory guardrails on the development and deployment of ‘high-risk’ AI, the Productivity Commission’s recent Interim Report on ‘Harnessing Data and Digital Technology’[4] would seem to support the view that these guardrails are still a long way off and, if implemented, are likely to apply only in the strictest sense. Yet widespread industry innovation and adoption of AI solutions will not wait.

As such, when developing or procuring AI models and solutions, businesses and their legal practitioners must be alive not only to the transformative opportunities that AI presents, but also to the inherent legal (and ethical) risks and challenges that must be captured and mitigated. Given that many of these AI solutions are being procured or licensed from third parties, the manner in which legal issues are navigated in-contract is critical.

Automation or AI?

First off, how ‘AI’ is defined in the contract – including its technical application and scope – is of paramount importance. From simple automation solutions (which can sometimes be confused with ‘artificial intelligence’ solutions), to traditional ‘rule-based’ or ‘narrow’ AI solutions, to the more complex generative AI and ‘agentic AI’ solutions, the technical capabilities and proposed use cases can vary considerably for every business. So too can the technical, legal and commercial considerations that apply to the procurement and/or development of these solutions.

A well-defined solution in the contract is therefore critical for both parties and underpins the lawyer’s ability to properly document each party’s risk and liability, as well as the treatment of data (input and output), intellectual property rights, performance metrics, transparency on outcomes, not to mention mitigating potential disputes down the track.

Let’s look first at traditional ‘narrow’ AI solutions (such as machine learning or robotic process automation), which are generally deterministic in nature – designed to perform specific, repetitive tasks more intelligently, or to make decisions within specific rules defined by human programmers. These solutions learn from and make decisions based on defined data inputs and rule-based task instructions. They generally have little or no creative capacity and, as such, the legal issues – including the contracting and licensing issues that arise – tend to align more closely with traditional technology procurement.
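To make the contrast concrete, the sketch below (purely illustrative – the rules, thresholds and function names are invented for this example) shows the kind of fixed, human-authored logic that sits behind a simple rule-based automation step: the same input always produces the same, fully traceable output.

```python
# Illustrative only: a hypothetical rule-based (deterministic) automation step.
# The rules and thresholds are invented for this sketch; the point is that the
# logic is fixed by human programmers, so the same input always yields the
# same, fully traceable output.

def route_invoice(amount: float, vendor_verified: bool) -> str:
    """Route an invoice under fixed, human-authored rules."""
    if not vendor_verified:
        return "manual_review"      # rule 1: unverified vendors are always reviewed
    if amount <= 1_000:
        return "auto_approve"       # rule 2: small, verified invoices are auto-approved
    return "finance_approval"       # rule 3: everything else is escalated

if __name__ == "__main__":
    print(route_invoice(250.00, vendor_verified=True))   # auto_approve, every time
    print(route_invoice(250.00, vendor_verified=False))  # manual_review, every time
```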

But with the emergence of more evolved AI solutions, including generative AI and ‘agentic AI’ models – which involve more complex, intelligent systems – the traditional approach to documenting the procured solution is fundamentally challenged.

Generative AI models are typically based on learnt patterns – large language models (LLMs) that can generate new content based on an initial training data set – which then ‘self-learn’ to create new natural language, images and/or text. These LLMs operate without any rule-based requirements (gen AI).

Further, ‘agentic AI’ solutions involve the use of technical ‘agents’ which make decisions based simply on pattern recognition and probability, take complex (self-learned) actions, and carry out those decisions independently of human direction or oversight (agentic AI).
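As a very rough illustration (the model call is a stub and the tools are hypothetical – this is not any particular vendor’s framework), the loop below shows the essential feature of an agentic system: the software itself selects and executes the next action, with no human approving each step – which is precisely what makes contractual allocation of responsibility for those actions so important.

```python
# Illustrative only: a skeletal 'agentic' loop. The model call is a stub and the
# tools are hypothetical; the point is that the system itself chooses and executes
# actions, with no human approving each step.

def model_decide(goal: str, history: list[str]) -> dict:
    """Stand-in for a probabilistic model choosing the next action."""
    return {"tool": "send_email", "args": {"to": "supplier@example.com"}}  # hypothetical

TOOLS = {
    "send_email": lambda args: f"email sent to {args['to']}",   # hypothetical tool
    "update_crm": lambda args: "crm record updated",            # hypothetical tool
}

def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):                  # the agent, not a human, drives the loop
        decision = model_decide(goal, history)
        result = TOOLS[decision["tool"]](decision["args"])
        history.append(result)
    return history

if __name__ == "__main__":
    print(run_agent("chase overdue invoices"))
```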

When contracting for such evolved solutions, lawyers must ask more nuanced questions of stakeholders – to understand the use case(s), the technical solution, and the inputs and outputs proposed for the AI model – and their scope and usage.

For customers, it is critical to identify what the solution is designed to do – and what it is intended to do for your organisation. What environment will the solution be deployed in (e.g. on-premises, public/private cloud-based infrastructure, or a hybrid)? How will it integrate with existing systems? Is it reliant on a third party or other contracted solution? What performance standards are expected (including accuracy rates, processing speeds and scalability)? How will the AI model be trained, and what testing methodology will be used?

For vendors, can customer requirements be clearly delineated? How will customer training data or other inputs be ring-fenced? How will learnings or improvements derived from customer data be managed?  How will exclusivity requests be managed? Have the AI model’s capabilities, limitations and use parameters been fully and properly articulated? What is the scope of the customer’s ‘human oversight’ requirements, including for its personnel (if any)?

We see new terminology emerging alongside traditional IT procurement definitions in-contract, such as the ‘Supplier Software’, the ‘Solution’, ‘Third Party Services’ and ‘Solution-generated IP’ – all of which require careful drafting to get right. Retaining ownership of the underlying model whilst negotiating a customer’s right to use or adapt the model will be key for vendors. Confirming and mitigating the scope of liability for outputs will also be front of mind for vendors, as will documenting the IP rights arising in outputs that they may wish to reuse across clients or markets, with related commercialisation objectives.

Documenting these responses appropriately, and articulating the nuanced technical, commercial and legal risks applicable to AI models in-contract, requires a unique approach. From a practical perspective, given the volume of additional definitions, scope and particulars to be documented in transactions or services involving AI solutions, we are seeing separate ‘AI clauses’ forming addendums or annexes to a contract (much like the GDPR-style addendums for data protection) to bring clarity to the terms.

Liability & Risk

Contracting for AI requires practitioners to take account of a growing, decentralised suite of vendors and third-party players in the ‘AI eco-system’, which can make it extremely difficult to attribute risk and liability when things go wrong.  Whether deploying AI models as part of an IP licensing deal, an M&A transaction, business process or IT outsourcing, development or services contract, or otherwise, it is critical to clearly establish in the contract the roles, responsibilities and controls for the acts and omissions of the various parties (including their complex up and down-stream supply chains).

As with all contracting, appropriate allocation of risk and liability involves an assessment of what could go wrong, how likely it is that the risks would materialise, the potential quantum of damages, who is best able to manage the risks, and whether the risk-versus-reward equation stacks up (i.e. are the fees paid by the customer sufficient to compensate the vendor for the level of risk the customer may wish the vendor to bear?). In AI-contracting, this assessment becomes more complex, fuelled further by the unpredictability of gen AI and agentic AI models.

Vendors and third parties increasingly operate under outcome-based, agile, pay-as-you-go type arrangements, with ‘as is’ warranty and liability carve-outs and limitations for the solutions and services provided – and their AI-generated content. And we are starting to see an increasingly vendor-led approach to contracting for these more evolved AI solutions.

From a customer’s perspective, it remains essential to ensure that the vendor does not seek to exclude liability for matters within its control. But identifying what is within the vendor’s control can be challenging, given the increasing opacity and interdependence of the decision-making process of an evolved AI solution – where the tools and systems underpinning a gen AI or agentic AI solution can develop their own logic and decision-making, sometimes making it impossible to trace outcomes back to human decisions and accurately attribute responsibility: the so-called ‘black box’ problem.

Coupled with this, these evolved AI tools are relatively new and unproven, and vendors continue to heavily restrict their liability in-contract. The enterprise terms of use of the large LLM providers, such as Microsoft Copilot and OpenAI’s ChatGPT, communicate clearly that their services are provided on an “as is” basis, with warranty disclaimers in respect of the services offered, and with liability and responsibility excluded for specific outcomes or impacts arising from a customer’s use of their services. Whilst a limited IP indemnity may sometimes be offered by the larger providers covering third-party IP infringement claims, these are generally subject to specific restrictions and carve-outs (e.g. requirements on the use of content filters and safety systems in-product). Other general liability provisions are usually excluded.

Vendors are also increasingly seeking to shift greater liability to customers in respect of a customer’s training dataset – to hold the vendor harmless where a customer’s data is inaccurate or biased (e.g. evolved solutions can unintentionally perpetuate biases inherent in training datasets provided by the customer, raising not just legal issues but also ethical considerations). These greater liabilities can potentially be balanced against higher-priced enterprise options with lower liability caps. In some cases, cross-indemnities are also being sought for third-party claims arising in connection with a customer’s use of an evolved AI solution, e.g. where a customer has breached a vendor policy when using the AI solution or provided inaccurate or biased training data.

Understand the Model

To address these risks, customers are prioritising their level of understanding of the AI model, including the behaviour of the AI solution and its technical limitations, and the roles and responsibilities of each party. For example, being clear in-contract where additional technologies (and providers) may be engaged on data handling (e.g. if retrieval augmented generation (‘RAG’) tools are required to provide contextual outputs; or synthetic data or content is necessary) or where pre-testing of training data is required to weed out bias or discrimination.
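By way of illustration only, the sketch below shows a bare-bones RAG flow, with a toy keyword retriever and a stubbed model call standing in for the vendor’s components; the document store, function names and query are invented. The point it illustrates is that RAG typically introduces an additional data store – and often an additional provider – into the data-handling chain, which the contract should surface.

```python
# Illustrative only: a minimal retrieval augmented generation (RAG) flow.
# The retriever is a toy keyword matcher and the model call is a stub; a real
# deployment would use the vendor's retrieval and generation components.
import re

DOCUMENT_STORE = {  # hypothetical customer knowledge base
    "policy-12": "Refunds are available within 30 days of purchase.",
    "policy-47": "Warranty claims require proof of purchase.",
}

def tokens(text: str) -> set[str]:
    """Lower-case word set, stripping punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Toy retriever: rank stored documents by word overlap with the query."""
    scored = sorted(
        DOCUMENT_STORE.values(),
        key=lambda doc: len(tokens(query) & tokens(doc)),
        reverse=True,
    )
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Stand-in for the LLM call; a real system would call the vendor's model here."""
    return f"[model answer grounded in]: {prompt}"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))          # retrieved customer data...
    return generate(f"Context:\n{context}\n\nQuestion: {query}")  # ...is passed to the model

if __name__ == "__main__":
    print(answer("How many days do customers have to request a refund?"))
```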

Achieving transparency and explainability in-contract may be challenging, but it can be captured in a number of ways – e.g. including requirements to clearly disclose all training dataset sources and categorisations; documenting the interpretations and limitations of simpler AI models; requiring vendors to use technical means to explain more complex AI models and their outputs (e.g. explanations of coded algorithms or training models, and the limitations or errors they may introduce); and/or incorporating technical labelling and/or traceability on AI-generated content (used especially in regulated sectors).
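As one simple illustration of traceability labelling (the field names below are hypothetical and not drawn from any particular standard; regulated sectors would follow the applicable labelling scheme), AI-generated content can be wrapped with provenance metadata recording the model, dataset version and generation time:

```python
# Illustrative only: attaching traceability labels to AI-generated content.
# The field names are hypothetical; the aim is an auditable record of what
# produced the output and when.
import json
from datetime import datetime, timezone

def label_output(content: str, model_id: str, dataset_version: str) -> str:
    """Wrap generated content with provenance metadata for audit purposes."""
    record = {
        "content": content,
        "generated_by": model_id,
        "training_dataset_version": dataset_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }
    return json.dumps(record, indent=2)

if __name__ == "__main__":
    print(label_output("Draft clause 14.2 ...", model_id="vendor-model-v3", dataset_version="2025-06"))
```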

This level of transparency is critical. Understanding why an AI solution reached a particular decision or output is central to managing the risks associated with the overarching solution – and, in the contract, to identifying what is (or should be) within a particular vendor or eco-system player’s control, to which appropriate contractual risk and liability should attach. Accordingly, customers will want to ensure that warranties and indemnities in the contract extend to address these issues and reflect the contributions of vendor and third parties alike.

Monitoring and audit rights (including to undertake ‘bias’ audits) to increase transparency are also of increasing importance, not only to support visibility of the decision-making pattern of the AI solution (which could be buried in a ‘neural’ LLM network), but also to verify compliance with ethical and legal standards – and increasingly in certain regulated sectors to include information-sharing obligations for regulators.

Hallucination Disclaimers

Customers in particular will need to pay close attention to, and scrutinise, vendor warranties and disclaimers (including the commonly used ‘hallucination disclaimers’) and determine the scope of indemnification that may be available where an AI developer or vendor is in material breach. Incorporating rights to conduct quality assurance checks to confirm the accuracy and reliability of AI-derived data and outputs is also helpful.

When a mistake or error occurs with an AI model – e.g. an AI hallucination, or ‘fake news’ – it is likely that it can go undetected for long periods of time, and/or be repeated many times before detection, depending on its use – thus increasing the severity of the error. The potential for higher-value claims is therefore also increased, and possible tort liability (e.g. negligence) for mistakes in software may also arise. A well-drafted contract must contemplate these issues and provide appropriate protections for the parties.

As a final touchpoint on liability, it is critical to evaluate the usual ‘compliance with laws’ provision that we see in general services or tech-based contracting. In the case of AI, we do not currently have any set of prescribed laws in place in Australia (the mandatory guardrails proposed for ‘high-risk’ AI solutions remain a long way off), although it is clear other laws can and do apply to the relevant risks. Taking account of the shifting regulatory sands here and elsewhere, negotiating remedies to vary or terminate the contract if a change in law materially restricts a party’s ability to comply with its obligations, or the functionality of an AI solution, is strongly recommended.

Data Security & Privacy

Data is of course a crucial factor when contracting for AI solutions, and particularly so for evolved AI solutions which are dependent on large-scale training data to derive results. These vast datasets are used to learn patterns, make predictions, and generate outputs. The quality, scope, and provenance of this data directly impact both an AI model’s performance (e.g. accuracy, bias, reliability) as well as its risk exposure (to misuse, discriminatory outcomes, ‘false positives’ / hallucinations etc). And as an organisation’s use of AI increases, there is a proportionate increase in the scope and volume of data necessary to ‘feed’ the model.

The scope, volume, ownership and usage rights of relevant ‘input’ and ‘output’ datasets, as well as derived data and learnings from the AI solution, are all therefore equally critical to itemise. Clarity is required on the specific datasets, categorisations and data flows involved. Will customer personal information be used, and to what extent? How will data be processed or analysed? Who will host, access, use and/or store the data? To whom will data be disclosed? Will personal data be transferred or handled overseas?  Will synthetic data be used? How will transparency and explainability standards be met? What operational measures are embedded, or need to be included in customer requirements to mitigate bias or discrimination risk?

Compliance with applicable privacy laws by all parties must be articulated in-contract, covering not only the usual obligations on collection, use, disclosure and storage of personal information, but also breach notification obligations and timings, and regulator access and audit rights. But practical complexities can arise where personal information incorporated in these vast training data sets, or in derived outputs, requires consent-based authorisation from customers – a particular challenge for evolved AI solutions, which lack transparency on decision-making or on how derived data or learnings were arrived at. The risks of signing up to blanket ‘compliance with all privacy laws’ terms, and of obligations to procure consents to share personal data, take on new significance when contracting for such evolved technology.

To mitigate this, contracts (and procurement guidelines) with vendors for AI solutions should incorporate data transparency protocols and addendums, to include accurate recording and labelling of the data source inputs used and the data outputs derived; relevant terms about how, and on what, the relevant AI model will be trained; the parties who are likely to obtain access; the basis for such access; and/or to whom data will be disclosed (including if offshore) to enable provision of the vendor’s AI solution or related service. Restrictions on certain uses, and AI data governance protocols, are also becoming commonplace.

Mitigating the risks of cyber-attack and data breach scenarios will also be front of mind for both customer and vendor. Ensuring that appropriate technical and organisational security controls are included in contract addenda when procuring or deploying AI is now commonplace – not least arising from the flow-on impacts of international regulations such as the GDPR and DORA, but also back home under the recent Privacy and Other Legislation Amendment Act 2024 (POLA).

That said, identifying the standard of what is ‘appropriate’ information security risk management when deploying AI solutions is a key negotiation point in-contract – and not least a pricing point – whether compliance with accepted industry technical standards (such as ISO 27001) or other equivalent benchmarks is used. In any event, robust and more stringent third-party data-sharing and information security controls are required to safeguard the integrity and confidentiality of personal and sensitive information in datasets. This includes advanced encryption techniques (for data in transit and at rest) and access control provisions – including multi-factor authentication, role-based access controls and user permissions – as foundational requirements. Treating learnings and AI outputs as confidential until their sensitivity is known is also recommended.
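For flavour only, the sketch below illustrates two of the baseline controls mentioned above – a role-based access check and encryption of data at rest. It assumes the third-party Python ‘cryptography’ package, and the roles, permissions and data are hypothetical; a production deployment would use managed key stores and the organisation’s identity platform.

```python
# Illustrative only: baseline controls of the kind a contract might mandate -
# role-based access checks and encryption of data at rest. Assumes the
# third-party 'cryptography' package (pip install cryptography); roles and
# permissions are hypothetical.
from cryptography.fernet import Fernet

ROLE_PERMISSIONS = {           # hypothetical role-based access control matrix
    "data_scientist": {"read_training_data"},
    "auditor": {"read_training_data", "read_audit_logs"},
}

def authorised(role: str, permission: str) -> bool:
    """Role-based access check applied before any data is released."""
    return permission in ROLE_PERMISSIONS.get(role, set())

key = Fernet.generate_key()           # in practice, held in a managed key store
vault = Fernet(key)

record_at_rest = vault.encrypt(b"customer training record")   # encrypted at rest

def read_record(role: str) -> bytes:
    if not authorised(role, "read_training_data"):
        raise PermissionError(f"role '{role}' may not read training data")
    return vault.decrypt(record_at_rest)

if __name__ == "__main__":
    print(read_record("auditor"))
```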

Ownership of Data Outputs?

Finally, the question of ownership of data – in particular derived data and learnings from the AI solution – is often the subject of intense contract negotiation. As owners or licensors of a proprietary AI solution, vendors seek to own and/or obtain unrestricted rights to use and commercialise the solution outputs, together with the associated learnings and improvements. This is key for vendors so that they can further train the AI model and/or leverage learnings for their own benefit and that of their other customers in the market. This may be well founded in certain cases – for example, where a vendor has an eye on potential investment in or sale of its business or solution, as investors tend to focus in key contracts on the parameters of what has been agreed around exclusivity and non-compete protections.

But customers generally resist any such broad rights and seek to retain ownership in-contract of solution/data outputs and solution learnings, on the basis that these were achieved using customer-owned training data sets and prompts. Customers will also have strict regulatory and contractual obligations to manage confidential and personal information (e.g. employee, customer or other party data) used in the training data and, as such, any rights to seek non-exclusivity from a vendor are generally resisted; instead, reasonable non-compete restrictions and/or lock-in provisions are usually incorporated in-contract to protect the customer’s investment or competitive advantage.

Other contracting considerations

Another significant touchpoint when contracting for AI is defining the scope, ownership and licensing of applicable intellectual property rights (IPR). Services and other tech-based contracts need to address ownership of algorithms and LLMs, using the usual pre-existing and newly developed intellectual property constructs, as well as ownership of enhancements and customisations to the solution. New IPR considerations for AI include rights in the training datasets, derived outputs, usage rights and learnings from the use of the solution, all of which must also be catered for in the contract. Third-party licensing arrangements specifying rights to use, modify and distribute relevant training data and outputs also need to be considered, as do the usual copyright laws and exceptions.

Conclusion

In an AI-driven environment – and as many customers scale up their adoption of enterprise AI solutions – considering many of these issues now will better place customers, vendors and third parties alike to leverage and capitalise on the opportunities provided by AI models (whether traditional or evolved), whilst ensuring that potential legal and ethical risks and liabilities are mitigated in-contract.

If you found this insight article useful and you would like to subscribe to Gadens’ updates, click here.


Authored by:

Sinead Lynch, Partner
Kalarni Orr, Graduate

[1] AI in 2025: Predictions from Industry Experts

[2] Beyond the hype: Capturing the potential of AI and gen AI in TMT – McKinsey & Company

[3] Although in the main, many are licensing or deploying third-party large language models (LLMs) from Big Tech providers such as Microsoft Copilot, OpenAI ChatGPT, Google Gemini etc., rather than locally developing or otherwise deploying gen AI.

[4] Interim Report – Harnessing data and digital technology – Productivity Commission

This update does not constitute legal advice and should not be relied upon as such. It is intended only to provide a summary and general overview on matters of interest and it is not intended to be comprehensive. You should seek legal or other professional advice before acting or relying on any of the content.
