AI tools are bringing paradigm-shifting opportunities that will transform the way we work and interact.
In a recent McKinsey survey, 78% of respondents indicated that their organisation uses at least some AI tools – an increase from only 50% prior to the generative AI boom of late 2022, spurred on by the public release of OpenAI’s ChatGPT. Generative AI use has also skyrocketed, with 71% of respondents now using this technology in some capacity, compared to only 33% just two years ago.
At the risk of prematurely pushing you into the ‘trough of disillusionment’, as defined in Gartner’s famed ‘hype cycle’, we believe there’s a bit more to the puzzle.
In this article, we set out key technical, legal and ethical risks that can arise depending on your architecture and integration decisions when developing or procuring AI tools. With that in mind, we walk through internal guardrails and vendor management tips to help you proactively mitigate those risks, ensuring you responsibly develop and procure AI tools in line with community expectations.
We see risks cropping up in two key areas of your AI workflow – architecture risks related to the type of system you choose, and integration risks related to the way in which you implement that system.
Architecture risks
The Federal Government’s Voluntary AI Safety Standard distinguishes between general-purpose AI systems (GPAI) and ‘narrow’ AI systems (Narrow AI). Although the standard is voluntary, it provides a helpful framework for identifying broad categories of AI tools, each of which brings unique risks.
GPAI systems are more flexible, trained to handle a broad range of tasks (like OpenAI’s ChatGPT, Google’s Gemini or Microsoft’s Copilot). However, this breadth and flexibility may pose a number of challenges, including in respect of:
By contrast, Narrow AI systems are more targeted, trained to perform specific tasks (like a tool that generates a risk rating or an employment suitability decision based on certain data points). These tools carry their own distinct risks, including in respect of:
Integration risks
Regardless of whether a system is better classified as GPAI or Narrow AI, the way that AI system is integrated into your workflow can also pose a number of risks, including in respect of:
| Risk | Explanation |
| --- | --- |
| Accountability and black boxes | Opaque algorithmic decision-making may yield helpful outputs, but it is not readily explainable to stakeholders and can obscure liability in complex or outsourced supply chains. This gives rise to regulatory non-compliance risks (including in respect of critical infrastructure and essential services obligations). |
| Bias | Biased models and training data may skew outputs, raising accuracy, fairness and discrimination risks. This could also include risks of contravening prohibitions against unfair practices under the Australian Consumer Law, such as misleading and deceptive conduct, unconscionable conduct or breach of statutory guarantees. |
| Cybersecurity | AI tools may not be sufficiently secure, raising regulatory non-compliance risks under laws and regulations that impose prescriptive risk management and cybersecurity obligations. |
| Environment | AI tools require significant energy and water to operate, raising sustainability risks and risks of non-compliance with any contractual net zero or decarbonisation obligations. |
| Intellectual property | Complex licensing arrangements may create uncertainty over ownership of models, input data and generated outputs. Further complexity arises where input and output data involve personal information. |
| Privacy | AI tools may process and handle individuals’ personal information, raising privacy and data breach risks. The nature of certain AI tools may run contrary to the data minimisation principle and transparency requirements under the Privacy Act 1988 (Cth). |
Given these risks, we expect that the government and regulators will consider introducing specific laws and regulations in respect of the development and implementation of AI tools. This is aligned with community expectations: a recent UK survey found that 75% of the public believed the government and regulators should oversee AI safety, and 88% considered it appropriate for them to have the power to stop the use of an AI tool deemed to pose a risk of serious harm to individuals.
In the meantime, it is critical that organisations keep these risks at the forefront of decision-making. Importantly, although these risks arise where AI tools are implemented, responsibility for managing them doesn’t just fall to the organisation using the relevant tool. Rather, these risks pervade the AI ecosystem. For example:
Leading the charge on AI gives you the opportunity to proactively mitigate the risks of developing and adopting AI tools. We believe this is best done with a combination of strong internal guardrails and vendor management strategies.
Internal guardrails
Internal guardrails set the direction for how your organisation develops or procures AI tools. A unified approach will ensure all internal stakeholders are alive to key risks, and that those risks are managed in a consolidated way.
We recommend:
Vendor management
You will engage a number of vendors regardless of whether you are developing or procuring an AI system. For example, if you are developing a system, you may engage third-party cloud service providers, data centres and specialist developers to ensure the system you build functions as expected. Organisations procuring AI will generally engage the developing organisation itself, as well as others such as consultants (or lawyers!) to assist in transitioning key systems.
Vendors of each kind should be managed closely to mitigate risks, particularly through:
Managing an emerging technology is, of course, an ongoing challenge. We’ll be preparing more useful tips and tricks going forward to help set you up for AI-powered success but, to get you started, we’ve synthesised our key recommendations in a handy reference document here.
In the meantime, please do not hesitate to contact our team if you would like any further assistance, or to explore these ideas further.
If you found this insight article useful and you would like to subscribe to Gadens’ updates, click here.
Authored by:
Dudley Kneller, Partner
Raisa Blanco, Special Counsel
Chris Girardi, Associate