Navigating the European Union's new AI Act: Implications for Australian businesses

04 April 2024

Artificial Intelligence (AI) has become the cornerstone of modern technological advancement. We are experiencing a fundamental shift out of the two decades-long ‘people-internet era’ into the new ‘algorithmic decision-making era’.1

AI is increasingly intertwined with fundamental aspects of our lives, from delivering personalised customer service experiences to automating medical diagnostics. But with the growing influence and dominance of algorithmic decision-making comes a pressing need for comprehensive regulation to ensure ethical and responsible deployment.

Consumer confidence in AI remains low.2 Consumers’ attitudes have been coloured by the speed of AI deployment, the poor quality of many early-stage Generative AI applications, and overarching data privacy concerns.

The introduction of the European Union's (EU) AI Act this month stands as a landmark initiative, aimed at addressing the risks associated with AI while fostering innovation and growth. The European Parliament approved the Artificial Intelligence Act on 13th March 2024 in a bid to ensure safety and compliance with fundamental rights.

In short, the legislation is a game changer. The Act categorises AI applications into three distinct risk levels: unacceptable risk, high-risk, and low or minimal risk. Just as the EU’s General Data Protection Regulation (GDPR) fundamentally changed corporate attitudes to privacy globally, the implications of the AI Act will extend far beyond the boundaries of the European Union and impact organisations worldwide. For businesses operating within the EU or seeking access to its markets, compliance with the new AI Act is imperative. Non-compliance could lead to significant barriers to market entry and reputational damage.

So, how can Australian businesses prepare? The EU is Australia's third-largest trading partner, accounting for A$60.9 billion (8.7%) of Australia's total goods trade. As AI embeds itself into business processes, demonstrating compliance with the AI Act will be essential for firms seeking to gain and retain access to the EU market. Australian businesses will need to navigate the Act's implications while learning, in parallel, how to adopt and use AI to drive efficiency and competitive edge. This is particularly important for Australian businesses already operating within the EU's jurisdiction.

Applications deemed to pose unacceptable risks, such as government social scoring systems reminiscent of those seen in China3, are unequivocally banned under the Act. This prohibition underscores the EU's commitment to safeguarding individuals' rights and preventing potential harm arising from AI misuse.

Penalties for non-compliance are hefty, particularly for the use of systems (or the placing of systems on the market) prohibited under the AI Act due to the unacceptable level of risk that they pose. These breaches are subject to fines of up to €35 million, or up to 7% of annual worldwide turnover for companies. This surpasses the penalties under the GDPR, making these some of the highest fines for non-compliance in the EU. The second-highest tier applies to non-compliance with specific obligations of providers, representatives, importers, distributors, deployers, notified bodies, and users, attracting fines of up to €15 million, or up to 3% of annual worldwide turnover for companies.4

For AI applications classified as high-risk, which may include tools like CV-scanning algorithms for job applications, the AI Act imposes stringent legal requirements. These requirements mandate transparency, accountability, and robust safeguards to mitigate risks and protect individuals' rights. By subjecting high-risk AI systems to such regulatory scrutiny, the Act aims to foster greater public trust in AI technologies.

AI applications categorised as low or minimal risk are subject to less stringent regulation under the AI Act. While not explicitly governed by specific legal requirements, organisations deploying such AI systems are still expected to adhere to overarching principles of responsible AI use, with a focus on ethical considerations and human oversight. This approach acknowledges the diversity of AI applications and seeks to avoid stifling innovation while promoting responsible deployment.

Australian organisations should take proactive steps to prepare for the possibility that the EU AI Act becomes a de facto global standard (much like the GDPR):

  • Conduct comprehensive audits of existing AI tools, mapping information flows where AI exists and identifying privacy and security implications.

  • Develop clear policies and procedures outlining compliance requirements, ethical AI usage, and data privacy protection.

  • Carry out rigorous contract reviews with AI vendors to ensure genuine scrutiny of what third party developers are providing and how they might be processing, storing and using your company’s data.


1 Prof. Marek Kowalkiewicz, The Economy of Algorithms: AI and the Rise of the Digital Minions. Bristol University Press, 2024.

2 Consumer trust is the key to realising AI's full potential, World Economic Forum.

3 Katie Canales and Aaron Mok, Explained: China Social Credit System, Punishments, Rewards, 2022.