Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper

5. Regulating AI: the big picture

Overview

• The widespread use of AI has led to a range of regulatory responses across different countries. This includes broad, sector-wide responses to regulate the safe and ethical use of AI.

• This chapter provides an overview of broad AI regulatory frameworks in Australia and overseas. While Australia does not currently have AI-specific legislation, the Australian Government is considering regulatory options. It is necessary to understand this context before considering principles and guidelines specific to Victorian courts and tribunals.

Regulating AI in Australia

5.1 Our terms of reference require us to consider regulatory changes specific to courts and tribunals in this review. Broader legislative and regulatory change is beyond the scope of the project.

5.2 However, any court and tribunal specific approach will need to be aware of the broader regulatory framework. That includes developments by the Australian Government and lessons from international approaches to regulating artificial intelligence (AI).

5.3 There is no AI-specific legislation in Australia. In August 2024, the Australian Government released the Voluntary AI Safety Standard, which contains 10 voluntary ‘guardrails’ to support the safe and responsible use of AI by Australian organisations.[1] Because the guardrails are voluntary, they do not create legal duties for organisations. But they encourage organisations to commit to a set of ongoing activities to create organisational and systems-level processes.[2] These voluntary guardrails aim to align Australian practices with other jurisdictions by drawing on existing international standards, including those of the International Organization for Standardization and the United States National Institute of Standards and Technology.[3]

Australia’s voluntary AI ‘guardrails’[4]

1. Establish, implement, and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.

2. Establish and implement a risk management process to identify and mitigate risks.

3. Protect AI systems and implement data governance measures to manage data quality and provenance.

4. Test AI models and systems to evaluate model performance and monitor the system once deployed.

5. Enable human control or intervention in an AI system to achieve meaningful human oversight.

6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.

7. Establish processes for people impacted by AI systems to challenge use or outcomes.

8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.

9. Keep and maintain records to allow third parties to assess compliance with the guardrails.

10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

5.4 The Australian Government has also published a consultation paper proposing the introduction of mandatory guardrails for AI in high-risk settings.[5] The proposed mandatory guardrails largely replicate the Voluntary AI Safety Standard.[6]

5.5 The Australian Government has proposed a risk-based framework, under which only AI systems categorised as high-risk would be required to comply with the mandatory guardrails.[7] In determining whether an AI system is high-risk, it is proposed that consideration be given to the severity and extent of adverse impacts on individuals and the community (discussed further at 5.34).[8] However, it is proposed that all general-purpose AI systems be automatically required to comply with the guardrails.[9]

5.6 The paper identifies three regulatory reform options to mandate the guardrails:[10]

• A domain-specific approach: Reform existing regulatory frameworks to incorporate the guardrails on a sector-by-sector basis.

• A framework approach: Introduce new framework legislation containing definitions, the guardrails and thresholds for when they apply, and amend existing legislation to enable enforcement by existing regulators.

• A whole of economy approach: Introduce a new AI-specific act, which would include definitions, thresholds, guardrails and create a new AI regulator with enforcement and monitoring powers.

5.7 If the proposal for mandatory guardrails is implemented, it will apply across Australia. The Voluntary AI Safety Standard and the proposed mandatory guardrails build on the Safe and Responsible AI in Australia discussion paper[11] and the Australian Government’s interim response.[12]

5.8 These reforms sit alongside Australia’s AI Ethics Framework,[13] which contains eight ethics principles for the safe, reliable and transparent use of AI. However, the Australian Government has recognised that voluntary compliance with the ethics principles may not be enough to regulate AI in high-risk settings and effective regulation and enforcement will be necessary.[14]

5.9 Other relevant reforms at the federal level include the implementation of privacy law reforms,[15] a review of the Online Safety Act 2021 (Cth)[16] and new laws relating to misinformation and disinformation.[17] The Australian Government has also committed to reviewing consumer and copyright laws to consider how they deal with AI.[18]

Regulating AI use by Australian Government agencies

5.10 In June 2024, Australia’s data and digital ministers, representing the Australian, state and territory governments, endorsed a National Framework for the Assurance of Artificial Intelligence in Government.[19] This framework adopts a risk-based approach and aims to provide a consistent approach to AI by governments, aligned with internationally recognised standards. It includes standards for the development, procurement and deployment of AI. The National Framework also refers to the NSW AI Assessment Framework (see below).[20]

5.11 In August 2024, the Australian Government published the Policy for Responsible Use of AI in Government, which outlines mandatory requirements for the safe and responsible use of AI by Australian government agencies.[21] The policy encourages agencies to participate in the pilot of the AI Assurance Framework.[22]

5.12 The Australian Government is also developing a framework for the use of automated decision-making. It covers automated decision-making systems that use AI, but is not limited to them.[23]

5.13 The New South Wales (NSW) Government has developed an AI Ethics Policy and AI Assessment Framework to provide guidance for designing, building and using AI technology.[24] It includes questions and considerations around risk and governance. The NSW Government has mandated the use of the recently updated AI Assessment Framework for government agencies, as well as a Digital Assurance Framework for high-value projects.

5.14 The Victorian Government has endorsed the National Framework for the Assurance of Artificial Intelligence in Government,[25] and is developing Victorian-specific guidance based on it. Work is also underway to develop policy and guidance on the safe and responsible use of generative AI across the public sector, which will include public service bodies and public entities.[26] Additionally, the Commissioner for Economic Growth is due to report on Victoria’s use of AI by 31 October 2024.[27] This work is focused on economic opportunities for the use of AI, including ‘whole of government activities for facilitating the adoption of AI, including appropriate policy, legislative and regulatory frameworks’.[28] Some Victorian agencies have already released targeted guidance on AI, such as the Office of the Victorian Information Commissioner’s Artificial Intelligence – Understanding Privacy Obligations.[29]

International approaches to regulating AI

5.15 AI is regulated in a range of ways around the world. Between 2016 and 2023, at least 148 AI-related bills were passed across 128 countries.[30] The diverse approaches of overseas jurisdictions provide useful context. We consider three approaches in particular:

• AI-specific legislation

• risk-based approaches

• principles-based regulation.

AI legislation

European Union Artificial Intelligence Act

5.16 The European Union has introduced AI-specific regulation. The Artificial Intelligence Act 2024 (EU AI Act) is intended to be a comprehensive regulatory and legal framework across all sectors.[31]

5.17 The EU AI Act bans or imposes strict conditions for high-risk applications of AI, such as biometrics, law enforcement and administration of justice.[32]

5.18 High-risk systems are subject to regulatory requirements relating to:

• data governance

• transparency and provisions of information

• human oversight

• robustness

• accuracy

• security.[33]

5.19 The Australasian Institute for Judicial Administration notes that the EU AI Act may raise issues for judicial independence and accountability.[34] The EU AI Act creates control, compliance and testing obligations for system ‘providers’.[35] The EU AI Act defines system providers broadly which could include executive and legislative bodies.[36] Judicial independence requires the judiciary to operate separately to the executive and the legislature. If government bodies are responsible for developing and testing AI systems used in courts, this could impact judicial accountability and independence.[37] The EU AI Act also lists several bodies that will be responsible for conducting assessments of AI tools. The Australasian Institute for Judicial Administration suggests that the role of these various bodies in monitoring AI tools used in courtrooms could also impact judicial independence and accountability.[38]

5.20 Another perspective is that regulating AI is complex, and courts may not have the resources needed to appropriately scrutinise the AI tools they choose to use. The EU AI Act establishes a pre-deployment (ex-ante) regulatory system under which some high-risk AI systems are prohibited from being deployed or commercialised. AI systems must undergo a conformity assessment to demonstrate compliance with basic principles such as transparency, human oversight and data protection.[39] This provides a base level of assurance about security, safety and fairness.

Other risk-based legislation

5.21 Other countries are considering a range of risk-based legislative responses (see Table 5 for examples).

Table 5: Jurisdictions considering risk-based legislation

Canada

• A Bill was introduced in 2022 for the Artificial Intelligence and Data Act[40] which would adopt a risk-based approach to regulating AI. The Bill has not been passed and remains under committee consideration.

• If an AI system were assessed as high impact, a risk mitigation plan would be required, including measures to monitor risks, and information provided about how the system will be used and whether it will generate decisions, recommendations or predictions.[41]

Brazil

• A Bill was introduced in May 2023 outlining general rules for the development, implementation and responsible use of AI.[42]

• Includes a risk assessment framework to register and monitor high-risk AI systems and prohibit certain practices.[43] The administration of justice is listed as high risk.[44]

South Korea

• A Bill to consolidate the AI regulatory landscape remains under review.[45]

• It would require high-risk areas to establish ethical principles and ensure reliability and safety. Criminal investigations and arrests are considered high risk.[46]

United States

• A Bill was introduced in 2022 for a proposed Algorithmic Accountability Act.[47]

• The Californian legislature recently passed a Bill for the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.[48]

China

• Regulations were introduced to respond to aspects of AI, such as deepfakes (or deep synthesis), generative AI and recommendation algorithms.[49] Chinese authorities have outlined an intention to introduce a general AI law in future.[50]

Principles-based regulation and frameworks

5.22 Some countries have developed principles-based policies or similar regulatory approaches, rather than adopting comprehensive AI legislation at this stage (see Table 6 for examples).

Table 6: Jurisdictions developing principles-based regulation

United Kingdom

• The AI framework sets out core principles to guide AI use.[51] Regulators will implement the framework across sectors, with possible targeted legislative interventions to address gaps in the regulatory framework.[52]

• It also allows for regulatory ‘sandboxes’, which are controlled environments to trial AI with the oversight of a regulator.[53]

• The recently elected British Government intends to introduce binding regulations focused on the most powerful AI models.[54]

New Zealand

• Trustworthy AI in Aotearoa: AI Principles[55] provides five overarching principles to guide the design, development and deployment of AI in New Zealand. They include: fairness and justice; reliability, security and privacy; transparency; human oversight and accountability; and wellbeing.

• The Algorithm Charter for Aotearoa New Zealand[56] contains a risk management framework to support government agencies to manage how algorithms are used. This includes overarching commitments to transparency, partnership, people, data, privacy, ethics and human rights, and human oversight.

Japan

• Japan has adopted a principles-based approach to AI, including principles for human-centric AI, as well as governance guidelines for implementing AI principles.[57]

United States

• The New Standards for AI Safety and Security Executive Order contains principles and risk-management practices which are binding on federal authorities.[58]

• Partly in response to the Executive Order, the National Institute of Standards and Technology (NIST) developed a framework to assist organisations to consider the risks of generative AI.[59]

• The White House Office of Science and Technology Policy has also developed the Blueprint for an AI Bill of Rights,[60] which outlines voluntary principles for safe and effective systems.

Canada

• The Directive on Automated Decision-Making was introduced for federal government agencies, although it does not apply to the use of AI or automated decision-making in the federal criminal justice system.[61] It includes risk and impact assessment tools.

• Canada also has a voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems for the private sector.[62] Signatories agree to implement a range of measures for addressing risks associated with AI, in addition to existing legal obligations.

Singapore

• AI Verify is a not-for-profit foundation which sits under the Infocomm Media Development Authority of Singapore.[63]

• AI Verify released an AI governance testing framework and toolkit to support organisations to voluntarily conduct a self-assessment of their AI system against 11 AI governance principles.[64]

• The principles aim to be consistent with other international AI governance frameworks, including those of the European Union and the OECD.[65]

International agreements

5.23 There are mechanisms to coordinate international regulation of AI. Australia is a signatory to the Bletchley Declaration, which establishes a shared understanding of the opportunities and risks posed by AI.[66] It was signed by 28 countries and the European Union at the AI Safety Summit in 2023. Australia is also a signatory to the Seoul Declaration for Safe, Innovative and Inclusive AI, which builds on the Bletchley Declaration.[67] It aims to foster ‘international cooperation and dialogue on artificial intelligence (AI) in the face of its unprecedented advancements and the impact on our economies and societies’.[68]

5.24 Australia is engaged in other multilateral projects regarding AI, including with the Organisation for Economic Co-operation and Development, the United Nations, the World Trade Organisation and the World Economic Forum.[69]

5.25 The Council of Europe’s Committee on Artificial Intelligence drafted the first international legally binding treaty on artificial intelligence, the Framework Convention on Artificial Intelligence, in May 2024.[70] It requires that the Council’s 46 member states ensure public authorities and private actors comply with a range of fundamental principles, provide remedies, procedural rights and safeguards to affected persons, and implement risk and impact assessments in line with human rights, democratic processes and the rule of law.

Options for regulating rapidly developing technology

5.26 Thinking about the safe use of AI in courts and tribunals, we need to consider:

• Flexibility, because the technology is developing fast.

• Keeping a balance between regulating AI technology specifically, and adopting a ‘technology-neutral’ approach.

We are interested in your views about these points.

5.27 The regulatory landscape continues to shift alongside AI technology. Regulation can take many forms, ranging from ‘hard law’ legislation and regulations, to regulatory ‘sandboxes’, principles-based regulations, frameworks and standards.

5.28 Regulatory reforms need to balance flexibility and risk mitigation. We need to consider whether legislative change is required, or whether changes to rules or regulations together with governance structures are enough to address the risks. Principles and guidelines can assist in developing a more flexible approach. Addressing regulation through court and tribunal rules and procedures, as well as guidelines, may provide greater flexibility than legislative reforms.

Technology-neutral or AI-specific responses

5.29 Regulatory responses can be directed to the particular technology used to achieve an outcome (‘AI-specific’), or be more broadly about regulating activities, processes and outcomes, rather than the technology itself (‘technology-neutral’).[71]

5.30 A technology-neutral response has the advantage of staying relevant even as the technology develops. On the other hand, AI-specific approaches may be better when developing frameworks or guidelines, or to address regulatory gaps that arise due to the technology.

5.31 The Human Technology Institute outlines steps for regulating AI, including: ‘identify how existing laws apply, or … should be applied, to the development and use of AI’ and ‘ensure gaps are generally filled by technology-neutral law’, or technology-specific law reform if required.[72]

5.32 In courts and tribunals, any reforms or responses to AI should consider whether they need to be AI-specific. Examples of AI-specific responses by courts include guidelines published by the Courts of New Zealand in 2023 on the use of generative artificial intelligence in courts and tribunals.[73]

5.33 Another distinction is whether regulation is ex-ante (before the event) or ex-post (after the event). An ex-ante regulatory approach to AI would require companies to meet certain standards before an AI system can be deployed. Ex-post regulation tries to regulate a system after deployment, for example by issuing fines or other penalties.[74] Some experts argue that because regulating AI systems is so complex, an ex-ante proactive approach may be more ‘effective in preventing harm and ensuring accountability.’[75]

Risk-based responses

5.34 Another flexible approach to regulation is based on the level of risk. The Australian Government is considering imposing mandatory guardrails where an AI system is classified as high-risk.[76] It has proposed a set of principles for deciding whether an AI system is high-risk.[77] The risks to be considered are:[78]

• adverse impacts to an individual’s rights as recognised in Australian and international human rights law

• adverse impacts to an individual’s physical or mental health or safety

• adverse legal effects, defamation or similarly significant effects on an individual

• adverse impacts to groups of individuals or collective rights of cultural groups

• adverse impacts to the broader Australian economy, society, environment and rule of law

• the severity and extent of those adverse impacts outlined above.

5.35 The Australian Government has asked for feedback on the principles-based approach, as compared with an exhaustive list of high-risk uses.[79] Some jurisdictions, including the EU and Canada, have developed or proposed risk-based regulatory approaches (see Table 5), which contain a specific list of high-risk use cases.[80] The use of AI systems in law enforcement and the administration of justice is identified as high-risk in some jurisdictions.[81]

5.36 The Australian Government has also proposed to classify all general-purpose AI models as high-risk and subject to the mandatory guardrails, because they pose unforeseeable risks.[82]

5.37 Some jurisdictions have developed risk-based frameworks to support their regulatory responses. Under its Directive on Automated Decision-Making, Canada has developed an algorithmic impact assessment, which outlines requirements based on risk.[83] The directive applies to federal government agencies but not to the criminal justice system.[84] These frameworks are not specific to the legal sector, but may be helpful when considering regulation in Victorian courts and tribunals. (See Part D.)

Questions

8. Are there lessons from international approaches that we should consider in developing a regulatory response for Victorian courts and tribunals?

9. What would the best regulatory response to AI use in Victorian courts and tribunals look like? Consider:

a. which regulatory tools would be most effective, including rules, regulations, principles, guidelines and risk management frameworks, in the context of rapidly changing technology.

b. whether regulatory responses should be technology-neutral, or whether some aspects of AI require specific regulation.

10. How should court and tribunal guidelines align with AI regulation by the Australian Government?


  1. Department of Industry, Science and Resources (Cth) and National Artificial Intelligence Centre, Voluntary AI Safety Standard (Report, August 2024).

  2. Ibid 14–15.

  3. Ibid 5.

  4. Ibid iv.

  5. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024).

  6. Ibid 35.

  7. Ibid 19.

  8. Ibid.

  9. Ibid 29.

  10. Ibid 46.

  11. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Discussion Paper (Discussion Paper, June 2023).

  12. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response (Report, 2024). The Interim Response committed to setting up an expert AI group and investigating possible legislation for introducing mandatory guardrails for AI in high-risk settings.

  13. Australian Government, ‘Australia’s AI Ethics Principles – Australia’s Artificial Intelligence Ethics Framework’, Department of Industry, Science and Resources (Web Page, 5 October 2022) <https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles>.

  14. Ibid.

  15. Privacy and Other Legislation Amendment Bill 2024 (Cth).

  16. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia Consultation: Australian Government’s Interim Response (Report, 2024) 5.

  17. Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 (Cth).

  18. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 56.

  19. Australian Government et al, National Framework for the Assurance of Artificial Intelligence in Government: A Joint Approach to Safe and Responsible AI by the Australian, State and Territory Governments (Report, 21 June 2024).

  20. Digital NSW, The NSW AI Assessment Framework (Report, 2024) <https://arp.nsw.gov.au/assets/ars/attachments/Updated-AI-Assessment-Framework-V3.pdf>.

  21. Digital Transformation Agency (Cth), Policy for the Responsible Use of AI in Government (Version 1.1, September 2024).

  22. Ibid 11, 13.

  23. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 57.

  24. Digital NSW, The NSW AI Assessment Framework (Report, 2024) <https://arp.nsw.gov.au/assets/ars/attachments/Updated-AI-Assessment-Framework-V3.pdf>.

  25. Australian Government, ‘Data and Digital Ministers Meeting Communique’, Department of Finance (Web Page, 21 June 2024) <https://www.finance.gov.au/publications/data-and-digital-ministers-meeting-outcomes/21-june-2024>.

  26. Zander Hunter, Artificial Intelligence and Recordkeeping (Research Paper, Public Record Office Victoria, 2023) 2 <https://prov.vic.gov.au/sites/default/files/files/documents/AI_Research_Paper_October_2023.pdf>.

  27. Commissioner for Economic Growth, ‘Review of Artificial Intelligence Use in Victoria – Terms of Reference’, VIC.GOV.AU (Web Page, 5 June 2024) <https://www.vic.gov.au/review-artificial-intelligence-use-victoria-terms-reference>.

  28. Ibid.

  29. Office of the Victorian Information Commissioner, Artificial Intelligence – Understanding Privacy Obligations (Report, April 2021) <https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/>.

  30. Nestor Maslej et al, The AI Index 2024 Annual Report (Report, AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, April 2024) 376 <https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_2024_AI-Index-Report.pdf>. The AI Index analysed legislation containing ‘artificial intelligence’ in 128 select countries.

  31. Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L 2024/1689.

  32. Ibid annex III.

  33. Ibid ch III, arts 8-15.

  34. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 48.

  35. Regulation (EU) 2024/1689 (Artificial Intelligence Act) [2024] OJ L 2024/1689 ch III, s 3.

  36. Ibid art 3.

  37. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 48.

  38. Ibid.

  39. Gianclaudio Malgieri and Frank Pasquale, ‘Licensing High-Risk Artificial Intelligence: Toward Ex Ante Justification for a Disruptive Technology’ (2024) 52 Computer Law & Security Review 105899, 7 <https://www.sciencedirect.com/science/article/pii/S0267364923001097>.

  40. Bill C-27, Digital Charter Implementation Act 2022 (Canada).

  41. Government of Canada, ‘The Artificial Intelligence and Data Act (AIDA) – Companion Document’, Innovation, Science and Economic Development Canada (ISED): Innovation for a Better Canada (Web Page, 13 March 2023) <https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document>. Examples of high-risk systems include: screening systems impacting access to services or employment; biometric systems used for identification and inference; systems that can influence human behaviour at scale; and systems critical to health and safety.

  42. PL 2338/2023 [Bill No. 2338, of 2023] (Plenary of the Federal Senate, Brazil).

  43. Ana Frazão, ‘Regulation of Artificial Intelligence in Brazil: Examination of Draft Bill No. 2338/2023’ [2024] 10(1) UNIO – EU Law Journal 54 <https://revistas.uminho.pt/index.php/unio/article/view/5842>.

  44. Ibid.

  45. National Assembly South Korea, National Assembly Subcommittee 2 Subcommittee on National Defense Bill, ‘Metaverse Act’ and ‘Artificial Intelligence Act’ (Web Page, 14 February 2023) <https://www.assembly.go.kr/portal/bbs/B0000051/view.do?nttId=2095056&menuNo=600101&sdate=&edate=&pageUnit=10&pageIndex=1>. The Bill was initially passed by a Committee of the Korean National Assembly in February 2023 but remains under review.

  46. ‘Korea Update: Legislative Framework and Practical Implications of “Law on Nurturing the AI Industry and Establishing a Trust Basis”’, Transatlantic Law International (Web Page, 22 March 2023) <https://www.transatlanticlaw.com/content/korea-update-legislative-framework-and-practical-implications-of-law-on-nurturing-the-ai-industry-and-establishing-a-trust-basis/>.

  47. Algorithmic Accountability Act of 2022, H.R.6580, 117th Congress (2021–2022). The Bill was introduced by Senator Ron Wyden, Senator Cory Booker and Representative Yvette Clarke. The Bill has not passed and has been referred to the Subcommittee on Consumer Protection and Commerce.

  48. Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, SB-1047 (California Senate, 2023–2024 Regular Session). The Bill is yet to be signed into law by the Californian Governor.

  49. Matt Sheehan, Tracing the Roots of China’s AI Regulations (Report, Carnegie Endowment for International Peace, 27 February 2024) 9–10 <https://carnegieendowment.org/research/2024/02/tracing-the-roots-of-chinas-ai-regulations?lang=en>.

  50. General Office of the State Council and Chinese Government Network, Notice of the General Office of the State Council on Printing and Distributing the 2023 Legislative Work Plan of the State Council (State Council Fa [2023] No. 18, 6 June 2023) <https://www.gov.cn/zhengce/content/202306/content_6884925.htm>.

  51. Department for Science, Innovation and Technology (UK), A Pro-Innovation Approach to AI Regulation (Report No CP 815, March 2023).

  52. Ibid 39 [65].

  53. Ibid 59-61 [93-100].

  54. The Labour Party, Change – Labour Party Manifesto 2024 (Report, 2024) 35 <https://labour.org.uk/change/kickstart-economic-growth/>; Martin Coulter, ‘Britain’s New Government Aims to Regulate Most Powerful AI Models’, Reuters (online, 17 July 2024) <https://www.reuters.com/technology/artificial-intelligence/britains-new-government-aims-regulate-most-powerful-ai-models-2024-07-17/>; His Majesty King Charles III, ‘The King’s Speech 2024’ (Speech, House of Lords, UK Parliament, 17 July 2024) <https://www.gov.uk/government/speeches/the-kings-speech-2024>.

  55. AI Forum New Zealand, Trustworthy AI in Aotearoa: AI Principles (Report, March 2020) 4 <https://aiforum.org.nz/wp-content/uploads/2020/03/Trustworthy-AI-in-Aotearoa-March-2020.pdf>.

  56. New Zealand Government, Algorithm Charter for Aotearoa New Zealand (Report, July 2020) 3.

  57. Kimberlee Weatherall and ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), Automated Decision-Making and Society (ADM+S) Submission to Safe and Responsible AI in Australia Discussion Paper (Submission No 437 to Department of Industry, Science and Resources’ Consultation on Safe and Responsible AI in Australia, 4 August 2023) Appendix 1.

  58. President of the United States of America, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order No 14110, 30 October 2023).

  59. Ibid.

  60. United States Government, White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (White Paper, October 2022).

  61. Law Commission of Ontario, Accountable AI (LCO Final Report, June 2022) 23.

  62. Government of Canada, ‘Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems’, Innovation, Science and Economic Development Canada (Web Page, 28 May 2024) <https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems>.

  63. ‘AI Verify Foundation’, AI Verify Foundation (Web Page) <https://aiverifyfoundation.sg/ai-verify-foundation/>.

  64. AI Verify, ‘Launch of AI Verify – An AI Governance Testing Framework and Toolkit’, Personal Data Protection Commission, Singapore (Web Page, 25 May 2022) <https://aiverifyfoundation.sg/downloads/AI_Verify_Primer_Jun-2023.pdf>.

  65. ‘What Is AI Verify’, AI Verify Foundation (Web Page) <https://aiverifyfoundation.sg/what-is-ai-verify/>.

  66. Prime Minister’s Office, 10 Downing Street, Foreign, Commonwealth & Development Office and Department for Science, Innovation and Technology (UK), The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023 (Policy Paper, 1 November 2023) <https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023>.

  67. Australian Government, ‘The Seoul Declaration by Countries Attending the AI Seoul Summit, 21–22 May 2024’, Department of Industry, Science and Resources (Web Page, 24 May 2024) <https://www.industry.gov.au/publications/seoul-declaration-countries-attending-ai-seoul-summit-21-22-may-2024>.

  68. Ibid.

  69. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Discussion Paper (Discussion Paper, June 2023) 16.

  70. Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature 5 September 2024, CETS No 225 <https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence>.

  71. Productivity Commission, Making the Most of the AI Opportunity – Research Paper 2: The Challenges of Regulating AI (Report, January 2024) 5–6.

  72. Nicholas Davis, Sophie Farthing and Edward Santow, Human Technology Institute, UTS, Submission to the Safe and Responsible AI in Australia Discussion Paper (Submission No 476 to Department of Industry, Science and Resources' Consultation on Safe and Responsible AI in Australia, 9 August 2023) 14. The steps were offered as advice to the federal government in considering AI law reform.

  73. Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Report, 7 December 2023); Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Lawyers (Report, 7 December 2023); Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Non-Lawyers (Report, 7 December 2023).

  74. Gianclaudio Malgieri and Frank Pasquale, ‘Licensing High-Risk Artificial Intelligence: Toward Ex Ante Justification for a Disruptive Technology’ (2024) 52 Computer Law & Security Review 105899, 2 <https://www.sciencedirect.com/science/article/pii/S0267364923001097>.

  75. Ibid.

  76. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024).

  77. Ibid 19.

  78. Ibid.

  79. Ibid 25–27.

  80. Ibid.

  81. Ibid.

  82. Ibid 28–9.

  83. Treasury Board of Canada Secretariat (TBS), ‘Algorithmic Impact Assessment Tool’, Responsible Use of Artificial Intelligence in Government (Web Page, 30 May 2024) <https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html>.

  84. Ibid.