Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper

8. Developing guidelines for the use of AI in Victoria’s courts and tribunals

Overview

• Our terms of reference ask us to consider what guidelines may be needed so that AI is used safely in Victoria’s courts and tribunals. This includes the use of AI by:

– Court users—parties, legal professionals and prosecutorial agencies

– Courts and tribunals—court administrators, the judiciary and tribunal members.

• We have also been asked to consider guidelines to support Victoria’s courts and tribunals in deciding whether to implement new AI systems.

• In this section we consider how the experience of other jurisdictions can inform guidelines for Victoria’s courts and tribunals.

Guidelines for the use of AI by court and tribunal users

8.1 Courts and professional bodies have published guidelines for court users, including self-represented litigants and legal professionals, to guide the safe use of AI in courts and tribunals.

8.2 Each jurisdiction has its own approach. But there are common elements which often relate to disclosure requirements and ethical and legal risks associated with AI, including privacy and security.

Court-issued guidelines about AI

8.3 Courts have issued guidelines to court users concerning AI. Guidelines vary in their level of detail and requirements.

8.4 The Victorian Supreme and County Courts have developed guidelines to assist legal practitioners and unrepresented litigants using AI.[1] Queensland appears to be the only other Australian jurisdiction to have published court-issued guidelines on AI.[2] Those guidelines focus on the use of AI by non-lawyers.[3]

8.5 Court-issued guidelines for court users, including overseas examples, are listed in Table 8 and described in more detail below.

Table 8: Guidelines relevant to use of AI by court users

Guidelines to court users

Australia

Victorian Supreme Court and County Court—Guidelines for Litigants[4]

• Parties and practitioners should disclose to each other the assistance provided by AI programs.

• Self-represented litigants are encouraged to disclose AI use in the documents filed with the court.

Queensland Courts—The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers[5]

• Guidelines are targeted at non-lawyers to highlight risks and limitations of generative AI. There are no disclosure requirements.

New Zealand

Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals[6]

• Separate guidelines for lawyers and non-lawyers about the use of generative AI in courts and tribunals.

• Assumes that key risks are managed if the guidelines are complied with, but notes that a court or tribunal may require disclosure of AI use (judges are not required to disclose use of generative AI).

Canada

Federal Court Notice to Parties and the Profession[7]

• Requires court users to disclose AI-generated content. The Court of King’s Bench of Manitoba and Supreme Court of Yukon have similar requirements to declare AI use.

United States

AI Standing Orders—various district courts (e.g. Texas, Illinois, Pennsylvania)[8]

• Some district courts require mandatory disclosure of AI use.

Disclosure

8.6 Disclosure is a common element across the guidelines. Some jurisdictions mandate disclosure of the use of AI, while others note the limitations and risks of AI and encourage disclosure where relevant.

Non-mandatory disclosure

8.7 In Victoria, disclosure is encouraged but not mandated in the Supreme and County Courts. The respective court guidelines both state:

ordinarily parties and their practitioners should disclose to each other the assistance provided by AI programs to the legal task undertaken. Where appropriate (for example, where it is necessary to enable a proper understanding of the provenance of a document or the weight that can be placed upon its contents), the use of AI should be disclosed to other parties and the court.[9]

8.8 Self-represented litigants and witnesses are encouraged to identify the use of AI by including a statement in the document noting the AI tool used. The guidelines note that this will ‘not detract from the contents of the document being considered by the relevant judicial officer on its merits but will provide useful context to assist the judicial officer’.[10]

8.9 These guidelines note that the use of AI is subject to legal professional obligations, including the obligation of candour (honesty) to the court, and obligations imposed by the Civil Procedure Act 2010 (Vic) that documents prepared and submitted must have a proper basis.[11]

8.10 The guidelines encourage caution where AI is used in expert reports, and state that such reports should comply with the Expert Witness Code of Conduct.[12]

8.11 In Queensland, courts have issued guidelines that outline the risks and limitations of AI use for non-lawyers.[13] There is no requirement to disclose AI use.[14]

8.12 The Courts of New Zealand have issued separate guidelines for lawyers and non-lawyers, explaining their respective disclosure obligations when using generative AI. The guideline for lawyers sets out the limitations, risks and ethical issues of AI, and notes that lawyers’ existing professional obligations apply to its use.[15] These guidelines assume that if they are complied with, for example by checking outputs for accuracy, the key risks of generative AI have been adequately addressed. However, a court or tribunal may still ask about or require disclosure.[16]

Mandatory disclosure

8.13 Some jurisdictions have introduced mandatory disclosure requirements. In Canada, the Federal Court requires disclosure where AI is used in the preparation of materials filed with the court.[17]

8.14 There is an obligation for counsel, parties and ‘interveners’ to provide notice and to consider principles of ‘caution’, ‘human in the loop’ and ‘neutrality’ if they use AI to prepare documentation filed with the Federal Court.[18]

8.15 Parties are only required to disclose AI use if content in the material was directly provided by AI, not if it was used ‘to suggest changes, provide recommendations, or critique content already created by a human who could then consider and implement the changes’.[19] The Court of King’s Bench of Manitoba and Supreme Court of Yukon have similar requirements to declare AI use.[20]

8.16 In consulting on the development of the guidelines, the Canadian Bar Association criticised the proposed certification requirement as unnecessary, given existing professional obligations. These include obligations on counsel to sign submissions to the Court and to:

use tactics that are legal, honest and respectful of the courts, to act with integrity and professionalism, to maintain their overarching responsibility to ensure civil conduct, and to educate clients about the court processes in the interest of promoting the public’s confidence in the administration of justice.[21]

8.17 The Association also noted that certification could burden registry staff and create filing delays, depending on the form the certification takes and how it is assessed by the court.[22] In response, the final court notice clarified the form of declaration required.[23]

8.18 Similarly, some United States district courts require lawyers and litigants appearing before them to declare if their filings were drafted using generative AI. This includes the Northern District of Texas and, for civil cases, the Northern District of Illinois.[24] The Eastern District of Pennsylvania has made similar orders, although directed at AI tools generally rather than generative AI specifically.[25]

Guidelines about AI for legal practitioners by professional bodies

8.19 Legal professional bodies have developed guidance for legal professionals about using AI.

8.20 The Victorian Legal Services Board and Commissioner has advised lawyers to ensure their use of AI is consistent with their legal professional duties.[26] It has warned lawyers to be particularly careful when using generative AI and issued the following direction:

ChatGPT can be a very helpful tool, but you must use it ethically and safely. All the rules of ethics apply in your use of this tool, including your duties of competence and diligence, of supervision, to the court and the administration of justice, and to maintain client confidentiality.[27]

8.21 The Victorian Legal Services Board and Commissioner identified the potential improper use of AI by lawyers as a key risk in 2024.[28]

8.22 In New South Wales, the Law Society and the Bar Association have issued guidelines to solicitors and barristers about how their professional obligations align with the use of AI.[29] The Bar Association guidelines outline the need for barristers to comply with existing legal professional rules and obligations when using generative AI.[30] The Law Society’s guide advises that a solicitor using generative AI ‘should employ the same level of care and caution as they would to any legal assistant or paralegal’.[31] Further, it advises that ‘AI must be used responsibly to supplement (rather than substitute) legal services’.[32]

8.23 The Queensland Law Society has issued a guiding statement on the use of AI in legal practice.[33] The statement sets out principles, rather than specific obligations, arising from the use of AI. The principles relate to:

• competence and diligence when using AI

• confidentiality, transparency and disclosure to clients about the use of AI

• the need for supervision and accountability to ensure AI does not replace professional judgment.[34]

International guidelines about AI

8.24 Legal professional bodies in other countries have developed similar guidelines. In England and Wales, the Law Society has published a guide on the use of generative AI, including a checklist of matters to consider before using it and considerations for risk management.[35]

8.25 The New York State Bar Association has comprehensive guidelines for legal practitioners, following the publication of a special taskforce report in April 2024.[36] The guidelines cover the benefits and risks of AI and potential impacts on professional obligations. The report recommends that the Bar Association establish a standing committee to oversee AI developments, and makes broader recommendations about education and gaps in existing legislation.[37]

8.26 The American Bar Association’s Model Rules of Professional Conduct include a training and education requirement:

to maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.[38]

8.27 A form of this rule has been adopted by at least 27 states across the United States.[39] Some states, such as Florida, require technology training as part of continuing education requirements.[40]

Response of courts to professional obligations and AI

8.28 In Canada and some other countries, courts can have regard to guidance issued by law societies when assessing lawyers’ responsibilities for AI use. The Law Society of British Columbia developed Guidance on Professional Responsibility and Generative AI.[41] In Zhang v Chen, legal counsel used generative AI to prepare court materials which contained references to non-existent cases. The judge referred to the Law Society’s guidance, noting it confirmed ‘lawyers are responsible for work products generated using technology-based solutions’ and urged lawyers to ‘review the content carefully and ensure its accuracy’.[42] The court ordered the lawyer to pay costs for the additional expenses and effort incurred by the citation of fake cases.[43]

8.29 The United States case of Mata v Avianca demonstrates how the inaccuracy risks associated with generative AI tools can conflict with lawyers’ professional obligations.[44] Legal counsel used ChatGPT, which hallucinated non-existent cases.[45] The court found the lawyer had breached the Federal Rules of Civil Procedure, which prohibit misrepresenting facts or making frivolous legal arguments.[46] The lawyer was ordered to pay a $5,000 fine[47] and was required to inform their client and the judges whose names were wrongfully used.[48]

8.30 A similar issue arose in the recent Australian matter of Dayal,[49] which cited Mata v Avianca. In this case a Victorian solicitor used an AI legal research tool to generate a list and summary of cases that were provided to the court.[50] The cases were found to be inaccurate hallucinations. The court referred the matter to the Victorian Legal Services Board and Commissioner for their consideration.[51] The court noted that, ‘Whilst the use of AI tools offer opportunities for legal practitioners, it also comes with significant risks’.[52]

Developing guidelines for Victorian court and tribunal users

8.31 Guidelines inform court users, including self-represented litigants and legal professionals, about how AI should be used in courts and tribunals.

8.32 Legal professional bodies will likely play a role in issuing guidelines for the legal profession about using AI. These bodies may include:

• Law Institute of Victoria—developing guidelines for legal professionals about AI risks and opportunities and how AI is used across the profession (for example, the Law Society of New South Wales has developed guidelines for legal professionals)[53]

• Victorian Legal Services Board and Commissioner—developing guidelines for legal professionals about professional obligations and AI, and implications for breaching professional obligations.

• Victorian Bar Association—developing guidelines for barristers about using AI and implications for professional obligations.

8.33 We want to hear your views on whether stronger or more comprehensive guidelines are needed to ensure AI is used safely by court users.

Questions

16. Should guidelines be developed for Victorian court and tribunal users relating to the use of AI?

17. Should guidelines require disclosure of AI use? If so, to whom should it apply:

– legal professionals

– expert witnesses

– the public (including self-represented litigants and witnesses)?

18. What are the benefits and risks of disclosure? If mandatory, what form should disclosure take?

19. What is the role of courts in regulating the use of AI by legal professionals? What is the role of professional bodies such as the Victorian Legal Services Board and Commissioner, the Law Institute of Victoria and the Bar Association?

20. Are there other guidelines or practice notes relevant to court users and AI use that should be considered by the Commission?

Guidelines for the use of AI by judges, tribunal members and court and tribunal staff

Victorian guidelines

8.34 The Supreme and County Courts are the only Victorian courts to have published guidelines concerning AI.[54] These guidelines mainly provide direction to court users. But they briefly discuss AI for judicial officers:

AI is not presently used for decision making nor used to develop or prepare reasons for decision because it does not engage in a reasoning process nor a process specific to the circumstance before the court.[55]

8.35 The guidelines also refer to the Australasian Institute of Judicial Administration’s AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators.[56] This guide highlights that AI use should be consistent with core judicial values.[57] It is the first guide to consider in detail the challenges and opportunities created by AI in Australian courts and tribunals, drawing on legislation, case law and policies from around the world. The guide covers AI concepts, uses, benefits and risks, and identifies the core judicial values affected by AI.

8.36 The Commission is not aware of any other internal court-issued guidelines about the use of AI by courts and tribunals in Victoria. We therefore consider below the experience of other jurisdictions and how this may help us to develop guidelines for Victoria’s courts and tribunals.

International guidelines

8.37 Other countries have developed guidelines, often issued by courts, to assist courts and judicial office holders in the safe use of AI. Examples are set out in Table 9.

8.38 These examples assist in considering potential guidelines for Victorian courts and tribunals. They vary significantly in the scope of technology covered and the level of detail of the advice.

Table 9: International guidelines relevant to use of AI in courts

Guidelines for courts

Canada

Canadian Federal Court—Interim Principles and Guidelines on the Courts’ Use of AI[58]

• Provides principles for the use of AI by the Federal Court:

– Accountability: The Court will be fully accountable to the public for any potential use of AI in its decision-making function.

– Respect of fundamental rights, including the right to a fair hearing before an impartial decision-maker.

– Non-discrimination: The Court will ensure that its use of AI does not reproduce or aggravate discrimination.

– Accuracy: For processing of judicial decisions and data for purely administrative purposes, the Court will use certified or verified sources and data.

– Transparency: The Court will authorise external audits of any AI-assisted data processing methods that it embraces.

– Cybersecurity: Data will be managed and stored securely and will protect confidentiality and privacy.

– ‘Human in the loop’: Members of the Court and clerks will verify the results of any AI-generated outputs used in their work.

• Public consultation is required before using AI. Further, if a ‘specific use of AI by the Court may have an impact on the profession or public, the Court will consult the relevant stakeholders before implement[ation]’.[59]

• Recognises AI’s potential to improve efficiency and fairness.

• Commits to investigating and piloting potential uses of AI for internal administrative purposes.

England and Wales

Courts and Tribunals Judiciary (England and Wales)—Artificial Intelligence (AI): Guidance for Judicial Office Holders[60]

• Provides support for the judiciary, their clerks and other support staff on their interactions with AI.

• Outlines limitations and risks of AI relating to confidentiality and privacy, accountability and accuracy, bias, security and responsible use of AI by legal professionals and unrepresented litigants.[61]

• Provides examples of where AI is useful, such as summarising text, writing presentations or performing administrative tasks.

• AI is not recommended for legal research or analysis.

United States

The Federal Judicial Centre—An Introduction to Artificial Intelligence for Federal Judges[62]

• Provides technical background on AI and highlights potential legal issues arising from the use of AI in courts.

• Provides considerations for judges to assist when thinking about the application of AI, including when admitting AI applications into evidence.[63]

• Notes that AI is iterative and should be tested and validated continuously.[64] Recognises that behind each AI application are human values and choices, and that courts should know who designed the algorithm, what metrics were used and who validated the data.[65]

• Discusses the importance of judges as evidentiary gatekeepers of AI in courts, including how judges determine whether AI will assist the finder of fact and how the rules of evidence will guide this determination.

• Also discusses other areas where AI may arise in the courtroom, including litigants introducing AI in a variety of civil and criminal contexts, such as risk assessments.

New Zealand

Courts of New Zealand—Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals[66]

• Outlines key risks and limitations of AI and provides examples of AI use.

• Separate guidance for judicial officers and court staff, lawyers and non-lawyers. The guidelines for judges, judicial officers, tribunal members and judicial support staff were developed to assist them in the use of generative AI in their work.[67]

• Any use of generative AI chatbots or other generative AI tools must be consistent with the overarching obligation to protect the integrity of the administration of justice and court/tribunal processes.[68]

• Outlines key risks and potential ways to mitigate them, as well as areas where AI may be useful, including summarising information, speech writing and administrative tasks. Tasks requiring extra care include legal research and analysis.[69]

Brazil

National Council of Justice—Resolution No. 332 of 08/21/2020[70]

• Provides direction to the Brazilian judiciary (excluding the Brazilian Supreme Court) on ‘ethics, transparency and governance in the production and use of Artificial Intelligence in the Judiciary’.[71]

• Establishes principles relating to respect for fundamental rights, such as non-discrimination, equality, security, publicity and transparency, user control, governance and quality.

• Information about the use of AI in services must be provided in clear and precise language.

• Where AI is used to support a judicial decision, it requires that the ‘steps that led to the result’ be explained.[72]

• The National Council of Justice has not explicitly prohibited the use of large language models such as ChatGPT by the judiciary.[73] But the use of AI models in criminal matters is not encouraged, especially for predictive decision models.[74]

Developing guidelines for use by courts and tribunals

8.39 In this section we consider what could be included in guidelines for courts and tribunals.

8.40 We have explored principles fundamental to the administration of justice in Part C. The appropriate use of AI in Victorian courts and tribunals is shaped by perspectives on the purpose and function of the law.[75] Guidelines for courts and tribunals on the use of AI need to be shaped by the core principles of justice.

8.41 Public consultation can promote key principles such as accountability and transparency. Some countries, including Canada, have guidelines that require courts and tribunals to consult the public and legal professionals before using AI, in order to promote transparency.[76]

8.42 Dedicated guidelines for judicial officeholders could be considered. AI tools may influence judicial practice and create risks for the ethics and principles guiding judicial conduct. For example, the use of predictive analytics tools such as COMPAS could negatively impact the principles of fairness in judicial decision-making by introducing risks of bias[77] (as discussed in Part B). Judicial guidelines may include consideration of the principles of justice, including judicial independence, impartiality and procedural fairness. In the United Kingdom, a guide for the judiciary is based on the principles of equality and fairness in the application of AI.[78]

8.43 In Australia, existing standards of ethics and conduct for judicial officers are outlined in the Guide to Judicial Conduct.[79] But these are not specific to the application of AI. The Judicial College of Victoria is responsible for ongoing education and professional development for judicial officers. Complaints about the conduct or capacity of a judicial officer or member of VCAT can be made to the Judicial Commission of Victoria.[80]

8.44 New or modified guidelines and resources on AI could be developed for judicial office holders. These could be created by the Judicial College of Victoria in collaboration with the Judicial Commission of Victoria.

Questions

21. Should guidelines be developed for the use of AI by Victorian courts and tribunals including for administrative staff, the judiciary and tribunal members? If so, what should they include and who should issue them?

22. Should there be dedicated guidelines for judicial officeholders?

23. Are there tools from other jurisdictions you think should be incorporated into guidelines to support Victorian courts and tribunals in their use of AI? If so, what are they?

24. Should courts and tribunals undertake consultation with the public or affected groups before using AI and/or disclose to court users when and how they use AI? What other mechanisms could courts and tribunals use to promote the accountable and transparent use of AI?

Criminal and civil matters

8.45 The uses and risks of AI will differ between criminal and civil law. So far, AI guidelines have not distinguished between criminal and civil matters. The guidelines of the Victorian Supreme and County Courts are not specific to criminal or civil matters,[81] and Queensland’s guidelines are explicitly stated to apply to both civil and criminal proceedings.[82]

8.46 Differences between criminal and civil matters may be relevant in setting guidelines for the use of AI. In Australia, the standard of proof differs between civil and criminal cases. This may be relevant in considering the accuracy of AI evidence, including expert and forensic evidence (see the discussion in Part B and above).

8.47 Criminal proceedings can impact specific human rights, such as the presumption of innocence and the right to liberty.[83] Bail and sentencing decisions in criminal law require judges to make decisions that can deprive an individual of their liberty. As discussed in Part B, there is a growing international trend for courts to rely on predictive AI tools to assist with bail and sentencing decisions. However, there have been cases ‘where the use of AI algorithms in predictive policing, risk assessment and sentencing has led to sub-optimal outcomes in the criminal justice system’, including where AI systems have incorporated biases or made them worse.[84]

Question

25. Should there be different guidelines or additional considerations for the use of AI in relation to criminal and civil law matters?

An AI assessment framework for courts and tribunals

8.48 This review will consider how Victorian courts and tribunals can assess the suitability of new AI applications.

8.49 Other jurisdictions have developed risk assessment frameworks to guide decision-making on the use of a particular AI system. Existing risk assessment frameworks are not specific to the legal context but may offer a helpful approach for Victoria’s courts and tribunals.

8.50 We want to hear your views on what an AI risk assessment framework for courts and tribunals should include, and what AI uses should be considered high risk. As discussed in Part C, the Australian Government has proposed a set of principles to determine if an AI system is high risk and subject to mandatory ‘guardrails’.[85] If implemented, this will be relevant to Victorian courts and tribunals.

New South Wales AI Assessment Framework

8.51 The New South Wales AI Assessment Framework, while not specific to courts and tribunals, shows how AI risks can be assessed against a range of key principles.[86] This framework focuses on ‘elevated risks’ involving:

systems influencing decisions with legal or similar level consequences, triggering significant actions, operating autonomously, using sensitive data, risking harm, and lacking explainability. All Generative AI solutions should be classified elevated risk.[87]

8.52 The framework recognises that AI may be used where there are elevated risks, but only after alternative options have been considered and where using AI will lead to a better outcome. It requires careful evaluation and monitoring for harm across the AI system lifecycle.[88]

8.53 Under the framework, risks are classified across five levels:

• None or negligible: Risks and potential consequences are insignificant. For example, grammar and spell checking.

• Low risk: Risks are reversible with negligible consequences. Potential consequences of risks are minimal and do not cause harm to individuals, organisations or society. For example, document classification and tagging.

• Mid-range risk: Risks are reversible with moderate consequences. Potential consequences are more noticeable and may have a temporary impact. For example, customer service chatbots.

• High risk: Risks are reversible with significant consequences. Consequences have a lasting impact on individuals, organisations or entire industries. For example, facial recognition systems.

• Very high risk: Risks have significant or irreversible consequences. Potential consequences are severe and may have permanent or irreversible implications. For example, autonomous benefits eligibility assessment without human oversight.[89]
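
To illustrate how such a classification might be operationalised in practice, the following is a minimal sketch in Python. It is not part of the NSW framework: the names RiskLevel and classify, and the mapping from reversibility and consequence severity to a level, are assumptions made for illustration only.

```python
from enum import Enum


class RiskLevel(Enum):
    """Hypothetical encoding of the five NSW AI Assessment Framework levels."""
    NONE_OR_NEGLIGIBLE = 1  # e.g. grammar and spell checking
    LOW = 2                 # reversible, negligible consequences (e.g. document tagging)
    MID_RANGE = 3           # reversible, moderate and temporary impact (e.g. chatbots)
    HIGH = 4                # reversible but lasting consequences (e.g. facial recognition)
    VERY_HIGH = 5           # severe or irreversible consequences (e.g. autonomous decisions)


def classify(reversible: bool, consequence: str) -> RiskLevel:
    """Map reversibility and consequence severity onto a risk level.

    `consequence` is one of 'insignificant', 'negligible', 'moderate' or
    'significant'. Irreversibility is treated as decisive, reflecting the
    framework's fifth level.
    """
    if not reversible:
        return RiskLevel.VERY_HIGH
    return {
        "insignificant": RiskLevel.NONE_OR_NEGLIGIBLE,
        "negligible": RiskLevel.LOW,
        "moderate": RiskLevel.MID_RANGE,
        "significant": RiskLevel.HIGH,
    }[consequence]


# A customer service chatbot: reversible, with moderate consequences.
assert classify(True, "moderate") is RiskLevel.MID_RANGE
```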

8.54 Potential harms include physical and psychological harms to individuals, as well as broader harms to the community, such as the erosion of trust and social equality.[90]

8.55 The framework encourages consideration of benefits, including:

• delivery of better-quality services or outcomes

• reducing processing times

• generating efficiencies

• delivering a new service or outcome

• enabling future innovation to existing services.[91]

International approaches

8.56 Some international jurisdictions, including Canada, have adopted risk-based frameworks to assess the use of AI systems by government.[92]

8.57 The Canadian Algorithmic Impact Assessment Tool (discussed in Part C) outlines broad areas for assessing risks and impacts.[93] These include the rights of individuals or communities, the health or well-being of individuals or communities, economic interests and the ongoing sustainability of an ecosystem.[94] Risks are distinguished based on reversibility and expected duration. Impact is measured across four levels: little to none, moderate, high and very high.
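
As a rough illustration of how those two dimensions might combine, the following sketch offers a simplified reading of the tool. It does not reproduce the Treasury Board’s actual questionnaire scoring: the impact_level function and its duration categories are assumptions made for illustration.

```python
def impact_level(reversible: bool, duration: str) -> str:
    """Combine reversibility and expected duration into one of the four
    impact levels used by the Canadian Algorithmic Impact Assessment.

    `duration` is a hypothetical category: 'brief', 'ongoing' or 'perpetual'.
    """
    if not reversible:
        return "very high"  # irreversible impacts sit at the top level
    if duration == "perpetual":
        return "high"
    if duration == "ongoing":
        return "moderate"
    return "little to none"


# A reversible impact expected to persist while the system operates.
print(impact_level(True, "ongoing"))  # moderate
```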

8.58 In the US, the National Institute of Standards and Technology’s AI Risk Management Framework and accompanying Playbook provide a guide for considering AI risks.[95] This includes considering the use of the AI system, the AI system’s lifecycle, data and inputs, testing the AI model, and verifying and validating the model and outputs.[96] A profile addressing generative AI has also been released.[97]

8.59 The Council of Europe has developed a risk assessment framework specific to the justice context.[98] It contains questions designed for decision makers in judicial institutions to assist in identifying the potential risks of an AI system used by the judiciary.[99] The risk-based questions are linked to the five key principles for the design and use of AI by the judiciary contained in the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment.[100]

Developing an AI assessment framework for courts and tribunals

8.60 We want to hear your views on what an assessment framework for considering the suitability of new AI systems by Victoria’s courts and tribunals might look like.

Questions

26. Should an assessment framework be developed to guide the assessment of the suitability of AI technology in Victorian courts and tribunals?

27. Does the NSW AI Assessment Framework provide a useful model for Victoria’s courts and tribunals? Why or why not? What other models or guidelines should be considered?

28. How can risk categories (low, medium and high) be distinguished appropriately? What should be considered high risk?

29. What potential harms and benefits should an AI assessment framework for Victoria’s courts and tribunals consider?


  1. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) <https://www.countycourt.vic.gov.au/practice-notes>.

  2. Queensland Courts, The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers (Guidelines, Queensland Courts, 13 May 2024) <https://www.courts.qld.gov.au/about/news/news233/2024/the-use-of-generative-artificial-intelligence-ai>.

  3. Ibid.

  4. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) 1 <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) 1 <https://www.countycourt.vic.gov.au/practice-notes>.

  5. Queensland Courts, The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers (Guidelines, Queensland Courts, 13 May 2024) <https://www.courts.qld.gov.au/about/news/news233/2024/the-use-of-generative-artificial-intelligence-ai>.

  6. Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Lawyers (Report, 7 December 2023).

  7. Federal Court of Canada, Notice to Parties and the Profession – The Use of Artificial Intelligence in Court Proceedings (Report, 7 May 2024).

  8. Court of King’s Bench of Manitoba, Re: Use of Artificial Intelligence in Court Submissions (Practice Direction, 23 June 2023); Supreme Court of Yukon, Use of Artificial Intelligence Tools (Practice Direction General No 29, 26 June 2023).

  9. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) 1 <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) 1 <https://www.countycourt.vic.gov.au/practice-notes>.

  10. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) 2 <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) 2 <https://www.countycourt.vic.gov.au/practice-notes>.

  11. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) 1–2 <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>.

  12. Ibid 3; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) 3 <https://www.countycourt.vic.gov.au/practice-notes>.

  13. Queensland Courts, The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers (Guidelines, Queensland Courts, 13 May 2024) <https://www.courts.qld.gov.au/about/news/news233/2024/the-use-of-generative-artificial-intelligence-ai>.

  14. Ibid.

  15. Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Lawyers (Report, 7 December 2023) 1.

  16. Ibid 4.

  17. Federal Court of Canada, Notice to Parties and the Profession – The Use of Artificial Intelligence in Court Proceedings (Report, 7 May 2024).

  18. Ibid 2–3.

  19. Ibid 1.

  20. Court of King’s Bench of Manitoba, Re: Use of Artificial Intelligence in Court Submissions (Practice Direction, 23 June 2023); Supreme Court of Yukon, Use of Artificial Intelligence Tools (Practice Direction General No 29, 26 June 2023).

  21. Intellectual Property Section of the Canadian Bar Association, Submission to the Federal Court of Canada’s Consultation on the Draft Federal Court Practice Direction on AI Guidance (Report, Canadian Bar Association, 24 November 2023) 2 <https://www.cba.org/CMSPages/GetFile.aspx?guid=3d500f72-96ff-4ff8-817e-32ac96e5ceeb>.

  22. Ibid.

  23. Federal Court of Canada, Notice to Parties and the Profession – The Use of Artificial Intelligence in Court Proceedings (Report, 7 May 2024).

  24. Standing Order Regarding Use of Artificial Intelligence (Standing Order, 394th Judicial District Court of Texas, 9 June 2023) <https://img1.wsimg.com/blobby/go/2f8cb9d7-adb6-4232-a36b-27b72fdfcd38/downloads/Standing%20order%20Regarding%20Use%20of%20Artificial%20Int.pdf?ver=1720638374301>; Standing Order For Civil Cases Before Magistrate Judge Fuentes (Standing Order, District Court of Northern Illinois, 31 May 2023) <https://www.ilnd.uscourts.gov/_assets/_documents/_forms/_judges/Fuentes/Standing%20Order%20For%20Civil%20Cases%20Before%20Judge%20Fuentes%20rev%27d%205-31-23%20(002).pdf>.

  25. Standing Order Re Artificial Intelligence (‘AI’) in Cases Assigned to Judge Baylson (Standing Order, District Court for the Eastern District of Pennsylvania, 6 June 2023) <https://www.paed.uscourts.gov/sites/paed/files/documents/locrules/standord/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf>.

  26. ‘Generative AI and Lawyers’, Victorian Legal Services Board + Commissioner (Web Page, 17 November 2023) <https://lsbc.vic.gov.au/news-updates/news/generative-ai-and-lawyers>.

  27. Ibid.

  28. ‘2024 Risk Outlook’, Victorian Legal Services Board + Commissioner (Web Page, 1 August 2024) <https://lsbc.vic.gov.au/lawyers/risk-outlook/2024-risk-outlook>.

  29. The Law Society of NSW, A Solicitor’s Guide to Responsible Use of Artificial Intelligence (Report, 10 July 2024) <https://www.lawsociety.com.au/sites/default/files/2024-07/LS4527_MKG_ResponsibleAIGuide_2024-07-10.pdf>; NSW Bar Association, Issues Arising from the Use of AI Language Models (Including ChatGPT) in Legal Practice (Guidelines, NSW Bar Association, 22 June 2023) <https://inbrief.nswbar.asn.au/posts/9e292ee2fc90581f795ff1df0105692d/attachment/NSW%20Bar%20Association%20GPT%20AI%20Language%20Models%20Guidelines.pdf>.

  30. The Law Society of NSW, A Solicitor’s Guide to Responsible Use of Artificial Intelligence (Report, 10 July 2024) 3 <https://www.lawsociety.com.au/sites/default/files/2024-07/LS4527_MKG_ResponsibleAIGuide_2024-07-10.pdf>.

  31. Ibid.

  32. Ibid.

  33. Queensland Law Society, No.37 Artificial Intelligence in Legal Practice (Guidance Statement, 2023) <https://www.qls.com.au/Guidance-Statements/No-37-Artificial-Intelligence-in-Legal-Practice>.

  34. Ibid.

  35. The Law Society of England and Wales, Generative AI – the essentials: checklist (Report, The Law Society of England and Wales, 17 November 2023) <https://www.lawsociety.org.uk//en/Topics/AI-and-lawtech/Guides/Generative-AI-the-essentials>.

  36. New York State Bar Association, Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence (Report, New York State Bar Association Task Force on Artificial Intelligence, April 2024).

  37. Ibid 53, 53–54.

  38. American Bar Association, ‘Model Rules of Professional Conduct: Rule 1.1 Competence – Comment’, ABA Centre for Professional Responsibility (Web Page, 2024) <https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/>.

  39. Lauri Donahue, ‘A Primer on Using Artificial Intelligence in the Legal Profession’, JOLT Digest (Web Page, 3 January 2018) <https://jolt.law.harvard.edu/digest/a-primer-on-using-artificial-intelligence-in-the-legal-profession>.

  40. Mark Killan, ‘Court Approves CLE Tech Component’, The Florida Bar: The Florida Bar News (Web Page, 15 October 2016) <https://www.floridabar.org/the-florida-bar-news/court-approves-cle-tech-component/>.

  41. Law Society of British Columbia, Guidance on Professional Responsibility and Generative AI (Practice Resource, October 2023).

  42. Zhang v Chen [2024] BCSC 285, [35].

  43. Ibid [43].

  44. Mata v Avianca, Inc 678 F.Supp.3d 443 (2023).

  45. Ibid.

  46. Federal Rules of Civil Procedure 2023 (US) r 11.

  47. Mata v Avianca, Inc 678 F.Supp.3d 443 (2023), [466].

  48. Ibid.

  49. Dayal [2024] FedCFamC2F 1166.

  50. Ibid [1].

  51. Ibid [22].

  52. Ibid [10].

  53. The Law Society of NSW, A Solicitor’s Guide to Responsible Use of Artificial Intelligence (Report, 10 July 2024) <https://www.lawsociety.com.au/sites/default/files/2024-07/LS4527_MKG_ResponsibleAIGuide_2024-07-10.pdf>.

  54. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) <https://www.countycourt.vic.gov.au/practice-notes>.

  55. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) 4 <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) 3 <https://www.countycourt.vic.gov.au/practice-notes>.

  56. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 43.

  57. Ibid.

  58. Federal Court of Canada, Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence (Web Page, 20 December 2023) 2 <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>.

  59. Ibid.

  60. Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI): Guidance for Judicial Office Holders (Report, 12 December 2023).

  61. Ibid 3–5.

  62. James E Baker, Laurie N Hobart and Matthew Mittelsteadt, An Introduction to Artificial Intelligence for Federal Judges (Report, Federal Judicial Centre, 2023).

  63. Ibid. Notably this document does not mention generative AI or ChatGPT.

  64. Ibid 24.

  65. Ibid 25–6.

  66. Courts of New Zealand, Guidelines for Use of Generative Artificial Intelligence in Courts and Tribunals: Judges, Judicial Officers, Tribunal Members and Judicial Support Staff (Report, 7 December 2023).

  67. Ibid.

  68. Ibid 1.

  69. Ibid 5–6.

  70. Eduardo Villa Coimbra Campos, ‘Artificial Intelligence, the Brazilian Judiciary and Some Conundrums’ (Web Page, 3 March 2023) <https://www.sciencespo.fr/public/chaire-numerique/en/2023/03/03/article-artificial-intelligence-the-brazilian-judiciary-and-some-conundrums/>.

  71. ‘Resolution No. 332 of 08/21/2020’, National Council of Justice (Brazil), Research System for Normative Acts (Web Page) <https://atos.cnj.jus.br/atos/detalhar/3429>.

  72. Eduardo Villa Coimbra Campos, ‘Artificial Intelligence, the Brazilian Judiciary and Some Conundrums’ (Web Page, 3 March 2023) <https://www.sciencespo.fr/public/chaire-numerique/en/2023/03/03/article-artificial-intelligence-the-brazilian-judiciary-and-some-conundrums/>.

  73. Fábio O Ribeiro, ‘ChatGPT May Be Used with Restrictions by Brazilian Judges’, Fábio’s Newsletter (Substack Newsletter, 17 August 2023) <https://fbio.substack.com/p/chatgpt-may-be-used-with-restrictions>.

  74. Gustavo Mascarenhas Lacerda Pedrina and Tatiana Lourenço Emmerich de Souza, ‘Brazilian Report on Traditional Criminal Law Categories and AI’ (2019) 5(3) International Review of Penal Law 93, 98 <https://revista.ibraspp.com.br/RBDPP/article/view/265>.

  75. Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 257–69.

  76. Federal Court of Canada, Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence (Web Page, 20 December 2023) 1 <https://www.fct-cf.gc.ca/en/pages/law-and-practice/artificial-intelligence>.

  77. Julia Angwin et al, ‘Machine Bias’, ProPublica (online, 23 May 2016) <https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>; Jeff Larson et al, ‘How We Analyzed the COMPAS Recidivism Algorithm’, ProPublica (online, 23 May 2016) <https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>.

  78. Courts and Tribunals Judiciary (UK), Artificial Intelligence (AI): Guidance for Judicial Office Holders (Report, 12 December 2023).

  79. Australian Institute of Judicial Administration (AIJA), Guide to Judicial Conduct, Third Edition (Revised) (Report, December 2023) <https://aija.org.au/wp-content/uploads/2024/04/Judicial-Conduct-guide_revised-Dec-2023-formatting-edits-applied.pdf>. The guide is an Australia-wide resource which provides high-level authoritative guidance on issues of conduct and ethics, including how core judicial values such as impartiality, independence and integrity inform the standards of conduct expected of judicial officers.

  80. Judicial Commission of Victoria Act 2016 (Vic). The Judicial Commission may make guidelines about standards of ethical and professional conduct and practices that should be adopted by judicial officers in the performance of their functions: s 134.

  81. Supreme Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Guidelines, Supreme Court of Victoria, 6 May 2024) <http://www.supremecourt.vic.gov.au/forms-fees-and-services/forms-templates-and-guidelines/guideline-responsible-use-of-ai-in-litigation>; County Court of Victoria, Guidelines for Litigants: Responsible Use of Artificial Intelligence in Litigation (Report, 3 July 2024) <https://www.countycourt.vic.gov.au/practice-notes>.

  82. Queensland Courts, The Use of Generative Artificial Intelligence (AI) Guidelines for Responsible Use by Non-Lawyers (Guidelines, Queensland Courts, 13 May 2024) 1 <https://www.courts.qld.gov.au/about/news/news233/2024/the-use-of-generative-artificial-intelligence-ai>.

  83. In Victoria there is a right to liberty and security of person and specific rights in criminal proceedings: see Charter of Human Rights and Responsibilities Act 2006 (Vic) ss 21, 25.

  84. Miriam Stankovich et al, Global Toolkit on AI and the Rule of Law for the Judiciary (Report No CI/DIT/2023/AIRoL/01, UNESCO, 2023) 137 <https://unesdoc.unesco.org/ark:/48223/pf0000387331>.

  85. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 19.

  86. Digital NSW, The NSW AI Assessment Framework (Report, 2024) <https://arp.nsw.gov.au/assets/ars/attachments/Updated-AI-Assessment-Framework-V3.pdf>.

  87. Ibid 5.

  88. Ibid 11.

  89. Ibid 18. Note that examples can move between risk levels depending on their specific use.

  90. Ibid 28–9.

  91. Ibid 23.

  92. Treasury Board of Canada Secretariat (TBS), ‘Algorithmic Impact Assessment Tool’, Responsible Use of Artificial Intelligence in Government (Web Page, 30 May 2024) <https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html>.

  93. Ibid.

  94. Ibid.

  95. National Institute of Standards and Technology (U.S.), Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Report No NIST AI 100-1, 26 January 2023) <http://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf>; National Institute of Standards and Technology (U.S.), ‘NIST AI RMF Playbook’, NIST: Trustworthy & Responsible AI Resource Center (Web Page) <https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook>.

  96. Ibid.

  97. Chloe Autio et al, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (Report No NIST AI 600-1, National Institute of Standards and Technology (U.S.), July 2024) <https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf>.

  98. European Commission for the Efficiency of Justice (CEPEJ), Assessment Tool for the Operationalisation of the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (Report No CEPEJ(2023)16final, Council of Europe, 4 December 2023).

  99. Ibid 8–17.

  100. European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (Report, Council of Europe, December 2018) 7–12.