Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper

7. AI in courts and tribunals: current laws and regulation

Overview

• Our terms of reference ask us to consider opportunities to build on existing legislation, regulation and common law, so that AI is used safely in Victoria’s courts and tribunals.

• We consider:

– any gaps in the law where reform may be required to ensure AI is used safely

– any barriers in the law preventing AI being used in courts and tribunals

– existing legal professional obligations and how these may support the safe use of AI.

Victorian legislation and regulation relevant to AI

7.1 Victoria has no AI-specific legislation or regulations. But elements of Victoria’s broader legislative and regulatory framework are relevant in considering how AI can be safely used in Victoria’s courts and tribunals.

7.2 The Australian Government has proposed mandatory ‘guardrails’ for the use of AI in high-risk settings.[1] If implemented, they may have implications for courts and tribunals.

7.3 In this section we explore whether the existing legal framework can manage the risks of AI in courts and tribunals. We also consider what changes may be needed to the law.

Table 7: Legislation and regulations relevant to AI in Victorian courts and tribunals

Court rules and procedures

• Criminal Procedure Act 2009 (Vic)

• Civil Procedure Act 2010 (Vic)

• Evidence Act 2008 (Vic)

• Criminal Procedure Regulations 2020 (Vic)

• Open Courts Act 2013 (Vic)

• Statutory rules on court procedure, including:

– Supreme Court (General Civil Procedure) Rules 2015 (Vic)

– Supreme Court (Criminal Procedure) Rules 2017 (Vic)

– County Court Civil Procedure Rules 2018 (Vic)

– County Court Criminal Procedure Rules 2019 (Vic)

– Magistrates’ Court General Civil Procedure Rules 2020 (Vic) and Magistrates’ Court Criminal Procedure Rules 2019 (Vic)

– Coroners Court Rules 2019 (Vic) and Coroners Court Regulations 2019 (Vic)

– Children’s Court Criminal Procedure Rules 2019 (Vic)

– VCAT Rules 2018 (Vic)

Privacy and data regulation

• Privacy and Data Protection Act 2014 (Vic)

• Privacy Act 1988 (Cth)

Administrative law

• Administrative Law Act 1978 (Vic)

• Administrative Decisions (Judicial Review) Act 1977 (Cth)

Human rights

• Charter of Human Rights and Responsibilities Act 2006 (Vic)

• Equal Opportunity Act 2010 (Vic)

Court rules and procedures

7.4 If AI is to be permitted in Victoria’s courts and tribunals for a range of processes and functions, court rules and procedures will need to change, and existing procedural rules will need to be reviewed. Changes are also likely to be required if AI is not permitted, at least to make clear what is and is not authorised.

AI use by courts and tribunals

7.5 Court and tribunal rules have already changed to support the use of technology. Changes may be needed to allow even low-level automation.

7.6 The following recent examples, although not involving AI, show how court procedures have changed to enable new technology:

• The Children’s Court rules were amended to support the introduction of an electronic case management system.[2]

• During the COVID-19 pandemic, legislation was introduced to enable the use of technology in courts.[3] It permitted electronic service of documents, the use of audio-visual links, and decisions based on written submissions or in the absence of a party.[4]

7.7 In England and Wales, online pilots have required new practice directions to supplement civil procedure rules:

• Practice Direction 7E was introduced to facilitate the pilot of the Money Claim Online system for small claims.[5]

• Practice Direction 36E was introduced to allow online filing applications in certain matrimonial and civil partnership proceedings.[6]

• Practice Direction 36ZD was introduced to facilitate a pilot for an online system for certain private law proceedings relating to children and some protective orders and appeals.[7]

Procedural considerations for court users

7.8 Changes to court and tribunal procedures may be needed to allow for appropriate consideration of AI. The court or the parties in matters involving AI may need access to information about an AI system, such as its source code, data, variables or other technical details. Because AI systems are complex, this could require extensive disclosure involving huge volumes of data and millions of lines of code. The volume and complexity of these materials may call for new and different disclosure requirements; for example, rules might allow the results of system testing to be produced instead of the underlying code.

7.9 Matters involving disputes about AI itself (such as infringements of copyright) will also require consideration. Existing procedural rules may be sufficiently adaptable to allow parties to navigate an AI dispute fairly. But AI’s novelty and complexity suggest that legal professionals and judges will need further education.

Privacy and data regulation

7.10 AI raises significant issues related to privacy and data security. Existing legislation and regulations provide some legal protections. The Office of the Victorian Information Commissioner is responsible for overseeing the Privacy and Data Protection Act 2014 (Vic) (PDP Act). The PDP Act establishes information privacy principles about collection, use and disclosure, data quality and security, anonymity, transborder data flows and sensitive information.[8] The principles apply to Victorian public sector organisations and government contractors.[9] The PDP Act does not explicitly reference AI. But the principles can be applied to AI systems and broadly align with the privacy and data protection principles for regulating AI, as discussed in Part C.

7.11 There is also privacy regulation at the national level. The Privacy Act 1988 (Cth) applies to some private sector organisations (generally those with an annual turnover of more than $3 million), as well as most Australian Government agencies.[10]

Office of the Victorian Information Commissioner Privacy and Artificial Intelligence Guide

7.12 The Office of the Victorian Information Commissioner released a guide in 2021 about AI and privacy.[11] It is designed to assist public sector organisations to consider privacy obligations under the PDP Act when using AI systems.

7.13 The guide encourages public sector organisations to be aware of the unique risks AI can pose across the ‘collection, use, handling and governance of personal information’.[12]

7.14 The Office of the Victorian Information Commissioner has raised particular concerns about generative AI systems and privacy obligations.[13] In 2024, an investigation by the Commissioner found that a child protection worker had used ChatGPT to draft reports, including a report for the Children’s Court. The Commissioner found this was a serious breach of the information privacy principles because it involved personal and sensitive information.[14] The use of ChatGPT also resulted in inaccuracies and downplayed risks to the child.[15] As a result, the Commissioner issued a compliance notice requiring the Department of Families, Fairness and Housing to ban child protection workers from using ChatGPT and similar tools.[16]

7.15 The Victorian Privacy and Data Protection Deputy Commissioner issued a statement that Victorian public sector organisations ‘must ensure staff and contracted service providers do not use [their own or other people’s] personal information with ChatGPT’.[17] Further, public sector organisations have been advised not to use ChatGPT to ‘formulate decisions, undertake assessments, or […] for other administrative actions that may have consequences for individuals’.[18]

Privacy and data considerations for courts and tribunals

7.16 Victorian courts and tribunals aim to follow state and national privacy laws.[19] This includes the Office of the Victorian Information Commissioner’s principles for collecting and handling personal information.[20] But courts and tribunals are generally exempt from the PDP Act in relation to their judicial or quasi-judicial functions.[21] The exemption extends to judicial and quasi-judicial office holders, registry and other office holders, and staff.[22]

7.17 The use of AI in courts and tribunals could raise privacy issues in many ways. Considerations include:

• how information is uploaded into an AI system

• how third parties are engaged by the courts or tribunals to develop, operate or maintain an AI system

• how data is managed

• how consent is obtained for data use and whether consent extends to the purpose for which that data is used.

7.18 Courts and tribunals need to carefully identify the purpose and use of information across the AI system lifecycle, and ensure every use is lawful.

Administrative law

7.19 AI raises issues for administrative law, particularly in relation to the use of automated decision-making tools by government.

7.20 There are questions about whether an automated decision is a legally valid decision for administrative law purposes.[23] The use of automated decision-making tools by government may create barriers for people seeking judicial review of decisions.[24]

7.21 A majority of the Full Court of the Federal Court of Australia in Pintarich v Deputy Commissioner of Taxation[25] (Pintarich) set a precedent that the output of an autonomous machine may not be recognised by law as a decision.[26] The majority reasoned that autonomous machines cannot make decisions because they lack a subjective mental capacity.[27] The case suggests that ‘fully automated discretionary decisions may not be reviewable’[28] under the Administrative Decisions (Judicial Review) Act 1977 (Cth) (ADJR Act), which may prevent persons affected by such decisions from seeking judicial review.

7.22 Whether the use of automated decision-making tools will create barriers to judicial review may depend on the level of human involvement. Yee-Fui Ng and Maria O’Sullivan argue that:

If a human makes a decision guided or assisted by automated systems, this would still be a decision under the ADJR Act under the majority’s interpretation, as it would still involve a mental process of deliberation and cogitating by a human decision-maker.[29]

7.23 Justice Kerr delivered a dissenting judgment in Pintarich. He stated that the legal understanding of a decision should not remain static,[30] and that the concept of a decision should be reconsidered in light of technological advances:

The expectation that a ‘decision’ will usually involve human mental processes of reaching a conclusion … is being challenged by automated ‘intelligent’ decision-making systems that rely on algorithms to process applications and make decisions.[31]

7.24 Justice Kerr stated it would ‘undermine fundamental principles of administrative law’ if a decision maker could deny a decision had been made because a machine had been used to make it.[32]

7.25 The Australian Human Rights Commission recommended that legislation be introduced to require that ‘any affected individual is notified where AI is materially used in making an administrative decision. That notification should include information about how an affected individual can challenge a decision’.[33] The extent to which AI use is ‘material’ will be important to consider as AI is built into an ever wider range of applications.

7.26 ‘Rules as code’ also raises issues for administrative law. Rules as code seeks to translate legislation and regulation into code that can be interpreted and applied by a computer[34] (see examples in Part B). The Australasian Institute of Judicial Administration has highlighted that rules as code projects may in future create ‘implications for statutory interpretation and administrative decision-making’.[35] If rules as code is given legal recognition and applied to produce lawfully enforceable decisions, this could create challenges for existing judicial review avenues.[36] These risks could potentially be reduced by ensuring there are avenues to appeal automated decisions to a human decision maker.[37]
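To make the idea concrete, the minimal sketch below (in Python) encodes a hypothetical fee-waiver rule as executable logic. The rule is invented purely for illustration; real rules-as-code projects encode actual legislative provisions and must keep the code consistent with the legislation and its judicial interpretation.

```python
# A minimal 'rules as code' sketch. The fee-waiver rule encoded here is
# hypothetical and invented for illustration; it is not real legislation.
from dataclasses import dataclass


@dataclass
class Applicant:
    weekly_income: float          # in dollars
    holds_concession_card: bool


def fee_waiver_eligible(applicant: Applicant) -> bool:
    """Apply the hypothetical rule: 'An applicant is eligible for a fee
    waiver if they hold a concession card, or their weekly income is
    below $500.'"""
    if applicant.holds_concession_card:
        return True
    return applicant.weekly_income < 500.0


print(fee_waiver_eligible(Applicant(weekly_income=420.0, holds_concession_card=False)))  # True
```

Even this trivial example shows where difficulties arise: the code fixes one interpretation of the rule, and any ambiguity, discretion or later amendment in the legislation is invisible to the program unless the code is updated.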

Evidence

7.27 AI tools may be used in the collection, generation and assessment of evidence. This raises challenges for decision makers about the ‘reliability, transparency, interpretability and bias of evidence’.[38] How AI may be used to prepare and assess evidence, including deepfakes, is discussed in Part B. Changes to the Evidence Act 2008 (Vic) or existing codes of conduct or practice notes may be needed to respond to evidence involving AI.

7.28 The Australasian Institute of Judicial Administration suggests that:

Where AI systems are used in the courtroom or tribunal hearing, a report should accompany its use which provides a sufficient explanation to the judge and parties, appropriate for the context of its use. Where such safeguards are not in place, courts should be wary of using AI systems’ outputs in ways that affect the rights and obligations of individuals in circumstances where they have no real prospects of understanding or challenging the operation of the system.[39]

7.29 We want to hear your views about whether there are sufficient safeguards in place in relation to the use of AI in the collection, generation and assessment of evidence in courts and tribunals.

Assessing evidence involving AI

7.30 AI is new and evolving. In assessing evidence involving AI it may be useful for decision makers to consider:

• Context: Does the purpose for which the AI system was designed align with how it is being used?[40]

• Input factors: What metrics does the system rely on and how are they weighted? Is the data suitable and what training data was used?[41]

• Risk of bias: What data was used, including training data, and how representative is it? Are proxies being used inappropriately (for example, variables that act as proxies for race)?

• Accuracy: Has the system been tested? Has the accuracy of the system and its outputs been validated? Where testing has occurred, what are the limitations and error rate? (A minimal sketch of this kind of validation follows this list.)
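As a minimal sketch only, the Python below shows the kind of validation evidence these questions contemplate: an overall error rate, plus error rates broken down by cohort, where a large gap between cohorts may signal bias. The system outputs and cohort labels are invented for illustration.

```python
# A minimal sketch of validating an AI system's outputs: overall error
# rate plus a per-cohort breakdown. The records below are invented.
from collections import defaultdict


def error_rates(records):
    """records: (cohort, predicted, actual) tuples.
    Returns the overall error rate and a per-cohort error rate."""
    errors = 0
    per_cohort = defaultdict(lambda: [0, 0])  # cohort -> [errors, total]
    for cohort, predicted, actual in records:
        wrong = predicted != actual
        errors += wrong
        per_cohort[cohort][0] += wrong
        per_cohort[cohort][1] += 1
    overall = errors / len(records)
    by_cohort = {c: e / n for c, (e, n) in per_cohort.items()}
    return overall, by_cohort


sample = [("A", 1, 1), ("A", 0, 1), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1)]
overall, by_cohort = error_rates(sample)
print(overall, by_cohort)  # prints 0.4 {'A': 0.5, 'B': 0.333...}
```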

7.31 The use of generative AI and large language models raises additional issues. Decision makers may need to consider what, if any, fine-tuning has been undertaken, how the system has been prompted, and whether (and what) retrieval-augmented generation has been used.
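For readers unfamiliar with the terminology, the sketch below illustrates the basic retrieval-augmented generation pattern: retrieve the passages most relevant to a query, then generate an answer grounded in them. Both the keyword-overlap retriever and the generate_answer placeholder are assumptions for illustration, not any particular product’s API.

```python
# A toy retrieval-augmented generation (RAG) pipeline. The retriever is a
# simple keyword-overlap scorer; generate_answer() stands in for a real
# language-model call. Both are illustrative assumptions.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    query_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def generate_answer(query: str, passages: list[str]) -> str:
    """Placeholder: a real system would send the query and the retrieved
    passages to a language model and return its output."""
    return f"[answer to {query!r} grounded in {len(passages)} passage(s)]"


corpus = [
    "Practice note on the use of generative AI by legal practitioners.",
    "Fee schedule for civil filings.",
    "Guideline on expert evidence in criminal trials.",
]
print(generate_answer("generative AI practice note", retrieve("generative AI practice note", corpus)))
```

In assessing such a system, a decision maker might ask what corpus the retriever searches, how passages are ranked, and whether the model’s answer can be traced back to the retrieved sources.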

Expert opinion evidence

7.32 The use of AI gives rise to difficult questions in relation to expert evidence. For example, what kind of expert evidence is required to establish the reliability and admissibility of AI-related evidence? This issue will arise in relation to evidence about AI itself, but also where AI has been used in the assessment, collection or generation of evidence.

7.33 Because AI is new and complex, consideration may need to be given to the best ways for judges to assess expert evidence in relation to AI. The Civil Procedure Act 2010 (Vic) enables court-appointed experts,[42] a single joint expert[43] (an expert witness engaged jointly by both parties) or concurrent evidence[44] (where experts can be questioned jointly and the judge and counsel for either party can ask questions). These processes provide opportunities to ask experts questions and engage with different views. The Victorian Supreme and County Courts have issued practice notes on Expert Evidence in Criminal Trials, which make provision for evidence to be heard concurrently if the parties agree.[45]

7.34 The Federal Court of Australia in Trivago v ACCC showed how courts can examine expert evidence about AI.[46] The court was assisted by two computer science experts, who gave evidence about algorithms on the Trivago website. The evidence was given concurrently and focused on specific questions. One of the experts had access to the algorithm’s underlying metrics and their ratings; the other had to rely on results from testing the system.[47]

7.35 Different considerations will arise in criminal trials. Issues relating to the use of expert evidence about AI in the criminal context may require specific attention.

7.36 Practice notes and codes of conduct, such as the Victorian Supreme and County Courts’ Expert Evidence in Criminal Trials[48] and the Expert Witness Code of Conduct,[49] may need to be updated to address the risks and limitations of AI and whether disclosure about AI is required.

Human rights

7.37 The use of AI in courts and tribunals may have implications for human rights. The risk of bias associated with AI systems raises issues concerning discrimination and accessibility. In some circumstances, the use of AI may also raise concerns relating to a fair hearing. UNESCO states that ‘human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI systems’.[50]

7.38 Existing laws relating to privacy and anti-discrimination provide some protection. The Equal Opportunity Act 2010 (Vic) protects against discrimination, sexual harassment and victimisation.[51] The Charter of Human Rights and Responsibilities Act 2006 (Vic) (the Charter) contains basic rights, freedoms and responsibilities. This includes protection from unlawful discrimination, the protection of privacy and reputation, and criminal procedural rights.

7.39 Courts and tribunals must, as far as possible, interpret all Victorian laws in a way that upholds the rights outlined in the Charter.[52] The Charter applies to certain functions of courts and tribunals.[53]

7.40 The Australian Human Rights Commission recommended that the Australian Government adopt a human rights-centred approach to AI development and deployment.[54] It also recommended that procurement law, policy and guidelines protect human rights in the design and development of any AI-informed decision-making tools procured by the Australian Government.[55] Further, it recommended human rights impact assessments to assess whether any proposed AI-informed decision-making system:

• complies with human rights law

• involves automating any element of discretion

• provides appropriate review by human decision makers

• is authorised and governed by legislation.[56]

7.41 Human rights issues for courts and tribunals may include:

• detecting and responding to discrimination in any AI process

• considering the impact of AI on marginalised cohorts and people experiencing barriers to justice

• ensuring human rights are respected by court users who use AI.

Questions

11. Building on Table 7, are there other statutes or regulations relevant to the safe use of AI in Victorian courts and tribunals?

12. Are there legislative or regulatory gaps or barriers where reform is needed for the safe use of AI in courts and tribunals?

13. What, if any, changes to legislation, rules or processes are necessary to enable courts and tribunals to:

a. safely use AI

b. consider evidence in relation to AI

c. implement human rights principles (Should there be a human rights impact assessment of any AI use in courts and tribunals?)

d. align AI use with privacy responsibilities?

14. How can changes be achieved while maintaining appropriate flexibility?

Regulatory framework for legal professionals

Legal professionals

7.42 In addition to the legal framework discussed above, barristers and solicitors are bound by legal professional duties. Risks relating to AI may challenge existing legal professional obligations.

7.43 There are risks to justice if legal professionals use AI without checking for accuracy. Issues relating to client privilege, privacy and bias are key considerations for legal professionals.

7.44 Barristers and solicitors are bound by professional conduct rules and ethical obligations under the Legal Profession Uniform Law and the associated barristers’ and solicitors’ conduct rules.[57] Under this framework barristers and solicitors owe duties to their clients, their colleagues and the court. Examples of barristers’ duties are provided below; similar rules also apply to solicitors.

Duty to courts

7.45 Barristers have an overriding duty to the court to act with independence in the administration of justice.[58] As officers of the court, barristers must not deceive or mislead the court.[59] They must take all necessary steps to ensure the accuracy of information they provide to the court and correct any misleading statements.[60]

7.46 The Civil Procedure Act 2010 (Vic) also creates a paramount duty to the court to further the administration of justice in relation to any civil proceeding. Legal professionals must act honestly, ensure claims have a proper basis and not mislead or deceive the court.[61]

7.47 AI raises risks in relation to legal professionals meeting their duties to the courts, particularly relating to accuracy. As discussed in Parts A and B, there are risks relating to the accuracy of AI, particularly generative AI. If legal professionals rely on AI tools without applying their own judgment, this may result in them providing misleading or inaccurate information to courts.

Duty to clients

7.48 Barristers owe duties to their clients and must promote and protect their client’s best interests.[62] Barristers must also maintain confidentiality and legal professional privilege.[63] But there is also an obligation for barristers to use their own judgment and not act simply as a messenger of their client.[64]

7.49 AI tools can produce one-size-fits-all templates and responses which do not consider an individual’s unique circumstances. To act in a client’s best interests, legal professionals need to apply their own judgment when using AI, including in drafting documents and preparing legal advice.[65]

7.50 The use of AI may lead to a breach of a client’s privacy, depending on how the AI uses personal information (see privacy discussion in Parts A and B).

Implication of legal professional obligations for AI

7.51 These existing rules might provide a way for legal professionals to manage AI risks. Breaches of professional conduct rules and ethical obligations are generally handled by legal professional bodies such as the Victorian Legal Services Board and Commissioner. Legal professional bodies, including those in New South Wales and Victoria, have provided guidance to solicitors and barristers on their duties in relation to using AI (see discussion below from section XX).[66]

7.52 The uniform conduct rules for barristers and solicitors require them to act with competence and diligence in the administration of justice.[67] It has been suggested that, in the future, if legal professionals do not use AI when it can provide better, quicker and less costly legal services, this might raise issues about diligence and the need to protect clients’ interests.[68] In some jurisdictions the duty of diligence has been interpreted to require an understanding of relevant technologies. The American Bar Association approved changes to its Model Rules of Professional Conduct in 2012,[69] making clear that professional competency includes understanding relevant technologies associated with legal practice.[70]

7.53 Additionally, the court has inherent jurisdiction to supervise the conduct of its officers (solicitors and barristers).[71] This ensures standards are maintained to enable the proper administration of justice. A court can take disciplinary action where it has been recklessly misled.[72] Therefore, courts can direct and discipline lawyers and, in principle, would have the capacity to direct how lawyers use AI in court matters.

Prosecutorial bodies

7.54 AI is playing a growing role in criminal investigations and the determination of policing matters (see discussion in Part B).

7.55 Victoria Police developed an AI Ethics Framework to support the ethical and lawful use of AI.[73] The framework aims to mitigate key risks of AI, including risks to human rights, legal consequences, and situations where AI may replace or influence police discretion.[74] It is modelled on Australia’s AI Ethics Principles, in conjunction with the Police Artificial Intelligence Principles developed by the Australia New Zealand Policing Advisory Agency.[75]

7.56 This framework highlights that any use of AI must be compatible with Victoria Police’s existing legal obligations, including under the Charter. The framework sets out eight overarching principles with which any use of AI must comply:

• human rights

• community benefit

• fairness

• privacy and security

• transparency

• accountability

• human oversight

• skills and knowledge.[76]

7.57 Implementation of the Australia New Zealand Policing Advisory Agency’s principles includes a commitment to community safety, harm minimisation and community confidence in the adoption and deployment of AI. This includes ensuring information about the use of AI is publicly available to the greatest extent possible, without undermining policing objectives.[77]

7.58 Other jurisdictions have developed guidelines for the use of AI by policing agencies. In Canada, the Toronto Police Service Board recently updated its AI policy.[78] This policy contains nine guiding principles for the use of AI by Toronto Police and requires a risk assessment before the procurement, use or deployment of any new AI technology. There is also a requirement to implement a public engagement strategy to ‘transparently inform the public of the use of new AI technology’.[79]

Question

15. Is there a need to strengthen professional obligations to manage risks relating to AI? If so, what changes might be required to the Legal Profession Uniform Law, Civil Procedure Act or regulations?


  1. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings (Proposals Paper, September 2024) 26, 63.

  2. For example, see Children’s Court Authentication and Electronic Transmission Rules 2020 (Vic).

  3. COVID-19 Omnibus (Emergency Measures) Act 2020 (Vic).

  4. Ibid.

  5. This has since been renumbered as Practice Direction 7C: Ministry of Justice (UK), Practice Direction 7C – Money Claim Online (Practice Direction, HM Courts and Tribunals Service, 1 October 2022).

  6. Ministry of Justice (UK), Practice Direction 36E – Pilot Scheme: Procedure for Online Filing of Applications in Certain Proceedings for a Matrimonial Order (Practice Direction, HM Courts and Tribunals Service, 30 July 2018) <https://www.justice.gov.uk/courts/procedure-rules/family/practice_directions/practice-direction-36e-pilot-scheme-procedure-for-online-filing-of-applications-in-certain-proceedings-for-a-matrimonial-order>.

  7. Ministry of Justice (UK), Practice Direction 36ZD – Pilot Scheme: Online System for Certain Private Law Proceedings Relating to Children, Certain Protective Orders and Certain Appeals (Practice Direction, HM Courts and Tribunals Service, 28 April 2024) <https://www.justice.gov.uk/courts/procedure-rules/family/practice_directions/practice-direction-36zd-pilot-scheme-online-system-for-certain-private-law-proceedings-relating-to-children-and-for-certain-protective-orders>.

  8. Privacy and Data Protection Act 2014 (Vic) pt 3 div 2.

  9. Ibid s 13.

  10. Privacy Act 1988 (Cth) pt 1 div 2.

  11. Office of the Victorian Information Commissioner, Artificial Intelligence – Understanding Privacy Obligations (Report, April 2021) <https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/>. Note: this resource is being updated following public consultation, which closed on 19 June 2024.

  12. Office of the Victorian Information Commissioner, Artificial Intelligence – Understanding Privacy Obligations (Report, April 2021) 3 <https://ovic.vic.gov.au/privacy/resources-for-organisations/artificial-intelligence-understanding-privacy-obligations/>.

  13. Privacy and Data Protection Deputy Commissioner, ‘Public Statement: Use of Personal Information with ChatGPT’, (Report, Office of the Victorian Information Commissioner, September 2024) 1 <https://ovic.vic.gov.au/privacy/resources-for-organisations/public-statement-use-of-personal-information-with-chatgpt/>.

  14. Office of the Victorian Information Commissioner, Investigation into the Use of ChatGPT by a Child Protection Worker (Report, Office of the Victorian Information Commissioner, September 2024) 5–7 <https://ovic.vic.gov.au/regulatory-action/investigation-into-the-use-ofchatgpt-by-a-child-protection-worker/>.

  15. Ibid.

  16. Ibid 8.

  17. Privacy and Data Protection Deputy Commissioner, ‘Public Statement: Use of Personal Information with ChatGPT’, Office of the Victorian Information Commissioner (Web Page, February 2024) 1 <https://ovic.vic.gov.au/privacy/resources-for-organisations/public-statement-use-of-personal-information-with-chatgpt/>.

  18. Ibid.

  19. Supreme Court of Victoria, ‘PRIVACY: Privacy Statement for the Supreme Court of Victoria Website’, The Supreme Court of Victoria (Web Page, 2024) <http://www.supremecourt.vic.gov.au/privacy>; County Court of Victoria, ‘Privacy Statement’, County Court of Victoria (Web Page, 7 October 2021) <https://www.countycourt.vic.gov.au/privacy-statement>.

  20. Office of the Victorian Information Commissioner, Privacy Management Framework (Report, Office of the Victorian Information Commissioner, April 2021) <https://ovic.vic.gov.au/privacy/resources-for-organisations/privacy-management-framework/>.

  21. Privacy and Data Protection Act 2014 (Vic) s 10.

  22. Ibid.

  23. Yee-Fui Ng and Maria O’Sullivan, ‘Deliberation and Automation – When Is a Decision a Decision?’ (2019) 26(21) Australian Journal of Administrative Law 21, 23.

  24. See discussion in NSW Ombudsman, The New Machinery of Government: Using Machine Technology in Administrative Decision-Making (Report, 29 November 2021).

  25. Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79; (2018) 262 FCR 41.

  26. Anna Huggins, ‘Addressing Disconnection: Automated Decision-Making, Administrative Law and Regulatory Reform’ (2021) 44(3) University of New South Wales Law Journal 1048, 1062 <https://www.unswlawjournal.unsw.edu.au/article/addressing-disconnection-automated-decision-making-administrative-law-and-regulatory-reform/>.

  27. Ibid.

  28. Ibid 1064.

  29. Yee-Fui Ng and Maria O’Sullivan, ‘Deliberation and Automation – When Is a Decision a Decision?’ (2019) 26(21) Australian Journal of Administrative Law 21, 30.

  30. Pintarich v Deputy Commissioner of Taxation [2018] FCAFC 79; (2018) 262 FCR 41, [49].

  31. Ibid.

  32. Ibid.

  33. Australian Human Rights Commission, Human Rights and Technology (Final Report, 2021) 60.

  34. Ronan Kennedy, ‘Rules as Code and the Rule of Law: Ensuring Effective Judicial Review of Administration by Software’ [2024] Law, Innovation and Technology 1, 2–3.

  35. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 11.

  36. James Mohun and Alex Roberts, Cracking the Code: Rulemaking for Humans and Machines (OECD Working Papers on Public Governance No 42, 2020) 94 <https://www.oecd-ilibrary.org/governance/cracking-the-code_3afe6ba5-en>.

  37. Ibid.

  38. UNESCO, ‘How to Determine the Admissibility of AI-Generated Evidence in Courts?’, UNESCO (Web Page, 26 July 2023) <https://www.unesco.org/en/articles/how-determine-admissibility-ai-generated-evidence-courts>.

  39. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 46.

  40. For example, the FBI’s facial recognition application reportedly has a high match rate when comparing an image with a local database but much lower accuracy when compared against another country’s dataset that it was not trained on: James E Baker, Laurie N Hobart and Matthew Mittelsteadt, An Introduction to Artificial Intelligence for Federal Judges (Report, Federal Judicial Center, 2023) 52–3.

  41. For example, in considering AI-driven bail or sentencing tools, it is important to ensure that inappropriate or unlawful factors are not included and that relevant factors are weighted appropriately by the AI system.

  42. Civil Procedure Act 2010 (Vic) s 65M.

  43. Ibid s 65L.

  44. Ibid s 65K.

  45. Supreme Court of Victoria, SC CR 3 Expert Evidence in Criminal Trials (Report, 30 January 2017) [13.1] <http://www.supremecourt.vic.gov.au/areas/legal-resources/practice-notes/sc-cr-3-expert-evidence-in-criminal-trials>; County Court of Victoria, Practice Note: Expert Evidence in Criminal Trials (Report, 24 June 2014) [11.1] <https://www.countycourt.vic.gov.au/files/documents/2019-10/expert-evidence-criminal-trials.pdf>.

  46. Trivago NV v Australian Competition and Consumer Commission [2020] FCAFC 185; (2020) 384 ALR 496.

  47. Ibid [81].

  48. Supreme Court of Victoria, SC CR 3 Expert Evidence in Criminal Trials (Report, 30 January 2017) 6.2 <http://www.supremecourt.vic.gov.au/areas/legal-resources/practice-notes/sc-cr-3-expert-evidence-in-criminal-trials>; County Court of Victoria, Practice Note: Expert Evidence in Criminal Trials (Report, 24 June 2014) 4.2 <https://www.countycourt.vic.gov.au/files/documents/2019-10/expert-evidence-criminal-trials.pdf>.

  49. Supreme Court (General Civil Procedure) Rules 2015 (Vic) Form 44A.

  50. UNESCO, Recommendation on the Ethics of Artificial Intelligence (Report No SHS/BIO/PI/2021/1, 2022) 18 <https://unesdoc.unesco.org/ark:/48223/pf0000381137>.

  51. Equal Opportunity Act 2010 (Vic).

  52. Charter of Human Rights and Responsibilities Act 2006 (Vic) s 32.

  53. Ibid s 6(2)(b).

  54. Australian Human Rights Commission, Australian Human Rights Commission Submission to the Select Committee on Adopting Artificial Intelligence (Report, Australian Human Rights Commission, 15 May 2024) 12 <https://humanrights.gov.au/our-work/legal/submission/adopting-ai-australia>.

  55. Australian Human Rights Commission, Human Rights and Technology (Final Report, 2021) 100.

  56. Ibid 55.

  57. Legal Profession Uniform Law Application Act 2014 (Vic); Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW); Legal Profession Uniform Legal Practice (Solicitors) Rules 2015 (NSW).

  58. Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW) r 23.

  59. Ibid r 24.

  60. Ibid r 25.

  61. Civil Procedure Act 2010 (Vic) ss 10(b), 16–27. The paramount duty is to further the administration of justice (s 16). Overarching obligations relevant to the use of AI include the obligations to act honestly (s 17), ensure claims have a proper basis (s 18) and not to mislead or deceive the court (s 21).

  62. Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW) r 35.

  63. Ibid r 114.

  64. Ibid r 42.

  65. Michael Legg and Felicity Bell, ‘Artificial Intelligence and the Legal Profession: Becoming the AI-Enhanced Lawyer’ (2019) 38(2) University of Tasmania Law Review 34, 55–7.

  66. ‘Generative AI and Lawyers’, Victorian Legal Services Board + Commissioner (Web Page, 17 November 2023) <https://lsbc.vic.gov.au/news-updates/news/generative-ai-and-lawyers>; NSW Bar Association, Issues Arising from the Use of AI Language Models (Including ChatGPT) in Legal Practice (Guidelines, NSW Bar Association, 22 June 2023) <https://inbrief.nswbar.asn.au/posts/9e292ee2fc90581f795ff1df0105692d/attachment/NSW%20Bar%20Association%20GPT%20AI%20Language%20Models%20Guidelines.pdf>; The Law Society of NSW, A Solicitor’s Guide to Responsible Use of Artificial Intelligence (Report, 10 July 2024) <https://www.lawsociety.com.au/sites/default/files/2024-07/LS4527_MKG_ResponsibleAIGuide_2024-07-10.pdf>.

  67. Legal Profession Uniform Conduct (Barristers) Rules 2015 (NSW) r 4(c); Legal Profession Uniform Legal Practice (Solicitors) Rules 2015 (NSW) r 4.1.3.

  68. Michael Legg and Felicity Bell, ‘Artificial Intelligence and Solicitors’ Ethical Duties’ [2022] (85) Law Society Journal 77.

  69. Ibid.

  70. Ibid.

  71. GE Dal Pont, Lawyers’ Professional Responsibility (Thomson Reuters, 7th ed, 2021) 572 [17.20] and 574 [17.25].

  72. Ibid 574 [17.25].

  73. Victoria Police, Victoria Police Artificial Intelligence Ethics Framework (Policy, Victoria Police, 23 April 2024) <https://www.police.vic.gov.au/victoria-police-artificial-intelligence-ethics-framework>.

  74. Ibid.

  75. Australian Government, ‘Australia’s AI Ethics Principles – Australia’s Artificial Intelligence Ethics Framework’, Department of Industry, Science and Resources (Web Page, 5 October 2022) <https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles>; Australia New Zealand Policing Advisory Agency (ANZPAA), Australia New Zealand Police Artificial Intelligence Principles (Report, 14 July 2023) <https://www.anzpaa.org.au/resources/publications/australia-new-zealand-police-artificial-intelligence-principles>.

  76. Victoria Police, Victoria Police Artificial Intelligence Ethics Framework (Policy, Victoria Police, 23 April 2024) <https://www.police.vic.gov.au/victoria-police-artificial-intelligence-ethics-framework>.

  77. Australia New Zealand Policing Advisory Agency (ANZPAA), Australia New Zealand Police Artificial Intelligence Principles (Report, 14 July 2023) <https://www.anzpaa.org.au/resources/publications/australia-new-zealand-police-artificial-intelligence-principles>.

  78. Toronto Police Service Board, Use of Artificial Intelligence Technology (Report, Toronto Police Service Board, 11 January 2024) <https://tpsb.ca/policies-by-laws/board-policies/195-use-of-artificial-intelligence-technology>.

  79. Ibid.