Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper

6. Principles for responsible and fair use of AI in courts and tribunals

Overview

• This chapter proposes principles for guiding safe and responsible use of AI in courts and tribunals. The proposed principles draw on:

– broad principles relating to AI

– core principles of justice relevant to courts and tribunals.

Principles for regulating AI in courts and tribunals

6.1 Our terms of reference ask us to develop principles to guide the safe use of AI in Victoria’s courts and tribunals. We have considered a combination of common principles for regulating AI and principles of justice, discussed in the two tables below.

Common AI regulatory principles

6.2 A broad range of principles has been developed across Australia and overseas to regulate the safe use of AI. As shown in Appendix A, there is significant similarity in the principles other jurisdictions have used to regulate AI. The principles below are summarised from a range of international sources.

Fairness and equity

AI systems should be fair and equitable, and not discriminate against individuals or groups. This is particularly important given the risk of bias. AI should also be accessible and not marginalise vulnerable groups.

Accountability

There should be clearly identified accountability for the safe use of AI throughout the lifecycle. This includes risk management in the development and ongoing use of AI. AI systems should be tested and evaluated before use, and monitored after implementation.

Human oversight

Human oversight should be retained as a check on AI systems. The level of oversight will vary depending on the context.

Transparency

There should be transparency about when AI is used, and information on the data and methodology used should be readily available. The AI should be explainable enough for people to ask questions and challenge outcomes.

Contestability

People whose rights or interests are affected by AI should have a process to challenge the use or output of an AI system.

Privacy and data security

AI systems should be developed and deployed with respect for privacy and data protection. This requires careful data collection, use and storage, and appropriate data governance and management.

Principles of justice

6.3 The principles above offer useful guidance. But additional principles are fundamental to the justice system. Any principles about AI in courts and tribunals need to be anchored in core judicial values. The following discussion draws on the judicial values outlined in AI Decision-Making and the Courts.[1]

Impartiality and equality before the law

Impartiality has been described as ‘the fundamental quality required of a judge and the core attribute of the judiciary’.[2] Judges take an oath to ‘do right to all manner of people according to the law without fear or favour, affection or ill will’.[3] Impartiality and equality are also fundamental to the right to a fair hearing by ‘a competent, independent and impartial court or tribunal’.[4]

Access to justice

Access to justice is fundamental to a fair and effective justice system. If people are unable to access justice in a fair, timely and cost-effective way, they cannot exercise or defend their rights or hold decision makers to account. Access to justice extends beyond the formal justice system of lawyers and courts. It includes access to ‘legal information and education, non-court based dispute resolution and law reform.’[5]

Judicial accountability

Judicial accountability involves explaining decisions and demonstrating that they are independent, impartial and based on evidence. A key mechanism for judicial accountability is the obligation to give adequate reasons. This allows decisions to be scrutinised by parties, appellate courts and the public. It also promotes good decision-making and the acceptability of decisions.

Independence

A key aspect of judicial independence is that the judiciary operates separately from the executive and the legislature (i.e. the separation of powers). However, the principle is broader than this. It requires that ‘a judge be, and be seen to be, independent of all sources of power or influence in society, including the media and commercial interests’.[6]

Open justice

A fundamental aspect of the system of justice in Australia is that it is open to public scrutiny and assessment.[7] The Open Courts Act 2013 (Vic) describes how the principle of open justice ‘is a fundamental aspect of the Victorian legal system’. It ‘maintains the integrity and impartiality of courts and tribunals’ and ‘strengthens public confidence in the system of justice’.[8]

Open justice involves accountability for processes and decisions, and transparency in court operations.[9] Open justice is promoted by conducting proceedings in ‘open court’, where the public can access and observe hearings, and by allowing fair and truthful reports of judicial proceedings.[10] Giving judgments in public and making reasons accessible also contributes to open justice.[11]

Public trust

Public trust is central to the rule of law. Public confidence is ‘invoked as a guiding principle in relation to the conduct of judges … and in relation to the institutional conduct of courts’.[12] The general acceptance of judicial decisions has been described as resting ‘not upon coercion, but upon public confidence’.[13] Confidence and trust require that the public is satisfied the justice system is based on judicial independence, impartiality, integrity and fairness.

Procedural fairness

Procedural fairness, or natural justice, ‘lies at the heart of the judicial function’.[14] It requires that a court be impartial and that each party is provided ‘an opportunity to be heard, to advance its own case and to answer, by evidence and argument, the case put against it’.[15] Procedural fairness requires that a person has sufficient understanding of the evidence or decision to be able to challenge the case against them. The ability to interrogate evidence and the basis for decisions is also necessary to appeal a decision.

Efficiency

Courts and tribunals should provide modern and cost-effective services where possible, while maintaining outcomes that align with core judicial values. This is highlighted by the Civil Procedure Act 2010 (Vic), which has an overarching objective ‘to facilitate the just, efficient, timely and cost-effective resolution of the real issues in dispute’.[16]

Developing court- and tribunal-specific principles for regulating AI

6.4 Drawing on a combination of (a) principles for regulating AI and (b) the principles of justice identified above, we propose eight potential principles to guide the safe use of AI in Victoria’s courts and tribunals:

• impartiality and fairness

• accountability and independence

• transparency and open justice

• contestability and procedural fairness

• privacy and data security

• access to justice

• efficiency

• human oversight and monitoring

6.5 These proposed principles are discussed below and would underpin the guidelines discussed in Part D.

6.6 The principles will provide a foundation for maintaining public trust in the courts. Maintaining public trust requires that courts and tribunals carefully consider the risks of using AI and how it aligns with judicial values. Public trust and confidence are fundamental to the acceptance of judicial decisions and the operation of the law.

6.7 We are interested in your views about whether these principles are appropriate and sufficient to guide the regulation of AI in courts and tribunals. We recognise that these are high-level principles, so we are also interested in how they can be tailored more specifically to courts and tribunals.

6.8 There has been criticism that the ‘ethical AI approach’ is not enough to address the risks of AI, particularly in high-risk settings. Some critics say principles are not specific enough and rely on voluntary compliance.[17] We are also interested in whether principles are sufficient, or if further regulatory intervention is necessary.

Impartiality and fairness

Principle 1: Impartiality and fairness

AI systems should be fair and equitable, and not discriminatory. AI should not undermine judicial independence and impartiality, or the right to procedural fairness.

Courts and tribunals should understand the risk of bias in any proposed AI technology. They should understand who designed it and why, what data the system is trained on, and the quality and relevance of that data.

Legal professionals and experts relying on AI should be aware of potential bias and only use AI for the appropriate purpose.

6.9 AI systems sometimes incorporate bias, and can even amplify it (see Parts A and B). This raises significant issues for the principles of impartiality and equality. There may be inherent bias in AI systems created by the data they are trained on, and biased assumptions may underpin the algorithms. Furthermore, AI’s capacity for continual learning can reinforce bias.

6.10 Because AI can continually learn and adapt, courts and tribunals need to monitor and evaluate AI systems after they have been deployed, to detect bias and ensure fairness. As a result of the ‘black box’ phenomenon (see Part A), it can be difficult to detect and rectify bias, so some AI tools may need to be subject to strict oversight, or not applied to high-risk uses that affect individual rights and liberties.

6.11 It is essential that courts and tribunals take measures to minimise the risk of bias when they use AI systems.
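
To make the monitoring discussed in paragraph 6.10 concrete, the sketch below shows one simple check a court technology team could run over logged outputs of an AI tool: comparing rates of favourable outcomes across demographic groups against the commonly cited ‘four-fifths’ disparate-impact rule of thumb. This is a minimal sketch; the record format, function names and threshold are illustrative assumptions, not a prescribed methodology.

```python
from collections import defaultdict

# Illustrative sketch only: the record format, group labels and the 0.8
# threshold are assumptions, not a prescribed or endorsed methodology.
DISPARATE_IMPACT_THRESHOLD = 0.8  # the common "four-fifths" rule of thumb

def selection_rates(records):
    """Compute the rate of favourable AI outputs per demographic group.

    Each record is a dict like {"group": "A", "favourable": True}.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        if record["favourable"]:
            favourable[record["group"]] += 1
    return {group: favourable[group] / totals[group] for group in totals}

def disparate_impact_flags(records):
    """Flag any group whose rate falls below 80% of the highest group's rate."""
    rates = selection_rates(records)
    benchmark = max(rates.values())
    return {group: rate / benchmark < DISPARATE_IMPACT_THRESHOLD
            for group, rate in rates.items()}

# Example: outputs logged from a hypothetical triage tool.
log = [
    {"group": "A", "favourable": True},
    {"group": "A", "favourable": True},
    {"group": "B", "favourable": True},
    {"group": "B", "favourable": False},
]
print(disparate_impact_flags(log))  # {'A': False, 'B': True} -> group B flagged
```

A flag from a check like this would not establish bias on its own, but it could trigger the kind of human review and evaluation contemplated above.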

Accountability and independence

Principle 2: Accountability and independence

When AI is used in courts, it should be clear who is accountable. Courts should be accountable for the use of AI in court processes or decision-making, and court users accountable for their own use of AI.

Courts and tribunals that use AI will need to be clear about who is accountable for its design, development and deployment. There should be clear lines of accountability where third-party AI systems are used.

In applying the principle of accountability, courts and tribunals should consider how accurate the tool is and how to explain and understand it.

6.12 Courts and tribunals will need appropriate accountability processes to ensure public confidence in any AI systems.

6.13 Developers are responsible for the design and accuracy of AI systems, but courts and tribunals will be accountable for how they are used and how accurate the outputs are. They will need to take care to ensure that the different actors involved in the system are accountable for their part of it.[18]

6.14 One of the guardrails in the Australian Government’s Voluntary AI Safety Standard is for organisations to ‘establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.’[19] Courts and tribunals will need to decide what information they need from AI developers and suppliers so that they can implement accountability processes and maintain human oversight.

6.15 Officers of courts and tribunals will need to establish who is accountable for testing, evaluating and monitoring AI systems after they have been deployed. The Voluntary AI Safety Standard highlights the need to monitor AI systems to detect any behaviours, changes or unintended consequences of deploying them.[20]

6.16 Some court guidelines require court officials or judicial officers to disclose when AI is used, or oblige them to consult with the public on the use of AI. The Australian Human Rights Commission stated in its report Human Rights and Technology that ‘individuals should be made aware when they are subject to AI-informed decision-making’.[21] There is more accountability when the public knows and understands how AI is being used.

Who makes the decisions?

6.17 AI may interfere with judicial independence when it is used to support decision-making. Even more significant ethical and legal issues would arise if AI were used in future for fully automated decision-making.

6.18 Judges may be less able to provide reasons for judicial decisions if they cannot explain the part played by AI tools, whether for proprietary reasons or because they do not understand the technology. This would make it harder for court users to understand and challenge those decisions. Tania Sourdin notes that the right to appeal a judge’s decision has been emphasised by several commentators.[22] In her view, ‘automation can compromise individual due process rights by undermining the ability of a party to challenge a decision affecting them’.[23] Sourdin also says that it is important to ‘ensure automated processes do not prevent parties from accessing or assessing the information used to make the decision’.[24]

6.19 Difficulties may also arise where it is not clear that AI has been used, for example in preparing written submissions or expert opinions.

Who designs and regulates AI tools?

6.20 How AI is regulated raises considerations for judicial independence. If government seeks to regulate the use of AI in courts and tribunals, can it do so without interfering with judicial independence? Would legislation prohibiting AI impinge on judicial independence?

6.21 There may be issues of independence if external bodies are responsible for overseeing AI tools used by courts. For example, assessing the suitability of new AI applications could be done internally by courts, or externally by another body. The Australasian Institute of Judicial Administration notes that under the EU AI Act, tools used by the judiciary will be placed under the control of many entities. The EU AI Act lists a range of bodies that will assess AI systems. The roles of such external bodies could impact judicial independence and accountability.

6.22 The design of AI systems could also affect judicial independence. This could happen if governments are involved in designing AI systems used in courts, including those operated by third parties.[25] It could also happen when private companies design AI tools used in courts. There may be risks of interference from foreign states where AI tools are hosted in international jurisdictions and are subject to different regulatory requirements. Examples of this have already been identified.[26]

Transparency and open justice

Principle 3: Transparency and open justice

Courts and tribunals should be transparent about when they use AI, and information on the data and methodology should be readily available. Courts should:

• provide sufficient information for a person whose rights are affected by a decision to understand and contest the use of AI

• consider when it is appropriate to disclose the use of AI, including obligations for court staff, judicial officers and court users.

The principle of transparency also applies to court users. Disclosure by parties about the use of AI tools in courts and tribunals may be appropriate so that potential misuse can be assessed and unintended consequences minimised.

6.23 There are opportunities for AI to promote open justice. For example, AI could make transcripts more readily available than they are now. But AI also poses risks for open justice and transparency:

• The complexity of AI systems makes them difficult to explain, and it will not always be apparent that AI is being used.

• Proprietary interests could make it more difficult to explain or assess the AI system.[27] Expert evidence about the technology might be heard in closed court,[28] but this needs to be balanced against transparency and open justice considerations.

• It may be difficult to maintain ‘openness’ when matters are determined in alternative forums, such as online or by automated processes, rather than in open hearings.

6.24 It is critical for courts and tribunals to be transparent about the AI they are using, so that there can be proper oversight. That means transparency about what technology is used and how it is trained, tested and monitored. Other considerations include:

• when to disclose use of AI

• to what degree to explain the underlying technology

6.25 The degree of transparency may vary depending on how the AI is used, and whether it is used by court administration or judicial officers. Where an AI application, such as an automated software system, is developed by a third party, it may be difficult for courts to provide information that they do not hold.

6.26 The people who use AI technology might not always need to understand or explain how the algorithm works, if they can verify the results. Court staff who use AI for administrative tasks may not need to understand the technology but will need to be able to check it is accurate. However, if a judge, magistrate or tribunal member uses AI to assist with the exercise of discretion, it will be essential to understand exactly how results are produced, and to explain the AI process so people can appeal or contest the decision.
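
One practical way a court might operationalise this kind of transparency is a published register recording each AI system in use, who uses it, how it was trained and how it is overseen. The sketch below shows a hypothetical record structure; every field name is an illustrative assumption, not a prescribed disclosure standard.

```python
from dataclasses import dataclass

# Hypothetical register entry for an AI system in use. All field names and
# example values are illustrative assumptions, not a disclosure standard.
@dataclass
class AIRegisterEntry:
    system_name: str
    supplier: str
    purpose: str                 # e.g. transcription, case triage
    used_by: str                 # administrative staff or judicial officers
    training_data_summary: str   # what the system was trained on
    last_evaluated: str          # date of the most recent accuracy/bias review
    human_oversight: str         # how outputs are checked by a person

entry = AIRegisterEntry(
    system_name="TranscriptAssist",
    supplier="Example Pty Ltd",
    purpose="draft hearing transcripts for registrar review",
    used_by="administrative staff",
    training_data_summary="publicly available court audio and transcripts",
    last_evaluated="2024-11-01",
    human_oversight="registrar checks each transcript before release",
)
print(entry.system_name, "-", entry.purpose)
```

A register of this kind would let court users and the public see at a glance what technology is in use, how it is trained, tested and monitored, and who is responsible for checking its outputs.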

Contestability and procedural fairness

Principle 4: Contestability and procedural fairness

People whose rights or interests are affected by AI should have access to a process to challenge the use or output of an AI system. Contestability is especially important when an individual’s rights are affected but may also be necessary for some procedural decisions by court administration.

6.27 The level of automation and the use of AI are relevant in considering the impact on procedural fairness. Automating administrative functions, such as case management, will not affect procedural fairness as much as automation in areas involving the exercise of judicial decision-making.

6.28 Where AI supports judicial decision-making, the difficulties in understanding and explaining AI may undermine procedural fairness. For example, if AI were used in risk assessment for sentencing, a court user would need to understand the methodology of the tool before they could contest it or launch an appeal.

6.29 The possibility of fully automated judicial decision-making raises significant issues about how parties advance their cases and contest evidence and decisions.[29] The very nature of appellate review may need to be reconsidered if a first instance decision is fully automated.

6.30 Courts will need to consider appropriate procedures for contesting decisions or findings based on AI and provide adequate information to enable court users to challenge a decision.

Privacy and data security

Principle 5: Privacy and data security

AI systems should be developed and deployed with respect for privacy and data protection. This requires careful collection, use and storage of data, and appropriate data governance and management. Courts and tribunals must consider how personal information is handled by AI systems, including where data is stored and who (including third parties) has access to it. Courts should be guided by privacy guidelines and standards.

6.31 Courts and tribunals will need appropriate procedures to ensure privacy and data security.

6.32 The risk of data insecurity and loss of privacy caused by using AI systems in courts and tribunals is detailed in Part A.
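
As one concrete illustration of careful data handling, the sketch below removes obvious personal identifiers from text before it leaves a court’s systems, for example before being sent to an external AI service. The patterns are simplistic assumptions for illustration only; real de-identification would need far more robust methods and governance.

```python
import re

# Illustrative sketch only: these patterns are simplistic assumptions,
# not a complete or endorsed de-identification method.
REDACTIONS = [
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),        # e.g. dates of birth
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b(?:\+?61|0)4\d{8}\b"), "[PHONE]"),        # AU mobile numbers
]

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Applicant born 01/02/1980, contact jane@example.com or 0412345678."))
# Applicant born [DATE], contact [EMAIL] or [PHONE].
```

Even with redaction, courts would still need to consider where the remaining data is stored, who can access it, and what contractual and regulatory protections apply.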

Access to justice

Principle 6: Access to justice

People should be able to access the justice system in a fair, timely and cost-effective way. The use of AI should support access to justice and minimise existing barriers or inequalities. Courts and tribunals should consider how any proposed use of AI may create new barriers to accessing justice, or any unintended consequences.

6.33 AI may address some barriers to justice which affect all court and tribunal users. AI could provide lower-cost legal advice and faster access to courts and tribunals through increased efficiency.

6.34 A range of factors can make it more difficult for some court users to access the justice system, including physical or mental impairment, geographical or social isolation, economic disadvantage, language or other cultural factors, and barriers related to discrimination and bias. AI can potentially reduce these barriers.

6.35 But AI technology can also heighten the risk of exclusion.[30] People experiencing marginalisation or disadvantage already face additional barriers to justice. The Australian Human Rights Commission’s Human Rights and Technology report noted that the potential for AI to cause harm ‘is disproportionately experienced by people who are already vulnerable and marginalised’.[31]

6.36 The use of AI may add to a ‘digital divide’ for people who do not have access to technology or have low digital literacy. For example, older Australians continue to be the ‘least likely age cohort to have access to a computer or internet due to physical (for example physical disability) and/or psychological barriers (for example lack of confidence)’.[32]

6.37 There is a digital divide between First Nations and non-First Nations people, and between capital cities and other parts of the country.[33] In a study of digital exclusion, those who are ‘highly excluded’ were found to be more likely than others to have a disability, live in public housing, not have completed secondary school, or be aged over 75.[34] The principle of access to justice requires balancing the gains in efficiency and accessibility that AI offers against the risk of undermining inclusion.

6.38 A related issue is how AI will impact the way that court users, victims and the public engage with the justice system. For some court users, the notion of justice includes the opportunity to be heard in open court and before a judge. There are inherently human factors in judicial decision-making, such as human experience, emotion, morality and creativity. Tania Sourdin and Richard Cornes consider the unconscious reasoning process of a human judge and note that an AI judge is unlikely to replicate this process.[35] Losing these elements may ‘fundamentally change what justice looks like’.[36]

Efficiency

Principle 7: Efficiency

The adoption of AI systems by courts and tribunals should contribute to the overall efficiency of the justice system.

Timely and cost-effective court services are a critical feature of a fair justice system.

Courts and tribunals should consider and adopt AI systems which are demonstrated to save time and reduce the cost of court and tribunal procedures.

6.39 When appropriately used, AI tools can save significant time, reduce costs and simplify proceedings.

6.40 Potential benefits of AI systems to deliver efficiencies for courts and tribunals are discussed in Part A.

Human oversight and monitoring

Principle 8: Human oversight and monitoring

Human oversight should be retained as a check on AI systems to address risks relating to bias, reliability and accuracy. Oversight is critical if courts and tribunals are to ensure fairness, natural justice, impartiality, transparency and accuracy. The level of oversight will vary depending on the use and risks of AI in specific contexts. Courts and tribunals should have appropriate governance arrangements. AI systems and outputs should be evaluated and tested before use and monitored after implementation, with ongoing assessment.

6.41 Human oversight is an important mechanism for addressing bias and accuracy, as well as maintaining public trust. Although connected to accountability, human oversight is critical in its own right in the context of courts and tribunals. Justice Perry has commented that ‘proper verification and audit mechanisms need to be integrated into the systems from the outset, and appropriate mechanisms put in place for review in the individual case by humans.’[37] Human oversight can take the form of a human ‘in the loop’, where a human is involved in selecting and guiding inputs, or a human ‘on the loop’, where humans are involved at the final stage by overseeing or correcting AI predictions or decisions.[38] The level of human oversight required will vary depending on the context. The EU AI Act classifies AI tools used in the administration of justice as high risk, including ‘tools intended to assist judicial authorities to conduct research and interpret and apply the law’. The Act requires high-risk tools to have human oversight and imposes a range of other regulatory obligations.
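
The ‘in the loop’ and ‘on the loop’ patterns can be illustrated schematically, as in the sketch below. The workflow, function names and case identifiers are hypothetical; the point is the structural difference between human approval before an AI output takes effect and human review after it does.

```python
from dataclasses import dataclass

# Hypothetical workflow only: these stubs stand in for a real AI tool and
# real court processes, to show the two oversight patterns structurally.

@dataclass
class Suggestion:
    case_id: str
    text: str

def ai_suggest(case_id: str) -> Suggestion:
    # Stand-in for an AI tool; a real system would call a model here.
    return Suggestion(case_id, "proposed listing date: 2025-03-01")

def human_approves(suggestion: Suggestion) -> bool:
    # Stand-in for a registrar reviewing the suggestion before it applies.
    return True

def apply_decision(suggestion: Suggestion) -> None:
    print(f"[{suggestion.case_id}] applied: {suggestion.text}")

def queue_for_human_review(suggestion: Suggestion) -> None:
    print(f"[{suggestion.case_id}] queued for after-the-fact review")

# Human 'in the loop': nothing takes effect without prior human approval.
def in_the_loop(case_id: str) -> None:
    suggestion = ai_suggest(case_id)
    if human_approves(suggestion):
        apply_decision(suggestion)

# Human 'on the loop': the AI output takes effect first; a human audits
# afterwards and can correct or reverse it.
def on_the_loop(case_id: str) -> None:
    suggestion = ai_suggest(case_id)
    apply_decision(suggestion)
    queue_for_human_review(suggestion)

in_the_loop("VCAT-0001")
on_the_loop("VCAT-0002")
```

Under the first pattern oversight is preventative; under the second it is corrective, which may be acceptable only for lower-risk uses.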

Questions

11. Are the principles listed in this chapter appropriate to guide the use of AI in Victorian courts and tribunals? What other principles might be considered?

12. Are principles sufficient, or are guidelines or other regulatory responses also required?

13. What regulatory tools, including guidelines, could be used to implement these high-level principles in Victoria’s courts and tribunals?

14. How can the use of AI by courts and tribunals be regulated without interfering with courts’ independence, and what risks should be considered?

15. Is it appropriate to have varying levels of transparency and disclosure depending on the use of AI by courts and tribunals? (For example, use by administrative staff compared with judicial officers.)

16. Who should be able to contest an AI decision, and when? Is the capacity to contest necessary for decisions made by court administration staff, or only judicial decisions? Consider how courts and tribunals can ensure sufficient information is available to enable decisions to be contested.


  1. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023). This was a joint project of the Australasian Institute of Judicial Administration and UNSW to prepare a guide for judges, tribunal members and court administrators on AI in the courtroom.

  2. Commentary on the Bangalore Principles of Judicial Conduct (Report, United Nations Office on Drugs and Crime, September 2007) [52] <https://www.unodc.org/conig/uploads/documents/publications/Otherpublications/Commentry_on_the_Bangalore_principles_of_Judicial_Conduct.pdf>.

  3. Australian Law Reform Commission, Without Fear or Favour: Judicial Impartiality and the Law on Bias (ALRC Final Report No 138, December 2021) 39 <https://www.alrc.gov.au/wp-content/uploads/2022/08/ALRC-Judicial-Impartiality-138-Final-Report.pdf> citing The Hon Chief Justice M Gleeson AC, ‘The Right to an Independent Judiciary’ (Speech, 14th Commonwealth Law Conference, September 2005).

  4. Charter of Human Rights and Responsibilities Act 2006 (Vic) s 24.

  5. Law Council of Australia, The Justice Project (Final Report, August 2018) 48 <https://lawcouncil.au/files/web-pdf/Justice%20Project/Final%20Report/Justice%20Project%20_%20Final%20Report%20in%20full.pdf>.

  6. Australasian Institute of Judicial Administration (AIJA), Guide to Judicial Conduct, Third Edition (Revised) (Report, December 2023) 7 <https://aija.org.au/wp-content/uploads/2024/04/Judicial-Conduct-guide_revised-Dec-2023-formatting-edits-applied.pdf>.

  7. John Fairfax Publications Pty Ltd v District Court (NSW) [2004] NSWCA 324; (2004) 61 NSWLR 344, [18].

  8. Open Courts Act 2013 (Vic) s 1(aa)(i)–(ii).

  9. Frank Vincent, Open Courts Act Review (Report, September 2017) [31] <https://files.justice.vic.gov.au/2021-11/Open%20Courts%20Act%20Review%20-%20March%202018.pdf>.

  10. Ibid [80].

  11. Wainohu v New South Wales [2011] HCA 24; (2011) 243 CLR 181, 206 [39], 208 [54] (French CJ and Kiefel J).

  12. The Hon Chief Justice Murray Gleeson, ‘Public Confidence in the Judiciary’ (2002) 76 Australian Law Journal 558, 558.

  13. Ibid.

  14. International Finance Trust Co Ltd v New South Wales Crime Commission [2009] HCA 49; (2009) 240 CLR 319, 354 [54] (French CJ).

  15. Ibid.

  16. Civil Procedure Act 2010 (Vic) s 1(c).

  17. Law Commission of Ontario, Regulating AI: Critical Issues and Choices (LCO Issue Paper, April 2021) 25.

  18. Jennifer Cobbe, Michael Veale and Jatinder Singh, ‘Understanding Accountability in Algorithmic Supply Chains’ [2023] FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency 1186 <https://dl.acm.org/doi/10.1145/3593013.3594073>.

  19. Department of Industry, Science and Resources (Cth) and National Artificial Intelligence Centre, Voluntary AI Safety Standard (Report, August 2024) 13.

  20. Ibid 25–26.

  21. Australian Human Rights Commission, Human Rights and Technology (Final Report, 2021) 61.

  22. Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 246.

  23. Ibid.

  24. Ibid.

  25. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 46.

  26. Ibid 48.

  27. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 29.

  28. Trivago NV v Australian Competition and Consumer Commission [2020] FCAFC 185; (2020) 384 ALR 496.

  29. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 54.

  30. Australian Human Rights Commission, Human Rights and Technology (Final Report, 2021) 137.

  31. Ibid 45.

  32. Charlene H Chu et al, ‘Digital Ageism: Challenges and Opportunities in Artificial Intelligence for Older Adults’ (2022) 62(7) The Gerontologist 947, 949 <https://academic.oup.com/gerontologist/article/62/7/947/6511948>.

  33. Julian Thomas et al, Measuring Australia’s Digital Divide: Australian Digital Inclusion Index 2023 (Report, ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University, Swinburne University of Technology, and Telstra, 2023) 9–10 <https://www.digitalinclusionindex.org.au/wp-content/uploads/2023/07/ADII-2023-Summary_Report_Final-1.pdf>.

  34. Ibid 10.

  35. Tania Sourdin and Richard Cornes, ‘Do Judges Need to Be Human? The Implications of Technology for Responsive Judging’ in The Responsive Judge: International Perspectives (Springer, 2018) 87, 104; Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 215.

  36. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration Incorporated, December 2023) 56.

  37. Justice Melissa Perry, ‘iDecide: Digital Pathways to Decision’ (Conference Paper, 2019 CPD Immigration Law Conference, 21 March 2019) 7 <https://www.fedcourt.gov.au/digital-law-library/judges-speeches/justice-perry/perry-j-20190321>.

  38. Tania Sourdin, Judges, Technology and Artificial Intelligence: The Artificial Judge (Edward Elgar Publishing, 2021) 247.