Artificial Intelligence in Victoria’s Courts and Tribunals: Consultation Paper

Glossary

Artificial intelligence (AI)

AI is ‘a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment’.[1]

AI system (vs AI model)

‘In short, a system is a broader concept than a model … An AI system comprises various components, including, in addition to the model or models, elements such as interfaces, sensors, conventional software, etc … As any other possible component in the context of the AI value chain, the AI model can be provided by actors other than the provider of the final AI system, who determines the purpose for which the AI system can be used.’[2]

Automated decision-making

Automated decision-making is ‘the application of automated systems in any part of the decision-making process … Automated systems range from traditional non-technological rules-based systems to specialised technological systems which use automated tools to predict and deliberate … Although automated decision-making may in some instances use AI technologies, in other cases it will not’.[3]

Black box

‘An AI system where the data inputted is known, and the decisions made from that data are known, but the way in which the data was used to make the decisions is not understood by humans.’[4]

Chatbot

‘Chatbots are popular applications of Generative AI and large language models. A chatbot is a computer program that interacts with humans through natural language conversations. Some chatbots use large language models to generate content according to user inputs.’[5]

Computer vision

Computer vision is the ‘capability of a functional unit to acquire, process and interpret data representing images or video’.[6]

Deepfake

‘A deepfake is a digital photo, video or sound file of a real person that has been edited to create an extremely realistic but false depiction of them doing or saying something that they did not actually do or say.’[7]

Deep learning

Deep learning is a ‘subset of machine learning’ and an ‘artificial intelligence approach to creating rich hierarchical representations through the training of neural networks with many hidden layers’.[8]

Ex ante regulation (vs ex post)

‘By requiring quality control measures before AI is deployed, an ex ante approach would often mitigate and sometimes entirely prevent injuries that AI causes or contributes to. Licensing is an important tool of ex ante regulation, and should be applied in many high-risk domains of AI.’[9]

Expert system

‘An expert system is an AI system that encapsulates knowledge provided by a human expert in a specific domain to infer solutions to problems. An expert system consists of a knowledge base, an inference engine and a user interface. The knowledge base stores declarative knowledge of a specific domain, which encompasses both factual and heuristic information.’[10]

Fine-tuning

‘Fine-tuning is a term borrowed from the world of engineering, meaning to make small adjustments to improve performance. In the context of AI, fine-tuning refers to a similar process: refining a pre-trained model to enhance its accuracy and efficacy, particularly for a specific task or dataset.’[11]

‘The difference between retrieval augmented generation and fine-tuning is that retrieval augmented generation augments a natural language processing model by connecting it to an organization’s proprietary database, while fine-tuning optimizes deep learning models for domain-specific tasks. Retrieval augmented generation and fine-tuning have the same intended outcome: enhancing a model’s performance to maximize value for the enterprise that uses it.’[12]

Generative AI

‘Generative AI or GenAI are both short for generative artificial intelligence. These are software systems that create content as text, images, music, audio and videos based on a user’s “prompts”.’[13]

Hallucination

‘Hallucination refers to AI models making up facts to fit a prompt’s intent. When a large language model processes a prompt, it searches for statistically appropriate words, not necessarily the most accurate answer. An AI system does not “understand” anything, it only recognises the most statistically likely answer. That means an answer might sound convincing but have no basis in fact. This creates significant risks for organisations that rely on chatbots to give advice about products or services because that advice might not be accurate.’[14]

Large language model

‘Large language models are data transformation systems. They are trained with large numbers of parameters, which are numerical values that developers adjust to shape the inputs and outputs of AI models. When a user inputs a prompt, the model generates text content in response.’[15]

Machine learning

‘Machine learning (sometimes seen as ML) is a set of techniques for creating algorithms so that computational systems can learn from data.’[16]

Machine learning algorithm

A machine learning algorithm is ‘a set of rules or processes used by an AI system to conduct tasks—most often to discover new data insights and patterns, or to predict output values from a given set of input variables’.[17]

Narrow AI or Artificial Narrow Intelligence

‘A “narrow AI” system is able to solve defined tasks to address a specific problem (possibly much better than humans would do). A “general AI” system addresses a broad range of tasks with a satisfactory level of performance. Current AI systems are considered as “narrow”. It is not yet known whether “general” AI systems will be technically feasible in the future.’[18]

Natural language processing

‘Natural language processing is information processing based upon natural language understanding and natural language generation. This encompasses natural language analysis and generation, with text or speech. By using natural language processing capabilities, computers can analyse text that is written in human language and identify concepts, entities, keywords, relations, emotions, sentiments and other characteristics, allowing users to draw insights from content. With those capabilities, computers can also generate text or speech to communicate with users.’[19]

Online alternative dispute resolution

Online alternative dispute resolution is an example of online dispute resolution. It represents ‘dispute resolution outside the courts, which originally emerged in the mid-1990s as an adjunct to various forms of alternative dispute resolution (ADR) and as a response to disputes arising from the expansion of ecommerce. As a result, it focussed on using technology to resolve customer complaints and sought to support negotiation, mediation and arbitration’.[20]

Online dispute resolution

‘Online dispute resolution involves the use of information and communications technology to help parties resolve disputes. Within a court and tribunal system, online dispute resolution is a digital platform that allows people to progress through dispute resolution for low-value disputes, from the commencement of a claim to final determination, entirely online.’[21]

Open domain (vs closed domain)

Open domain ‘question answering is a task that answers factoid questions using a large collection of documents.’[22]

‘A closed domain system, also known as domain-specific, focuses on a particular set of topics and has limited responses based on the business problem … On the other hand, an open domain system is expected to understand any topic and return relevant responses.’[23]

Open-source (vs closed source)

‘Open-source LLMs (Large Language Models) have publicly accessible source code and underlying architecture, allowing developers, deployers, researchers and enterprises to use, modify and distribute them freely or subject to limited restrictions … Closed-source large language models have proprietary underlying source code and architecture. They are accessible only under specific terms defined by their developers.’[24]

Retrieval augmented generation

Retrieval augmented generation ‘enhances LLMs (Large Language Models) by retrieving relevant document chunks from external knowledge base through semantic similarity calculation. By referencing external knowledge, retrieval augmented generation effectively reduces the problem of generating factually incorrect content. Its integration into LLMs has resulted in widespread adoption, establishing retrieval augmented generation as a key technology in advancing chatbots and enhancing the suitability of large language models for real-world applications’.[25]
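
The retrieval step described above can be illustrated with a short sketch. Real systems score document chunks by embedding-based semantic similarity; the word-overlap measure, the hypothetical knowledge-base sentences and the function names below are illustrative stand-ins only.

```python
import re

# Toy sketch of the "retrieval" step in retrieval augmented generation.
# Production systems use embedding-based semantic similarity; simple
# word overlap (Jaccard similarity) is used here only as a stand-in.

def word_overlap(a, b):
    """Fraction of shared words between two texts (0.0 to 1.0)."""
    wa = set(re.findall(r"[a-z]+", a.lower()))
    wb = set(re.findall(r"[a-z]+", b.lower()))
    return len(wa & wb) / len(wa | wb)

def retrieve(query, chunks, k=1):
    """Return the k knowledge-base chunks most similar to the query."""
    return sorted(chunks, key=lambda c: word_overlap(query, c), reverse=True)[:k]

# Hypothetical knowledge base of document chunks.
knowledge_base = [
    "Filing fees may be waived in cases of financial hardship.",
    "Hearings are listed in order of filing date.",
]

context = retrieve("Can my filing fee be waived?", knowledge_base)
# The retrieved chunk(s) would then be appended to the model's prompt,
# grounding the generated answer in the external knowledge base.
```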

Rule-based system

‘A rule based system is a type of software system that uses rules as the basis for making decisions or solving problems. These rules are defined in a format that the system can interpret and process, typically in the form of “if-then” statements. Rule-based systems are a branch of artificial intelligence and are used in various applications, from expert systems to data processing and business automation.’[26]
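
The ‘if-then’ structure described above can be sketched in a few lines of code. The rules and facts below are hypothetical illustrations, not drawn from any real system.

```python
# A minimal rule-based system: each rule pairs a condition ("if") with
# a conclusion ("then"). The inference loop applies the rules to the
# known facts until no new facts can be derived (forward chaining).

def run_rules(facts, rules):
    """Repeatedly apply if-then rules until the fact set stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            # Fire the rule if all its conditions are established facts.
            if condition <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules: "if these facts hold, then conclude this".
rules = [
    ({"claim_filed", "fee_paid"}, "claim_accepted"),
    ({"claim_accepted"}, "hearing_scheduled"),
]

result = run_rules({"claim_filed", "fee_paid"}, rules)
```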

Rules as code

‘Rules as code is a public sector innovation, which involves a preparation of a machine-consumable version of some legislation. The term ‘machine-consumable’ implies that the rules are written in a way that they can be processed directly as rules by a computer. This can be done using a computer coding language or by using one of the platforms specifically built for this purpose.’[27]
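
A machine-consumable rule of the kind described above might look like the following. The eligibility provision, threshold figure and function name are invented for illustration; they do not correspond to any real legislation.

```python
# "Rules as code": a hypothetical eligibility provision expressed as a
# machine-consumable function rather than prose. Illustrative only --
# this is not a real legislative rule.

INCOME_THRESHOLD = 30_000  # assumed figure for illustration

def eligible_for_fee_waiver(annual_income, holds_concession_card):
    """Hypothetical rule: fees are waived if the applicant's income is
    below the threshold, or if they hold a concession card."""
    return annual_income < INCOME_THRESHOLD or holds_concession_card
```

Because the rule is expressed as code, the same version of the rule can be applied consistently by a computer, tested against worked examples, and compared against the authoritative legislative text.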

Self-represented litigant

A self-represented litigant is ‘anyone who is attempting to resolve any component of a legal problem for which they do not have legal counsel, whether or not the matter actually goes before a court or tribunal’.[28]

Speech recognition

Speech recognition is ‘conversion, by a functional unit, of a speech signal to a representation of the content of the speech. Digitized speech is a form of sequential data, so that techniques that can handle data associated with a time interval can be used to process phonemes from speech’.[29]

Supervised learning (vs unsupervised learning)

‘Supervised machine learning is defined as “machine learning that makes use of labelled data during training”. In this case, machine learning models are trained with training data that include a known or determined output or target variable (the label)… Supervised learning can be used for classification and regression tasks, as well as for more complex tasks pertaining to structured prediction.’[30]
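
The role of labelled training data can be seen in a deliberately small example. The classifier below (1-nearest-neighbour) and its data are illustrative assumptions, not a description of any system discussed in this paper.

```python
# Supervised learning in miniature: a 1-nearest-neighbour classifier.
# Each training example pairs an input feature with a known label --
# the "labelled data" that defines supervised learning.

def nearest_neighbour(train, x):
    """Predict the label of the training example whose feature value
    is closest to the input x."""
    closest = min(train, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Hypothetical labelled training data: (document length in pages, label).
train = [(2, "short"), (3, "short"), (40, "long"), (55, "long")]

prediction = nearest_neighbour(train, 50)
```

An unsupervised method, by contrast, would receive only the page counts and have to discover the grouping itself, with no labels to learn from.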

Technology assisted review

Technology assisted review is a ‘process for prioritizing or coding a collection of documents using a computerized system that harnesses human judgements of one or more subject matter expert(s) on a smaller set of documents and then extrapolates those judgements to the remaining document collection’.[31]


  1. OECD, Recommendation of the Council on Artificial Intelligence (Report No OECD/LEGAL/0449, 2024) 7.

  2. David Fernández‑Llorca et al, ‘An Interdisciplinary Account of the Terminological Choices by EU Policymakers Ahead of the Final Agreement on the AI Act’ [2024] Artificial Intelligence and Law 7–8 <https://doi.org/10.1007/s10506-024-09412-y>.

  3. Definition based on the International Organisation for Standardisation (ISO) definition and adopted in: Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Discussion Paper (Discussion Paper, June 2023) 5–6.

  4. Toby Walsh et al, Closer to the Machine: Technical, Social, and Legal Aspects of AI (Report, Office of the Victorian Information Commissioner, August 2019) 3.

  5. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) <https://apo.org.au/node/327400>.

  6. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ 16 [3.7.1] <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.

  7. eSafety Commissioner, ‘Deepfake Trends and Challenges – Position Statement’, eSafety Commissioner (Web Page, 19 August 2024) <https://www.esafety.gov.au/industry/tech-trends-and-challenges/deepfakes>.

  8. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ 10 [3.4.4] <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.

  9. Gianclaudio Malgieri and Frank Pasquale, ‘Licensing High-Risk Artificial Intelligence: Toward Ex Ante Justification for a Disruptive Technology’ (2024) 52 Computer Law & Security Review 105899, 1, 6.

  10. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ 46 [8.5.2] <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.

  11. Neural Ninja, ‘The Art of Fine-Tuning AI Models: A Beginner’s Guide’, Let’s Data Science (Web Page, 29 January 2024) <https://letsdatascience.com/the-art-of-fine-tuning-ai-models/>.

  12. Ivan Belcic and Cole Stryker, ‘RAG vs. Fine-Tuning’, IBM Think (Web Page, 14 August 2024) <https://www.ibm.com/think/topics/rag-vs-fine-tuning>.

  13. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 2 <https://apo.org.au/node/327400>.

  14. Ibid 28.

  15. Ibid 5.

  16. See also Standards Australia Limited, ‘AS ISO/IEC 23053 – Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)’ <https://www.iso.org/standard/74438.html>; Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 4 <https://apo.org.au/node/327400>.

  17. IBM, ‘What Is a Machine Learning Algorithm?’, IBM Think (Web Page, 3 November 2023) <https://www.ibm.com/topics/machine-learning-algorithms>.

  18. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ 17 [5.2] <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.

  19. Ibid 51 [9.2.1].

  20. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 22.

  21. Peter Cashman and Eliza Ginnivan, ‘Digital Justice: Online Resolution of Minor Civil Disputes and the Use of Digital Technology in Complex Litigation and Class Actions’ (2019) 19 Macquarie Law Journal 39, 41.

  22. Vladimir Karpukhin et al, ‘Dense Passage Retrieval for Open-Domain Question Answering’ in Bonnie Webber et al (eds), Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (Conference Paper, EMNLP 2020, November 2020) 6769 <https://aclanthology.org/2020.emnlp-main.550>.

  23. Team Symbl, ‘Conversation Understanding: Open Domain vs. Closed Domain’, Symbl.Ai Blog (Web Page, 10 December 2020) <https://symbl.ai/developers/blog/conversation-understanding-open-domain-vs-closed-domain/>.

  24. Fan Yang, Jake Goldenfein and Kathy Nickels, GenAI Concepts: Technical, Operational and Regulatory Terms and Concepts for Generative Artificial Intelligence (GenAI) (Report, ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Office of the Victorian Information Commissioner (OVIC), 2024) 9 <https://apo.org.au/node/327400>.

  25. Yunfan Gao et al, ‘Retrieval-Augmented Generation for Large Language Models: A Survey’ (arXiv, 27 March 2024) 1 <https://arxiv.org/abs/2312.10997>.

  26. ‘Rule Based System’, DeepAI (Web Page, 17 May 2019) <https://deepai.org/machine-learning-glossary-and-terms/rule-based-system>.

  27. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 11.

  28. Elizabeth Richardson, Tania Sourdin and Nerida Wallace, Self-Represented Litigants: Gathering Useful Information, Final Report – June 2012 (Report, Australian Centre for Justice Innovation, Monash University, October 2012) 4 [1.15] <https://research.monash.edu/en/publications/self-represented-litigants-gathering-useful-information-final-rep>.

  29. Standards Australia Limited, ‘AS ISO/IEC 22989:2023 Information Technology – Artificial Intelligence – Artificial Intelligence Concepts and Terminology’ 53 [9.2.2.4] <https://www.standards.org.au/standards-catalogue/standard-details?designation=as-iso-iec-22989-202>.

  30. Ibid 21 [5.11.1].

  31. Felicity Bell et al, AI Decision-Making and the Courts: A Guide for Judges, Tribunal Members and Court Administrators (Report, Australasian Institute of Judicial Administration, December 2023) 19.