Our AI Project
We're building a language processor that will overcome barriers between help-seekers and legal help providers in Australia.
Across 2020 and early 2021, we worked to produce a proof-of-concept natural language processing (NLP) model using annotated language samples from legal help seekers.
Having launched an online intake system in 2018, we now have thousands of samples of natural language text describing legal problems.
A key challenge in natural language-based AI projects is generating a training data set. In our project, legal professionals must annotate the language samples – a potentially expensive exercise. To achieve expert annotation at scale, we've partnered with pro bono lawyers.
Our in-house Digital Innovation team have built a Training AI Game (TAG) that presents language samples to participating pro bono lawyers and asks them to annotate the samples in several ways and multiple times to ensure accuracy. The samples can then be exported in an annotated format and provided to the University of Melbourne team helping to train our AI model.
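Collecting several annotations per sample, as the TAG tool does, means the labels must be collapsed into one per sample before training. A common approach is majority voting; the sketch below is illustrative only, since the article does not describe the TAG tool's actual export format or aggregation rules, and the label names are hypothetical.

```python
from collections import Counter

def aggregate_labels(annotations, min_votes=3):
    """Collapse repeated annotations of one sample into a single label.

    `annotations` is a list of labels assigned by different lawyers to the
    same sample. A sample is kept only when it has enough votes and a
    strict majority; otherwise it is flagged for further review.
    (Illustrative only -- not the TAG tool's real aggregation logic.)
    """
    if len(annotations) < min_votes:
        return None  # not yet annotated enough times
    counts = Counter(annotations)
    label, top_count = counts.most_common(1)[0]
    if top_count > len(annotations) / 2:  # strict majority
        return label
    return None  # no consensus; needs human review

# Example: three lawyers annotated the same help-seeker text
print(aggregate_labels(["tenancy", "tenancy", "credit/debt"]))  # tenancy
print(aggregate_labels(["tenancy", "fines", "credit/debt"]))    # None
```

Requiring a strict majority (rather than a plurality) trades away some samples in exchange for higher-confidence training labels.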
By September 2021, we had onboarded 245 lawyers who collectively made over 90,000 annotations to 9,000+ natural language samples uploaded to our TAG tool.
To deliver this ambitious project, we’ve partnered with Australian academics at the University of Melbourne School of Computing Science who specialise in artificial intelligence. Professor Tim Baldwin, Director of the ARC Centre in Cognitive Computing for Medical Technologies, has been working closely with us to plan and build the AI model.
Based on the success of the project to date, and its potential to deliver further significant results both for Justice Connect and for research in the field of natural language processing, we have received major funding through an Australian Research Council Linkage Grant.
Most AI-driven text classification models are problematically biased, performing substantially worse for under-represented or socially disadvantaged communities. Often, language processors are built using samples from majority groups only. Our project has been intentionally designed to address the issues experienced by people from marginalised communities by capturing the voices of different groups across the country.
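One standard way to surface the kind of disparity described above is to evaluate a classifier separately for each community group rather than reporting a single overall score. The sketch below assumes a hypothetical evaluation data format (group, true label, predicted label); it is not the project's actual evaluation pipeline.

```python
def accuracy_by_group(samples):
    """Compute classifier accuracy separately for each demographic group.

    `samples` is a list of (group, true_label, predicted_label) tuples.
    Disaggregated accuracy exposes performance gaps that a single overall
    number would hide. (Hypothetical data format for illustration.)
    """
    totals, correct = {}, {}
    for group, true_label, predicted in samples:
        totals[group] = totals.get(group, 0) + 1
        if predicted == true_label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Toy example: the model is right every time for one group,
# but only half the time for the other.
results = accuracy_by_group([
    ("majority", "tenancy", "tenancy"),
    ("majority", "fines", "fines"),
    ("under-represented", "tenancy", "fines"),
    ("under-represented", "fines", "fines"),
])
print(results)  # {'majority': 1.0, 'under-represented': 0.5}
```

A large gap between groups on a held-out set is a signal to collect more representative samples, which is exactly what sourcing language from different communities across the country aims to address.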
This project was designed in response to the findings from a range of evaluations we have undertaken internally and externally with our sector peers.
We have actively incorporated recent ethical AI and inclusive technology best practice principles released by the Australian Human Rights Commission. The Human Rights Commission’s principles are focused on eliminating bias in decision-making AI algorithms and ensuring that AI includes human rights principles by design.
While our model is a natural language processing classifier rather than a decision-making algorithm, the risks identified by the Human Rights Commission and the recommended approaches to ameliorate those risks are equally relevant to our project.
The robot-led solution helping marginalised communities find legal help, Pro Bono News, 3 March 2022
AI language tool to improve equal access to legal services, 3CR, 23 February 2022
Real life questions to shape AI-powered legal intake tool, Law Institute Journal, 15 February 2022
NSW government invests in AI to solve legal problems, Lawyers Weekly, 20 July 2021
NSW issues $250K AI grant to improve access to justice, Gov Tech Review, 20 July 2021
Artificial intelligence to help the most vulnerable, LSJ, 19 July 2021
Innovative solutions for fairer justice, Mirage News, 19 July 2021