Our artificial intelligence (AI) project

We’re building a language processor that will overcome barriers between help-seekers and legal help providers in Australia.

Why we’re building AI-based legal technology

Research shows that when people search for legal help online, they often struggle to articulate their legal problem correctly. This poses a major challenge when connecting them with the right information and services, and helping them access justice.

This issue is growing as demand for legal help increases each year, particularly as a result of the impacts of COVID-19.

Our AI project aims to make it easier to connect people with legal help. It also extends beyond Justice Connect’s own services, strengthening the sector’s ability to develop information and connect people with the right support.

Stretched funding is placing the community legal sector under pressure – lawyers cannot provide efficient and effective legal help while having to manually triage every single legal problem.

Investing in tools and resources is wasted unless we build them to reach and serve the people who need them.

Justice Connect, alongside our funding partners and AI experts, is tackling this challenge.

Instead of expecting people to master technical legal language to explain their problem, we are building natural language processing AI to accurately diagnose legal issues and empower people to find the right legal services quickly and easily.

 

The result of this project will be an AI-driven system that can take the everyday language of people needing legal help and correctly diagnose their legal issues. It will be available at no cost and can be implemented by legal service organisations across Australia.
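As a rough illustration of how a natural language classifier can map everyday descriptions to legal problem types, here is a minimal naive Bayes sketch in Python. The categories and training phrases are invented for illustration – they are not Justice Connect’s taxonomy, data, or model, which is being built with the University of Melbourne team.

```python
import math
from collections import Counter, defaultdict

# Illustrative only: these category labels and training phrases are
# invented examples, not real Justice Connect data or categories.
TRAINING = [
    ("my landlord wants to kick me out", "housing"),
    ("the real estate agent kept my bond", "housing"),
    ("my boss has not paid me for weeks", "employment"),
    ("i was fired without any notice", "employment"),
    ("i owe money on my credit card and cant pay", "debt"),
    ("a debt collector keeps calling me", "debt"),
]

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Multinomial naive Bayes with add-one smoothing."""

    def fit(self, samples):
        self.class_counts = Counter(label for _, label in samples)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in samples:
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            # Log prior plus smoothed log likelihood of each token.
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in tokenize(text):
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

model = NaiveBayes().fit(TRAINING)
print(model.predict("my landlord is trying to evict me"))  # → housing
```

A real system would be trained on thousands of annotated samples and a far richer feature representation, but the core idea – scoring everyday wording against learned problem categories – is the same.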

We hope that by building and sharing this technology, our sector will be better placed to understand and meet people’s legal needs.

How we’re doing it: inclusive design

Many AI-driven text classification models are problematically biased, performing substantially worse for under-represented or socially disadvantaged communities, because language processors are often built using samples only from majority groups. Our project has been intentionally designed to address the issues experienced by people from marginalised communities by capturing the voices of different groups across the country.

This project was designed in response to the findings from a range of evaluations we have undertaken internally and externally with our sector peers.

We have actively incorporated recent ethical AI and inclusive technology best-practice principles released by the Australian Human Rights Commission. These principles focus on eliminating bias in decision-making AI algorithms and ensuring that AI embeds human rights by design.

While our model is a natural language processing classifier rather than a decision-making algorithm, the risks identified by the Human Rights Commission and the recommended approaches to ameliorate those risks are equally relevant to our project.

We’re working with different groups to train our model.

Building an unbiased AI model takes more effort, but it is particularly important for the legal services sector. We know that the people who most need legal help are also the groups most at risk from biased digital systems that don’t perform as well for them.
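One way to surface the kind of bias described above is to compare a model’s accuracy separately for each community group rather than reporting a single overall number. A minimal sketch, using invented placeholder records rather than any real evaluation data:

```python
# Sketch: compare model accuracy across groups to surface performance
# gaps. The records below are invented placeholders, not real results.
records = [
    # (group, true_label, predicted_label)
    ("group_a", "housing", "housing"),
    ("group_a", "debt", "debt"),
    ("group_a", "employment", "employment"),
    ("group_a", "housing", "debt"),
    ("group_b", "housing", "debt"),
    ("group_b", "debt", "debt"),
    ("group_b", "employment", "housing"),
    ("group_b", "housing", "housing"),
]

def per_group_accuracy(records):
    totals, correct = {}, {}
    for group, truth, prediction in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == prediction)
    return {group: correct[group] / totals[group] for group in totals}

accuracy = per_group_accuracy(records)
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy, gap)  # a large gap signals the model under-serves a group
```

A large accuracy gap between groups is a signal to collect more samples from the under-served group and retrain – which is exactly why diverse language samples matter.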

We are actively seeking natural language samples from people from a diverse range of groups including:

  • Older people
  • People with disability
  • People with mental health conditions and chronic illnesses
  • First Nations people
  • People without tertiary qualifications
  • People from culturally and linguistically diverse communities
  • LGBTQIA+ people
  • Recently arrived migrants

Contribute to our project by uploading a sample

Upload a sample on behalf of someone else

If you would like to support our project by contributing an unedited sample of your client describing a legal problem they’ve experienced in their own words, please use this simple form to upload a text or sound file.

We will de-identify and anonymise the language samples before they are legally annotated. Read our privacy policy at justiceconnect.org.au/privacy.

Upload a sample

Please note that these samples do not qualify as an application for legal help.
To make a referral for someone to access legal help, please make an application.

Upload your own sample

If you would like to support our project by contributing an unedited sample of yourself describing a legal problem you’ve experienced in your own words, please take this simple survey.

After completing the survey you will be able to enter a weekly draw to win a $200.00 Coles voucher.

All samples will be de-identified and anonymised. Read our privacy policy at justiceconnect.org.au/privacy.

Take the survey

Please note that this sample does not qualify as an application for legal help.
If you need legal help, please make an application.

The most effective language samples are true and unedited transcripts. This research will help us understand the way people from diverse backgrounds use syntax, grammar, shorthand, and slang to describe their problems.
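The de-identification step mentioned above can be sketched as simple pattern-based redaction. This is an illustrative assumption, not Justice Connect’s actual pipeline – robust de-identification of legal text typically requires more sophisticated, entity-recognition-based approaches:

```python
import re

# Minimal de-identification sketch using regex redaction. The patterns
# and sample text are illustrative assumptions, not Justice Connect's
# real process; production pipelines need far more robust methods.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b04\d{2}[ -]?\d{3}[ -]?\d{3}\b"), "[PHONE]"),
    (re.compile(r"\b(?:Mr|Mrs|Ms|Dr)\.? [A-Z][a-z]+\b"), "[NAME]"),
]

def deidentify(text):
    """Replace matched personal details with placeholder tokens."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

sample = "Mr Smith emailed me at john@example.com and called 0412 345 678."
print(deidentify(sample))
# → [NAME] emailed me at [EMAIL] and called [PHONE].
```

Replacing details with typed placeholders (rather than deleting them) keeps the sentence structure intact, which matters because the project studies how people phrase their problems.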

Illustration showing a range of language samples of people explaining their legal issue in their own language.

Pro bono powered AI

Across 2020 and early 2021, we worked to produce a proof-of-concept natural language processing AI model using annotated language samples from legal help seekers.

Having launched an online intake system in 2018, we now have thousands of samples of natural language text describing legal problems.

A key challenge in natural language-based AI projects is generating a training data set. In our project, we need legal professionals to annotate language samples – a potentially expensive exercise. However, we’ve partnered with pro bono lawyers to achieve expert annotation at huge volumes.

Our in-house Digital Innovation team has built a Training AI Game (TAG) that presents language samples to participating pro bono lawyers and asks them to annotate each sample in several ways; every sample is annotated multiple times to ensure accuracy. The annotated samples can then be exported and provided to the University of Melbourne team helping to train our AI model.

By September 2021, we had onboarded 245 lawyers who collectively made over 90,000 annotations to 9,000+ natural language samples uploaded to our TAG tool.
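Collecting several annotations per sample, as TAG does, lets disagreements between annotators be resolved by aggregation before training. A minimal majority-vote sketch – the sample IDs, labels, and agreement threshold are invented for illustration:

```python
from collections import Counter

# Sketch: aggregate repeated annotations per sample by majority vote,
# keeping only labels that reach a minimum level of agreement.
# Sample IDs and labels are invented placeholders.
annotations = {
    "sample-001": ["housing", "housing", "debt"],
    "sample-002": ["employment", "employment", "employment"],
    "sample-003": ["debt", "housing", "employment"],  # no consensus
}

def consensus(labels, threshold=2 / 3):
    """Return the majority label, or None if agreement is too low."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) >= threshold else None

gold = {sid: consensus(labels) for sid, labels in annotations.items()}
print(gold)
# samples without consensus get None and can be re-annotated or reviewed
```

Samples that fail to reach consensus are the interesting ones: they can be routed back for extra annotation or expert review rather than silently training the model on a disputed label.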

Photograph showing two people looking at a computer screen that shows our in-house Training AI Game (TAG)

Our research partners

To deliver this ambitious project, we’ve partnered with Australian academics at the University of Melbourne School of Computing Science who specialise in artificial intelligence. Professor Tim Baldwin, Director of the ARC Centre in Cognitive Computing for Medical Technologies, has been working closely with us to plan and build the AI model.

Read more about our approach to digital innovation