Why inclusive digital design is key to our AI project

30 Sep 2021

The values of ethical, inclusive, and non-discriminatory technology have never been more important.

Justice Connect has spent over 6 years developing digital tools to increase access to legal support, making it easier for people to seek help and to be pointed to the right assistance. We know through extensive consumer research that a broad range of marginalised communities use technology to empower themselves and navigate the legal system. While digital technology is crucial for socially disadvantaged communities, we also know that without active engagement with these communities, the digital divide will continue to widen.

This is something we are tackling head-on with our AI project, launched in July 2021.

Why diagnosing legal issues matters, and how AI can help

Many people who recognise that their problem has a legal dimension do not know the technical legal terminology needed to accurately identify the help and information available to them. This often leaves them confused and discourages them from applying for legal help in the first place.

A language processor integrated into any intake tool or process will allow people to apply for help in their own words and have their problem accurately diagnosed.

Across 2019 and 2020 we facilitated a cross-sector project called Joining Up Justice with a range of academics and members of the legal services sector. Project members saw the clear potential of a language processor to increase access to justice for disproportionately impacted marginalised communities, who are often at greater risk of not finding legal assistance because their searches fail.

Our hope is to share this technology at no cost with other legal service organisations across Australia, cutting down the time it takes to triage enquiries for legal help while also serving as an additional assistance tool for volunteers and lawyers alike.

Training an inclusive algorithm to avoid biases

To build this technology, we’re taking examples of how people describe their legal problem, then matching their descriptions with the correct legal diagnoses (with the help of lawyers from our member firms). Our partners at the University of Melbourne will then take these de-identified samples and their matched legal diagnoses and train a digital algorithm.

This means that when someone describes a legal problem in a certain way – even if it’s not the way the legal system describes it – our algorithm will be able to help diagnose their legal issue.
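For readers curious about the mechanics, a minimal sketch of this kind of supervised text classifier might look like the following, assuming a standard scikit-learn pipeline. The sample descriptions and diagnosis labels here are illustrative only, not our actual data or categories.

# A minimal sketch of a text classifier that maps plain-language
# problem descriptions to legal diagnoses; the samples and labels
# below are hypothetical, not Justice Connect's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# De-identified descriptions paired with the diagnosis a lawyer
# assigned to each one.
samples = [
    "my landlord is kicking me out next week",
    "my boss hasn't paid me for two months",
    "a debt collector keeps calling me about an old loan",
    "I was fired after telling my manager I was pregnant",
]
diagnoses = [
    "tenancy - eviction",
    "employment - unpaid wages",
    "credit and debt",
    "employment - discrimination",
]

# Vectorise the raw text and fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(samples, diagnoses)

# The trained model can then suggest a diagnosis for a new, everyday
# description that never uses formal legal terminology.
print(model.predict(["my landlord says I have to leave the flat"]))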

An initial analysis of our draft model identified some areas of bias due to the nature and size of the user data collected. We know that biased technology leads to poorer outcomes for people from diverse communities, so we’re actively working to solve this problem by collecting and including language samples that truly reflect this diversity.
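As a simple illustration of how this kind of bias can show up in the data itself, the short sketch below counts how many training samples come from each community group, assuming each de-identified sample records that metadata. The field names and groups are illustrative assumptions, not our actual data model.

# A minimal sketch of checking a training set for group imbalance;
# the "group" field and sample records are hypothetical.
from collections import Counter

samples = [
    {"text": "my landlord is kicking me out", "group": "older people"},
    {"text": "I lost my job without any notice", "group": "recently arrived migrants"},
    {"text": "a debt collector keeps calling me", "group": "older people"},
]

# Sparsely represented groups signal where targeted outreach and
# further sample collection are needed.
group_counts = Counter(sample["group"] for sample in samples)
for group, count in group_counts.most_common():
    print(f"{group}: {count} samples")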

Collecting language samples that reflect Australia’s diversity

To meet the goal of our project and make sure legal service organisations can use this technology to its full advantage, we’re working to build a more inclusive algorithm. Our outreach strategy involves connecting with our networks and the broader community to help capture the voices of different groups across the country.

We are currently conducting extensive outreach to help collect accurate language samples from a range of groups including:

  • Older people,
  • People with disability,
  • People with mental health conditions or chronic illnesses,
  • First Nations people,
  • People without tertiary qualifications,
  • People from culturally and linguistically diverse communities,
  • LGBTIQA+ people, and
  • Recently arrived migrants.

Once we have enriched the algorithm with these samples, we will be able to test its performance with these priority groups to ensure the model performs as well as it can for the people who need it most.
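As a sketch of what that per-group testing could look like, the snippet below scores a trained model separately for each priority group, assuming held-out test samples are tagged with the group they came from. The helper function and data layout are illustrative assumptions, not our published evaluation method.

# A minimal sketch of per-group accuracy testing for a fitted model;
# the group tags and helper function are hypothetical.
from collections import defaultdict
from sklearn.metrics import accuracy_score

def accuracy_by_group(model, texts, true_labels, groups):
    """Report accuracy separately for each group so that performance
    gaps (i.e. bias) become visible rather than hidden in an average."""
    by_group = defaultdict(lambda: ([], []))
    for text, label, group in zip(texts, true_labels, groups):
        by_group[group][0].append(text)
        by_group[group][1].append(label)
    return {
        group: accuracy_score(labels, model.predict(group_texts))
        for group, (group_texts, labels) in by_group.items()
    }

# e.g. accuracy_by_group(model, test_texts, test_labels, test_groups)
# might reveal lower accuracy for one community, flagging where more
# language samples are needed.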

Read more about our AI project