Will AI replace judges? Ontario's Law Commission explores the question in a new report
Artificial intelligence is changing how many industries operate, including the justice system. (Hint: uLaw does this too, for small to mid-sized Canadian legal firms!)
Dubbed "Accountable AI", the Law Commission of Ontario recently released a report which disseminates the impact that AI has had on the Canadian Justice system.
Automated decision-making (ADM) systems are sometimes used to assist governments in reaching decisions, whether civil or administrative in nature.
The authors of this report speak favourably of the possible benefits of deferring to AI in some areas, provided the risks are mitigated: "Canadian governments have an opportunity to become leaders in successful AI deployment by applying hard-learned lessons from other jurisdictions."
The authors claim that AI can reduce discrimination in decision-making, provided the government can "ensure trustworthy and accountable" AI.
The judicial system has many choke points that contribute to enormous backlogs at almost every level. The Law Commission's report suggests that government-run artificial intelligence could significantly ease these backlogs, and it lays out how such systems should be deployed.
"There must be broad participation in the design, development and deployment of government AI systems. Unequal access to information and
participation in AI decision-making can worsen existing biases and inequality, result in ineffective public services, and damage trust in government. This participation must include technologists, policymakers, legal professionals and the communities who are likely to be most affected by this technology."
People already in positions of authority within the government might be happy to learn that the authors of this report posit "that it is possible that only the best resourced and most sophisticated litigants will be able to challenge many AI-based government decisions." The solution, according to the report's introduction, is to develop proactive initiatives aimed at groups such as low-income, marginalized, Indigenous and racialized communities.
In the Human Rights section, the report's authors concede that human rights compliance will be hard to achieve without addressing significant issues such as 'black box' AI technology. Without proper disclosure of the methods and underlying code, it would be impossible to genuinely trust decisions rendered by artificial intelligence.
There are serious moral and ethical questions about the legitimacy of AI in the justice system. The authors make this clear in the final pages of their report, where they openly ask: "Will machine generated “explainability” meet the legal standard of “reasonableness”? Is it possible to know whether the reasons generated by an AI system describe the actual justification for the decision?"
The full text of the LCO report is available from the Law Commission of Ontario.