Cindy Chen, a 1L law student at the University of Calgary and writing contributor for the Tech and Law Association, takes a look at the impact of artificial intelligence on civil liability in Canada.
Artificial Intelligence (AI) is a transformative technology that provides numerous benefits to our society. By using advanced data analysis algorithms, AI systems are now capable of semi-autonomous decision-making, creating practical benefits in sectors ranging from art to health care and supply chain management. The rapid advancement of AI technology is largely funded by private investment; in 2021 alone, research and development investment amounted to around 93.5 billion dollars.[1] Yet while more and more private companies are investing in this booming industry, the law governing it remains unclear. According to a recent Stanford study, the US proposed 131 bills to regulate the industry in 2021, but only 2% ultimately became law.[2]
On June 16, 2022, Bill C-27 received its first reading in Parliament, marking one of Canada’s first attempts to regulate parts of the AI industry.[3] Because Bill C-27 is still before Parliament, there is little clarity on remedies for damages sustained by private individuals using AI technology. With no legislation in place, injured individuals will likely have to rely on fault-based civil liability for a remedy. However, as in medical and toxic substance negligence cases, liability is difficult to establish because the unique characteristics of AI systems place an unfair evidentiary burden on injured plaintiffs. For example, when an AI system acts autonomously by analyzing multiple inputs, it becomes challenging for an injured plaintiff to pinpoint the developer whose fault caused the AI output that led to the damage.[4]
On September 28, 2022, the European Commission released a proposed directive outlining a new liability framework for AI-related civil claims to address this issue. Although the proposal is not binding law, it nevertheless offers a possible way around the evidentiary problem in AI-related claims.
Rebuttable Presumption of Causality
One of the most significant deviations from standard fault-based negligence analysis is the rebuttable presumption of causality. Unlike the ordinary “but for” test, the European Commission proposes a presumption of a causal link between the defendant’s fault and the output or failure produced by the AI system if the plaintiff can show that:
- the defendant’s behavior deviated from the standard of care laid out by relevant legislation;
- it is reasonably likely that the fault has influenced the output or failure of the AI system; and
- the output or failure of the AI system led to the damage.[5]
This presumption effectively lowers the evidentiary burden on the injured plaintiff, who no longer has to prove direct causation between the system’s input and the ultimate injury. The onus of rebutting the presumption instead falls on the defendant, who likely has greater technical knowledge than the plaintiff and is therefore better positioned to do so. It is important to remember that this proposal has not yet been adopted by any court. As the AI industry evolves, further legislation and common law development will hopefully bring greater certainty to manufacturers and consumers alike.
[1] Stanford University Human-Centered Artificial Intelligence, The AI Index 2022 Annual Report (2022), online (pdf): <https://aiindex.stanford.edu/wp-content/uploads/2022/03/2022-AI-Index-Report_Master.pdf>.
[2] Ibid at 175.
[3] Parliament of Canada, Bill C-27 (2022), online: <https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading>.
[4] European Commission, Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (2022), online (pdf): <https://ec.europa.eu/info/sites/default/files/1_1_197605_prop_dir_ai_en.pdf>.
[5] Ibid at art 4.
