Tort liability
Things to know
- Canadian tort law does not specifically address AI models or systems, but existing legal principles, particularly negligence and the principles that generally apply to product liability claims, apply where a model or system causes harm to a third party.
- Additional torts such as defamation, misrepresentation, intentional infliction of mental distress, intrusion upon seclusion, breach of confidentiality, placing a person in a false light, and non-consensual distribution of intimate images (among others) may be relevant, particularly in the context of chatbots, generative AI content, predictive AI, and deepfakes.
Things to do
- Conduct regular risk assessments of AI models or systems to monitor system performance and identify foreseeable harms to users and third parties. Ensure these assessments account for how a model or system may be used and for the potential for malfunction or misuse (see the sketch after this list for one way to automate part of this monitoring).
- Establish robust governance, human oversight, and quality assurance measures, especially where AI is deployed in high-risk or safety-sensitive sectors.
- Assess the risks associated with how any deployed AI uses proprietary or confidential data, in particular risks arising from the data sources and from the transmission of proprietary data outside the enterprise (including to foreign jurisdictions).
- Document the design, testing, and deployment processes to support the defence of third-party claims.
- Monitor legal developments and prevailing industry standards, and be prepared to adapt your AI model or system use practices to align with applicable legal requirements and industry guidance.
- Explore what AI insurance is available to protect against claims arising from AI models or systems that do not function as expected.
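The risk assessment and documentation steps above lend themselves to partial automation. The sketch below is a minimal illustration in Python, not a prescribed method: the model identifier, error-rate threshold, and log file name are hypothetical assumptions, and your own risk assessment would set the actual metrics and tolerances. It checks a model's recent error rate against a tolerance and writes a timestamped audit record of the kind that can help document ongoing monitoring.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical tolerance: an acceptable error rate would come from your
# own risk assessment, not from any statute or industry standard.
MAX_ERROR_RATE = 0.05

# Timestamped records in a retained log support later documentation
# of design, testing, and monitoring practices.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def assess_model(model_id: str, predictions: list, outcomes: list) -> dict:
    """Compare recent predictions to known outcomes and log the result."""
    errors = sum(p != y for p, y in zip(predictions, outcomes))
    error_rate = errors / len(outcomes) if outcomes else 0.0
    record = {
        "model_id": model_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sample_size": len(outcomes),
        "error_rate": round(error_rate, 4),
        "within_tolerance": error_rate <= MAX_ERROR_RATE,
    }
    logging.info(json.dumps(record))
    if not record["within_tolerance"]:
        # Escalate to human oversight rather than acting automatically.
        logging.warning("Model %s exceeded error tolerance; flag for human review", model_id)
    return record

if __name__ == "__main__":
    # Toy data for illustration; "credit-scorer-v2" is a made-up model ID.
    print(assess_model("credit-scorer-v2", [1, 0, 1, 1], [1, 0, 0, 1]))
```

A script like this does not replace human oversight; it simply makes the monitoring and documentation steps repeatable and leaves a record that can be produced if a claim is made.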
Useful resources
- “Report on Artificial Intelligence and Civil Liability [PDF],” British Columbia Law Institute, April 2024
- “Addressing the Liability Gap in AI Accidents [PDF],” Centre for International Governance Innovation, July 2023
- “Chatbots: who could be liable for the accuracy of the output?”, Osler, March 1, 2024