Human rights
Things to know
- If safeguards against bias and discrimination are not in place, AI systems could cause or perpetuate human rights violations, which, in addition to creating liability, could erode trust among affected individuals.
- Human rights are protected under applicable human rights legislation. In Ontario, the governing legislation is the Human Rights Code, R.S.O. 1990, c. H.19 (the Code). The Code specifically protects individuals from discrimination in areas of public life such as employment, housing and services.
- Canada is a signatory to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. The Convention aims to ensure that AI systems are consistent with human rights, democracy and the rule of law without stifling technological progress and innovation. The Convention will come into force once five states, including at least three Council of Europe Member States, agree to be legally bound by it.
- The Law Commission of Ontario and the Ontario Human Rights Commission have published a Human Rights AI Impact Assessment tool. The tool provides a framework for organizations to assess their AI models and systems to ensure they comply with human rights legislation.
- Organizations are responsible for the outputs generated by the AI models and systems they use. In general, it may be difficult for users of models and systems to shift liability for breaches of human rights legislation to the providers of the model or system.
Things to do
- Consider if, or to what extent, an AI model or system may present compliance or litigation risks under Canadian human rights laws, including in relation to hiring, performance management and employment termination issues.
- Ensure human rights law and policy are considered in the design and/or implementation of AI models and systems.
- Understand the terms of use governing an AI tool, particularly how data input into the tool will be treated, as well as any contractual or statutory obligations owed to third parties that might affect use of the tool.
- Develop policies and procedures to test for bias throughout the life cycle of an AI model or system as well as strategies for mitigating bias should it be detected.
- Ensure all applicable stakeholders within your organization, such as human resources, legal teams and information technology departments, have a seat at the table to identify risks and risk mitigation strategies.
- Ensure AI models and systems are not “black boxes” (so that you are able to explain why a decision was made, including by pointing to objective and non-discriminatory reasons).
- Ensure that privacy, confidentiality and privilege considerations are addressed before using third party impact assessment or similar tools.
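The bias-testing point above can be illustrated with a minimal sketch. One common screening check compares selection rates between groups defined by a protected ground (the "four-fifths rule" heuristic used in some employment contexts). The function names, group labels and the 0.8 threshold here are illustrative assumptions, not legal advice or a substitute for a full impact assessment.

```python
def selection_rate(outcomes):
    """Fraction of positive decisions (e.g., 'hire' = 1) in a group."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher if higher else 1.0

def flags_for_review(group_a, group_b, threshold=0.8):
    """Flag outputs for human review when the ratio falls below the
    (illustrative) four-fifths threshold."""
    return adverse_impact_ratio(group_a, group_b) < threshold

# Hypothetical decision logs: 1 = positive outcome, 0 = negative outcome
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # selection rate 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 0.3

print(flags_for_review(group_a, group_b))  # ratio 0.375 -> True
```

A check like this is only a coarse first filter; a disparity below the threshold prompts human investigation of the underlying data and model, consistent with the policies and mitigation strategies described above.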
Useful resources
- “Human Rights AI Impact Assessment,” The Law Commission of Ontario and the Ontario Human Rights Commission, November 2024
- “Human Rights AI Impact Assessment Backgrounder,” The Law Commission of Ontario, March 2025
- “Directive on Automated Decision-Making,” The Government of Canada, June 24, 2025
- “The Framework Convention on Artificial Intelligence,” Council of Europe