Incident 424: Universities' AI Proctoring Tools Allegedly Failed Canada's Legal Threshold for Consent

Description: AI proctoring tools for remote exams were reportedly "not conducive" to free and individual consent from Canadian students whose biometric data was collected when universities adopted remote proctoring during the COVID-19 pandemic.


Alleged: Respondus Monitor, ProctorU, ProctorTrack, Proctorio, ProctorExam, and Examity developed AI systems deployed by Canadian universities, which harmed Canadian students.

Incident Stats

Incident ID: 424
Editor: Khoa Lam
Online Test Proctoring Software and Social Control: Is the Legal Framework for Personal Information and AI Protective Enough in Canada? · 2022

Academic surveillance can be considered an emerging field of "surveillance capitalism" (Zuboff), pertaining to the dominance of a few companies in the surveillance field. Online proctoring software represents a variety of tools often based…

Online exam proctoring software during the pandemic: The quest to minimize student privacy risks · 2022

University of Ottawa

Project leader(s): Céline Castets-Renard, Professor, Faculty of Law – Civil Law Section, University of Ottawa

This project examines how, during the COVID-19 pandemic, many universities…

Online proctoring biometrics use fails to meet Canadian legal threshold, report says · 2022

Online proctoring tools for conducting remote exams do not go far enough to ensure free, clear, and individual consent from the Canadian students whose biometric data they collect, according to a new report published by the University of Ottawa …

Online proctoring biometrics fails to meet Canada's legal threshold of consent: report · 2022

Online proctoring biometrics for remote exams fails to meet Canada's legal threshold of consent, privacy, and anti-discrimination, according to a new academic report from the University of Ottawa with the support of the Office of the Privac…


A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.