
Report 2387

Associated Incidents

Incident 424 · 4 Reports
Universities' AI Proctoring Tools Allegedly Failed Canada's Legal Threshold for Consent

Online exam proctoring software during the pandemic: The quest to minimize student privacy risks
priv.gc.ca · 2022

Organization

University of Ottawa

Published

2022

Project leader(s)

Céline Castets-Renard, Professor, Faculty of Law – Civil Law Section, University of Ottawa

Summary

This project examines how, during the COVID-19 pandemic, many universities used exam proctoring tools to compensate for the inability to conduct in-person exams.

The researchers found that, although many companies and surveillance processes operate in this industry, most tools rely on artificial intelligence techniques such as data mining and facial recognition to detect suspicious behaviour that could indicate cheating.
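
The report summary does not describe any particular vendor's implementation, but as a purely illustrative sketch of the kind of automated check such tools rely on, a minimal Python example (using OpenCV's stock face detector; the flagging rules here are hypothetical) might look like this:

```python
# Illustrative only: a toy "suspicious frame" check of the sort proctoring
# tools automate. Assumes OpenCV (cv2) is installed; the rules are made up
# for this example and are not taken from the report.
import cv2

# Haar cascade face detector bundled with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def flag_frame(frame) -> str | None:
    """Return a hypothetical flag for one webcam frame, or None if nothing is flagged."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no face visible"   # candidate may have left the camera view
    if len(faces) > 1:
        return "multiple faces"    # another person may be present
    return None
```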

These companies use the personal information they collect to carry out their mission of monitoring university exams, but they also use it for the secondary purpose of improving their artificial intelligence tools. They obtain students' consent for this, yet the conditions under which that consent is collected are not conducive to the expression of free, clear and individual consent. In addition, the separation between public-sector and private-sector privacy laws makes it difficult to characterize these outsourcing companies, and enforcing their potential liability as principals under the Personal Information Protection and Electronic Documents Act (PIPEDA) is difficult in the context of enforcing provincial public-sector laws.

Furthermore, this project demonstrates that control over the conditions of data collection and retention is made more difficult by the fact that many of these technology companies are based in the United States, are subject to U.S. law, and may even require that data be transferred to the United States.

The researchers make five recommendations to address these issues, which can be found in the conclusion of the research report.

Project deliverables are available in the following language(s)

French

  • Research report (HTML document)
  • Round table "Quels enjeux juridiques des logiciels de surveillance d'examen ?" ("What legal issues does exam proctoring software raise?") (HTML document)

English

  • Article: "Online test proctoring software and social control: Is the legal framework for personal information and AI protective enough in Canada?" (HTML document)

OPC-funded project

This project received funding support through the Office of the Privacy Commissioner of Canada's Contributions Program. The opinions expressed in the summary and report(s) are those of the authors and do not necessarily reflect those of the Office of the Privacy Commissioner of Canada. Summaries have been provided by the project authors. Please note that the projects appear in their language of origin.

