AI Incident Database

Report 2388

Related Incidents

Incident 424 · 4 Reports
Universities' AI Proctoring Tools Allegedly Failed Canada's Legal Threshold for Consent

Online proctoring biometrics use fails to meet Canadian legal threshold, report says
biometricupdate.com · 2022

Online proctoring tools for conducting remote exams do not go far enough to ensure free, clear and individual consent from Canadian students whose biometric data they collect, according to a new report published by the University of Ottawa and supported by the Office of the Privacy Commissioner of Canada.

With in-person learning disrupted by the COVID-19 pandemic, many institutions turned to software platforms as a way to conduct examinations. Often based on artificial intelligence, tools such as Respondus Monitor, ProctorU, Examity and others use data mining and facial recognition to monitor for cheating—and present what Céline Castets-Renard, the law professor who led the project, called “legal issues of socio-economic discrimination and privacy.”

The report points to familiar issues with AI discrimination, specifically “the overreach of power such as public surveillance or police surveillance using AI facial recognition software, with a potential for discrimination, such as race, gender and age biases.” But it also identifies the risk of certain socio-economic and situational factors that could trigger unwarranted software alerts. According to the report, “a domestic pet who makes noise, such as a bark or a chirp, during an online proctoring exam has been identified as a cause for flagging a potential cheating incident.” Pets, children and other audio-visual variables can make the proctoring software think something suspicious is going on when it is not—a problem compounded in large, multigenerational homes.
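The false-positive mechanism described above can be made concrete. A purely hypothetical sketch of a naive audio monitor that flags any loudness spike over a fixed threshold—and so cannot distinguish a barking pet or a child in a multigenerational home from actual talking—follows; the function name, threshold, and data are invented for illustration and are not taken from any real proctoring product:

```python
# Hypothetical illustration only: a naive loudness-spike detector of the kind
# that would flag a dog bark or a child's shout as "suspicious audio".
# The threshold value is invented, not from any real proctoring tool.

def flag_audio_events(samples, threshold=0.8):
    """Return indices of audio samples whose absolute amplitude exceeds
    the threshold. The detector has no notion of *what* made the sound,
    so household background noise triggers it as readily as speech."""
    return [i for i, s in enumerate(samples) if abs(s) > threshold]

# Quiet ambient sound with one loud, bark-like spike mid-exam:
exam_audio = [0.02, 0.05, 0.03, 0.95, 0.04, 0.02]
print(flag_audio_events(exam_audio))  # the spike at index 3 is flagged
```

Because the rule fires on amplitude alone, every loud-but-innocent event in the room becomes a "potential cheating incident"—exactly the compounding problem the report identifies for large, multigenerational homes.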

Biometric tools, such as facial recognition, are susceptible to similar errors. “Biometric keystroke analysis which serves to track keystroke data, eye tracking which monitors and analyses eye movements, audio monitoring which records and monitors students sonically, and facial detection are all methods that are used by some proctoring software,” says the research report. And all come with unacceptable risks that the technology will mistakenly flag certain variations in data as cheating.
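The keystroke-analysis method the report names can be sketched in the same spirit: compare a student's typing rhythm during the exam against their own baseline and flag large deviations. This is a hypothetical illustration of the general technique, with invented function names, features, and thresholds—it is not the logic of any actual proctoring product—and it shows how ordinary variation (a different keyboard, an injured hand, stress) could exceed a statistical threshold and be mistaken for cheating:

```python
# Hypothetical sketch of keystroke-dynamics flagging: treat a large z-score
# of the exam-time typing rhythm against the student's baseline as suspect.
# All thresholds and features here are illustrative assumptions.
from statistics import mean, stdev

def flag_keystroke_anomaly(baseline_intervals, exam_intervals, z_threshold=3.0):
    """Return True if the mean interval between keystrokes during the exam
    deviates from the student's baseline by more than z_threshold
    standard deviations. Intervals are seconds between keystrokes."""
    mu = mean(baseline_intervals)
    sigma = stdev(baseline_intervals)
    if sigma == 0:
        return False  # degenerate baseline: nothing sensible to compare
    z = abs(mean(exam_intervals) - mu) / sigma
    return z > z_threshold

baseline = [0.20, 0.25, 0.22, 0.30, 0.18, 0.24]  # practice-session rhythm
print(flag_keystroke_anomaly(baseline, [0.21, 0.26, 0.23]))  # False: similar rhythm
print(flag_keystroke_anomaly(baseline, [1.50, 1.60, 1.40]))  # True: much slower typing
```

The second call is flagged even though slower typing has many innocent explanations, which is precisely the "unacceptable risk" the report attributes to treating variations in biometric data as evidence of cheating.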

The report concludes with a series of recommendations pertaining to how AI is defined and categorized, and how human oversight of evolving surveillance technologies can help maintain transparency and reduce error and bias. The final recommendation tidily summarizes the researchers’ findings, calling for “a collective reflection on whether to prohibit certain uses of AI, and the means to determine how to identify such prohibited uses.”

Read the source
