AI Incident Database

Report 2329

Tweet: @ArthurCDent
twitter.com · 2022

Interesting. The AI completely fails the cognitive reflection test: it gives the intuitive but incorrect answer every time.

This is despite doing fine with similar word math problems. It is theoretically satisfying, since the neural network is most certainly non-reflective. It’s an association machine, which is basically how intuitive reasoning works.
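For readers unfamiliar with the test, the best-known CRT item illustrates the intuitive-versus-reflective gap the thread is describing. The thread does not quote the problem itself, so this worked example is illustrative:

```python
# Classic CRT item: "A bat and a ball cost $1.10 in total. The bat costs
# $1.00 more than the ball. How much does the ball cost?"
#
# Intuitive (wrong) answer: $0.10. Reflective (correct) answer: $0.05.

ball = 0.05
bat = ball + 1.00

# The correct answer satisfies both constraints.
assert abs((bat + ball) - 1.10) < 1e-9   # total is $1.10
assert abs((bat - ball) - 1.00) < 1e-9   # bat costs $1.00 more

# The intuitive answer does not: with ball = 0.10 the bat would be $1.10,
# making the total $1.20 rather than $1.10.
intuitive_ball = 0.10
intuitive_total = intuitive_ball + (intuitive_ball + 1.00)
assert abs(intuitive_total - 1.20) < 1e-9
```

An association machine pattern-matches "$1.10 minus $1.00" and emits $0.10; only reflective checking of both constraints recovers $0.05.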

So this is even more interesting. With a little bit of help, it eventually worked its way to the correct answer, even if the reasoning is a little odd!

Am loving the discussion but will have to mute this thread now. If you want to play further, try Stanislaw’s trick of appending "Think about it step by step and only then produce a solution." to the problem.
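The suffix trick quoted above can be sketched as a small prompt-construction helper. This is a minimal illustration, not the author's code; `query_model` is a hypothetical stand-in for whatever LLM API you use:

```python
# Sketch of the prompting trick from the thread: append a step-by-step
# instruction to the problem before sending it to the model.

STEP_BY_STEP_SUFFIX = (
    "Think about it step by step and only then produce a solution."
)

def build_prompt(problem: str, reflective: bool = True) -> str:
    """Return the problem text, optionally with the reflective nudge appended."""
    if reflective:
        return f"{problem} {STEP_BY_STEP_SUFFIX}"
    return problem

# Usage (query_model is hypothetical -- substitute your own API call):
#
#   prompt = build_prompt("A bat and a ball cost $1.10 in total. "
#                         "The bat costs $1.00 more than the ball. "
#                         "How much does the ball cost?")
#   answer = query_model(prompt)
```

The nudge works by steering the model toward generating intermediate reasoning before committing to an answer, rather than emitting the first associative completion.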


2024 - AI Incident Database
