Citation record for Incident 468

Description: Microsoft's ChatGPT-powered Bing search engine reportedly ran into factual accuracy problems when prompted about controversial matters, such as inventing the plot of a non-existent movie or generating conspiracy theories.
Alleged: An AI system developed by Microsoft and OpenAI and deployed by Microsoft, which harmed Bing users.

Incident Status

Incident ID
468
Report Count
5
Incident Date
2023-02-07
Editors
Khoa Lam
Trying Microsoft’s new AI chatbot search engine, some answers are uh-oh
washingtonpost.com · 2023

Redmond, Wash. — Searching the Web is about to turn into chatting with the Web.

On Tuesday, I had a chance to try out a new artificial intelligence chatbot version of Microsoft's Bing web search engine. Instead of browsing results mainly as…

Bing's ChatGPT-Powered Search Has a Misinformation Problem
vice.com · 2023

Last Tuesday, Microsoft announced that its Bing search engine would be powered by AI in partnership with OpenAI, the parent company of the popular chatbot ChatGPT. However, people have quickly discovered that AI-powered search has a misinfo…

Users Report Microsoft's 'Unhinged' Bing AI Is Lying, Berating Them
vice.com · 2023

The Bing bot said it was "disappointed and frustrated" in one user, according to screenshots. "You have wasted my time and resources," it said.

Microsoft's new AI-powered chatbot for its Bing search engine is going totally off the rails, us…

Microsoft’s Bing is an emotionally manipulative liar, and people love it
theverge.com · 2023

Microsoft’s Bing chatbot has been unleashed on the world, and people are discovering what it means to beta test an unpredictable AI tool.

Specifically, they’re finding out that Bing’s AI personality is not as poised or polished as you might…

Microsoft limits Bing chat to five replies to stop the AI from getting real weird
theverge.com · 2023

Microsoft says it’s implementing some conversation limits to its Bing AI just days after the chatbot went off the rails multiple times for users. Bing chats will now be capped at 50 questions per day and five per session after the search en…

Variants

A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent system as a known AI incident. Rather than being indexed as entirely separate incidents, variants are listed as variations under the first similar incident submitted to the database. Unlike other submission types to the Incident Database, variants are not required to have reporting evidence external to the Incident Database. Learn more from this research paper.