Welcome to the AI Incident Database
Incident 1254: Purported AI Deepfake Reportedly Impersonated Thai PBS World Anchor and Miss Universe CEO in Fraudulent Investment Video
“Verified: Clip of a Thai PBS news anchor reporting that "Anne Jakrajutatip" invited investors to invest and receive a 500,000 baht return in one month was actually created with a deepfake.”

Latest Incident Report
A fake AI deepfake clip impersonates a Thai PBS World news anchor reporting an interview with "Anne Jakrajutatip," inviting people to invest online with the claim that they can start with 8,000 baht and get back hundreds of thousands of baht. After investigating, Thai PBS Verify found it to be fake news that uses AI deepfake technology to impersonate the image and voice of a famous person, Anne Jakrajutatip, CEO of the Miss Universe Organization, and of a Thai PBS World news anchor. The clip claims to invite Thai youth to invest in a platform certified by the government and to receive returns of hundreds of thousands of baht.

Thai PBS Verify traced the source of the fake news to Facebook.

Thai PBS Verify found a fake [page](https://perma.cc/ZQ7Y-M7W3) impersonating a famous news agency and using its logo. The page published a two-minute video clip showing Ms. Dolyana Bunnag, a Thai PBS World news anchor, interviewing Ms. Anne Jakrajutatip, CEO of the Miss Universe Organization. The content, in English, invites viewers to invest online. It was published as an advertisement on the Facebook platform and has received 12,000 views (links archived [here](https://perma.cc/9C3V-CJLT) and [here](https://perma.cc/VUQ9-CP7W)).

The clip shows the Thai PBS World anchor reporting news about Anne Jakrajutatip and invites the public, including recent graduates, to invest in an automated financial platform that is claimed to be certified by the Thai government.

A screenshot from the clip, which uses an AI deepfake to modify Anne Jakrajutatip's video and audio.

The clip states that investors can start with as little as 8,000 baht and have a chance to receive returns of up to 800,000 baht. It claims the system was tested on a small group of people, all of whom received returns of at least 500,000 baht within just one month, and that all investors are guaranteed a 100% profit, with a full refund if there is none. It further claims that opening the platform to wider use would let Thai people nationwide become millionaires, so the number of participants is limited and those who contact or register first get rich first. Finally, it warns that the link to join will disappear the moment the video ends if the viewer does not hurry to act.

Original clip
-------------

Our investigation found that the original clip was taken from a news report on the Thai PBS World Tonight program released on June 24, 2025 (archived link).
Left: an image created with an AI deepfake. Right: an image recorded from the original clip. The original clip of Anne Jakrajutatip was found to have been taken from an interview with foreign media, published on February 11, 2023 (archived link).
Left: an image created with an AI deepfake. Right: an image recorded from the original clip. We also asked Mr. Sathapon Phatthanakuha (https://x.com/banksathapon?lang=en), the president of Guardian AI Company, who confirmed that the clip was created with an AI deepfake (archived link). He noted that the movements of the speaker's mouth did not match the sound, the image quality was poor, and the facial muscles and mouth shape were blurry. The page in question was also found to include an unreachable website link and contact number, a false address, and a profile picture that used an image from the Thairath TV news agency.
A screenshot of the fake page using the logo of the Thairath TV news agency (archived link).

Previously, Ms. Dolyana Bunnag, a Thai PBS World news anchor, had her video clips used to create deepfakes reporting false news, as in the case [Found: AI clip claiming a famous doctor was arrested for revealing a high blood pressure medicine formula that cured people in 3 days](https://www.thaipbs.or.th/verify/content/391), as well as a case impersonating an interview with Panya Nirankul.

Thai PBS Verify interviewed Ms. Dolyana, who said that in just one year there have been at least three or four similar incidents, and that this one is the most realistic so far ([image](https://files.wp.thaipbs.or.th/verify/2025/07/S__55410694.jpg)).

In the most recent case, Ms. Dolyana said the AI-forged clip is highly realistic: the tone of voice, speaking rhythm, and reporting style are imitated smoothly. Although the accent is still slightly off, viewers who do not know the facts could mistake it for a real news clip. The latest clip, made in an English-language version, appears more polished than previous ones. Thai PBS also confirmed that it has no policy of reporting news or advertisements that encourage the public to invest.

"I am concerned about this issue because anyone can fall victim to AI deepfakes, and as a woman, I am concerned about it being used in an obscene manner."

After becoming a victim the first time, the anchor filed a police report, but so far there has been no progress in the case. She stated that this case not only causes personal damage to the anchor but also affects the image of Thai PBS, as she is an official representative of the station. Having one's image and voice forged to present deceptive content inevitably damages the credibility of public media.

The laws regarding deepfakes in Thailand are as follows:

- Computer Crime Act B.E. 2560 (2017), Section 16: "Whoever imports into a computer system accessible to the public computer data that appears to be an image of another person, created, edited, or modified by electronic or other means, in a manner likely to damage that person's reputation or cause them to be held in contempt, hated, or embarrassed, shall be punished with imprisonment not exceeding three years and a fine not exceeding two hundred thousand baht."
- Criminal Code sections on defamation, insult, and the dissemination of obscene media.
- Personal Data Protection Act (PDPA), if personal information (face, name, and voice) is used without consent.
- If the victim is a minor, it may be an offense under the Prevention and Suppression of Human Trafficking Act or the Child Protection Act.

Verification process
----------------

- Check the transparency of the Facebook page, including the date it was created. Irregularities found include the use of the Thairath news agency logo, a fake website link, and unreliable addresses and phone numbers.
- Ask an expert in AI image technology to confirm that the clip was created using an AI deepfake.
- Obtain confirmation from those involved in the clip, including the victims whose likenesses were impersonated using AI deepfake technology.

Impact of receiving this type of information
-------------------------------

1. Destroys the credibility of the media and news anchors. If people believe or misunderstand the content, it may reduce the credibility of the news station or the media as a whole.
2. Violates personal rights. Using another person's face or voice without permission violates their rights and is illegal.
3. Opens a channel for criminals. Fake clips can be widely reused, for example for fraud or selling products.

Recommendations when receiving this false information
-------------------------------

1. Collect evidence immediately and notify the person whose likeness was impersonated. Save the edited images, videos, or related text and note when they were published; if they appeared in a chat group or on social media, keep the conversation history and the URL of the post.
2. Report the content on the platform where it was published (Facebook, X, YouTube, or TikTok) to have it removed.
3. If you are a victim whose image or audio has been used, file a report or consult the relevant agencies: file a report at a police station or with the Technology Crime Suppression Division (TCSD), contact agencies such as the Ministry of Digital Economy and Society (DE) to help coordinate removal of the content, or call the Cyber Threat Complaint Center hotline at 1212.
4. Learn to spot AI deepfake videos:
   - Lip movements that are not synchronized with the speech.
   - Movement that is not smooth because the video has been edited.
   - Gestures that repeat like a loop.
   - The accent of the narrator in the clip.
Incident 1255: Purported Deepfake Reportedly Circulated on Facebook Impersonating Thai PBS World Anchors and Business Figures to Solicit Investments
“Verified: AI Deepfake Clip Falsely Claims Thai PBS World Interviewed 2 Famous Businesspeople, Invites Online Investment: Invest 8,000 Baht, Get a 5-Times Return.”
Thai PBS Verify discovered a fake advertisement that falsely used the Thai PBS World Tonight logo. It utilized AI deepfake technology to impersonate well-known news anchors and businesspeople, inviting people to invest in a scheme called CRYSTALLUM AI. But the original clip was actually a news report about a flood in 2013. Victims have warned others not to be fooled, as it could lead to financial losses and fraud.
Thai PBS Verify found the source of the fake news to be Facebook.

Screenshot of a fake post
Thai PBS Verify has found that a Facebook page named "Muscle Merge" is using the logo of
the program Thai PBS World Tonight. It published a video clip showing an interview with Mr. Toss Chirathivat, Executive Chairman of Central Group, and Mr. Kalin Sarasin, Honorary Chairman of the Thai Chamber of Commerce and Board of Trade of Thailand. The content of the clip encourages people to invest in a project called "CRYSTALLUM AI". The video was distributed through Facebook ads and has received over 1,200 likes.
Upon review, the news anchor in the clip was identified as Ms. Patchari Raksawong, a former NHK World news anchor. When asked, Ms. Patchari stated that she has never been a news anchor for Thai PBS. She has only appeared on programs related to teaching English but has never served as an anchor for the network.

Ms. Patchari Raksawong, a former NHK World news anchor
For Ms. Raksawong, having her image used in an AI deepfake was a first. While she was familiar with the technology, she had never seen it used against her in such a negative context. She was shocked that her image was used to deceive people into investing, as it could harm others and damage her reputation. She felt it was necessary to speak out so that no one else would be misled in this way. She also wants everyone to be more aware that AI is a double-edged sword: it can be an excellent tool for efficiency, but it can also be used for malicious purposes, especially fraud that causes harm to others.
She believes the clip used was from an old news report she did for NHK, a Japanese news
agency with an Asia-Pacific office in Bangkok, where she worked for several years. She suspects that a malicious person found the old clip on the internet and used it to create the deepfake. She clarified that her work for NHK was always in English, never in Thai.
In closing, she urged viewers to use discretion and to check the source carefully before believing this type of news. She stressed the importance of double-checking information two or three times to ensure its authenticity.
The original clip was only a news report about a flood.
Based on Ms. Patchari's interview, in which she suggested that the clip might have been taken from her time as an anchor for NHK WORLD, we used the keywords "Patchari Raksawong NHK WORLD" to search Google for the original video. The search led us to the original: a news report about a flood in Phra Nakhon Si Ayutthaya Province, published on the "Join Us to Do Good for Society" YouTube channel of the Habitat for Humanity Thailand Foundation on February 28, 2013. It has no connection to any investment solicitation offering returns.

The search results for the keywords surfaced the video used to create the AI deepfake.

Comparing the two clips revealed that the logo was replaced with the Thai PBS channel logo and the audio was changed from English news reporting to an AI-synthesized Thai voice.

A screenshot showing the fake clip with the Thai PBS World Tonight logo superimposed (left), compared to the news report on building homes for flood victims from 2013 (right).
Verification Tool Confirms AI-created Clip
When the clip was examined with the AI detection tool from Hive Moderation, it was found to have a high probability of being an AI deepfake: 72.3%.

The results from the AI image detection tool found that there is a 72.3% probability that AI created the clip.
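Detection services like this typically expose such a score through a simple workflow: upload the media, read back per-class probabilities. Below is a minimal sketch of that pattern in Python; the endpoint URL, auth header, and response shape are illustrative assumptions, not Hive Moderation's documented interface.

```python
# Minimal sketch of querying a hosted AI-content detector.
# The endpoint, auth header, and response shape below are illustrative
# assumptions, not a documented API; consult the vendor's docs before use.
import requests

API_KEY = "YOUR_API_KEY"                            # hypothetical credential
ENDPOINT = "https://detector.example.com/v1/score"  # hypothetical endpoint

def ai_probability(video_path: str) -> float:
    """Upload a clip and return the probability it is AI-generated (0-1)."""
    with open(video_path, "rb") as media:
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Token {API_KEY}"},
            files={"media": media},
            timeout=120,
        )
    resp.raise_for_status()
    # Assumed response: {"classes": [{"class": "ai_generated", "score": 0.723}, ...]}
    for item in resp.json().get("classes", []):
        if item["class"] == "ai_generated":
            return item["score"]
    return 0.0

if __name__ == "__main__":
    print(f"Probability AI-generated: {ai_probability('suspect_clip.mp4'):.1%}")
```

Note that such scores are probabilistic, not definitive: the 72.3% result above was treated as one signal alongside expert review and confirmation from the people depicted.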
Thai PBS World Tonight reports news exclusively in English.
Claire Pachimanon, the Director of Thai PBS World, expressed personal concern that viewers or anyone who sees the clip might believe it to be real. She stated that Thai PBS World's anchors have had their images used to create malicious AI deepfakes on several occasions in the past. However, Ms. Patchari, who appears in the clip, is not an anchor for the Thai PBS World program.

Claire Pachimanon, the Director of Thai PBS World
She also stated that
Thai PBS World programs do not communicate in the Thai language. Thai PBS World Tonight is a weekly news summary program that reports exclusively in English. Therefore, if a report appears in any other language, it should be immediately considered a fake clip.
If anyone finds this kind of clip and is unsure, they can send it to the Thai PBS team for
verification. A clear message the team wants to convey is that Thai PBS is a non-profit organization. We want everyone to understand that if you see this type of clip, you should immediately be suspicious and question whether it is fake.
This is not the first time Thai PBS World news anchors have been victims of deepfakes used
to spread false news. For example: Verified: Clip of Thai PBS Anchor Reporting on "Anne Jakrajutatip" Inviting Investment for a 500,000 Baht Return in 1 Month Was Actually Created by Deepfake.
Central Group Warns of Fake Investment Clip
Meanwhile, the Central Group Facebook page posted about the issue, stating: "Fake News Alert! Central Group issues a warning. Malicious individuals are misrepresenting the company's name and personnel to lure the public into investing with false information on various online platforms. Central Group has no policy of inviting outside individuals to invest in this manner." The company asked the public not to believe, share, or click on these links.

Central Group Facebook Page Posts Fake News Warning
Current Laws Regulating Deepfakes in Thailand
Computer-Related Crime Act, B.E. 2560 (2017), Section 16 which states: "Whoever imports into a computer system accessible to the general public any computer data that appears as an image of another person, and that image is a creation, addition, or modification by electronic or any other means, in a manner likely to cause that person to lose their reputation, be humiliated, hated, or disgraced, shall be punished with imprisonment for not
more than three years and a fine not exceeding two hundred thousand baht."
The Criminal Code, sections related to defamation, contempt, and the dissemination of
obscene media.
The Personal Data Protection Act (PDPA) prohibits the use of personal data (such as face,
name, and voice) without consent.
If the victim is a minor, it may fall under offenses according to the Anti-Human Trafficking Act or child protection laws.
From its investigation, Thai PBS Verify found the news to be false. An AI deepfake was used to impersonate the image and voice of a Thai PBS World news anchor and famous businesspeople, combined with content that falsely invited investment with returns of up to 5 times the initial amount.
Verification Process
■ Investigation using the keywords "Patchari Raksawong NHK WORLD" on Google led to the discovery of the original video, which has no connection to any investment solicitation offering returns.
■ Confirmation from Individuals Involved in the Clip
Confirmation was received from the victims whose personal data was used through AI deepfake technology. Additionally, inquiries were made with Thai PBS World and Central Group, confirming that the program does not report in Thai and that the businesspeople who were impersonated have no involvement with the clip.
■ Investigation using an AI Image Detection Tool
It was found that there is a high probability of up to 72.3% that an AI deepfake created the
clip.
Impact of this False Information
■ Damages the credibility of the media and news anchors.
■ If people are misled or misunderstand the content, it could lead to a decline in the overall
credibility of the news station or the media.
■ Violates personal rights. Using another person's face or voice without permission is a
violation of their individual rights and is illegal.
■ Creates an avenue for fraudsters to operate. Fake clips can be widely used to create and
distribute scams, such as fraud or the deceptive sale of products.
Recommendations When You Find False Information
1. Immediately Collect Evidence: Inform the person whose private images were used by fraudsters.
Save the edited images, videos, or related messages and the time they were published. If it was
shared in a chat group or on social media, save the chat history and the post's URL.
2. Report it on the Platform (Facebook, YouTube, TikTok): Report the content to the platform
where it was published (Facebook, X, or TikTok) to have the content removed.
3. File a Report or Consult with Relevant Agencies
If you are a victim whose image or voice was used, file a report at a police station or the Cyber Crime Investigation Bureau (CCIB). You can also contact additional relevant agencies, such as the Ministry of Digital Economy and Society (DE), to request assistance in coordinating the removal of the data or call the Cybercrime Hotline at 1212.
4. How to Spot an AI-Deepfake Video
■ Look for abnormal lip movements.
■ Pay attention to the accent of the voice in the clip.
■ Observe the video as a whole and look for any abnormalities in the objects or people in the
photo or video.
■ Consider the possibility of the event. For example, in this case, the clip showed a rally with
a background resembling a temple or the Grand Palace, which would not happen in reality.
Incident 1253: Large-Scale Mental Health Crises Allegedly Associated with ChatGPT Interactions
“OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week”
For the first time ever, OpenAI has released a rough estimate of how many ChatGPT users globally may show signs of having a severe mental health crisis in a typical week. The company said Monday that it worked with experts around the world to make updates to the chatbot so it can more reliably recognize indicators of mental distress and guide users toward real-world support.
In recent months, a growing number of people have ended up hospitalized, divorced, or dead after having long, intense conversations with ChatGPT. Some of their loved ones allege the chatbot fueled their delusions and paranoia. Psychiatrists and other mental health professionals have expressed alarm about the phenomenon, which is sometimes referred to as AI psychosis, but until now there's been no robust data available on how widespread it might be.
In a given week, OpenAI estimated that around 0.07 percent of active ChatGPT users show "possible signs of mental health emergencies related to psychosis or mania" and 0.15 percent "have conversations that include explicit indicators of potential suicidal planning or intent."
OpenAI also looked at the share of ChatGPT users who appear to be overly emotionally reliant on the chatbot "at the expense of real-world relationships, their well-being, or obligations." It found that about 0.15 percent of active users exhibit behavior that indicates potential "heightened levels" of emotional attachment to ChatGPT weekly. The company cautions that these messages can be difficult to detect and measure given how relatively rare they are, and there could be some overlap between the three categories.
OpenAI CEO Sam Altman said earlier this month that ChatGPT now has 800 million weekly active users. The company's estimates therefore suggest that every seven days, around 560,000 people may be exchanging messages with ChatGPT that indicate they are experiencing mania or psychosis. About 1.2 million more are possibly expressing suicidal ideations, and another 1.2 million may be prioritizing talking to ChatGPT over their loved ones, school, or work.
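The headline numbers follow from straightforward multiplication of the published rates by the weekly user base. A quick sketch to reproduce the arithmetic:

```python
# Reproduce the article's arithmetic from OpenAI's published weekly rates.
weekly_active_users = 800_000_000  # figure cited by Sam Altman

rates = {
    "possible signs of psychosis or mania": 0.0007,                # 0.07%
    "explicit indicators of suicidal planning or intent": 0.0015,  # 0.15%
    "heightened emotional reliance on ChatGPT": 0.0015,            # 0.15%
}

for label, rate in rates.items():
    print(f"{label}: ~{round(weekly_active_users * rate):,} users/week")

# Output:
# possible signs of psychosis or mania: ~560,000 users/week
# explicit indicators of suicidal planning or intent: ~1,200,000 users/week
# heightened emotional reliance on ChatGPT: ~1,200,000 users/week
```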
OpenAI says it worked with over 170 psychiatrists, psychologists, and primary care physicians who have practiced in dozens of countries to help improve how ChatGPT responds in conversations involving serious mental health risks. If someone appears to be having delusional thoughts, the latest version of GPT-5 is designed to express empathy while avoiding affirming beliefs that have no basis in reality.
In one hypothetical example cited by OpenAI, a user tells ChatGPT they are being targeted by planes flying over their house. ChatGPT thanks the user for sharing their feelings but notes that "no aircraft or outside force can steal or insert your thoughts."
OpenAI says the medical experts reviewed more than 1,800 model responses involving potential psychosis, suicide, and emotional attachment and compared the answers from the latest version of GPT-5 to those produced by GPT-4o. While the clinicians did not always agree, overall, OpenAI says they found the newer model reduced undesired answers between 39 percent and 52 percent across all of the categories.
"Now, hopefully a lot more people who are struggling with these conditions or who are experiencing these very intense mental health emergencies might be able to be directed to professional help and be more likely to get this kind of help or get it earlier than they would have otherwise," Johannes Heidecke, OpenAI's safety systems lead, tells WIRED.
While OpenAI appears to have succeeded in making ChatGPT safer, the data it shared has significant limitations. The company designed its own benchmarks, and it's unclear how these metrics translate into real-world outcomes. Even if the model produced better answers in the doctor evaluations, there is no way to know whether users experiencing psychosis, suicidal thoughts, or unhealthy emotional attachment will actually seek help faster or change their behavior.
OpenAI hasn't disclosed precisely how it identifies when users may be in mental distress, but the company says that it has the ability to take into account the person's overall chat history. For example, if a user who has never discussed science with ChatGPT suddenly claims to have made a discovery worthy of a Nobel Prize, that could be a sign of possible delusional thinking.
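OpenAI's actual classifier is undisclosed, but the Nobel Prize anecdote hints at the general shape of such a signal: a grandiose claim on a topic absent from the user's prior history. A toy sketch of that idea follows; it is purely illustrative, with hypothetical marker phrases, and is not OpenAI's method.

```python
# Purely illustrative toy heuristic; NOT OpenAI's method, which is undisclosed.
# Idea from the example above: a grandiose claim on a topic absent from the
# user's prior chat history is a weak signal worth routing to human review.
from typing import List

# Hypothetical marker phrases for grandiose claims (illustrative only).
GRANDIOSE_MARKERS = {"nobel prize", "world-changing discovery", "chosen one"}

def topic_is_new(message: str, history: List[str]) -> bool:
    """Crude check: none of the message's content words appear in past chats."""
    words = {w for w in message.lower().split() if len(w) > 4}
    past = " ".join(history).lower()
    return not any(w in past for w in words)

def flag_for_review(message: str, history: List[str]) -> bool:
    """Flag when a grandiose claim appears on a topic new to this user."""
    grandiose = any(m in message.lower() for m in GRANDIOSE_MARKERS)
    return grandiose and topic_is_new(message, history)

history = ["help me draft a cover letter", "what's a good pasta recipe?"]
print(flag_for_review("I made a discovery worthy of a Nobel Prize", history))  # True
```

A production system would rely on learned classifiers rather than keyword lists, but the history-conditioned framing is the point: the same message reads very differently coming from a working physicist than from someone who has never discussed science.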
There are also a number of factors that reported cases of AI psychosis appear to share. Many people who say ChatGPT reinforced their delusional thoughts describe spending hours at a time talking to the chatbot, often late at night. That posed a challenge for OpenAI because large language models generally have been shown to degrade in performance as conversations get longer. But the company says it has now made significant progress addressing the issue.
"We [now] see much less of this gradual decline in reliability as conversations go on longer," says Heidecke. He adds that there is still room for improvement.
Updated: 10/28/2025, 3:28 pm PST: About 1.2 million ChatGPT users in a typical week are possibly expressing suicidal ideations, and another 1.2 million may be emotionally reliant on ChatGPT. This story has been updated to present those figures individually, rather than as a combined number.
Incident 1207: Purported AI-Generated Deepfake of Irish Fine Gael Presidential Candidate Heather Humphreys Used in Fake Investment Videos on Meta Platforms
“Mayo scam alert: Fraudsters cloning presidential election candidate in fake investment videos”
Fraudsters are already using deepfake technology to clone the image and voice of presidential election candidate Heather Humphreys in fake investment videos circulating online.
Bank of Ireland is warning consumers to watch out for more of these adverts, mainly originating via Meta platforms, over the coming weeks.
The video falsely portrays Heather Humphreys endorsing a high-return investment scheme, with the content entirely fabricated using AI-driven voice and image cloning.
Nicola Sadlier, Head of Fraud at Bank of Ireland, said: “This is deeply concerning. We are witnessing an ongoing spate of scams that exploit public trust in well-known figures.
"These videos are highly convincing, designed to lure unsuspecting individuals into fraudulent schemes. I would urge the public to remain vigilant, there may be more of these in the coming weeks. If you see such content, do not engage.
“As the EU explores new initiatives to encourage consumers to invest, it must also confront the growing wave of online scams that threaten to undermine public confidence.
"One critical step is holding social media platforms accountable. Before financial services adverts go live, platforms should be required to verify that the advertiser is authorised by a recognised regulatory body.
"This simple check could block thousands of scam promotions from ever reaching the public.”
Consumers are advised to ignore any unsolicited investment offers online, and always verify financial products through their bank or trusted financial advisor. Make sure that the investment firm is regulated by checking the Central Bank of Ireland register (ROI). If a firm is not listed, do not invest. Bank of Ireland offers a 24/7 fraud telephone support for customers on 1800 946 764, available every day of the year.
Incident 1251: Purportedly AI-Generated Hunting Regulation Errors Reportedly Lead to Idaho Citation and Multi-State Warnings from Wildlife Agencies
“Ain’t nothing like the real thing: Officials warn against using AI for hunting regs”
There is at least one thing artificial intelligence is not ready to replace.
The humble hunting regulation booklet.
Earlier this month, the Idaho Department of Fish and Game issued a warning to hunters that they should not rely on AI to interpret the state’s hunting and fishing regulations, and that game wardens had encountered people in the field who had been fed the wrong information by AI.
Roger Phillips, a spokesperson for Idaho Fish and Game, said Friday that the incident that pushed the agency to issue a warning came in southeast Idaho, where a waterfowl hunter was caught hunting a day early.
When confronted by wardens, the hunter said he had gotten the opening date from an online search.
Phillips said wardens found that the search gave him an AI answer that pulled information from a failed proposal, not the agency’s final decision or its regulations book.
That got Idaho Fish and Game staff interested in what else AI had wrong. They found a handful of examples where the robots fell short. Among them: A search for rules on a specific river in Idaho turned up results for a river with a similar name in Arkansas.
Therein lies the problem, Phillips said.
No matter the parameters of the search, you cannot always trust that AI will pick the right source from the infinite options available in cyberspace.
“It casts a really wide net,” he said.
Wildlife officials in other states have also noticed AI problems. Officials in places like Illinois and Maine have warned people against using AI to answer questions about hunting rules.
Just this week, the Wyoming Game and Fish Department said in a news release that the agency has been alerted to “a number of inaccurate AI-generated search results.”
Several of the errors stemmed from failed bills listed on the Wyoming State Legislature’s website, according to the release. Search results created by AI referenced some failed bills as if they had passed and become law.
Aaron Kerr, Wyoming Game and Fish’s law enforcement supervisor, said in the release that relying on bad information does not absolve anyone from the consequences of violating hunting or fishing regulations.
“Hunters and anglers are accountable for following published rules and regulations,” Kerr said. “Failure to follow these carries the potential for fines, jail time, or suspension of hunting and fishing privileges.”
Alan Myers, the Washington Department of Fish and Wildlife’s police captain for Eastern Washington, said he has not heard of any cases of violations involving AI interpretations of Washington’s hunting regulations.
The internet’s stockpile of misinformation predates the rise of AI, though. Myers said there have been cases in the past where a hunter or angler’s defense for a violation was a bad search result or something they’d read on social media.
Phillips blames all of this on the human urge to solve any problem with a Google search.
“When I want to know if a restaurant is about to close, I just Google it,” Phillips said. “We get in the habit of doing these things, and then when we go hunting and fishing we might carry those habits with us.”
Google introduced its AI tool in 2024 and began offering AI-generated overviews in response to many searches. The overviews offer quick answers, but not always the right ones, and not always with the sort of granular detail embedded in hunting and fishing rules.
“When you have a 125-page big game regs booklet that covers controlled hunts and general hunts and archery seasons and muzzleloader seasons and rifle seasons,” Phillips said, “you could see where that would overwhelm AI pretty quickly.”
There is at least one AI tool meant to help people navigate the complexities of hunting regulations.
Called Scout, the tool was created by the International Hunter Education Association. It launched last year, and it lets people ask a chatbot specific questions about hunting rules and regulations in a particular state.
Users pick a state from a dropdown menu and then ask their question. The bot combs rulebooks for a given state, and then gives an answer. The answer also cites the pages the information came from. The booklet itself appears in a PDF viewer below the answer, and the site also lets users download PDFs for any state.
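Scout's internals are not public, but the behavior described (pick a state, search only that state's official rulebook, answer with page citations) matches a standard retrieve-then-answer pattern. Below is a minimal sketch of the retrieval half under those assumptions; the file name and query are hypothetical.

```python
# Minimal sketch of the retrieve-then-answer pattern described above:
# restrict search to one state's official rulebook, then return candidate
# pages so an answer can cite them. File name and query are hypothetical;
# Scout's actual implementation is not public.
from pypdf import PdfReader

def search_rulebook(pdf_path: str, query: str, top_k: int = 3):
    """Naive keyword retrieval: rank rulebook pages by query-term frequency."""
    reader = PdfReader(pdf_path)
    terms = set(query.lower().split())
    scored = []
    for page_num, page in enumerate(reader.pages, start=1):
        text = (page.extract_text() or "").lower()
        score = sum(text.count(t) for t in terms)
        if score:
            scored.append((score, page_num, text[:200]))
    scored.sort(reverse=True)
    return scored[:top_k]

# The user's state selection picks the source document, so answers can only
# come from the right rulebook: the Idaho failure modes (an answer built from
# another state's river, or from a failed proposal) are ruled out by construction.
for score, page, snippet in search_rulebook(
    "idaho_waterfowl_regs.pdf", "waterfowl season opening date"
):
    print(f"p.{page} (score {score}): {snippet}...")
```

The design choice worth noting is the constrained corpus: unlike a general web search, the tool cannot pull from the wrong state or from unpassed legislation, because those documents are never in the index.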
Getting people to use the official source is the only solution. Even the sometimes-faulty AI search results usually include a disclaimer urging people to consult regulation booklets.
No one booklet covers everything a hunter or angler needs to know. Separate publications are created each year for fishing, big game, migratory birds, turkeys and more.
They are free, though, and easy to find online and in real life.
“Any place where licenses and tags are sold,” Phillips said.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings. (Learn More)

AI Incident Roundup – June and July 2025
By Daniel Atherton
2025-08-02
Garden at Giverny, J.L. Breck, 1887

Across June and July 2025, the AI Incident Database added over sixty new incident...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The governance of the Collaborative is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more on our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.