A new chatbot, similar to ChatGPT, can turn text into celebrity voices, creating "deepfakes" in the style of Morgan Freeman, Jordan Peterson, Donald Trump and many more.
NoiseGPT can even be trained by users to imitate their own voice, or those of their friends, family members or work colleagues.
Imagine getting a happy-birthday voice message from your favourite US president, or a voice from beyond the grave, with John Lennon or Elvis sharing personal information that only your closest relatives know.
This is the selling point of the newest chatbot application to be released following the much-hyped launch of the Microsoft-backed (MSFT) artificial intelligence content generator ChatGPT in November 2022.
NoiseGPT's chief operating officer Frankie Peartree told Yahoo Finance UK: "We are training the AI to mimic around 25 celebrity voices at the moment, and will soon have 100 plus celebrity voices to offer."
NoiseGPT was released on Telegram on Monday, allowing users to send friends messages spoken in the voices of well-known celebrities.
Peartree said instructions on how to train the app to use your own voice will soon be available on the company's website.
The app can be used on any smartphone that can download the Telegram messaging application, boosting its potential for mass adoption.
The prospect of AI applications imitating your own voice, or that of your friends, or anyone you can obtain a voice sample from, has raised concerns, such as children receiving messages that imitate a parent's voice.
Deepfakes are not, in themselves, technically illegal in any jurisdiction. However, their potential to create mistrust, suspicion and manipulation is a concern.
NoiseGPT said its app will attempt to sidestep the personal and intellectual property rights issues that deepfake technology raises: the celebrity voices users can have their text spoken in will be labelled "not Donald Trump" or "not Jennifer Lawrence", in an effort to avoid infringement claims.
Is society on the verge of plunging into deepfake chaos?
Peartree thinks it won't all be bad. He told Yahoo Finance UK: "I think it's a good thing, it will cause some chaos in the start, but in the end we will find a balance. This was also the concern when Photoshop came out for example."
He added that, in light of the legal implications, mitigating censorship risk is being factored into the application's design: rather than being stored on a centralised server, it will use blockchain-based decentralised storage.
"Legal issues are one of the reasons why we will decentralise fast, for the training as well as the API connection, so we cannot be censored," he said.
The decentralised nature of the application means the computational burden of running it will be shared among computers across the world, which "will run the models, training and API feed from people's homes". Running the programme on your home computer will be rewarded with NoiseGPT cryptocurrency tokens.
Peartree said: "People that create new popular voices for the app will also be rewarded in the cryptocurrency.
"There is currently a 5% tax on each transaction with this cryptocurrency, but this will be removed in future. All funds are used for development/operations, and they were not team tokens and the whole supply was publicly sold."
Legal and societal implications of deepfake technology
The ability to manipulate the human voice could challenge the veracity of the information we receive online and through our phones, and cast doubt on the personal communications we receive on messaging apps.
This also has implications for relations between nation states, and the way the technology could be used to influence rivals and sway public opinion.
Policymakers are now working to mitigate the risks of deepfakes, but current UK laws have yet to catch up.
These laws only cover the distribution of real images, specifically in cases such as revenge porn, where private and confidential explicit material is shared publicly by an ex-partner.
If an offender creates and shares deepfake material that places the identity of their "target" in pornographic content, they can only face prosecution if they directly harass the target by sending them the material, or if the offence involves copyright infringement.
The legal and wider societal implications of deepfake technology could extend to:
- Infringement of intellectual property rights — deepfake technology can be used to impersonate someone who owns intellectual property, potentially violating their rights.
- Violation of personal rights — deepfakes can be used to create exploitative or pornographic content, infringing on an individual's privacy and personal rights.
- Damage to reputation — deepfakes can spread false information and harm a person's reputation, potentially leading to consequences in their personal and professional life.
- Compromise of data protection and privacy — deepfakes can threaten an individual's privacy and data protection, making them vulnerable to identity theft and other forms of cybercrime.
- Disruption of political agendas — deepfakes can be used to manipulate public opinion, especially during times of heightened political tension such as elections.
- Spread of misinformation — deepfakes can be used to spread false information and lead to a general distrust of news sources, individuals, and institutions.
- Liability concerns — the use of deepfakes in marketing and other promotional materials can lead to liability concerns if consumers are misinformed or misled.
- Threat to national security — deepfakes can cause geopolitical tensions and pose a threat to national security if they are used to spread false information or manipulate public opinion.
Deepfakes are becoming highly realistic and, as the technology advances, online video and audio communication could become increasingly difficult to trust.