Associated Incidents

A British company's AI technology has been hijacked to create a pro-Chinese news channel featuring a pair of computer-generated presenters.
Wolf News had been building a profile on social media but was exposed this month as being linked to a political spam operation using tools developed by a London-based AI specialist. Its videos featured a highly realistic pair of "news anchors", Jason and Anna, who were subsequently unmasked as being computer generated.
In one clip the US government was accused of a failure to tackle gun violence while a second stressed the importance of Sino-American co-operation to aid the recovery of the global economy. Both featured branding from the seemingly fictitious Wolf News, with a logo of a wolf silhouette and the tagline "focus on hot spots and broadcast in real time".
Graphika, the social media experts behind the discovery, claimed that the videos were the work of a pro-Chinese influence operation known as Spamouflage, which operates across Facebook, Twitter and YouTube.
Graphika raised the alarm in a nine-page report, Deepfake It Till You Make It. "This was the first time we observed a state-aligned operation promoting footage of AI-generated fictitious people," it said.
While Wolf News's videos were not widely shared, the fear is that misuse of such tools could balloon as the technology becomes more widely available and increasingly convincing.
A second crop of videos has since been discovered targeting online conversations in Burkina Faso. The discovery of Wolf News has prompted fears that rapidly evolving AI technology will make it increasingly easy to fool the public.
William Bartlett, executive creative director at Framestore, which specialises in virtual special effects, has created deepfake versions of Queen Elizabeth II, Boris Johnson and Donald Trump. He said that he was becoming "very worried" about the misuse of AI-generated video.
"In the wrong hands the technology has the potential to corrode the public's trust in institutions and governments and undermine how society functions," he said. "I'm surprised that it's taken so long for this kind of thing to happen. It's becoming easier and easier to make."
Bartlett predicted that such videos would gain traction as they started to feature deepfake versions of famous people, indistinguishable from the real thing. Rapid developments in AI technology will make it possible for a person's gestures, body language and voice, as well as their face, to be easily cloned and manipulated based on a single image.
"Deepfakes will become 100 per cent, totally realistic," he added. "And we instinctively believe in fake videos more than images because people haven't become aware of just how easy they are to make. So they maintain a great deal of credibility."
The tools used by Spamouflage have been linked to Synthesia, an AI specialist founded in 2017 by a team of researchers and entrepreneurs from UCL, Stanford and Cambridge University.
Claiming to be the world's top-rated AI video software provider, Synthesia offers a service that can be accessed from £23 per month and has attracted more than 30,000 users, who have created a multitude of videos.
High-profile customers include the NHS, Ocado and the advertising giant WPP, which have used it to improve staff training. The charity Malaria No More worked with Synthesia to create a deepfake David Beckham. The football star raised awareness of the disease by speaking the words of actors in nine different languages.
Synthesia has banned the user behind the Wolf News channel for breaching its guidelines, which prohibit the use of its tools to create "political, sexual, personal, criminal and discriminatory content".
Victor Riparbelli, co-founder of Synthesia, apologised and condemned the activity. "We invested very early into content moderation to ensure our tens of thousands of customers benefit from a safe platform," he said.
"It pains us to see people misuse the product we built to help benefit society — this was never our intention. However, we won't let the minority ruin the good AI has to offer and endless positive opportunities it opens up for us all."
He said that "99.9 per cent of bad actors" attempting to use the service were caught. However, the Wolf News videos slipped through its content moderation system because of the difficulty of detecting problematic content that does not fall into its "red category" of hate speech and slurs.
"The grey areas are much harder to detect," he said. "If a user is creating content around cryptocurrencies, are they educating people about blockchain technology or behind a get-rich-quick scheme?"
Riparbelli called for existing laws to be updated for the digital age and urged all AI providers to go further in preventing misuse. "The right regulation will help curtail misuse and incentivise companies to safeguard powerful technology," he said. "But it will also ensure that powerful AI tools are accessible for the public to use and interact with."