The standard definition of a phenomenon is “something or someone that attracts attention and admiration exceptional enough to create an emotional bond.” Undoubtedly, an online phenomenon, especially on social media, is measured by its number of followers. Trolls loom as the agitators, troublemakers, and manipulation experts of the online world. While a phenomenon emanates a powerful magnetic field from a single source, trolls fire machine guns at chosen targets to expand their gravitational field. It takes only one account to become a phenomenon, whereas trolls more often require multiple fake accounts to feed their appetite for chaos.
No phenomenon, even with hundreds of thousands of followers, could be considered saintly. Such fame inevitably comes at an ethical and moral cost. Trolls, however, exist to lie, slander, and harass; without exception, they all share this sinister intent. For this reason, the online world faces an even greater menace if trolls become phenomena by gaining plausibility and appeal through deepfakes.
CNN Business warns about troll accounts that aspire to become more convincing by metamorphosing into flesh and blood through deepfake identities built with artificial intelligence (AI). It stresses that troll accounts are changing their strategy, applying deepfake technologies to expand and sustain their sphere of influence.
Troll accounts used to steal real photos
Previously, internet trolls would sometimes operate hundreds of accounts simultaneously, spewing hate and harassment on social media from fake accounts hidden behind anonymity. To appear authentic, they would use photos stolen from other users as their profile pictures. Take, for instance, Jenna Abrams, a supposedly conservative American woman. Before Twitter removed the account in 2017, it had more than 70,000 followers. In reality, a troll group working for the Russian government had managed the account, and its profile picture belonged to a 26-year-old Russian woman. When approached by CNN, the woman said she had not known her photo was being used for such a purpose. Most large social media platforms have rules against using other people’s photos in this manner, and people whose identities are compromised can file a complaint. As a consequence, trolls embraced deepfake methods to avoid being easily spotted and caught.
CNN Business reports that today’s trolls use artificially generated faces built with AI and deepfake technologies. They thus present a more plausible identity while taking the leap from photos to videos, and they avoid being reported by the rightful owners of stolen photos.
Deepfake trolls harass activists
In the aftermath of the 2016 US presidential election, which Trump won, the social media activist group Sleeping Giants formed on Twitter to campaign to stop companies from placing ads on media outlets that allowed discrimination to spread. Activist Nandini Jammi, one of the group’s founders, was no stranger to online harassment from unknown social media accounts. This time, however, the threatening tweet came not from an anonymous account but from someone called Jessica, whose profile picture showed a smiling blond woman. In the tweet, Jessica asked Jammi, “Why haven’t you cleaned your info from Adult Friend Finder?” The implication was clear: “Jessica” was claiming to have potentially embarrassing information about Jammi from an old online dating profile. Jammi told CNN Business that she had never actively used the matchmaking site.
What set “Jessica” apart from other Twitter users, however, was that the smiling woman in the account’s profile picture did not exist. Multiple experts who reviewed the image told CNN Business that it had been created with AI. Hany Farid, a professor at the University of California, Berkeley; Jeff Smith, the associate director of the National Center for Media Forensics at the University of Colorado Denver; and Siwei Lyu, a professor of computer science at the State University of New York at Albany, determined that Jessica’s profile picture had been generated by a deepfake technique known as a generative adversarial network (GAN). It later became clear that Jessica was part of a coordinated network of 50 troll accounts managed by the same person or group, all put to work harassing the activist. Experts told CNN Business that the pictures on the other accounts, like Jessica’s, depicted AI-generated individuals who did not exist.
Deepfake trolls turn to political campaigns
Nandini Jammi was not the only activist harassed by deepfake trolls. Another victim was E.J. Gibney, an independent activist and Sleeping Giants contributor. Gibney began meticulously documenting the web of accounts, including “Jessica,” behind the harassment directed at him and other activists. The username of one troll account was, in fact, the street address of the building next to Gibney’s house. Gibney reported the accounts to Twitter, which last July announced the removal of many similar accounts. When contacted by CNN Business, Twitter said it had closed nearly 50 accounts, all managed by the same person or group.
Last December, Facebook identified and halted another coordinated manipulation campaign conducted through troll accounts displaying synthetic images built with deepfake technology. Facebook announced that deepfake faces of non-existent people had been used on fake accounts in an effort to deceive other users and amplify those accounts by manipulating the system. Experts said this was the first time such synthetic deepfake images had been used as part of a social media campaign. There is talk that this coordinated troll attack may have had a political dimension, reaching as far as Trump and the Chinese government. Such instances are seen as justifying the concerns raised ahead of the US presidential election in November.
Phil Wang, a former software engineer at Uber, warns online users through his website This Person Does Not Exist. Visitors to the site come across a new, seemingly real human face created using deepfake methods with the help of AI. Wang stated that his “goal is to show people what the technology can do and vaccinate them against such future attacks.” Nathaniel Gleicher, who leads the Facebook team fighting coordinated disinformation campaigns, says that those developing deepfake technology and producing data sets need to consider how these tools may be abused by malicious users.
Who will hold deepfake trolls accountable?
There are times in the online world when social media narcissists and brazen ignoramuses, for the sake of a few hundred clicks, do not flinch from unwittingly spreading subjective and aggressive content just as efficiently as coordinated troll accounts. How is it possible to discern and deal with organized troll attacks when social media is already so tarnished and online users have become so unscrupulous? In Simone, a 2002 movie in which Al Pacino plays a Hollywood producer in a bind after his leading actress leaves the set, all of Hollywood, including the film crew, is made to believe in a non-existent actress. Who is going to question the authenticity of troll accounts once they evolve from suspicious anonymous users into more convincing and appealing profiles? What law is going to go after them once they begin spreading disinformation videos and manipulated messages, with heartfelt smiles and impressive dialogue in the audiences’ own languages, to their ever-increasing followers on YouTube, Twitter, Facebook, Instagram, and other outlets?
Deepfake trolls will never exist in the real world, no matter what they do or say, but their victims and the destruction they cause will be real.