In 2018, FireEye, a cybersecurity company based in California, alerted Facebook and Google to a large network of fake Iranian social media accounts running influence campaigns aimed at the U.S. public. Using back-end data, Facebook and Google identified the accounts, along with associated fake YouTube channels and blogs, and removed them.
“Right now, you know something’s automated just by the sheer volume of content pushing out,” says Lee Foster, information operations manager at FireEye. “It’s not possible for a human to do this, so it’s clearly not organically created. Often you’ll see automated retweeting by some list of accounts just to boost out a message.”
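The volume heuristic Foster describes can be sketched in a few lines. The helper below is a hypothetical illustration, not FireEye's method: it counts posts per account per hour and flags any account whose rate is implausibly high for a human. The function name, the threshold, and the sample data are all invented for this example.

```python
from collections import Counter
from datetime import datetime

def flag_high_volume(posts, max_per_hour=30):
    """posts: list of (account, timestamp) tuples; returns flagged accounts.

    A minimal volume heuristic: any account exceeding max_per_hour posts
    within a single clock hour is flagged as likely automated.
    """
    hourly = Counter()
    for account, ts in posts:
        # Bucket each post into its clock hour.
        hourly[(account, ts.replace(minute=0, second=0, microsecond=0))] += 1
    return sorted({acct for (acct, _), n in hourly.items() if n > max_per_hour})

# Example: one account posting 40 times in a single hour gets flagged;
# an account with a single post does not.
posts = [("bot_account", datetime(2018, 8, 21, 14, i % 60)) for i in range(40)]
posts += [("human_account", datetime(2018, 8, 21, 14, 5))]
print(flag_high_volume(posts))  # -> ['bot_account']
```

Real detection systems combine many such signals (posting cadence, retweet graphs, account age), but raw volume is the simplest and, per Foster, currently the most telling.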
But the situation is about to take a new turn, he claims, as artificial intelligence (AI) systems that conceal their automated origins are now available.
“Imagine having a capability out there that can automate the organic creation of original content effectively enough that it looks real, but you don’t even have to have it operate or touch it,” Foster says.
Other analysts share the concern. The Future of Political Warfare report forecasts:
“In the very near term, the evolution of AI and machine learning, combined with the increasing availability of big data, will begin to transform human communication and interaction in the digital space. It will become more difficult for humans and social media platforms themselves to detect automated and fake accounts, which will become increasingly sophisticated at mimicking human behavior.”
The report further notes that AI will be capable of targeting people with deeply personal messages and manipulating their emotions better than any human could. So what can be done about it? One answer is to fight fire with fire: train a new AI to catch the fake AI, a war between algorithms over what is real.
The big data held on the back end of social media platforms is a double-edged sword in this war. It helps Facebook, Twitter, and Google use AI to police the content posted on their platforms, but it can also fall into malicious hands.
“Social media companies can tweak their algorithms to better detect disinformation campaigns or other forms of manipulation (and they have begun to do so), but the underlying systems and revenue models are likely to stay the same,” the report says.
To make matters worse, it is difficult even for people to distinguish between “fake” and “real” news, which raises the question of how an AI can be trained to learn what is fake. Foster suggests that detection AI will follow the lead of human researchers: inferring the motivations behind coordinated social media pushes.
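What "training an AI to learn what is fake" means in practice can be illustrated with a toy supervised classifier. The sketch below, assuming a labeled corpus of campaign versus organic posts (the four sample texts are invented for illustration), trains a minimal naive Bayes model from scratch; it stands in for the far larger models platforms would actually use.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal bag-of-words naive Bayes classifier with Laplace smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        scores = {}
        for label, n in self.label_counts.items():
            # Log prior for the class.
            score = math.log(n / sum(self.label_counts.values()))
            total = sum(self.word_counts[label].values())
            for w in text.lower().split():
                # Laplace smoothing so unseen words don't zero out a class.
                score += math.log((self.word_counts[label][w] + 1)
                                  / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Invented toy corpus: two "campaign" posts, two "organic" posts.
texts = ["share retweet amplify now", "boost this message retweet",
         "had coffee with friends", "enjoying the weekend hike"]
labels = ["campaign", "campaign", "organic", "organic"]
model = NaiveBayes().fit(texts, labels)
print(model.predict("retweet and amplify this"))  # -> campaign
```

The hard part, as the article notes, is not the model but the labels: if humans cannot reliably say which posts are fake, the training data itself is suspect.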
Still, humans will remain somewhere in the loop, whether during the AI's training or after the AI has flagged something suspicious. Using AI as a tool to recognize these fake campaigns may be essential, but the machine's computational power will likely still need the judgment of a human brain to expose covert influence operations.