Planned and targeted proliferation of fake news is both cheaper and potentially more dangerous than a nuke, according to Jedidiah Yueh, founder and Executive Chairman of Menlo Park, California-based data technology company Delphix. In an interview with AndroidHeadlines, Mr. Yueh reflected on recent controversies stemming from attempts by certain Russian agents to meddle in the 2016 United States presidential election, a subject that Facebook CEO Mark Zuckerberg repeatedly referenced during this week's congressional hearings over the Cambridge Analytica scandal, where he once again apologized for Facebook's omissions and its failure to adequately police its platform against abuse by malicious foreign actors.
"Warfare has fully moved into technology now and even warfare has been disrupted by Silicon Valley," according to the technology industry veteran, who points to the digitization of warfare as one of the most concerning trends of the 21st century. "Its [digital warfare's] costs are a fraction of those attached to traditional warfare; a boiler room designed to manipulate social media can have a bigger impact than a nuke and is also significantly cheaper to realize," Mr. Yueh said. His warnings largely echo those of United Nations Secretary-General António Guterres, who recently called for an international treaty to regulate digital warfare, serving as something akin to a Geneva Convention for military applications of artificial intelligence and other emerging, potentially highly dangerous technologies. Even outside the scope of fake news and general social media manipulation, malware capable of shutting down a nation's energy grid could prove more devastating than any conventional physical attack, top EU officials warned earlier this year, also concluding that NATO isn't ready to encounter AI on the battlefield.
In mid-February, Special Counsel Robert Mueller indicted thirteen Russian nationals and three Russian entities for illegally attempting to interfere in the last U.S. presidential race with the goal of manipulating its democratic process and hence attacking the country's sovereignty. Mr. Mueller's investigation into the matter is still ongoing and may result in more charges being pressed later this year. Facebook itself has recently been committing significant resources to combating the proliferation of misleading and inaccurate information meant to push a malicious agenda on its social network, but has so far had limited success with employing more human content reviewers and making its AI algorithms take a stricter approach to assessing the validity of information posted online.
Those challenges are believed to have at least partially prompted Facebook's mid-January decision to revamp its News Feed and purge it of the majority of content created by Pages. By minimizing the number of posts from publishers, Facebook also minimized the volume of fake news that goes viral on its platform, even if the reach of legitimate media outlets plummeted in the process. The company is now trying to help trusted publishers reach their audiences despite the recent changes by automatically promoting local news in people's feeds, having also vowed to introduce more similar media-friendly features in the future.