United States President Donald Trump has signed an executive order tasking federal agencies with increasing their research and development focus on relevant use cases for artificial intelligence. The order essentially tells federal entities to prioritize AI in their research and development efforts and, more importantly, opens up government data stores for that purpose.
The order itself rests on five key tenets. The first is that the United States must pursue and achieve technological breakthroughs across a multitude of AI research areas and use cases, from academia to industry. Second, the United States must work as a unit to develop new technical standards in AI and break down barriers to adoption. Third, there must be an increased emphasis on education and training surrounding AI. Fourth, there must be further efforts to "foster public trust" in AI technologies. Finally, the mandate states that the United States must innovate and open up opportunities for itself on the international stage, without allowing competitors to get their hands on our technology.
A big part of the aim of this mandate, as per its name, is to keep the United States on top in AI development and deployment. It also seeks to increase collaboration between sectors on AI developments, lower barriers to development and deployment of AI technologies, and establish an AI action plan for the country, complete with relevant budgets. Critically, the mandate also states that government data availability for AI development and training must be increased.
The move is rife with potential privacy concerns, for obvious reasons. Essentially, federal departments are being given access to almost any data relevant to their core functions for the purpose of AI training and development. This could hypothetically enable the Census Bureau to train an AI to predict religious demographics per metropolitan area, or, in a far more alarming scenario, allow Homeland Security to build a predictive AI that processes data on past terrorism incidents and domestic terrorism trends in an attempt to anticipate new attacks.
The United States has had a heavy interest in the development of artificial intelligence for quite some time, and has partnered with big names in Silicon Valley in the past to accomplish its aims in this field. One example is Project Maven, a Pentagon project for sorting through drone footage that was being worked on with Google until Google withdrew amid employee protests.
This mandate empowers government entities to use relevant data at their disposal to create AI tools that both make their existing functions easier and enable new functions within their wheelhouse. Given the mandate's call to focus R&D efforts on AI solutions over other possibilities, government departments can now develop AI tools with or without Silicon Valley's help.
This comes amid heavy AI development of all sorts in China. Though China's AI use cases are more in line with the country's cultural and governmental values, such as mass surveillance operations, the fact that China is trying to pull ahead in AI could spell trouble for the United States. This development seems to have turned the AI space into an arms race of sorts. This is especially true of military operations, a field of AI research that has seen many warnings and ethical recommendations, but little international lawmaking thus far.
The United Nations has made no secret of its efforts to regulate AI research aimed at the battlefield. While using AI in warfare could drastically reduce loss of human life and even collateral damage, it could also be misused, malfunction, or otherwise cause problems. The aim is a Geneva Convention-esque global agreement regarding AI in battle, but that hasn't happened yet.
There are worrying implications for privacy here, as well as the usual caveat that AI can, and usually does, fail in unpredictable ways, especially early on. Even so, the government has taken pains to assure the public that all AI research will be used for the betterment of the United States and its standing on the world stage.
One clear positive of this development is that the US is now working on drafting ethics legislation for AI, a venture that's sure to attract outside help from security experts, tech bigwigs, and more. Putting limits on how AI can be used, and on what kinds of breaches or trade-offs are acceptable, is nothing but good news as the technology's inevitable march threatens to skirt privacy and ethical concerns in novel ways.