
White House AI ‘Bill of Rights’ Could Endanger National Security

The White House Office of Science and Technology Policy has proposed guidelines for the use of artificial intelligence in its Blueprint for an AI Bill of Rights. While the plan emphasizes the fundamental rights and principles of our democracy and lists examples of the harm AI can cause, it falls short on how to put those principles into practice without hampering one of the defining dynamics of America’s high-tech economy – its innovation ecosystem.

Compared to the European Union and China, America has a fundamentally different economic relationship with its technological innovation landscape: the United States innovates, the EU regulates, and China is determined to lead. The United States far outpaces Europe in nearly every AI metric, from citations of scientific papers to venture capital dollars to business activity. Meanwhile, AI is a key area of economic and military competition between the United States and the Chinese Communist Party.

In an attempt to get ahead in this duopoly, China is investing heavily in AI and has made it a linchpin in its commercial and national security sectors. The CCP is catching up with the United States across many dimensions through the judicious deployment of government guidance funds and the whole range of national industrial policies, including Made in China 2025 (2015) and the Next Generation AI Development Plan (2017).

Legislating AI just because it’s the zeitgeist is risky for American competitiveness and national security. Following the EU’s lead into a regulatory quagmire could hamper the speed of U.S. innovation in AI, limiting the country’s ability to compete with China in the economic and military arenas.

The OSTP document has four glaring flaws: its treatment of fair algorithms, data privacy, regulatory burden, and overly broad and vague definitions.

Fair algorithms

How do you define a “fair algorithm”? The plan focuses on protection against algorithmic discrimination but does not provide a proper empirical definition. Fairness as an abstract concept seems easy to grasp, but as a quantitative definition it is much more elusive. Princeton computer scientist Arvind Narayanan has highlighted 21 different approaches to defining fairness. Simply put, how can automated systems be assessed, and OSTP guidelines implemented, without a clear target metric?
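To make the measurement problem concrete, here is a hedged sketch (synthetic data and toy classifier outputs, not any real system) showing two common quantitative fairness definitions, demographic parity and equal opportunity, disagreeing on the very same predictions:

```python
# Illustrative only: the same predictions can look "fair" under one
# metric and unfair under another, which is why a single empirical
# definition of a "fair algorithm" is hard to pin down.

def demographic_parity_gap(y_pred, group):
    # Difference in positive-prediction rates between the two groups.
    rate_a = sum(p for p, g in zip(y_pred, group) if g == 0) / group.count(0)
    rate_b = sum(p for p, g in zip(y_pred, group) if g == 1) / group.count(1)
    return abs(rate_a - rate_b)

def equal_opportunity_gap(y_true, y_pred, group):
    # Difference in true-positive rates between the two groups.
    def tpr(g):
        pos = [(t, p) for t, p, gg in zip(y_true, y_pred, group)
               if gg == g and t == 1]
        return sum(p for _, p in pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Toy labels, predictions, and group membership (0 or 1).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))         # 0.0: equal positive rates
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5: unequal TPRs
```

Here both groups receive positive predictions at the same rate (demographic parity holds exactly), yet qualified members of one group are correctly identified twice as often as the other. A regulator auditing against one metric would pass this system; auditing against the other would fail it.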

In contrast, expert opinion holds that China’s foray into AI regulation, the Internet Information Service Algorithmic Recommendation Management Provisions (2022), will place few real restrictions on the use of AI. While this policy has superficial similarities to EU law, its constraints will not apply to the Chinese government. As Russell Wald, policy manager at Stanford’s Institute for Human-Centered Artificial Intelligence, has argued, this “regulation [is] oriented towards the benefit of the regime.”

Data Privacy

The White House suggests that companies allow users to withdraw their consent to the use of their data. When a user does, the White House advises companies to remove that data from any machine learning models built from it.

Retraining all AI models across all products and services every time a user requests it is economically unfeasible. Would that mean companies like Amazon, Netflix, and social media platforms would have to rebuild their recommendation systems every time someone deletes their data? If so, would the same be true for retailers like Walmart that use personal data to optimize supply chains and inventory? On this point, the plan is not clear. The economic and operational impact of these questions is potentially enormous, and mismanagement could allow Chinese competitors to outpace American companies.


Regulatory burden

Given the growing pervasiveness of AI and automated systems, the OSTP blueprint would impose an onerous burden on large swathes of the existing economy, hampering innovation. AI startups could take years, rather than weeks or months, to get off the ground. Given languishing U.S. federal government R&D spending (about a third of Cold War levels), this has direct implications for the U.S. national security innovation ecosystem.

Vague definitions

The OSTP definition of automated systems includes “any system, software, or process that uses computation in whole or in part to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities.” What modern electronic product or service is not covered by this definition?

This would require a dystopian nightmare of pre-deployment data-use interviews, community input, pre-deployment testing and evaluation, ongoing monitoring and reporting, independent evaluation, opt-out and deletion of data, and timely human alternatives. This would create a huge burden, not just for the tech industry, but for any sector touched by AI or “automated” systems. It would essentially de-automate automation, at extremely high administrative and economic cost.

Privacy-preserving machine learning

The goals of the White House plan are lofty. AI should be used responsibly, in a way that benefits society and guards against the illiberal purposes to which China is deploying the technology. That said, rather than achieving this through procedural controls, the U.S. government could promote non-regulatory protections such as privacy-preserving machine learning, or PPML. This broad class of approaches, including synthetic data generation, differential privacy, federated learning, and edge processing, would address some of the blueprint’s core concerns without slowing the pace of innovation.
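As a rough illustration of one PPML technique named above, differential privacy, the sketch below answers a simple counting query with calibrated Laplace noise, so that no single person’s record can meaningfully change the released answer. The data, query, and parameter choices (epsilon, sensitivity) are illustrative assumptions, not a production design.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Laplace noise with scale = sensitivity / epsilon yields
    # epsilon-differential privacy for a query of that sensitivity.
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Toy dataset and a counting query (sensitivity 1: adding or removing
# one person changes the count by at most 1).
ages = [34, 45, 29, 62, 51]
true_count = sum(1 for a in ages if a > 40)
private_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5)
print(private_count)  # noisy answer near 3; exact value varies per run
```

The appeal for policymakers is that the privacy guarantee is mathematical rather than procedural: it holds regardless of what an attacker already knows, without requiring the pre-deployment review apparatus the blueprint contemplates.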

PPML certainly does not address all of the issues highlighted in the Blueprint for an AI Bill of Rights, but it provides a plausible alternative to legislation that might otherwise hamstring America on AI. In doing so, it can serve as a model for non-regulatory mechanisms that protect the public while mitigating national security risks to AI innovation in our continued competition with China.

If the Biden administration is serious about demonstrating thought leadership that matches American tech leadership, it must go beyond idealistic principles. Otherwise, the next generation of advanced AI and automated systems will be built by our biggest competitors.

Jonah Cader is a graduate of Stanford University’s Institute for Human-Centered Artificial Intelligence. As a management consultant at McKinsey & Company, he worked between the United States and China, leading strategic projects for companies in the high-tech value chain.

Have an opinion?

This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond or would like to submit your own op-ed, please email C4ISRNET and Federal Times Senior Editor Cary O’Reilly.

