Inside Google and Meta's Arms Race to Create the Most Deceptive AI

Illustration by Luis G. Rendon/The Daily Beast

If you’ve never played Diplomacy before, I wouldn’t recommend starting, because it will consume your life. The game is typically a seven-player affair that involves a lot of negotiation, persuasion, and alliance-building, not to mention a healthy dose of deception, all in order to gain and control territory on a map of Europe on the eve of the First World War.

But there are countless other versions of the game, some of which feature dozens of players on a world-sized map. Each player fights for power with the ultimate goal of capturing enough territory to win, or of simply surviving long enough to negotiate a draw. These matches can get very messy, very quickly, making them the perfect game for the sick and depraved.

And, ultimately, it’s also a great game for training AI to negotiate, cooperate, and even deceive. The most recent effort comes from researchers at Google’s DeepMind AI research lab, who published a study Dec. 6 in the journal Nature Communications on a new approach to teaching bots to play Diplomacy. The authors claim that this method allows for better communication between AI “agents” while encouraging cooperation and honesty.

“We view our findings as a step towards evolving flexible communication mechanisms in artificial agents and allowing agents to mix and match their strategies to their environment and peers,” the authors wrote.

One of the high-level insights the researchers gleaned from the experiment was that AI agents were able to build honesty into negotiations by punishing those who broke deals and lied about what they would do. They found that “responding negatively to broken contracts allows agents to enjoy increased cooperation while resisting deviations.”

So, as is the case with history and poetry, the deepest circle of AI hell is always reserved for traitors.

Beyond being able to dominate us in a spirited game of Diplomacy, AI trained in this way could potentially be used to help us solve complex problems. After all, AI is already being used to do everything from automating manufacturing to plotting efficient shipping routes for the transportation industry. But if AI can also find solutions to less black-and-white problems like negotiations and compromises, it could help do things like draft contracts or even broker political deals.

DeepMind’s AI is just the latest in a long line of strategy-game bots, including Meta’s own Diplomacy-playing AI announced in November and DeepMind’s recently unveiled Stratego bot. AI has a long history with gaming, dating back to Deep Blue, IBM’s famous supercomputer that defeated chess grandmaster Garry Kasparov in a series of heated matches in 1996 and 1997. Bots have only become more sophisticated since, learning to defeat humans in a variety of games that require strategy and deception.

“AI tricking humans is not a new phenomenon,” Vincent Conitzer, an AI ethics researcher at Carnegie Mellon University, told The Daily Beast. “AI became superhuman in poker before Diplomacy.”

Conitzer explained that perhaps the most important thing about bots playing Diplomacy is that the game requires the use of natural language. Unlike chess or poker, there is often no clear solution or objective. Just like in real life, you have to make deals and compromises with other players. That presents a much more complex set of considerations a system must weigh in order to make a decision.

This also means that AI models must take into account whether or not someone is lying to them, and whether they should be deceptive in return.

A bot can’t lie the way we usually define lying; a bot won’t just give a wrong answer to a question unless it’s designed to. By definition, lying requires an intent to deceive, and bots can have intentions. After all, they are designed by humans to perform specific functions, and deception can be part of that functionality.

“It doesn’t understand the full social context of the lie, and it understands what it’s saying, at best, in a limited way,” Conitzer said. “But to us, AI systems using language strategically may seem more ominous.”

He’s not alone in that thinking. “Introducing an explicitly deceptive model might not introduce as much new ethical territory as you might think, simply because there isn’t a lot of intentionality to begin with,” Alexis Elder, an AI ethicist at the University of Minnesota Duluth, told The Daily Beast. However, she echoed Conitzer’s sentiment that convincing and deceptive AI “seems potentially quite disturbing.”

On top of all the ethical concerns surrounding deceptive AI is the fact that it’s being funded, researched, and pushed by some of the most powerful and wealthy tech companies in the world, namely Meta and Alphabet. Both companies have a sordid track record when it comes to AI, particularly around biased and racist behavior. Meta, for example, has a history of racist, sexist, and otherwise biased bots. Alphabet came under fire in 2015 after Google Photos labeled dozens of photos of Black people as gorillas.

It’s no surprise that these concerns resurface when it comes to developing a bot that can use language to deceive and coerce as well. What happens when a bot is used to negotiate an unfair contract between a boss and their employees, or a landlord and their tenants? Or if it were weaponized by a political party to disenfranchise people of color by drawing electoral districts that don’t accurately reflect the population? Of course, this may not be a reality yet, but unless there are clear regulations on what these bots can and cannot do, the path is there.

This all serves as a good lesson in something we learn over and over again: take everything an AI tells you with a big grain of salt.

“If nothing else, it’s an important reminder that text produced by AI systems isn’t necessarily true,” Conitzer said. “This is true even if the system isn’t intended to mislead. Large language models such as OpenAI’s GPT-3 and even Meta’s science-focused Galactica produce spurious text all the time, not because they’re designed to mislead, but because they simply produce text that seems probable, without a deep understanding of what the text is about.”

For now, though, we just have bots getting better at games. While they might not be able to go full HAL 9000 and totally manipulate us (yet), they might be able to best us in a game of Diplomacy, and honestly, that might be just as bad.
