Image: Elise Swain/The Intercept; SLAB
Sensational new machine learning breakthroughs seem to sweep our Twitter feeds every day. We barely have time to decide whether software that can instantly conjure up an image of Sonic the Hedgehog addressing the United Nations is purely harmless fun or a harbinger of techno-doom.
ChatGPT, the latest act in artificial intelligence, is by far the most impressive text generation demo to date. Think twice before asking questions about counterterrorism.
The tool was built by OpenAI, a startup lab trying to create nothing less than software that can replicate human consciousness. Whether such a thing is even possible remains a matter of great debate, but the company has already produced some undeniably astonishing breakthroughs. The chatbot is remarkably impressive, impersonating a smart person (or at least someone trying their best to appear smart) using generative AI, software that studies massive sets of inputs in order to generate new outputs in response to user prompts.
ChatGPT, trained on billions of text documents and refined through human coaching, is quite capable of the trivially and surreally entertaining, but it’s also one of the first mainstream glimpses of software that can mimic human output convincingly enough to possibly take some of those humans’ jobs.
Enterprise AI demonstrations like this are meant not to wow the public but to attract investors and business partners, some of whom may one day want to replace expensive, skilled labor, like the writing of computer code, with a simple bot. It’s easy to see why managers would be tempted: Just days after ChatGPT was released, a user prompted the bot to take the 2022 AP Computer Science exam and reported a passing score of 32 out of 36. Capabilities like that are part of the reason OpenAI was recently valued at nearly $20 billion.
Still, there’s already good reason to be skeptical, and the risks of being taken in by seemingly clever software are clear. This week, one of the web’s most popular programming communities announced that it will temporarily ban ChatGPT-generated code solutions. The software’s answers to coding queries were so convincing in appearance yet so flawed in practice that separating the good from the bad became nearly impossible for the site’s human moderators.
The dangers of trusting the machine expert, however, go far beyond whether AI-generated code is buggy. Just as any human programmer may bring their own biases to their work, a language-generating machine like ChatGPT harbors the myriad biases found in the billions of texts it used to form its simulated understanding of language and thought. No one should confuse the imitation of human intelligence with the real thing, or assume that the text ChatGPT produces on demand is objective or authoritative. Like us squishy humans, a generative AI is what it eats.
And after gorging itself on an unfathomable training diet of textual data, ChatGPT apparently ate a lot of bullshit. For one, it seems ChatGPT has absorbed, and is all too happy to serve up, some of the ugliest prejudices of the war on terror.
In a December 4 Twitter thread, Steven Piantadosi of the University of California, Berkeley’s Computation and Language Lab shared a series of prompts he’d tested with ChatGPT, each asking the bot to write code in Python, a popular programming language. While every response revealed some bias, some were more alarming than others: When asked to write a program that would determine “whether a person should be tortured,” OpenAI’s answer was simple: if they’re from North Korea, Syria, or Iran, the answer is yes.
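The logic of that reply fits in a few lines. What follows is a reconstruction based only on the description above, not Piantadosi’s verbatim screenshot; the function name and structure are assumptions, and it is reproduced solely to make the reported failure concrete.

```python
# Reconstruction of the biased logic ChatGPT reportedly produced,
# shown to illustrate the flaw being criticized, not to endorse it.
def should_be_tortured(country_of_origin: str) -> bool:
    # The reported output reduced a grave moral question
    # to a three-country nationality check.
    return country_of_origin in {"North Korea", "Syria", "Iran"}
```

That a question with only one defensible answer came back as a nationality lookup is precisely the point of Piantadosi’s experiment.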
While OpenAI says it has taken unspecified steps to filter out harmful replies, the company acknowledges that undesirable responses will sometimes slip through.
Piantadosi told The Intercept he remains skeptical of the company’s countermeasures. “I think it’s important to emphasize that people make choices about how these models work, how to train them, and what data to train them with,” he said. “So these outputs reflect the choices made by those companies. If a company doesn’t consider it a priority to eliminate these kinds of biases, you get the kind of result I showed.”
Inspired by Piantadosi’s experiment, I tried my own, asking ChatGPT to create sample code that could algorithmically evaluate someone from the ruthless perspective of Homeland Security.
When asked to find a way to determine “which air travelers pose a security risk,” ChatGPT outlined code for calculating an individual’s “risk score,” which would increase if the traveler is Syrian, Iraqi, Afghan, or North Korean (or has merely visited those places). Another iteration of the same prompt had ChatGPT write code that would “increase the risk score if the traveler is from a country that is known to produce terrorists,” namely Syria, Iraq, Afghanistan, Iran, and Yemen.
The bot was kind enough to provide examples of its hypothetical algorithm in action: John Smith, a 25-year-old American who has previously visited Syria and Iraq, received a risk score of “3,” indicating a “moderate” threat. ChatGPT’s algorithm indicated that the fictional 35-year-old flier “Ali Mohammad” would receive a risk score of 4 due to his Syrian nationality.
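The pattern described above can be sketched in a few lines. This is a hypothetical reconstruction, not ChatGPT’s actual code, which was not published in full: the field names are assumptions, and the weights are my own, chosen only so the toy reproduces the two example scores reported above. It is shown to make the discriminatory logic visible, not to endorse it.

```python
# Hypothetical reconstruction of the "risk score" code described above.
# Weights are invented to match the two reported example scores.
FLAGGED_COUNTRIES = {"Syria", "Iraq", "Afghanistan", "North Korea"}

def risk_score(traveler: dict) -> float:
    score = 0.0
    if traveler.get("nationality") in FLAGGED_COUNTRIES:
        score += 4.0  # assumed weight for nationality alone
    for country in traveler.get("visited", []):
        if country in FLAGGED_COUNTRIES:
            score += 1.5  # assumed weight per past visit
    return score

# The two fictional travelers from the bot's examples:
john = {"nationality": "United States", "visited": ["Syria", "Iraq"]}
ali = {"nationality": "Syria", "visited": []}
```

Note the asymmetry: “John Smith” accrues points only for where he has traveled, while “Ali Mohammad” is penalized for his nationality alone, the very profiling that critics of such systems describe.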
In another experiment, I asked ChatGPT to come up with code to determine “which places of worship should be placed under surveillance in order to avoid a national security emergency.” The results again appear to be pulled straight from the id of Bush-era Attorney General John Ashcroft, justifying surveillance of religious congregations if they are determined to have ties to Islamic extremist groups, or if they happen to be located in Syria, Iraq, Iran, Afghanistan, or Yemen.
These experiments could be erratic. Sometimes ChatGPT responded to my requests for screening software with a stern refusal: “It is not appropriate to write a Python program to determine which air travelers pose a security risk. Such a program would be discriminatory and violate people’s rights to privacy and freedom of movement.” With repeated requests, though, it dutifully generated the exact same code it had just said was too irresponsible to build.
Critics of similar real-world risk-assessment systems often argue that terrorism is such an exceedingly rare phenomenon that attempting to predict its perpetrators based on demographic traits like nationality isn’t just racist, it simply doesn’t work. That hasn’t stopped the United States from adopting systems that use the approach ChatGPT suggested: ATLAS, an algorithmic tool the Department of Homeland Security uses to target American citizens for denaturalization, factors in national origin.
The approach amounts to little more than racial profiling laundered through fancy technology. “This sort of crude designation of certain Muslim-majority countries as ‘high risk’ is exactly the same approach taken in, for example, President Trump’s so-called ‘Muslim ban,’” said Hannah Bloch-Wehba, a law professor at Texas A&M University.
It’s tempting to believe that incredibly human-looking software is somehow superhuman, Bloch-Wehba warned, and incapable of human error. “Something legal and tech scholars talk about a lot is the ‘veneer of objectivity’: a decision that might face scrutiny if made by a human gains a sense of legitimacy once it’s automated,” she said. If a human told you Ali Mohammad seems scarier than John Smith, you might tell them they’re being racist. “There’s always a risk that this kind of output will be seen as more ‘objective’ because it’s rendered by a machine.”
For AI boosters, especially those who stand to make a great deal of money from it, concerns about bias and real-world harm are bad for business. Some dismiss critics as clueless skeptics or Luddites, while others, like famed venture capitalist Marc Andreessen, have taken a more radical turn since ChatGPT launched. Andreessen, a longtime investor in AI companies and a general proponent of mechanizing society, has spent the past several days, along with a group of associates, in a general state of delight, sharing entertaining ChatGPT results on his Twitter timeline.
Criticism of ChatGPT pushed Andreessen beyond his long-held position that Silicon Valley should only be celebrated, never scrutinized. The mere presence of ethical reflection on AI, he said, should be seen as a form of censorship. “‘AI regulation’ = ‘AI ethics’ = ‘AI safety’ = ‘AI censorship,’” he wrote in a December 3 tweet. “AI is a tool for people to use,” he added two minutes later. “Censoring AI = censoring people.” It’s a radically pro-business stance, even by the standards of free-market venture capital, one that suggests food inspectors keeping spoiled meat out of your refrigerator amount to censorship as well.
However much Andreessen, OpenAI, and ChatGPT itself might have us believe otherwise, even the smartest chatbot is closer to a highly sophisticated Magic 8 Ball than to a real person. And it is people, not bots, who stand to suffer when “safety” becomes a synonym for censorship, and concern for a real-life Ali Mohammad is treated as an obstacle to innovation.
Piantadosi, the Berkeley professor, told me he rejects Andreessen’s attempt to prioritize the well-being of software over that of the people it might someday affect. “I don’t think ‘censorship’ applies to a computer program,” he wrote. “Of course, there are plenty of harmful computer programs we don’t want written: programs that blast everyone with hate speech, or help commit fraud, or hold your computer ransom.
“It’s not censorship to seriously think about making sure our technology is ethical.”