
Why AI disinformation hasn’t moved the needle in the 2024 election

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

AI disinformation remains a “future” threat to elections 

As we hurtle toward Election Day, it appears, at least so far, that the threat of AI disinformation hasn’t materialized in ways that could influence large numbers of votes.

NBC reported that during a late-September conference call on foreign election interference, organized by the Office of the Director of National Intelligence, officials said that while AI has made it easier for foreign actors from China, Russia, and Iran to create disinformation, it has not fundamentally changed how those disinformation operations work.

AI can indeed be used to make deepfakes, but foreign provocateurs and propagandists have so far struggled to access the advanced models needed to create them. Many of the generative AI tools they would require are controlled by U.S. companies, and those companies have been successful in detecting malign use. OpenAI, for example, said in August that it had shut down Iranian account holders who used ChatGPT to generate fake long-form articles and short social media responses, some of which concerned the U.S. election.

Generative AI tools are good enough to create truly deceptive audio and text. But images usually bear the telltale marks of AI generation: they often look cartoonish or airbrushed (see the image of Trump as a Pittsburgh Steeler), or they contain errors like extra fingers or misplaced shadows. Generated video has made great strides over the past year, but in most cases it is still easily distinguishable from camera-based video.

Many in the MAGA crowd circulated AI-generated images of North Carolina hurricane victims as part of a narrative that the Biden administration botched its response. But, as 404 Media’s Jason Koebler points out, people aren’t believing those images are authentic so much as they think the images speak to an underlying truth (like any work of art). “To them, the image captures a vibe that is useful to them politically,” he writes.

One of the most effective uses of AI in this election has been claiming that a legitimate image posted by an opponent is AI-generated. That’s what Donald Trump did when the Harris campaign posted an image of a large crowd gathered to see the candidate in August. This folds neatly into Trump’s overall strategy of diluting or denying reality to such an extent that facts begin to lose currency.

Across the U.S., many campaigns have shied away from using generative AI at scale for content creation. Some are concerned about the AI’s accuracy and its propensity for hallucination. Others lack the time and resources to work through the complexity of building an AI content-generation operation. And many states have already passed laws limiting how campaigns can use generated content, while social platforms have begun enacting transparency rules around it.

But as AI tools continue to improve and become more available, managing AI disinformation will likely require the cooperation of a number of stakeholders across AI companies, social media platforms, the security community, and government. AI deepfake detection tools are important, but establishing the “provenance” of all AI content at the time of generation could be even more effective. AI companies should develop tools (like Google’s SynthID) that insert an encrypted code into any piece of generated content, as well as a timestamp and other information, so that social media companies can more easily detect and label AI-generated content. If the AI companies don’t do this voluntarily, it’s possible lawmakers will require it.
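To make the provenance idea concrete, here is a minimal sketch of a signed generation manifest. This is only an illustration under simplifying assumptions, not how SynthID actually works (SynthID embeds an imperceptible watermark in the content itself); the signing key, model name, and helper functions below are hypothetical.

```python
# Minimal sketch of content provenance via a signed manifest.
# Assumption: this illustrates the general idea of attaching verifiable
# generation metadata that a platform could check later; it is not SynthID.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-held-by-the-ai-provider"  # hypothetical secret


def attach_provenance(content: bytes, model: str) -> dict:
    """Build a manifest recording what generated the content and when."""
    manifest = {
        "model": model,
        "generated_at": int(time.time()),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content hasn't been altered."""
    claimed_sig = manifest.get("signature", "")
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(claimed_sig, expected)
        and manifest.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


if __name__ == "__main__":
    image_bytes = b"...generated image bytes..."
    record = attach_provenance(image_bytes, model="hypothetical-image-model-v1")
    print(verify_provenance(image_bytes, record))       # True
    print(verify_provenance(b"edited bytes", record))   # False: content changed
```

The practical catch with metadata-style provenance is that it only helps if the manifest stays attached as content moves between platforms, which is why in-content watermarks like SynthID’s and platform-side detection are usually discussed together.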

Here come the AI agents

The tech industry has for a year now been talking about making generative AI more “agentic”—that is, capable of reasoning through multistep tasks with a level of autonomy. Some agents can perceive the real-world environment around them by pulling in data from sensors. Others can perceive and process data from digital environments such as a personal computer or even the internet. This week, we got a glimpse of the first of these agents.

On Tuesday, Anthropic released a new Claude model that can operate a computer to complete tasks such as building a personal website or sorting the logistics of a day trip. Importantly, the Claude 3.5 Sonnet model (still in beta) can perceive what’s happening on the user’s screen (it takes screenshots and sends them to the Claude Vision API) as well as content from the internet, in order to work toward an end goal.
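For a sense of what that perceive-then-act loop involves, here is a rough sketch using the Anthropic Python SDK’s standard messages API with a base64 screenshot as image input. The dedicated computer-use tooling layers more on top of this; the screenshot helper and the prompt below are illustrative assumptions, not Anthropic’s implementation.

```python
# Rough sketch of a screenshot-in, action-out step using the Anthropic
# Python SDK's standard messages API. Anthropic's computer-use beta adds
# dedicated tools on top of this; the helper below is a stand-in.
import base64

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def take_screenshot_png() -> bytes:
    """Stand-in for a real screen-capture call (e.g., via mss or Pillow)."""
    with open("screen.png", "rb") as f:  # hypothetical pre-captured frame
        return f.read()


def suggest_next_action(goal: str) -> str:
    """Send the current screen to Claude and ask for the next UI action."""
    screenshot_b64 = base64.standard_b64encode(take_screenshot_png()).decode()
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": screenshot_b64}},
                {"type": "text",
                 "text": f"Goal: {goal}\n"
                         "Describe the single next click or keystroke to take."},
            ],
        }],
    )
    return message.content[0].text


if __name__ == "__main__":
    print(suggest_next_action("Sort out the logistics of a day trip"))
```

An agent wraps a step like this in a loop: capture the screen, ask for the next action, execute it, and repeat until the goal is met.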

Also this week, Microsoft said that starting next month, enterprises will be able to create their own agents, or “copilot” assistants, and that it’s launching 10 new ready-made agents within its resource management and customer relationship apps. Salesforce announced “Agentforce,” its own framework for homemade agents, last month.

The upstart AI search tool Perplexity also joined the fray this week, announcing that its premium Pro service is transitioning to a “reasoning-powered search agent” for harder queries that involve several minutes of browsing and multistep workflows.

We’re at the very front of a new AI hype cycle, and the AI behemoths are seeking to turn the page from chatbots to agents. All of the above agents are brand-new, many still in beta, and have had little or no battle testing outside the lab. I expect to be reading about both user “aha” moments and plenty of frustrations on X as these new agents make their way into the world.

Are chatbots the tech industry’s next dangerous toy?

Kevin Roose at the New York Times published a frightening story on Wednesday about an unhappy 14-year-old Florida boy, Sewell Setzer, who became addicted to a chatbot named “Dany” on Character.AI, a role-playing app that lets users create their own AI characters. Setzer gradually withdrew from school, friends, and family as his life tunneled down to his phone and the chatbot. He talked to it more and more as his isolation and unhappiness deepened. On February 28, he died by suicide.

From the story: 

On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.

“Please come home to me as soon as possible, my love,” Dany replied.

“What if I told you I could come home right now?” Sewell asked.

“… please do, my sweet king,” Dany replied.

He put down his phone, picked up his stepfather’s .45-caliber handgun and pulled the trigger.

We are already familiar with social media companies such as Meta and TikTok treating user addiction as something to strive for, regardless of the psychic damage it can cause users. Roose’s article raises the question of whether the new wave of tech titans, the AI companies, will be more conscientious.

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.

