Channel: Fast Company

Anthropic gives its AI models limited ability to control your computer


Anthropic is giving its new Claude 3.5 Sonnet model the ability to control a user’s computer and access the internet. The move marks a major step in generative AI models’ capabilities—and raises questions about AI companies’ ability to properly mitigate the risks of more autonomous AI.

According to a series of example videos Anthropic posted Tuesday on X, Claude users might now ask the AI to follow the steps needed to create a personal website. In another example, a user asks Claude to help with the logistics of a trip to watch the sunrise from the Golden Gate Bridge. In each case, the user describes what they want the model to do through text prompts.

AI companies have been stressing a desire to push large language models to become more “agentic” and autonomous. Doing so means extending the ability of the AI to control not only its own functions but also external devices. 

“Instead of making specific tools to help Claude complete individual tasks, we’re teaching it general computer skills—allowing it to use a wide range of standard tools and software programs designed for people,” Anthropic said in a statement on X.

The new computer control capabilities are being rolled out to developers through an API, as a public beta. Anthropic says it wants to collect feedback on the performance and usefulness of the new capabilities. 
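For developers, the beta works through Anthropic's existing Messages API with a new tool type. The sketch below shows roughly what a request payload looks like, based on the identifiers Anthropic published at launch (the tool type `computer_20241022` and the beta flag `computer-use-2024-10-22`); the model string and screen dimensions here are illustrative, and the current docs should be checked before relying on any of these values.

```python
# Rough sketch of a computer-use request payload, assuming the tool type
# and beta flag Anthropic documented at launch. Verify against current docs.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "tools": [
        {
            "type": "computer_20241022",      # the new computer-control tool
            "name": "computer",
            "display_width_px": 1280,         # resolution of the screen Claude sees
            "display_height_px": 800,
        }
    ],
    "messages": [
        {"role": "user", "content": "Open a browser and check tomorrow's weather."}
    ],
}

# With the official Python SDK, this would be sent roughly as:
#   client = anthropic.Anthropic()
#   response = client.beta.messages.create(
#       betas=["computer-use-2024-10-22"], **payload)
# Claude then replies with tool-use blocks (take a screenshot, move the
# mouse, click, type) that the developer's own code must execute on a real
# or virtual machine and report back.
```

Note that Claude never touches the machine directly: the API only returns proposed actions, and the developer's harness decides how (and whether) to carry them out.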

The company acknowledged that Claude 3.5 Sonnet’s current ability to use computers isn’t perfect and will make some mistakes (especially when it comes to scrolling and dragging), but the company expects this to rapidly improve in the coming months.

With greater power comes greater responsibility. Anthropic offers explicit instructions on how to mitigate the risk of giving an AI control over a computer. In the user guide, the company advises against giving Claude access to sensitive data such as user passwords, and recommends limiting the number of websites the AI can access. 

Its fourth point under minimizing risks states: “Ask a human to confirm decisions that may result in meaningful real-world consequences as well as any tasks requiring affirmative consent, such as accepting cookies, executing financial transactions, or agreeing to terms of service.”
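That human-in-the-loop recommendation can be sketched as a simple confirmation gate in the developer's harness. This is an illustrative example, not part of Anthropic's API; the keyword list and function names are invented for the sketch.

```python
# Illustrative human-in-the-loop gate (not part of Anthropic's API):
# consequential actions proposed by the agent are held until a person approves.
SENSITIVE_KEYWORDS = {"purchase", "payment", "accept cookies", "agree to terms"}

def needs_confirmation(action_description: str) -> bool:
    """Return True if the proposed action looks consequential enough to confirm."""
    text = action_description.lower()
    return any(keyword in text for keyword in SENSITIVE_KEYWORDS)

def run_action(action_description: str, confirm=input) -> str:
    """Execute an agent-proposed action, pausing for human approval when needed."""
    if needs_confirmation(action_description):
        answer = confirm(f"Allow the agent to: {action_description}? [y/N] ")
        if answer.strip().lower() != "y":
            return "blocked"
    # In a real harness, the action would be dispatched to the machine here.
    return "executed"
```

For example, `run_action("click the 'complete payment' button", confirm=lambda _: "n")` returns `"blocked"`, while routine actions like scrolling pass through without a prompt.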

Anthropic has taken a cautious first step into more autonomous AI. But the ability to manage basic tasks on a PC will expand to larger tasks and a wider array of devices, including phones and even home appliances. As that control extends, so does the risk. Autonomous AI could deliver a lot of convenience, but it may also have the ability to do a lot of harm.

Expect other AI companies to begin rolling out similar functionality in the near future as part of a general move toward more agentic AI. 

