
Governor Gavin Newsom signed a flurry of AI bills—but not the most high-profile one

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Newsom signs pile of AI bills as the SB 1047 deadline approaches

California Governor Gavin Newsom signed a pile of AI bills into law on Tuesday. Two of those bills concern the rights of actors in a world where studios have the option to use an AI-generated version of an actor rather than the genuine article. AB 2602 requires studios to state explicitly in contracts with actors when they’re claiming the right to create an AI-generated likeness of an actor’s body or voice. AB 1836 requires studios to get consent from a deceased actor’s estate before generating an AI likeness, a protection that runs for 70 years after the performer’s death. (Both bills build on AI-related concessions that actors won during the 2023 actors’ strike.)

Another trio of bills signed into law by Newsom deals with the use of AI in politics. AB 2655 requires online platforms to remove or label deepfakes that misrepresent political candidates during election season. AB 2839, meanwhile, expands the time period around elections in which individuals are prohibited from knowingly sharing deepfakes and other AI-generated disinformation. And AB 2355 requires campaigns to disclose any use of AI-generated or AI-manipulated content in their ads.

Tuesday’s news is just as notable for what it didn’t include: SB 1047, which would impose a basic set of safety and reporting requirements on companies developing large “frontier” models. The bill aims to get the state more involved in ensuring that AI companies don’t create unsafe models that could cause or enable catastrophic harm (for example, the creation of a bioweapon).

Many Silicon Valley venture capitalists and startup founders, along with some powerful political allies, claim the bill’s requirements would slow research progress on a technology that could revolutionize business. Proponents of the bill argue that frontier AI models may soon pose severe risks and that developers of such models should implement reasonable safeguards against them.

Tuesday’s signing of the politics- and entertainment-related AI bills is no indicator that Newsom will indeed sign SB 1047 (he has until September 30 to decide). In deciding SB 1047’s fate, Newsom must size up the real risk of catastrophic harm from large AI models, then balance that threat against the need of the state’s biggest industry to push forward with, and profit from, a transformative technology.

Microsoft rolls out the second “wave” of AI at work

Microsoft on Monday rolled out a new set of AI features within its productivity and collaboration apps. The showcase event, called Microsoft 365 Copilot: Wave 2, was meant to demonstrate the second phase of Copilot’s integration into modern business workflows. As shown on Monday, the AI Copilot is ever present in the interface and has become more adept at fetching relevant contextual information, including proprietary or company-specific information from a knowledge graph.

“Copilot Pages” is a good example. The tool is something like Google Docs, with the AI copilot acting as a coworker in a collaboration group. One demo video shows a user asking Copilot to fetch information about a potential project. The user can then chat with the AI, and finally move all the AI’s responses onto a “Page,” where other users are invited to weigh in. As the team iterates and fleshes out the idea, it can use the Copilot to pull in documents that might help advance the work, like proposal templates or project plans from the past.

Another feature, Narrative Builder, takes a similar approach, but within the PowerPoint environment. The tool starts by generating a sample presentation outline based on a small amount of information provided by the user, then pulls in presentation templates and art that fit the company’s style. The tool lets a user get to a reasonably good draft quickly, then begin reacting to it, instead of staring at blank pages. A new Prioritize my Inbox tool uses AI to rank emails based on the body of the email itself. For instance, it can glean from an email whether an action needs to be taken by the recipient, and how urgently the recipient needs to act.

Perhaps most interesting of all, Microsoft is now rolling out a new tool called Copilot Studio that lets regular workers (not coders or people with AI skills) build their own AI agents. Microsoft believes such agents are the way of the future for businesses. For instance, an HR department might use Copilot to build a “new employee assistant” that can guide a new hire through all the paperwork, orientation, and training that happen on the first day of a new gig. Such a bot would know the employee handbook and onboarding procedures and could answer any questions the employee might have.

Or a customer service department might use Copilot to build an agent that assists field service workers during customer calls. Such an agent would come armed with all the company’s product information, data on specific customers, and protocols for repairs and troubleshooting. Microsoft originally announced Copilot agents back in May, but the user-friendly agent builder is new; it will become available to businesses that subscribe to Microsoft 365 Copilot within the next few weeks.

NewsGuard: Two-thirds of top news sites block AI crawlers

Large language models are trained on massive amounts of data scraped from the public internet without explicit permission, and without payment. As this has become better understood, many publishers have added directives to their sites’ robots.txt files telling web crawlers not to scrape their content. In a new report, NewsGuard says that 67% of news websites it rates as “top quality” now block web crawlers’ access to their content.
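In practice, a publisher that wants to opt out adds entries like these to its robots.txt file. (The user-agent names below are the publicly documented ones for OpenAI’s and Common Crawl’s scrapers; which crawlers a given site blocks, and on which paths, varies from publisher to publisher.)

    # Block OpenAI's training crawler from the entire site
    User-agent: GPTBot
    Disallow: /

    # Block Common Crawl's crawler, whose archives feed many training datasets
    User-agent: CCBot
    Disallow: /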

NewsGuard, which provides anti-misinformation tools, deduces that AI model developers must then rely disproportionately on news data from low-quality sources such as The Epoch Times and ZeroHedge, which may publish rumors or conspiracy theories, or have a political agenda. Among these low-quality sites, 91% allow the crawlers, NewsGuard finds. “This helps explain why chatbots so often spread false claims and misinformation,” the report states.

Of course, NewsGuard has no way of knowing what news data is actually used to train AI models, because AI companies don’t disclose that (the report acknowledges as much). But it’s true that publishers are blocking crawlers used by AI companies, and that AI companies now routinely sign content deals with publishers so that they can train models on the publishers’ content. OpenAI, for example, has signed such agreements with Time, The Atlantic, Vox Media, and others.

The AI industry is working on ways to fix the news problem. Perplexity and OpenAI’s SearchGPT, for example, can call on an index of current web content to help inform the answers the AI generates. And a growing area of research focuses on “recursive” learning: the ability of an AI model to continually learn new information and integrate it into its training data.
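A minimal Python sketch shows the basic retrieve-then-generate idea behind such tools. (This is an illustration only, not Perplexity’s or OpenAI’s actual pipeline: the toy index, the word-overlap scoring, and the prompt format are all invented here, and real systems use web-scale indexes and learned rankers.)

    # Toy illustration of retrieval-augmented generation: rather than relying
    # only on what the model memorized during training, fetch current
    # documents from an index and splice them into the prompt.

    INDEX = [
        "Governor Newsom signed a pile of AI bills into law on Tuesday.",
        "SB 1047 would impose safety requirements on frontier AI models.",
        "Microsoft rolled out the second wave of Copilot features.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Rank indexed documents by naive word overlap with the query."""
        q_words = set(query.lower().split())
        return sorted(
            INDEX,
            key=lambda doc: -len(q_words & set(doc.lower().split())),
        )[:k]

    def build_prompt(query: str) -> str:
        """Ground the model's answer in the retrieved snippets."""
        sources = "\n".join(f"- {doc}" for doc in retrieve(query))
        return (
            "Answer using only the sources below.\n\n"
            f"Sources:\n{sources}\n\n"
            f"Question: {query}"
        )

    # The finished prompt would then be sent to a language model to generate
    # an answer informed by current content rather than stale training data.
    print(build_prompt("What did Governor Newsom sign this week?"))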

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.

