How a new Trump administration will treat the budding AI industry

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

How the Trump administration will, or won’t, oversee AI

A second Trump administration will almost certainly reduce the government’s role in overseeing the burgeoning AI industry. Past statements and events give clues to what a Trump regulatory regime might look like.

When he was last in the White House, Trump did issue a few executive orders on AI. The first came in 2019 with the American AI Initiative, a vague and aspirational document that called on government agencies to study and invest in AI, develop plans to retrain the national workforce for AI jobs, establish safety standards, and engage with the international community on safe AI development. A second executive order, issued the next year, called on the government to build trust with the public by taking a “principled” approach to applying the technology.

But that was all before ChatGPT’s public debut, which came when Joe Biden was in office. The release of OpenAI’s marquee chatbot prompted Biden’s October 2023 executive order outlining a safety framework to be used by AI developers, and giving the National Institute of Standards and Technology a larger role in fleshing out risk mitigation standards. Biden’s order also called on AI companies to share safety test results with the government, invoking the Defense Production Act. 

Trump, for his part, called the order “dangerous” to innovation and vowed to reverse it on day one of his new term. Vice President-elect JD Vance, who worked as a VC before going into politics and may lead AI policy in the new Trump administration, has argued that AI regulation would only lead to AI market dominance by a handful of large players, shutting out startups. (xAI owner Elon Musk, who stumped for Trump and may have a role in his cabinet, has made a similar argument.)

While the Trump camp has denied any involvement with or endorsement of Project 2025, the controversial policy blueprint is worth mentioning. The 900-page document argues for protecting the ability of U.S. tech companies to develop AI that can compete with China, and advises that U.S. tech companies be prevented from providing technology that might advance China’s goals. Project 2025, however, doesn’t mention AI risks such as copyright violations in training data, AI misalignment with human values, job losses, disinformation (including deepfakes), or the massive energy consumption of data centers.

None of that bodes well for the chances that Congress, which has so far failed to pass any binding regulation mitigating the risks of large AI models, will do so anytime soon. The Justice Department and Federal Trade Commission (FTC) will also be less likely to sue to curb monopolies in the AI space, or to act to protect the rights of publishers who have seen their content vacuumed up to train AI models. It’s unclear whether the FTC’s ongoing investigations into OpenAI’s training data collection practices, and into its funding arrangement with Microsoft, will continue into the Trump administration. But it’s almost certain that FTC chair Lina Khan, who brought the actions, will be replaced.

Perhaps the scariest question surrounding the new Trump administration is how it will handle Taiwan, which currently produces about 90% of the advanced chips used by the tech industry, including the GPUs that train and run AI models. Trump has said that Taiwan “stole” the U.S. semiconductor business, and that the island should be paying more for its own defense. The risk, of course, is that China will reclaim the island, which could mean a $10 trillion blow to the global economy and the loss of America’s control of its own AI destiny.

AI companies are dipping into defense 

The big AI labs are starting to get a piece of the nearly $1 trillion the U.S. spends on defense each year.

The U.S. Africa Command (AFRICOM) says it intends to purchase OpenAI services through Microsoft’s Azure cloud, which the command already uses. In a purchasing document obtained last week by The Intercept’s Sam Biddle, AFRICOM states that it wants to use OpenAI models for “search, natural language processing, and unified analytics for data processing.” Biddle points out that the Pentagon began using OpenAI services last year, but that the AFRICOM contract marks the first time that the technology will support direct combat.

Until early this year, OpenAI’s usage policy prohibited the use of its technology in military applications, but the company softened that language to allow uses removed from direct killing or destruction.

It’s not just OpenAI getting in on the action. Meta said in a blog post this week that it will make its open-source Llama models available for use by the U.S. government, including defense and intelligence agencies. Meta will partner with a number of big-name integrators (Booz Allen, Accenture), cloud and IT providers (Microsoft, Oracle), data partners (Scale AI), and defense contractors (such as Anduril and Lockheed Martin) to bring Llama models to government agencies.

Meta says the training data firm Scale AI is fine-tuning Llama models to support national security agencies in “planning operations and identifying adversaries’ vulnerabilities,” and adds that Lockheed Martin is using Llama models for “code generation, data analysis, and enhancing business processes.” 

The Biden administration said in its AI executive order last year that all agencies of the federal government should explore ways of using AI to improve their efficiency and effectiveness. This includes the use of AI “for the national defense and the protection of critical infrastructure.” The Pentagon and intelligence agencies have been using AI for years, but, with the exception of the Army, have been slow to deploy new generative AI models. 

Physical Intelligence raises $400 million, debuts “generalist” robot brain

Meta’s Yann LeCun has said that as smart as today’s generative AI models are, they still can’t develop a complex understanding of the world like children can. A child, he says, can learn to clear the table and put the dishes in the dishwasher, but a robot struggles without explicit training on the task. Companies such as Covariant, Figure, and Tesla are working on robots with “generalist” abilities.  

A new entrant to that space showed up this week, when a startup called Physical Intelligence announced that it had raised $400 million from top-shelf VCs such as Thrive Capital and Lux Capital, as well as from Jeff Bezos. The company published a research paper explaining how its AI system, π0 (“pi-zero”), is trained to handle general tasks such as folding laundry, clearing a table, and flattening boxes. Videos of the robots gained wide attention on X. Unlike Figure and Tesla, Physical Intelligence isn’t developing a brain for a single robot: its AI system is designed to provide the brain for any robot.

And the company says it’s just getting started. It compares its current system to OpenAI’s GPT-1, which was promising but very limited, and which led to much smarter systems.

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.

