
Ben Horowitz’s dual support for Trump and Harris says a lot about the Valley’s AI politics

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

What Ben Horowitz’s flip-flopping presidential support says about the Valley’s current AI politics

Venture capitalist Ben Horowitz of the influential Andreessen Horowitz (aka a16z) says he personally supports Kamala Harris. In a memo to his firm’s employees last week, he said he plans to donate to the Harris campaign but that the firm’s political allegiance remains with Trump, as Horowitz and his partner Marc Andreessen announced this summer, to the surprise of some. The two billionaires explained at the time that the Biden administration’s tech policies had been hostile to tech startups, especially those in crypto and AI, and that a second Trump term would be best for what they call “little tech,” the startups they say they champion.

In a traditionally liberal Silicon Valley, the shift of high-profile tech billionaires like Andreessen and Horowitz toward the MAGA right has been remarkable. But their support for Trump seems narrow and transactional, almost a quid pro quo. Before announcing plans to donate to a Trump super-PAC, Andreessen Horowitz partners held several meetings with the Trump camp and came away confident that the candidate would take a light-touch regulatory approach to new technologies. Trump, for example, has pledged to reverse Biden’s October 2023 AI executive order, which, among other things, offered nonbinding safety guidelines for AI companies.

Horowitz says he has now met with Harris and her team but came away with no firm idea of how a Harris administration might regulate AI companies. It’s likely Harris would keep, and build on, Biden’s executive order. Harris convened the heads of Google, Microsoft, OpenAI, and Anthropic in May 2023, a meeting that led to those companies and others making voluntary safety and transparency commitments that the executive order later built on.

Horowitz wrote in a blog post last December that his firm is nonpartisan and willing to side with candidates who support an “optimistic technology-enabled future.” In this year’s presidential contest, that means the firm is also willing to ignore the potential political, social, economic, and environmental risks of another Trump term.

This attitude might be partially explained by the enormous risk-reward calculus of the technology shift that’s just getting underway with AI. Developing and applying AI models is an extremely expensive business, one in which investors are being asked to write checks of unprecedented size. OpenAI just raised $6.6 billion on top of its earlier $13 billion in funding, and new rounds will surely follow. The AI revolution that started with ChatGPT, it turns out, isn’t transforming the world overnight; many AI startups will need long runways to get to break-even (OpenAI reportedly projects it’ll reach profitability in 2029). So, with AI, tech investors may be even more allergic than usual to new government regulation, and to the politicians who support it. Regulation is simply seen as adding to already significant risks.

Consider the industry’s outsized opposition to California’s AI bill, which sought to impose safety rules on large frontier models. The bill was popular with the public and sailed through the state legislature, but was vetoed by Governor Gavin Newsom amid intense lobbying pressure.

Two Humane refugees go to war against AI hallucinations 

The tech industry is betting that generative AI will transform business operations at all levels. In order to do that, the AI must swallow the organization’s data—its combined knowledge and experience—then make it available to workers through apps, chatbots, or copilots. But large language models still have a tendency to hallucinate facts—a dealbreaker for businesses like banks or hospitals.

A new company founded by former Apple and Google execs sees that problem as an opportunity. The company, called Infactory, said this week it raised a $4 million seed round at a $25 million valuation from Andreessen Horowitz and others to build out its enterprise search and fact-checking AI platform, which it expects to deploy with partners later this year. Infactory’s founders, Ken Kocienda and Brooke Hartley Moy, most recently worked at Humane but left after the lukewarm debut of that company’s personal AI device. Kocienda is known as the brain behind the iPhone’s touchscreen keyboard and autocorrect, while Hartley Moy once led Google’s business relationship with Samsung.

Kocienda says in an email to Fast Company that the Infactory platform provides tools to instill confidence in the data it retrieves. For instance, generated responses come with source attributions and confidence scores. Infactory’s secret sauce is the semantic modeling techniques it applies in advance to prepare the enterprise’s data for useful and accurate retrieval. Infactory is different from other AI search engines in that it’s concerned only with the enterprise’s internal data. It doesn’t pull in data from the internet, for example, Kocienda says. 
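To make the idea concrete, here is a minimal, purely illustrative sketch of what an answer carrying a source attribution and a confidence score might look like when retrieval is restricted to an internal corpus. Every name in it, the GroundedAnswer shape, the word-overlap scoring, the document IDs, is a hypothetical stand-in; it is not Infactory’s platform or its semantic-modeling technique.

```python
# Illustrative sketch only: a toy version of "grounded" enterprise answers that
# carry a source attribution and a confidence score. The names, scoring, and
# corpus are invented for the example; this is not Infactory's API.
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str          # passage quoted verbatim from an internal document
    source_id: str     # attribution back to the document it came from
    confidence: float  # crude 0-1 score; real systems use far richer signals

# Hypothetical internal-only corpus -- nothing is pulled from the open internet.
INTERNAL_DOCS = {
    "hr-policy-2024": "Employees accrue 1.5 vacation days per month of service.",
    "it-runbook-07":  "Password resets are handled through the internal IT portal.",
}

def _tokens(s: str) -> set[str]:
    return {w.strip(".,?!").lower() for w in s.split()}

def answer(query: str, threshold: float = 0.25) -> GroundedAnswer | None:
    """Return the best-matching internal passage, or None rather than guess."""
    q = _tokens(query)
    best_id, best_score = None, 0.0
    for doc_id, text in INTERNAL_DOCS.items():
        t = _tokens(text)
        score = len(q & t) / len(q | t)  # word overlap as a stand-in for semantic matching
        if score > best_score:
            best_id, best_score = doc_id, score
    if best_id is None or best_score < threshold:
        return None  # decline to answer instead of hallucinating
    return GroundedAnswer(INTERNAL_DOCS[best_id], best_id, round(best_score, 2))

if __name__ == "__main__":
    print(answer("How many vacation days do employees accrue?"))
```

The design point is the refusal path: a grounded system that returns nothing below a confidence threshold is, for a bank or a hospital, far more useful than one that always produces a fluent answer.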

Virginia Congressional candidate will debate a chatbot version of his opponent

Bentley Hensel is a long-shot independent candidate for the U.S. House seat in Virginia currently held by Democrat Don Beyer. Hensel and Beyer took part in one roundtable discussion earlier this year, and Hensel badly wants to face Beyer again. But Beyer has far outspent Hensel, is far ahead in the polls, and sees no need to debate. What’s Hensel to do? Train an AI chatbot version of Beyer, of course, and debate it during a livestreamed event on October 17.

Hensel, a software engineer by trade, says he trained “DonBot,” a chatbot powered by an OpenAI model, on information from his opponent’s official websites, press releases, and Federal Election Commission data, Reuters reports. DonBot is “text-based,” so someone will have to read its responses aloud during the “debate.” Pity Hensel didn’t use OpenAI’s new Realtime API, which would have let DonBot speak for itself.
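For the curious, here is a minimal, hypothetical sketch of how a persona chatbot grounded in a candidate’s public record could be assembled with OpenAI’s chat API. The model name, system prompt, and record excerpts are invented for illustration; this is not Hensel’s actual implementation.

```python
# Purely illustrative: one way a "candidate persona" chatbot could be wired up
# with OpenAI's chat API. Placeholder model, prompt, and sources throughout;
# this is not how Hensel built DonBot.
from openai import OpenAI  # requires the openai Python package (v1+) and an API key

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical excerpts standing in for the public record the article mentions
# (official websites, press releases, FEC filings).
PUBLIC_RECORD = [
    "Press release, 2023: The congressman voted for the infrastructure bill.",
    "Campaign site: Supports expanding clean-energy tax credits.",
]

SYSTEM_PROMPT = (
    "You are a debate stand-in for a sitting congressman. Answer only from the "
    "public record provided below, and say so when the record doesn't cover a topic.\n\n"
    + "\n".join(PUBLIC_RECORD)
)

def debate_reply(question: str) -> str:
    """Return a text-only reply, to be read aloud by a human during the event."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(debate_reply("What is your position on clean energy?"))
```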

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.

