Garry Tan, president and CEO of Y Combinator, told a crowd at the Economic Club of Washington, D.C., this week that “regulation is probably necessary” for artificial intelligence.
Tan spoke with General Catalyst board member Teresa Carlson in a one-on-one interview in which he discussed everything from getting into Y Combinator to AI, noting that “there’s no better time to work in tech than right now.”
Tan said he was “generally supportive” of the National Institute of Standards and Technology’s (NIST) attempt to build a GenAI risk mitigation framework, and said “much of the EO by the Biden Administration is probably on the right path.”
The NIST framework proposes measures such as requiring that GenAI comply with existing laws governing data privacy and copyright; that the use of GenAI be disclosed to end users; and that rules be established prohibiting GenAI from creating child sexual abuse materials. Biden’s executive order covers a wide range of mandates, from requiring AI companies to share safety data with the government to ensuring that small developers have fair access.
But Tan, like many Silicon Valley venture capitalists, was wary of other regulatory efforts. He called AI-related bills moving through the California and San Francisco legislatures “very concerning.”
One California bill causing a stir is one introduced by state Senator Scott Wiener that would allow the attorney general to sue artificial intelligence companies if their products are harmful, Politico reports.
“The big debate generally in terms of policy right now is what would a good version of this actually look like?” Tan said. “We can look to people like Ian Hogarth, in the UK, to be thoughtful. They are also aware of this idea of concentration of power. At the same time, they are trying to figure out how we support innovation while mitigating the worst possible harms.”
Hogarth is a former YC entrepreneur and AI expert whom the UK has chosen to sit on a working group on AI models.
“What scares me is if we try to address a science fiction concern that is not present,” Tan said.
As for how YC handles responsibility, Tan said that if the organization doesn’t agree with a startup’s mission or what its product would do for society, “YC just doesn’t fund it.” He noted that there have been several occasions when he read about a company in the media that had applied to YC.
“We go back and look at the interview notes and think, we don’t think this is good for society. And luckily, we didn’t fund it,” he said.
Leaders in artificial intelligence continue to make mistakes
Tan’s criteria still leave plenty of room for Y Combinator to spawn AI startups as cohort graduates. As my colleague Kyle Wiggers reported, the Winter 2024 cohort had 86 AI startups, nearly double the Winter 2023 batch and close to triple the Winter 2021 batch, according to the official YC Startup Directory.
And recent news has people wondering whether those selling AI products can be trusted to be the ones defining responsible AI. Last week, TechCrunch reported that OpenAI is getting rid of its AI safety team.
Then came the debacle involving the use of a voice that sounded like actress Scarlett Johansson’s in the demonstration of its new GPT-4o model. It turns out the company had asked her about using her voice, and she turned them down. OpenAI has since removed the Sky voice, though it denied it was based on Johansson’s. That, and issues related to OpenAI’s ability to claw back vested equity from employees, were among several items that led people to openly question Sam Altman’s scruples.
Meanwhile, Meta made AI news of its own when it announced the creation of an AI advisory council composed only of white men, leaving out women and people of color, many of whom played key roles in the creation and innovation of that industry.
Tan did not refer to any of these cases. Like most Silicon Valley venture capitalists, what he sees are opportunities for huge, lucrative new businesses.
“We like to think of startups as a maze of ideas,” Tan said. “When new technology like large language models comes out, the whole maze of ideas gets shaken up. ChatGPT itself was probably one of the most quickly successful consumer products launched in recent times. And that’s good news for founders.”
The future of artificial intelligence
Tan also said that San Francisco is at the center of the AI movement. For example, it’s where Anthropic, started by YC alumni, got its start, as did OpenAI, which was a spin-out of YC.
Tan also joked that he wasn’t going to follow in Altman’s footsteps, noting that Altman “had my job several years ago,” and saying that he has no plans to start an AI lab.
One of YC’s other success stories is legal tech startup Casetext, which was sold to Thomson Reuters for $600 million in 2023. Tan believes Casetext was one of the first companies in the world to get access to generative AI, and it was then one of the first generative AI exits.
Looking toward the future of AI, Tan said that “obviously, we have to be smart about this technology” when it comes to risks related to bioterrorism and cyberattacks. At the same time, he said there should be “a much more measured approach.”
He also believes there is not likely to be a winner-take-all model, but rather an “incredible garden of choice for consumers and founders to be able to create something that affects a billion people.”
At least, that’s what he wants to happen. That would be best for him and YC: lots of successful startups returning a lot of cash to investors. So what scares Tan most is not evil, uncontrolled AIs, but a scarcity of AIs to choose from.
“In reality, we could find ourselves in another truly monopolistic situation where there is a heavy concentration in just a few models. Then you’re talking about rent extraction, and you have a world I don’t want to live in.”