AI systems and large language models must be trained on massive amounts of data to be accurate, but they shouldn't be trained on data they have no right to use. OpenAI's licensing agreements with The Atlantic and Vox last week show that both sides of the table are interested in striking these AI training content licensing deals.
Human Native AI is a London-based startup building a marketplace to broker these kinds of deals between the many companies building LLM projects and those willing to license data to them.
Its goal is to help AI companies find data to train their models while ensuring that rights holders opt in and are compensated. Rights holders upload their content for free and connect with AI companies for revenue-sharing or subscription deals. Human Native AI also helps rights holders prepare and price their content and monitors for copyright infringement. The startup takes a cut of each deal and charges AI companies for its monitoring and transaction services.
James Smith, CEO and co-founder, told TechCrunch that he got the idea for Human Native AI from his past experience working at Google's DeepMind, which also struggled with not having enough good data to properly train its systems. He then saw other AI companies running into the same problem.
“It feels like we're in the Napster era of generative AI,” Smith said. “Can we get to a better era? Can we make it easier to acquire content? Can we give creators some level of control and compensation? I kept thinking, why isn't there a marketplace?”
He pitched the idea to his friend Jack Galilee, an engineer at GRAIL, during a walk in the park with their kids, as Smith had done with many other potential startup ideas. But unlike in times past, Galilee said they should go for it.
The company launched in April and is currently operating in beta. Smith said demand from both sides has been really encouraging, and the company has already signed several partnerships that will be announced in the near future. Human Native AI announced a £2.8 million seed round this week led by LocalGlobe and Mercuri, two British micro-VCs. Smith said the company plans to use the funds to build out its team.
“I'm the CEO of a company founded two months ago, and I've been able to get meetings with CEOs of 160-year-old publishing companies,” Smith said. “That suggests to me there's strong demand on the publishing side. Likewise, every conversation with a large AI company goes exactly the same way.”
While it's still very early, what Human Native AI is building appears to be a missing piece of infrastructure in the growing AI industry. The big AI players need a lot of data to train on, and giving rights holders an easier way to work with them, while keeping full control over how their content is used, seems like a good approach that could make both sides of the table happy.
“Sony Music just sent letters to 700 AI companies asking them to cease and desist,” Smith said. “That's the size of the market and the potential customers that could be acquiring data. The number of publishers and rights holders could be in the thousands, if not tens of thousands. We think that's why we need infrastructure.”
I also think this could be even more beneficial for smaller AI systems that don't necessarily have the resources to sign a deal with Vox or The Atlantic to access training data. Smith said he hopes so, too, noting that all of the notable licensing deals so far have involved the biggest players in AI. He hopes Human Native AI can help level the playing field.
“One of the main challenges with licensing content is that the upfront costs are high and it massively restricts who you can work with,” Smith said. “How do we increase the number of buyers for your content and bring down the barriers to entry? We think that's really exciting.”
The other interesting piece here is the future potential of the data Human Native AI collects. Smith said that in the future the company will be able to give rights holders more clarity on how to price their content based on the history of deal data on the platform.
It's also smart timing for Human Native AI's launch. Smith said that as the European Union's AI Act evolves, and with potential AI regulation in the US down the road, sourcing data ethically (and having the receipts to prove it) will only become more urgent for AI companies.
“We're optimistic about the future of AI and what it will do, but we have to make sure that as an industry we do this responsibly and don't decimate the industries that have gotten us to this point,” Smith said. “That wouldn't be good for human society. We need to make sure we find the right ways to allow people to participate. We're AI optimists on the human side.”