What is the business model for generative AI, given what we know today about the technology and the market?
“OpenAI has created one of the fastest-growing companies in history. It may also be one of the most expensive to run.

“The ChatGPT maker could lose as much as $5 billion this year, according to an analysis by The Information, based on previously undisclosed internal financial data and people involved in the business. If we're right, OpenAI, recently valued at $80 billion, will need to raise more cash in the next 12 months or so.” (The Information)
I have spent some time in my writing here discussing the technical and resource limitations of generative AI, and it's very interesting to see how these challenges are becoming clearer and more urgent for the industry that has emerged around this technology.
The question this raises for me, though, is what the business model for generative AI really is. What should we expect, and what is just hype? What is the gap between the promise of this technology and its practical reality?
I've had this conversation with a few people, and heard it discussed quite a bit in the media: the difference between a technology that is a feature and a technology that is a product. The question is whether the technology has enough value in isolation for people to buy access to it alone, or whether it actually proves most or all of its value when combined with other technologies. We're seeing “AI” being added to lots of existing products right now, from text and code editors to search and browsers, and these are examples of “generative AI as a feature.” (I'm writing this very text in Notion, which continually offers to help me with AI.) On the other hand, we have Anthropic, OpenAI, and assorted other companies trying to sell products where generative AI is the core component, such as ChatGPT or Claude.
This can get a little confusing, but the key distinction I'm drawing is this: for those betting on “generative AI as a product,” if generative AI doesn't live up to customer expectations, whatever those are, customers will stop using the product and stop paying the vendor. By contrast, if someone (understandably) finds that Google's AI search summaries are rubbish, they can complain and turn them off, and continue using Google search as before. The core business value proposition isn't built on AI; it's just an additional potential selling point. That translates into much lower risk for the business overall.
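To make the distinction concrete, here is a minimal sketch, in Python with entirely hypothetical function names, of what “generative AI as a feature” looks like architecturally: the AI call is optional, and the core product keeps working when the model fails or the user turns it off.

```python
from typing import Optional

def run_core_search(query: str) -> list[str]:
    """Stand-in for the core product: keyword match over a tiny corpus."""
    corpus = ["generative ai business models", "llm training costs", "search engines"]
    return [doc for doc in corpus if any(w in doc for w in query.lower().split())]

def generate_summary(query: str, results: list[str]) -> str:
    """Stand-in for an LLM call, which can be wrong, slow, or down entirely."""
    raise TimeoutError("model unavailable")

def search(query: str, ai_enabled: bool = True) -> dict:
    results = run_core_search(query)      # the real value proposition
    summary: Optional[str] = None
    if ai_enabled:
        try:
            summary = generate_summary(query, results)
        except Exception:
            summary = None                # the product works exactly as before
    return {"results": results, "summary": summary}

print(search("llm costs"))  # core results still come back; summary is just absent
```

The risk profile follows directly from the structure: if the summary disappoints, users toggle it off and the revenue-bearing path is untouched.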
Apple's approach to much of the generative AI space is a good example of conceptualizing generative AI as a feature rather than a product, and to me their apparent strategy is more promising. At the most recent WWDC, Apple revealed that it is working with OpenAI to let Apple users access ChatGPT through Siri. A few components of this deal are important. First, Apple is not paying OpenAI anything to create this relationship: Apple is bringing access to its economically very attractive users, and OpenAI gets the chance to convert those users into paying ChatGPT subscribers, if it can. Apple is taking on essentially no risk in the relationship. Second, nothing precludes Apple from making other generative AI offerings, such as Anthropic's or Google's, available to its user base in the same way. Apple is not explicitly betting on a particular horse in the broader generative AI arms race, even though OpenAI is the first partnership to be announced. Apple is, of course, also working on Apple AI, its own generative AI solution, but it is clearly aiming these deals at expanding its existing and future product lines, making the iPhone more useful, rather than selling one model as a standalone product.
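That “not betting on one horse” posture has a familiar software shape: keep the model behind an interface so providers stay interchangeable. Here is a minimal Python sketch of that idea; the class and function names are hypothetical, and this is obviously not how Siri is actually implemented.

```python
from typing import Protocol

class ChatProvider(Protocol):
    """The shape of any assistant a platform might route requests to."""
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        return f"[openai] response to: {prompt}"      # placeholder, not a real API call

class AnthropicProvider:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] response to: {prompt}"   # placeholder, not a real API call

def handle_assistant_request(prompt: str, provider: ChatProvider) -> str:
    # The platform owns the user relationship; the model behind this
    # interface can be swapped without changing the product itself.
    return provider.complete(prompt)

print(handle_assistant_request("set a timer for ten minutes", OpenAIProvider()))
```

The party that owns the interface and the users keeps the leverage; the party supplying the model is replaceable.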
All of this is to say that there are multiple ways to think about how generative AI can and should be incorporated into a business strategy, and there is no guarantee that building the technology itself will be the most successful path. When we look back a decade from now, I doubt the companies we'll consider the “big winners” in the generative AI business space will be the ones that actually developed the underlying technology.
Okay, you might think, but someone has to build it if the features are valuable enough to be worth having, right? And if the money isn't in actually creating generative AI capability, will we still get that capability? Will it reach its full potential?
To be fair, many investors in the tech sector believe there is a lot of money to be made in generative AI, which is why they have already sunk billions of dollars into OpenAI and its peers. However, as I have written in several previous articles, even with those billions in hand, I strongly suspect we will see only slight, incremental improvements in generative AI performance from here, rather than a continuation of the seemingly exponential technological advancement of 2022-2023. (In particular, the limited amount of human-generated data available for training cannot be overcome by simply throwing money at the problem.) This means I am not convinced that generative AI is going to get much more useful or “smarter” than it is now.
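To see why money alone doesn't fix a data ceiling, consider a toy power-law loss curve of the kind reported in the scaling-law literature. The constants below are invented purely for illustration and are not fitted to any real model.

```python
# Toy illustration of diminishing returns from more training data.
# Loss is modeled as a power law in data, flattening toward an
# irreducible floor; all constants are made up for illustration.

L_FLOOR, A, ALPHA = 1.7, 400.0, 0.34

def loss(tokens: float) -> float:
    return L_FLOOR + A * tokens ** -ALPHA

previous = None
for d in [1e11, 1e12, 1e13]:   # 100B -> 1T -> 10T training tokens
    current = loss(d)
    gain = "" if previous is None else f"  (improvement: {previous - current:.3f})"
    print(f"{d:.0e} tokens: loss {current:.3f}{gain}")
    previous = current

# Each 10x increase in data buys a smaller absolute improvement, and
# the supply of quality human-generated text is finite, so spending
# more money cannot keep pushing this curve down indefinitely.
```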
All that being said, and whether you agree with me or not, we should remember that having a very advanced technology is very different from being able to build a product from that technology that people will buy, and from turning that product into a sustainable business model. You can invent something cool and new, but as any product team at any startup or tech company will tell you, that is not the end of the process. Figuring out how real people can and want to use your cool new thing, then communicating that and persuading people it is worth a sustainable price, is extremely difficult.
We are certainly seeing lots of ideas proposed for this from many channels, but some of them are already falling flat. The new beta of OpenAI's search engine, announced last week, already had major errors surfacing in its results. Anyone who has read my previous pieces, or who knows how LLMs work, won't be surprised (I was personally just surprised that they didn't anticipate this obvious problem when developing the product in the first place). Even the ideas that are somewhat attractive can't be mere “nice to haves,” or luxuries; they must be essentials, because the price required to make this business sustainable has to be very high. When your burn rate is $5 billion a year, to become profitable and self-sustaining your paying user base must be astronomical, and/or the price those users pay must be exorbitant.
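A back-of-the-envelope calculation makes the scale of that problem concrete. The sketch below assumes ChatGPT Plus's $20/month price and, generously, that serving the subscribers adds nothing to costs.

```python
# Back-of-the-envelope: paying subscribers needed to cover a $5B burn.

annual_burn = 5_000_000_000              # reported potential loss this year
revenue_per_user = 20 * 12               # dollars per subscriber per year at $20/month

subscribers_needed = annual_burn / revenue_per_user
print(f"{subscribers_needed:,.0f} paying subscribers")   # 20,833,333
```

Roughly 21 million paying subscribers just to break even, before inference costs scale with usage, and before investors see any return at all.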
This leaves those most interested in pushing the technological boundaries in a difficult position. Research for research's sake has always existed in some form, even when the results are not immediately useful in practice. But capitalism doesn't really have a good channel for sustaining this kind of work, especially when participating in the research costs sky-high amounts. The United States has been draining academic institutions of resources for decades, so academic researchers have little or no opportunity to take part in this type of research without private investment.
I think this is a real shame, because academia is where this kind of research could be done with proper oversight. Ethics, safety, and security concerns can be taken seriously and explored in an academic setting in ways that simply are not prioritized in the private sector. The culture and norms of academic research allow money to be valued below knowledge, but when private-sector companies take over all the research, those priorities change. The people our society relies on to do “purer” research don't have access to the resources needed to participate meaningfully in the rise of generative AI.
Of course, there is a significant chance that even these private companies will not have the resources to sustain the frenzied race to train ever more and ever larger models, which brings us back to the quote I started this article with. Because of the economic model that governs our technological progress, we stand to miss out on potential opportunities. Generative AI applications that make sense but won't generate the billions needed to cover the GPU bills may never be explored in depth, while socially harmful, dumb, or useless applications get investment because they pose greater opportunities to make money.