Several enterprise SaaS companies have recently announced generative AI features, a direct threat to AI startups that lack a sustainable competitive advantage.
Back in July, we dug into the generative AI startups from Y Combinator's W23 batch, specifically the ones leveraging large language models (LLMs) like GPT, the model that powers ChatGPT. We identified some big trends among these startups: a focus on very specific problems and customers (e.g., marketing content for SMBs), integrations with existing software (e.g., CRM platforms like Salesforce), and the ability to customize large language models for specific contexts (e.g., the voice of your company's brand).
A secondary, less-discussed part of that article was moat risk. Quoting from back then:
A key risk with several of these startups is the potential lack of a long-term moat. It is difficult to read too much into it given the stage of these startups and the limited public information available, but it's not difficult to poke holes in their long-term defensibility. For example:
If a startup is built on the premise of: taking base LLMs (large language models) like GPT, building integrations into helpdesk software to understand knowledge base & writing style, and then generating draft responses, what’s stopping a helpdesk software giant (think Zendesk, Salesforce) from copying this feature and making it available as part of their product suite?
If a startup is building a cool interface for a text editor that helps with content generation, what's stopping Google Docs (which is already experimenting with auto-drafting) and Microsoft Word (which is already experimenting with Copilot tools) from copying that? One step further, what's stopping them from providing a 25% worse product and giving it away for free with an existing product suite (e.g., Microsoft Teams taking over Slack's market share)?
That's exactly what has played out over the last few months. Several large enterprise SaaS companies have announced and/or launched generative AI products: Slack, Salesforce, Dropbox, Microsoft, and Google, to name a few. This is a direct threat to generative AI startups that are building useful productivity applications for enterprise customers but have limited sustainable competitive advantage (i.e., no moat). In this article, we'll dive into:
- A recap of the AI value chain
- Recent AI features from enterprise SaaS companies
- How startups can build moats in this environment
We won't spend much time on this, but as a quick reminder, one way to think about how companies derive value from AI is through the concept of the AI value chain. Specifically, you can break the value chain down into three layers:
- Infrastructure (e.g., NVIDIA makes the chips that run AI applications, Amazon AWS provides cloud computing for AI, OpenAI provides large language models like GPT for building products)
- Platform (e.g., Snowflake provides a cloud-based solution to manage all your data needs in one place, from ingestion to clean-up to processing)
- Applications (e.g., a startup building a product that helps SMBs quickly create marketing content)
Though the generative AI wave started with OpenAI's launch of ChatGPT, which is powered by the GPT model (infrastructure layer), it's becoming increasingly clear that the infrastructure layer is commoditizing, with several large players coming to market with their own LLMs, including Meta (LLaMA), Google (LaMDA), and Anthropic (Claude). The commoditization is explained by the fact that most of these models are trained on the same corpus of publicly available data (like Common Crawl, which crawls sites across the internet, and Wikipedia).
Outside of this data pool, every large company with a substantial corpus of first-party data is either locking that data down for itself or creating licensing models, which means the data will be either unavailable to everyone or available to every model provider for training; either way, commoditization. This is a similar story to what played out in the cloud computing market, where Amazon AWS, Microsoft Azure, and Google Cloud now own a large part of the market but compete aggressively with each other.
While the platform layer is a little less commoditized and there is likely room for more players to cater to a variety of customer needs (e.g., startups vs. SMBs vs. enterprise customers), it is moving in the direction of commoditization, and the big players are starting to beef up their offerings (e.g., Snowflake, a data warehousing platform, recently acquired Neeva to unlock LLM applications for enterprises; Databricks, an analytics platform, acquired MosaicML to power generative AI for its customers).
Therefore, a majority of the value from AI is going to be generated at the application layer. The open question, however, is which companies are likely to reap the benefits of applications unlocked by large language models like GPT. Unsurprisingly, of the 269 startups in Y Combinator's W23 batch, ~31% had a self-reported AI tag. While these applications are all objectively useful and unlock value for their customers, particularly in the enterprise SaaS world, it's becoming more and more clear that incumbent SaaS companies are in a much better position to reap the benefits of AI.
There has been a flurry of announcements from SaaS companies in the past few weeks. Let’s walk through a few.
Slack started by supporting the ChatGPT bot within your Slack workspace, both for summarizing threads and for helping draft replies. This was quickly expanded to support a Claude bot (Claude is Anthropic's equivalent of the GPT model). More importantly, Slack announced its own generative AI built natively into the app, which supports a wide range of summarization capabilities across threads and channels (e.g., tell me what happened in this channel today, tell me what project X is). What could have been plugins built by startups is now a native feature built by Slack, because Slack can easily pick a model like GPT off the shelf and build a generative AI feature. This is not terribly difficult to do, and it also saves Slack the hassle of dealing with integrations and clunky user experiences from unknown plugins.
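To make the "not terribly difficult" claim concrete, here is a minimal sketch of thread summarization with an off-the-shelf model, assuming the OpenAI Python SDK; the model name and prompts are illustrative, not what Slack actually uses:

```python
# Minimal sketch: summarizing a chat thread with an off-the-shelf LLM.
# Assumes the OpenAI Python SDK; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_thread(messages: list[str]) -> str:
    """Condense a workplace chat thread into a short summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model would do
        messages=[
            {"role": "system", "content": "Summarize this workplace chat thread in three bullet points."},
            {"role": "user", "content": "\n".join(messages)},
        ],
    )
    return response.choices[0].message.content

print(summarize_thread([
    "alice: shipped the v2 API to staging",
    "bob: load tests pass, but one auth test is flaky",
    "alice: fix is in review, aiming for a Friday release",
]))
```

The hard part for a startup was never this call; it's the distribution and the data access that Slack already has.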
Another announcement came from Salesforce. Their product Einstein GPT is positioned as generative AI for their CRM. It will let Salesforce users query a wide range of things (e.g., who are my top leads right now), automatically generate and iterate on email drafts, and even create automated workflows based on these queries. The feature likely looks nicer in screenshots than it works in reality, but it's a fair bet that Salesforce can build a reasonably seamless product within a year. This, in fact, is the exact functionality being built by some generative AI startups today. While useful in the short term, success for these startups depends not just on being better than Einstein GPT, but on being so much better that an enterprise SaaS buyer would take on the friction of onboarding a new product. (I'm not going to name startups in my critique, because building products from the ground up is hard and writing critiques is easy.)
In a similar vein, Dropbox announced Dropbox Dash, which is positioned as AI-powered universal search. It supports a wide range of functionality, including Q&A across all the documents stored in Dropbox, summarizing document content, and answering specific questions from a document's content (e.g., when does this contract expire). Again, there are generative AI startups today essentially building these functionalities piecemeal, and Dropbox has an easier path to long-term success given it already has access to the data it needs and the ability to create a seamless interface within its product.
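Under the hood, "ask questions of your documents" is typically retrieval-augmented generation: embed the documents, retrieve the chunks most relevant to the query, and have the model answer from only those chunks. A rough sketch, again assuming the OpenAI SDK (model names are illustrative, and a real system would chunk documents and use a vector database rather than in-memory arrays):

```python
# Rough sketch of retrieval-augmented Q&A over stored documents.
# Assumes the OpenAI SDK; model names are illustrative.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(question: str, docs: list[str], top_k: int = 2) -> str:
    doc_vecs = embed(docs)
    q_vec = embed([question])[0]
    scores = doc_vecs @ q_vec  # embeddings are unit-length, so dot product = cosine similarity
    context = "\n---\n".join(docs[i] for i in np.argsort(scores)[-top_k:])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided documents."},
            {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

Dropbox's advantage is that the `docs` side of this pipeline, the part that is genuinely hard to acquire, is already sitting in its storage layer.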
The list continues:
- Zoom announced Zoom AI, which provides meeting summaries, answers in-meeting questions if you missed a beat and want to catch up, and summarizes chat threads. Several startups today are building these features as separate products (e.g., note-taking tools).
- Microsoft 365 Copilot will read and summarize your unread emails, answer questions from all your documents, and draft documents, among other things. These capabilities will also be embedded seamlessly into the interfaces of products like Word, Excel, OneNote, and OneDrive.
- Google has an equivalent product, Duet AI, for its productivity suite.
- Even OpenAI (though not a dominant SaaS company) launched ChatGPT Enterprise, which can essentially plug into all of a company's tools and give employees easy answers to any question.
I am, by no stretch, claiming that the battle is over. If you have used any generative AI products so far, there are some wow moments but more not-wow moments. The pitches for the products above are appealing, but most of them are either running as pilots or are announcements describing a future state of the product.
There are also several unresolved issues limiting the adoption of these products. Pricing is all over the place, with some products offering AI features for free to compete while broader copilot products charge a per-seat fee. Microsoft 365 Copilot is priced at $30/user/month and ChatGPT Enterprise at around $20/user/month. While this seems palatable at face value for a consumer, several enterprise buyers might find the price laughable at scale: at $30/user/month, a 5,000-employee company would pay $1.8M per year. Data sharing is another big blocker, given that enterprises are hesitant to share sensitive data with language models (despite enterprise AI offerings explicitly saying they won't use customer data for training).
That said, these are solvable problems, and the focus with which large SaaS companies are building AI features means they will be unblocked in the near term. Which brings us back to the moat problem: generative AI startups building for enterprise customers need to figure out strong moats if they want to keep thriving in the face of SaaS incumbents' AI features.
Let's start with the obvious non-moats: taking a large language model off the shelf and building a thin value proposition on top of it (e.g., a better user interface, plugging into one data source) does not create a long-term, sustainable advantage. These are fairly easy to mimic, and even with first-mover advantage, you will either lose to an incumbent (with easier access to data or more flexibility with interfaces) or end up in a pricing race to the bottom.
Here are some non-exhaustive approaches to building a moat around enterprise AI products.
1. Domain / vertical specialization
Some domains / verticals are better suited to AI applications than others. For example, building on top of CRM software is really hard to defend, because CRM companies like Salesforce have both the data connections and the control over interfaces to do it better. You could come up with really smart innovations (e.g., a LinkedIn plugin that auto-drafts outreach emails using CRM data), but first-to-market players don't always win the market.
Legal is one example of a vertical where AI startups could shine. Legal documents are long, take an incredible number of person-hours to read, and frustrate everyone involved. Summarizing and analyzing contracts, Q&A over contract content, summarizing legal arguments, and extracting evidence from documents are all time-consuming tasks that LLMs could handle effectively. Casetext and Harvey.ai are a couple of startups with copilot products catering to lawyers, with custom experiences built specifically for legal use cases.
Another vertical in dire need of efficiency is healthcare. There are several challenges with deploying AI in healthcare, including data privacy and sensitivities, a complex mesh of software (ERP, scheduling tools, etc.) to work with, and a lack of technical depth and agility among the large companies that build healthcare products. These are clear opportunities for startups to launch products quickly and use the first-to-market position as a moat.
2. Data / network effects
Machine learning models (including large language models) perform better the more data they have been trained on. This is one of the biggest reasons why, for example, Google Search is the world's most performant search engine: not because Google has indexed all the pages in the world (other search engines do that as well), but because billions of people use the product and every user interaction is a data point that feeds the search relevance model.
The challenge with enterprise products, however, is that enterprise customers will explicitly prohibit SaaS and AI providers from using their data for training (and rightfully so). Enterprises have a lot of sensitive information, from customer data to company strategy, and they do not want it fed into OpenAI's or Google's large language models.
This is therefore a difficult moat to build, but it can work in certain scenarios. For example, content generated by AI tools for advertising or marketing purposes is less sensitive, and enterprises are more likely to allow this data to be used to improve models (and consequently their own future performance). Another approach is a non-enterprise version of your product where usage data is opted in for training by default; individual and SMB users are more likely to be okay with this.
3. Bring in multiple data sources
The hardest part of applying large language models to a specific enterprise use case is not picking a model off the shelf and deploying it, but building the pipes that funnel the company's relevant data to the model.
Let's say you are a large company like Intuit that sells accounting and tax software to SMBs. You support tens of thousands of SMB customers, and when one of them reaches out with a support question, you want to provide a customized response. Very likely, data on which products the customer uses sits in one internal database, data on the customer's latest product interactions sits in another, and their past support history lives in a helpdesk SaaS product. One way for generative AI startups to build a moat is to identify specific use cases that require multiple data sources not owned by a single large SaaS incumbent, and build the integrations to pipe this data in, as sketched below.
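The moat is in the plumbing, not the model call. Here is a sketch of what that piping might look like for the hypothetical support scenario above; every data source and helper below is invented for illustration:

```python
# Sketch: funneling multiple internal data sources into one LLM prompt.
# Every data source and helper below is hypothetical, for illustration only.
from openai import OpenAI

client = OpenAI()

def fetch_products(customer_id: str) -> str:
    # Hypothetical: query the internal product-subscription database.
    return "QuickBooks Online, TurboTax"

def fetch_recent_activity(customer_id: str) -> str:
    # Hypothetical: query the product-usage database.
    return "Filed Q2 payroll taxes; invoice sync failed twice this week"

def fetch_ticket_history(customer_id: str) -> str:
    # Hypothetical: call the helpdesk SaaS provider's API.
    return "Two past tickets about bank-feed connection errors"

def draft_support_reply(customer_id: str, question: str) -> str:
    # The integrations above, not this model call, are the defensible part.
    context = "\n".join([
        f"Products: {fetch_products(customer_id)}",
        f"Recent activity: {fetch_recent_activity(customer_id)}",
        f"Past tickets: {fetch_ticket_history(customer_id)}",
    ])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Draft a helpful, specific support reply using the customer context."},
            {"role": "user", "content": f"{context}\n\nCustomer question: {question}"},
        ],
    )
    return resp.choices[0].message.content
```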
This has worked incredibly well in other contexts: the whole market of Customer Data Platforms, for example, emerged from the need to pull data from multiple sources into a centralized view of the customer.
4. Data siloing
Large enterprises do not want to expose sensitive data to models, especially models owned by companies that are competitors or have too much leverage in the market (i.e., companies with whom enterprises are forced to share data due to a lack of alternatives).
From the YC W23 article, CodeComplete is a great example of a company that emerged from this pain point:
The idea for CodeComplete first came up when its founders tried to use GitHub Copilot at Meta and the request was rejected internally due to data privacy considerations. CodeComplete is now an AI coding assistant that is fine-tuned on customers' own codebases to deliver more relevant suggestions, with the models deployed directly on-premise or in the customer's own cloud.
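The on-premise pattern generalizes beyond code assistants: the fine-tuned weights live inside the customer's perimeter, and prompts never leave it. A minimal sketch using the Hugging Face transformers library; the model path is a placeholder for whatever fine-tuned model a customer has deployed locally:

```python
# Sketch: completions from a model fine-tuned on a private codebase, loaded
# from local disk so prompts and code never leave the customer's infrastructure.
# The model path is a placeholder for locally deployed weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/code-assistant-finetuned"  # on-premise weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

def complete(code_prefix: str, max_new_tokens: int = 64) -> str:
    """Generate a code completion entirely within the local environment."""
    inputs = tokenizer(code_prefix, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(complete("def parse_invoice(path: str):"))
```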
5. Build a fuller product
For all the reasons above, I am personally skeptical that the majority of standalone AI applications can become businesses with long-term moats, particularly those targeting enterprise customers. Being first to market is definitely a play and could indeed be a good path to a quick acquisition, but the only real way to build a strong moat is to build a fuller product.
A company focused on just AI copywriting for marketing will always run the risk of being competed away by a larger marketing tool, like a marketing cloud or a creative-generation tool from a platform like Google or Meta. A company building an AI layer on top of a CRM or helpdesk tool is very likely to be mimicked by an incumbent SaaS company.
The way to solve for this is to build a fuller product. For example, if the goal is to enable better content creation for marketing, a fuller product would be a platform that solves core user problems (e.g., the time it takes to create content, having to create multiple sizes of the same asset) and then includes a powerful generative AI feature set (e.g., generate the best visual for Instagram).
I'm excited about the amount of productivity generative AI can unlock. While I personally haven't experienced a step-function productivity jump so far, I do believe it will happen in the near-to-mid term. Given that the infrastructure and platform layers are getting reasonably commoditized, most of the value from AI-fueled productivity will be captured at the application layer. Particularly in the enterprise products space, I think a large share of that value will go to incumbent SaaS companies, but I'm optimistic that new, fuller products with AI-forward feature sets, and consequently meaningful moats, will emerge.