In my last post I talked through the big-stack game of poker the LLM players are locked in. So how does the game play out? There is a sentiment you hear in the valley that these players are doomed to razor-thin margins, and that we should all be grateful for their hard work and sacrifice. I'll take the other side: long term, this is an enormous risk for a whole class of startups being funded today.
It seems inevitable that as the underlying foundation models become more powerful, the LLM players will seek to justify the enormous investment that has gone into training their models by moving "up the stack," evolving from an API or chat interface into async agents.
I can't help but think back to Twitter's early incarnation as a platform, which then gradually competed with its own platform developers. Right now, OpenAI/Anthropic/et al. have an API consumable by almost anyone, but it's not hard to imagine a world in which they begin to compete with some of their API developers. I'd guess the async coding agents are most vulnerable to this in the near term, given the seemingly unbounded economic value of owning that use case and the product-market fit LLMs have already found with coding.
But this will extend beyond coding. The most AI-optimistic amongst us (Leopold Aschenbrenner does an excellent job articulating this view) believe that as the underlying LLMs get more powerful, they will reach a point where they can power "drop-in" async AI workers that act like superintelligent remote employees with code-creating superpowers. In this view, the AI workers obviate the need for most AI application software.
As an example, imagine a future enterprise buying decision: Why buy a specialized AI application that lets you automate internal IT ticket response, when the foundation model companies offer an AI agent that, if you point it in the right direction with a job spec, will read your knowledge base, build its own integrations to connect to your existing systems of record (e.g., Jira), and then handle all the internal requests automatically?
Some might laugh at this scenario, but I’d suggest that if you are a B2B founder building an AI-native application, you NEED to do the thought experiment of assuming it plays out over the next 3-5 years as you consider the strategy for your company. Not just because of the risk of this scenario happening, but because any progress down the path of this scenario will meaningfully increase competition for your company (as I describe in #3 below). So how do you future-proof your B2B AI application-layer company?
The best answer I have, in the face of this fast-changing future, is three-fold:
1. A network effect. If you've got one, run like hell to get the flywheel to tip. And email or DM me :). By the way, the last investment I led was because of a cold email from a founder who read one of my posts, so I promise you: it works.
2. Capture proprietary or hard-to-access data, either data you accrue as you grow or data you have access to through some other means. This forms a moat.
3. Execute like hell and land-grab an overlooked vertical. The foundation model companies will inevitably focus on the big markets (e.g., coding, as discussed). But beyond those, it’s hard to imagine the foundation models ever developing a GTM and packaged offering to go after the smaller (but still large!) verticals, which require more care and packaging for a less sophisticated customer. So if you are going after these other verticals, assume it will be more symmetrical warfare with other focused startups. The difference is that as the underlying LLMs continue to improve, it will become a lot easier for other startups to compete. Imagine what an “LLM wrapper” startup can accomplish now versus two years from now. So you have to assume more startup competition and more homegrown competition. Eventually, for example, it might take just one employee deciding to train an Anthropic agent to compete with the offering you took years to get right. Being obsessively customer-focused is always critical. If anything, that obsession will lead you to find more workflows to automate faster, which means you’ll add more value out of the box than anyone else. That might just be enough to hang your hat on.
Completely agree, would love to chat
Hi Sarah, thanks for articulating these provocative ideas so well! I'm curious whether you've also considered the workflows that are already needed, and will be needed even more, as the JTBD shifts.
For instance, I imagine customer support is one such market you had in mind. We’ve seen a shift where companies need less of the traditional, extensible helpdesk features and more support for changed existing workflows (e.g., AI-human collaboration, ticket escalation rules, and AI-driven knowledge bases), new workflows (e.g., explainability and control over the AI and its responses, easy back-office integration), and new needs now that they have more time (e.g., insights to be strategic).
As a practical example, we automate on average 65% of tickets; our customers still pay more for their helpdesk than for us, but they are more and more open to changing their system of record.
In summary, how do you think the LLM companies will attack these verticals that seem obvious but require a lot of workflows?
I can see the plug here and in multiple systems you've described, but I only see this for the enterprise. I imagine I'm overlooking something.
Thanks once again.