One of the fascinating things about what's happening in AI is that, rather than a few distinct moments of technological disruption that unlock new opportunities for startups (e.g., when Apple launched the App Store, or integrated a GPS chip into the iPhone), I believe we're going to have a rolling thunder of AI breakthroughs that catalyze startup opportunities.
Yes, it's certainly true that as the foundation models progress from 3 to 4 to 5, etc., we will mark time in retrospect by these milestones and by how each step-function improvement unlocked increasingly complicated tasks that LLMs can automate. What feels different here is that single research papers will also unlock new opportunities.
To take two recent examples:
What preceded ElevenLabs? https://arxiv.org/abs/2305.07243
What preceded Krea.ai? https://latent-consistency-models.github.io
The combination of both broad-based (foundation model upgrades) and narrow (research breakthroughs) step-function changes will continue to unlock brand new AI opportunities.
As Ben wrote in his comment on my last post:
One concept I like is that while the raw capacity of something like an LLM is increasing continuously over time, there's a hard threshold at which it crosses from being [not at all useful] to [useful] for a given application. Until we get true human-level AI-generated audio, ElevenLabs is impossible...but the second we do, it's a 10x improvement. Feels like part of the reason it's harder to spot these opportunities in advance.
So if you are a founder worried you’ve missed the window, don’t be. It’s a land grab right now, but a single research paper can mean the difference between [not at all useful] and [useful], and therefore a new opportunity unlock. Obvious in hindsight, but the timing is tricky to predict. It's going to be an exciting (and wild) few years.
The flip side of these “discontinuous innovations” is that an approach that seems to create a lot of value now may be left in the dust tomorrow. For one small example, RAG did not exist as a common dev term a year ago. What new design patterns will emerge within the next year, let alone the next 3, 5, or even 10 years? I think this creates an interesting conundrum when investing (money or personal attention) at the AI infrastructure or tooling layer. But if one focuses on identifying the right business problems (aka the application layer), then surprisingly emergent, radical tooling changes can be managed, hopefully without huge user/customer impact.
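To make the RAG example concrete: the pattern is simply "retrieve relevant documents, then prepend them to the model prompt." Here is a minimal sketch of that flow; the function names and the toy bag-of-words retriever are my own illustration (standing in for a real embedding model and vector store), not any particular library's API.

```python
# A toy sketch of the RAG design pattern: retrieve context, build a prompt.
# Real systems swap in learned embeddings, a vector DB, and an LLM call.
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a lowercase bag-of-words count vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query; keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs, k=2):
    # In a real system this prompt would be sent to an LLM for generation.
    context = "\n".join(retrieve(query, docs, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "ElevenLabs generates human-level AI audio.",
    "Latent consistency models speed up image generation.",
    "RAG augments an LLM prompt with retrieved documents.",
]
print(build_prompt("What is RAG?", docs, k=1))
```

The point of the sketch is how thin the pattern is: a year from now, a single paper could make this whole retrieve-then-prompt layer look as dated as hand-rolled feature engineering.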
This is true. The 'delta' between the hyped expectations of AGI arriving imminently and reality is too wide this time. Still, there's a lot to show for it.