You are lying on your sofa and realize your phone is far away. You just call out to your best mate, Siri. Music plays. No complicated steps in between.
Bubble gum. Sweet, insipid, a throwback to simpler times. There's a 90s nostalgia baked into the word itself, a longing for when things felt uncomplicated. But bubble gum also pops.
I think it's worth mentioning my friend M, who handed me this interesting thought candy to chew on.
Services: The New Software — Sequoia Capital
Sequoia Capital recently argued that the next trillion-dollar emerging company won't sell productivity software — it will sell the work itself: AI drafting contracts, generating marketing campaigns, or managing logistics.
AI as a faster, cheaper executor of existing processes. Like Siri, a reductionist approach to providing services.
In the short term, yes. This will definitely be a money-making playbook for a few. I agree.
But the article came across as contradictory to me. The companies it describes would be the next generation selling 'we can replace labor with AI.' But I think the process itself would eventually collapse. We would reach a point where we don't even have to think about the mid-process. The steps themselves dissolve. If so, the article's framework breaks down: there is no 'work budget' left to capture. You can't sell 'we'll do your NDA drafting' if the conditions that made NDAs necessary have been restructured at a deeper level.
Even then, I think the human role would still be needed — for deciding the end product. Should this product exist?
I think of it this way: the system constantly demands clarity and heads toward oversimplification, abstracting everything. This is not the process-level judgement this article tries to explain. It is something more upstream: deciding what holds value at all.
That said, at scale, one system could come to dominate, depending on how capable it is at simplifying the end results. The article frames this as an exciting opportunity (probably to encourage founders and investors), but I have a repellent gut feeling that it might actually be describing the preconditions for monopoly.
I think of this as layers. Sequoia imagines multiple companies emerging to create outcomes. But the technology they share would run on the same AI backend. Any company built on top is easily duplicated at the front end. So the monopoly forms at the foundational level: not over a product, but over something like a new kind of functional infrastructure. Nothing to unbundle.
When intelligence becomes infrastructure, competition moves upward while power concentrates downward. The real concentration of power would not be at the service layer. It would be at the intelligence layer that makes all the services possible. One company powering all functions, with interchangeable front-ends.
The basis of economics has always been scarcity of resources. But if AI makes cognitive labor effectively infinite, the traditional sense of choice-making disappears. This is what scares people. This is why people are afraid of losing white-collar jobs.
The AI bubble (a cringe way to phrase it; I mean the rush, the funding, the vertical plays) instantly reminded me of comparative advantage. Even if one entity is better at everything, trade still makes sense, because everyone should specialize in what they are relatively best at. In the early phase of AI, basically now, that logic holds. Lots of companies make money. Investors fund vertical autopilots because each one captures a different niche. Founders build them because the opportunity is real. This is the phase the Sequoia article is mapping.
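The arithmetic behind that claim is worth spelling out. Here is a minimal sketch of the classic Ricardian setup, with purely hypothetical numbers (the producers, goods, and output figures are all invented for illustration): even when one producer is absolutely better at both goods, specializing by opportunity cost makes both goods more abundant than self-sufficiency does.

```python
from fractions import Fraction as F  # exact arithmetic, no float noise

# Hypothetical daily output at full specialization (all numbers invented):
A = {"contracts": F(10), "campaigns": F(20)}  # A is absolutely better at both
B = {"contracts": F(2),  "campaigns": F(8)}

# Opportunity cost of one contract, measured in campaigns forgone:
oc_A = A["campaigns"] / A["contracts"]  # 2 campaigns per contract
oc_B = B["campaigns"] / B["contracts"]  # 4 campaigns per contract
assert oc_A < oc_B  # A holds the comparative advantage in contracts, B in campaigns

# Baseline: each splits the day 50/50 and there is no trade.
base = (F(1, 2) * (A["contracts"] + B["contracts"]),   # 6 contracts
        F(1, 2) * (A["campaigns"] + B["campaigns"]))   # 14 campaigns

# Trade: A spends 65% of the day on contracts, B goes all-in on campaigns.
t = F(13, 20)
spec = (t * A["contracts"],                          # 6.5 contracts
        (1 - t) * A["campaigns"] + B["campaigns"])   # 7 + 8 = 15 campaigns

# Both goods are now MORE abundant, despite A dominating B outright.
assert spec[0] > base[0] and spec[1] > base[1]
print(spec)  # -> (Fraction(13, 2), Fraction(15, 1))
```

The point of the sketch is the assertion at the end: as long as the two producers' opportunity costs differ, some division of labor beats autarky, which is exactly the logic funding today's vertical AI plays.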
However, comparative advantage rests on an underlying assumption of scarcity. Eventually it would cease to function, because the precondition no longer exists. Once the underlying technology converges toward general capability, once one system can handle all these verticals at comparable quality, the comparative-advantage logic collapses, the ecosystem contracts, and you get convergence into a few dominant systems.
I would boldly say this convergence is relevant to the period when the AI loop closes and completes AGI.
The bubble isn't irrational. It's the market correctly responding to a temporary window of comparative advantage, one that will be destroyed by the very technology it's investing in.
Yet, in my view, this only applies to services and processes. If the processes collapse and comparative advantage evaporates, then the only thing that matters is what humans choose to care about. As I mentioned in my previous piece, the framework of problem-solving might no longer be meaningful. I foresee a future of value creation: expanding our resources rather than finding problems and handing out solutions.
And still, value creation inevitably relocates the scarcity problem. The new scarcity becomes the ability to decide which problems matter. When everyone can create, the ability to recognize what's worth caring about becomes the bottleneck. The question is where that scarcity relocates to, and whether the new scarcity is more or less humane than the current one. Moreover, value is relational by definition. Money was the primary frame for determining it for a long time. If we adopt my framework, in which market logic eventually collapses, the question gets more complicated.
If so, what determines value?
It was always humans.
Art is somewhat surprising if you think about it. A line Picasso drew is worth millions. The same line could come from anyone, but because 'Picasso did it,' it creates value and scrambles the traditional sense of economic worth. Banksy's 2013 stunt showed this clearly. At a stall near Central Park, an elderly man sold original canvases for about $60 apiece, and pedestrians had no clue they were real Banksy works. His total take was about $420; at market value, the pieces were worth more than $200k.
This reveals a striking reality: money is merely a representation of meaning and value.
I believe in humans. Now is the moment when positing meaning and framing matter more than ever.
What makes life meaningful might no longer be something finite. It might be the process itself. Say your personal assistant robot folds your laundry and cooks for you. These activities are not a matter of what you solved. For some people, the action itself, the pure action, is what matters.
Then what makes meaning?
Of course, it varies between individuals. But one of the greatest values I believe in is relationships and mutual recognition. We are social creatures, and much of what makes life feel meaningful is being seen and needed by other humans, not functionally needed but existentially needed. A big sentence from Hegel: consciousness only becomes real through encountering another consciousness. No AI system, no matter how capable, and no title, no matter how successful, closes that loop.
This is what I am focusing on at the moment: relational work. Human relationships are not necessarily about problem-solving, especially once abundance of resources arrives. A relationship is not something that can be simplified. It is more of a social negotiation, a process of revision and iteration. Which means the end product is not what counts; the action itself, the process, is.
The irony of it all: the more AI can do, the more the question of what's worth doing becomes purely, irreducibly human.