Strategy

The Real AI Risk Isn't Incremental Progress. It's a Step Function.

Jason Lemkin's warning about 'jaspering' reveals why workflow integration isn't a moat when AI models make a step-function leap. The real risk: leaders who don't use AI daily can't see the existential threat coming.

TL;DR: When AI reasoning jumps from minutes to seconds, workflow moats collapse. Leaders who don't use AI daily won't see it coming.

If you are confident your AI strategy is safe, there's a good chance you don't use these tools enough.

I was recently listening to Jason Lemkin and Harry Stebbings debate Harvey's $160M raise at an $8B valuation on 20VC. Reasonable or insane? Defensible moat or hype-cycle excess? What mattered wasn't the answer. It was how differently they were reasoning about risk.

Jason made an uncomfortable point. So uncomfortable that Harry mostly talked past it.

Jason's argument had nothing to do with Harvey specifically. It was about instability. And about how most people, especially investors and senior leaders, fundamentally misunderstand what a step-function change in AI actually looks like when you're the one doing the work.

The Jasper problem

Jason used a phrase that should terrify anyone building on top of LLMs: "getting jaspered." To understand what he means, you don't need a post-mortem. You just need to understand the shape of what happened.

Jasper was a breakout success in the first wave of generative AI. It grew fast, raised aggressively, and became the default reference for "AI-powered content." It did everything right according to the playbook: strong GTM, clear positioning, real customer value.

And then the ground shifted.

As foundation models improved, Jasper's core differentiation stopped feeling like a product and started feeling like a feature. Not because Jasper executed poorly, but because the capability it wrapped became native. What once justified a standalone workflow began to feel redundant.

The result wasn't failure. Jasper is still a real company with real revenue. The result was something more subtle and more dangerous: a trajectory break. Growth continued, but the narrative power collapsed. The app layer went from "obviously necessary" to "why does this exist?"

Nothing went wrong at Jasper. That's the point. The value it captured was contingent on a temporary limitation in the models.

Jason's argument is that most AI applications today sit on similarly unstable ground. Fin. Sierra. Harvey. Impressive products. Genuinely useful. Meaningfully better than what came before. But deeply coupled to the current limits of reasoning speed, cost, and reliability.

Those limits are not fixed.

We haven't seen deep reasoning yet

Here's the part Harry struggled with. When he tried to rebut Jason, he talked about models improving 3–5%. That's not what Jason was talking about. He wasn't describing improvement. He was describing a different class of capability.

Jason made this point by referencing tools like Lovable, Replit, and Cursor. When you actually spend time inside those systems, the limits are obvious. You feel the friction. You see how often human intervention is required. You notice the retries, the guardrails, the sanity checks.

Today, deep reasoning is slow, brittle, and expensive. Long-context analysis takes minutes. Setups are complex. Hallucinations still break trust. You wait, you retry, you manually compensate. That's fine for demos. It's tolerable for async work. It is not acceptable for operators trying to make decisions in the flow of their job.

Many AI products exist because of these constraints. They add structure where models are fragile, workflow where reasoning is slow, and interfaces where reliability is inconsistent. In other words, they are renting today's limitations.
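To make "renting today's limitations" concrete, here is a minimal sketch in Python of the scaffolding pattern many of these products are built around: retry loops, sanity checks, and a human-review fallback. The names here (call_model, looks_grounded) are hypothetical placeholders, not any vendor's actual API.

```python
# Minimal sketch, not any vendor's actual code: the kind of scaffolding an
# app-layer AI product wraps around today's slow, unreliable models.
# `call_model` and `looks_grounded` are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    needs_human_review: bool


def call_model(prompt: str) -> str:
    """Stand-in for a slow, occasionally wrong model call."""
    return f"draft answer for: {prompt}"


def looks_grounded(draft: str) -> bool:
    """Stand-in for a sanity check: citation validation, schema checks, etc."""
    return bool(draft.strip())


def answer_with_guardrails(prompt: str, max_retries: int = 3) -> Answer:
    # Retries, validation, and the human fallback exist only because the
    # underlying model is slow and sometimes wrong. If reasoning became
    # instant and reliable, this whole layer would be redundant.
    for _ in range(max_retries):
        draft = call_model(prompt)
        if looks_grounded(draft):
            return Answer(text=draft, needs_human_review=False)
    return Answer(text="", needs_human_review=True)


if __name__ == "__main__":
    print(answer_with_guardrails("Summarize the termination clauses."))
```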

Now imagine this changes. Not gradually. Not marginally. But suddenly.

Complex, multi-hop, long-context reasoning goes from 10–15 minutes to one second. Hallucinations collapse. Models reason directly over your data. No elaborate setup. No fragile chains. No human babysitting.

That is not "a better Fin." That is a category collapse. What felt magical five minutes ago now feels quaint.

Speed is the moat killer

This is the part many VCs miss because they don't live inside the workflows they evaluate.

Most so-called AI moats today are built on friction: long setup times, brittle integrations, user training, prompt tuning, human-in-the-loop review, and slow feedback cycles. These feel like lock-in only because the alternative is equally painful.

If a system can reason over all of your data in milliseconds, those frictions stop being defenses. Setup collapses from weeks to minutes. Training shrinks from months to hours. Output quality jumps by an order of magnitude. Latency disappears.

At that point, integration is not a moat. Workflow familiarity is not a moat. Switching costs are not a moat. They exist only while the pain is tolerable.

Jason put it bluntly: when something is 10x better, customers don't wait for renewals. They don't debate migration plans. They move immediately. That is how operators behave when friction vanishes.

This is why "3–5% better" is the wrong mental model

This is where the conversation really broke down. Harry's response stayed anchored to incremental improvement: he came back with a scenario about "if Anthropic gets 3–5% better...", which showed he had fundamentally missed the point. Jason was talking about discontinuity. That gap matters.

Jason gave Harry grief for being a VC and not an operator: someone who hasn't felt what a 10x improvement in deep reasoning would mean for people doing the work. You cannot reason about step-function risk from a board deck. You have to feel the latency.

Incremental thinkers believe GTM compounds, workflow lock-in defends, and switching costs protect incumbents. Step-function thinkers understand that switching costs collapse when the value jump is massive, lock-in only exists while pain is tolerable, and nobody stays loyal to "good enough" when "magic" shows up.

This is not theoretical. We have already seen it happen once in AI with Jasper. We will see it again.

The operator blind spot

The real takeaway from this debate is not about Harvey's valuation. It's about who understands the risk.

Operators who use AI every day feel the friction. They know where the slowness is. They know where hallucinations still break trust. They know where they are compensating manually. Most leaders do not. Most investors definitely do not.

Which is why Jason's frustration came through so clearly. When the conversation turned to coding with deep reasoning, he told Harry outright, "You don't even know what you're talking about here, Harry." You have to experience the current limitations to understand what 10x better actually means.

Why leaders need their hands dirty

This is the uncomfortable conclusion. If you are a manager, executive, or investor making decisions about AI, and you are not personally using these tools day to day, you are flying blind.

You will overestimate moats that disappear with speed, underestimate how fast PMF can evaporate, and miss existential risk until it is too late. AI does not just threaten jobs at the bottom of the org chart. It threatens decision-makers who mistake familiarity for understanding.

The leaders who survive this transition will not be the ones with the best frameworks. They will be the ones who noticed when five minutes turned into one second and realized everything just changed. In a world where step-function changes can make entire categories obsolete overnight, that hands-on experience is not optional. It's existential.