And then what?
AI, price caps, and the cost of not finishing a thought

A guy on a comment thread the other day was explaining why you shouldn’t trust AI. His argument, in full: he’d asked a chatbot the same question twice and gotten two different answers. One was iffy. One was wrong. Therefore — and I’m paraphrasing only slightly — don’t trust the technology, and don’t trust anyone who says you should.
Then he went to bed.
I don’t bring this up to dunk on the guy. He’s not wrong that large language models are probabilistic, not deterministic. He’s not wrong that they hallucinate. He’s not wrong to be cautious. What he is, though, is done thinking. He arrived at a feeling — distrust — and mistook it for a conclusion. Then he declared the case closed and logged off.
This is not a technology problem. This is a thinking problem. And it’s one of the most corrosive habits in American life right now — not because it’s new, but because it’s everywhere, and because we’ve gotten so comfortable with it that we barely notice when we’re doing it.
The Backwards Thinkers
Here’s the pattern. You start with a feeling — fear, outrage, moral conviction, tribal loyalty, whatever — and then you work backward to construct an argument that justifies the feeling you already had. The conclusion comes first. The reasoning is just decoration.
This isn’t the same as having principles. Principles are starting points you reason forward from: I believe in individual liberty, so how should I think about this regulation? I believe people deserve a fair shot, so what does that mean for education policy? You follow the logic wherever it goes, even when it lands somewhere uncomfortable. You hold the principle, but you let the evidence shape the application.
Reasoning backward is the opposite. You start at the destination — I don’t like this technology, this price is unfair, this group is wrong — and you cherry-pick whatever evidence, anecdote, or moral framework gets you there fastest. It feels like thinking. It has all the surface features of thinking. But it skips the part where you might actually change your mind.
Here’s a simple test for which one you’re doing: can you name a piece of evidence that would change your position? If you can — if there’s some fact, some study, some outcome that would make you say “okay, I was wrong about this” — you’re reasoning forward. You’re holding a principle and following it honestly. If you can’t, if every conceivable piece of new information just gets absorbed into the conclusion you already hold, you’re not reasoning. You’re performing.
This shows up in several distinct flavors, each wearing a different disguise:
Fear as conclusion. “AI gave me a wrong answer once, so the whole technology is untrustworthy.” The feeling is discomfort with something new. The stopping point is a blanket rejection that doesn’t require learning anything.
Magical thinking about institutions. “Groceries are too expensive, so cap the prices.” The premise — usually unexamined — is that government can mandate outcomes into existence just by wanting them hard enough. The mechanism between “pass this law” and “problem solved” goes unexplored because the feeling of having a solution is more satisfying than the work of finding one that functions.
Moral identity. “I oppose X because opposing X makes me a good person.” The position isn’t the product of analysis — it’s a costume. It signals virtue to the right audience, and the righteous feeling it produces becomes self-reinforcing. Actually examining whether the position works would threaten the identity, so you don’t.
Tribal sorting. “My people believe Y, so I believe Y.” The reasoning runs from belonging to conclusion, not from evidence to conclusion. You don’t need to understand the issue — you just need to know which side you’re on.
Different motivations, same failure mode. Nobody plays it forward. Nobody asks the only question that actually matters about any policy, technology, or position: and then what?
The Test
This is the question that separates a position from a posture.
Cap grocery prices — and then what? Well, grocery stores operate on margins of one to three percent. Cap prices below what the market can bear and stores don’t just eat the loss — they cut product lines, reduce quality, or close entirely. The small neighborhood grocery goes first. Supply tightens. Shelves thin out. The people the policy was designed to help are now standing in longer lines with fewer options, or driving farther to a big-box store because their local shop shut down.
This is not speculation. Economists across the political spectrum agree that price controls on competitive markets produce shortages and make underlying problems worse. The pattern has held every time price controls have been tried, from Nixon’s wage and price freeze in the 1970s to Venezuela’s catastrophic controls on basic goods. Ruy Teixeira, in what turned out to be one of his final pieces at The Liberal Patriot before the site closed its doors, described the current Democratic grab bag of price caps and controls as policies whose purpose is “mostly, if not solely, to signal that Democrats want to do something about the problem” — not to actually solve it. Even the people proposing these policies seem to know they won’t work. But they feel like answers, and in a culture that treats feelings as arguments, that’s enough.
The problem isn’t that people want affordable food. Of course they do. The problem is that wanting something isn’t a plan. The feeling — this is too expensive, someone should fix it — arrives, and it feels so obviously right that the hard work seems unnecessary. Why would you need to trace a supply chain, or understand the difference between a price shock caused by drought and one caused by trade policy, or grapple with the tradeoffs of every possible intervention, when the answer is right there? Just make it cheaper. Done.
Except it’s not done. It’s never done. Because reality doesn’t care what you wanted. Reality only cares about the chain of consequences your decision set in motion.
Now apply that same test to AI.
Refuse to engage with AI — and then what? The technology develops anyway. It gets integrated into industries regardless of whether you personally approve. The people who learned to use it — who understand its failure modes, who know when to trust it and when to check its work — have a massive advantage over the people who spent those years composing angry comment-thread manifestos. You don’t stop the wave by refusing to learn how to swim.
Declare AI untrustworthy because it gave you a bad answer — and then what? By that standard, you’d also have to declare Google untrustworthy (SEO-gamed garbage on page one), Wikipedia untrustworthy (edit wars, incomplete citations), your doctor untrustworthy (misdiagnoses happen), your accountant untrustworthy (errors happen), and the newspaper untrustworthy (corrections run daily). We navigate imperfect information all the time. We’ve built an entire civilization on the skill of taking flawed inputs and making reasonable decisions anyway. The question was never “is this source perfect?” It was always “what’s the failure mode, and what’s my process for catching it?”
The “and then what” test isn’t complicated. It’s just work — the boring, unglamorous work of following a chain of reasoning past the first step, even when the first step already felt satisfying.
The History of Not Adapting
I keep coming back to the historical analogies, not as gotchas but as genuine data.
The painters who hated the camera weren’t wrong that photography would destroy portrait painting as a livelihood. The carriage makers weren’t wrong that automobiles would end their industry. The human computers — yes, that was a job title — weren’t wrong that electronic computers would replace them. The original Luddites weren’t irrational cranks smashing machines for fun. They were skilled textile workers watching automation eliminate their craft, and their fear was entirely justified.
In every single case, the fear was accurate. The disruption was real. The pain was legitimate.
And in every single case, the people who turned that fear into adaptation fared better than the people who turned it into identity.
That’s the distinction that matters. Fear is an input — maybe the most important input you can have. It tells you something powerful is happening and you need to pay attention. Fear that drives you to learn, to retrain, to understand the new landscape, to figure out where you fit in the changed world — that’s not just useful, it’s essential. That kind of fear is really just respect: respect that something is significant enough to demand your engagement.
But fear that hardens into a stance — fear that becomes a tribal marker, a personality trait, a substitute for the engagement it should have prompted — that’s not caution. That’s calcification. And the world doesn’t wait for calcified people to catch up.
What to Do Instead
This isn’t a Silicon Valley “learn to code” dismissal. The pace of AI development is genuinely disorienting. The labor implications are real and, for some people, terrifying. The environmental costs of the data center buildout are real. The potential for misuse — deepfakes, synthetic propaganda, surveillance infrastructure — is real. There are serious people raising serious concerns, and those concerns deserve serious responses.
But serious is the key word. Oren Cass at American Compass has spent years arguing that when labor markets shift, the answer isn’t to mandate the outcome you want — it’s to invest in the conditions that help people adapt. Build workforce training that actually connects to available jobs. Fund education systems that prepare people for the economy that exists, not the one we’re nostalgic for. Create the conditions for productive work across a range of skills and geographies, rather than assuming everyone will become a software engineer or be left behind. Cass calls this “productive pluralism,” and while his framework is aimed at manufacturing and trade, the logic applies perfectly to the AI disruption: you can’t wish the disruption away, but you can invest in the inputs that help people navigate it.
Tyler Cowen at Marginal Revolution has been arguing for years that the defining economic question of this era isn’t whether AI will displace jobs — it will — but whether we’re building the systems that help people land on the other side. Recent research he’s highlighted suggests the answer is cautiously encouraging: workers in AI-exposed occupations who go through retraining programs see real earnings gains, though the returns are better for those who pursue broad skills rather than chasing AI-specific roles. The adaptation isn’t impossible. But it doesn’t happen by accident, and it doesn’t happen at all for people who opt out of the conversation entirely.
That’s the adult version of the conversation. Not “AI is fine, stop complaining.” Not “AI is dangerous, ban it.” But: given that this is here and accelerating, what are we going to invest in so people can actually deal with it?
The same logic applies to the grocery problem. You want affordable food? Great — so does everyone. Now do the work. Improve supply chain resilience. Reduce regulatory barriers that make it harder for small producers to compete. Target subsidies to the people who actually need them instead of distorting the entire market with blunt-instrument price caps. These are harder policies to design, slower to show results, and much less satisfying to announce at a press conference. But they work. They address the inputs — the conditions that produce the outcome you want — rather than trying to mandate the outcome directly and hoping the mechanism figures itself out.
Freddie deBoer, writing from a very different political position than Cass, has been making a parallel argument for years: that progressive politics has developed a habit of substituting what feels morally correct for what works practically. You can want desperately to close achievement gaps in education, but if your theory of how to do it doesn’t survive contact with the evidence, your wanting isn’t a plan — it’s a pose. Matt Yglesias at Slow Boring has built an entire publication around the same frustration: that boring, pragmatic, evidence-driven policy keeps losing to emotionally satisfying gestures that don’t accomplish anything.
These are writers who disagree with each other on plenty. But they share a core instinct: that the feeling is where thinking starts, not where it stops.
The Deeper Problem
Here’s what’s really going on under all of this, and it’s not about any single technology or policy. We’ve developed a culture that confuses feelings with arguments. Not feelings as inputs to arguments — that would be fine, that would be human. But feelings as replacements for arguments. The feeling arrives, and the thinking stops.
This isn’t a left-right thing, or a smart-dumb thing, or a young-old thing. Progressives do it when they propose mandating outcomes without grappling with mechanisms. Conservatives do it when they invoke tradition as a conversation-ender without examining whether the tradition still serves the conditions it was designed for. Tech enthusiasts do it when they wave away legitimate concerns with “progress is inevitable.” Tech skeptics do it when they treat their discomfort as a veto.
The common thread is the same every time: someone arrives at step one of a multi-step problem, finds step one emotionally satisfying, and decides the remaining steps are optional.
They’re not. The remaining steps are where the actual answer lives. The remaining steps are the difference between a position and a posture, between a policy and a bumper sticker, between someone who’s thinking and someone who stopped.
What I Might Be Wrong About
Maybe the pace of AI really is different this time. Maybe the adaptation curve is steeper than anything we’ve faced, and the gap between “this technology exists” and “most people can use it productively” is wider than optimists like me assume. Maybe “just learn to use it” is glib when you’re a fifty-five-year-old paralegal watching your entire job description get automated in eighteen months.
Maybe there are domains — education, criminal justice, medicine — where the cautious voices aren’t being fearful at all, but are doing the harder, slower, more important work of insisting we get it right before we move fast. The person who says “we shouldn’t use AI to make sentencing recommendations until we understand the bias in the training data” isn’t reasoning backward from fear — they’re reasoning forward from a principle, and that’s exactly the kind of thinking I’m arguing for.
And maybe I’m overweighting individual agency in a system that’s moving faster than individuals can respond. The original Luddites didn’t fail because they lacked grit. They failed because the economic forces they were up against were structural, and no amount of personal adaptation could change the macroeconomic reality. If AI displaces labor at a pace and scale that outstrips any retraining program we can build, then the “adapt” crowd — my crowd — needs a better answer than bootstrapping.
I don’t think any of that invalidates the core argument. But it should make me honest about where my own certainty might be doing the very thing I’m criticizing: arriving at a comfortable conclusion and skipping the hard parts.
The Only Question
The technology is here. It’s not going anywhere. Neither are the forces that make groceries expensive, or the disruptions that make people afraid, or the complexity that makes easy answers so seductive.
The relevant question was never how do you feel about it. It was never do you like it. It was never does it scare you. The relevant question is always, only, relentlessly: given that this is real, what are you going to do?
And then what?
And then what?
And then what?
Price caps don’t make groceries cheaper. Refusing to engage with AI doesn’t make it go away. Outrage about immigration doesn’t secure a border. Nostalgia for manufacturing doesn’t reopen a factory. Declaring your moral superiority doesn’t feed anyone, teach anyone, or protect anyone. At some point, the feeling has to give way to the work — or the feeling is all there is.
I don’t know what the right AI policy looks like. I don’t know how to retrain everyone who needs retraining. I don’t know how to weigh the environmental costs of the data center buildout against the productivity gains. These are genuinely hard problems, and anyone who tells you they have clean answers is selling something.
But I know that the people who will figure it out are the ones who got past the feeling and started working the problem. Not the ones who went to bed.

