I pay $39 for the same AI that costs $200
The Chinese Claude that does everything my $200 subscription does—for a fifth of the price
Anthropic issued a warning last month.
Not to users.
To Chinese AI labs.
They caught DeepSeek, Moonshot, and MiniMax running what they called "industrial-scale distillation campaigns"—24,000 fake accounts, 16 million conversations, all designed to extract Claude's capabilities and clone them.
Most people read that as a security story.
I read it as a shopping list.
The $39 revelation
I was on a call with my AIAA students last week.
Someone asked about Claude Code alternatives.
The Max subscription is $200 a month.
Codex burns tokens fast.
The bills add up.
I told them about Kimi.
$39 a month.
Same features.
Same agentic workflows.
Same terminal integration.
Same everything.
Someone asked the obvious question: "Is it actually the same?"
I pulled up my terminal. Live demo.
Created an agent. Gave it a task.
Watched it iterate through the same loop Claude Code would use.
Write test, run test, fix error, repeat.
It took four minutes.
Claude Code would've taken three.
Is it 1:1 the same?
No.
But it’s more than good enough for 99% of people.
How Chinese labs cloned Claude (and why you should care)
Here's what Anthropic revealed.
Three Chinese labs spent months feeding millions of queries into Claude through fake accounts and proxy networks.
They weren't just using the API.
They were distilling it.
Recording every response, every reasoning chain, every tool use pattern.
Then training their own models to mimic Claude's behavior.
DeepSeek grabbed 150,000 exchanges focused on reasoning and politically sensitive queries.
Moonshot AI extracted 3.4 million examples of agentic behavior and tool use.
MiniMax harvested 13 million conversations targeting coding workflows.
The result?
Models that behave like Claude.
Reason like Claude.
Code like Claude.
But cost a fraction of what Claude costs.
The arbitrage opportunity
I'm not here to debate the ethics of model distillation.
I'm here to tell you what I learned the hard way.
Last year I nearly hit a $5,000 API bill in a single month.
Same usage patterns. Same agent workflows. Same everything I do now.
The difference? I was paying per token instead of paying flat.
I switched to Claude Code Max - $200 a month.
Saved 96%.
Then I found Kimi.
$39 a month.
Chinese company.
Trained on Claude's outputs.
Functionally identical for the agentic coding work I do every day.
The math isn't subtle.
When I use what
I don't use Kimi for everything.
Company work - AIAA, Applied Leverage, anything with client data - stays on Claude or Codex.
Anthropic has better data policies.
Better security.
But my personal projects?
My experimental automations?
The heartbeat scripts that run every 30 minutes to keep my Openclaw running and don't touch sensitive data?
That's Kimi territory.
$39 a month for workflows that would cost $200 on Claude Code Max.
Or $500+ if I were still paying per token.
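For context, a "heartbeat script" here is nothing exotic. A hedged sketch of the pattern, assuming a generic check command (the names are mine, not a real Openclaw API):

```python
import subprocess
import time

def heartbeat(cmd, interval_s=1800, max_beats=None):
    """Run `cmd` every `interval_s` seconds (default 30 minutes) to keep
    a long-running agent session alive. `max_beats` caps the loop for
    testing; in production you'd leave it as None and let cron or a
    service manager handle restarts.
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        subprocess.run(cmd, check=False)  # don't die on one failed check
        beats += 1
        if max_beats is None or beats < max_beats:
            time.sleep(interval_s)
    return beats
```

In practice I'd schedule the check from cron rather than a Python while-loop. The point is that it's boring glue code - exactly the kind of thing the cheaper model handles fine.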
The quality gap exists. It's real.
Claude still wins on nuanced reasoning and edge cases.
But for 80% of what I build - API integrations, cron jobs, CLI tools, workflow automation - the gap doesn't matter.
The code works. The tests pass. The deployment succeeds.
That's the bar.
What Anthropic won't say
Anthropic's blog post about Chinese distillation campaigns sounded like a warning.
In reality, it’s competitive signaling.
They're telling policymakers:
"Export controls on AI chips aren't working because our competitors are stealing our model outputs instead of training their own."
They're telling investors:
"Our IP is valuable enough that nation-states are spending millions to extract it."
But here's what they're not saying:
The clones are already here.
They work.
They're priced at a fifth of Claude's subscription cost.
And they're getting better every month, because the extraction hasn't stopped; it's just gotten harder to detect.
This is the current market reality.
The cost of loyalty
I like Anthropic.
I like their safety research.
I like their Constitutional AI approach.
I like that they published a detailed technical breakdown of how they caught the Chinese labs instead of just issuing a vague press release.
But I also like not burning money.
If you're paying $200 a month for Claude Code Max and not using the absolute edge of its capabilities - multi-step reasoning, nuanced analysis, high-stakes decision support - then you're probably overpaying.
Chinese labs built a $39 alternative by copying the 80% of Claude that handles routine tasks.
That 80% covers most of what developers actually do.
What happens next
Anthropic will keep improving detection.
Chinese labs will keep improving evasion.
The models will keep converging.
Meanwhile, the price gap stays wide.
$39 versus $200 isn't a rounding error.
It's a 5x difference.
For a solopreneur running agentic workflows at scale, that's the difference between "affordable automation" and "expensive hobby."
I'm not telling you to cancel Claude Code.
I'm telling you to run the math on what you're actually using.
If 80% of your AI work is routine - API calls, script generation, test writing, deployment automation - then Kimi handles that for $39.
Save the $200 subscription for the 20% where Claude's edge actually matters.
Or don't.
Keep paying for loyalty.
For ethics.
For the warm feeling of supporting the company that built the original.
Just know that somewhere in Beijing, a developer is running the same workflow you are.
They're paying $39.
And their code works too.
If you're running agentic workflows and haven't priced the alternatives, you're leaving money on the table. Reply and tell me what you're paying for AI tools - I read every response.



