The readme absolutely buries the features that are actually non-trivial: it runs a proxy to switch models mid-session, and does combined cost tracking across Anthropic and the other models you might be using. The LLM that wrote the readme never updated the general project description to highlight these features.
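The proxy-plus-cost-tracking idea boils down to routing requests by model name and summing spend across providers in one place. A minimal sketch of just the tracking half; all model names and prices here are made-up placeholders, not the project's actual config:

```python
# Sketch of combined cost tracking across providers.
# Prices and model names are illustrative placeholders only.
from dataclasses import dataclass, field

@dataclass
class CostTracker:
    # USD per 1M tokens as (input, output) -- placeholder numbers
    prices: dict = field(default_factory=lambda: {
        "claude-opus": (15.0, 75.0),
        "deepseek-pro": (0.5, 2.0),
    })
    totals: dict = field(default_factory=dict)

    def record(self, model: str, in_tok: int, out_tok: int) -> float:
        """Add one request's cost to the per-model running total."""
        pi, po = self.prices[model]
        cost = (in_tok * pi + out_tok * po) / 1_000_000
        self.totals[model] = self.totals.get(model, 0.0) + cost
        return cost

    def combined(self) -> float:
        """Total spend across every provider/model."""
        return sum(self.totals.values())

tracker = CostTracker()
tracker.record("claude-opus", 10_000, 2_000)   # expensive main thread
tracker.record("deepseek-pro", 50_000, 8_000)  # cheap sub-agent work
print(f"${tracker.combined():.4f}")            # prints $0.3410
```

The proxy part would sit in front of this, picking the upstream endpoint from the model name on each request and calling `record` with the token counts from the response.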
I'm curious how well it actually works. I tried DeepSeek with Hermes and OpenCode, and it seemed extremely bad at using some of the basic tools it was given, like the Hermes holographic memory tools, even with system prompt instructions strongly pointing them out.
I've been experimenting with Hermes, and I'm convinced Hermes itself is just bad. As a harness it has got to be doing something to lobotomize these models: even GPT-5.4 performs badly in Hermes vs just using it in Codex.
I don't always copy-paste vibe-coded project readme mds into Claude Code and ask it to rewrite them, but when I do... actually that's all I do now, because my goal in life is to make wealthy overvalued companies wealthier.
Anthropic is the opposite of wealthy: the more you use their service, the more money they lose. Unless you think your precious MDs being used for training data is gonna make them rich eventually.
There probably isn't a point. Someone didn't understand something, didn't research it, so they 1 shotted their first thought and sent it to the front page of HN and all of their socials. It's the future bruh
I embrace it at this point. It ends all the shilling of vibe coded tools at work that I have endured over the past year. Everyone can now make their own tools with zero obligation to coordinate beyond shared hardware resources
It's getting a lot of upvotes, so it's almost as if people were feeling locked in and wanted a way out, but...
Why would you keep using the CC CLI if you want to use the much cheaper DeepSeek v4 models (Flash and Pro)? Isn't this the opportunity to kiss the CC CLI goodbye and use something not controlled by Anthropic?
Anyone here successfully moved from CC CLI to a fully open-source project? I'm asking this as a Claude Code CLI (Sonnet/Opus) user. My "stack" is all open-source: from Linux to Emacs to what-have-you. I'd rather also have open-weight models and a fully open-source (not controlled by a single company) AI CLI.
Any suggestion for something that works well? (by "well" I mean "as well as Claude Code CLI", which is not a panacea so my bar ain't the end of the world either).
CC can use the Flash model for sub-agents and other tasks while keeping the Pro model for the main thread and the thinking there. It also has a decent permission system and commands that people are quite used to.
Personally I’d say that the closest alternative you’d get is OpenCode.
It's TUI-focused, and they also have a web/desktop version, though that's not quite as good as Claude's Desktop app (and the Claude Code functionality within it) yet. The TUI itself is solid, though, and they support a lot of different providers and models (/connect and /models).
For GUI-driven stuff, I've personally had good experiences with KiloCode (in Visual Studio Code; I also liked RooCode, of which KiloCode is a fork, but its development has stopped) or just using the Zed editor with its built-in agent.
I don't know how the TI-85 compares to the other models without looking it up, but there's a forever soft spot in my heart for mine. It got me through a comp sci degree and still works flawlessly today.
You make your own subscription. If you want to pay $20/month then put $20 into your account. When you use it up, wait till the next month (or buy more).
I'm asking because with most providers (most egregiously, Anthropic) it doesn't work that way: the API pricing is far higher than any subscription and seems aimed at products/companies, whereas individual users get subsidized tokens in the form of the subscription. If DeepSeek only offers API pricing for everyone, I guess that makes sense and is okay too!
The planetary alignment that allowed the Voyager probes to move so fast only occurs every 175 years. Even with this advantage it took them 12 years to get to Neptune. So the short answer is no.
The truth is that, as much as people LOVE bringing it up, the alignment was special only because it allowed us to slingshot from body to body with almost no fuel aboard the probe itself.
That's it. Nothing to do with speed. We could launch something that goes way faster right now, if someone wanted to pay for it. Hell, we could have done it 50 years ago.
We didn't because it would go in a straight line towards "nothing".
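The "nothing to do with speed" point checks out with back-of-the-envelope arithmetic; a quick sanity check using round figures (Neptune's orbital distance of roughly 4.5 billion km and Voyager 2's roughly 12-year cruise are approximations, and the straight-line division ignores the actual curved trajectory):

```python
# Rough average-speed check for Voyager 2's trip to Neptune.
# Figures are approximate: ~4.5 billion km to Neptune's orbit,
# launch Aug 1977 to Neptune flyby Aug 1989, ~12 years.
NEPTUNE_DISTANCE_KM = 4.5e9
TRIP_SECONDS = 12 * 365.25 * 24 * 3600  # ~12 years in seconds

avg_speed = NEPTUNE_DISTANCE_KM / TRIP_SECONDS  # km/s, straight-line lower bound
print(f"{avg_speed:.1f} km/s")  # prints 11.9 km/s
```

Roughly 12 km/s average: not extraordinary for a modern launcher, which is the point. The alignment bought fuel-free course changes from body to body, not raw velocity.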