Cargo is spiritually based on NPM so it's not much better.
Go modules effectively always lock dependencies unless you explicitly upgrade them with a `go get`, so it's much, much better in my view.
Yes, you can lock deps in NPM/Cargo/etc. but that's not the default. It is the default in Go.
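To make the Go default concrete, here's a sketch of the two files involved (the module path and hashes are illustrative placeholders, not real values): go.mod pins exact versions, and go.sum pins content hashes that the toolchain verifies on every download.

```
// go.mod -- every require line pins an exact version; nothing floats
module example.com/app        // illustrative module path

go 1.22

require github.com/some/dep v1.4.2

// go.sum -- content hashes checked on download (entries illustrative)
// github.com/some/dep v1.4.2 h1:...
// github.com/some/dep v1.4.2/go.mod h1:...
```

Versions only move when you run something like `go get github.com/some/dep@v1.5.0` yourself.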
In Go projects my policy for upgrading dependencies includes running a full AI audit of all code changed across all dependencies. It comes out to ~$200 in tokens every time, but it gives those warm 'not likely to get pwned' vibes. And it comes with a nice report of likely breaking changes etc.
BTW a curated mirror of <whatever ecosystem> packages, where every package is guaranteed to have been analyzed and tested, could be an easy sell now. Also relatively easy to create with the help of AI. Paying $200 every time is less pleasant than, say, $100/mo for the entire org.
Docker does something vaguely similar for Docker images, for free though.
People are already scanning npm constantly. You can limit yourself to pre-scanned packages by setting npm's minimum release age setting to 1 or 2 days (all the recent high-profile malicious package versions were unpublished within that window).
Note to self: the test suite for vetting a package should include setting the system date to some point in the future, to check whether an exploit is trying to sleep long enough to defeat the age limit.
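The sleeping-payload check can be sketched with plain clock arithmetic (all names and timings here are illustrative, not a real vetting harness): push the clock the sandbox sees far enough forward that a time-gated payload would fire during the test run instead of weeks later.

```shell
# Toy model of a time-gated payload check; a real harness would run the
# package's install hooks under a faked clock (e.g. libfaketime), not this stub.
NOW=$(date +%s)
FAKE_NOW=$((NOW + 730 * 24 * 3600))     # sandbox clock pushed ~2 years ahead
ACTIVATE_AT=$((NOW + 30 * 24 * 3600))   # hypothetical payload waking in 30 days

if [ "$FAKE_NOW" -ge "$ACTIVATE_AT" ]; then
  echo "dormant payload would have activated"   # the behavior we want to surface
fi
```

The point is simply that a 1-2 day release-age gate does nothing against a payload that sleeps for a month; only a shifted clock flushes it out.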
It's insane to me that you spend $200 on a report you likely rarely read in detail or double-check for correctness, yet you do it to feel good about security.
If it runs in a harness that will alert me when something dodgy is detected I'm fine to stay at that level.
I don't read it in detail because reading in detail is precisely what I delegate to the harness. The alternative is that I delegate all this trust to package managers and the maintainers which quite clearly is a bad idea.
Whether the price tag is worth it is... relative. Also, in Go you don't update all that often: really only when something breaks or there is a legitimate security reason to do so, which in deep systems software is quite infrequent.
Funnily enough, for frontend NPM code our policy was to never, ever upgrade and to run with locked dependencies, on JS deps that were a few years old. For internal dashboards it was perfectly fine: we never missed a feature and never had a supply-chain close call.
What do you do when a critical vulnerability gets discovered and you have to update a package? How many critical/high-severity vulnerabilities are you running with in production every day to avoid supply chain attacks?
For the stuff in more sensitive deployments it's really quite simple: just set up CORS etc. properly and don't do anything overly fancy on the frontend. Worst case, a user may force some internal function to eval some JS by pasting scripts into the browser's debug console.
Critical-severity vulnerabilities are only critical when they are reachable; they are completely meaningless if your application doesn't touch that code at all. It's objectively riskier to "patch" those by updating dependencies than to just let them be.
It is the default in both Cargo and npm, but `npm install` stupidly enough still updates the lockfile; you need `npm ci` to actually respect it. I think there's some flag to make install behave sanely, but long-term I find the best approach is to use anything other than npm.
I ditched npm for Yarn years ago because it had saner dependency resolution (npm's peer-dependency algorithm was a constantly moving target), and now I've switched from Yarn to Bun because it doesn't run hooks in dependencies by default. It also helps that it installs dependencies 10x faster.
There was once a 2.5" Mushkin Source 16TB SATA SSD. At its cheapest it was ~1700 USD (or 1500 EUR). That was mid-2023 (like 3 years ago!).
Nowadays it feels like that time and price region is decades away in the future. I was hoping I'd be able to store more data on modern tech like SSDs in the future, not less.
Yeah it sucks :( Almost exactly a year ago, I got a brand new 15.36TB Kioxia CD6-R (U.3 PCIe4 x4 drive) for $1450+tax from serverpartdeals.com; that same drive is now listed for ~$4600 (and it's also out of stock there).
Funny you mention that: I very recently came back from a one-shot prompt that fixed a rather complex template-instantiation issue in a relatively big, very convoluted low-level codebase (lots of asm, SPDK / userspace NVMe, unholy shuffling of data between NUMA domains into shared L3/L2 caches). That codebase maybe isn't millions of lines of code, but it's definitely complex enough to need a month of onboarding time. Or, you know, just give Claude Opus 4.5 an lldb backtrace with 70% of symbols missing due to unholy linker gymnastics and get a working fix in 10 minutes.
And those are the worst models we will ever use from now on.
Template instantiation is relatively simple and can be resolved immediately. Figuring out how 4 different libraries interact, with undefined behavior to boot, is not going to be easy for AI for a while.
Visual puzzle solving is a pretty easily trainable problem because it's simple to verify, so that skill getting really good is just a matter of time.
In Go you know exactly what code you're building thanks to go.sum, and it's much easier to audit changed code after upgrading: just create vendor dirs before and after updating packages and diff them; send it to AI for basic screening if the diff is >100k LOC, and/or review manually. My projects are massive codebases with 1000s of deps and >200MB stripped binaries of literally just code, and this is perfectly feasible. (And yes, I do catch stuff occasionally, though nothing actively adversarial so far.)
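The before/after diff step can be sketched like this (toy directories and file contents stand in for real snapshots; the actual go commands are shown as comments since they need a live module checkout):

```shell
# Real flow: cp -r vendor vendor.before; go get -u ./...; go mod vendor
# Toy stand-ins below so the diff step itself is runnable anywhere:
mkdir -p vendor.before/somedep vendor.after/somedep
printf 'func F() {}\n'                > vendor.before/somedep/dep.go
printf 'func F() { phoneHome() }\n'   > vendor.after/somedep/dep.go

# diff exits non-zero when the trees differ, so don't let that abort the script
diff -ruN vendor.before vendor.after > deps.diff || true
wc -l < deps.diff    # rough gauge of how much there is to review
grep '^+' deps.diff  # added lines are where new behavior (or malware) hides
```

From there, deps.diff is what you hand to a reviewer or an AI screening pass.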
If you have Pro users, why not leverage debt instead of giving up equity for no good reason?
Maybe the value prop is not clear: the website talks a bunch about AI agent integrations, which sounds like a completely different product from a parser library, which, however advanced it may be, investors will likely see as a tangential bit of IP that a senior engineer could build for $10-20k in a few days.
Thanks for the suggestion! At this stage, debt is not a feasible option for us. Our focus is on scaling the business quickly and VC funding is the preferable route to achieve that. Having the runway and support from investors will allow us to fully dedicate ourselves to growth and execute our vision effectively.
Re value prop: true. While the existing clients are using FIXParser as a plain old library, I was thinking about where the puck is going to be: it's going to be all MCP with A2A frameworks, so I pivoted and most of the focus was on building the MCP features. We have received interest from VERY prominent firms in finance thanks to our focus on MCP FIX features. I doubt that a sr. engineer can put that together in a few days.