It also isn't a unique thing. See (for example) the entire history of, well, pretty much any country. There is a reason Utopia literally means "no place".
* Genocide of the natives? Literally all countries in the Americas, for starters.
* Slavery of captives from Africa? Pretty much everyone with colonies in and around the Caribbean was guilty of that too.
* Multiple unnecessary wars that have killed millions of people? That encompasses more or less all of European history.
By all means, criticize Palantir. But don't pretend US history has anything in particular that would set up the prerequisites for it to exist.
I know it’s not unique. I just assumed the parent was a Nick Fuentes "America First" type who wants to transfer all of our guilt and sin onto Israel (and in the parent’s case, onto a specific ethnicity). It’s more common in the US than you would think.
Same. Opencode + oMLX (0.3.4) + unsloth-Qwen3-Coder-Next-mlx-8bit on my M5 Max with 128GB is the sweet spot for me locally. The prompt caching keeps things coherent and fast even when contexts get north of 100k tokens.
The 8-bit MLX unsloth quant of qwen3-coder-next seems to be a local best on an MBP M5 Max with 128GB memory. With oMLX doing prompt caching I can run two in parallel doing different tasks pretty reasonably. I found that lower quants tend to lose the plot after about 170k tokens of context.
That's good to know. I haven't exceeded a 120k context yet. Maybe I'll bite the bullet and try Q6 or Q8. Any of the coder-next quants larger than UD-Q4_K_XL take forever to load, especially with ROCm. I think there's some sort of autotuning or fitting going on in llama.cpp.
MBP M5 Max. 128GB RAM. oMLX. unsloth-Qwen3-Coder-Next-mlx-8bit. opencode with the telemetry stripped out. This seems to be the sweet spot for now for my local dev. Helps me not accidentally blow through $100 in Claude tokens in a day when exploring different performance tradeoffs in the backend of my $DAYJOB codebase.
My usecase was building an append-only blob store with mandatory encryption, but using a semaphore + direct goroutine calls to limit background write concurrency instead of a channel + dedicated writer goroutines was a net win across a wide variety of write sizes and max concurrent inflight writes. It is interesting that frankenphp + caddy came up with almost the same conclusion despite vastly different work being done.
this makes sense for your workload, but might the right primitive be a function of your payload profile and business constraints?
in my case the problem doesn't arise because control plane and data plane are separated by design — metadata and signals never share a concurrency primitive with chunk writes. the data plane only sees chunks of similar order of magnitude, so a fixed worker pool doesn't overprovision on small payloads or stall on large ones.
curious whether your control and data plane are mixed on the same path, or whether the variance is purely in the blob sizes themselves.
if it's the latter: I wonder if batching sub-1MB payloads upstream would have given you the same result without changing the concurrency primitive. did you have constraints that made that impractical?
In my case, "background writes" literally means "do the io.WriteAt for this fixed-size buffer in another goroutine so that the one servicing the blob write can get on with encryption / CRC calculation / stuffing the resulting byte stream into fixed-size buffers". Handling it that way lets me keep the IO to the kernel as saturated as possible without the added schedule + mutex overhead sending stuff thru a channel incurs, while still keeping a hard upper bound on IO in flight (max semaphore weight) and write buffer allocations (sync.Pool). My fixed-size buffers are 32k, and it is a net win even there.
right — no variance, question was off target. worth noting though: the sema-bounded WriteAt goroutines are structurally a fan-out over homogeneous units, even if the pipeline feels linear from the blob's perspective. that's probably why the channel adds nothing — no fan-in, no aggregation, just bounded fire-and-forget.
If the performance charts are to be believed, this has uniformly worse performance in fetching and iterating over items than a boring old b-tree, which makes it a total nonstarter for most workloads I care about.
It is also sort of ironic that one of the key performance callouts is a lack of pointer chasing, but the Go implementation is a slice that contains other slices without making sure they are using the same backing array, which is just pointer chasing under the hood. I have not examined the code closely, but it is also probably what let them get rid of the black array as a performance optimization.
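To illustrate the pointer-chasing point in a generic way (this is my own toy layout, not the library's actual code): a slice of separately allocated slices pays an indirection per row, while a single flat backing slice plus offsets keeps iteration contiguous.

```go
package main

import "fmt"

// sumNested iterates a slice of slices: each inner slice header points at its
// own allocation, so every row is a pointer chase.
func sumNested(rows [][]int64) int64 {
	var s int64
	for _, row := range rows {
		for _, v := range row {
			s += v
		}
	}
	return s
}

// flatRows stores everything in one contiguous backing array; offsets[i] is
// the start of row i, with len(offsets) == number of rows + 1.
type flatRows struct {
	data    []int64
	offsets []int
}

// sum walks the backing array linearly with no per-row indirection.
func (f flatRows) sum() int64 {
	var s int64
	for i := 0; i+1 < len(f.offsets); i++ {
		for _, v := range f.data[f.offsets[i]:f.offsets[i+1]] {
			s += v
		}
	}
	return s
}

func main() {
	nested := [][]int64{{1, 2}, {3, 4, 5}}
	flat := flatRows{data: []int64{1, 2, 3, 4, 5}, offsets: []int{0, 2, 5}}
	fmt.Println(sumNested(nested), flat.sum()) // 15 15
}
```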
`defer` is obviously not implemented in this way; the compiler re-orders the code to flow top-to-bottom with fewer branches, but the control flow is effectively the same thing.
In theory a compiler could implement `comefrom` by re-ordering the basic blocks like `defer` does, so that the actual runtime evaluation of code is still top-to-bottom.
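As a toy illustration of that equivalence (my own example, not how the Go compiler actually lowers `defer`): the deferred cleanup can be written out by hand at each exit point, and execution still reads top-to-bottom.

```go
package main

import (
	"fmt"
	"os"
)

// withDefer relies on defer to run the cleanup on every return path.
func withDefer(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// ... work with f ...
	fmt.Println("opened", path)
	return nil
}

// flattened is the same control flow written out by hand: the cleanup is
// duplicated at the exit point, so the code still reads top-to-bottom.
func flattened(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}

	// ... work with f ...
	fmt.Println("opened", path)

	f.Close()
	return nil
}

func main() {
	_ = withDefer("/etc/hosts")
	_ = flattened("/etc/hosts")
}
```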
I submitted a suggestion to add the sophisticated multi-engine FOSS soft synth that I use, Yoshimi (https://yoshimi.sourceforge.io/), which is a Linux-only fork of ZynAddSubFX.
Vital is a wave table synth; Helm is a subtractive synth.
Helm was the first synthesizer that I really excelled with. I would recommend that anyone who wants to actually learn the fundamentals of synthesis start on it. Once you get good at it, it's faster to dial in the exact sound you want than to reach for a preset.
It's far more straightforward and less complicated than additive (ZynAddSubFX), FM, or wave table synths.
That being said, if you just want a very capable synth with a lot of great presets, Vital is far more advanced.