Hacker News | nopurpose's comments

My layman question is: why can't the plastic be painted? Case temperatures are not that high and there are no plastic parts rubbing.

This is answered in the first paragraph of the article. Painting requires re-calculating the weight, strength, and aerodynamics. Paint does not weigh zero, it changes the flexibility of the plastic, and it changes the surface texture, which changes flow.

But the article didn't give any ballpark numbers, so the interesting bit is missing, and we still know basically nothing.

It could very well be like snake oil that makes you feel better for maybe the three seconds after you bought it. Or those gold-plated audio jacks that offer a 0.0001% improvement in quality.


Recent example I looked at: https://github.com/nilskch/zed-jj-lsp, which downloads jj-lsp if it is not found on the system. I have seen other extensions doing something similar for convenience, but I can't remember their names to give concrete links.

There was a story about how a similar initiative for scraping court decisions was shut down.

Weren't there reports that quality decreased when using non-CC harnesses too? Nothing in the blog post can explain that.

From the linked blog post:

> The standard price for a GB of egress from a cloud provider is 10x what you pay racking a server in a normal data center.

From the exe.dev pricing page:

> additional data transfer $0.07/GB/month

So at least on the network-price promise they don't seem to deliver; it still costs an arm and a leg, like your neighbourhood hyperscaler.
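To make the comparison concrete, a quick back-of-the-envelope calculation at the quoted $0.07/GB rate (the traffic figure here is hypothetical, just to size a monthly bill):

```python
# Hypothetical monthly egress bill at the quoted $0.07/GB rate.
rate_per_gb = 0.07
egress_gb = 1000            # assume 1 TB of monthly egress
bill = rate_per_gb * egress_gb
print(f"${bill:.2f}/month")  # $70.00/month
```

At that rate a terabyte of egress each month runs about $70, which is in the same ballpark as the hyperscaler pricing the blog post criticizes.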

Overall the service looks interesting. I like the simplicity combined with convenience, something packet.net deliberately decided not to offer at the time.


If it were anyone else, I'd have totally lost trust after seeing such a cynically diabolical take. I do eventually expect the co-founders at exe.dev, David & Josh, to work out some way to meet the promises laid out in TFA.

It amazes me how entertaining Raymond's writing on the most mundane aspects of computing often is.

For as much flak as Microsoft gets today, they have some of the best people writing about low-level computing. James Mickens's writings managed to make me literally laugh out loud on these subjects. Chen described him best as "the funniest man in Microsoft Research" ( https://devblogs.microsoft.com/oldnewthing/20131224-00/?p=22... )

Same thing with io_uring zero copy in my limited testing: buffer-usage accounting is not free, and copying memory makes things drastically simpler.

I agree that sandboxing the whole agent is inadequate: I am fine sharing my GitHub creds with the gh CLI, but not with npm. More granular sandboxing and permissions are what I'd like to see, and this project seems interesting enough to take a closer look.

I am not interested in the "claw" workflow, but if I can use it for a safer "code" environment it is a win for me.


When the agent uses your GH credentials to nuke all your projects or put out a lot of crap, this separation will not save you.

Whitelisting `gh` args should solve it. Even opencode's primitive permission system allows that.

The ability to whitelist specific args for commands has been the source of several (countless?) sudo CVEs over the years.
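To illustrate why arg whitelisting is fragile, here's a minimal sketch of a hypothetical prefix-based allowlist (the command patterns and the checker are invented for illustration, not taken from opencode or sudo): extra flags pass the prefix check yet can change the command's behavior entirely.

```python
import shlex

# Hypothetical allowlist of command prefixes an agent may run.
ALLOWED_PREFIXES = [["gh", "pr", "list"], ["gh", "issue", "list"]]

def is_allowed(cmdline: str) -> bool:
    """Naive check: allow if argv starts with any whitelisted prefix."""
    argv = shlex.split(cmdline)
    return any(argv[:len(prefix)] == prefix for prefix in ALLOWED_PREFIXES)

print(is_allowed("gh pr list"))        # True: the intended use
print(is_allowed("rm -rf ~"))          # False: correctly blocked
# Also True: trailing flags satisfy the prefix check, but a flag like
# --web changes behavior (opens a browser session with your credentials).
print(is_allowed("gh pr list --web"))
```

This is the same shape of problem behind many sudoers mishaps: matching on the command and leading args says nothing about what the remaining args make the program do.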

True zero-copy is not achievable with Protobuf; you need something like FlatBuffers for that. What is presented here is more like zero allocations.


I also find this misleading, and it could be solved so easily by just explaining that varints of course need resolving, and that decoding will presumably happen lazily (I didn't read the code) when fields are requested rather than eagerly.


Is this still true? New versions of protobuf allow codegen of `std::string_view` rather than `const std::string&` (which forces a copy) for `string` and `repeated bytes` fields.

https://protobuf.dev/reference/cpp/string-view/


It allows avoiding allocations, but it doesn't allow using the serialised data as the backing memory for an in-language type. Protobuf varints have to be decoded and written out somewhere. They cannot be lazily decoded efficiently either: the order of fields in a serialised message is unspecified, so you either need to iterate over the message again and again to find each field on demand, or build a map of offsets, which negates the wins zero-copy strives to achieve.
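To make the varint point concrete, here's a minimal sketch of decoding a protobuf-style varint (base-128, little-endian groups, high bit as continuation flag). The encoded bytes cannot serve as the in-memory integer; every varint has to be materialized somewhere, unlike a fixed-width field that could be read in place.

```python
def decode_varint(buf: bytes, pos: int = 0) -> tuple[int, int]:
    """Decode one protobuf-style varint; return (value, next_pos)."""
    result = 0
    shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift   # accumulate the low 7 bits
        if not (b & 0x80):              # high bit clear: last byte
            return result, pos
        shift += 7

# 300 encodes as 0xAC 0x02: the bytes must be decoded into a new int,
# they cannot be aliased as a machine integer the way fixed32 bytes could.
value, end = decode_varint(b"\xac\x02")
print(value, end)  # 300 2
```

This is why "zero-copy" protobuf readers still spend CPU on every varint field, whereas formats like FlatBuffers lay out fixed-width values so accessors can point straight into the buffer.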

This is true, but the relative overhead is highly dependent on the structure of one's schema. For example, fixed integer fields don't need to be decoded (including repeated fixed ints), and the main idea of the "zero copy" here is avoiding copies of string and bytes fields. If your protobufs are mostly varints, then yes, they all have to be decoded; if they contain a lot of string/bytes data, then most of the decoding overhead could be memory copies of that data rather than varint decoding.

In some message schemas, even though this isn't truly zero-copy, it may be close to it in terms of actual overhead and CPU time; in other schemas it doesn't help at all.


The win could be decoding only the fields you actually care about, rather than all fields.

It's the same for any other high performance decoding of TLV formats (FIX in finance for instance).


Those field accessors take and return string_view, but they still copy. The official C++ library always owns the data internally and never aliases, except in one niche use case: the field type is Cord, the input is large and meets some other criteria, and the caller used kParseWithAliasing, which is undocumented.

To a very close approximation you can say that the official protobuf C++ library always copies and owns strings.


Well that is very disappointing news.

Even the decoder makes a copy, even though it returns a string_view? What's the point then?

I can understand encoders having to make copies, but not a decoder.


Is there a risk that the domain renewal fee will skyrocket, because the cost of switching away is now even higher for you?


For all of these newfangled TLDs that are springing out of the woodwork with strictly for-profit interests, yes. Even some ccTLDs have seen rapid price hikes in recent years.

I think the safest bet is to pre-renew the domains you really want to keep for as far out as you can (most registries allow you to renew a domain for up to 10 years). That way, if there is some major change to cost structures, you have a decade to either weather the storm or come up with a migration strategy.

