Hacker News | new | past | comments | ask | show | jobs | submit | plqbfbv's comments

> so it just boils down to strictness even when we're talking LLMs?

The article describes what I've been doing for the past few months. I did small python projects in the past because of the ecosystem: I couldn't possibly write all the stuff required for the things I wanted to do, so I leaned on python because someone had already written it for me. The quality of deps was mostly ok for the happy paths, but it was always a chore to patch the broken ones.

Nowadays I tell Claude what I want to build and I always ask it whether rust is a good choice for it. It'll pick the right crates or decide whether to DIY, do all the plumbing, nail all the logic, and in ~30 minutes I'll have something very solid that would have taken me 3+ weeks of part-time evening coding in python. I think the article is right that rust is the closest thing to the "best language" we have for LLM coding at the moment: the strict typing and the tooling dramatically reduce the output space for LLMs, and 99% of errors come with a clear, precise, actionable explanation from the compiler.

I think it also boils down to the fact that you cannot reliably and quickly answer "why is this arg None?" in languages like python without figuring out the call graph and evaluating possible states and inputs/outputs. Rust makes all of that explicit and forces you to handle it, which I feel dramatically cuts the time an LLM needs to spend figuring out why something is broken or what to do next. EDIT: The fact that you get memory safety on top of all this, enforced by the compiler, is yet another advantage for LLMs: the logic that gets written is simpler to reason about, because if you try to mutably access the same variable in two different places, the compiler will feed that back to the LLM at build time. In other languages that would be a "code smell" or would require static analysis.
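To illustrate with a toy sketch (made-up names, not from the article): in rust a "maybe missing" value is an explicit Option, so the unhandled-None case and the double-mutable-borrow case are both compile-time errors that get fed straight back to the LLM:

```rust
// A "maybe missing" argument is an explicit Option<T>; the compiler rejects
// any use of `name` that doesn't handle the None case.
fn greeting(name: Option<&str>) -> String {
    match name {
        Some(n) => format!("Hello, {n}!"),
        None => "Hello, stranger!".to_string(), // forced to decide what None means
    }
}

fn main() {
    assert_eq!(greeting(Some("Ada")), "Hello, Ada!");
    assert_eq!(greeting(None), "Hello, stranger!");

    // The aliasing case mentioned above is a build-time error, not a code smell:
    // let mut v = vec![1, 2];
    // let a = &mut v;
    // let b = &mut v; // error[E0499]: cannot borrow `v` as mutable more than once
    // a.push(3);
}
```

In python the equivalent `def greeting(name=None)` compiles and runs happily until some caller trips the missing case at runtime.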

Strictness is a quality for software and a chore for humans, and of course the stricter you are at representing your logic and your state machine, the fewer ways a program can break. LLMs writing rust give you the strictness without the chore, and that's a very good deal from my point of view.


My wife is a freelance digital marketing specialist, and this post basically describes what she's been seeing since the start of her career 10 years ago.

As a tech guy, I've found that business owners tend to be way more pushy. Normally they fall into the boomer category, or they're simply not in the field.

Both categories seem to assume that "ahhh IT, things are instant and tomorrow I'll have 10'000 daily visitors", while that's very far from the case. They think that spending money today means results tomorrow as in "next sunrise", while digital marketing is basically subject to the whims of Google/Instagram/... and their algorithms, and investing today means seeing results three months out at the very earliest.

You tell them beforehand, many times; they sign a contract; they agree with everything... and then after 2 weeks they start asking you daily why things are not improving, with zero respect for working hours or personal boundaries. That's how they end up choosing someone else via "high friction", and they normally land on bigger agencies, because they think bigger agency = faster results.


> We were on self-hosted Gitlab but after a merger were forced to Github. Navigation feels painful in comparison and basic features such as commit graph are now behind more expensive tiers.

Same experience here. Add to that, even on the Enterprise tier:

- 1 Enterprise : 1 namespace - although you can segment it with Orgs, we were advised not to do it because we're too small (~2k people) (GL: groups, subgroups, sub-subgroups, ...)

- SSH deploy keys are unique across the entire instance and bound to a single repo (and Weblate, for instance, can only use its own key), so you need a service account for that (GL: instance-wide SSH deploy keys that you can enable on specific repos)

- GHCR only really supports classic PATs for authentication ( https://docs.github.com/en/packages/working-with-a-github-pa... - GL: proper deploy keys properly inherited throughout the hierarchy)

So all in all, the experience so far is a huge step down. On GitLab, I really liked pinning commonly accessed pages in the sidebar.


I can't speak for the UK, but with my 2019 Model 3 I was already charging from <10% to 30% @ 250kW (the max the car supports) for a good 5+ minutes in Switzerland and Italy 4 years ago (both at Tesla V3 Superchargers and at Ionity 350kW public chargers).

Of course the charger is not the only limiting factor; the grid also needs to support it. If you're in a small town with no big shops/industry, you're way less likely to have 1+MW cables installed, since there was never a need for such peak capacity before.


> Part of the reason it is so valuable is that patients usually must take it for the rest of their lives.

Well yes, Ozempic doesn't solve the habits of a bad diet.

The weight rebound is surely due in some small part to the removal of hunger suppression (a "hormone rebound"), but if you resume eating 5000+ kcal/day because you no longer have something keeping you from it, you'll end up in the same situation as before. Ozempic was never meant to fix your diet, and it won't: that's a psychological and environmental problem.


> This is a clear restriction on liberty. ... Just like many stupid decisions (junk food included), it ought to be my right to decide how to live.

I guess that liberty was plenty abused against every non-smoker in a non-smoking area who ended up coughing through clouds of smoke anyway. Smoking affects everyone around you whether they want it or not, and while you may smoke for 50 years and end up perfectly healthy, others may get cancer from it, even from a very small dose.


There's already some pretty comprehensive bans on smoking in places where it could affect other people. I don't really encounter cigarette smoke in my day-to-day life.


> There's already some pretty comprehensive bans on smoking in places where it could affect other people.

Which I'm arguing are disregarded most of the time by most smokers. I do encounter cigarette smoke in my day-to-day, unfortunately. And unfortunately it's always the same places, mostly bars and restaurants with outdoor spaces - places where I'm supposed to smell the food I pay for, and I end up smelling smoke instead.


However, people born before 2008 smoking around you affects you as well. If they want to protect people, they need to ban it for everybody.


> If they want to protect people they need to ban it for everybody..

Last time governments tried to force people to do something for their own sake, you saw how it ended (COVID). If people can't start smoking cigarettes, they won't get hooked, so regular cigarettes at least will gradually be phased out. Vapes are still controversial, but as a non-smoker with a very sensitive nose, vape smoke is 10000x better than cigarette smoke. It doesn't make me cough, it doesn't contain the same harmful compounds, it doesn't soil clothes nearly as much, and I can still smell my food at a restaurant.


It's been well known for a while now, and it's not to avoid the recording being active: it's to prevent a possibly damaged computer from continuing to operate in a likely compromised situation. What happens if the car crashes and flips? AP/FSD has no training on that, and the wheels could keep spinning at full speed while first responders try to secure the car.

AEB should still be working to pump the brakes AFAIK, but auto-steer and cruise control are disabled - even if the computer and electronics are still perfectly operational - to make the car safer for the passengers and first responders after the event.

EDIT: IIRC the threshold for disengagement is 1s.


>> Teslas turning off autopilot seconds before a crash, apparently avoiding being recorded as active during an incident, is wild https://futurism.com/tesla-nhtsa-autopilot-report

> It's well known for a while now, and it's not to avoid recording being active, it's to avoid a possibly damaged computer to keep working in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and wheels keep spinning at full speed while first responders try to secure the car?

That sounds like an ass-covering justification. There may be a good reason for triggering some kind of interlock to prevent the problems you outlined, but if their implementation 1) also stopped recording seconds before a crash or 2) they publicly claimed it wasn't responsible since it turned itself off, then Tesla is behaving unethically and dishonestly.


I'm just stating what I remember, I'm not trying to defend Tesla.

For 1) it's the first time I hear it from a technical point of view - Tesla's dashcam records continuously for the last 10m, and should save the data on the internal computer in case of a crash and send it back to Tesla if feasible AFAIR (I'm an owner). IIRC it's not the first case though where Tesla claimed the data wasn't available or corrupted, and then it was actually recovered some time later after pressure from authorities. So I think technically the data is there, but also believe Tesla is behaving unethically and dishonestly to cover up or delay retrieval.

For 2), I often hear it as FUD, as in: AP/FSD was off, the user disengaged it by accident, wasn't accustomed to it, or just didn't know how it worked. AFAIR most of the accidents had their data released, and it showed some of the following: the user touched the steering wheel and disengaged autosteer/FSD (whether knowingly or by accident), the user was pressing the accelerator pedal by accident, the user was pressing the accelerator instead of the brake, etc.


> What would it take to come up with IPv7 which takes in the lessons of IPv6 and produces something better that we can all agree is worth transitioning to over IPv4.

The only lesson to learn from IPv6 deployment is that if there's a workaround available and the world isn't burning, it'll take 30 years from initial design to actual adoption. So if you went out and took 10 years to design IPv7, it'd likely take until 2070 for it to gain some adoption. This is because big network hardware is costly and has very long replacement cycles.

IPv6 was already designed as a lessons-learnt protocol addressing IPv4's issues. The header is greatly simplified and more hardware-friendly, the required features are incorporated into the protocol itself, and extensibility is left to optional extension headers that don't slow down packet routing, all while granting an effectively inexhaustible address space.
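For context, that simplification is concrete: the IPv6 header is a fixed 40 bytes with no options field and no checksum (RFC 8200), so routers can parse it in constant time. A minimal sketch in rust - the sample packet bytes below are made up for illustration:

```rust
// Parse the fixed 40-byte IPv6 header (RFC 8200). Field layout:
// version(4b) | traffic class(8b) | flow label(20b), then payload length(16b),
// next header(8b), hop limit(8b), and the two 128-bit addresses.
fn parse_ipv6_header(buf: &[u8; 40]) -> (u8, u8, u32, u16, u8, u8) {
    let version = buf[0] >> 4;                         // always 6
    let traffic_class = (buf[0] << 4) | (buf[1] >> 4); // QoS bits
    let flow_label = ((buf[1] as u32 & 0x0F) << 16)
        | ((buf[2] as u32) << 8)
        | buf[3] as u32;
    let payload_len = u16::from_be_bytes([buf[4], buf[5]]);
    let next_header = buf[6]; // chains the optional extension headers
    let hop_limit = buf[7];
    // buf[8..24] is the source address, buf[24..40] the destination:
    // 128-bit addresses, hence the enormous address space.
    (version, traffic_class, flow_label, payload_len, next_header, hop_limit)
}

fn main() {
    // Made-up sample: version 6, traffic class 0x12, flow label 0x34567,
    // 1280-byte payload, next header 17 (UDP), hop limit 64.
    let mut buf = [0u8; 40];
    buf[..8].copy_from_slice(&[0x61, 0x23, 0x45, 0x67, 0x05, 0x00, 17, 64]);
    assert_eq!(parse_ipv6_header(&buf), (6, 0x12, 0x34567, 1280, 17, 64));
}
```

Compare that to IPv4, where a variable-length options field and a checksum that must be recomputed at every hop complicate the fast path.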


Perhaps the problem isn't the BigScreen, it's the youtube video?

I normally run applications maximized on my 28" 4k, unless I need input from 2 applications at the same time, then I tile them.

Working from my work-issued 16" Macbook Pro or any of my other laptops is a pain because of the limited screen real estate - it's hard to see patterns at a glance or get the whole context when I can only see 30 lines of text truncated at <=80 columns. Plus, the fact that the keyboard isn't detachable from the screen forces bad posture habits.


> A really bad solar storm could be devastating.

Starlink already accounts for these (e.g. https://www.theregister.com/2025/11/18/starlinks_method_of_d... ), and in any case the satellites are put in orbits low enough that they eventually fall back to earth if control is lost.

