indolering's comments | Hacker News

I really wish I had messed with Windows Phone when it was a thing. They were the only ones not to just ship a clone of an existing interface ASAP. But it was closed source and offered no advantages for carriers or device makers compared to Android.

WebOS needed WASM and a lot more to be successful. I think WASM/WASI has matured to the point that the next major platform build-out can use it.


webOS was a bit ahead of its time - it does live on in LG TVs, where it's done quite well.


The floating-point "standard" was basically a codification of multiple different vendor implementations of the same idea. Hence the mess where floating point isn't consistent across implementations.


IEEE 754 basically had three major proposals that were considered for standardization. There was the "KCS draft" (Kahan, Coonen, Stone), which was the draft implemented for the x87 coprocessor. There was DEC's counterproposal (aka the PS draft, for Payne and Strecker), and HP's counterproposal (aka the FW draft, for Fraley and Walther). Ultimately, it was the KCS draft that won out and became what we now know as IEEE 754.

One of the striking things, though, is just how radically different KCS was. By the time IEEE 754 forms, there is a basic commonality of how floating-point numbers work. Most systems have a single-precision and double-precision form, and many have an additional extended-precision form. These formats are usually radix-2, with a sign bit, a biased exponent, and an integer mantissa, and several implementations had hit on the implicit integer bit representation. (See http://www.quadibloc.com/comp/cp0201.htm for a tour of several pre-IEEE 754 floating-point formats). What KCS did that was really new was add denormals, and this was very controversial. I also think that support for infinities was introduced with KCS, although there were more precedents for the existence of NaN-like values. I'm also pretty sure that sticky bits, as opposed to trapping for exceptions, were considered innovative. (See, e.g., https://ethw-images.s3.us-east-va.perf.cloud.ovh.us/ieee/f/f... for a discussion of the differences between the early drafts.)
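
To make the bit-level pieces concrete, here's a quick sketch (mine, not anything from the drafts) that decodes a single-precision value in Python and classifies it as normal, denormal, infinity, or NaN:

    import struct

    def classify(x: float) -> str:
        # Reinterpret as the 32-bit pattern: 1 sign bit, 8-bit biased
        # exponent, 23-bit fraction (the integer bit is implicit).
        bits = struct.unpack(">I", struct.pack(">f", x))[0]
        sign = bits >> 31
        exponent = (bits >> 23) & 0xFF
        fraction = bits & 0x7FFFFF
        if exponent == 0xFF:
            return "NaN" if fraction else ("-inf" if sign else "+inf")
        if exponent == 0:
            # Denormals: no implicit leading 1, exponent pinned at the minimum.
            return "zero" if fraction == 0 else "denormal"
        return "normal"

    print(classify(1.0))            # normal
    print(classify(1e-45))          # denormal
    print(classify(float("inf")))   # +inf
    print(classify(float("nan")))   # NaN

The denormal and infinity/NaN branches are exactly the parts KCS added; everything above them (sign, biased exponent, implicit-bit mantissa) already had plenty of precedent.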

Now, once IEEE 754 came out, pretty much every subsequent implementation of floating-point has started from the IEEE 754 standard. But it was definitely not a codification of existing behavior when it came out, given the number of innovations that it had!


The answer is unequivocally yes: RISC-V is designed to be customizable, and a vendor can put whatever they like into a given CPU. That being said, profiles and platform specs are designed to limit fragmentation. The modular design and core essential ISA also make fat binaries much more straightforward to implement than on other ISAs.
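
On the fat-binary point, here's a rough sketch of what runtime selection can look like (my illustration, not anything from the spec; the *_impl names are hypothetical placeholders). On RISC-V Linux the kernel exposes the machine's ISA string in /proc/cpuinfo, so a runtime can pick a code path per machine:

    def isa_string(path="/proc/cpuinfo"):
        # On RISC-V Linux this looks something like "rv64imafdcv_zicsr_zifencei".
        with open(path) as f:
            for line in f:
                if line.lower().startswith("isa"):
                    return line.split(":", 1)[1].strip()
        return ""

    def has_vector(isa):
        base = isa.split("_", 1)[0]  # single-letter extensions, e.g. "rv64imafdcv"
        return "v" in base[4:]       # skip the "rv64"/"rv32" prefix

    def vector_impl(data): ...       # hypothetical RVV-tuned path
    def scalar_impl(data): ...       # hypothetical portable fallback

    def pick_kernel(isa):
        # A fat binary ships both implementations and selects one at startup.
        return vector_impl if has_vector(isa) else scalar_impl

Because the base ISA is small and extensions are named, the dispatch logic stays simple; it gets much messier when feature sets aren't cleanly modular.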


You can choose to develop proprietary extensions, but who’s going to use them?

A great case study is the companies that implemented the pre-release vector standard in their chips.

The final, ratified version differs in a few key ways, and despite the substantial similarities, very few people are writing SIMD code for those chips.

If a proprietary extension does something actually useful to everyone, it’ll either be turned into an open standard or a new open standard will be created to replace it. In either case, it isn’t an issue.

The only place I see proprietary extensions surviving is in the embedded space, where they already do this kind of stuff, but even that seems to be the exception with the RISC-V chips I've seen. Using standard compilers and tooling instead of a crappy custom toolchain (probably built on an old version of Eclipse) is just nicer (and cheaper for chip makers).


Yes, extensions are perfect for embedded. But not just there.

Extensions allow you to address specific customer needs, evolve specific use cases, and experiment. AI is another perfect fit. And the hyperscaler market is another one where the hardware and software may come from the same party and be designed to work together. Compatibility with the standard is great for toolchains and off-the-shelf software but there is no need for a hyperscaler or AI specific extension to be implemented by anybody else. If something more universally useful is discovered by one party, it can be added to a future standard profile.


I think it's a joke: you REALLY don't want to own your own servers.


I don't think it is. There are many, many cases where you do want to own them. The people you rent yours from are making a shitload of money, so it doesn't sound like that bad of an idea.


I buy lots of things from people who make a pile of money from low-margin goods/services purely on scale. There are many things I could not reproduce more cheaply from constituent parts, even if I value my time at $0.

This includes things I have expertise in.


AWS is clearly not in a low-margin business, though.


You're going to save money by having your own physical servers?


Yes, a lot of money in fact.


The challenges of doing so are often significantly overstated. It can make a lot of sense to own them.


It sometimes makes financial sense to own your own servers.


You do if you need absolute control over data location, isolation, and physical access.


> The same reasons not to deploy DNSSEC that face large organizations apply to you: any mistake managing your DNSSEC configuration will take your domain off the Internet (in fact, you'll probably have a harder time recovering than large orgs, who can get Google and Cloudflare on the phone).

Set your TTL to five minutes and/or hand over DNS management to a service provider.

> Meanwhile, you get none of the theoretical upside, which in 2026 comes down to making it harder for an on-path attacker to MITM other readers of your site by tricking a CA into misissuing a DCV certificate for you --- an attack that has already gotten significantly harder over the last year due to multiperspective. The reason you don't get this upside is that nobody is going to run this attack on you.

That didn't save Cloudflare from a bad TLS certificate being issued. I still think that reducing the set of parties who can misissue for you from ~300 CAs down to the root and your registrar is a meaningful reduction in attack surface.

> DNSSEC attempts to address just a subset of these; most especially MITM attacks, for which there are a huge variety of vectors, only one of which is contemplated by DNSSEC.

How would cryptographically authenticating DNS records not address cache poisoning, MITM, and DNS spoofing of lookups? Also, DNSSEC doesn't have to solve every problem to be worth doing.
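
For what it's worth, the protection is observable from the client side: a validating resolver sets the AD bit on answers it has verified. A tiny sketch, assuming the dnspython package and a validating upstream like 8.8.8.8 (ietf.org is just a well-known signed zone):

    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]        # a validating resolver
    resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC processing
    answer = resolver.resolve("ietf.org", "A")
    validated = bool(answer.response.flags & dns.flags.AD)
    print("validated upstream" if validated else "not validated")

(The AD bit only reflects validation at the resolver; validating locally is a separate step.)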

> Finally, I have to tediously remind you: when you're counting signed domains, it's important to keep in mind that not all zones are equally meaningful. Especially in Europe, plenty of one-off unused domains are signed, because registrars enable it automatically. The figure of merit is how many important zones are signed. Use whichever metric you like, and run it through a bash loop around `dig ds +short`. You'll find it's a low single-digit percentage.

Yet you complain about DNSSEC being too hard to deploy and not getting enough deployment. Wouldn't it be nice if they could leverage that automatic signing to also generate TLS, SSH, and other certificates?
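
If anyone wants to reproduce that kind of measurement without the bash loop, it's a few lines with the dnspython package (a sketch; plug in whatever domain list you consider "important"):

    import dns.exception
    import dns.resolver

    domains = ["example.com", "ietf.org", "cloudflare.com"]  # your list here
    signed = 0
    for name in domains:
        try:
            dns.resolver.resolve(name, "DS")  # a DS record in the parent => signed
            signed += 1
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN,
                dns.resolver.NoNameservers, dns.exception.Timeout):
            pass
    print(f"{signed}/{len(domains)} zones have a DS record")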


It can be used alongside WebPKI. And as someone who is worried about other protocols, it sure would be nice if I could set up DNSSEC for my domain and have clients pick up on that automatically.


Phishing existing isn't a good argument against cryptographically authenticating DNS records.


"Phishing existing" isn't the argument. "The dominant vector for actual domain takeover over the last 5 years is phishing" is.


But it also applies to every other part of the stack, including WebPKI. Would you accept this as a valid argument against using HTTPS everywhere?


I can't even follow your argument anymore. DNSSEC is proposed as a feature to make DCV certificates more difficult to misissue. But DCV misissuance is overwhelmingly caused by registrar ATO. DNSSEC therefore can't address most DCV misissuance. And it has no other mainstream security proposition.

That is obviously not a claim you can make of the WebPKI. Your problem here is that the WebPKI is a very large superset of the security capabilities of DNSSEC. Unlike with DNSSEC, people --- millions of them --- actually rely on it.


I'll rephrase the argument to make it clearer for you: phishing attacks are far more common than HTTP MITM, so we don't need protection against HTTP MITM. If you think that conclusion doesn't follow from the premise, then what differentiates HTTP from DNS in your mind? Because you are making exactly this argument about DNS.


Neither DNSSEC nor the WebPKI is a defense against phishing. But phishing (registrar ATO more generally) is the dominant vector through which DNS spoofing occurs, and DNSSEC solely addresses on-path DNS spoofing.


Do you agree that we don't need HTTPS because phishing is the most common HTTP attack, not MITM?


No? This is the third attempt you've made at this faulty syllogism. If we simply can't resolve enough premises to hash it out, that's fine, we don't have to try to understand each other.


HTTPS also has expiring keys that need to be rotated. Most people outsource this to a service provider - as is the case with DNS. It's weird how people gripe about standard cryptography/PKI when it comes to DNSSEC but not HTTPS.


Which is a problem with the OS and browser, not with DNSSEC.


Eric Rescorla's post, linked upthread, goes into detail about why "OS's and browsers" can't easily solve this problem without breaking the Internet for materially large fractions of their users. In practice, browsers that care about DNS security just use DoH.


It's a lot like HTTP and every other early internet protocol that existed before the crypto. Everyone agrees that it's a problem, but fixing all the existing infra is really hard and expensive.

