Sometimes I have lamented the fact that 80% of everything we're ever going to properly discover in Computer Science was already discovered forty years ago.
What hasn't been explored very well is how to formulate these solutions so mere mortals can comprehend how they work. Algorithm accessibility is, I believe, the limiting factor on building systems any bigger than the ones we have now. When there is one tricky bit in the code, you can get away with asking people to dive in and learn it. When there are 50? 100? Just figuring out the consequences of how those systems interact is a full-time job, let alone how they function internally.
Give me a SAT solver, or a parser, or half a dozen other things that decompose a problem the way a human would, just faster and with far more accuracy, and I could learn each in a week. Take all my Paxoses and replace them with Raft, then swing by and make a second pass on a few.
The real problems are conventions and ease of use. You're not going to solve these problems by pointing at a paper from the 80s and shouting "We did this 40 years ago! Why is everyone so dumb nowadays?".
Let's take WebAssembly as an example. Its only reason for existence is convention. You need everyone to agree on an IR that is not only cross-platform and stable but also low-level and allows execution of untrusted code within a sandbox.
If you look at the competitors, it becomes obvious that they don't satisfy these core requirements. LLVM IR just isn't meant to be used in a browser. JVM bytecode just wasn't meant to be used in a browser. So what are we going to do? Use them anyway? That's how you get situations like the horribly insecure JVM plugin. You can restrict JVM bytecode to a subset that is secure enough for browsers and add extra opcodes for low-level behavior, but then you are no longer using JVM bytecode. It's a completely new invention at that point, but Oracle will still hunt you down.
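The convention being argued for here is small enough to see end to end. As a sketch, the snippet below hand-assembles the well-known minimal "add" module from the WebAssembly binary format and runs it through the JavaScript embedding: the same bytes work in any conforming engine, and because the module is given no imports, it can compute but cannot reach outside its sandbox.

```javascript
// A minimal WebAssembly module written out byte-by-byte: the portable,
// low-level, stable IR everyone agreed on. Exports add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                               // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,                               // binary format version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0; local.get 1; i32.add; end
]);

// The sandbox in action: no import object is supplied, so the module has
// no access to the host environment beyond what it is explicitly handed.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

The byte layout follows the section structure of the WebAssembly binary format; the JavaScript API (`WebAssembly.Module`, `WebAssembly.Instance`) is the standard embedding available in browsers and Node.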
This is a bit harsh. JVM bytecode was meant to run in the browsers of its time. Plugins were a respected part of the ecosystem back then. There were two kinds of security problems.
The majority were not bytecode-related, but were bugs in the interface to the outside world. There is no reason why WebAssembly is better in this regard, e.g. WebGL vs. Java's graphics APIs. Safety here mainly comes from a corporate culture that prioritizes security. If financial hardship or other stresses befall the browser maker, WebAssembly will probably give the same trouble Java had.
The minority were related to the soundness of the JVM itself. Most of these have been fixed. These bugs are in general nasty, as the basics had been validated by a mathematical proof. A few are still there and very hard to fix, like being able to lock system objects such as Thread.class. I think this was a learning experience for all secure VMs that followed, and WebAssembly knew what to avoid because of Java. Only time will tell how well WebAssembly withstands the nasty ideas humanity throws at it.
It's hard to optimize algorithms while also making them general and composable. We are forever reimplementing good ideas in different combinations of systems.