The assumption is that orders of magnitude more people will benefit from the efficiency gains, as was the case with the automation of agriculture or factory work.
In those cases there was a transition period; nowadays only a small fraction of the human population works to produce food, and their jobs are more about planning, finance, and orchestration of machine work. But many specialised jobs were lost or made miserable in the process.
IMHO any job that can be done by a machine should not be done by a human; the tricky part is getting there with as few undesirable effects as possible.
Of course they do. For example you can lower a string from the top to query the fill level. Or you can wrap a string around the pot to query the circumference.
The most important argument is phishing. People aren’t good at recognizing when a web site is legitimate. One reason that app certification is a shitshow is that recognizing bad players while minimizing false negatives and false positives is a difficult problem. Domain names fundamentally don’t solve that problem.
If you look at the V-Dem Electoral and Liberal Democracy Indices there, you can also see that it’s been decreasing since around 2010. It’s back to mid-1990s levels, coincidentally around when mainstream internet usage started.
Mathematical concepts are invented, but they live in a space of possible (conceivable) mathematical concepts, and we can only invent concepts from that selection of possible concepts. This can be reframed as a process of discovery of which concepts are possible.
Furthermore, the results of theorems aren’t an invention, they are a discovery of what the base assumptions (axioms) logically entail. Finding out which theorems are true and provable is a discovery process. For example, the results of Gödel’s incompleteness theorems were a discovery. They weren’t invented, in the sense that the results couldn’t have been otherwise. We merely could have failed to discover them.
This also holds for physical inventions. You discover a working way to build some functioning mechanism. It’s a process of discovery of what is possible in the physical world.
Whether you portray something as a discovery or as an invention is more a matter of degree, a matter of the angle from which one is looking at it.
The possible states of an LLM are finitely enumerable. The same likely holds for the possible states and configurations of a human brain, in approximation. Therefore there is only a finite set of possible ideas, thoughts, and conceptualizations an LLM or a human can have, and in principle they could be exhaustively enumerated and thus “discovered”.
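As a rough back-of-the-envelope version of that claim (assuming, purely for illustration, a model with $N$ parameters, each stored as a 16-bit float):

```latex
\#\{\text{states}\} \;\le\; \left(2^{16}\right)^{N} \;=\; 2^{16N}
```

Astronomically large, but finite, and hence enumerable in principle.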
The task of a proof verifier is much simpler than the task of a proof finder (this is essentially the asymmetry behind P vs. NP), and hence the bar for the required skills is lower. Merely verifying proofs isn’t research, and doesn’t impart research skills.
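A toy sketch of that verify/find asymmetry, using SAT as a stand-in (the formula and names here are made up for illustration): checking a candidate solution is one linear pass, while finding one is a search over exponentially many candidates.

```python
from itertools import product

# A tiny CNF formula: (x1 or not x2) and (x2 or x3) and (not x1 or not x3).
# Clauses are lists of signed variable indices: +i means x_i, -i means not x_i.
cnf = [[1, -2], [2, 3], [-1, -3]]

def verify(cnf, assignment):
    """Verifying a candidate: one linear pass over the clauses."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

def find(cnf, n_vars):
    """Finding a solution: brute-force search over 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if verify(cnf, assignment):
            return assignment
    return None

sol = find(cnf, 3)
print(sol is not None and verify(cnf, sol))  # True: cheap to check, costly to find
```

The checker runs in time linear in the formula size; the finder's loop grows exponentially with the number of variables, which is the whole point of the analogy.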
Verification on its own is not research, but judgement is research.
"Hey, Prove something a machine can't", sure I can't, "Hey, Say something worth proving & judge it well", ah, now I might have a few unique observation/ideas/curiosities/problems from my having being a human.
IMO, the test for the feeling of intelligence or originality in AI is subjective, and comes down to four criteria: novel relative to a reference class, valuable within a domain, counterfactually sensitive to internal state and environment, and revisable through learning.
It matters because most mathematicians thrive on the recognition of their achievements. If anything you do could have been done by any mediocre mathematician, that takes away motivation and fulfillment.
> there's a decent chance projects like Debian might have to radically overhaul or just shut down completely - the whole philosophy of slow and steady with old code just won't work.
Debian continuously issues security updates for stable releases, which can be applied via unattended upgrades. “Stable” doesn’t mean that vulnerabilities aren’t getting fixed.
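For instance, Debian’s unattended-upgrades package can apply those security updates automatically; a minimal sketch of the stock configuration (file paths and option names from the unattended-upgrades package, shown here as an excerpt, not a complete config):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
Unattended-Upgrade::Origins-Pattern {
    "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
```

With the Origins-Pattern restricted to the security archive, only security fixes are pulled in automatically, which is exactly the “stable but patched” behavior described above.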
The argument that could be made is that keeping up with getting vulnerabilities fixed might become such a high workload that fewer releases can be maintained in parallel, and therefore the lifetime and/or overlap of maintained releases would have to be reduced. But the argument for abandoning stable releases altogether doesn’t seem cogent.
It goes both ways: Stable code that only receives security updates becomes less vulnerable over time, as the likelihood of new vulnerabilities being introduced is comparatively low. From that point of view, stable software actually has a leg up over continuous (“eternal beta” in the worst case) functional updates.
I can only dream, but this may re-popularize (in the rest of the software industry, beyond Debian) the general best practice of keeping a "sustaining" branch green, buildable, and frequently released with security fixes.
I hate software that forces you to take new features as a condition of obtaining bug and security fixes. We need to keep old "stable" builds around for longer and maintain them better. I know, I know, it is really upsetting to developers to have to backport things to old versions--they wish that all they had to work on was the current branch. But that just causes guys like me to never upgrade because the downside of upgrading (new features) is worse than the upside (security fixes).