>Finding routes for drivers and deliveries, planning schedules, optimizing store layouts, etc. -- that's AI.
If a path-finding algorithm that I can write on paper is AI, AI has completely lost all meaning. Let's not call graph traversal and sorting "AI" please.
(1) Optimization is much, much bigger than graph traversal and sorting.
(2) Modern route-finding algorithms are to your on-paper-Dijkstra what a rocket ship is to your bicycle.
(3) I think you're under the same misconception I'm talking about: graph traversal is absolutely a fundamental of AI. Ask anyone what the main AI textbook is and they'll tell you it's Russell and Norvig: http://aima.cs.berkeley.edu/
The first topic they cover is graph traversal and search.
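To make point (2) concrete: the on-paper baseline everyone has in mind really is tiny. Here's a minimal Dijkstra sketch in Python (the toy version, nothing like the preprocessing-heavy contraction-hierarchy machinery production routing engines use; graph representation and names are my own):

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path distances from `start` in a weighted graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {start: 0}
    heap = [(0, start)]  # (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# e.g. dijkstra({"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}, "A")
# finds the A->B->C route of length 3 rather than the direct edge of length 4.
```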
No, you're just projecting the modern bar for AI into the past. AI roughly means "things human brains can do that computers can't": when computing was primarily straightforward and analytical ("calculating"), then relatively more sophisticated algorithms that could "solve problems" like mazes absolutely were on the AI frontier. The fact that they've since retreated so far from "things computers can't do" is just a function of the fact that it was an early success in the field.
Your comment is just a crystallization of what the parent comment is talking about: claiming something isn't AI because "pft that's such an easy, solved problem" _after_ it's solved defines away the possibility that AI can solve problems.
If anything the opposite is true. When Minsky et al set out to define AI what they really meant was 'thinking machines'.
If anything the opposite has happened. In a painful attempt to push forward notions of success in AI, almost purely mechanical tasks have been claimed to be AI, while there has been virtually no progress on building machines that can think.
I mean, sure, you can claim all day that the navigation system in your car calculates a billion routes per second, and if that's intelligence, my smart toaster is probably more intelligent than everyone here put together. But that completely misses the point; the reason people have expanded the term so much is that there has been so little progress on genuine intelligence.
> If anything the opposite is true. When Minsky et al set out to define AI what they really meant was 'thinking machines'.
Yes, there are two senses of the word: the theoretical and the colloquial. You're referring to the former and I'm referring to the latter; the latter is much more relevant to this thread's topic, which is public perception of AI and its value. Wikipedia actually has a pretty good concise description of the two senses:
> Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term "artificial intelligence" is often used to describe machines (or computers) that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving"[2]
(The [2] citation is of Russell & Norvig's '09 edition, substantially predating the recent mass-interest in AI)
> if that's intelligence my smart-toaster is probably more intelligent than everyone here together
The leap from "this is reasonably described as a step on the road to intelligence" to "being really good at this step means you're _really_ intelligent" is obviously nonsense, though I suspect you know that. The fact that a cat can orient itself and navigate home while an amoeba can't is a sign of relative intelligence; but if your cat has a better sense of direction than you, that obviously doesn't make it smarter than you.
>AI roughly means "things human brains can do that computers can't"
Ok, then nothing being done today on computers (neural nets, GANs, etc.) is AI. What a dumb definition.
>claiming something isn't AI because "pft that's such an easy, solved problem"
Nope. It's not AI because a programmer directly wrote the instructions for how to path-find. There was no training input from which the program learned to pathfind; it has used the same single method since day one and hasn't deviated since.
I mean, these kinds of simple CS201 algorithms are still at the core of a lot of things that definitely count as AI by any reasonable definition.
In particular, the best superhuman poker-playing systems use counterfactual regret minimization, which is literally just traversing a tree and updating small arrays of numbers at each node, augmented with some clever heuristics to make it scale to realistic poker scenarios. There aren't even any neural networks involved.
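To show just how small those "arrays of numbers" updates are, here's regret matching, the core update inside CFR, sketched for one-shot rock-paper-scissors (a toy under my own naming and setup, not a poker solver; full CFR applies this at every node of the game tree):

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return (a - b + 4) % 3 - 1

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations, opponent_strategy, rng=random):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strategy = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strategy[a]
        mine = rng.choices(range(ACTIONS), weights=strategy)[0]
        theirs = rng.choices(range(ACTIONS), weights=opponent_strategy)[0]
        # Regret: how much better each alternative action would have done
        # than the action actually played.
        for a in range(ACTIONS):
            regrets[a] += payoff(a, theirs) - payoff(mine, theirs)
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy
```

Against an opponent who always plays rock, the average strategy converges toward always playing paper: just a small array of regrets per decision point, updated as the tree is traversed.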
But this very much was the main focus of AI only a few decades ago. If AI has changed so much that the AI of 20 years ago is no longer AI, perhaps modern AI researchers should find a different name? I very much mean this. Previous-generation AI was very much about graph algorithms, whether via LISP, Prolog, RDF, reasoning expert systems, parsing, etc. These are all graph concepts.
What you're picking up on is that the old school vision of how AI would be achieved (Minsky, Chomsky etc) had some early success with simple games (which they then touted as the forerunners of AI) and then stopped. Data driven ML approaches are completely different.
Physics once held that there was a substance called aether rather than vacuum, and chemistry that fire came from phlogiston, but we didn't need to rename the discipline when reasoning evolved.
The existence of the aether isn't borne out by experiments; rather, we have justification in believing in its non-existence.
The algorithms of AI of decades past still work.
That's a big difference.
E.g. SHRDLU can be built today and you can have a conversation with it about its world of blocks.
Algorithms are artifacts of mathematics. To calculate distance between two points, we still use sqrt(dx^2 + dy^2); it doesn't go out of fashion due to advancements in topology.
The techniques aren't that new; all that is new is having the gigabytes upon gigabytes of RAM to run them, not to mention CPU power, and scads of data.
I knew what a neural network was, and understood it as part of AI, when Wham! was in the Top 40 charts.
The hate isn't for the "new" techniques, I think; just for the posers who claim to be AI experts because they know how to use some Python library or whatnot.
You don't have AI creds if you have no background in the symbolic stuff.
We'll have to disagree. You can be perfectly well credentialed without years spent studying techniques that weren't very successful. It is really rewriting history to pretend that old-style NNs and backpropagation are the same as modern systems -- implementation techniques matter.
> Physics once held that there was a substance called aether rather than vacuum, and chemistry that fire came from phlogiston, but we didn't need to rename the discipline when reasoning evolved.
The key difference being that physical theories based on aether never actually worked, whereas computer programs based on graph algorithms solved many problems they set out to solve.
A huge amount of stuff can and should be automated by mundane programming. Since even that hasn't been done yet, AI isn't about to automate all of the stuff that can't be handled by mundane programming.
You're proving the point of the original post. Today's mundane programming used to be considered AI. See: search, fuzzy logic, character recognition. Or even more mundane: object-oriented programming, interpreted languages, and tons of generic algorithms used in daily life, all of which came out of AI labs.