
> It comes down to whether you believe AGI is achievable.

No, it does not. I very much believe AI (or AGI, as you call it) is achievable, but may I remind you that some years after the invention of neural networks, Norbert Wiener, one of the greatest minds of his generation, said that the secret of intelligence would be unlocked within five years, and Alan Turing -- another great believer in AI -- scoffed and said it would take at least five decades. That was seven decades ago, and we are not even close to achieving insect-level intelligence. Maybe we'll achieve AI in ten years and maybe in one hundred, but you don't know which of those is more likely, and you certainly don't know whether any of our pre-AGI technology even puts us on the right path to achieving AGI. There have been other paths toward AI explored in the past that have largely been abandoned.

OpenAI is not actually building AGI. Maybe it hopes that the things it is working on could be the path to an eventual AGI. OpenAI knows this, as does Microsoft.

This does not mean that what OpenAI does is not valuable and possibly useful, but it does make calling it "pre-AGI" pretentious to the level of delusion. Now I know there were (maybe still are) some AI cults around SV (I think a famous one even called themselves "The Rationalists" or something), but what makes for a nerdy, fanciful discussion in some dark but quirky corner of the internet looks jarring in a press release.

> If you believe AGI might be achievable any time soon, it becomes hard to work on any other problem — and it's also very important to put in place guardrails like https://openai.com/blog/openai-lp/ and https://openai.com/charter/

I can't tell if you're serious, but assuming you are, the problem is that there are many other things that, if you thought they could be achievable any time soon, would make it hard to work on any other problem, as well as make it important to put guardrails in place. The difference is that no one actually knows how to put guardrails on AGI. We are doing a pretty bad job putting guardrails on the statistical clustering algorithms that some call (pre-AGI?) AI and that we already use.



If AGI is achievable (seems likely given brains are all over the place in nature) and achieving it will have consequences that dwarf everything else then doesn't it make sense to focus on it?

Yes, historically people were way too optimistic and generally went down AI rabbit holes that went nowhere, but two years before the Wright Flyer flew, the Wright brothers themselves said powered flight was 50 years out (and others were still publishing articles about human flight being impossible after it was already flying).

People are bad at predictions; in the Wright brothers' case, since they were the ones who ultimately did it two years later, they were probably the best placed to make the prediction, and they were still off.

Given that AGI is possible and given the extreme nature of the consequences, doesn't it make sense to work on alignment and safety? Why would it make sense to wait? If you accidentally end up with AGI and haven't figured out how to align its goals then that's it, the game is probably over.

Maybe OpenAI is on the right path, maybe not - but I think you're far too sure that they're not.


First of all, I was talking about the language, not the work. It makes sense to study AI as it does many other subjects, but we don't know that it will "have consequences that dwarf everything else" because we don't know what it will be able to do and when (we think it could, but so could, say, a supervirus, or climate change, or the return of fascism). People hang all sorts of dreams on AI precisely because of that. That cult I mentioned, The Rationalists, basically imagined AI to be a god of sorts, and then you can say "wouldn't you want to build a god?" But we don't know if AI could be a god. Maybe an intelligent being that thinks faster than humans goes crazy? Of course, we don't know that either, but my point is that the main reason we think so much of AI is that, at this time, we don't know what it is or what it could do.

> Why would it make sense to wait?

Again, that's a separate discussion, but if we don't know what something is or when it could arrive, it may make more sense to think about things we know more about and are already here or known to be imminent. Anyway, anyone is free to work on what they like, but OpenAI does not know that they're "building artificial general intelligence."

> I think you're far too sure that they're not.

I don't know that they're not, but they don't know that they are, and that means they're not "building AGI."


I can understand your point about the language, but I guess I think it's reasonable to set the goal for what you actually want and work towards it. It may turn out to be unattainable, but I think generally you need to at least set it as the goal. It also seems less clear to me that they are close or far from it (I don't think it's on the same level as warp drive).

I don't know about the god thing you mention and the rationalist stuff I've read hasn't been about that. The main argument as I understand it is:

1. AGI is possible

2. Given AGI is possible, if it's created without the ability to align its goals with human goals, we will lose control of it.

3. If we lose control of it, it will have unknown outcomes which are more likely to be bad than benign or good.

Therefore we should try and figure out a way to make it safe before AGI exists.

Maybe humans just happen to be an intelligence upper bound, and anything operating at a higher level goes crazy? That seems unlikely to me, given that humans have a lot of biological constraints (heads have to fit through birth canals, brains have to run on energy from food, and there is selective pressure for other things besides just intelligence). You could be right, but I'd bet on the other side.

The last bit is that if we can solve this in a way that aligns its goals with human goals (an open question, since humans themselves are not really aligned), we could solve most problems we need to solve.


I think discussions of AI safety at this stage -- when we're already having problems with what passes for AI these days, problems we're not handling well at all -- are a bit silly, but I don't have anything particularly intelligent to say on the matter, and neither, it seems, does anyone else, except maybe for this article, which argues that the AGI paranoia (as opposed to the real threats from "AI" we're already facing, like YouTube's recommendation engine) may be the product of a point of view peculiar to Silicon Valley culture: https://www.buzzfeednews.com/article/tedchiang/the-real-dang...


I agree with you in a way: if AGI ends up being 300 years out, then work on safety now is likely not that important, since whatever technology is developed in that time will probably end up being critical to solving the problem.

My main issue personally is that I'm not confident if it's really far out or not and people seem bad at predicting this on both sides. Given that, it probably makes sense to start the work now since goal alignment is a hard problem and it's unknown when it'll become relevant.

I read the BuzzFeed article, and I think the main issue with it is that he assumes an AGI will be goal-aligned by the very nature of being an AGI:

"In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind."

Humans have general preferences and goals built in that have been selected for over thousands of years. An AGI won't have those by default. I think people often assume that something intelligent will be like human intelligence, but the entire point of the strawberry example is that an intelligence with different goals that's very good at general problem solving will not have 'insight' that tells it what humans think is good (that's the reason for trying to solve the goal alignment problem - you don't get this for free).

He kind of argues for the importance of AGI goal alignment which he calls 'insight', but doesn't realize he's doing so?

The comparison to Silicon Valley being blinded by the economics of its own behavior is just weak politics that misses the point.


We don't know that "goal alignment" (to use the techno-cult name) is a hard problem; we don't know that it's an important problem; we don't even know what the problem is. We don't know that intelligence is "general problem solving." In fact, we can be pretty sure it isn't, because humans aren't very good at solving general problems, just at solving human problems.


> Therefore we should try and figure out a way to make it safe before AGI exists.

Makes no sense to me. How would you ever be able to figure out a way to make something safe before it even exists?

Someone who has never built a nuclear reactor most likely could not think of a way to prevent the Chernobyl disaster.

(OK, maybe this is a bad example, as someone who did build one couldn't prevent it either, but the point should be clear.)


I think the argument is that decision theory and goal alignment can be worked on without knowing all the details about how an AGI will work.

https://intelligence.org/2016/12/28/ai-alignment-why-its-har...


ah yes Yudkowsky, the well established AI researcher & definitely not a crank


Personal attacks are not ok here, regardless of whom you're attacking. Can you please not post like this to HN?

https://news.ycombinator.com/newsguidelines.html


We don't know whether AGI is possible or even exactly what it is. However, if there were a form of intelligence where adding more hardware adds more capabilities, in the fashion of present computing, but where the capacities are robust and general-purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things. A device akin to an army of well-organized brilliant people in a box would clearly have many capacities. So it's reasonable to say that if that's possible, investing in it may have a huge payoff. (Edit: the "strong" version of "AGI is possible" would be that AGI is an algorithm that gives a computer human-like generality and robustness while retaining ordinary software-like abilities. There are other ideas of AGI, of course - say, a scheme that would simulate a person on such a high level that the simulated person had no access to the qualities of the software doing the simulation - but that's different.)

The problem, however, is I think what another gp objected to: OpenAI isn't really working on AGI; it's making incremental improvements on tech that's still fragile and specialized (maybe even more specialized and fragile), where the only advance of neural nets is that now they can be brute-force programmed.


> However, if there were a form of intelligence where adding more hardware adds more capabilities, in the fashion of present computing, but where the capacities are robust and general-purpose like humans' rather than fragile and specialized like current software's, then we'd have something of amazing power - brilliant people can do amazing things.

That's a very big if... Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together... Then if your AGI only reaches the level of intelligence of humans, and maybe a bit more (what does 'more' even mean in terms of human intelligence? more empathic? faster calculation ability? more memory? what would the use of this be? all things we can't really assess), it raises the question of whether this would ever be possible in a cost-efficient way (human intelligence seems, in a certain way, "cheap").


>That's a very big if...

Oh, this is indeed a big if. A large, looming aspect of the problem is that we don't have anything like an exact characterization of "general intelligence", so what we're aiming for is very uncertain. But that uncertainty cuts multiple ways. Perhaps it would take 100K human-years to construct "it", and perhaps just a few key insights could.

> Also, I'd argue that most progress happens not because of some brilliant people, but because of many people working together...

The nature of a problem generally determines the sort of human organization one needs to solve it. Large engineering problems are often solved by large teams; challenging math problems are generally solved by individuals working with the published results of other individuals. Given we're not certain of the nature of this problem, it's hard to be absolute here. Still, one could be after just a few insights. If it's a huge engineering problem, you may have the problem that "building an AGI is AGI-complete".

> Then if your AGI only reaches the level of intelligence of humans, and maybe a bit more (what does 'more' even mean in terms of human intelligence? more empathic? faster calculation ability?

I've heard these "we'll get to human-level but it won't be that impressive" kinds of arguments and I find them underwhelming.

"What use would more memory be to an AGI that's 'just' at human level?"

How's this? Studying a hard problem? Fork your brain 100 times, with small variations and different viewpoints, to look at different possibilities, then combine the best solutions. Seems powerful to me. But that's just the most simplistic approach; it seems like an AGI with extra memory could jump between the unity of an individual and the multiple views of a work group in multiple creative ways. Plus, humans have a few quantifiable limits - human attention has been very roughly characterized as limited to "seven plus or minus two chunks". Something human-like but able to consider a few more chunks could possibly accomplish incredible things.


> If AGI is achievable

It almost certainly is. Humans make new intelligences all the time.

> and achieving it will have consequences that dwarf everything else

It probably won't; humans make new intelligences all the time. Changing the technology base for that doesn't have any significant necessary consequences.

A revolution in our ability to understand and control other intelligences might have consequences that dwarf anything else, with or without AGI, but that's a different issue, and moreover one whose shape is basically impossible to even loosely estimate without some more idea of what the actual revolution itself would be.


The difference is in the scale of the intelligence, not just the technology.

It's not so much a new human like intelligence that runs on silicon, it's a general problem solving intelligence that can run a billion times faster than any individual human. This is the part I think you're underestimating.

If you have that without the ability to align its goals to human goals then that's a problem.


> The difference is in the scale of the intelligence, not just the technology.

AGI is inherently no greater in scale than human intelligence, so scale is not a difference with AGI, though it might be with AGsuperI. But that's a different issue than mere AGI, and may be impossible or impractical even if AGI is doable; we have examples of human-level intelligence, so we know it is physically achievable in our universe. We don't have such examples for arbitrarily capable superhuman intelligence.


I think that's somewhat of an arbitrary distinction likely not to exist in practice.

If you have an AGI you can probably scale up its runtime by throwing more hardware at it. Maybe there's some reason that'll prevent this from being true, but I'm not sure that should be considered the default or most likely case.

Biology is limited in ways that AGI would not be due to things like power and headsize constraints (along with all the other things that are necessary for living as a biological animal). Human intelligence is more likely to be a local maximum driven by these constraints than the upper bound on all possible intelligence.


> If you have an AGI you can probably scale up its runtime by throwing more hardware at it

Without understanding a lot more than we do about both what intelligence is and how to achieve it, that's rank speculation.

There's not really any good reason to think that AGI would scale particularly more easily than natural intelligence (which, in a sense, you can scale with more hardware: there are certainly senses in which communities are more capable of solving problems than individuals.)

> Biology is limited in ways that AGI would not be due to things like power and headsize constraints

Since AGI will run on physical hardware it will no doubt face constraints based on that hardware. Without knowing a lot more than we do about intelligence and mechanisms for achieving it, the assumption that the only known examples are particularly suboptimal in terms of hardware is rank speculation.

Further, we have no real understanding of how general intelligence scales with any other capacity anyway, or even whether there might be some narrow "sweet spot" range in which anything like general intelligence operates, because we don't much understand either general intelligence or its physical mechanisms.


This is especially true considering we're talking about software vs. hardware (the airplane). A few, or even one, brilliant mind(s) could make a breakthrough in AGI in a matter of months.


> A few, or even one, brilliant mind(s) could make a breakthrough in AGI in a matter of months.

Same goes for warp drives, doesn't it?

The point is that people don't see how we build THAT out of these tools we currently have. We only build pastiches of intelligence today and either we have an arrogant view of the level of our own intelligence or we can't make THAT with THIS.

But maybe warp-drives, maybe world-peace too?


Not really - we're not sure if a warp drive is possible given the physical constraints of the universe.

AGI is possible because intelligence is possible (and common on earth) in nature.


> AGI is possible because intelligence is possible (and common on earth) in nature.

But you haven't asked if we're capable of building it. While it might be technically possible are we capable of managing its construction?

All I see today is ways of making the process more opaque at the benefit of not having to provide the implementation. How does that technique even start to scale in terms of its construction? I worry about the exponentially increasing length of the "80% done" stage, and that's on the happy path.


Following your reasoning, flying at the speed of light is possible because photons travel at the speed of light. We are not photons, though. Is it possible to travel at the speed of light in a spaceship or not?


Warp drives travel faster than the speed of light? That's what I meant by not possible.

Ignoring that: if we saw miniature warp drives everywhere around us in nature, then yes, I would be more confident they were possible.


I see your point; I just wanted to point out that there are different challenges for us than for nature. Flying like a bird has different challenges than flying a Boeing 747, even though these challenges might share a subset of the physics, like Bernoulli's principle.


Yep - I think that's fair and a good analogy.

Similarly to how human airplanes don't flap their wings like birds, there will probably be implementation differences that make sense but share the underlying principles. Particularly since the artificial version isn't constrained by things biology needs to handle.


No, way wrong: there would be enormous hardware costs to building a warp drive. There are near-zero costs (possibly) to building AGI.


Mythical man-month. I feel like you're seriously underestimating how hard this is. It's got to be one of the greatest engineering challenges of our species, and to flaunt that it's "near zero cost" is offensive.


Not offensive in the least; it's a compliment to what our species has done to date. We are all standing on shoulders, and the shoulders have never been higher. Think of the things you can build in a day that were impossible to build 30 years ago. To think that there isn't at least some chance someone will build AGI in the next 30 years is foolish. Again, I'm just saying there is a reasonable chance, like hitting a home run. It's not likely for any given plate appearance, but given the number of games and players it happens every summer day.


We're making the process more opaque. How can that scale to AGI? We'll be stuck at 80% done for much longer.

I would posit that while it's possible, it will take so long on this tech stack that we'll find another in the interim that will produce better results. I'm not convinced this branch is the winner.


Oh, do you mean how can Azure scale to AGI? I have no opinion on Azure, I just meant someone smart will figure it out. There are huge financial incentives to do so, when that happens, we (humans) figure shit out.


> Oh, do you mean how can Azure scale to AGI?

No, not in the slightest. I mean as we progress the dev cycles get harder and slower. Then we need more engineers and the administration of more engineers working together makes everything harder.

Have you ever considered that making a rock think might be one of the greatest engineering projects our species has ever taken on? Sure humans might figure it out but I'm of the belief it will take them a very long time to. In addition, I believe that in that timescale a different tech stack might show more promise. I'm not convinced this technological branch scales all the way to AGI.


Fair point, but I think admin needs have gotten lighter. It took 400k people to get us to the moon. I want to see the results of 400k engineers working independently or in small teams on AGI.


Sounds like a great idea and I'm all for it but I'm talking about the integration of that mess. It will be like trying to hit an ant on the moon with the precision of 18th century artillery.

> Well we've removed its irrational hatred of penguins but now it struggles with the concept of Wednesday again...


Why would AGI dwarf anything?

There are at least 7 billion beings on the planet with AGI already. I think a bigger problem is the general well being of the aforementioned 7 billion entities.


For a simple thought experiment, take one human brain architecture (say it can operate at 100 operations per second) and scale that up to a billion operations per second. It can do centuries' worth of human thinking in a couple of hours.

If you have an AGI that has goals that are not aligned with your interests it'll dwarf everything else because it thinks faster than you (and can therefore act faster than you) in pursuit of its goals.
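To make the arithmetic concrete, here's a back-of-envelope sketch; both rates are the illustrative assumptions from the thought experiment above, not measured properties of brains:

    # Back-of-envelope for the speedup in the thought experiment (Python).
    base_rate = 100              # hypothetical "ops/sec" of one brain (assumed)
    scaled_rate = 1_000_000_000  # the scaled-up version (assumed)
    speedup = scaled_rate / base_rate  # 10,000,000x

    wall_clock_hours = 2
    subjective_years = wall_clock_hours * speedup / (24 * 365)
    print(round(subjective_years))  # ~2283: millennia of thinking in an afternoon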


But human intelligence is weird. It's not clear to me that increasing the speed of my brain would really accomplish much in my day-to-day life. A lot of the value that I add happens during these 'Eureka' moments, often triggered when I am working on a different problem, taking a break, or after a good night's sleep. Adding more processing speed may or may not make that process more scalable.

And another thing to consider is that in the real world, success is not easy to define, and it is only loosely correlated with intelligence. We have 7 billion people, each attempting random little variations on 'succeeding at life'. And the 'winners' generally require that some of the 7 billion people agree to 'reward' them (i.e., by giving them money). My last 3 purchases were watermelon seeds for my garden, a pair of jeans, and a dinner at a Vietnamese restaurant. It's not clear to me how AI would take over any of those transactions. Maybe it could make the jean manufacturing more efficient, but the price I paid was already pretty low.


Sure, it happens in Eureka moments, which for you arrive after a break that might take a few hours; but if you're running a billion times faster, then a few hours turns into a billionth of that time. That's what I'm trying to get at with the example - even assuming the exact same architecture otherwise.

For the real world success part that's where goal alignment comes in. If we're going to solve things like dealing with the sun burning out, becoming an interplanetary species, or death then having an AGI that can work on these problems with us (or as part of us if Neuralink can succeed on what they want to do) will be a big deal.

It sounds crazy, but I think success here is a lot bigger than automating what clothes you were going to buy. Incentive-based systems like capitalism work pretty well, but not being able to coordinate effectively at scale is a major source of current human problems; theoretically, a goal-aligned AGI could do that coordination, or at least help us do it.


AGI would dwarf those 7 billion people because it would concentrate tremendous power into the hands of a very few.

It's the dream of being one of the few who gets to control and direct that concentrated power that fuels these dreams, which is why it's imperative that they dress it up in the language of benefiting society.

The essence of the ethical problem with AI is that there is no person or small group of people who can be trusted to use such power without creating a real dystopia for the rest of us.


I think this is a pretty big misunderstanding of the AGI issue.

Nobody is going to control the hyper-intelligent AGI if its goals are not aligned with human goals more generally. That's the nature of something being a lot smarter than you with its own goals.


Wait, is the claim that intelligence alone determines who is in control? I've certainly seen lots of examples where people were controlled by others, even though they were orders of magnitude more intelligent than those who had power.

Are the people who are trying to make AI a reality planning to give away the ability to unplug the machine it will inevitably depend upon, or give it the ability to control nuclear weapons so that it can wipe out humanity before that happens, like a bad movie script? It really does seem that ridiculous to aver that the creators of AI, if it ever comes to be, won't retain ultimate control over what it can do in the physical world.

Whatever its goals end up being, they will be aligned with Microsoft's goals if OpenAI gets there first. That's what the billion dollars is meant to ensure.


I don’t think humans are orders of magnitude apart.

The difference between the world’s smartest human and its dumbest human is tiny relative to the possible spectrum of intelligence.

Do you see the smartest chimps tricking or controlling humans?

If you wanted to trick a chimp to do something you wanted it to do, do you think it could stop you? And we’re probably a lot closer to chimp intelligence than an AGI would be to us.


> If AGI is achievable (seems likely given brains are all over the place in nature)

I don't see how that conclusion follows from the antecedent.


Brains aren't magical. If the laws of nature allow them to exist, and we see generalized intelligence develop and get selected for repeatedly, then that suggests it can be done - it's just a matter of knowing how.


It might also be that this can't be replicated artificially (that is, human beings may not be smart enough to ever understand their own intelligence, even using all the tools at their disposal).

In a certain way, brains are so complicated that (at least for the moment) they seem quite magical to us


Things often seem magical until they're understood.

As far as humans not being able to ever understand it, I guess that could be true but I wouldn't bet on it.


Our struggle to understand the brain suggests that "just a matter of knowing how" might take a while.


Pron, I fully support your take here. Most AGI campaigners here clearly think that we must have already figured out a lot about how consciousness works. But is there any evidence to back that up? No, because we _haven't_ created consciousness. The most we've done is manipulate _existing_ consciousness. Sure, we can point to similarities between deep learning and the brain, and these avenues are interesting and, I think, worthwhile to explore. But false starts happen often in science (e.g. bloodletting / astrology / pick your own) and seem to occur at intersections where concrete evidence of results is inaccessible. No one can say with certainty we aren't in the middle of one now.

Like pron, I don't mean to dismiss the work any AI researcher is doing, but the industry has growing money and power and I just think people should be careful with statements like the one pointed out already and so often encountered: "if you believe AGI might be achievable any time soon, it becomes hard to work on any other problem."


Consciousness may not have anything to do with AGI. Besides, we haven't as a species defined consciousness in a consistent and coherent way. It may be an illusion or a word game. AGI may end up being more like evolution, a non-conscious self optimizing process. Everyone is talking about AGI but we can't even define what we mean by any of these terms, so to put limits on how near or far away major discoveries might be is pointless.


> It may be an illusion or a word game.

If consciousness is an illusion, what is experiencing the illusion? What makes an experience of consciousness an illusion rather than actual?

(Don't quote Dennett in response, I'm curious to see a straightforward reply to this that makes sense.)


True. I used “consciousness” haphazardly.

Not super related, but AGI enthusiasts sometimes remind me of this: https://youtu.be/bS5P_LAqiVg?t=9m50s


>we are not even close to achieving insect-level intelligence

Then again, I don't know many insects that drive cars, beat the champions at chess and Go, or similar.


I'd say insect intelligence is good enough to at least drive a car. A bee, for example, is pretty damn amazing at flying & navigation. I mean, flying through miles of forest, avoiding obstacles, to find food is no easy task.


The difference is that insects can perform a wide variety of tasks needed for their survival, but all "AI" created by humans so far can only perform a single task each.


I don't know ___a single___ piece of software that can do all of these either. This is exactly one challenge for AGI: knowing which pattern-recognition component to pull up in which situation. An insect can decide when to fly, crawl, or procreate. I do not think we have anything similar in software just yet.


Also Atari! Don't forget the mighty achievement of somewhat mastering Atari - the key indicator of intelligence in our time.


Not taking a side on the over/under for AGI, but perhaps you are also acquainted with this little gem:

https://pdfs.semanticscholar.org/38e6/1d9a65aa483ad0fb4a219f...

Shannon, Minsky, and McCarthy!


It's an interesting dream team. But AFAICT this is only a proposal. Did the proposed series of studies take place? If so, what was the outcome?



I wonder if they were the dream team back then or just promising young researchers.

I think the interesting takeaway is that they (seem to have) expected to solve the major problems of AI (language, common sense, etc.) over a summer with a small stipend.


So you're saying... it comes down to whether you believe AGI is achievable within our lifetime? Or whether it's even worth contributing to at all. I think the parent has made their position pretty clear through their employment choices; that's a level of skin in the game that naysayers don't really have.


Perhaps I haven't been clear. I have no issue with the research OpenAI is performing, nor with anyone's beliefs in AI's imminence or their personal role in bringing it about. However, no one knows whether what they're doing is even on the right path towards AI, and certainly not when it will be achieved, plus the topic has been subject to overoptimism for decades now, so I do take issue with publicly calling what you do "working on AGI" or "pre-AGI" even though you have no idea whether that is what you're doing. Hopes and aspirations are good, but at this stage they fall far short of the level required for such public proclamations. My issue is with the language, not with the work.


I think your issue with the language is not shared by most people. In research we rarely know beforehand what is on the right or wrong path, but we are comfortable with someone saying they are researching something even if they don't know whether it will be useful or a wild goose chase. For example, most people's first thought on hearing "I'm researching ways to treat Alzheimer's" isn't "Only if it passes phase 3 trials!"


Yeah, in this release they're not saying they're doing research towards AI, or even that they're researching AI. They're saying that they're "building artificial general intelligence" and developing a platform that "will scale to AGI." (emphasis mine) They're also calling what they're actually building "pre-AGI."


> We’re partnering to develop a hardware and software platform within Microsoft Azure which will scale to AGI.

This sentence might by itself imply they are farther along than they are, but in the context of the whole article I never got the impression they were close to actually building an AGI.

> The most obvious way to cover costs is to build a product, but that would mean changing our focus. Instead, we intend to license some of our pre-AGI technologies, with Microsoft becoming our preferred partner for commercializing them.

This read pretty straightforwardly to me. Pre-AGI seems like a shorthand for useful technologies like GPT-2.

Reading the article I never got the impression they'd solved AGI, or were even close. The context of the article is a partnership announcement not a breakthrough. I could see how a few people who are very unsophisticated might get a little confused as to how far along they are. But I assumed they were writing for people who had heard of OpenAI which pretty much eliminates anyone this unsophisticated.


They don't know what connection, if any, what they're doing has with AGI. For all we know right now, some botanist researching the reproductive system of ferns is as likely to bring about a breakthrough in AI as their research is. To me this feels like peak-Silicon Valley, the moment they've completely lost touch with reality.

People may also not be confused if Ben and Jerry's started an ice cream ad with mentions of AGI and the change of human trajectory and Marie Curie, and named it Pre-AGI Rum Raisin, but that doesn't mean the text isn't a beautiful and amusing example of contemporary Silicon Valley self-importance and delusion; it reads like a parody that makes the characters in HBO's Silicon Valley sound grounded and humble. Especially the "pre-AGI" bit, which I'm now stealing and will be using at every opportunity. Maybe it's just me, but I think it is quite hilarious when a company whose actual connection with AGI is that, like many others, it dreams about it and wishes it could one day invent it, calls its work "pre-AGI." Ironic, considering they're writing this pre-apocalypse.


Although, perhaps you would agree that someone saying "my work on Alzheimer's might help your friend" would be behaving in a cruel and unprofessional way unless the treatment was indeed in human trials?


I read the article more as "maybe my work will one day be able to cure Alzheimer's and help people like your friend".

What in the article gave you the impression they had made a large breakthrough or were close to an AGI?


Do you hate all marketing, or just marketing around AI in particular? Would you bat an eye at MS investing $1B into a project plus ads about, say, a new GPU architecture promising that "games will never be the same" because the new hardware (or even cloud integration) lets developers efficiently try to satisfy the rendering equation with ray tracing?

FWIW I thought you were clear, but there are only so many middlebrow dismissals one can make towards AI or AGI efforts, and I think I've seen them all, plus the low-value threads they generate. (I've made some too, and suspect we might get brain emulations before AGI, but I try to avoid the impulse, and in any case it doesn't stop me from hoping (and contributing in a minor way) for the research on the article's load-bearing word "beneficial" to precede any realistic efforts at building the actual thing. At least the OpenAI guys aren't entirely ignorant of the importance of the "beneficial" problem.)


I don't hate this copy at all; I absolutely love it! I think it is a beautiful specimen of the early-21st c. Silicon Valley ethos, and it made me laugh. Pre-AGI is my new meme, and that means something coming from a pre-Nobel Prize laureate.

What I'm interested in is how many dismissals of AI - most of which end up justified - the field can take before considering toning down the prose a bit, especially since the dismissals are a result of setting unrealistic expectations in the first place.


> the topic has been subject to overoptimism for decades now,

But so has every other big idea that went on to become reality, like planes (da Vinci was drawing designs for planes over 400 years before the first working ones).

> no one knows whether what they're doing is even on the right path towards AI

This is completely wrong. That would be like saying "no one knows if working on a wing is on the right path to flight".

Look at the way deep learning works. Look at the way the brain works. They share immense similarities. Some people say "neural nets" aren't like the brain, but that's not true--they are just trying to not over-exaggerate the differences which laymen commonly do. They are very similar.


> But so has every other big idea that went on to become reality, like planes (da Vinci was drawing designs for planes over 400 years before the first working ones).

And so has every other big idea that didn't become reality, and that was the majority. Again, I have no problem with AI research whatsoever, but the prose was still eyebrow-raising considering the actual state of affairs.

> They are very similar.

They are not. The main role of NNs is learning, which they still mostly do with backpropagation gradient descent (+ heuristics). The brain does not learn with backpropagation.
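For readers who haven't seen it spelled out, here is a minimal sketch of what "learning by backpropagation gradient descent" means in practice - a toy two-layer network fitting XOR in plain numpy; the network size, learning rate, and task are arbitrary choices for illustration:

    import numpy as np

    # Minimal backpropagation + gradient descent: a 2-layer net fitting XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
    lr = 0.1

    for _ in range(10_000):
        h = np.tanh(X @ W1 + b1)                 # forward pass
        out = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
        d_out = out - y                          # cross-entropy gradient at output
        d_W2 = h.T @ d_out; d_b2 = d_out.sum(0)  # backward pass through layer 2...
        d_h = (d_out @ W2.T) * (1 - h ** 2)      # ...and through the tanh layer
        d_W1 = X.T @ d_h; d_b1 = d_h.sum(0)
        W2 -= lr * d_W2; b2 -= lr * d_b2         # gradient-descent updates
        W1 -= lr * d_W1; b1 -= lr * d_b1

    print(out.round(2).ravel())  # approaches [0, 1, 1, 0]

The point of the comparison: every update needs exact error gradients shipped backward through the network, and no known biological mechanism does that.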


> And so has every other big idea that didn't become reality, and that was the majority.

This is a really good point. Would be a fascinating read if someone were to collect all those examples and explore that a bit.


The most famous is the Philosopher's Stone: a substance that can convert base metals into gold.

But in itself, that was not the point. It would also transform the owner or user -- it was a hermetic symbol, a mechanical means to "pierce the veil" and to see the deep mystical and magical truths, the Real Reality. It was immanent, a thing in the world, that enabled the transcendent, to go beyond, above, outside of the world. Its discovery would have been the single most important moment in the history of the world, the moment in which humans had a reliable road to divinity.

Hmm. Sounds familiar, doesn't it?

But out of alchemy came modern chemistry, and also some parts of the scientific method. After all, as some smart people worked out, you could systematically try all the permutations of materials that your reading had suggested as possibilities. That meant measuring, weighing, mixing properly, keeping detailed notes. Fundamental lab work is the unglamorous slab of concrete beneath the shining houses of the physical sciences. There were waves of hysteria and hype, but after each, something useful would be left behind, minus the sheen of unlimited dreams.

Hmm. Sounds familiar, doesn't it?

These days it is possible for a device to transmute base metals into gold. But the operators have not, so far as I can deduce, ascended to any higher planes of existence. They have eschewed the ethereal and remained reliably corporeal.


> Hmm. Sounds familiar, doesn't it?

I'm not sure what reference you are making. Red pill/blue pill? I wasn't aware of the symbolism in the Philosopher's Stone.

> But the operators have not, so far as I can deduce, ascended to any higher planes of existence.

I guess I'm unfamiliar with this non-literal aspect to the philosopher's stone.

I'm missing the allusions you are making. To ascend to higher planes of existence there's no need for AGI; some acid will do.


Do you have a reference on the brain not learning with backpropagation? I'd like to learn more.


https://arxiv.org/abs/1502.04156

> Towards Biologically Plausible Deep Learning

> Neuroscientists have long criticised deep learning algorithms as incompatible with current knowledge of neurobiology. We explore more biologically plausible versions of deep representation learning, focusing here mostly on unsupervised learning but developing a learning mechanism that could account for supervised, unsupervised and reinforcement learning. The starting point is that the basic learning rule believed to govern synaptic weight updates (Spike-Timing-Dependent Plasticity) arises out of a simple update rule that makes a lot of sense from a machine learning point of view and can be interpreted as gradient descent on some objective function so long as the neuronal dynamics push firing rates towards better values of the objective function (be it supervised, unsupervised, or reward-driven). The second main idea is that this corresponds to a form of the variational EM algorithm, i.e., with approximate rather than exact posteriors, implemented by neural dynamics. Another contribution of this paper is that the gradients required for updating the hidden states in the above variational interpretation can be estimated using an approximation that only requires propagating activations forward and backward, with pairs of layers learning to form a denoising auto-encoder. Finally, we extend the theory about the probabilistic interpretation of auto-encoders to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and we validate all these ideas on generative learning tasks.


There are plenty of people who worked on AI (as graduate students, as ML researchers at hot startups for self-driving cars, as pure researchers or supporting engineers at Google or Facebook, and many other places), and then left because once they saw how limited the research was they lost hope it was going to happen before they died.

Also while I fully understand and appreciate the necessity of OpenAI abandoning the 'open' part, it says a lot about who is going to benefit from this technology when you have investors who want to make money. It's just ironically poetic in this instance.


I honestly don't see any possibility of truly "wide-spread benefits for humanity" if AGI is achieved anytime soon. The current state of humanity, in how we treat each other and in anything akin to species-level awareness and collaboration, is only barely better in recent history than in the dark ages. If a group of people gets access to an AGI, I think it will very quickly result in a somewhat wider group than that having their lives prolonged, with no disease, no lack of resources, and practically infinite wealth, and everyone else eventually being either gotten rid of or allowed to live in far-away slums and left to die off.


> I think the parent has made their position pretty clear through their employment choices,

Their employer has just received a 1 billion dollar cash investment to futz around with computers. I don't think the employment choice is some sort of personal sacrifice here.

The level of skin here is a cushy guaranteed job for years to come until the next AI-winter hits, likely set into motion by such very claims of "AGI" being near or feasible.

"AGI" is good marketing for getting money to research real AI, ostensibly on the path to "AGI", but if one drives it too far, one might up retarding the whole field as the hype winds down (again).


That's true, but at the same time I'm not trying to be in AI because it's such a specialist role, and it may or may not be a fad - one the employment market could overfill with new grads holding a new degree in "ML", tanking salaries. But I think the internet will be here for a while.


Agreed. By the way, does anyone want to try my new VR tech? It will change everything forever!


>OpenAI is not actually building AGI. Maybe it hopes that the things it is working on could be the path to an eventual AGI. OpenAI knows this, as does Microsoft.

Yes, they are? They are making the breakthroughs/incremental advances required to get there, and building components along the way. It would be like saying Henry Ford isn't building a vehicle, he's just building a wheel, and a tire, and an engine, etc...


> They are making the breakthroughs/incremental advances required to get there

They don't know that. We have no idea what's required to achieve AI. Now, I don't know how long before Ford actually built cars he started saying he was building cars, but if Wikipedia is to be believed, it could not have been more than three or four years. Also, when he started building cars, he pretty much knew what was required to build them. This is not the case for AI.


> We have no idea what's required to achieve AI

Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain...At this point we still need 10x+ improvements in a lot of areas, but it's pretty clear what we need to do.

If you can process around 100 petabytes per second (1 Google Index of data per second), you could fully simulate a human being, including their brain. We're still a little way from that, but it's pretty clear we'll get there (barring the usual disclaimers about an asteroid, alien invasion, etc.).
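For what it's worth, a naive synaptic-event count lands in the same ballpark as that figure; every constant below is a rough, contested assumption:

    # Back-of-envelope only; all constants are rough assumptions.
    neurons = 8.6e10           # ~86 billion neurons in a human brain
    synapses_per_neuron = 1e4  # order-of-magnitude guess
    mean_rate_hz = 10          # average firing rate, very rough
    bytes_per_event = 10       # assumed cost of representing one synaptic event

    pb_per_sec = neurons * synapses_per_neuron * mean_rate_hz * bytes_per_event / 1e15
    print(f"{pb_per_sec:.0f} PB/s")  # ~86 PB/s, the same order as 100 PB/s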

Source: I work in medical research, doing deep learning, and do research on programming languages and deep learning for program synthesis.


> Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain...At this point we still need 10x+ improvements in a lot of areas, but it's pretty clear what we need to do.

This is absurd. How much data? How much training? What kind of training? How much better do the algorithms need to be? How do you define better? Also we literally don't even know how our brains work, so we don't know how "actual" intelligence works, but you're saying we have a clear road map for simulating it?

Your entire argument distills down to "we just need to do the same things, but better." And even that statement might be wrong! What if standard silicon is fundamentally unsuited for AGI, and we need to overhaul our computing platforms to use more analog electronics like memristors? What if everything we think we know about AI algorithms ends up being a dead end and we've already reached the asymptote?

I'm not saying AI research is bad. I'm saying it is absolutely unknown by ANYONE what it will take to achieve AI. That's why it's pure research instead of engineering.


> Yes, we do. Lots of data, lots of training, better algorithms, more understanding of the brain..

So to build AI all that remains is to understand how it could work.

> but it's pretty clear what we need to do

It isn't (unless by "clear" you mean as clear as in your statement above). I've been following some of the more theoretical papers in the field, and we're barely even at the theory forming stage.

> but it's pretty clear we'll get there.

First of all, I don't doubt we'll get there eventually. Second, I'm not sure simulating a human entirely falls under the category of "artificial". After all, to be useful such a mechanism would need to outperform humans in some way, and we don't even know whether that's possible even in principle using the same mechanism as the brain's.


> I've been following some of the more theoretical papers in the field, and we're barely even at the theory forming stage.

I read those papers too. And I write code and train models day in and day out. I could get very specific on what needs to be done, but that's what we do at our job. If you're curious, I'd say join the field.

I agree with you in that I don't think for a second anyone can make an accurate prediction of when we will achieve AGI, but I have no doubt that it will come relatively soon, and that OpenAI will likely be one of the leaders, if not the leader, in creating it.


I've been doing research in the DL field for the last 6 years (I just presented my latest paper at IJCNN last week), and I can say with confidence we have no clue how to get to AGI. We don't even know how DL works at a fundamental level. More importantly, we don't know how the brain works. So I agree with pron that your "relatively soon" is just as likely to be 10 as 100 years from now.


I could explain it to you in an afternoon. But I’m not going to do it online, because then you have a thousand people call you “delusional”, because you simply are stating that exponential processes are going to continue. For some reason, many people who think themselves rational and scientific believe that things that have been going exponentially are suddenly going to go linear. To me, that is delusional.


> because you simply are stating that exponential processes are going to continue.

Exponential process continuing doesn't imply "we're going to get there soon" in any way, shape or form. The desired goal can still be arbitrarily far.


Explain what?


How to get to AGI.


If you know how to get there why don’t you build it?


1) Indeed we are doing a few of the things on the checklist to build AGI.

2) Our focus is on helping improve medical and clinical science and cancer tooling first.

3) If we needed AGI to cure cancer, perhaps we'd be working directly on AGI. If anyone thinks this is the case, please let me know, as at the moment I don't think it is.


You don’t think AGI would dramatically speed up cancer research (or any other research)?


Of course I do, but my back of the envelope guess is there's a 30% shot we can cure cancer in 15 years without AGI, and a 1% shot we can reach AGI in 15 years. I think AGI is cool but I'm much more concerned about helping people with cancer.


These are all assumptions, and there is a lot of disagreement in the academic community around it.

Humans don't seem to need anywhere near the same level of data or training that our current models need. That alone is a sign that deep learning may not be enough. The focus on deep learning research has a lot of useful benefits, so I'm not discounting that, but there are a decent number of smart people who don't believe it's going to lead us to AGI.

Source: I also work in medical research, and am doing deep learning- and I've worked for a company that's focused on AGI, and I've worked with several of the OpenAI researchers.


> Humans don't seem to need anywhere near the same level of data or training that our current models need.

I find this to be a common misunderstanding. If I show you one Stirch Wrench, and you've never seen one before, you learn instantly, and perhaps for the rest of your life you'll know what a Stirch Wrench is. The problem is that I didn't really show you one example. You saw perhaps millions of examples (your conscious process filters them out, but think of the slight shaking of your head, the constant pulsing of the light sources around you, etc., as augmenting that one image with many examples). I think humans are in fact training on millions of examples; we just don't notice.
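Incidentally, this "one example is secretly many" effect is exactly what ML practitioners reproduce on purpose as data augmentation. A toy sketch, with the image and perturbation ranges made up for illustration:

    import numpy as np

    # Turn one image into many training examples by mimicking head motion
    # and lighting changes: small shifts, mirrored views, brightness jitter.
    rng = np.random.default_rng(0)

    def augment(img, n=1000):
        for _ in range(n):
            dy, dx = rng.integers(-3, 4, size=2)       # slight "head shake"
            out = np.roll(img, (dy, dx), axis=(0, 1))  # translated view
            if rng.random() < 0.5:
                out = out[:, ::-1]                     # mirrored viewpoint
            yield out * rng.uniform(0.8, 1.2)          # pulsing light source

    one_photo = rng.random((64, 64))      # stand-in for the single image shown
    print(len(list(augment(one_photo))))  # 1000 "examples" from one example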

> That alone is a sign that deep learning may not be enough.

I 100% agree with that. It's going to take improvements in lots of areas, many unexpected, but I think the deep learning approach is the "wings" that will be near the core.


I think what you're terming a misunderstanding is actually fairly well known, but it doesn't account for the magnitude of the situation.

Here's a great article about a paper showing that humans' prior knowledge does help with learning new tasks: https://www.technologyreview.com/s/610434/why-humans-learn-f...

However, that doesn't account for how quickly toddlers learn a variety of things from a small amount of information. Even more important, you can also just look at things like AlphaGo - they train on more examples than could be accumulated in a hundred human lifetimes.

For these reasons I don't believe "more data" and "more training" is the answer. We're going to need to do a lot more work figuring out how humans manage recall, how we link together all the data, and I would be surprised if this didn't involve finding out that our brain processes things in ways that are far different than our current deep neural nets. I don't believe incrementalism is going to get us to AGI.


I’m always puzzled at this idea that humans, at whatever age, are learning things with a small amount of information. The full sensory bandwidth of a baby from pregnancy to toddlerhood seems huge to me. I suspect that helps, as does the millions of years it took to create the hardware it all runs on.


I don't believe incrementalism will get us there either. We need many more 10x+ advances, but I think it's relatively clear where those advances need to be. I think simply by making 10x advances in maybe 100 or 1k domains we'll get there. Neuralink, for example, just announced many 10x+ advances, such as in the number of electrodes you can put in the brain. Our lab is working on a number of things that will also be 10x advances in various subdomains.

Lots of advances in many fields will lead to something greater than the sum of their parts.

Edit: p.s. I like your comment about toddlers. As a first-time father of a 6-month-old, it's been very intellectually interesting watching her learn, in addition to her just being the greatest bundle of joy ever :)


I think that the lack of a hundred or a thousand 10x advances (you may be more pessimistic than me) does not merit calling your work pre-AGI.


> doing deep learning, and do research on programming languages and deep learning for program synthesis.

That sounds fascinating. Could you link to some relevant stuff about languages and deep learning for program synthesis? I'd love to read more about this.


Sure! Shoot me an email to remind me


To my knowledge Henry Ford didn’t start off selling wagon wheels.

Also when he started there were working automobiles already.

The fact that no one knows how to make an AGI doesn't make it a bad goal. But OP is right: if you think you know the timeframe it will arrive in, you have no idea what kind of problem you're dealing with.


Allow me to repost Altman's wager:

- If OpenAI does not achieve AGI, and you invested in it, you lose some finite money (or not, depending on the value of their other R&D)

- If OpenAI does not achieve AGI, and you did not invest in it, you saved some finite money, which you could invest elsewhere for finite returns

- If OpenAI achieves AGI and you invested in it, you get infinite returns, because AGI will capture all economic value

- If OpenAI achieves AGI and you did not invest in it, you get negative infinite returns, because all other economic value is obliterated by AGI

Therefore, one must invest (or in this case, "work on the most important problem of our time").

(And yes, this is tongue-in-cheek.)
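Spelled out, the structure is just Pascal's wager, and a couple of lines of code show why it short-circuits: once an infinite payoff enters an expected value, the probability stops mattering (the probability and finite stakes below are placeholders):

    # Any nonzero probability of an infinite payoff swamps every finite term.
    p = 1e-9  # pick any nonzero probability that OpenAI achieves AGI
    ev_invest = p * float("inf") + (1 - p) * -1e9   # -> inf
    ev_abstain = p * float("-inf") + (1 - p) * 1e9  # -> -inf
    print(ev_invest > ev_abstain)  # True for every p > 0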


This does not presuppose any kind of precise definition of infinity.


I think infinity in the gp comment could well be defined as, "the new AGI regime will or won't obliterate me." The gp comment is just Pascal's Wager, with AGI taking the part of God, and "infinite returns" taking the part of an eternity in Heaven or Hell.


That was seven decades ago, and we are not even close to achieving insect-level intelligence.

[citation needed]

I guess this depends on what "close" is. For something as blue sky as AGI, let me propose the following definition of "close:" X is "close" if there's over a 50% chance of it being achievable in the next 10 years if someone gave $10 billion 2019 US dollars to do it.

I think this is a fair metric for "close" for a blue-sky goal which has the potential to completely change human history and society. It's comparable to landing someone on the moon, for instance. Now, let's pick the insect with the simplest behavior. Fleas and ticks are pretty stupid, as far as insects go. I think we're "close" to simulating that level of behavior. Of course, that's straw-manning, not steel-manning. If we pick the smartest insects, like jumping spiders and Tarantula Hawks, we're arguably not "close" by the above metric. Simulating a more capable insect brain of a million neurons is not an insignificant cost, and training one through simulation would multiply the computing requirements many times that. However, there are evidently systems which are capable of simulating 100 times that number of neurons:

https://www.scientificamerican.com/article/a-new-supercomput...

So I would say, we're arguably not "close." However, we're not that far off from "close."
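For scale, here's a back-of-envelope on the raw compute for a million-neuron point-model brain (a sketch; every number below is an assumption for illustration, not a measurement):

    neurons = 1e6              # assumed insect-scale brain
    synapses_per_neuron = 1e3  # assumed average fan-in
    updates_per_sec = 1e3      # assumed 1 kHz simulation step
    flops_per_synapse = 2      # one multiply-accumulate per synapse per step

    flops = neurons * synapses_per_neuron * updates_per_sec * flops_per_synapse
    print(f"{flops:.0e} FLOP/s")  # ~2e12, a few TFLOPS (roughly one modern GPU)

Training such a network through simulation, as noted above, would multiply this many times over.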


For a comment this precise I'm surprised you've mistaken spiders for insects :) Anyway, I think that "if you gave us $10B then in ten years we have even odds of producing something as smart as a jumping spider" does make for less inspirational copy than "[we're] building artificial general intelligence with widely distributed economic benefits."


> For a comment this precise I'm surprised you've mistaken spiders for insects :)

True. They're fellow arthropods, and have similar levels of nervous complexity. (BTW, are you by any chance confusing Tarantula Hawks with spiders?)

> does make for less inspirational copy

The levels of inspiration in the copy and generalizing across the phylum Arthropoda aside, are you effectively conceding that we're close to AGI at insect levels?


> are you effectively conceding that we're close to AGI at insect levels?

By "we are not even close to achieving insect-level intelligence" I think I meant that what we have now is not close in intelligence (whatever that means) to insects. I don't know if we have a 50% of getting there in a decade, but I certainly wouldn't conclusively say that "we are not even close" to that. I mostly regret having chosen bikes rather than electric scooters for my original comment. I think that sounds funnier.


By "we are not even close to achieving insect-level intelligence" I think I meant that what we have now is not close in intelligence (whatever that means) to insects.

Some insects are pretty stupid! Fleas and ticks have a good and highly adapted repertoire of behaviors, but for the most part, as far as we know, most individual behaviors are fairly simple.

> I mostly regret having chosen bikes rather than electric scooters for my original comment.

Here's where your analogy falls down. We don't even have working examples of a complete warp drive, or anything like it. On the other hand, we don't have any commercial-airliner-sized beamed-power electric jets, but we have smaller conceptual models of the involved devices which demonstrate the principles. This is why I'd say we're "close to close" to insect-level intelligence. 10 years and $10B would get us to the flea level. I think that's "close" in the way that airliner-sized beamed-power electric jets are close.


I think my point was lost because it's my pre-Primetime Emmy material.


I think your point was lost because there's some scaling problems in the mental models used to formulate it.


> AI (or AGI, as you call it)

AI and AGI may have meant the same thing a long time ago, but the term "AI" has been used almost ubiquitously to represent things that are not AGI for so long now, that I don't think the terms are interchangeable any longer.


>we are not even close to achieving insect-level intelligence.

Is this true? Is there an insect Turing test?


Compare the most advanced self-driving car to the simplest insect and you should immediately realize how far we are from insect-level AI.


I don't see how it could be. What can insect brains do that we couldn't get AI to do?


Pretty much everything that insects do is beyond our current AI and engineering tech. Ignoring the "engineering" feats that biological beings perform, such as replication, respiration and turning plants and other creatures into their own energy source, their behaviour is very sophisticated. Imagine programming a drone to perform the work of a foraging bee, using a computer that fits into a brain the size of a grain of rice. It can manage advanced flight manoeuvres, navigation to and from the hive, finding pollen, harvesting it, dodging predators and no doubt a dozen skills I can't even imagine.


Aside from the miniaturization, I'd be surprised if we couldn't make an exact simulacrum of a honey bee in software today, to the limits of our understanding of honey bees.

As with AI... a system can be simulated to a given level of fidelity without necessarily simulating the entire original underlying system.


This doesn't necessarily say much about the state of our AI expertise, but our understanding of honey bees is an insufficient basis for the construction of anything that would survive or be an effective member of a hive. Just a week or two ago on HN there was an article about how scientists have only just now acquired a reasonably complete understanding of the waggle language they use to communicate with one another. (https://www.youtube.com/watch?v=-7ijI-g4jHg)

Perhaps more relevantly, an automaton that could observe such a waggle dance using computer vision and then navigate to the food source described by the waggle seems to me to strain the bounds of our current capabilities, or maybe even to surpass them by an order of magnitude.


Bees also have sophisticated communication skills to tell other bees where to find food.


In terms of intelligence, there isn’t. What prevents us from actually building an uber-insect is miniaturization, self-sustaining energy production of some kind, and reproduction in an artificially built system. I guess it would be possible to demonstrate insect-level intelligence by actually replacing an insect brain with an artificial one.


Your guess would be wrong. Our actual level of AGI development is maybe more on the level of a flatworm. Complex, social insects like bees are still far beyond our ability to simulate.


Controllably fly in strong wind using very primitive sensors.


What can a modern F1 tire do that we couldn't do with a 500 BC wooden wheel?


How do you know we are not close to AGI? Because we don't know how to create AGI, we cannot know whether we are close or not. We can say artificial neural networks are not the way to go because they are not like real neurons, but we know very little about neurons, so it's possible that artificial neural networks are actually the way to achieve intelligence. The topic is so complex, and we know so little, that any strong claim is very likely to be wrong.


I believe your opinion aligns pretty well with a growing number of researchers who see in investments like this exactly the scenario of the warp drive you described.

In your opinion, what should computer scientists be focusing on in order to achieve more advanced AI systems? I'm thinking of things such as reasoning, causality, embodied cognition, goal creation, etc.

And this is without even delving into the ethics aspects of (some instances of) AI research.


I can't find any results for either Norbert Wiener or Alan Turing saying those things - do you have a source?


That's a good question. About a year and a half ago I compiled a large anthology of the history of logic and computation, and I remembered coming across that during my research. What I've been able to find now is the following section from Hodges's Turing biography (pp. 507-8 in the edition I have):

> Wiener regarded Alan as a cybernetician, and indeed ‘cybernetics’ came close to giving a name to the range of concerns that had long gripped him, which the war had given him an opportunity to develop, and which did not fit into any existing academic category. In spring 1947... Wiener had been able to ‘talk over the fundamental ideas of cybernetics with Mr Turing,’ as he explained in the introduction to his book... Wiener had an empire-building tendency which rendered almost every department of human endeavour into a branch of cybernetics... Wiener delivered with awesome solemnity some pretty transient suggestions, to the general effect that solutions to fundamental problems in psychology lay just around the corner, rather than putting them at least fifty years in the future. Thus in Cybernetics it was seriously suggested that McCulloch and Pitts had solved the problem of how the brain performed visual pattern recognition. The cybernetic movement was rather liable to such over-optimistic stabs in the dark.

So if this passage is indeed the source of my recollection, while very poor and perhaps exaggerated, I think it's pretty true to the spirit...


Insects are super predictable. They almost always act identically in response to the same stimuli, which is why cockroaches will always eat poisoned bait if it's within a foot of them, no matter the circumstance, while rats are wily.


Guardrails in terms of policy, if not technical details, are still valuable.

The thing is, there are actually lots of reasons to think AGI cannot be constrained in this way. OpenAI researchers know this.

So that means the promise and the charter are irrelevant: OpenAI will never release a general AI.

But in the meantime, deep learning is still reaping rewards. Every day it's being applied to something new and solving real, tangible problems. There's money to be made here, and that is what OpenAI seems to really be doing. Being philosophical and "on top" of the futuristic moral dilemmas, whatever, is just marketing. And in the unlikely event that an AGI is created that can be tamed, great for OpenAI! If an AGI is created that cannot be tamed, what then? If it's really worth a trillion dollars, is it really just buried, or will the charter simply be rewritten?

You know, this reminds me a lot of all the great physicists working on the atom bomb, thinking it was never going to be used.


> and we are not even close to achieving insect-level intelligence.

Aren't we close to this? Most insects only have a few million neurons in their central nervous system, so we can model their intelligence in real time at least. Maybe we still lack the tools for training such networks into useful configurations?


Yes, if you assume a technical model in which each neuron has only one scalar output (a bfloat16, say), then we could simulate insect brains right now. But the technical neuron model of a sum of inputs plus a sigmoid activation function is only an approximation.

Neurons communicate with each other with a multitude of neurotransmitters and receptors [1]. As a cell, each neuron is a complex organism of its own that undergoes transcriptomic and metabolic changes. We aren't even close to simulating all protein interactions in a single cell yet, let alone in millions of them.

Of course, you could say that a full protein simulation of an entire brain is not necessary if we can build an accurate enough technical model of a single neuron. In fact, we already have to apply a model of how we believe proteins behave, as "properly" simulating the interactions of two proteins (or one with itself) with lattice QCD approaches is beyond our computational capabilities. For protein interactions we have pretty good models already. But finding a model of all types of neurons in insect brains is, right now, an open, unsolved challenge.

[1] https://en.wikipedia.org/wiki/Neurotransmitter#List_of_neuro...
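For concreteness, the "sum of inputs plus sigmoid" point-neuron model being criticized is just this (a minimal sketch):

    import numpy as np

    def point_neuron(inputs, weights, bias):
        # Rate-model approximation: weighted sum, then sigmoid. Everything a
        # real neuron does with neurotransmitters, receptors and its own
        # metabolism is collapsed into these few numbers.
        z = np.dot(weights, inputs) + bias
        return 1.0 / (1.0 + np.exp(-z))

    print(point_neuron(np.array([0.2, -1.0, 0.5]),
                       np.array([1.0, 0.3, -0.7]), 0.1))  # ~0.41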


> we could simulate insect brains right now

AFAICT this suggests that we have the computational power, but wouldn't it also be a significant challenge to create an accurate model for the brain simulation?


Lattice QCD is used for sub-nuclear simulations; proteins are studied with much more tractable methods based on regular quantum mechanics.


Yes, that's my point: you don't need to simulate a protein with that tool because we have good enough models of higher-level structures like atoms. And similarly, we might find models for neurons that allow us to avoid full emulation of all protein interactions. We figured out how atoms work before we figured out how nuclei work, but with neurons it's the opposite: we know, or can figure out, how the parts (proteins) of the machine work, but not how the entire machine works.


Once we know how a neuron works, ask again. I am not sure how this detail keeps getting glossed over.


You don't need planes to flap their wings in order to fly.


But you do need to understand that they generate lift, and be able to mathematically describe something that generates lift. The Wright brothers wrote to the Smithsonian in 1899 and got back, among other things, workable equations for lift and drag.

I think people think backpropagation is the metaphorical lift equation here, and that we just need a "manufacturing" advancement (i.e., more compute and techniques for using it). We're close to that (I personally feel, with admittedly poor evidence) but definitely not there yet (as evidenced by nobody having published it). We cannot describe what is happening with modern architectures as fully as a lift equation predicts fixed-wing flight, and so it is largely intuition plus trial and error, which is a slow, unreliable way to make progress.
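For reference, the lift equation of that era fits in a few lines. A sketch (the coefficient values follow the standard historical account, but treat them as illustrative):

    # Early-1900s form: L = k * S * V^2 * Cl
    def lift_lbs(k, wing_area_sqft, airspeed_mph, cl):
        return k * wing_area_sqft * airspeed_mph ** 2 * cl

    # The received Smeaton coefficient was ~0.005; the Wrights' own wind-tunnel
    # work pointed to a smaller value (~0.0033), which fixed their gliders' lift.
    print(lift_lbs(k=0.0033, wing_area_sqft=500, airspeed_mph=25, cl=0.5))  # ~516 lbs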


My point is that while the brain and neurons are very complex and inherently confusing, there are billions of lifeforms that operate on this architecture and do not display sentience or intelligence.

Secondarily, just because neurons are complex on a technical level, it does not mean that they must be complex on a logical level.

For example, in computers, if you look at the CPU structure, on a low level you have quantum effects and tunneling and very insane stuff, but on a logical level you are dealing with very trivial Boolean logic concepts.

I would not be surprised in the slightest if copying and reverse-engineering neurons per se turned out not to be a necessary or defining aspect of anything related to AGI.


Yeah, but we didn't need to fully understand how animal wings actually work; we just needed to understand what they do (generate lift). Similarly, I don't understand the focus in this conversation on fully understanding the protein interactions that make neurons work. We just need to understand what neurons do. And I thought what they do is actually pretty simple due to the "all or nothing" principle. https://en.wikipedia.org/wiki/All-or-none_law
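In code, the all-or-none law is about the crudest neuron model there is (a sketch):

    def all_or_none(inputs, weights, threshold):
        # The neuron either fires a full spike or stays silent; the spike's
        # amplitude carries no information, only its timing/rate does.
        drive = sum(w * x for w, x in zip(weights, inputs))
        return 1 if drive >= threshold else 0

    print(all_or_none([1, 0, 1], [0.6, 0.9, 0.2], threshold=0.7))  # 1 (fires)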


That’s pretty far from “when you do this, you get the generalizable thought required for AGI”. The lift equation said “when you do this, this object moves upward against the air”, which was the goal of flight; for AGI we have “when you do this, the loss goes down on this task”. We are missing so many pieces between that and the concept of AGI.

People think maybe the missing pieces might be in the other things we don’t understand about the brain. It makes sense: it does what we want, so the answer must be in there somehow. I agree we don’t need to perfectly understand it; it just seems like a good place to keep looking for those missing pieces.


We are, and in a sense we know how they work: it's called swarm intelligence, which does not even require neural nets to begin with.

OP probably just wanted to downplay the current state of AI.


We still cannot convincingly model the behaviour of even the simplest individual organisms whose neural circuitry we know in minute detail.


What do you mean by "model behavior"? We have AI systems that can learn walking, running and other behavior with just trial and error, I would call that simple behavior.

Now here's a more advanced example that teaches a virtual character how to flex in the gym: https://www.youtube.com/watch?v=kie4wjB1MCw

That's a bit more advanced than simple walking.

Here's an AI deployed to a real robot "crab":

https://www.youtube.com/watch?v=UMSNBLAfC7o

How about virtual characters learning to cooperate?

https://www.youtube.com/watch?v=LmYKfU5O_NA
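To make "trial and error" concrete, here is a self-contained toy sketch of random hill-climbing on an invented stand-in task (no real benchmark, robot or library involved):

    import random

    def reward(policy):
        # Pretend task: the closer the 4 policy weights are to a hidden
        # "good gait" vector, the higher the reward.
        target = [0.5, -0.2, 0.8, 0.1]
        return -sum((p - t) ** 2 for p, t in zip(policy, target))

    policy = [0.0] * 4
    best = reward(policy)
    for step in range(1000):
        # Perturb the current policy; keep the change only if it helps.
        candidate = [p + random.gauss(0, 0.1) for p in policy]
        r = reward(candidate)
        if r > best:
            policy, best = candidate, r

    print(best)  # climbs toward 0 as the policy converges on the hidden target

The real systems in those videos use far more sophisticated optimizers, but the loop is the same shape: perturb, evaluate, keep what works.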


"We have AI systems that can learn walking, running and other [...]"

In one of your examples, all of which are narrow AI, we see a mechanical crab powered by ML that has become specialized in walking with a broken limb, which is not even close to what we need if we aim for AGI. For AGI we don't need agents that mimic simple behavior. In my opinion, _mimicking_ behavior will not lead to AGI.

What _will_ lead to AGI? No one knows.


macleginn's complaint was that we haven't even modelled simple behavior, and I brought these narrow AI examples as a counterargument, since they demonstrate that we can, even complex ones. Domain-specific? Yeah, bummer.

Nowhere have I stated this is the clear path to AGI, and you are right, we are missing key building blocks. But I feel like there's too much skepticism against this field while the advancements are not appreciated enough.

I don't know either what will lead there, but I see more and more examples of different networks being combined to achieve more than they are capable of individually.


> macleginn's complaint was that we haven't even modelled simple behavior

No, the complaint was about modelling the behavior of simple organisms.

Certainly we can model some of their behaviors, many of which are highly stereotyped. But the real fly (say) doesn't only walk/fly/scratch/etc, it also decides when to do all of these things. It has ways to decide what search pattern to fly given confusing scents of food nearby. It has ways to judge the fitness of a potential mate, and ways to try to fool potential mates. Our simulations of these things are, I think, really terrible.


I linked to modeled organisms. I always feel the HN crowd expects an academic level of precision in discussions, but that kills the regular discussions I would have at dinner tables with friends; I wish it were a more casual place. Yes, I meant "behavior of simple organisms" :)

Since everything here is loosely defined, I feel it's somewhat pointless to discuss AI, but it's still an intriguing topic. If you look at those insects, they tend to follow Brownian motion in 3D, get food and get confused by light; we can get an accurate model of them and more [0].

The key word here is modelling, not replication. Simulations are just that: simulations. Given current examples of what's already possible, someone who wanted to could model a detailed 3D environment with physics, scents and food for our little AI fly.

[0] https://www.techradar.com/news/ai-fly-by-artificial-intellig...

Is that a terrible attempt?


We can model some aspects of insect behavior. The simulation even looks convincing at first glance (just as simple "AI" looks convincing with a superficial examination of a conversation or text-generation). But we have not been able to fully model the behavior of, say, a bee (which may be enough to solve self-driving cars and then some).


Exactly.

> they tend to follow Brownian motion in 3D

Well, their entire neural system exists to make deviations from Brownian motion. That's the whole point of being an animal and not a plant. And doing it well is very, very subtle.

First steps towards modelling such behavior can be super-interesting science, not a terrible use of time at all. They can capture a lot of truth about how it works. But as with self-driving cars, the thing that kills you is usually a weird edge case, not the basic thing.
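A toy contrast of the two regimes (a sketch; the "scent" bias and all numbers are invented):

    import random

    def walk(bias, steps=1000, target=50.0):
        # 1-D stand-in: Brownian noise, plus an optional pull toward a
        # "scent" source at `target`. The pull is the part that needs a brain.
        x = 0.0
        for _ in range(steps):
            x += random.gauss(0, 1) + bias * (1 if x < target else -1)
        return x

    random.seed(0)
    print(walk(bias=0.0))  # pure Brownian: wanders, expected displacement 0
    print(walk(bias=0.2))  # biased: ends up hovering near the target at 50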


Terrible? Not at all.

I apologize if you feel I was the one who killed the discussion you wanted to have around AI.

I'm one of those dreamers who think AGI is, or at least should be, possible soon, through means we have not yet discovered but will. I base that on absolutely nothing, I suppose, other than the fact that we have lots of "bright/smart/crazy" devs working on it. It's my own personal "believie", as Louis CK would say about things we believe in but cannot or care not to prove.

Just like you, I'm looking at organisms much simpler than us as a way forward. Many specialized neural networks do not make up AGI, is what I think. Is it the organic, human neuron we should model? I don't necessarily think so. Also, robotics + ML is a dead end to me. An amoeba that can evolve into something more complex is perhaps what we should model.


>I can't tell if you're serious, but assuming you are,

I assume he is, given he is Greg Brockman, the CTO and a co-founder. I know Sam Altman is similarly optimistic, having told me on multiple occasions something along the lines of "I can't focus on anything else right now", which in context I very much took as "this presently consumes my waking thoughts and I only have time for it".

This sort of drive is great, but I don't think it necessarily makes it true. Mr. Altman is financially independent; he needn't worry about things like rent or putting food on his table, and I imagine Mr. Brockman is also independently wealthy (or at least has several years of cushion if his OpenAI salary were to suddenly dry up), perhaps not as much, though, given his previous position at Stripe.

These two, and perhaps other members of the team, can be overly optimistic about their passion. Both of them have this view, and they both co-founded OpenAI. This optimism and enthusiasm, and interesting project successes so far, certainly give them steam and attention. But how many aspiring athletes think they're going to get drafted for tens of millions of dollars when in reality they might be lucky to get scouted by a college, or lucky to get drafted to a European or Asian league and not necessarily a major-league US team? How many musicians think they'll get into Juilliard and go on to some top-tier symphony or orchestra, or will be the next Country/Rock/Rap/Pop star that takes the world by force, only to end up playing music with their friends at some dive bar a few times a year, despite their enthusiasm and skill?

I think a major problem OpenAI has, which I've expressed to Altman, is that they suffer from what Silicon Valley in general does. They are myopic; their ranks are composed of people that are 100% behind AI/AGI. They dream about AGI, they want to create AGI, they absolutely think we will have AGI, they want AGI with every fiber of their being. They're high in the sky with apple pie about AGI.

But who's going "hey, wait a minute, guys" and climbing up a ladder to grab them by the cuff of their pants to pull them back down to the floor and tie a tether to their leg? As far as I know, no one under their employ.

I think OpenAI needs to bring in some outsiders: an internal team that plays the role of a sanity check, and probably a board member as well. I think it is very dangerous to only have people working on your project who are overly optimistic. It reminds me somewhat of the movie Sling Blade: a lawnmower is taken to be repaired and the folks don't know why; they present it to Billy Bob Thornton's character, who has some sort of mental deficit; he looks at it briefly and states, "It ain't got no gas". He has a different perspective of the world, he sees things differently, and this allows him to see something that the others overlooked. While gobs of gobbledygook code and maths are a far different thing from a lawnmower not having fuel, I still think there is a danger in having one of the greatest STEM projects mankind has ever attempted staffed only by a bunch of coders, in a field that is effectively new, who largely have the same training and same life experiences.

Here's a portion of what I said to Mr. Altman back in May of this year, and I think it applies more than ever. It isn't exactly related to this comment chain, but maybe posting it here will get it seen by more people at OpenAI:

---

You are aware you guys are in a bubble there. People in the Bay Area are at least peripherally aware of what artificial intelligence presently is and could be. For the bulk of the country, and the majority of the world, people are largely clueless. If you say "artificial intelligence", people either have no idea what you are talking about (even people in their 20s and 30s, which was shocking to me) or something like HAL 9000, Skynet, Colossus: The Forbin Project, etc. comes to mind. I think the industry, and OpenAI especially, are missing out on an opportunity to help educate people on what AI can and will be, and how AI can be benevolent and even beneficial.

OpenAI is missing out on an opportunity here. While the bulk of resources obviously need to go to actually pursuing research, there is so much you could be doing to educate the masses, to generate an interest in the technology, to get more people passionate about/thinking about machine learning, AI and all of the potential applications.

...possible examples given...

You need to demystify AI Sam, you need to engage people outside of CS/Startup culture, engage people other than academics and venture capitalists.

...more examples given...

---

I will point out that in that same exchange I told him I thought raising the billions OpenAI would need was laughable. Well, I'll take a healthy bite out of my hat: they managed to raise a billion from a single source. Bravo.

I had the pleasure of visiting OpenAI towards the end of Spring '18, and certainly from what I saw they are very serious about their goal and aren't joking about believing 100% that AGI is an attainable goal within their reach.

It's also worth noting I applied to OpenAI in the past year, after my visit, for their "Research Assistant, Policy" position, and that I was somewhat miffed by the form rejection, which, from outside of STEM, seems very cold:

>We know that our process is far from perfect, so please take this primarily as a statement that we have limited interview bandwidth, and must make hard choices. We'd welcome another application in no fewer than 12 months - the best way to stand out is to complete a major project or produce an important result in that time

I still haven't a clue what major project or important result I could achieve in researching policy for artificial intelligence, given that:

- Artificial intelligence doesn't exist

- No one has created policy for it outside of science fiction

I may not have been the most qualified, which is fine, as I lacked the 4-year degree they had listed as a requirement, but a human being never once talked to me, never once asked me a question; just a web form and a copy-paste email with my first name inserted.

We don't always need someone with a stack of degrees, who is 100% pro-AI and has programming experience, to help research policy and presumably lay the groundwork for both OpenAI and the industry. I think a team like that should only be 10-20% individuals who are experienced in the field; you need a diverse team, with diverse experience and diverse backgrounds. If an AGI is developed, it won't just serve the programmers of the world, and it won't just have an impact on their lives; STEM folks are far outnumbered by those with no STEM background.

Who is representing the common human in this? Who's asking "are you sure this is a good idea?", "should we really be training it with that data?", "is it really in the best interests of humanity to allow that company/entity to invest, or to license this to these types of causes?"

But hey, what do I know?



Exactly, and if they do create an AGI, it's probably going to be a lot like its creators, as its original parameters were set by them.

Just look at Amazon's warehouse algorithm:

~Biotic unit achieved goal, raise goal~

~Biotic unit achieved new goal, raise goal~

~Biotic unit achieved new new goal, raise goal~

~Biotic unit failed new new goal, replace biotic unit~

~New biotic unit failed new new goal, replace biotic unit~

~New new biotic unit failed new new goal, replace biotic unit~

With Amazon, though, a human can eventually go "wow, we're firing new hires within the first 3 weeks like 97% of the time, and 100% within 6 weeks; erm, let's look at this algorithm".

But if you create an AGI that has the Silicon Valley mindset of "we will do this, because we have to do this" (an exact quote I heard from an individual while in the Bay Area to stop by OpenAI: "We will figure out global warming, because we have to"), then the AGI is probably going to be designed with the "mindset" that "failure is not an option; a solution exists; continue until solution is found", which, uh, could be really bad depending on the problem.

Here's a worst case scenario:

"I am now asking the computer how to solve climate change"

~~beep boop beep boop, beep beep, boooooooop~~ the CO2 emissions are coming from these population centers.

~~boop beep beep beep boop boop boop boop beep boop~~ nuclear winter is defined as: a period of abnormal cold and darkness predicted to follow a nuclear war, caused by a layer of smoke and dust in the atmosphere blocking the sun's rays.

~~boop beep beep, boop~~ Project Plowshare and Nuclear Explosions for the National Economy were projects where the two leading human factions attempted to use nuclear weaponry to extinguish fires releasing excessive carbon dioxide as well as for geoengineering projects. Parameters set, nuclear weapons authorized as non-violent tools.

~~beep beep beep boop boop boop, beep boop, beep, boop, beep~~ I now have control of 93% of known nuclear weapons, killing the process of 987 of the most populous cities will result in sufficient reduction for the other biotic species to begin sequestering more carbon than is produced, fires caused by these detonations should be minimal and smaller yield weapons used as airbursts should be capable of extinguishing them before they can spread. Solution ready. Launching program.

Watch officer at NORAD some time later "Shit, who's launching our nuke?!?"

Someone else at NORAD "they're targeting our own major population centers!"

Somewhere in Russia "Our nuclear weapons are targeting our own cities!"

Somewhere in Pakistan "our nuclear weapons are targeting our own cities!"

somewhere...


> calling it "pre-AGI" pretentious to the level of delusion.

I don't think you know what you are talking about. Do you do Deep Learning? If you are not actively engaged in the field, I wouldn't be so quick to dismiss others who are (especially not others who are at the top of the field).

That being said, you brought up some interesting points, even if I think your overall position is wrong--I think OpenAI is definitely going to hit "pre-AGI" if not AGI, and I do this stuff all day long.


I study ML, and I completely agree with the quoted statement. Deep networks have gotten pretty good at recognizing correlations in data. That's not on the same map as AGI. I don't know what "pre-AGI" means exactly, but I would include things like counterfactual reasoning or the ability to develop and test models of the world, which are far from our AI capabilities so far. (Edit: yes, I am including RL; considering the relative performance of model-based vs. model-free methods, I think this is a fair statement. I don't mean to be pessimistic, just realistic, and to set expectations to avoid more winters.)


To be clear, I don't think Deep Learning = AGI. I think it's just one important piece, but I think we are also making many other rapid advances in relevant areas (Neuralink's 10x+ improvement in electrodes, for one).


I was actively engaged in the field in the late nineties, when AI was also five years around the corner. I’ve mostly lost interest since then, and the disappointment that is deep learning has only dulled my enthusiasm further (not that it hasn’t achieved some cool things, but it’s a long way from where we thought we’d be by now).


> I was actively engaged in the field in the late-nineties

So was Geoffrey Everest Hinton

> I’ve mostly lost interest since then

but he didn't give up.

If you expect someone to just hand us AGI in a nicely wrapped package with a bow, with all the details neatly described, you are absolutely right, that's really far off!

But for the record there are many people actively grinding it out in the field, day in and day out, who don't give up when things get hard.


The kind of language used in this release has actually hurt AI considerably before, so by pointing out that it's delusional I am not giving up; I am helping save AI from the research winter that OpenAI seems to be working on. You're welcome, AI!


Okay, I'll concede your point that perhaps being bold could be bad publicity for the field. I think that's a reasonable position to take. I don't think it is correct, but I think it's reasonable. Even if it were the precursor to a drop in funding, the previous "AI winter" was not so long in comparison to the century-long gaps in the advance of other technologies in history (binary was invented hundreds of years before the computer).

I would definitely not call OpenAI delusional. I would say all OpenAI is being here is "honest".

They are simply stating what the math tells them.

"E pur si muove"


> They are simply stating what the math tells them.

Which math?


He's also telling us we're going in the wrong direction, and have been, with our approach to reinforcement learning. He's not convinced that's how the brain works. In fact, he's convinced it's not.


> > calling it "pre-AGI" pretentious to the level of delusion.

>I don't think you know what you are talking about. Do you do Deep Learning? If you are not actively engaged in the field, I wouldn't be so quick to dismiss others who are (especially not others who are at the top of the field).

I do (or at least I try; I do get paid for my attempts), and I concur with calling it delusion. So does François Chollet. So does Hinton to some degree, and so does the founder of DeepMind (or at least they did in 2017: https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis... ).

I want to like OpenAI; I think they did the right thing with GPT-2, and I give them a lot of credit for publishing things. That being said, I remain skeptical about AGI: highly skeptical about AGI being feasible, or being the thing to worry about. I always make the argument that research toward controlling an AGI (AGI alignment) is either a techified version of research into the problem of good global governance (in which case it is an interesting problem that desperately needs solving), or it is useless (because no matter how nicely you control the AGI, a non-accountable elite within the current system, the less-than-perfectly-aligned government, etc. will strongarm you into giving control to THEM before you come close to deploying it), or it is delusional (because you think you are smart enough to build AGI without these elites finding out AND smart and/or wise enough to do what is best for humanity).


> the less-than-perfectly aligned government etc. will strongarm you into giving control to THEM before you come close to deploying it

and

> because you think you are smart enough to build AGI without these elites finding out AND smart and/or wise enough to do what is best for humanity

Are very good points, and I share those concerns too, and have no good answers. I'm in the pessimist camp when it comes to AGI--I would bet heavily that it's going to happen, but I wouldn't bet a dollar on whether it will end up being good for humanity, as I haven't a clue.



