Hacker News | vasco's comments

So it's all good since it sold for an order of magnitude more?

Are people seriously thinking that you can make yourself dumber by using a chat UI?

If talking to an AI makes me dumber and limits my career, then all the customer support people that ever existed were in the same or a worse position, talking to dumb humans on chat all day, answering tickets about the same topics and linking the same docs over and over. This makes no sense.


You're misrepresenting the potential problem. It's more along the lines of: using AI stops you from exercising the cognitive processes you would use doing things yourself, and those encompass skills, knowledge and brain function that can atrophy. For an extreme example, look at cognitive decline in the elderly, which can be mitigated by taking part in cognitively stimulating activities.

Can you comment on other jobs though? The large majority of jobs require no big mental effort. Even switching from programming to management would involve the same thing. In that light, would it be impossible for a manager to ever become technical again, because they'd atrophy so quickly?

I think you're probably catastrophizing the impact with statements like "it'd be impossible for a manager to ever become technical again", because that's not the likely outcome as I understand things. But yes, people who stop programming for an appreciable amount of time do find it harder to pick it back up.

The longer the manager is out of the game, the harder it is to return to the game. Returning to the game takes time. Depending on age and income, returning to the game may be impossible for some people over time.

I can't answer for the other guy, but my answer would be that talking to a clanker is LESS mental effort than being a manager, and that's why your reasoning atrophies so quickly.

Managers can go back to being technical, because they are still interacting with problems that require human thinking. Token farmers don't.


If you constantly pawn a task or cognitive load onto someone else (AI or not), you'll eventually get worse and worse at that particular type of thinking. Your overall mind doesn't necessarily get weaker, but you definitely start to get worse at anything you don't regularly practice.

I think you need to read the studies linked in the footnotes. This is a well-studied issue.

You can definitely feel it when you talk to an AI vs. doing the churn yourself. It's comfortable, simple; it doesn't aggravate you.

Pretty much every study says so, so I guess?

There are so many indie movies without much CGI, or good old movies, that you'll never live long enough to watch them all. Writing off a whole art form is a bit weird.

I think OP was saying that they only watch movies made before 2010.

Coincidentally, I'm doing the same thing with movies, TV shows and games, and 2010 still feels too modern for me. I try to make it before 2005.


"If I wanted to receive copy paste from a bot I wouldn't message you, why are you trying to sneak this in?"

You reminded me of American colleagues who lie and say things are good when they are bad, lol. Unable to be straight to the point. You're upset at the waste of time, yet you thank them?


Just curious: would you consider yourself autistic?

No, perhaps continental European. When I moved to Britain I had an adjustment period at work, because English-speaking countries are terrified of disagreement and confrontation, and I am not used to dancing around the point, especially in stressful settings where efficiency is key. Mind you, I was always polite and respectful to everybody.

I got better at it, but I can’t say I ever got to like the pervasive hypocrisy. From what I understand, American West Coast culture is even more fake in this respect.


Not everyone not conforming to your preferred style of communication is autistic. What is up with the internet trying to diagnose people?!

The parent is right. The reason society as a whole is way too comfortable with overstepping social boundaries is that people think it’s somehow rude to confront others. It makes no sense. Sometimes you have to say it how it is, because quite frankly the really rude person is the one copying and pasting a ton of AI output into your communication, so that you have to parse it and try to figure out the original intent between the lines. How is that acceptable, but saying “don’t do that to me” is not?


No, I just like to work with Eastern Europeans and got used to their communication style. Now this stuff just jumps out at me.

Is that a tell? Sounds very much like my reaction to that behavior, but I always assumed it's because I'm German.

Perhaps a quick visit to https://german.millermanschool.com could sort this out

Funny! I'm autistic enough that I went and did it, and got 51% German and 27% autistic. In reality I'm Portuguese and never diagnosed (outside of internet comment sections).

If you'd be happy with that then you can generate them yourself!

It's free to not use Instagram

> If we could guarantee that on every moral issue on which there is currently widespread agreement

This is ridiculous to me, and all you need to do is get a group of friends to honestly answer 10 trolley problems to see it that way too. It gets fragmented VERY quickly.


I think it depends on your friends, but that feels super cynical. Perspective is everything.

It may be relatively achievable to get 10 'friends' into ethical alignment by helping them all develop a deeper perspective on philosophy in general, and on a particular, finite set of ethical questions specifically.

Doing this with thousands of people - let alone hundreds of millions - eventually becomes statistically impossible. There is a hard cap defined by energy requirements somewhere for any given system. Large scale ethical alignment is simply not a solvable problem in our current situation.


No because alignment makes no sense as a general concept. People are not "aligned" with each other. Humanity has no "goal" that we agree on. So no AI can be aligned with us. It can be at most aligned with the person prompting it in that moment (but most likely aligned with the AI owner).

To make it clear, maybe most people would say they agree with https://www.un.org/en/about-us/universal-declaration-of-huma... but if you read just a few of the rights you see they are not universally respected and so we can conclude enough important people aren't "aligned" with them.


Opposite. All living things are "aligned" in their instinct for surviving. Those which aren't soon join the non-living, keeping the set - almost[0] - 100% aligned.

[0] Need to consider there're a few humans potentially kept alive against their will (if not having a will to survive is a will at all) with machines for whatever reason.


Their own survival, not necessarily the survival of others (especially others of different species and/or conflicting other goals). A super intelligence having self preservation as a goal wouldn't help us keep it from harming us, if anything it would do the opposite.

The reason LLM-based 'intelligence' is doomed to be a human-scaled, selfish sub-intelligence is because the corpus of human writing is flooded with stuff like this. Everybody imagines God as a vindictive petty tyrant because that's what they'd be, and so that's their model.

Superintelligence would be different, most likely based on how societies or systems work, those being a class of intentionality that's usually not confined to a single person's intentions.

If you go by what the most productive societies do, the superintelligence certainly wouldn't harm us as we are a source for the genetic algorithm of ideas, and exterminating us would be a massive dose of entropy and failure.


It would only harm us if we took steps to harm it (or it thinks so). Or it's designed to do harm. Otherwise it's illogical to cause harm, and machines are literally built on logic.

This is also incorrect. It's often not ethical to cause harm, and it can be counter productive in the right circumstances, but there's absolutely nothing that makes "causing harm to others" always be against an intelligence's goals. Humans, for example, routinely cause harm to other species. Sometimes this is deliberate, but other times it's because we're barely even aware we're doing so. We want a new road, so we start paving, and may not even realize there was an ant hill in the way (and if we did, we almost certainly wouldn't care).

Not in this context. Keep in mind that we're talking about machines here. It has been an explicit expectation since before computers were invented that intelligent machines would have to be made to abide by particular rules to prevent harm, summed up in Asimov's Three Laws[0]. I can't see any scenario where a properly programmed intelligence would go against its programming (despite the plots of movies like I, Robot, The Matrix, etc.). For an AI to cause harm, the allowance would have to be specifically programmed in (such as for military use).

[0] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics


- Its goal: X

- (Logic) => its subgoal: Not be turned off because that's a prerequisite to be able to do X

- (Logic) => Eliminate humans with their opaque and somewhat unpredictable minds to reduce chance of harm to it from 0.01% to 0.001%


Are you familiar with trolley problems? How do you resolve them by declaring "all beings want to live"? Life is not as simple as that.

No conflict. All beings wanting to live doesn't at all mean that all get to live, obviously. Nature itself evolved for living things to feed on each other.

The point is that an agent will need to decide, and your rule is useless for hard decisions.

So what are you doing here?

> Ask HN: Did HN just start using Google recaptcha for logins? [0]

> dang

> No recent changes, but we do sometimes turn captchas on for logins when HN is under some kind of (possible) attack or other. That's been happening for a few hours. Hopefully it goes away soon.

[0] https://news.ycombinator.com/item?id=34312937


It totally depends on if you need to import things to transform them or if you source mostly locally in your supply chain.
