I think this post has some interesting points, but it also misses a few.
First, appropriate tone depends heavily on the product or service in question. A bank or otherwise serious business should probably not be giving messages like "whoops, something went wrong". But an entertainment product could have those sorts of messages, and treat it as part of the overall experience.
Secondly, I'm not a huge fan of error messages that don't give actionable feedback for how to fix the issue. Yes, a lot of users don't need that sort of information, but some sort of error code or technical reference can be handy for more involved support processes.
So, if the product or service is business oriented, maybe have that info in a dropdown box or something, where a support agent can ask the user to find it if an issue keeps occurring. And of course, if the product or service is aimed at technical people (like an open source infrastructure project), maybe skip the casual language and just get to the point.
You mean how you had to give a reason for your rating rather than just choosing 'like' or 'dislike'?
Honestly, I think that more nuanced setup may help limit toxicity quite a lot. If there's no general upvote/downvote option, people might have to actually think about why they like/dislike something rather than treating the system as a "I dislike this because I disagree and everyone should think the same way I do" setup.
It's why I quite like the reaction systems some forum scripts have. Yeah, they're not perfect (many still have like/dislike options by default), but having users give reasons for why they upvote/downvote a post makes things a lot more meaningful. I also quite like how, in some of them, agree and disagree don't actually change how the post appears or count as a rating. They just exist so people can see how many people agree or disagree with something, and that's it.
It also had a design that was offputting to a lot of casual users, which probably kept out folks that didn't really have anything meaningful to say/didn't want to contribute much. Same with Hacker News: the average Joe doesn't find this site all that appealing compared to Twitter/Facebook/Instagram/whatever, so it mostly appeals to more techy, intellectual users than those platforms do.
The issue is that even if you're not writing your own code, you're relying on your CMS or framework, its plugins and imports, any advertising networks you use, etc. not to break your site. Those things already cause server errors or end up being incompatible with other additions when upgraded, so giving them another way to break your entire site just makes such things even more of a hassle.
My theory is that the pandemic and lockdowns had a big effect on this. When those were in play, social media was basically the only way to communicate. People were stuck on these platforms almost 24/7, since there was no legitimate way to meet up with anyone in person.
But this burnt a lot of people out on these services. They grew tired of using social media as their only form of communication, and took the chance to get back into real world activities the minute it became available.
Social media was fun when the amount of time you could spend there was limited by other factors and other alternatives were available, but it got tiresome when you were basically stuck on it all the time.
In addition to that, I think a few other factors to note here might be:
1. More and more people seem to have realised how unhealthy these sites are, and how using them too much destroys your mental health. I suspect at least a few people realised how bad these services were for them, and decided to mostly quit cold turkey.
2. LLMs meant that spammy and rage-bait inducing content flooded them to a ridiculous degree, and drowned out legitimate discussions in favour of automated slop.
3. The ever more extreme political situation in many countries meant that those not looking for a fight were put off from posting on sites like Twitter or Reddit.
For a lot of companies, they'll probably shut down or drastically limit their AI usage due to rising costs. A small or medium sized business dependent on ever-growing AI expenses is in a really bad position, and could well go under.
I heard a few companies ended up going back to hiring actual employees for work that was previously done by LLMs, so there's a chance we could see some more of that too. Might also see a few try to make it work with outdated or local models.
There's a lot of focus on tech projects here, but it's not just vibe-coded projects that are ruining communities now.
No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.
YouTube is arguably even worse, since AI led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw this video about medical misinformation by these 'creators' earlier, and it genuinely made me want to see them crack down on this junk:
And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?
The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...
If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.
I think people like the blog author need to realise that this problem can't be dealt with by content moderation or users trying their best to be honest. You just get a firehose with an on/off switch; you don't get free filtering or moderation with it.
That's an interesting point. A lot of the tech being used for AI right now could definitely be repurposed in future.
That said, I feel like the comparison isn't exactly perfect here. Both AI and crypto do have some valid use cases, even if the majority of interest is completely pointless and makes no sense. While something like NFTs is beyond worthless, the tech behind a blockchain or LLM can be used for a bunch of other purposes.
So, I don't think it's accurate to compare them to Enron. Enron was a scam first and foremost, and delivered nothing. Both crypto and AI are potentially useful technologies pumped up to an absurd degree by a broken market, sorta similar to the dotcom boom in the 90s.
On the one hand, yeah, it's risky that people are relying on a chatbot as if it's an actual doctor, and people might indeed take bad advice from it if they don't realise it's only a fictional character.
At the same time though, this feels kinda like criminalising roleplaying to some extent, and that's not really a direction I'd support. People on an RP forum or Discord server could also pretend to be a doctor in-universe/for the purpose of a story, and people could also ask them about medical issues and get (likely inaccurate) information in return. Should that be illegal? Should it really be illegal for someone to pretend to be part of a licensed profession for the purpose of entertainment?
I guess you could say it should be illegal to make up a license number in a fictional work or RP setting, but even then I feel like people should be able to separate fiction from reality. Entertainment shouldn't be limited because some people might be delusional/might rely on it in place of actual professionals.
I'm no fan of caning or physical punishment for crimes, but isn't that how a lot of bullying ends? The victim snaps, the bully gets beaten up or injured in some way, and the latter finds an easier target to go after?
At the end of the day, a bully picks on those they perceive to not be a threat, whether that's a school bully using physical violence or a copyright/patent troll harassing individual creators and small companies. Being forced to go against someone with more resources or who can inflict serious damage against the aggressor is how a lot of bullies get shut down.
I would suspect that the vast majority of bullying ends when the victim is able to escape from the bully -- by changing schools, etc.
We hear about victims snapping and beating up their bullies because that makes a good story. How about victims who snap but then are beaten up (because the bullies are often bigger and more used to violence) even more? Probably much more common.
That's a fair point. The challenge is that a lot of the time, it's hard to escape in that way. The ideal would be that a bully is expelled or forced to change schools to get their victims away from them, but the system seems very reluctant to do that. Same with letting the victims find a new school to replace the old one.
It works really well for bullying in workplaces and communities though.
And true, the bully might win. But the thing is that it turns the victim from an easy target into a slightly harder one, and a bully may decide it's not worth the hassle/risk when others aren't going to fight back at all. It's like that old joke about outrunning a bear: you're not trying to outrun the bear, you're trying to outrun the people next to you. Or perhaps the old adage about home burglaries. A lock won't stop a determined thief, but they'd usually rather find an easy-to-break-into house than go through the effort of defeating a security system.
The unspoken rule is that the victim must only engage in hand-to-hand combat. They cannot use a weapon in any way. If the victim uses a weapon to defend themselves, they will be in the wrong.
Life is hard for victims. They are often bullied because they are weaker, and the only way out is hand-to-hand combat.