Meta cancels its contract with the outsourcing company it hired to classify smart glasses content, after employees at that company blew the whistle about serious privacy issues with the content they were paid to classify.
How else do you want companies to remove and prevent CSAM? It seems like you must have some human involvement to train and monitor.
It’s a terrible job, I wouldn’t want to do it, but someone needs to. Perhaps one day, AI will be accurate enough to not need it, but even then you need someone to process complaints and waivers (like someone’s home photos being inaccurately flagged).
> How else do you want companies to remove and prevent CSAM?
Different situation.
Facebook has to do CSAM moderation because it's a publishing platform. People will post CSAM on facebook, so they must do moderation.
And "just don't have facebook" isn't a solution because every publication of any sort has to deal with this problem; Any newspaper accepting mail has this problem. (Albeit to a much more scaled down version) People were nailing obscene things to bulletin boards for all recorded history.
---
In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.
The downside would be "worse LLMs" or "LLMs being created later", which is a perfectly acceptable compromise.
---
This is not to say that genuine content flagging firms have no reason to curate such data & build tools to automatically flag content before human moderators have to. (But then they also shouldn't be outsourcing this and traumatizing contract workers for $2-3 an hour)
But OpenAI is not such a firm. It's a general AI company.
There's no dollar amount, but proper support during and after employment is a minimum, and a large paycheque will both offset some of the human cost and make it easier to push people to quit the job, so that they aren't doing it for too long.
The current support systems for police working in this area are already insufficient. Facebook's treatment of their moderation staff is abhorrent. The point of including the pay figure is to further illustrate just how damning this subcontracting practice is.
There is labor that is necessary for our societies to function but is a direct threat to the people doing the work. Someone has to do it, and it should be seen as a great service to society and rewarded accordingly. In a just world, we would pay significantly extra for threats to health that come from work; in the one we currently live in, we use the threat of worse harm instead.
> Someone has to do it, and it should be seen as a great service to society and rewarded accordingly
You are just too privileged to understand people: many people would be glad to do it for minimum wage. I would fight to have that opportunity (I live in western EU).
We have coal miners destroying their bodies and lungs, cobalt mining slavery, cocoa child labour and de facto slavery, sex workers, CPS investigators, first responders, and doctors with high rates of suicide…
Not only is there an acceptable market rate for trauma, it’s sometimes competitive and requires licensing.
There is one difference between first responders/doctors and the other classes (and the moderators under discussion here)
First responders/doctors/CPS investigators see the worst but they also have days where they make a difference. Save a life or multiple lives. I'm sure it's a huge part of what makes the job bearable, and to some meaningful.
I'm not discounting your point about high rates of suicide either. If anything, when you take away any good days, you're left, as a content moderator, with just seeing the worst of the world day in, day out, with nothing to make it meaningful. I'd suggest that's something we as a society should not tolerate as being an acceptable trade for the ability to share cat photos.
>First responders/doctors/CPS investigators see the worst but they also have days where they make a difference. Save a life or multiple lives. I'm sure it's a huge part of what makes the job bearable, and to some meaningful.
You think miners don't make a difference or save lives?
> You think miners don't make a difference or save lives?
Do you think miners mining is saving lives in the same way that doctors saving lives is saving lives?
To continue the parent's point, do you think miners derive a deep or powerful satisfaction from some of their mining work which might offset some of the heavy cost it has on them physically and emotionally?
I think miners save more lives (through the supply of gas, energy, battery materials, pesticides, fertilizers, solar panel minerals, and ultimately electricity, computing materials, etc) than doctors do.
And I think what prevents miners from being able to "derive a deep or powerful satisfaction from some of their mining work which might offset some of the heavy cost it has on them physically and emotionally" is not anything inherent in their work, but people thinking that only direct impact should be prestigious and satisfying, and underappreciating the thankless background work of keeping the lights on.
Same way people sneer at cleaners or teachers, with their meagre salaries and no respect, or at domestic labor.
Emergency Department^ doctors, what do they make? Take the people who have to review the worst humanity has to offer and pay them that. And while we're at it, ambulance personnel should get a huge pay bump. Take it from nurses' pay.
^ I originally said "triage doctors" but I meant the resident ER doc.
OpenAI runs ChatGPT where users submit text and photos and OpenAI generates and sends text and photos back. So users could be submitting CSAM. And yes, OpenAI could be generating CSAM. It's not limited to being a pull operation. What am I missing?
What you're missing is that they're "separate" parts of the business.
The core Facebook product is users' posts. It's not possible to separate the two. Nor can one downscale Facebook in a way that stops the problem; as mentioned above, Facebook has this problem because it's a problem we've had since the medieval days of the town bulletin board.
With OpenAI, the way ChatGPT was built and user submissions are separate things. The GPT models could have been trained without this mess. OpenAI could be more selective in what data it scrapes.
While OpenAI cannot stop users sending god knows what in their prompt text and images, OpenAI can choose to not interact with that data beyond the minimum legal retention, by e.g. not using it for training the next generation of models. This would massively downscale the problem.
AI output is another such problem, where A) maybe this'd be less of a problem if they hadn't recklessly included a bunch of CSAM in the training data by accident, and B) LLMs just aren't the kind of fundamental human right that "having a public opinion" is. It would be fine if they were less good, invented years later, or even not invented at all.
The main counterargument to the latter has been the "But China is inventing evil AI" spiel, which is fairly weak. If China builds an orphaned baby crushing machine, we do not need to build an orphaned baby crushing machine of our own. (And the reality is that China is only chasing AI so aggressively because the west does. They're reasonable people, it would have been entirely possible for both the west and China to make a mutual "no orphan crushing" agreement and just accept slower rollout of technology. This is exactly what has been done with human genetic engineering, and China did in fact enforce these norms.)
> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.
You've just thrown the garbage over your fence.
Instead of OpenAI contracting Sama to classify CSAM, the "Curators" have to.
At the end of the day, someone needs to classify it. If you say the platforms need to, and they miss some, and it ends up in OAI training data, OAI is going to be the entity paying the price.
> In contrast, OpenAI has no such problem. It did not have CSAM pushed onto it, it actively collected such data itself. It could have, at any point before and after, simply stopped scraping all of the web indiscriminately and switched to using more curated sources of scraped data.
This is of course incredibly illegal, but megacorps (by valuation) and oligarchy members are above the law, so who cares. I assume there could be a regulatory framework which makes this legal for an extremely specific purpose, but there is zero chance that OpenAI was part of this/abiding by this in 2022, absolutely none.
CSAM exists on social media because the platforms are so large that it's not possible to moderate them effectively. To me this is a no-go. If a business is so large that it cannot respect laws, it needs to be shut down.
The correct way to organize social media is in a federated way. Each server only holds on average a few hundred or a few thousand people. Server moderators should be legally responsible for content on their server. CSAM on social media will be 100x suppressed because banning people is way easier on small servers.
Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies. The abusers just have to find the weakest link, and that weakest link will have fewer resources than multi trillion dollar companies. You would also likely not hear many news stories about it, because they won't have the expertise to even detect it.
That's a tradeoff you can choose to make, but you need to enter into it with open eyes.
> Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies.
It doesn't matter how many are shared but how many are viewed. On a small server, community policing works just fine, bad actors are easier and faster to block, and to top it off, the smaller reach of each server makes it unprofitable to target multiple servers, fish for their weak points, etc. The dirty jobs become unprofitable, which is what matters most.
With the help of AI, small players can do a better job at removing CSAM.
>That's a tradeoff you can choose to make, but you need to enter into it with open eyes.
No it's not. It's certainly not my choice. No one asked me if it's okay for Facebook to distribute CSAM because you insist it would be worse if it didn't.
I don't really care if you classify it as a choice or not. One set of actions results in more CSAM than others. Just because you don't like the implication of there being tradeoffs doesn't mean there aren't tradeoffs.
In what regard is it incorrect that a single, larger entity that is at least notionally committed to avoiding the existence of any specific type of content on their platform is more likely to successfully avoid the existence of that type of content on their platform than smaller entities with fewer resources?
Now consider that some of those smaller entities might not be even notionally interested in avoiding the existence of that specific type of content on their platform, and are small enough for regulators to be unaware of its existence.
But it's not a general problem; it's a problem specifically about child porn that we are discussing. The idea that there is no point in discussing the child porn problem on Facebook is exactly what I'm disputing.
Content moderation is a general problem, whether you're talking about child porn, content intended for mature audiences, or memes about Winnie the Pooh.
What I and others are trying to tell you is that your obsessive focus on Facebook as if they are the root cause of the problem is incorrect. There is no magic solution I'm aware of because each of them have some sort of tradeoff.
The most extreme version of content moderation I can think of is that a human being examines and approves every single message of any kind before it is published, any image of a minor is banned because it's too hard to objectively define child porn (that still leaves the open question of how to determine if someone is a minor visually), and no accounts for anyone under the age of legal majority are allowed, as verified by a legal ID that is checked by a human being.
Even in that case, kids will find some way to get an account or just use their parent's account, and the door is cracked open again. And the pedophiles will just go elsewhere, probably using a service with significantly less resources available to attack the problem, which is probably worse than the status quo.
This doesn't even touch on the privacy concerns that most people would have with every message being reviewed.
As I said before, I would welcome you to share the solution that you imply exists which addresses every issue above.
I don't see others trying to tell me what you are.
> Content moderation is a general problem,
It's easy to reframe any problem in a more general manner. Doesn't make your discussion any less dishonest.
>As I said before, I would welcome you to share the solution that you imply exists which addresses every issue above.
It's not really my burden to come up with a solution. That's ridiculous. It's Facebook's problem, not mine. You haven't even disputed that they could do a better job. Your argument was that it's better for the child porn to be on Facebook than smaller websites, which is specious at best.
There's nothing dishonest about my attempts to have a conversation with you about this.
You've decided that there's some relatively easy solution, which Facebook should be implementing, to a problem that existed before Facebook and will exist after Facebook, and that appears unsolvable to basically everyone else on earth. Yet you have no ability to describe this solution and don't seem to have put much effort into thinking about it beyond assuming it exists.
No one is arguing that it's better for child porn to be anywhere. What myself and others have said is that there are tradeoffs to be made concerning content moderation, and you basically refuse to even contemplate the theoretical benefits and downsides of different approaches and their outcomes.
I don't know what your motivation is, whether you just have some irrational hatred of Facebook, are a zealot concerning child porn, both, or there's some other explanation for your obstinate ignorance, but attempting to talk to you appears to be a complete waste of time.
> You've decided that there's some relatively easy solution
I never used these words either. That's where the dishonesty is. Look back at our thread, how many times have you done that? You ask me to define basic words and then don't respond when I do... everyone else on earth agrees with you? Just read this thread. There is literally someone else in this very thread here agreeing with me.
>No one is arguing that it's better for child porn to be anywhere
You did. You argued it's better to be on Facebook than on smaller sites and audaciously asked me how I could disagree?
> I don't know what your motivation is, whether you just have some irrational hatred of Facebook, are a zealot concerning child porn, both, or there's some other explanation for your obstinate ignorance, but attempting to talk to you appears to be a complete waste of time.
It's much more telling that you think those are the only two reasons why someone would think "Facebook should really do something about its child porn problem already."
>but attempting to talk to you appears to be a complete waste of time.
You don't use any words, other than repeatedly saying "Facebook should be solving this problem they created", so people have to fill in the gaps because that is a very strange perspective and you refuse to elaborate.
> That's where the dishonesty is. Look back at our thread, how many times have you done that? You ask me to define basic words and then don't respond when I do... everyone else on earth agrees with you? Just read this thread. There is literally someone else in this very thread here agreeing with me.
You don't define basic words, that's the issue.
I never said everyone agrees with me, and the one person "agreeing" with you is just as clueless about the pros and cons of a centralized vs distributed system.
> You did. You argued it's better to be on Facebook than on smaller sites and audaciously asked me how I could disagree?
I did not. You're either confusing me with someone else (and twisting their words) or just imagining messages, just like you're imagining that you've diligently responded to every request for clarification on your ill-defined yet adamant stance.
> It's much more telling that you think those are the only two reasons why someone would think "Facebook should really do something about its child porn problem already."
>You don't use any words, other than repeatedly saying "Facebook should be solving this problem they created", so people have to fill in the gaps because that is a very strange perspective and you refuse to elaborate.
Thinking that Facebook should solve its own child pornography problem is not a weird perspective at all. What is weird about that? What do I need to elaborate on? That's my position. Are you saying it's unfounded?
>You don't define basic words, that's the issue.
I did, you asked me to and didn't respond.
>I never said everyone agrees with me, and the one person "agreeing" with you is just as clueless about the pros and cons of a centralized vs distributed system.
Oh, excuse me, not everyone, just "basically everyone else on earth". Again, incredibly dishonest on your part.
>I did not. You're either confusing me with someone else (and twisting their words) or just imagining messages, just like you're imagining that you've diligently responded to every request for clarification on your ill-defined yet adamant stance.
There's nothing ill-defined about my stance. It's very clear. Meta should clean up its child porn mess.
>Again, feel free to elaborate.
Well, I think it's incredibly disingenuous to act as if the only reason one could come to such a belief is an extreme opinion. I'm willing to bet you that most people would agree with me that Facebook should do something meaningful about its child porn problem. For no discernible reason you jumped to the conclusion that what I stated is an extreme opinion only shared by zealots. I'd bet most parents would agree. I'd bet most people would agree. In fact, you haven't at all explained what is extreme about that opinion. I think most people think child pornography is a problem, and I think most people think that Facebook, a website which facilitates the proliferation of child pornography and enables predators to get in touch with children, shouldn't. That all seems fairly self-evident, actually. I'm not sure where you spend most of your time such that you think people don't think child pornography is a problem and that only zealots care about it. What a weird place that must be.
> Likewise
Yet you came back to respond again. Either engage in a conversation honestly or fuck off.
> Thinking that Facebook should solve its own child pornography problem is not a weird perspective at all. What is weird about that? What do I need to elaborate on? That's my position. Are you saying it's unfounded?
Again, the person you originally were talking to about this and myself have pointed out that it's not just Facebook's problem, it's society's problem, and all I have said is that there are tradeoffs, which you deny for inexplicable reasons (probably because you have no idea what you're talking about, but feel free to correct that assumption).
In a similar vein, I asked you what specifically you'd like Facebook to do, and you didn't have any meaningful answer (probably because you have no idea what you're talking about, but feel free to correct that assumption).
Where is your comment where you've defined these basic words and got no response?
> Oh, excuse me, not everyone, just "basically everyone else on earth". Again, incredibly dishonest on your part.
I'll restate to "basically everyone on earth with a clue about the differences between centralized and distributed systems".
> There's nothing ill-defined about my stance. It's very clear. Meta should clean up its child porn mess,
The first obvious question is: How (what is the definition of "clean up")? The obvious question after that is: If they do so, where do the pedos go next, because Facebook didn't create their interest in child porn? The obvious question after that is: Is that better than the status quo?
Yet you have literally no comment on this. Why are you so adamant about your position when it's apparently so uninformed?
> Well, I think it's incredibly disingenuous to act as if the only reason one could come to such belief is because of an extreme opinion. I'm willing to bet you that most people would agree with me that Facebook should do something meaningful about its child porn problem.
See the link above where I asked you to define meaningful and you didn't respond. They aren't doing nothing now from what I can tell, and they certainly could be doing more, to the point of shutting down their service entirely. What is "meaningful" to you?
> For no discernible reason you jumped to the conclusion that what I stated is an extreme opinion only shared by zealots. I'd bet most parents would agree. I'd bet most people would agree. In fact, you haven't at all explained what is extreme about that opinion. I think most people think child pornography is a problem, and I think most people think that Facebook, a website which facilitates the proliferation of child pornography and enables predators to get in touch with children, shouldn't. That all seems fairly self-evident, actually. I'm not sure where you spend most of your time such that you think people don't think child pornography is a problem and that only zealots care about it. What a weird place that must be.
I live in a world where Facebook is used for a lot of things, just like every other service on the internet. I recognize that those services are far from the root cause of any issue related to the creation or distribution of undesirable content, understand that they are not able to solve the root cause, and know that the only way for them to fully eradicate any specific type of content from their service is to shut it down, with the end state being no internet once this is applied to all services that host content.
If you see that state as acceptable or desirable, then just come out and say so. If not, then you need to accept that online services will end up hosting some objectionable content at some point. You rejected both of these options previously when stated slightly differently, and have yet to describe a third state that must exist for that rejection to be valid, which is what leads me to believe you might be some sort of zealot (as they are known for rejecting reality). Feel free to describe why this rejection of the only two options I'm aware of is valid at any time, beyond just saying "Facebook needs to do more".
> Yet you came back to respond again. Either engage in a conversation honestly or fuck off.
You could start by sharing a coherent thought beyond "Facebook bad" on this topic. I've presented numerous comments and questions that have gone unanswered.
What your opponent is saying is, "there are mutually exclusive A and B", A being widespread CSAM and B being somebody needing to look at CSAM to remove it.
Can you elaborate on what exactly is wrong there? Do you see a third alternative, C, such that A and B aren't the whole choice? Or are you saying A or B do not exist and therefore there's no choice? Please name C, or tell us why A or B don't exist (or aren't acceptable), or explain your view if it doesn't fit into these options.
Some people are not okay with actively facilitating harm to people, even if inaction results in harm to other people. See: the trolley problem. This is totally okay, but the point made above is that
>That's a tradeoff you can choose to make
is not correct: It is a tradeoff that one specific person can choose to make, but not one that I or we can choose to make, because we don't control facebook. Mark Zuckerberg controls facebook. He alone can choose to make that tradeoff, or not, on behalf of society.
> Server moderators should be legally responsible for content on their server.
And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.
Child abusers are twisted people, and I really don’t care much what happens to them, but making it impossible for them to use the internet means sterilizing the whole thing.
>And therefore anything that is remotely questionable will be blocked. Not just kiddie porn. Pissed off a local business with a bad review? Blocked.
This is already the case. There is a lot of lawful, useful, medical or educational content that is actively censored on social media because it includes words or pictures of organs, while the same social media platforms actively encourage, and develop algorithms to push, underage girls (and possibly boys) posting pictures of themselves in sexual poses, attire and contexts.
Big tech and social media networks love and push CSAM, they just hide the genitals but the content really is the same.
> a lot of lawful, useful, medical or educational content
Like what? It’s all there on Wikipedia, and for all of Wiki’s faults, I have trouble imagining what kind of useful, educational, medical information you will find on social media that is better than that.
You are just saying that physical life doesn't function. People get banned or removed from all sorts of informal and formal groups all the time because of completely illegitimate reasons. That's just human politics embedded so deeply in our psychology it will never go away. They simply move to different groups - and similarly online they can move to a different federated server.
But that's not possible in today's oligopoly of social media. An invisible algorithm will ban you, and there is no way back, and few alternatives. Big Social Media is way worse from a sanitizing perspective than some federated social media.
I have no deep problem with exclusion; as you say, that’s human nature and unfixable. Making mods personally legally liable for everything that appears on their board is just insane. How many minutes are acceptable for them to see and review content? Or does everything have to be pre-approved?
I know a local blog that pre-approves every comment. He lets a lot of stuff through, because he lets people be dumbasses. If he were personally liable, the conversation would get a lot quieter.
Also, if you've gone from zero to one of the biggest corporations in the country, and have billions to throw at the 'metaverse', I find it hard to believe that removing CSAM is where you struggle.
No. It's a legitimately difficult problem because not all naked pictures of kids are illegal. The false positive problem is bad for business, but also generally bad even if big social media were benevolent.
Moderators need to actually understand the context of the picture/video, which requires knowledge of culture and language of the people sharing the pictures. It's really difficult to do that without hiring moderators from every culture in the world.
But small federated servers can often align along real world human social networks, so it's easier for the server admin to understand what should be removed.
The amount of CSAM online is completely out of control. There's already nation-level and sometimes international cooperation to catch any known images with perceptual hashing (think: the opposite of cryptographic hashing) as well as other automated and manual tools.
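To make the perceptual hashing aside concrete: unlike a cryptographic hash, where one changed pixel flips the whole digest, a perceptual hash maps visually similar images to nearby values, so known material survives re-encoding and small edits. Here's a toy average-hash sketch in Python, assuming Pillow is installed; the filename, known hash, and threshold are made up, and production systems like PhotoDNA or PDQ are far more robust:

```python
# Toy "average hash" (aHash), illustrating perceptual hashing.
# Similar images -> small Hamming distance between hashes, unlike
# cryptographic hashes, which are designed to behave like the opposite.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink + grayscale so the hash reflects coarse structure,
    # not resolution, format, or minor edits.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:  # one bit per pixel: brighter than the mean?
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two hashes.
    return bin(a ^ b).count("1")

# Hypothetical usage: flag an upload whose hash is near a known hash.
# if hamming(average_hash("upload.jpg"), KNOWN_HASH) <= 5: escalate()
```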
My impression is it would take Manhattan-Project levels of effort and funds to come close to "solving" this problem, especially without someone getting on a watchlist for having a telehealth-first primary care provider insurance plan and asking for advice on their toddler's chickenpox.
Human review? Meta already has small armies' worth of content moderators, who tend to burn out with psychological problems and have a suicide rate where you're probably better off going to fight in a real war. (This includes workers hired by Sama in Kenya, to link back to the OP.)
I will reluctantly grant Meta that they're up against a really hard problem here.
Yeah, I agree with you. Of course, it's not Meta's fault that the CSAM exists, but the problem of filtering it at Meta's scale, however difficult, is solvable; it just fundamentally requires changing how the platform works, and would likely require a lot more money to be spent.
Isn't this more about disincentivizing the posting of it in the first place by increasing the chances of getting banned? Once you have to remove it, it's too late.
>CSAM on social media will be 100x suppressed because banning people is way easier on small servers.
No it isn't. Small servers often don't have paid security or moderation, are run in an anonymous fashion, and have no profit motive that can even be used to incentivize them against hosting illegal content.
That's visible when it comes to porn. There are a million bootleg porn sites on the internet that show off illegal content. The only site that was ever forced to curate its content was Pornhub, because they're sufficiently large, work in a jurisdiction that has laws, and can be held accountable. From a content moderation standpoint, going after a million web forums is an absolute pain in the ass compared to going after Facebook.
Which is the first argument any decentralization advocate always brings up (and they're correct to do so), censorship is harder and evasion of law enforcement easier when dealing with a network of independent actors.
> Server moderators should be legally responsible for content on their server.
So if you want to send someone to jail, just talk your way into joining their server, upload some illegal content, and report them for it?
> Not many moderators will have to look at CSAM because the structure of the system makes it unappealing to even try sharing CSAM, knowing you will be immediately blocked.
Why would someone join a server with active moderation if they wanted to share CSAM with their social media friends?
They would seek out one of those servers that was set up specifically for those groups, where it was known to be a safe space.
This is what many people don't get about federated networks: The people in those little servers DGAF if you block them. They want to be surrounded by their likeminded friends away from the rules of some bigger service like Facebook or Twitter. Federated social media is the perfect platform for them because they can find someone who set up a server in some other country with their own idea of rules and join that, not be subject to the regulations of mainstream social media.
right, and you have other users on fediverse that notice that server leaking, and if the content is bad enough, report the service to an authority. Having all of the pedophiles and other creeps on a tiny subset of servers, isolated islands of them; well, that ought to make enforcement easier.
It also makes it relatively easy to avoid, as server admins share blocklists. I know a dozen servers offhand that i'd block if i ran another fediverse server.
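The blocklist sharing described here is mechanically simple. A toy sketch in Python, where the published CSV format, the URL, and the function names are all hypothetical; real fediverse software such as Mastodon has its own import/export tooling for domain blocks:

```python
# Toy sketch: merge peers' published domain blocklists and "null route"
# any domain a trusted peer has blocked. Format and URLs are hypothetical.
import csv
import io
import urllib.request

def fetch_blocklist(url: str) -> set:
    # A peer admin publishes "domain,reason" rows at some URL.
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return {row[0].strip().lower() for row in csv.reader(io.StringIO(text)) if row}

def should_defederate(domain: str, peer_blocklists: list) -> bool:
    # Block the whole domain if any trusted peer blocks it.
    return any(domain.lower() in bl for bl in peer_blocklists)

# Hypothetical usage:
# peers = [fetch_blocklist("https://peer.example/blocks.csv")]
# if should_defederate("bad.example.net", peers): ...  # null route it
```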
Fosstodon fediverse server doesn't have this issue, for example.
I replied this way because the way you wrote it, it sounds like an indictment of a system that's designed to avoid advertisers getting user profiles, over all else.
The problem is the people who participate in this (the illegal and immoral), and not "the network."
So people congregating to do things online will do such in places where it isn't illegal to do such things?
We're all aware that it is possible to run a private website, forum, chat server (irc-like or discord-like), including "federated" servers, but not federate? in fact, Element, a chat client, has a parent company that even sells "completely private, encrypted chat", which will never "leak."
I'd much rather have leaky CSAM federated servers than every bad actor behind a VPN. I don't want to see the shit, but i can null route the entire domain and be done with it, or i can send links to my local authorities and let them deal with it.
A similar thing is racism, would you rather have someone be openly racist, or just privately? This was said, i believe, about Joe Biden, about why people tend to trust Trump more, since he wears everything on his sleeve, and Biden speaks out of both sides of his mouth. Like how Carlin said Clinton won people over by saying "Hi folks, i'm completely full of shit and what do you think about that? and folks said, "well, at least he's honest."
Sunshine is the best disinfectant.
and authorities will know if there's new CSAM, and will crop the images and send them to the groups that track down where clothes came from, what the decor in the room means, whether there's anything else identifiable. None of this is possible if it's underground and "non-federated."
I'd rather CSAM ceased to be a thing; but again, i'd much rather have idiots announcing it publicly than on an E2EE private network.
The one thing I will throw out here that I can add to this conversation is that I think the government simply does not care, either. It mainly only gets dealt with at the law enforcement level in response to mass public outrage, or when someone is a political target.
Anecdotally, when I was a young adult I was a volunteer moderator for a large forum. We got reports of CSAM several times a month and had a process for escalating and reporting it to the FBI IC3 - we retained a lot of information about the users that posted it.
One of the administrators of the website mentioned to me that over the years since the inception of the forum, they'd reported almost a thousand incidents of CSAM distribution - and the FBI followed up with them to get information less than 10 times in total.
The FBI is interested in busting perverts in closets. That's often how they work their way up the "supply chain" when it comes to CSAM. Consumers lead them to distributors, who lead them to producers.
A fair point. But it still seems reasonable that only about 1% of suspect posts lead to a formal inquiry. Doesn’t mean they aren’t taking the report into account. You have to figure that they already have leads on most of them.
Do we really have to give the benefit of the doubt to the agency that was literally running one of the largest CSAM distribution outlets in the world for years as a honeypot?
If you want to argue that the FBI is a fundamentally flawed agency that on balance is a net negative, I won't fight you that hard. But during the civil rights struggle, they were the only force that could be trusted at all.
Yes, that was 60 years ago. No one involved at that time is still there - and in fact, most of them have passed. I don't know why you think there's a shred of relevance there.
Yep. If you cannot both safely and legally provide the thing you are selling, you are no longer a legitimate company; you are a criminal enterprise profiting off of exploitation.
Sure, then they can go demand said standards for social media platforms, including an expected amount per N posts, just as car companies are not expected to have car fatality rates of 0.
The fact is that simple scale means that there will always be something, no matter how abhorrent. Small scale doesn't change this, it just concentrates it.
Do car companies sell cars without air bags, or seat belts? What about cars that haven't been crash tested? What happens to them if they don't do this do you think?
Would you drive a car optimized for profit that didn't have those safety features? How about on a highway? Daily?
Here it's said that it's the users' fault. I disagree. Completely. Staying on topic, many of these companies have laid off the employees who tried to prevent things like this.
When Ford DNGAF with the Pinto, and GM with the Corvair (like tech companies do not GAF), they deservedly got this same level of contempt/demand for oversight. A dude named Ralph Nader went on a huge crusade about it. And they got a ton more oversight, safety requirements, etc. put on them.
I voted for Ralph Nader a few times, until he stopped appearing on ballots for whatever reason. For this reason, and many others. I don't remember any negative press about him, either. Maybe he got out when mudslinging became de facto in elections.
I am not sold on the federated thing to solve CSAM or similar issues.
Actually, companies should be bullied about privacy and copyright so they are unable to share any content at scale with third parties. Thus they have to solve it on their own and are forced to realize their business model is shit.
when i ran a fediverse server for myself and 3 people, but allowed public signups if someone came by; it was very easy to ban people, and very easy to null-route entire swaths of the fediverse, because i didn't want their content on my service.
That's more what i got from that pull-quote. I know a company that has hundreds of individual forums, and those are all moderated quickly and correctly (last i heard). They're moderated so effectively they often get DDoSed from Russian IPs for banning users for scam posts from that country.
These workers prepare data for AI. I don't think the need for them will go away anytime soon.
Westerners are too expensive and unwilling to do it. AI is a business model that requires poverty and extreme inequality to function. Yes, other businesses do that too, but they don't claim to be a solution to everything while actually having very special human requirements.
There are more reasons why these jobs are located in developing countries; it's not only the price of labour. Imagine for a second that these annotations had to be done in the US. The public outrage would probably be audible across the Atlantic. This is another form of imperialism.
I agree that there’s no good way to do this other than, like… no user-generated content ever, or just banning everything like baby pics so nobody can post them.
Granted, the latter is kind of happening distantly on YouTube, where you can’t talk about “suicide”, so everyone self-censors…
You must be extremely privileged to think that way; even as an EU resident I would be glad to do it for the minimum salary. For your info, a terrible job for most humans is one that is extremely hard physically, to the point of destroying your health. That said, like many people, I would find it much more interesting than many boring jobs. [If someone reads this, please hire me for this; in exchange I would work the first 5 hours for free]
Isn't this more that tech companies are just more high-profile and integral to the political and social landscape than older companies, but that, reviewing the current political zeitgeist, they're in lockstep with what some, if not all, would just call fascism?
They are literal defense and offense contractors. They hang out at the Pentagon. They sell political data to sway elections. They give gifts to leaders for favors. It is technofascism.
Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.
I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.
I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.
I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.
Unfortunately this backlash is also the perfect cover for authoritarian government action: they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.
Companies of the 20th century certainly weren't more ethical. (Though a few select tech companies seem to be intent on proving the opposite.)
But it's not really a fascism thing. While fascism does love the oppression of women, and the current crop of fascists have a notable connection to the Epstein case, this is a lot more boring.
Sam Altman's not a fascist; he's a wet noodle who sucks up to the Trump administration for money. He's not even good at it. The way his company handled CSAM, together with the accusations from his sister, does cast aspersions on Altman, but all other evidence suggests he's just a moron acting recklessly: not identifying the problem ahead of time, and responding poorly.
In the case of Meta, we know who Zuckerberg is. The company got its start as, in crude terms, a sex pest website: the original "Facemash" site was forcibly taken down by Harvard. This is not some new consequence of this turn to fascism; Zuckerberg's always been like this, and the actions taken against him were clearly not enough to keep the company culture from following his precedent.
> Companies of the 20th century certainly weren't more ethical.
Disagree, not on average. There was a non-trivially higher % of decisions made based on "what's good for the customer" or "what's good for the product" or "I would be ashamed to do this" and a lower % of decisions made based on "what maximizes profit in the next quarter". I think that is more ethical. To take it to an extreme, using slave labor because it's good for the customer is more ethical than using slave labor to maximize profit in the next quarter.
Sounds about right. If you know someone who uses these smart glasses, it's important not to tolerate them whatsoever. Don't speak with them or interact with them. I wouldn't even recommend being in their presence.
> It is not up to you to deprive anyone their right to use them.
I don't see anyone saying that people don't have the right to use them. I see people saying that they have the right to avoid being anywhere near the people who use them and to disapprove of those people. Which is just as much of a right as the right to wear spy glasses.
I'm glad to see opinion seems to be swaying back in this direction. It was only a few months ago that the general sentiment seemed to be "times are different than the glasshole days, it's fine now."
It is unfortunate that a large number of users here are not hackers, not even in an idealistic philosophical sense, and will betray the public good for their own short-term gain.
>I don't think that's fair. Smartglasses have legitimate purposes.
I think that's true in principle, but in practice there are going to be two kinds of smart glasses users: extraordinarily annoying kids or young adults acting annoying in public so they can post videos to social media, and then normal people who have no clear sense of how much they're violating the privacy of those around them, and just like cool tech.
Very, very few users are going to be an interesting or valid use case -- eg: someone who is using them to assist with a disability, or for research, or something.
Even most dash cams don't stream to Meta -- they just record the last _n_ hours, and you need to know to save off the video if you're in a crash/incident. In other words, most of the time no privacy is violated, and the only potential privacy violation occurs during an incident.
Even police body cams, which I wholeheartedly support, have some pretty strong downsides: currently, if you're at the end of your rope, having the worst day of your life, and in your dishevelment turn a speeding ticket into a BATLEO, you're famous forever for being a lunatic. Maybe the rest of the time you're a good person, and you can learn from this and move on. Except now you have a permanent albatross around your neck. This is a secondary penalty that the justice system did not intend, and has no answer for.
I saw there is at least one company working on offline smart glasses for disabled users. I don’t have such a problem with this, and I wonder if the industry as a whole could be nudged in this direction. Offline glasses seem more ok to me.
It makes a lot of sense for actual accessibility devices to be offline-capable. You don’t want to lose your “sight” when you step into a metal building or elevator.
You realize smart glasses have a battery that allows for all of 15 to 20 minutes of recording, right?
Hell just turning on wake word detection for asking it questions murders the battery life and it is one of the first things people turn off.
The phone in your pocket reports your position to multiple ad agencies throughout the day. Stores track individuals' movements throughout their buildings and see what aisles people linger at.
15 minutes of video recording via glasses (versus on a smart phone, or go pro, or drone) is not some huge mass surveillance issue.
> Very, very few users are going to be an interesting or valid use case
You then list a mere two categories.
Would your argument have been similar in 2008 if told that in ten years, everyone in the economic first world would be carrying multiple cameras including a dedicated "selfie" camera at all times?
You say that like it's assumed that ubiquitous smart phones were obviously a good thing, when it sure seems like there's an increasing number of people questioning that assumption.
I'm not sure I understand the point about a dedicated "selfie" camera; however, I think we're conflating "percentage of users" with "varieties of use cases." I think there could be quite a cornucopia of potential use cases, but I think per capita most people will not actually be making use of these. As other commenters have pointed out, I'd be a lot more tolerant if the data were not constantly piped to Meta.
The point about a dedicated selfie camera was that in 2008, few would have considered taking selfies to be a major use case that would drive >90% of teens and adults to have a camera which has no other reasonable purpose. In the age of FaceTime calls, it would seem absurd to question why it's needed, but nothing like that was mainstream in 2008, which would lead to the same argument of "there are very few legitimate reasons to want such a camera (and it will enable creepshots)".
My wider point is that there are already many obvious use cases, and as adoption of cameras which are always on or plausibly always on rises, there will be a lot more, including augmented reality, translation, context hinting, AI agent awareness for assistants and personal security, and at least dozens of others, some of which I am sure no one has started building for, yet.
Meta is probably not the winner in this space (or, I hope not, at least, so we agree there!). However, the idea that people have a right to remember and process what they see and hear in full fidelity is pretty basic, in my opinion.
Thanks for clarifying, I appreciate it. I'm so burnt by the potential downsides (and by the last ~19 years of smartphones) that I don't think we can see eye to eye, but I really appreciate you taking the time to expand on your point so I could understand your perspective.
> However, the idea that people have a right to remember and process what they see and hear in full fidelity is pretty basic, in my opinion.
If that's what we were talking about, I'd be much less bothered. But it's not. What we're talking about is people recording others and feeding that data to a third party.
I can't deprive someone of their right to use them, but I can refuse to interact with someone who's wearing them. This seems like a fair natural consequence. Feel free to wear them, but I won't speak to you when you do.
dash cams are local and pointing at the road, not everywhere.
body cams are local and mostly used by law enforcement to guarantee they are not abusing their power.
glassholes are connected to the cloud. you may have the right to record in public space, i have the right to remain anonymous in the crowd and not be constantly targeted by an advertisement company.
Even if 1% of the corner cases are legit uses (blind people having the glasses describe the world around them is fantastic), 99% of the people using them are assholes that deserve to be put in the ground and the glasses smashed.
I am blind, and I could imagine several usecases which would make my life a lot easier by using glasses like this. But because of their reputation I will most likely never use them, and especially not in public. I'm already afraid enough people will think I'm recording them when I use my phone to get info about what's around me, definitely don't need to get punched in the face for wearing meta on my face.
Edit: Not that I would want Meta to get all that data anyway. But even if glasses exist which are more privacy conscious, I think Meta and Google Glass thoroughly ruined the reputation of any kind of wearable like this.
I'm sorry you are dealing with the social repercussions of assistive technology. I really wish companies weren't so gross, and that they did not endanger some of the advantages of advances like this by being gross.
I can imagine there are many use-cases for blind people, but I also think having some kind of visual indicator that "these glasses are recording" would be good. I don't know what tools you use in public at the moment, but if you use, for example, a white cane, it might help people to understand "this person is using a camera for assistance". But yes, the fact that glasses manufacturers have already demonstrated they want to take every frame of data they can does sour their reputation.
I seem to recall that when the Snapchat glasses were a thing, they had a very bright and obvious ring of LEDs around the camera itself, bright enough to shine through a sticker placed over them. Sure, there are still ways to defeat that, but it makes it a bit harder.
Also I just googled for what the light actually looks like when it's recording, and it's not even really that visible...
If the "subject" is human, those seem rather few. Surgeries come to mind, though smart glasses would be more a convenience there. Maybe some psychiatric patients, where a doctor wants to review snippets of his interactions with lower-level staff or his family members? Law enforcement trying to record interactions between informants and targeted criminals - though the latter might wise up pretty quick. Security staff at some very-high-security facilities.
I already noted it in the answer. If a person feels at risk, or even if they're on vacation, they have a right to record something/everything and someone/everyone around them in public, just as they could with a phone.
Do you think you will know if someone has their phone in their pocket or in a holster, and is turned on and recording? You will never know.
There are dozens if not hundreds of cameras pointed at the street that record people every time they go out in public in any urban setting.
> Do you think you will know if someone has their phone in their pocket or in a holster, and is turned on and recording? You will never know.
At least this says something about the intention. Someone who films with a hidden phone implicitly shows that they intentionally hid this from the people being filmed.
Filming with glasses is hidden by design. It gives plausible deniability to the person filming, so they can film covertly but pretend they weren't hiding anything.
In most cases this doesn't make a difference but there are some cases where the premeditation can make it worse for the person doing the "abusive" filming.
If someone is recording you on video with a smartphone, you are generally aware of it, because it has to be pointed at you. Sure, you have a right to record people in public, there is no reasonable expectation of privacy in a public place, but I would quite like to know if you are recording me. I'm also not terribly worried about people recording me having sex or being naked in public without my knowledge...
>> even if they're on vacation, they have a right to record something/everything and someone/everyone around them in public
Big assumption here that the place you're on vacation doesn't have different laws. You may have absolutely no right to record "everything and everyone" around you.
I have 2 kids in single digit ages (1 under 5). I bought meta gen 2 last month and I cannot describe how many sweet moments I have captured. My kid loves to sing while playing with dolls and stops as soon as I flip my phone out to record.
I hope you can appreciate that you're capturing this data for Meta and their contractors, and that they have the capability of doing whatever they want with it. My spouse and I ask everyone taking pictures of our kid to never post them to social media, because Meta et al. create a shadow profile using those pictures, and they can share those photos with contractors and with other people, and we don't want a company like that to have my son's data without his 18-year-old self's consent.
I get this argument and largely agree with it in regards to these meta glasses. Its why I don't currently use them.
But I'd like to have some smart glasses that do respect my privacy and offer this kind of functionality. Honestly, most of the things smart glasses do today are stuff I'd really like. Having my glasses just be the bone conduction headphones I often wear anyways? Check. Easy access to taking photos and short videos of life experiences? Love it. Integrated into the thing I'm often wearing on my head anyways? Perfect.
If you walk up to me and shove a camera in my face I'll get very loud and very angry with you very quickly. That's kind of paradoxical, if you intended the camera to make you feel safer. I don't think I'm in the minority.
> Smartglasses have reasonable and legitimate uses. People also use bodycams that record continuously, such as for legal reasons. People have a right to record in public, such as if they feel at risk. Are you going to go after car cameras next?
None of those default to sharing your recording with anyone else, let alone with no practical way to opt out.
I do not want my employees recording their day job and selling it, or the creepy dude next to me in the bathroom filming my goods or the log jam flying out of my butt so Meta can try to sell me Pepto.
I also don't want that one time I did something minorly illegal, like jaywalking, getting auto-fed into Palantir so they can ship me to the latest internment camp.
Or someone stealing my biometrics by just walking past me.
That seems backwards to me. In your country, if you were to record someone committing a crime against you in public, you’re the one who will go to jail?
Is the law applied equally, so that businesses, police officers, and government agencies are also not allowed to record in public?
I'll grant you smartphones, but smart TVs usually don't have cameras/microphones. The problem with smart glasses is that they constantly capture video and upload it to $VENDOR like in this case.
A mostly-solitary sporting event (or one where you know all the other participants and can get their consent to record beforehand) seems like a reasonable use of these sorts of glasses. I wouldn’t personally give consent just as a sort of privacy reflex, but it really depends on your social circle.
People recognize GoPro cameras for what they are. They are easily understood as a camera. Glasshole devices are not as easily recognizable and people honestly may not realize they are being recorded especially when the glasshole does not inform everyone they are being recorded.
Now, for your "while cycling" qualifier, why does it matter? Again, if you stop to talk to people while recording and it is not obvious you are recording, you're a glasshole. Personally, I have no experience with camera quality from these devices, but I do know what a GoPro can do. My gut instinct is that the GoPro will produce superior footage.
I have a GoPro, but it's a bit of a hassle to set up; I tried a chest mount and the angle wasn't great, and I think the eye-level view would look better. It's also more convenient to record on glasses, which I'll have to wear anyway.
Yes, I could record while talking to people but I wouldn't get the point of that, I want to record descents and pretty views.
My main point is someone owning smart glasses doesn't mean they automatically suck and should be ostracized.
The problem is there's places where you'd get noticed and probably removed for filming with a gopro, or even a smartphone. My local "wellness center" and pools have you deposit your smartphone before you exit the changing area into the showers.
The danger with creep glasses is that many people don't know what they are, they can be used with the LED disabled so they're perfect for filming people without their knowledge, and "these are prescription glasses" has a good chance of working. In a place with a "no recording devices" policy, "could you put that gopro away" has wide social acceptance/support, "take those glasses off" less so.
You're aware of the privacy implications but think people talking about avoiding people who use them are proposing dumb arguments? I don't follow your logic.
How does not talking to people solve the privacy problems of this new technology? Are you assuming that Meta will see that you ignored your coworker with smart glasses and shut down the project, along with Apple and the other smart-glasses manufacturers? I'd love to follow your logic, if you can't follow mine.
Ostracization has worked before, and ostracization can lead to decreased sales, which they will definitely notice. If engaging with people to talk about privacy implications requires sacrificing your privacy, I feel that avoiding them is a reasonable approach.
It's like if I had to be punched by someone in order to talk to them about why punching people all the time isn't alright; I'd find it very reasonable to just not associate with that person.
I think you're outlining a relationship with people who actually want to engage with these concerns in a responsible manner, and these platforms have definitely demonstrated the opposite: a willingness to use that engagement to inflict the very behavior you want to moderate.
There's also nothing stopping us from stigmatizing the use of smartphones in public. Even a slight discouragement of it would be progress. It doesn't have to be all or nothing.
I think smartphones are a lost cause. Even at the gym, there are guys in the locker room taking pics of themselves in the mirror. Meanwhile I'm walking ass-naked out of the shower. There is just no sensitivity to appropriate time and place anymore.
Many security cameras have the ability to record audio. Depending on where you are, it might be illegal to use it. All the cams I have purchased have it. That would include ReoLink and a recommended model from the Frigate site.
Because a person wearing glasses can usually move, and video surveillance cameras usually can't?
If that's not it then spell it out for me, please.
Also, why would I be deceptive in this discussion? I feel like I missed some ideological conflict.
Imagine someone pulling up a smartphone and then recording everything that happens around them. Contrast that with someone wearing smart glasses and doing that exact same thing.
On a separate note (and this is a genuine question): are you by any chance aware of the term non-consensual intimate imagery (NCII)?
I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.
They don't care. Or they refuse to realize that tech isn't the solution to it, but an amplifier of its scale.
Can tell you that my urge to take photos/record drastically dips around other people. Particularly if it were meant for any sort of commercial exploitation. Stephenson called people wired for max indiscriminate data collection/processing "gargoyles". Personally I prefer glassholes.
Someone pulling up a smartphone on me would feel hostile because it's violating a social contract. Maybe I'd feel betrayed and attacked if it turned out someone was recording me using glasses, but I don't know, I don't care about dashcams and this is not that much different. I imagine it feels bad and scary for women when someone takes creepshots of them, and this tech does open opportunities for that. Maybe that would be enough for me to hate glasshats if I had a bit of empathy. But isn't the genie already out of the bottle with 'deep nude' models available for everyone forever?
No, I don't think I've heard of NCII before, and "Trust and Safety" sounds like some corporate PR whitewashing term to me.
1> Genie out of the bottle: Yes and no. Nudification is a growing problem; non-consensual intimate imagery is a current problem. AI-based image-generation tools still require some amount of skill, and that is limiting the blast radius.
2> NCII: Years ago, I was scoping Reddit to identify content that was harmful from an Indian perspective. By far the largest category was NCII. This ranged from morphed images, to intimate images reshared, to images from victims' socials reshared in "thirst" communities. This included images of underage children.
Removing NCII is rough. First the victim has to be willing to come forward and get over the shame. Then they have to navigate a near-impossible system and find someone to help. The more conservative the nation, the less likely the support networks will be forgiving or helpful. Finally, once the data is out there, it will be mirrored across multiple sites in international jurisdictions.
This is one of the situations where, I fear, your life is simply hosed.
Korea is another country which has a severe problem with NCII, and I believe they even instituted laws against deepfaked porn.
> PR whitewashing: Heh. Well, that's the division that deals with online safety, fraud, content moderation, policy, and the rest. I believe eBay was the first firm to use that term, when they were handling fraud.
Most people don't run around holding out their smartphone directly in front of them. It has to be pointed at the subject, and tends to be obvious.
Smart glasses, however, are always aimed at whatever the wearer is looking at. They may or may not be recording (note the reports of people hiding the LED indicators), and at a fair distance could easily be mistaken for a normal pair.
The general populace is much more likely to notice the former recording rather than the latter.
I've seen people keep their phone in their shirt pocket. The only reason it tends to be obvious is that most people aren't trying to be covert. Those aren't the ones you should be worried about.
Don’t forget that audio recording is a thing. The camera doesn’t have to be pointed at you to violate your privacy. Plus I bet you walk past 90% (or more) of all cameras without ever noticing them. You only notice someone’s glasses because they are novel, not because they are more likely to record you.
At everything on the opposite side of the screen, typically. There is a recording light for Meta glasses, but not one for iPhones, for example: the "recording" indicators are all user-side there.
When I'm on public transport, people generally face their phones in such a way that they'd only be filming your feet or the floor... They don't hold them up at head height in such a way that other people would be recorded. Maybe it's just a cultural thing.
A Kenyan workers' organisation alleges Meta's decision was caused by the staff speaking out.
Meta says it's because Sama did not meet its standards, a criticism Sama rejects ...
Well, yeah. If I went straight to the press to trash the reputation of my client's product, rather than communicating internally first to help them proactively address the issues, I would expect to get fired.
Not that I am remotely interested in defending Meta, or optimistic that they would proactively address privacy issues. But I don't feel that sympathetic to the outsourcing company here either.
I don't know what happened behind the scenes. I'm just going off what is said and not said in the article. If I were whistleblowing about something like this, I would take pains to describe what measures I took internally before going public. I didn't see any of that here.
EDIT: Look, to be clear, I think it's bad that naive or uninformed people are buying video recorders from Meta and unintentionally having their private lives intruded on by a company that, based on its history, clearly can't be trusted to be a helpful, transparent partner to customers on privacy. I think it's good that the media is giving people a reminder of this. I think it's good that the sources said something, even though the consequences they suffered seem inevitable.
But to me, there is nothing essentially new to be learned here, and I don't know what can or should be done to improve the situation. I think for now, the best thing for people to do is not buy Meta hardware if they have any desire for privacy.
Maybe there are laws that could help, but what should be in those laws, exactly? It's not obvious to me what would work. I suspect that some of the reason people buy these products is for data capture, and that will sometimes lead to sensitive stuff being recorded. What should the rules be around this, and who should decide? Personally, I don't know.
What makes you think the outsourcing firm didn't raise these concerns in email or meetings? You think these people wanted to lose jobs and income? That's irrational.
Why reflexively defend a massive tech corporation caught repeatedly violating the law?
There are transgressions severe enough that your duty to stop them is heavier than your responsibility to "the reputation of your client's product." Amazing this needs to be stated, frankly.
More like a bright future as someone's fall guy. The ignorance of thinking that a large tech giant like Facebook would give a crap about any of those concerns makes this person too politically inept to make it anywhere.
What specifically do you mean? It is by design that smart glasses see the things happening in front of their users? Yes, it is. That is why people buy them.
Huh. There you go again, thinking everyone else is an idiot. Capture of users' video data by Meta is never acceptable. It would not be acceptable for any phone, and it is not acceptable for any glasses, ever.
Saving the data for any purpose other than allowing users to access it is bad enough; allowing Meta employees or contractors to view personal videos is on a whole new level.
I don't know why people buy smart glasses. Maybe they buy them for video capture. If so, the videos go to Meta's servers and Meta might do things with them. They might be criticized for not reviewing them in certain cases. That's one reason why I wouldn't buy Meta smart glasses.
The main issue here is Facebook employees viewing users' private video streams (including of user nudity) without the users' knowledge.
The secondary issue is that it's generally frowned upon to make your employees view nudity in the workplace. Are there extenuating circumstances here? We have no evidence of any.
Even if so, it doesn't matter, because 4-8 years later it'll be reversed again. And because it takes longer to rebuild than to dismantle, it will never be the same.
This is the cycle now. 180 degree turns in policy every 4 or 8 years. There's no long term planning.
When I worked at a company that was using Palantir's software about 15 years ago, the average age of a Palantir employee was, in my experience, in the early 20s.
It was almost certainly everyone's first job.
It's not too hard to think of ways to get a bunch of young folks to do your bidding without them questioning the motives or the moral challenges of the job.
"Poor, dumb people outbreed rich, smart people and make the whole world dumb" is not real. And the mechanism by which our world harms people is not because everybody involved is an idiot. Executives of corporations that are destroying the environment aren't just doing it because they don't know better. Leaders within the Trump admin and the GOP more broadly are often extremely well educated at top universities. Ignorance does not drive our politics. Resentment does.
However, modern politics of the right absolutely prey upon, and encourage, ignorance. Ridicule of intelligentsia and advanced education (often by Ivy League graduates!) is a key part of the strategy.
That smart people are cultivating an ignorant voting bloc doesn't negate the fact that ignorance is fundamental to the plan.
But Trump went to Wharton and Vance went to Yale. Educated people leveraging anti-intellectualism for political gain is not even remotely the same thing as what happens in Idiocracy.
> Last year this podcast said that nobody wants to solve this because solving it is going to eliminate (IIRC) hundreds of thousands of jobs. Which is a point to consider.
Yet we're ok with spending trillions on AI to eliminate jobs everywhere, including healthcare.
I don't think that's the reason.
Personally I'm of the opinion the reason it isn't being solved, is because the people whose job it would be to solve it get to keep their jobs due to donations from pharma and insurance companies.
Well right, people lobby not to change anything because they have giant companies that make them money. They need all those people in jobs to help them deny claims, identify fraud, waste, etc.
If Intuit and other tax preparers can protect their tax preparation rents at the expense of all income earners, then it is not difficult to believe that the medical industry is also able to protect its own rents.
https://www.fincen.gov/news/news-releases/fincen-assesses-re...