Hacker News | ro_bit's comments

Yeah they just finished their series A funding yesterday. Sorry to hear you missed out

> He fueled the nonprofit’s growth partly through unorthodox fundraising. Tessellations offered parents a deal: pay half their tuition as a donation for a tax write-off. “Lawyers say, ‘Please don’t do that,’” Stanat recalled, “I’m like, ‘But is it illegal?’ ‘No, not illegal.’ ‘OK, great, we’ll do it.’”

How?


This structure is not unusual in education, especially at institutions where tuition is a smaller component of the cost and you are expected to give much more than that.

A donation to a trust, or even to an endowment, is typical for the other component. If the institution is a nonprofit then, depending on the part of the world, you can claim tax benefits on the donation, or even on the tuition itself.

Accounting and tax are not always black and white. At times riskier clients may choose more aggressive practices, either expecting not to be audited or expecting to be able to defend the position with expensive experts if they are.


That’s how the sociopath recalls his conversation with lawyers, not anything any lawyer actually said.

They have to head off the investor who's going to ask "is child safety bullish?"

Why is my Chrome telling random websites which extensions I have installed?

It isn’t, exactly. They created a list of known extensions, pairing each extension’s ID with a file known to exist in that extension. The site iterates over each pair and tries to load that file; if the load doesn’t error, it knows the extension is installed. It’s a clever, if laborious, manual process, but it does bypass the security trying to prevent this kind of thing.
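A minimal sketch of that probing loop, assuming it's done with fetch() against chrome-extension:// URLs (the IDs, file names, and extension names below are made up; the probe function is injectable here just for illustration):

```javascript
// Probe a list of (extension ID, known file) pairs. A resolved fetch means
// the extension is installed and exposes that file; a rejection means it
// isn't installed (or the file isn't web-accessible).
async function detectExtensions(probes, probeFetch = fetch) {
  const found = [];
  for (const { id, file, name } of probes) {
    try {
      await probeFetch(`chrome-extension://${id}/${file}`);
      found.push(name); // request succeeded: extension present
    } catch {
      // request blocked: extension absent
    }
  }
  return found;
}
```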

I read that their reasoning is that it exists to block users who run known scraper extensions that bypass their terms of use, but I don’t entirely buy that.


So the follow-up question is: why is a random website allowed to try to load arbitrary files?

This is how I interpreted the original question, and indeed it makes no sense; JavaScript from a website should not be allowed to interact with extensions like this.

It's actually the extension injecting itself into the webpage, often to interact with it. (I imagine much of this is just looking for global ExtensionName objects.)

Actually, the article is clear about what is happening technically, and it’s both. Chrome does, in fact, allow the page to make requests for resources stored in the extension bundle, and this is one of the two fingerprinting methods that the article describes.

>JavaScript from a website should not be allowed

Agreed 100%.


I agree, and this is why I built 404. If you poke around the page a bit, you'll see a tool that prevents browser fingerprinting.

404 catches JS calls in JS proxies and returns mocked-up values (assigned by a profile). It also has protections against TLS fingerprinting, canvas fingerprinting, device enumeration, TCP/IP fingerprinting, HTTP header fingerprinting, and more.

The predatory practices that browser fingerprinting has enabled, disguised behind "fraud protection", are atrocious. Even with a VPN, even in incognito mode, a website can track me and see what I've been doing EVEN IF IT'S NOT ON THEIR SITE.

Then a data broker buys all this data and uses an AI model to put it all into a pretty little package and sell it to Google, or the gov't, or something. It's scary.


Because extensions can and often do contain stuff like images or JS bundles that they inject into a target page's DOM. Not allowing a tab's context to load files from the chrome-extension:// namespace would break a lot of things.

True, but you'd expect the same CORS rules to apply to extensions: only pages originating from an extension should, by default, be able to load resources from said extension.

Chrome exposes these files via a URL that you can fetch in javascript like you would any other file on a normal website. These local extension files usually contain code, styles or images that your browser needs to run the extensions.

Why is it not a CORS violation?

The browser needing access and a random website having access are quite different. Seems like a big ol' pile of vulns waiting to happen.


CORS is a server setting to tell the browser not to load its data from potentially unsafe origins. If you set a server to send access-control-allow-origin: *, then your browser will happily load these resources for you regardless of where you currently are. And chrome extensions need to be loadable from everywhere to be able to inject code or images into pages, so enabling CORS for them would defeat their main purpose. The extensions themselves might even need to bypass an existing CORS setup for the website you are currently on to fetch additional data.
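As a simplified model of that read check (a sketch only; real CORS also involves preflights, credentials, and multiple headers, and this helper is hypothetical):

```javascript
// Roughly the decision a browser makes before letting page JavaScript
// read a response: same-origin is always readable, otherwise the server
// must have opted in via Access-Control-Allow-Origin.
function canReadCrossOrigin(pageOrigin, resourceOrigin, allowOriginHeader) {
  if (pageOrigin === resourceOrigin) return true; // same-origin: always allowed
  if (allowOriginHeader === '*') return true;     // server opted in for everyone
  return allowOriginHeader === pageOrigin;        // or for this origin specifically
}
```

In these terms, an extension resource that any page can probe behaves as if it were served with `access-control-allow-origin: *`, which is what makes the fingerprinting possible.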

From the other end, yes extensions access all page data, but pages shouldn't access extension data at all; it feels like that should be the CORS violation.

You have it backwards. For an extension to work on a page, its data/code needs to be accessible from said page. If the extension server in Chrome enforced CORS to prevent access from tabs on other websites, extensions wouldn't work anywhere.

Firefox at least randomizes extension IDs per install. Chrome hands all of that to extension devs, basically a "your problem now".

If that were the case, the list wouldn't have extensions that relate to a user's religion, income, demographics, and more.

Is the same scan happening on Firefox? Random websites invoking extensions does seem like a security hole to me.

This was posted before and it seems that Firefox randomizes the extension URLs.

Here's the relevant bit from the original source:

"Chrome extensions can expose internal files to web pages through the web_accessible_resources field in their manifest.json. When an extension is installed and has exposed a resource, a fetch() request to chrome-extension://{id}/{file} will succeed. When the extension is not installed, Chrome blocks the request and the promise rejects.

LinkedIn tests every extension in the list this way."
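For reference, a manifest.json entry that exposes a file this way looks roughly like the following (the extension name and file path here are made up; the `web_accessible_resources` shape shown is the Manifest V3 form):

```json
{
  "name": "Example Extension",
  "version": "1.0",
  "manifest_version": 3,
  "web_accessible_resources": [
    {
      "resources": ["images/icon.png"],
      "matches": ["<all_urls>"]
    }
  ]
}
```

Any page matched by `matches` can then fetch `chrome-extension://{id}/images/icon.png`, which is exactly what the probe relies on.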


You can ask the same question about so many horrible security blunders web browsers have made over the decades.

They are only blunders if they aren't being used as features by someone

Is that information available to websites? I figured they were doing some kind of novel hackery to self-detect extensions based on behaviour that would only happen if X extension was installed.

But that would be a lot of work for 6,300 extensions. Unless someone offers that as a service?


Brave explicitly blocks this

Last time this was discussed the consensus was Brave does not block it. Brave's fingerprinting protection does not include extensions.

https://news.ycombinator.com/item?id=46904361


Well, just because LinkedIn still tries to send the requests on Brave doesn't mean the blocking doesn't work. The question is whether any request will give a valid response.

That said, I can't find conclusive info on whether this is blocked exactly. Brave does block "plugins" (which is why I assumed this includes this specific kind of fingerprinting), and the getExtension() call (which is probably unrelated), according to this page: https://brave.com/privacy-updates/4-fingerprinting-defenses-...

But since they don't explicitly mention the chrome-extension URL, you might be right.


Chrome is a browser produced by an advertising company. Its reason for existence is to track you.

Not that I disagree, but Google's tracking motivation in making the browser seems irrelevant to why it lets competitors do this fingerprinting.

They want fingerprinting to work for everyone because the more effective it is, the higher the value of the ad inventory they sell.

> Its reason for existence is to track you.

Source:


Because Chrome lets sites probe which extensions are installed, and LinkedIn turns that into telemetry.

Chrome always makes tracking easier. It’s their blind spot, because google.

So that websites can track and identify you "for improved personalized advertising" in exactly this way.

Browser fingerprinting is massively valuable to Google's surveillance/advertising apparatus. This is all working exactly as intended.


> like securing and purchasing limited-run tickets the moment they’re on sale — based on pre-authorized user instructions.

Glad they focused on the most pro-social use cases of agent payments, like scalper bots. For a moment I feared they'd lost their way.


Finally, wojak invocation on HN


Proud to be part of this historic moment, took until 2026 but better late than never


Which fringe puritanical lobbyist group is going to step up and take credit for this one?

(reference https://www.engadget.com/gaming/steam-now-bans-games-that-vi...)


I edit videos on a hobbyist level (mostly using DaVinci Resolve to edit clips of me dying in video games to upload to a ShareX host to show to friends). The big takeaway for me was reading that, for quality/efficiency, libx264 is better than NVENC for rendering h264 video. All this time I'd assumed NVENC was better because it uses shiny GPU technology! Is libx264 better for recording high quality videos too? I know it will run on the CPU unlike NVENC, but I doubt that's an issue for my use case.

Edit: from some googling it looks like encoding is encoding, whether it’s used for recording or rendering footage. In that case the same quality arguments the article is making should apply for recording too. I only did a cursory search though and have not had a chance to test so if anyone knows better feel free to respond


Yeah, this is a very common misconception. There are hardware encoders that might be distribution quality, but these are (to my knowledge) expensive ASICs that Netflix, Amazon, Google, etc. use to accelerate encode (not necessarily to realtime) and improve power efficiency.

GPU acceleration could be used to accelerate a CPU encode in a quality-neutral way, but NVENC and the various other HW accelerators available to end users are designed for realtime encoding for broadcast or for immediate storage (for example, to an SD card).

For distribution, you can either distribute the original source (if bandwidth and space are no concern), or you can ideally encode in a modern, efficient codec like HEVC (via x265) or AV1. AV1 might be particularly useful if you have a noisy source, since denoising and classification of the noise is part of the algorithm. The reference software encoders are considered the best quality, but often the slowest, options.

GPU is best if you need to temporarily transcode (for Plex), or you want to make a working copy for temporary distribution before a final encode.


You might want to try dumping your work from Resolve out in ProRes 422 HQ or DNxHR HQ and then encoding to h264/h265 with Compressor (costs $; it's the encoder part of Final Cut as a separate piece of software) on a Mac or Shutter Encoder. Also, I'm making a big assumption that you're using the paid version of Resolve; disregard otherwise. It might not be worth it if your input material is video game capture but if you have something like a camera that records h264 4:2:2 10bit in a log gamma then it can help preserve some quality.


> But officers can also make emergency data requests, or EDRs, in cases involving a threat of imminent harm or death. These requests typically bypass any additional verification steps by the companies who are under pressure to fulfill the request as quickly as possible.

How do companies decide which EDRs to fulfill and which ones require a judicial subpoena? Are companies ever even under the obligation to fulfill an EDR?


Maybe they type ASDF or donut:

https://www.texasstandard.org/stories/flock-safety-cameras-h...

> So in a lot of the searches that we reviewed, we had about 500,000 to take a look at. We found the word “investigation” – or variations of the word “investigation” – or “suspect” a lot with really no details about what the investigation pertained to or what the suspect may have done.

> A lot of searches also just listed gibberish, like “ASDF” – that’s the sequence of letters in the center row of your computer keyboard. Or just said that they were there for random checks. We even found a search that just said “donut” or that didn’t say anything at all.


Reminds me of when Reddit posted their year end roundup https://web.archive.org/web/20140409152507/http://www.reddit... and revealed their “most addicted city” to be the home of Eglin Air Force Base, host of a lot of military cyber operations. They edited the article shortly afterward to remove this inconvenient statistic


> host of a lot of military cyber operations

Relevant: “Containment Control for a Social Network with State-Dependent Connectivity” (2014), Air Force Research Laboratory, Eglin AFB: https://arxiv.org/pdf/1402.5644.pdf


Did they edit it? I stepped forward a few years and it's still there.

https://web.archive.org/web/20160410083943/http://www.reddit...

Funny nonetheless though.


The boring but more likely explanation is that "most addicted" is just a weird statistic that produced weird results.

Eglin has something like 50,000 people, but its actual population as a census-designated place is more like 5,000.

Oak Brook, IL was also "most addicted" but people didn't run with the idea that McDonalds HQ was running psyops.


I mean they should. Because corporate influence networks exist just as much as state run ones do.


There's a Popeye's at Eglin, maybe all that traffic was a chicken sandwich influence campaign?


I'm not saying those trends charts demonstrate anything, just that commercial human astroturfers or bot networks are no less of a thing than intelligence ones, and it wouldn't really be a conspiracy theory to think McDonalds (or any other company, trade association, lobbyist, PR firm, etc.) is operating a lot of social media accounts that could theoretically show up on a report like that if they were doing a lot of it from a specific place.


Urm. They almost certainly are though?

It was generally being called astroturfing when it got more apparent on Reddit in the early 2010s, and definitely didn't get less after.


The point is that a vaguely defined throwaway line on Reddit's blog is not great evidence for either.


> military cyber operations

You would think such people would be competent enough to proxy their operations through at least a layer of compromised devices, or Tor, or VPNs, or at least something other than their own IP addresses.


OP has just completely pulled this analysis out of their ass. They aren't all constantly running cyber operations on Reddit; that bears zero resemblance to what cyber operations look like in real life, including the point that you raised.


I mean, it was a quote, and they quoted it seemingly verbatim: https://web.archive.org/web/20160410083943/http://www.reddit...

Not sure what "most addicted" means except "over 100k visits total", but it doesn't seem to be pulled out of OP's ass.


Tor was literally invented for this use case.


Daily reminder (for myself especially) to engage as little with social media (reading/commenting) as possible. It's a huge waste of time anyway; it's not like I don't have better things to do.


Addiction is hard.

This is a special addiction because most of us are community starved. Formative years were spent realizing we could form digital communities, then right when they were starting to become healthy and pay us back, they got hijacked by parasites.

These parasites have always dreamed of directly controlling our communities, and it got handed to them on a silver platter.

Corporate, monetized community centers with direct access to our mindshare, full ability to censor and manipulate, and direct access to our community-centric neurons. It is a dream come true for these slavers, who evoke a host of expletives in my mind.

Human beings are addicted to community social interaction. It is normally a healthy addiction. It is not any longer in service of us.

The short term solution: reduce reliance on and consumption of corporate captured social media

The long term solution: rebuild local communities, invest time in p2p technology that outperforms centralized tech

When I say "p2p" I do not mean what is currently available. Matrix, federated services, etc are not it. I am talking about going beyond even Apple in usability, and beyond BitTorrent in decentralization. I am talking about a meta-substrate so compelling to developers and so effortless to users that it makes the old ways appear archaic in their use. That is the long term vision.


Not a good start.

Also don’t reply to this.


