> He fueled the nonprofit’s growth partly through unorthodox fundraising. Tessellations offered parents a deal: pay half their tuition as a donation for a tax write-off. “Lawyers say, ‘Please don’t do that,’” Stanat recalled, “I’m like, ‘But is it illegal?’ ‘No, not illegal.’ ‘OK, great, we’ll do it.’”
This structure is not unusual in education, especially at institutions where tuition is a smaller component of the cost and you are expected to give much more on top of it.
A donation to a trust, or even an endowment, typically covers the other component. If the institution is a nonprofit, then depending on which part of the world you are in, you can claim tax benefits on the donation, or sometimes even on the tuition itself.
Accounting and tax are not always black and white. Riskier clients may choose more aggressive practices, either expecting not to be audited or counting on expensive experts to defend them if they are.
It isn’t, exactly. They compiled a list of known extension ids, each paired with a file known to exist in that extension. The site iterates over each pair and tries to load the file; if the request doesn’t error, it knows the extension is installed. Building the list is a tedious manual process, but it does bypass the security measures meant to prevent this kind of thing.
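The probing step described above can be sketched roughly like this. The extension id and file path here are made-up placeholders, not LinkedIn's actual list, and the fetch function is injected so the core logic is visible on its own:

```javascript
// Sketch of the probe: try to fetch a file known to ship inside a given
// extension. In a real page, fetchFn would be the browser's fetch and the
// URL would be chrome-extension://{real-id}/{real-file}.
async function isExtensionInstalled(fetchFn, extensionId, knownFile) {
  try {
    // If the extension is installed and exposes the file, the request resolves.
    await fetchFn(`chrome-extension://${extensionId}/${knownFile}`);
    return true;
  } catch (err) {
    // Chrome rejects the request when the extension is absent.
    return false;
  }
}
```

The site would then loop this check over every (id, file) pair in its list and record which probes succeed.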
I read that their reasoning is that it exists to block users who run known scraper extensions that violate their terms of use, but I don’t entirely buy that.
This is how I interpreted the original question, and indeed it makes no sense: JavaScript from a website should not be allowed to interact with extensions like this.
It's actually the extension injecting itself into the webpage, often to interact with it. (I imagine much of this is just looking for global ExtensionName objects.)
Actually, the article is clear about what is happening technically, and it’s both. Chrome does, in fact, allow the page to make requests for resources stored in the extension bundle, and this is one of the two fingerprinting methods that the article describes.
I agree, and this is why I built 404. If you poke around the page a bit, you'll see a tool that prevents browser fingerprinting.
404 catches JS calls with JS proxies and returns mocked-up values (assigned by a profile). It also has protections against TLS fingerprinting, canvas fingerprinting, device enumeration, TCP/IP fingerprinting, HTTP header fingerprinting, and more.
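The proxy trick can be sketched in a few lines. This is an illustrative sketch only, not 404's actual implementation; the property names and profile values are made up:

```javascript
// Wrap a navigator-like object in a Proxy and answer fingerprinting reads
// with canned values from a profile. Properties not covered by the profile
// pass through to the real object.
const profile = { hardwareConcurrency: 4, platform: "Win32" };

function maskWithProfile(realObject, profileValues) {
  return new Proxy(realObject, {
    get(target, prop) {
      if (prop in profileValues) return profileValues[prop]; // mocked value
      return Reflect.get(target, prop); // passthrough
    },
  });
}

const masked = maskWithProfile(
  { hardwareConcurrency: 16, platform: "Linux x86_64", language: "en-US" },
  profile
);
// masked.hardwareConcurrency → 4 (mocked), masked.language → "en-US" (real)
```

A real tool would also have to mask `Object.getOwnPropertyDescriptor`, `toString`, and similar introspection paths, since naive proxies are themselves detectable.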
The predatory practices that browser fingerprinting has enabled, disguised as "fraud protection", are atrocious. Even with a VPN, even in incognito mode, a website can track me and see what I've been doing EVEN IF IT'S NOT ON THEIR SITE.
Then a data broker buys all this data and uses an AI model to put it all into a pretty little package and sell it to Google, or the gov't, or something. It's scary.
Because extensions can and often do contain stuff like images or JS bundles that they inject into a target page's DOM. Not allowing a tab's context to load files from the chrome-extension:// namespace would break a lot of things.
True, but you'd expect the same CORS rules to apply for extensions.
Only pages originating from an extension are by default able to load resources from said extension.
Chrome exposes these files via a URL that you can fetch in JavaScript like you would any file on a normal website. These local extension files usually contain code, styles, or images that your browser needs to run the extension.
CORS is a server setting that tells the browser whether its data may be loaded from other origins. If you set a server to send Access-Control-Allow-Origin: *, then your browser will happily load those resources for you regardless of which site you are currently on. Chrome extensions need to be loadable from everywhere to be able to inject code or images into pages, so locking them down with CORS would defeat their main purpose. The extensions themselves might even need to bypass an existing CORS setup on the website you are currently on to fetch additional data.
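As a simplified sketch of the browser-side decision (real CORS involves more headers, request methods, and credential rules than this):

```javascript
// Simplified model of the check the browser performs before letting a page
// read a cross-origin response: "*" opts in to reads from any origin,
// otherwise the header must match the page's origin exactly.
function corsAllowsRead(allowOriginHeader, pageOrigin) {
  return allowOriginHeader === "*" || allowOriginHeader === pageOrigin;
}

corsAllowsRead("*", "https://example.com");               // true
corsAllowsRead("https://a.com", "https://b.com");          // false
```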
From the other end, yes extensions access all page data, but pages shouldn't access extension data at all; it feels like that should be the CORS violation.
You have it backwards. For an extension to work on a page, its data/code needs to be accessible from said page. If the extension "server" in Chrome enforced CORS to prevent access from tabs on other websites, extensions wouldn't work anywhere.
"Chrome extensions can expose internal files to web pages through the web_accessible_resources field in their manifest.json. When an extension is installed and has exposed a resource, a fetch() request to chrome-extension://{id}/{file} will succeed. When the extension is not installed, Chrome blocks the request and the promise rejects.
LinkedIn tests every extension in the list this way."
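For reference, the quoted mechanism corresponds to a hypothetical manifest fragment like this (extension name and file are illustrative); only files listed under `web_accessible_resources` are fetchable from ordinary pages:

```json
{
  "name": "Example Extension",
  "manifest_version": 3,
  "web_accessible_resources": [
    {
      "resources": ["icon.png"],
      "matches": ["<all_urls>"]
    }
  ]
}
```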
Is that information available to websites? I figured they were doing some kind of novel hackery to self-detect extensions based on behaviour that would only happen if X extension was installed.
But that would be a lot of work for 6,300 extensions. Unless someone offers that as a service?
Well, just because LinkedIn still tries to send the requests on Brave doesn't mean the blocking doesn't work. The question is whether any request will give a valid response.
That said, I can't find conclusive info on whether this is blocked exactly. Brave does block "plugins" (which is why I assumed this includes this specific kind of fingerprinting), and the getExtension() call (which is probably unrelated), according to this page: https://brave.com/privacy-updates/4-fingerprinting-defenses-...
But since they don't explicitly mention the chrome-extension URL, you might be right.
I edit videos at a hobbyist level (mostly using DaVinci Resolve to edit clips of me dying in video games, uploaded to a ShareX host to show friends). The big takeaway for me was reading that, for quality/efficiency, libx264 is better than NVENC for rendering h264 video. All this time I'd assumed NVENC was better because it used shiny GPU technology! Is libx264 better for recording high-quality video too? I know it runs on the CPU, unlike NVENC, but I doubt that's an issue for my use case.
Edit: from some googling it looks like encoding is encoding, whether it's used for recording or rendering footage. In that case the same quality arguments the article makes should apply to recording too. I only did a cursory search, though, and haven't had a chance to test, so if anyone knows better feel free to respond.
Yeah, this is a very common misconception. There are hardware encoders that might be distribution quality, but these are (to my knowledge) expensive ASICs that Netflix, Amazon, Google, etc. use to accelerate encodes (not necessarily to realtime) and improve power efficiency.
GPU acceleration could be used to accelerate a CPU encode in a quality-neutral way, but NVENC and the various other HW accelerators available to end users are designed for realtime encoding for broadcast or for immediate storage (for example, to an SD card).
For distribution, you can either distribute the original source (if bandwidth and space are no concern) or, ideally, encode in a modern, efficient codec like h265 or AV1. AV1 might be particularly useful if you have a noisy source, since denoising and classifying the noise is part of the algorithm. The reference software encoders are considered the best-quality, but often the slowest, options.
GPU is best if you need to temporarily transcode (for Plex), or you want to make a working copy for temporary distribution before a final encode.
You might want to try exporting your work from Resolve as ProRes 422 HQ or DNxHR HQ, then encoding to h264/h265 with Compressor (costs $; it's the encoder part of Final Cut sold as a separate piece of software) on a Mac, or with Shutter Encoder. Also, I'm making a big assumption that you're using the paid version of Resolve; disregard otherwise. It might not be worth it if your input material is video game capture, but if you have something like a camera that records h264 4:2:2 10-bit in a log gamma, then it can help preserve some quality.
> But officers can also make emergency data requests, or EDRs, in cases involving a threat of imminent harm or death. These requests typically bypass any additional verification steps by the companies who are under pressure to fulfill the request as quickly as possible.
How do companies decide which EDRs to fulfill and which ones require a judicial subpoena? Are companies ever even under the obligation to fulfill an EDR?
> So in a lot of the searches that we reviewed, we had about 500,000 to take a look at. We found the word “investigation” – or variations of the word “investigation” – or “suspect” a lot with really no details about what the investigation pertained to or what the suspect may have done.
> A lot of searches also just listed gibberish, like “ASDF” – that’s the sequence of letters in the center row of your computer keyboard. Or just said that they were there for random checks. We even found a search that just said “donut” or that didn’t say anything at all.
Reminds me of when Reddit posted their year-end roundup https://web.archive.org/web/20140409152507/http://www.reddit... and revealed their “most addicted city” to be the home of Eglin Air Force Base, host of a lot of military cyber operations. They edited the article shortly afterward to remove this inconvenient statistic.
Relevant: “Containment Control for a Social Network with State-Dependent Connectivity” (2014), Air Force Research Laboratory, Eglin AFB: https://arxiv.org/pdf/1402.5644.pdf
I'm not saying those trends charts demonstrate anything, just that commercial human astroturfers and bot networks are no less of a thing than intelligence ones. It wouldn't really be a conspiracy theory to think McDonald's, or any other company, trade association, lobbyist, PR firm, etc., is operating a lot of social media accounts that could theoretically show up on a report like that if they were doing a lot of it from a specific place.
You would think such people would be competent enough to proxy their operations through at least a layer of compromised devices, or Tor, or VPNs, or at least something other than their own IP addresses.
OP has just completely pulled this analysis out of their ass. They aren't all constantly running cyber operations on Reddit; that bears zero resemblance to what cyber operations look like in real life, including the point that you raised.
Daily reminder (for myself especially) to engage as little with social media (reading/commenting) as possible. It's a huge waste of time anyway; it's not like I don't have better things to do.
This is a special addiction because most of us are community starved. Formative years were spent realizing we could form digital communities, then right when they were starting to become healthy and pay us back, they got hijacked by parasites.
These parasites have always dreamed of directly controlling our communities, and it got handed to them on a silver platter.
Corporate, monetized community centers with direct access to our mindshare, full ability to censor and manipulate, and direct access to our community-centric neurons. It is a dream come true for these slavers, who evoke a host of expletives in my mind.
Human beings are addicted to community social interaction. It is normally a healthy addiction. It is not any longer in service of us.
The short term solution: reduce reliance on and consumption of corporate captured social media
The long term solution: rebuild local communities, invest time in p2p technology that outperforms centralized tech
When I say "p2p" I do not mean what is currently available. Matrix, federated services, etc are not it. I am talking about going beyond even Apple in usability, and beyond BitTorrent in decentralization. I am talking about a meta-substrate so compelling to developers and so effortless to users that it makes the old ways appear archaic in their use. That is the long term vision.