Depends on exactly the situation- when I worked on a very expensive desktop application years ago I was warned about giving too much information in error messages as it might help crackers.
With how powerful Ghidra is now, I'm not sure that it matters much any more.
> Depends on exactly the situation- when I worked on a very expensive desktop application years ago I was warned about giving too much information in error messages as it might help crackers.
As a sysadmin (having spent 30 years dealing with desktop software), the attitude of the people who gave you that direction makes me seethe with anger. Crackers gonna crack. That just hurts the people who have to make the product work in their environment.
Just yesterday I completely reverse engineered several proprietary audio codecs from a game without even having to touch the static analysis tool myself.
Anyone, including attackers, can access the web interface, so don't dump debug information there. Feel free to put the errors behind an admin interface or in local files that can be retrieved.
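To make that concrete, here's a minimal sketch using only Node built-ins (the log path, port, and failing handler are illustrative placeholders, not anyone's real setup): the full error detail goes to a local file an admin can retrieve, and the client only ever sees a generic message with an opaque reference id.

```typescript
// Minimal sketch with Node built-ins; the log path and the failing handler
// are illustrative placeholders, not a real application.
import { createServer, IncomingMessage, ServerResponse } from "node:http";
import { appendFile } from "node:fs/promises";
import { randomUUID } from "node:crypto";

async function handleRequest(req: IncomingMessage, res: ServerResponse): Promise<void> {
  // Placeholder for real application logic that might fail.
  throw new Error("database connection refused: db01.internal:5432");
}

const server = createServer(async (req, res) => {
  try {
    await handleRequest(req, res);
  } catch (err) {
    // Full detail (stack trace, request info) goes only to a local log file
    // that operators can retrieve, keyed by an opaque incident id.
    const incidentId = randomUUID();
    const detail =
      `${new Date().toISOString()} ${incidentId} ${req.method} ${req.url}\n` +
      `${err instanceof Error ? err.stack : String(err)}\n`;
    await appendFile("errors.log", detail);

    // The web interface only ever shows a generic message plus the id.
    res.writeHead(500, { "Content-Type": "text/plain" });
    res.end(`Something went wrong. Reference: ${incidentId}\n`);
  }
});

server.listen(8080);
```

The reference id is what keeps support workable: a user can report it, and an operator can grep the local log for the full stack trace, without any of that detail ever crossing the public interface.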
I personally feel like _The Day The Universe Changed_ (his second documentary) is better. I love Connections but the basic thesis (there are hidden connections between disparate developments in science and technology) ends up pretty scattershot, spreading out like Brownian Motion. _tDtUC_ is much more focused. Largely based on Kuhn's _Structure of Scientific Revolutions_ for individual stories, it traces how the understanding of time in Europe changed from the middle ages to the 1980's- the idea of time as a marker of descent from a previous golden age (1), or at best a repeating cycle, evolves into our modern conception of time as endlessly improving into a better future. And the supporting book was amazing too.
I also want to speak up for the BBC history documentary team that worked with Michael Wood: _In Search of the Trojan War_, _In Search of the Dark Ages_, _The Story of England_, and _The Story of India_. They were also a staple of American PBS and informed my understanding of the world.
1: My go-to example for this is to imagine you walk into the Pantheon in 1000 AD: no one on your entire continent has known how to build a dome like that for 500 years, and no one will again for another 500 years. The fundamental way you understand the world has to be completely different from the "newer is better" baseline that we have understood the world by for the past 150 years.
> "I love Connections but the basic thesis (there are hidden connections between disparate developments in science and technology)..."
Good grief, no. The basic thesis of Connections 1 was that humanity has become fatally dependent on technology (the "technology trap" he speaks of), that that dependence keeps getting deeper and deeper, that it's hard to predict what technologies will emerge or where technology will take us- possibly utopia, but just as likely a living hell- and finally that we don't even have the option to stop digging ourselves deeper into the technology trap, because technological advancement can't be stopped: its emergence is unpredictable. Re-watch just the first and last episodes and they will terrify you.
Connections 2 and 3 were indeed scattershot because people liked Burke's charming mannerisms and didn't want to think about the ever more complex and ever more fragile panoply of technologies that individuals, even the technologists themselves, can neither understand nor control that is all that stands between humanity and its extinction.
Better still, like a well-written essay, there is closure to the series. All the loose ends from the preceding episodes are drawn together neatly in the final one.
"The point about all this technological pizazz isn't the gee-whiz high-tech stuff. It's the secondary effects of using it. Take say what this chip could do to change the pattern of work. With this you could have telecommuting, that's where you work at home from a screen and you never go into the office.
Great! No more rush hour. But what does that do to the public transportation system and the taxes it uses? Or to the car manufacturers and their workers' jobs, and the rest of the economy that depends on their output?
Or to the concept of the city itself, with its support systems and businesses. Or to the downtown properties where maybe your pension fund's invested.
Not to speak of working at home day in and day out and what that might do to a marriage. And what do you get out of work when it's only you? What would be the effect of isolating and fragmenting the community like that?"
Seconding the comment re _The Day the Universe Changed_- I found the episode on how Islamic Spain influenced the world quite surprising. I think it was the 2nd one, starting with two competing views of the world from African Roman scholars/clerics.
Many of these older documentaries wanted you to stop and think.
So Ashburn VA is a datacenter hub because the very first non-government Internet Exchange Point (IXP) anywhere in the world was there (https://en.wikipedia.org/wiki/MAE-East). Back in the 1990s something like half of all internet traffic in the world hit MAE-East. That in turn made AWS put their first region there (us-east-1 preceded eu-west-1 by 2 years and us-west-1 by 3 years). Then, because there were lots of people who knew how to build DCs- and lots of vendors who knew how to supply them- the Dulles Corridor became a major hub for lots of companies' datacenters. For AWS, because us-east-1 was the first, it's by far the most gnarly and weird- and a lot of control planes for other AWS services end up relying on it. Which is why it goes down more often than other regions, and when it does go down it makes national news, unlike, say, eu-south-2 in Spain.
But NoVA is basically the same sort of economic cluster that Paul Krugman won his Nobel Prize in Economics for studying, just for datacenters, not factories.
Well said. I'll also add that, with these networks, the sooner you can get traffic off your network the better, so there's a strong incentive to have your datacenter near these peering points. And since MAE-East was the first, it's been the largest, as it's been snowballing the longest. AOL's HQ was here, Equinix built their peering point soon after MAE-East, etc.
As for AWS, I often see it repeated that the DCs are the oldest and therefore in disrepair. That's not true; many of the first ones have since been replaced. But there are services that are located here and only here.
But I'll also add, a lot of customers default to using US-East-1 without considering others, and too many deploy in only one AZ. Part of this is AWS's fault as their new services often launch in US-East-1 and West-2 first, so customers go to East-1 to get the new features first.
Speaking as one who was with AWS for 10 years as a TAM and Well-Architected contributor, I saw a lot of customers who didn't design with much resiliency in mind, and so they get adversely affected when east-1 has an issue (whether regional or a single AZ). The other regions have their fair share of issues as well. It's not so much that east-1 necessarily fails more than the others; it's that it has so many AZs and so many workloads that people notice it more.
The underlying reason is more that, by being on the US east coast, you have roughly equal latency to customers on the US west coast and in Europe. That's a very large population covered from a single site.
If you're building a single datacenter site this is where you start building first.
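A rough back-of-envelope sketch of that reasoning (the distances below are approximate great-circle figures I'm assuming, and real routes add overhead, so treat them as lower bounds): from northern Virginia, both the US west coast and western Europe sit within roughly 40-65 ms round trip, whereas siting on the west coast pushes Europe toward ~90 ms.

```typescript
// Back-of-envelope latency estimate; distances are rough great-circle figures,
// and real fiber routes add overhead, so these are lower bounds only.
const FIBER_KM_PER_MS = 200; // light in fiber covers roughly 200 km per millisecond

function roundTripMs(distanceKm: number): number {
  return (2 * distanceKm) / FIBER_KM_PER_MS;
}

// Approximate distances (assumptions, not measured routes):
console.log("N. Virginia -> US west coast:   ", roundTripMs(3900).toFixed(0), "ms RTT"); // ~39 ms
console.log("N. Virginia -> western Europe:  ", roundTripMs(6500).toFixed(0), "ms RTT"); // ~65 ms
console.log("US west coast -> western Europe:", roundTripMs(9000).toFixed(0), "ms RTT"); // ~90 ms
```

The east coast roughly balances the two big population centers; a single west-coast site would leave European users noticeably worse off.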
I was an IC/manager for a few months. Spending all day in meetings (there are actual things you have to do to manage 15+ people) and then going home and coding for 2-3 hours every night burned me the hell out, and I left that company- good riddance to bad rubbish.
Companies above a certain scale- let's use Dunbar's Number as a good threshold- need full-time managers to handle the necessary information flow through the company. Middle manager is actually a job that AI can't do yet, because their main task is to figure out what things everyone else around them needs to know (inside and outside their team), which requires a theory of mind that current LLMs just don't have. Is this policy change worth telling your team about? Is this feature creep worth telling other teams about? That is the decision managers have to make dozens of times a day, and it requires a model of what various people know, to judge whether something is important to them or not.
No, the laws are different- and more consumer friendly in the US- so the US consumer behavior is different.
Back when credit cards were first starting out (which happened in the US), the US Congress passed a law- the Fair Credit Billing Act of 1974- under which consumers were only liable for $50 of losses as long as they reported the missing credit card within 60 days of the end of the fraudulent billing cycle. This was back when credit card purchases were all made on paper with the machine that went "kachunk" and transferred a carbon copy of your card- everything was done completely offline. That law has not changed; in fact, most banks completely waive the $50 and don't hold cardholders liable for anything reported (basically, annoying a customer over $50 isn't worth it to the bank). Thanks to the internet, suddenly cards got a lot easier to steal and a lot easier to exploit- but banks are still on the hook for all losses reported within 60 days of the end of the cycle. The result is that American banks have invested an enormous amount in real-time monitoring of credit card transactions- they care deeply, since ultimately they are on the hook- but the consumer doesn't care. This is why US cards are so much laxer from the consumer's perspective: our banks have invested far more on the back end, because the consumer is held harmless in a way they aren't with European cards.
As a totally separate issue, the EU has regulated the amount of interchange fees that card-companies can charge, but the US has not capped them. The result is that US card-holders can get significant kickbacks for using cards (especially true for the top decile of wealth), in a way that is functionally impossible with EU issued cards that have capped interchange fees. There is a big lawsuit happening now to try and allow merchants to only accept low-fee cards (the standard VISA/MC/AMEX deal requires treating all cards equally, which gives them an incentive to push people to higher interchange cards). We will see what happens with that suit, but until then, American high-spenders can have much higher rewards on their cards, which also encourages greater use of the cards- and making them have less friction than the EU versions.
> Thanks to the internet, suddenly cards got a lot easier to steal and a lot easier to exploit- but banks are still on the hook for all losses reported within 60 days of the end of the cycle.
For card-not-present transactions (i.e. online ones) the liability is on the merchant. They however also have an incentive NOT to use 3DS because it adds real friction to purchases. I'm also not sure if all USA banks even support 3DS.
This theory explains why cardholders in the US are still using cards despite these being relatively less secure than in other countries, but fails to explain why issuing banks wouldn't take steps to protect their own fraud losses, such as introducing 3DS or PINs.
The actual explanation lies in the game theory of fraud prevention; see my sibling comment for details.
Why would the law being different mean they wouldn't use 3DS, though? Surely it'd cut out a good amount of fraud along with the realtime monitoring? I understand that US consumers don't have a stake in this, but can't all the banks just agree to enforce 3DS? I can't imagine Americans are going to stop using their cards because of a small amount of added friction.
They could, but it's one of those things that really only works if everybody joins in. Because 3DS is rarely used right now, a portion of merchants don't even support it, so if you start enforcing it as a single bank, your customers will start complaining that their card doesn't work. The banking industry in the US is also more decentralized than in the EU, so getting everybody to join in simultaneously is hard.
The window of opportunity for 3DS has also more or less passed; the industry is moving on to the next generation of tech (wallets/tokenization), which should be both easier to use and more secure.
Because adding friction will deter many impulse purchases. Americans use credit cards constantly. The equilibrium would be perturbed in a way very much not advantageous for the credit card issuers if consumers became more cautious about using credit cards.
It’s the same reason credit card issuers are willing to pay Apple a few basis points to participate in Apple Pay: reducing friction has a non-linear impact on propensity to pay.
The problem with this is that now you are solely responsible for managing all of the changes, all of the variation of life. Chrome changed the shape of this API- you are responsible for finding it and updating your code. Morocco changed when their daylight savings takes effect- now you need to update your date/time handling code. There are a lot of these things that we take for granted because our libraries handle them for us, and with no dependencies you have to do all the work. Not a big deal for a double-pendulum simulator for your daughter to play with that will stop mattering next week, but it is a concern for a company trying to build something that can run indefinitely into the future.
> you are responsible for finding it and updating it.
vs the dependency broke something and now you're responsible for working around someone else's broken code.
Honestly, I've seen much more of the latter. Especially nowadays, with every single dependency thinking it's a fully fledged OS because an agent can add 1000 features/bugs in no time. Picking the right dependency, maintained by a sane maintainer, is like digging potatoes in a minefield.
As a general principle, I agree with you that large companies and teams benefit from common runtimes (i.e. libraries and frameworks).
I don't buy the notion of things breaking down over time, though. For "first-party" code that sticks to HTML and CSS standards, and Stage 4 / finished ECMAScript standards, the web is an absurdly stable platform.
It certainly used to be that we had to do all sorts of weird vendor hacks because nobody agreed on anything- supporting IE6 and 7 was a nightmare, and BlackBerry's browser was awful- but those days are largely behind us unless you're doing some cutting-edge, Chrome-only, early-days proposed stuff, or a browser-specific extension, or something else that isn't a polished standard.
Even with timezone changes, you're better off using the system's information with Intl.DateTimeFormat.
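As a small sketch of that approach (the zones and the example instant below are just illustrative), Intl.DateTimeFormat pulls timezone and DST rules from the runtime's own tz database, so a rule change like Morocco's becomes the platform's problem rather than yours:

```typescript
// Format the same instant in several zones using the runtime's tz database.
const instant = new Date("2024-06-15T12:00:00Z"); // arbitrary example instant

for (const timeZone of ["America/New_York", "Africa/Casablanca", "Asia/Tokyo"]) {
  const formatted = new Intl.DateTimeFormat("en-US", {
    timeZone,
    dateStyle: "medium",
    timeStyle: "long",
  }).format(instant);
  console.log(`${timeZone}: ${formatted}`);
}
```

Keeping the browser or OS updated then carries the rule changes for you, which is exactly the trade the parent comments are weighing.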
I don't know where the fear of breaking changes in deps comes from, but most good projects try to keep their APIs stable. Even with fast-evolving platforms like the Android and iOS SDKs.
In the Python ecosystem, making software with reproducibility in mind was a thing before the advent of uv. Some earlier options include Pipenv and Poetry. I used Pipenv already some six years ago to achieve that, and later switched to Poetry.
I think devs who didn't care back then also won't care in the future, and will still be running around with a requirements.txt file in 10 years.
In companies, though, you often wind up with three+ massive dependency trees in your software to handle the same problem because people went and added the new hotness without deprecating the old stuff. You also find dependencies that are much heavier than necessary for the actual task at hand because the software developer was also solving the problem of needing that dependency on their resume. And then there's just the relatively tiny dependencies for fairly solved problems, like leftpad, which don't really require deps, and you can accept the maintenance burden, because not everything is an abstraction layer over chrome.
So if you just need to do something simple like fire off a compute heavy background task and then get a result when it is done, you should probably just roll your own implementation on top of the threading API in your language. That'll probably be very stable. You don't need a massive background task orchestration framework.
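As a hedged sketch of what "roll your own on top of the threading API" can look like in a Node/TypeScript codebase (the fibonacci workload and the function name are placeholders I'm assuming, not anyone's real task):

```typescript
// Minimal roll-your-own background task using only node:worker_threads.
import { Worker } from "node:worker_threads";

// Run a compute-heavy job on a separate thread; resolve with its result.
function runInBackground(n: number): Promise<number> {
  const workerSource = `
    const { parentPort, workerData } = require("node:worker_threads");
    function fib(k) { return k < 2 ? k : fib(k - 1) + fib(k - 2); }
    parentPort.postMessage(fib(workerData));
  `;
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerSource, { eval: true, workerData: n });
    worker.once("message", resolve);
    worker.once("error", reject);
    worker.once("exit", (code) => {
      if (code !== 0) reject(new Error(`worker exited with code ${code}`));
    });
  });
}

// Fire it off, keep the main thread free, await the result when you need it.
const pending = runInBackground(35);
console.log("main thread keeps serving requests...");
pending.then((result) => console.log("result:", result));
```

A couple dozen lines like this, living in your own repo where you can read and debug all of it, is often easier to keep stable over the years than a full task-orchestration framework.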
People might object that the frameworks will handle edge cases that you've never thought of, but I've actually found in enterprise settings that the small custom implementations- if you actually keep them small and focused- can cover more of the edge cases. And the big frameworks often engineer their own brittle edge cases due to concerns that you just don't have.
So anyway, it isn't as simple as "dependencies are bad" or "dependencies are good"; every dependency has a cost/benefit analysis that needs to go along with it. And in an enterprise, I'd argue that if you audit the existing dependencies you will find way too many that should be removed or consolidated, because they were added for the speed of initial delivery and greenfielding. Eventually, when you accumulate way too many of those dependencies, the exposure to the supply chains, the need to keep them updated, the need to track CVEs in those deps, and the need to fix code to use updated versions of those dependencies, along with not having the direct ability to bugfix them, all combine to produce an ongoing tax of either continual maintenance or tech debt that will eventually bite you hard.
> The problem with this is now you are solely responsible for managing all of the changes
We seem to greatly overestimate the amount of code needed to do something.
For example, there are billions of lines of code between me pressing a key and you seeing what I wrote. But if we were to make a special program that communicates via IPv6 and ICMP, written for a hazard3 pico2350 with a wiz5500 ethernet breakout, the whole thing- including the C compiler to compile your code (which could very well outperform gcc -O3), RA, barebones SPI drivers, and a small preemptive OS- would be 5-6k lines of code.
So, it is not unreasonable to manage all of those changes.
I think we are stuck with LLMs. They are already at a place where they can find these issues in the first place. They can access RSS feeds. You could cron an agent to check whether you are pwned, as frequently as you want, at almost zero cost. When you do ingest libraries, keep a list of which versions you use, and that can help as well.
The US Government made ASML dominant when it allowed it to acquire the US company Cymer, Inc.- at the time, the best in the world at EUV. Merging Cymer's EUV work with ASML's meticulous perfection and delivery of the entire rest of the system is what made them the only vendor that matters for semiconductor manufacturers.
This acquisition is also what gives the US Government the ability to veto customers of ASML even today- this is why Chinese semiconductor manufacturing is so far behind, because the USG controls who can access ASML's EUV work.
Over the past 17 years I've lived in three houses (in the suburbs of two different cities in two different states- one East Coast, one land-locked) and an apartment in NYC (obviously also East Coast). In all of the East Coast spots (urban and suburban) there was a mosque closer than the nearest McDonald's. For the land-locked state suburb the mosque was 2 miles away and the nearest McDonald's was 0.75 miles away.
I'm not selecting these houses to be convenient to the mosque- I've never been in any of those mosques. It's just an artifact of living in the sort of neighborhoods that I like. I tend to agree that it isn't urban/rural per se, as much as it's Openness from the Big Five personality traits. Which, at least in the US, tends to be correlated with a lot of other things (college education, density of living, etc.).
IRT the "college education", Collin County is statistically higher educated than most of the country demographically-speaking. >56% achieved bachelors or higher compared to NYC at ~42%. For reference, Santa Clara County in California is also at 56%, so about as educated as the area with Apple, Google, and Facebook at least as far as that statistic analyzes.
It turns out there is extensive research on this, and you are mistaken. Most politicians actually do try to deliver on their promises. They might get stopped, but they try.
Most of the research on this was done before Trump entered office. Trump is a wildly unusual political leader, who is significantly more corrupt than other politicians, promises random things and then fails to deliver them, and generally breaks all of the rules that politicians follow- this is what his supporters describe as his "authenticity", that he "tells it like it is". The more people believe, incorrectly, that "all politicians are corrupt" and "no politicians deliver on their promises" the more likely they are to accept Trump- who again is an extreme outlier among American politicians.
Your cynicism actually ends up ruining the country and makes it more likely that we have bad government.
The reason we have bad government is solely that we're stuck in two silos.

Want real change? Vote third party. The problem is the same as that red/blue button thought experiment recently posted to HN: one of the hardest things to do is to get 50% of people to agree with you, so everyone keeps hitting the red button (voting D or R) and nothing changes.
When the term was originally coined (circa 1950, around the Korean War), the First World was the US-aligned bloc of countries, the Second World was the USSR-aligned bloc, and the Third World was all of the countries not part of either. Egypt, India, Yugoslavia, Ghana, and Indonesia viewed themselves as leaders of that broader political movement during the 1960s and 1970s.
Even into the 1960s there were few industrialized nations outside of those two main blocs, so "Third World" quickly lost its explicitly political meaning and became more a description of the level of capital investment and worker productivity.