This was probably partly a Google refresh token theft (given how long the access lasted). No inside info, just looking at how the attack occurred.
OAuth 2.1[0] (a draft that has been around longer than I've been at my employer) recommends some protections for refresh tokens: either making them sender-constrained (tied to the client application by public/private key cryptography) or one-time use, with revocation if a token is presented more than once.
This is recommended for public clients, but I think it makes sense for all clients.
The first option is more difficult to implement, but is similar to the IP address solution you suggest, and more robust.
The second option would have made this attack more difficult: the refresh token held by the legit client, context.ai, would have stopped working, presumably prompting someone to look into why and to wonder whether the tokens had been stolen.
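The rotation-plus-reuse-detection behavior described above can be sketched in a few lines. This is a minimal illustration, not any real OAuth library's API; `TokenStore` and its method names are made up for the example:

```python
import secrets

class TokenStore:
    """Sketch of one-time-use refresh tokens with replay detection.
    Illustrative only; a real server would persist this state."""

    def __init__(self):
        self.active = {}       # currently-valid refresh token -> session id
        self.seen = {}         # every token ever issued -> session id
        self.revoked = set()   # sessions killed after a detected replay

    def issue(self, session_id):
        token = secrets.token_urlsafe(32)
        self.active[token] = session_id
        self.seen[token] = session_id
        return token

    def refresh(self, token):
        session_id = self.active.pop(token, None)
        if session_id is not None and session_id not in self.revoked:
            # Normal rotation: the old token is now dead, a new one is issued.
            return self.issue(session_id)
        # Replay of an already-rotated token: revoke the whole session,
        # so both the attacker and the legitimate client stop working.
        replayed_session = self.seen.get(token)
        if replayed_session is not None:
            self.revoked.add(replayed_session)
            self.active = {t: s for t, s in self.active.items()
                           if s != replayed_session}
        return None
```

The point of killing the whole session on replay is exactly the signal mentioned above: the legitimate client's next refresh fails too, which is what prompts someone to investigate.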
> most software development is not "real engineering".
Most software development doesn't have anywhere near the real world impact of the Boeing/NASA engineering you reference.
Good engineering practice recognizes the risks and scales the effort to match it.
A CRUD app for internal users has a different set of requirements than a revenue generating SaaS app, just like a backyard fence has different building criteria than a highway bridge.
Sure, I understand the stakes are lower for blog plugins than for aircraft.
But being a professional means you do the thing even when the stakes are low. You don't decide to cut corners because you feel like it, or because it's more profitable. Mullenweg is not professional.
That's not what being a professional means at all.
You adjust your approach depending on the stakes. That shouldn't be a controversial take.
You're using "cutting corners" as a pejorative, but ultimately if the stakes are low, you may -- perfectly reasonably -- decide to allocate less time/resources to particular activities, and more to others. You can call that "cutting corners", and you'd be right, but there's nothing necessarily wrong about that: it depends on the circumstances. And there's certainly nothing "unprofessional" about it.
For the mostly-vibe-coded script to reencode a bunch of my own video files to save disk space, I skimmed the result to make sure that it wasn't going to overwrite or delete anything it shouldn't. Cutting corners? Absolutely. Perfectly fine and sufficient? Absolutely.
For the software that I write that I intend to distribute to others, that could cause data loss or other unpleasant problems for them if I get it wrong, I write the code myself, I understand how it works, and I might write tests and/or get someone else to review it, depending on my own judgment of what needs to be done.
Recognizing the difference between the situations in the prior two paragraphs is what it means to be a professional.
Sure, but in this case, the engineering consideration was whether a specific plugin should be added to the list of other suggested plugins. It was literally just a business decision of whether to configure it to be one of the featured options users might want to install.
What does cutting corners have to do with the topic at hand? The situation isn't about devs getting the time to do something right; it's about programmers making a non-engineering decision that was overruled by the business in the business's best interest. That's perfectly reasonable.
> But being a professional means you do the thing even when the stakes are low.
Not the way I understand "being a professional." All engineering, and all professions, entail the balancing of interests. There are some hard and fast rules*, like "don't do things that will kill your users." And there are some other things that are more guidelines than absolutes, such as "we don't ship feature changes in release candidates." Serious organizations understand that sometimes guidelines like the latter need to be violated for overriding business purposes.
*Even the "don't kill your users" thing is not an absolute. No car is perfectly safe, for example. We could add three more feet of crumple zone to the front and the back, but we don't, because even in safety, tradeoffs have to be considered.
FWIW, I think the article "Passkeys: they're not perfect but they're getting better" at the NCSC web site is a pretty fair assessment of the current state of things.
I certainly understand and appreciate the benefits of key-based authentication: been using ssh keys for decades, wouldn't go back to password auth in that context for anything.
But I don't really see passkeys, in the much wider context of web authentication for the broadest possible audience, as having all the kinks worked out yet.
> Anyone got something helpful to share in that regard?
If you can afford it, go work in a non-software domain for a while (maybe a year or two). You'll see all kinds of problems wherever you are, and you'll learn about the domain and other solutions.
Make a note of the ones connected to revenue.
Leave and go start your own thing, selling it back to wherever you worked (first customer problem solved).
Of course, it's not that easy, but I think digging in deep to a non-software domain is a great way to learn how to build a solo business.
I'm at this step. I joined a company in March and I just hate it. I do not see myself doing this for two years (they're behind on a lot of technology, and we have government customers that just do whatever. It's slow and boring. Some say it's amazing, but you're losing your precious time sitting behind a screen.)
I'm genuinely thinking of picking up a skill and then applying it to a new job. The pay is going to suck, though, but I'm not sure I can spend my time behind a monitor doing things I hate.
There are plenty of other ways to access the Anthropic models, e.g. OpenRouter, which will automatically route to Anthropic or Bedrock based on availability and latency.
I think it is part of the growing digital sovereignty trend (the country-based one, not the self-sovereign identity one).