> But it is just so tiresome that Google never consults their community. There was no advance notice of this change that I could find. Just a bunch of frustrated users in my inbox blaming me for breaking something.
I get it. This unequivocally sucks. It's a clear loss of functionality for a group of people who are educated about the advantages and disadvantages of embedded EXIF data. But I don't honestly think Google could have consulted their community. It's just too big. So when the author says:
> Because Google run an anticompetitive monopoly on their dominant mobile operating system.
I don't think the problem here is that Google is anticompetitive (though that's a problem in other areas). I think they're simply too big to consult with any meaningful percentage of their 1 billion customers (or however many Android users are out there). They may also feel it's impossible to educate their users about the benefits and dangers of embedded location information (just thinking about myself personally, I'm certain that I'd struggle to convey the nuances of embedded location data to my parents).
I will note that Google Photos seems to happily let you add images to shared albums with embedded location information. I can't recall if you get any privacy-related warnings or notices.
> But I don't honestly think Google could have consulted their community. It's just too big.
The thing is, they frequently do. They have developer relations people, they publish blog posts about breaking changes, they work with W3C and other standards bodies, they reply on bug trackers.
But, in this case, nothing. Just a unilateral change with no communication. Not even a blog post saying "As of April, this functionality is deprecated."
I recognize that this is annoying from a user perspective, but I do understand it. Not all bugs are easily reproducible (and even if they are 100% reproducible for the user, it's not always so easy for the developers). Also, sometimes you make a change to the code in what you think might be a related area, and so the most "efficient" thing is just to ask the user to re-test.
When I close an old bug that is not actionable, I do feel bad about it. But keeping the bug open when realistically I can't really do anything with it might be worse.
Back in another part of my career I worked a lot on putting Macs on Active Directory. And there was a common refrain from Apple about bugs in that implementation: "works on 17!".
The joke is that Apple owns the 17.x.x.x class-A range on the Internet (they got in early; they also have a class-B and used to have a second one that they gave back), and what engineers were really saying is that they could not reproduce the bug on the AD systems that Apple had set up (lots of times it was because AD had been set up with a .local domain, a real no-no, but it was in Microsoft's training materials as an example at the time...).
I used to think that there is no harm in keeping the bug open. I think if you honestly feel that you have the time and resources to go back to the bug and fix it, then by all means keep it open.
But I find that sometimes I can tell from experience that the IR is not actionable and that it will never be fixed. Some examples:
* There's not enough info to reproduce the issue and the user either can't or won't be able to reproduce it themselves. Intermittent bugs generally fall into this category.
* The bug was filed against some version of the software that's no longer in production (think of the cloud context where the backend service has been upgraded to a newer version).
Sometimes the cost to investigate a bug is so high relative to the pain caused that it just gets closed as WONTFIX. These sometimes suck the most because they are often legitimate bugs with possible fixes, but they will never be prioritized high enough to get fixed.
Or sometimes the bug is only reproducible using some proprietary data that I don't have access to, and so I sometimes have no choice but to ask the bug filer "can you still reproduce this?".
Computer systems are complicated. And real-world systems consisting of multiple computer systems are even more complicated.
I think asking someone if they can still reproduce an issue is valid. Especially if it was trivially reproducible for them, and now it isn't, that seems like a fine resolution, and the bug should be closed.
But in the other cases, closing the bug seems to me to be a way to perturb metrics. It might be true that you'll never fix a given bug, but shouldn't there be a record of the "known defects", or "errata" as some call them?
For your specific scenarios:
- Lack of information on how to reproduce or resolve a bug doesn't mean it doesn't exist, just that it's not well understood.
- For the "new version" claim, I've seen literal complete rewrites contain the same defects as the previous version. IMHO the author of the new version needs to confirm that the bug is fixed (and how/why it was fixed)
- I agree there are high cost bugs that nobody has resources to fix, but again, that doesn't mean they don't exist (important for errata)
- Similarly with proprietary data, if you aren't allowed to access it, but it still triggers the bug, then the defect exists
In general my philosophy is to treat the existence of open bugs as the authoritative record of known issues. Yes, some of them will never be solved. But having them in the record is important in and of itself.
> It might be true that you'll never fix a given bug, but shouldn't there be a record of the "known defects", or "errata" as some call them?
Yes, fully agreed. But closing a bug doesn't preclude that. A closed bug isn't a refutation or denial of a defect. It's just an indication that there is no plan to fix the bug. Not every bug system works like this, though. Mine does, and I should have more clearly described what a "closed bug" means in my earlier posts.
Bug reports are not known defects; at any kind of scale, half of them will be already fixed, misunderstandings, bad data in, or related to an unusual setup.
Closing the bug is a way of saying: sorry this doesn’t look too important and we don’t have time to look at this given the other more important things (bugs/features) we plan to work on.
If it’s closed as stale after 6-12 months (multiple humans will have seen it) OR triaged by a human and marked as won’t fix I think that’s reasonable.
> at any kind of scale half of them will be already fixed, misunderstandings, bad data in,
Here you're referring to a class of bug reports that's uninteresting for this discussion, because they're invalid (i.e. they don't represent an actual bug). We're talking about valid bugs that have not been fixed.
> or related to an unusual setup
Unusual, but ostensibly supported? Then there exists a bug.
What is the use in keeping it open when no one will ever look at it again after it goes stale? It still exists in the system if you ever wanted to find it again or if someone reports the same issue again. But after a certain time without reconfirming the bug exists, there is no point investigating because you will never know if you just haven't found it yet or if it was fixed already.
See my reply to eminence32 - bug tracking serves as a list of known defects, not as a list of work the engineers are going to do this [day/month/year].
The primary purpose is not usually a list of known defects and many ‘bugs’ are not actually bugs but feature requests or misunderstandings from users (e.g. RFC disallows the data you want my html parser to allow).
The people who filed them would disagree and many would vehemently argue that their bug is in fact a bug, and is the most important bug and how dare you close it.
Conversely, what's the harm in closing the bug? (As long as you don't lock or delete it, I agree that's bad.)
People focused on the work often interpret "open" to mean "requires work" and "closed" to mean "no planned work" in which case keeping an unreproducible bug open is dishonest because it falsely implies that someone might continue to work on it.
Whereas people focused on the problem often interpret "open" to mean "not fixed" and "closed" to mean "fixed" in which case closing an unreproducible bug is dishonest because it falsely implies that it's no longer a problem.
Neither seems right or wrong as long as everyone on the project agrees which interpretation you're using.
How is that worse? Leaving it open signals to anyone searching about it that it's still an issue of concern. It will show up in filters for active bugs, etc. Closing it without fixing it just obfuscates the situation. It costs nothing (except pride?) to leave "Issues (1)" if there is indeed an Issue.
To some people "open" means "not fixed" whereas to others it means "more work planned". I've worked on projects with both interpretations and it's fine as long as everyone is on the same page.
> It costs nothing (except pride?) to leave "Issues (1)" if there is indeed an Issue.
In our case we omit bugs we couldn't reproduce from the issues list due to practicality, not pride -- our software has tens of thousands of unreproducible bugs and having them show up in reports would drown out planned work.
And it's not like anyone deleted or locked the unreproducible bugs, they are either tracked as "open but unreproducible" or "closed because unreproducible". Either way they're still in the database in case more information comes along, but still filtered out of the vast majority of dashboards.
Apple did not say they couldn't reproduce it. Neither did they say that they thought they fixed it. They refused to say anything except "Verify with macOS 26.4 beta 4".
> and even if they are 100% reproducible for the user, it's not always so easy for the developers
It's not easy for the user! Like I said in the blog post, I don't usually run the betas, so it would have been an ordeal to install macOS 26.4 beta 4 just to test this one bug. If anything, it's easier for Apple to test when they're developing the beta.
> the most "efficient" thing is just to ask the user to re-test.
Efficient from Apple's perspective, but grossly inefficient from the bug reporter's perspective.
> realistically I can't really do anything with it
In this case, I provided Apple with a sample Xcode project and explicit steps to reproduce. So realistically, they could have tried that.
I suspect that your underlying assumption is incorrect: I don't think Apple did anything with my bug report. This is not the first time Apple has asked me to "verify" an unfixed bug in a beta version. This seems to be a perfunctory thing they do before certain significant OS releases, to clear out some older bug reports. Maybe they want to focus now on macOS 27 for WWDC and pretend that there are no outstanding issues remaining. I don't know exactly what's going through their corporate minds, but what spurred me to blog about it is that they keep doing this same shit.
I don't work at Apple, so I can't comment on that. But that doesn't always help. There have been plenty of times where I have a full HAR file from the user and can clearly see that something went wrong, but that doesn't always mean I can reproduce the issue. (I recognize a HAR file doesn't represent the complete state of the world, but it's often one of the best things a backend developer can get.)
It always helps. Even if you can't determine the root cause, you can at least add an extra assertion check or logging statement at that point, so that the next time the bug gets triggered you'll get more useful diagnostic data and can get a step closer. Iterate until you find the root cause.
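As a sketch of that iterate-with-logging approach (the function name and the invariant here are hypothetical, purely for illustration, not from any real codebase):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("checkout")

# Hypothetical handler where a HAR capture showed an impossible value.
# We can't reproduce the bug locally, so instead of guessing we add an
# invariant check plus enough context logging that the next production
# occurrence tells us exactly what state triggered it.
def apply_discount(total_cents: int, discount_cents: int) -> int:
    new_total = total_cents - discount_cents
    if new_total < 0:
        # Don't crash the request; record the state we'd need to debug it.
        log.error(
            "invariant violated: negative total (total=%d, discount=%d)",
            total_cents,
            discount_cents,
        )
        new_total = 0
    return new_total
```

Each time the log fires, you learn one more fact about the failing state, tighten the check or add another, and repeat until the root cause falls out.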
That’s easy enough. The hard part is doing so without capturing a bunch of email, messages, and other private data that happens to be in memory at the time.
Ignorant question, if privacy didn’t matter and they had an atomically identical machine, would there still be plenty of edge cases where it was the printer or the Wi-Fi causing the issue?
In any case I would have said it sounds difficult on every front
I should be more precise. Capturing the system state isn’t too hard. Turning that into a reproducer may be quite hard, because of things like you say. There are certainly a lot of bugs that such a capture would make easier to figure out, but it wouldn’t be a panacea.
> Generate better visuals with more accurate, legible text directly in the image in multiple languages
Assuming that this new model works as advertised, it's interesting to me that it took this long to get an image generation model that can reliably generate text. Why is text generation in images so hard?
It’s not necessarily harder than other aspects. However:
- It requires an AI that actually understands English, i.e. an LLM. Older, diffusion-only models were naturally terrible at that, because they weren't trained on it.
- It requires the AI to make no mistakes in image rendering, and that's a high bar. Mistakes in image generation are so common we have memes about them, and for all that hands generally work fine now, the rest of the picture is full of mistakes you can't tell are mistakes. With text, that's entirely impossible: every mistake is immediately visible.
Nano Banana Pro seems to somewhat reliably produce entire pictures without any mistakes at all.
As a complete layman, it seems obvious that it should be hard? Like, text is a type of graphic that needs to be coherent both in its detail and its large structure, and there’s a very small amount of variation that we don’t immediately notice as strange or flat out incorrect. That’s not true of most types of imagery.
This is obviously just a research project, but I do wonder about the next steps:
* After exploring and learning about a virtual world, can anything at all be transferred to an agent operating in the real world? Or would an agent operating in the real world have to be trained exclusively or partially in the real world?
* These virtual worlds are obviously limited in a lot of important ways (for example, character locomotion in a game is absolutely nothing like how a multi-limbed robot moves). Will there eventually need to be more sophisticated virtual worlds that more closely mirror our real world?
* Google seems clearly interested in generalized agents and AGI, but I'm actually somewhat interested in AI agents in video games too. Many video games have companion NPCs that you can sort of give tasks to, but in almost all cases, the companion NPCs are nearly uncontrollable and very limited in what they can actually do.
The end goal is to marry the lessons learned about HOW to learn in a virtual world with a high fidelity world model that's currently out of reach for this generation of AI. In a year or two once we have a world model that's realistic enough and fast enough, robots will be trained there and then (hopefully) generalize easily to the real world. This is groundwork trying to understand how to do that without having the models required to do it for real.
> Coupons shall be accepted for eligible foods at the same prices and on the same terms and conditions applicable to cash purchases of the same foods at the same store except that tax shall not be charged on eligible foods purchased with coupons.
Obviously lawyers are going to be involved in interpreting these rules. But as a lay person, I wonder: a store cannot charge a different price to someone who is buying food with a SNAP coupon. But for someone who has SNAP benefits, but isn't using a SNAP coupon to buy food (you know... because the government isn't funding the program), do these rules still apply?
> But for someone who has SNAP benefits, but isn't using a SNAP coupon to buy food (you know... because the government isn't funding the program), do these rules still apply?
The lawyer that does pro bono work for the food bank I volunteer with believes this rule doesn't apply in this case for exactly that reason. But the legal challenges will take time and the threat is effective now regardless so it barely matters.
Banning from the EBT program, civil fines, and, potentially, criminal charges (I’m not sure you can wedge discounts for SNAP beneficiaries into any of the criminal violation categories though it seems like if you did it for cash purchases by SNAP beneficiaries as a way of trying to help deal with the suspension or reduction of SNAP benefits, it could probably be argued to be indirect trafficking—using SNAP payments to subsidize other inventory—which is criminal.)
They don't need a good legal argument to just have border patrol ransack the place and stuff a bunch of customers & employees in detention for a week. We haven't seen (afaik) this sort of vindictive not-even-pretending use of ICE/BP yet but it's right around the corner.
Lots of anti-kickback / fraud regulations don't require an immediate exchange in order for it to still be a violation, just that you are getting special treatment for having / being part of the group covered by the regulation. Walgreens will get in trouble with the feds for giving you a $10 gift card to switch your prescriptions over to them as a Medicare patient, regardless of whether or not you actually have Medicare part D or whether you actually have Walgreens fill your scripts. It's likely going to be a similar thing here.
I remember vividly a childhood experience when a car I was in ran over a dog -- it ran toward the side of the car and went under the rear wheels. I'm not sure there's any reaction time (human or otherwise) that would have prevented that from happening.
> just how Apple cannot block their developer tools from being able to transfer apps onto an iPhone.
If I recall correctly (I might be wrong, because this was 10+ years ago), Apple did exactly this when the iPhone was first released. When the iPhone first came out, Apple released its Xcode devtools for free, including an iOS simulator that you could use to test your iPhone app. But you had to pay a $99 USD per year "developer program" fee in order to use the devtools to test the app on your physical device.
If Google is also preventing you from loading your own software onto your own phone with adb unless you pay a fee, then this would be a very important thing to call out explicitly.
You recall correctly, but that did end in 2015, when Apple ended the requirement that developers sign up for their paid developer program to be able to develop and test iPhone apps. I've written about that elsewhere: https://appfair.org/blog/gpl-and-the-app-stores#fn:3
The adb workaround for Android is essentially on par with being able to use Xcode's tooling to install apps on an iPhone: technically possible without paying a fee, but with enough friction that no one would seriously consider it as an alternative for publishing their apps to a general audience.
> The adb workaround for Android is essentially on par with being able to use Xcode's tooling to install apps on an iPhone
The Apple situation is still significantly worse than ADB, because (at least without a paid-for developer account) AFAIK you're limited to a certain number of in-development apps that you can install simultaneously, and you definitely need to reinstall them every few days. ADB currently has no such restrictions.
Apple has actually increased the friction since: you now have to enable a scare-screened developer mode, reboot your device, install the app, get an error that the app is untrusted, then go to the part of Settings used for corporate management profiles to enable your own developer profile, and only THEN will the app actually launch and run.
I really enjoy Helix, but I can't install it everywhere, and so I end up going back and forth between Helix and Vim depending on what machine I'm working on. This puts a lot of pressure on my muscle memory -- I often type a vim keybinding while in Helix or a Helix keybinding while in vim.