> The user agent is the proper channel for the agency Jake is seeking here. There's nothing preventing the user & their user agent from negotiating what model they use.
This isn't how it works. As the developer, you use the system prompt to set a particular personality for the chat bot. E.g., when you use an LLM in VSCode, it comes with a system prompt to make it an effective code assistant.
Now, in VSCode, you can select a different model, which is maybe where your misconception comes from. But when you select a different model, it will also use a different system prompt, designed to achieve the same personality, but tailored for that particular model.
Once you figure out why they do that, you'll understand why your position here doesn't make sense.
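The per-model tailoring described above can be sketched roughly like this (the model names and prompt wording here are invented for illustration, not taken from any real product):

```javascript
// Hypothetical sketch: one assistant "personality", with the system prompt
// tailored per model. Model names and prompt text are made up.
const SYSTEM_PROMPTS = {
  "model-a": "You are a concise, expert code assistant. Answer with code first.",
  "model-b": "<|system|>You are a code assistant. Prefer short answers with code.",
};

// Fall back to a generic prompt for models we haven't tuned for.
function systemPromptFor(model) {
  return SYSTEM_PROMPTS[model] ?? "You are a helpful code assistant.";
}
```

The point being: swapping the model also swaps the prompt, so the developer-facing personality stays roughly constant across models.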
I guess some people don't have access to their system prompt, but that's never been the case for me?
I believe at $LASTJOB even Azure's web GUI let you meddle with that. This isn't cast quite so deeply in stone.
That said, I do think it's ok for the app developer to ask for British English. My overall concern is more that the expectations around model behavior feel very tight & narrow throughout this discussion, and I don't think that level of well-defined rigor, if applied, will ever let LLMs onto the web. I think sites see that, and I think users see that. Making perfect the enemy of the good is going to doom any attempt, forever, and no progress will be possible if these are the demands. We have to be willing to accept that models are limited, especially small local ones. That's ok.
You're a politician. The sentiment in this cultural context right now leans anti, and so do your statements overall: if we look at this one and the rest and tally each as positive or negative. Underneath, you are more anti-AI than neutral. So your reply may have been technically true, but it was deliberately misleading.
But you haven't really made a technical argument because your objection is not really technical. It's a type of politics.
It's obviously extremely useful to have a simple API for accessing an LLM. It needs permissions, like most things, along with the ability to limit download sizes and, if desired, to block the use of external services.
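A permission gate like that could look something like this sketch; every name here is invented for illustration, not from any actual spec or proposal:

```javascript
// Hypothetical permission check for an on-device model download.
// None of these field names come from the real Prompt API proposal.
function mayDownloadModel(policy, request) {
  if (!policy.allowModelDownloads) return false;
  if (request.sizeBytes > policy.maxDownloadBytes) return false;
  if (request.usesExternalService && !policy.allowExternalServices) return false;
  return true;
}
```

The browser (not the site) would own the policy object, which is the whole point: the user agent mediates between the page's request and the user's limits.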
But anyway people will just fall back to a slightly worse alternative like a wrapper around WebLLM (that wraps WebGPU).
It's probably not politically feasible for you to take a different stance anyway.
On interoperability, time will tell I guess. I've only been working on Firefox for a few months, but general interop issues are way worse than I realised when we worked together at Chrome. Firefox frequently gets bug reports for not behaving like Chrome, even when Firefox is complying with the spec, and Chrome is not. We end up having to just behave like Chrome.
On developer signals… I'm sure there's better evidence of positive sentiment than Chrome provided, but there's a lot of negative sentiment too. I think it would be fair to call the developer signal "mixed", or maybe even "polarised".
Meaning you do not want text generation in the web API at all, or you think the prompt API needs to be different? And if so can you give one sentence on how it should change?
If you glance at that then you may see that I am for the idea of leaner alternatives to the current web platform.
But in the context of the existing web API, which has just about everything and the whole kitchen sink in it (hundreds of sub-APIs), I do not think it will really help anyone at this point to just stop adding features, especially major ones.
The web is basically an overlay operating system and has been for many years.
> Meaning you do not want text generation in the web API at all, or you think the prompt API needs to be different?
Not OP but I think you are misunderstanding the interaction as a whole here. The Chromium team made a proposal, then the Chromium team asked the Firefox team for a position on the proposal. Whether or not the Firefox team or anyone on the Firefox team has any goals around AI or whatever, this response was simply "We do not like this proposal for these reasons..."
How to fix those issues really isn't the Firefox team's job and also wasn't part of the question asked by the Chromium team.
You didn't read my comment carefully enough. It was not about AI in general. It was about the text generation API. And it is perfectly reasonable to ask if he wants to reject the feature entirely or if he can give a one sentence overview of how it might be fixed.
There are a lot of people reading his position. One or two additional clarifying sentences to spell it out for people skimming is not such an unreasonable ask.
> There are a lot of people reading his position. One or two additional clarifying sentences to spell it out for people skimming is not such an unreasonable ask.
I do think it is a bit unwarranted, actually. This isn't a press release, it's a technical discussion somewhat deep into a technical process that's open for archival purposes. His audience is not people skimming through, it's the Chromium team and other members of the standards body.
You're sort of overhearing a conversation and injecting yourself into it.
And you're injecting yourself too, objecting to me even discussing this on HN.
And this is not really a technical issue. It's a worldview issue no matter how much you or others try to pretend it's a technical problem or that I am violating etiquette or something.
> And this is not really a technical issue. It's a worldview issue no matter how much you or others try to pretend it's a technical problem or that I am violating etiquette or something.
I'm actually so curious what you think is going on here
Aww thanks! To be fair, I didn't toe the party line when I was at Google (imo). Although that caused me an increasing amount of grief internally, until I left. From what I hear, things have gotten exponentially worse in that regard for folks still on the team.
I've been feeling more optimistic about Mozilla recently than I had in years, since their language in communication seems to have shifted from a Stepford-ish tone of corporate speak to something that feels more authentic and closer to their roots. I don't know if it's the new CEO, or a general cultural shift. (Or just me projecting from little intangible bits of evidence to something I hope for!)
Hearing about a positive personnel shift like this now gives me a bunch more optimism. I really hope I can go back to the days of unambiguously being in support of Mozilla and their many awesome efforts, without always having to be a bit dubious about their next (mis)step.
Hey, Jake, not related to your post, but I just want to say that HTTP203 was some of the best web dev content I've ever consumed. An amazing mix of humour and tech discussion. Thank you!
Aww thanks for saying that! I've been doing little videos on https://www.youtube.com/@FirefoxWebDevs (and accounts of the same name, pretty much everywhere). Although they're designed to be short, so they're pretty different to HTTP203.
When I posted this, I linked to the latest statement https://github.com/mozilla/standards-positions/issues/1213#i..., which is the content relevant to the title (the details of our opposition to the API). Unfortunately someone removed the link to the specific post.
The Chrome model requires either "16 GB of RAM or more and 4 CPU cores or more" or "Strictly more than 4 GB of VRAM", and "22 GB of free space" (it uses around 4.4GB but it doesn't want to use the remaining free space).
The model is pretty slow on my M4 Pro mac.
The API allows the browser to use a cloud service instead, but then privacy is lower. So, more privacy for the rich.
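Chrome's stated minimums from above can be expressed as a quick eligibility check (a sketch; the field names are mine, and the thresholds are just the numbers quoted above):

```javascript
// Rough check against Chrome's published minimums for the on-device model:
// either (16 GB+ RAM and 4+ CPU cores) or (strictly more than 4 GB VRAM),
// plus 22 GB of free disk space in both cases.
const GB = 1024 ** 3;

function meetsChromeModelMinimums({ ramBytes, cpuCores, vramBytes, freeDiskBytes }) {
  const cpuPath = ramBytes >= 16 * GB && cpuCores >= 4;
  const gpuPath = vramBytes > 4 * GB;
  return (cpuPath || gpuPath) && freeDiskBytes >= 22 * GB;
}
```

Which makes the "more privacy for the rich" point concrete: a machine failing this check gets the cloud fallback, or nothing.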