So the purpose of the website wasn't clear to you; you then figured it out and decided to write an article about it, but didn't explain to the reader what the website actually is?
They're just using Google for easy user onboarding, nothing to get all up in arms about. Websim is trying to sell plans for more than 30 generations a day.
It makes some sense to show a "register with Google" button, but clicking the main UI element and being redirected to a Google sign-in screen is bad form.
Also weird that apparently you can sign in via Discord, but you can only sign up via Google?
I find the term 'bullshitting' less fitting, as it seems to anthropomorphize LLMs in a way that attributes agency to them that they lack. As in, someone who bullshits presumably does it for some personal gain, which doesn't seem like something an LLM is capable of at the moment.
It might (currently) be most apt to characterize these occurrences as shifts out of the training data distribution.
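A toy illustration of what such a shift looks like (plain curve fitting, not an LLM; the corpus of points and the numbers are made up for the example): a model queried far outside its training data still answers fluently, it just answers wrong.

```python
import numpy as np

# Fit a cubic to noisy samples of sin(x) on [0, 3] (the "training distribution").
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 3, 50)
y_train = np.sin(x_train) + rng.normal(0, 0.05, 50)
coeffs = np.polyfit(x_train, y_train, deg=3)

# In-distribution query: a reasonable answer.
print(np.polyval(coeffs, 1.5), "vs true", np.sin(1.5))

# Out-of-distribution query: the model answers just as confidently, and is wildly wrong.
print(np.polyval(coeffs, 10.0), "vs true", np.sin(10.0))
```

Nothing in the model "knows" it has left familiar territory; the output format is identical either way, which is the whole problem.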
About the same time that LLMs were starting to make the news, I was spending a lot of time with an elderly relative with severe dementia, and was struck by the fact that LLMs do the same thing she was doing: the word is "confabulating", meaning to come up with stories and rationalizations to fill in gaps in knowledge and memory.
An equal problem with "hallucinate" is that it also has far too many anthropomorphic connotations (a person having creative fun, a person on some form of drugs, a person in some sort of "sleep state").
So far I'm coming around to the growing use of "slop", originally meant as an alternative to "spam" and to imply spam-like intent, but the great thing about this word choice is that the closest anthropomorphic connotation is to "pig feeding". Pigs can be highly intelligent, of course, but that's not the first image one has when thinking of a pig at a slop trough.
"Confabulation" also sounds too anthropomorphic to my tastes. Especially because "fabrication" often implies "intent to" by the actor in question. It's the exact same problem as "bullshit", just the G-rated grandiloquent version. To be fair, human languages were built to anthropomorphize almost everything so finding the right terms here is hard.
> someone who bullshits presumably does this for some personal gain
That's one definition of bullshitting, but not the one being used here. If someone says "I think you're bullshitting me" then yes, you're being accused of consciously seeking personal gain. But if someone says "we were standing around bullshitting" then no, it refers to killing time with mindless communication, which is a quite good analogy for LLM output.
I actually like your first definition a bit better; it's very much in line with the way the term was used when it was academically in vogue a few years ago: the idea that you're expressing information intended to appear factual without regard for how factual it is.
The LLM does it because it's programmed to, and the human does it for some other self-interested reason, but both the process and the results are very similar.
> the idea that you're expressing information intended to appear factual without regard for how factual it is.
That's my second definition! Sorry if I wasn't clear. My first definition (which aligns with the comment I had originally quoted) is that the speaker is aware that they're saying false things, and therefore has intent to deceive, typically for personal gain (they are bullshitting another person). My second definition is that the speaker has no regard for whether what they're saying is true or false (they are bullshitting with another person).
An LLM does not bullshit you, it bullshits with you. It's fluff, not a bluff.
It seems impossible to come up with language to describe why LLMs are both convincing and unreliable ("hallucinate", "confabulate", "bullshit") or why the ability to converse in natural language does not denote intelligent cognition ("stochastic parrot") without anthropomorphizing them to a degree, given that these things are designed to anthropomorphize themselves.
“Hallucinate” connotes consciousness and self to me. Bullshit does not. Markov chain text generators bullshit; they don’t hallucinate. I’m not aware of anything in LLM tech that warrants implying any sort of awareness, understanding, or consciousness. Not even close.
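For anyone who hasn't built one: a word-level Markov chain generator fits in a dozen lines, which is exactly why it makes the point so cleanly. A minimal Python sketch (the corpus and function names here are just illustrative, not any particular implementation):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed immediately after it (order-1 chain)."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=15):
    """Emit words by repeatedly sampling a successor; no memory, no goals, no truth."""
    word = random.choice(list(chain.keys()))
    output = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: restart from a random word
            word = random.choice(list(chain.keys()))
        else:
            word = random.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = (
    "the model predicts the next word and the next word follows "
    "the last word with no regard for whether the words are true"
)
print(generate(build_chain(corpus)))
```

It strings together locally plausible words with zero model of whether any of it is true, which is the "bullshitting with you" sense above, minus even the statistical sophistication an LLM adds.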
Come to think of it, I'm kind of surprised that Google Chrome doesn't have a "this link is broken, would you like to see what we have cached for this URL?" feature.
Or, someday when this runs on-device, a "smart preload" feature for Google Chrome: on slow connections, hallucinate a wireframe of a heavy website until it loads.
Resulting in web elements bouncing around and changing wholesale as the site loads? Hitting the wrong button as some UI element loads late is already a painful UX stubbed toe as it is.
This is interesting, but if you're genuinely interested in recapturing the feelings many of us had at the beginning of the web, I would suggest playing Hypnospace Outlaw. It's of course quite different from Websim, but it's really fun.
I struggle with the idea of monetization. On one hand, I think it's great that people can get paid for doing what they like to do, and it can encourage more content creation. On the other hand, everything becomes disingenuous and people become perversely incentivized; you've got people gluing things to turtles to make videos of themselves rescuing turtles with 'lichens' on their shells.
So I don't know; I'm really torn on how I should feel about it.
I wonder why they aren't straightforward about the fact that this is just AI generating websites based on a URL prompt? It seems like they go out of their way to not say AI.