awesome_dude's comments | Hacker News

I often say to Claude "you're doing X when I want Y, how can I get you to follow the Y path without fail" and Claude will respond with "Edit my claude.md to include the following" which I then ask Claude to do.
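
For example - and this is a purely hypothetical rule with made-up paths - the kind of thing Claude suggests adding to claude.md looks something like:

    ## Workflow rules
    - Always run the test suite before declaring a task complete.
    - Never edit generated files under gen/ directly; change the templates instead.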

Not sure this is a great idea. The model only internalized what it was trained on, and writing prompts/context for itself isn't part of that. I try to keep my context as clean as possible; mostly, today's models seem smart/aligned enough to be steered by a couple of keywords.

Ah yea I do that too. I often have reflection sessions with Claude where I ask it "how can I make sure you do behavior X so we get outcome Y?"

It works relatively well but not always.


The anecdote the GP is providing there rings true for me too - although I'm not sure if I am going to offer better detail.

I'm a proponent of architectural styles like MVC, SOLID, hexagonal architecture, etc, and in pre-LLM workflows, "human laziness" often led to technical debt: a developer might lazily leak domain logic into a controller or skip writing an interface just to save time.

The code I get the LLM to emit is a lot more compliant with those BUT there is a caveat that the LLMs do have a habit of "forgetting" the specific concerns of the given file/package/etc, and I frequently have to remind it.

The "metric" improvement isn't that the LLM is a better architect than a senior dev; it's that it reduces the cost of doing things the right way. The delta between "quick and dirty" and "cleanly architected" has shrunk to near zero, so the "clean" version becomes the path of least resistance.

I'm seeing fewer "temporary" kludges because the LLM almost blindly follows my requests.


I don't think I'd like your code. But apparently there's enough implied YAGNI in my CLAUDE.md to prevent the unnecessary interfaces and layers of separation that you apparently like. So I guess there is a flavor for everyone.

I've recently had the interesting experience of working on a Clean Architecture project for the first time. Pre and post-LLM adoption.

It has been... difficult. Services/modules organised by infrastructural layer rather than by feature. A mediator pattern abstracted away the handling of commands. Just in case one day you needed CreateFooCommand to be executed by a different handler, or something, I dunno. It was so hard to figure out how to navigate everything. And it felt like the entire tradeoff was for the purpose of stopping smoothbrains from adding the ORM to an API endpoint - but with the cost of this crushing accidental complexity that made it hard for everyone to hold everything in their heads, not just for me but also for the smart guys on the team.

It turns out that the LLMs also performed extremely poorly. All the heavy abstractions were too hard for them (not to mention most of the developers).

I knew I had no chance of shifting things away from that paradigm. But as luck would have it... we started basically vibe-rewriting it from scratch without bullshit enterprisey crap and it's (a) dead simple (b) has most of the features after one month (c) even though the code is questionable, inelegant AI slop, with nearly zero regard to proper architectural design, it's way easier to deal with than before

I've never felt so vindicated.


> Services/modules organised by infrastructural layer rather than by feature.

That is actually true to the original CA as far as I am aware. The "vertical slices" style of development came after.

> with the cost of this crushing accidental complexity that made it hard for everyone to hold everything in their heads, not just for me but also for the smart guys on the team.

What? You don't like editing 50 files every time a new column is added to a DB table?

> And it felt like the entire tradeoff was for the purpose of stopping smoothbrains from adding the ORM to an API endpoint

There is nothing wrong with this, and I will die on this hill. The entire purpose of CA was for people to make money off book sales, lectures, and consulting. Notice how every single one of the people involved with promoting CA has absolutely nothing noteworthy to their name. In fact, you might be surprised who was one of the consultants on a major failure of a solution... *cough cough*

https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...


My favourite JVM trivia, although I openly admit I don't know if it's still true, is the fact that the size of a boolean is not defined.

If you ask a typical grad the size of a bool they will inevitably say one bit, but CPUs, RAM, etc. don't work like that: they typically expect WORD-sized chunks of memory, meaning that a boolean of one bit becomes a WORD-sized chunk, assuming it hasn't been packed.
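
If you want to see what your particular JVM actually does, here's a minimal sketch using the OpenJDK JOL library (org.openjdk.jol:jol-core - assuming it's on the classpath) that prints the real layout of a boolean array:

    import org.openjdk.jol.info.ClassLayout;

    public class BooleanLayout {
        public static void main(String[] args) {
            // On HotSpot this typically reports 1 byte per element (plus object header
            // and padding), but the spec leaves the in-memory size of boolean undefined.
            boolean[] flags = new boolean[8];
            System.out.println(ClassLayout.parseInstance(flags).toPrintable());
        }
    }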


". While it represents one bit of information, it is typically implemented as 1 byte in arrays, and often 4 bytes (an int) or more as a standalone variable on the stack "

But also - colours don't exist without a name

e.g. before Orange, there were only shades of yellow or red.


The colors most certainly exist without the name. You may have described the fruit as being a weird shade of red, but if someone held up something red and said "so it was this color" you'd say no. Conversely if someone held up something that was actually orange colored, you'd say "yeah it was that color."

Similarly, you may have no idea what the name is for the color of a Tangerine, but you know what that color is. You might describe it as a dark orange. If I say the name for it is coquelicot, you can look up coquelicot and see if it matches the color you picture in your mind.


I don't think so. Just becoming fluent in multiple languages can result in the perception of more distinct colors. And those fluent in languages that have additional distinct color names can differentiate subtle differences in the shades of colors that non-speakers cannot even differentiate. Color is less about seeing what is actually out there and more about how our brain interprets colors to create "meaning".

> And those fluent in languages that have additional distinct color names can differentiate subtle differences in the shades of colors that non-speakers cannot even differentiate.

The ability to label more colors is not the ability to perceive more colors. The ability of your cone cells to recognize a difference in color between two samples is unaffected by language.


Your cone cells do not perceive anything whatsoever. Your brain does that part. Those who grew up with words (meaning) assigned to subtle variations in colors can tell those colors apart, without a reference to compare against, better than, and much faster than, those who haven't grown up learning the distinction.

We know this to be obvious with sounds: musicians can tell if a note is slightly out of tune when others who haven't learned how can't. Or with taste/smell: wine connoisseurs can tell apart very similar wines that all taste the same to me.

You're not thinking in photons. Your brain is making up meaning from the stimulation your eye received from photons. The perceiving part is learned.


> Your cone cells do not perceive anything whatsoever.

They most certainly do. Your brain may apply meaning to the signals the cone cells send, but it is the cone cells which send a signal for one color and a different signal for another. That's what perception is.

> Those who grew up with words (meaning) assigned to subtle variations in colors can tell those colors apart without a reference to compare it to better than, and much faster than those who haven't grown up with learning the distinction.

No they can't. There is no evidence at all of better color differentiation, and if they were able to better differentiate then they wouldn't be faster because those who were less capable would never be able to. The vocabulary makes labeling faster, and that is all that such tests are measuring.

> We know this to be obvious of sounds, musicians who can tell if a note is slightly out of tune when others who haven't learned how can't.

Knowing the names of notes doesn't make it any easier to tell if a note is out of tune. If you weren't aware before, middle C is 261.62 Hz. Can you now tell if a note is 0.01 Hz off middle C? No, of course not. Musicians learn to differentiate notes because they spend tremendous amounts of time listening to sound and being corrected when the note they hit isn't the one they are going for. Similarly, an orange farmer will know the difference between the color of a ripe orange and the color of one that's a few days underripe, despite not having a distinct word for either. If you're having a blind taste-testing competition between someone who drinks lots of wine but has no formal education, and someone who is extremely learned in sommelier vocabulary but has never actually had a glass of wine before, it's pretty obvious who is going to be better at distinguishing two vintages.

> You're not thinking in photons. Your brain is making up meaning from the stimulation your eye received from photons. The perceiving part is learned.

You are perceiving photons, or more accurately the firing of neurons triggered by those photons. The meaning your brain applies is a label for what you are perceiving - it's a categorization. You see the color of an apple, you learn that color is called red. You see another apple, and you ask why that one's a different color, and then you are told there are also green apples. But you did not need to be taught to differentiate red apples and green apples, you directly perceived it. The difference between cyan and azure exists even if you don't have the vocabulary to communicate that difference to someone else.


> That's what perception is.

No, it isn't. Perception is a process, and ingress only a part of the process.

Perception (from Latin perceptio 'gathering, receiving') is the organization, identification, and interpretation of sensory information, in order to represent and understand the presented information or environment.[2] All perception involves signals that go through the nervous system, which in turn result from physical or chemical stimulation of the sensory system.[3] Vision involves light striking the retina of the eye; smell is mediated by odor molecules; and hearing involves pressure waves.

Perception is not only the passive receipt of these signals, but it is also shaped by the recipient's learning, memory, expectation, and attention.[4][5] Sensory input is a process that transforms this low-level information to higher-level information (e.g., extracts shapes for object recognition).[5] The following process connects a person's concepts and expectations (or knowledge) with restorative and selective mechanisms, such as attention, that influence perception.

- https://en.wikipedia.org/wiki/Perception

> No they can't. There is no evidence at all of better color differentiation

Yes, there is. Example: "Russian blues reveal effects of language on color discrimination." https://pubmed.ncbi.nlm.nih.gov/17470790/

> Knowing the names of notes doesn't make it any easier to tell if a note is out of tune.

I didn't say that. But having a deep familiarity with tones does.

> Musicians learn.

Yes, I know. I majored in Music and have 30 years experience.

> they spend tremendous amounts of time listening to sound and being corrected

I'm confused, since you seem to have just switched sides of the argument completely and entirely here. I'll give you the benefit of the doubt and assume you are thinking that _having_ knowledge (knowing the words and vocabulary) is what I meant. But that is not what I meant. I meant to speak about the _understanding_ you have when you are intimately familiar and experienced.

> The difference between cyan and azure exists even if you don't have the vocabulary to communicate that difference to someone else.

Those colors are pretty different and aren't that interesting to study, from a linguistic relativity point of view. Colors much closer together, like #187af7 and #1b85f5, are.


I remember back when I thought that perception was this simple.

What I described is anything but simple; it's just not related to language.

I think you are correct, but the likelihood of perceiving it probably is tied to language.

It's amazing how much time we spend on autopilot.


There is a difference between perception and categorization. You perceive the difference between salmon and fuchsia regardless of whether you have a word for them or not. You might refer to either color as pink, not because you failed to perceive the difference but because you don't particularly care about communicating the difference.

It's like measuring with a ruler. If you have mm notches then you'll be more likely to describe one thing you're measuring as 31mm and another as 29mm, whereas if you have only cm notches then you'll probably say one is just over 3cm and another is just under 3cm. In the second case, you're measuring with a less accurate tool because you don't care as much about accuracy. Hell, you may say they're both about the same size if that 2mm difference is insignificant enough. But regardless of how you communicate the length, their lengths exist and you qualitatively perceived them.


You're actually further away from the truth than you will ever know.

1. Colours do NOT actually exist - they are purely an interpretation by your brain of signals encountered by sensors. Light exists at different frequencies, yes, but what colour is 2.6 GHz? What about light in the gamma spectrum?

2. While the wavelengths were always there, the concept of "Orange" as a distinct category didn't exist for English speakers until the fruit arrived. Before that, it was just "yellow-red" (geoluread) - as has already been mentioned. If you don't have a word for a transition, your brain often fails to categorise it as a distinct entity, effectively "grouping" it with its neighbours. The fruit literally defined the colour for the language.

Finally, just FTR coquelicot is actually a vivid poppy red - it comes from the French name for the flower.


The name for the color doesn’t exist before the name. But, you can distinguish all sorts of colors you don’t know the name for. Look at a smooth color wheel or a wall of paint swatches.

The labels do the most important thing in computer science: they give discrete and meaningful levels to talk/argue about at the water cooler.


Whatever your opinion on one tool or another might be - it does seem weird that the "market" has been captured by what you are saying is a lesser product.

IOW, what do you know that nobody else does?


So far you've only gotten responses to "how can a worse product win?", and they are valid, but honestly the problem here is that Mercurial is not a better product in at least one very important way: branches.

You can visit any resource about git and branches will have a prominent role. Git is very good at branches. Mercurial fans will counter by explaining one of the several different branching options it has available and how it is better than the one git has. They may very well be right. It also doesn't matter, because the fact that there's a discussion about what branching method to use really just means Mercurial doesn't solve branches. For close to 20 years the Mercurial website contained a guide that explained only how to have "branches" by having multiple copies of the repository on your system. It looks like the website has now been updated: it doesn't have any explanation about branches at all that I can find. Instead it links to several different external resources that don't focus on branches either. One of them mentions "topic", introduced in 2015. Maybe that's the answer to Git's branching model. I don't care enough to look into it. By 2015 Git had long since won.

Mercurial is a cool toolbox of stuff. Some of them are almost certainly better than git. It's not a better product.


This is so strange, because, at a low level, a branch isn't even a "thing" in git. There is no branch object type in git, it's literally just a pointer to a commit, functionally no different from a tag except for the commands that interact with it.
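
You can see that for yourself in any repo (a quick sketch, assuming a branch called main):

    git rev-parse refs/heads/main        # the branch is literally just this commit hash
    cat .git/refs/heads/main             # same hash, stored in a tiny file (unless the ref has been packed)
    git update-ref refs/heads/demo HEAD  # "creating a branch" is nothing more than writing another pointer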


Meanwhile mercurial has bookmarks. TBF I'm not sure when it got those but they've been around forever at this point. The purpose is served.

I think there are (or perhaps were) some product issues regarding the specifics of various workflows. But at least some of that is simply the inertia of entrenched workflows and where there are actual downsides the (IMO substantial) advantages need to be properly weighed against them.

Personally I think it just comes down to the status quo. Git is popular because it's popular, not because it's noticeably superior.


> I think there are (or perhaps were) some product issues regarding the specifics of various workflows.

I love jumping in discussions about git branching, because that's a very objective and practical area where git made the playing field worse. Fewer and fewer people feel it, because people old enough to have used branch-powered VCSes have long forgotten about them, and those who didn't forget are under-represented in comparison to the newcomers who have never experienced anything else since git became a monopoly.

Anyhow, let's pick django as a project that was using a VCS with branches before moving to git/github, and have a look at the repo history: https://github.com/django/django/commits/stable/6.0.x

Yes, every commit is prefixed with the branch name. Because, unlike mercurial, git is incapable of storing this in its commit metadata. That's ridiculous, that's obscene, but that's the easiest way to do it with git.
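
A rough way to see the difference side by side (a sketch, assuming you have a Mercurial clone of some project handy):

    hg log -l 3 --template "{branch}  {desc|firstline}\n"   # the branch is a first-class field on every changeset
    git log -3 --oneline                                    # git has no such field, hence the "[6.0.x] ..." prefixes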


Just because there is one project apparently using this in a way that indicates someone could perceive something as a weakness... it doesn't mean it's a real weakness (nor that it's serious).

You can just not move branches. But once you can do it, you will like it. And you are going to use

   git branch --contains COMMIT
which will tell you ALL the branches a commit is part of.

Git's model is clean and simple, and makes a whole lot of sense. IMHO.


> Less and less people feel it, because people old-enough to have used branch-powered VCSes have long forgotten about them, and those who didn't forget are under-represented in comparison to the newcomers who never have experienced anything else since git became a monopoly.

I'm old enough to have used SVN (and some CVS) and let me tell you branching was no fun, so much that we didn't really do it.


That's the definition of a tree though. Everything has a parent, no cycles allowed.


To me Mercurial's branching is closer to the development process and preserves more information, because it records the original branch a commit was made on.

Git does not have such a concept. That is a trade-off, and that trade-off works great for projects managed like the Linux kernel. But for smaller projects with a limited number of people working on them, the information preserved by Mercurial could be very valuable.

It also had some really interesting ideas like changeset evolution, which enabled history re-writing after a branch has been published. Don't know its current status or how well it turned out.


Just FTR - git /can/ store that information, but it requires human input.

If you rebase the feature branch onto the main branch, THEN follow it up with a merge commit that records the branch name, you store the branches (that have been made a part of main) and can see where they are in your log.
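
Something like this (a sketch, branch name invented):

    git checkout feature/login
    git rebase main                    # linearise the feature branch on top of main
    git checkout main
    git merge --no-ff feature/login    # force a merge commit; its default message records the branch name
    git log --oneline --merges         # the branch names survive here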

Mercurial's notes can become cumbersome if there are a large number in the repository, but, obviously, humans can sort that out if it gets out of hand


It's interesting that branches, which is a marquee feature of git, became less important at the same time as git ate all the other vcs. Outside of OS projects, almost all development is trunk based with continuous releases.

Maybe branching was an important reason to adopt git but now we'd probably be ok with a vcs that doesn't even support them.


Trunk based development is still a hotly debated topic. I personally prefer branches at this point in time; trunk based development has caused me more trouble than its claimed worth in the past, BUT that could be a me limitation rather than a limitation of the style.

Not sure if it's true. I mean, I do agree with the core of it, but how do you even do PRs and resolve conflicts if there are no branches and a developer cannot efficiently update his code against the latest (remote) version of the master branch?


Trunk based development has every developer in the company committing straight to main - no PRs, supposedly no merge conflicts (but the reality is that main moves fast, and if two people are working in the same files, there will be merge conflicts)

A middle ground is small PRs where people are constantly rebasing to the tip of main to keep conflicts to a minimum
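
Day to day, that middle ground is basically (sketch):

    git fetch origin
    git rebase origin/main          # replay the small PR branch onto the current tip of main
    git push --force-with-lease     # update the PR without clobbering anyone else's pushes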


Worse products win all the time. Inertia is almost impossible to overcome. VHS vs Betamax is a classic. iPod wasn’t the best mp3 player but being a better mp3 player wasn’t enough to claw market share.

Google and Meta don’t use Git and GitHub. Sapling and Phabricator much much better (when supported by a massive internal team)


What was the better mp3 player than the iPod?


unironically Zune is goated in its own way


anything from Cowon. Always has been


sansa clip+


Anything from iriver.


Network effects and marketing can easily prevent better tools from winning.


I mean, in the fickle world that is TECH, I am struggling to believe that that's what's happened.

I personally went from .latest.latest.latest.use.this (naming versions as latest) to TortoiseSVN (which I struggled with) to Git (where I was also one of those "walk around with a few memorised commands" people who don't actually know how to use it) to reading the fine manual (well, 2.5 chapters of it) to being an evangelist.

I've tried Mercurial, and, frankly, it was just as black magic as Git was to me.

That's network effects.

But my counter is - I've not found Mercurial to be any better, not at all.

I have made multiple attempts to use it, but it's just not doing what I want.

And that's why I'm asking: is it any better, or not?


Mercurial has a more consistent CLI, a really good default GUI (TortoiseHg), and the ability to remember what branch a commit was made on. It's a much easier tool to teach to new developers.


Hmm, that feels a bit subjective - I'm not going to say X is easier than Y when I've just finished saying that I found both tools to have a lot of black magic happening.

But what I will point out is that, for better or worse, people are now looking at LLMs as Git masters, which effectively makes the LLM the UI, and that is going to remove any assumed advantage of whichever tool has the "superior" UX.

I do wish to make absolutely clear that I personally am not yet ready to completely delegate VCS work to LLMs - as I have pointed out, I have what I like to think of as an advanced understanding of the tools, which affords me the luxury of not having an LLM shoot me in the foot; that is solely reserved as my own doing :)


Network effects are significantly strengthened by the necessary user buy-in. Version control is hard, and every tool demands that its users spend a non-trivial amount of time learning it. I would guess the time to move from black magic to understanding most of git is ~100h for most people.

The thing is, to understand which one is actually better, you would have to give the same amount of investment in the second tool, which is not something most people are willing to do if the first tool is "good enough". That's how Python became the default programming language; people don't miss features they do not understand.


A little over a decade ago, with only svn experience, I tried both mercurial and git. There was something about how mercurial handled branches that I found extremely confusing (don't remember what), while git clicked immediately - even without reading the manual.

So at least for me, git was clearly better.


Mercurial later added bookmarks which work like Git branches. These make more sense to me as well.


Did bookmarks move as you made commits, like a branch pointer in git does?



GitHub had a business model where public repos were free. BitBucket didn’t.

That’s it. That’s why git won, you could put up open source libs with one for free and not the other.

Which is extra funny as the centralized service was the most important part of decentralized version control.


Apologies for the late reply - but I just saw this and immediately thought

So git /did/ have something better than Mercurial after all, it was a 3rd party, but it still meant that it was massively better than Mercurial.


>the centralized service was the most important part of decentralized version control.

I've often thought this about github


JWZ is still blocking traffic from HN so you can't click to read this, but you can copy/paste it into a new tab or open it in an incognito window:

https://www.jwz.org/doc/worse-is-better.html


That worse is better, and some people don't know better or care.


"better" in that sentence is very specific. Worse is also worse, and if you're one of the people for whom the "better" side of a solution doesn't apply, you're left with a mess that people celebrate.


Welcome to VHS and Betamax. The superior product does not always win the market.


Not always, but in this case the superior product (i.e. VHS) won. At initial release, Beta could only record an hour of content, while VHS could record 2 hours. Huge difference in functionality. The quality difference was there, but pretty modest.


I suppose one lesson could be that there are different dimensions of superiority, different products may be superior in different ways.

Of course, products also can win market dominance for reasons external to the product's quality itself (marketing, monopoly lock-in, other network effects, consumer preferences on something other than product quality itself, etc).


AI is still a token matching engine - it has ZERO understanding of what those tokens mean

It's doing a damned good job at putting tokens together, but to put it into context that a lot of people will likely understand: it's still a correlation tool, not a causation one.

That's why I like it for "search": it's brilliant at finding sets of tokens that belong with the tokens I have provided it.

PS. I use the term token here not as the currency by which a payment is determined, but as the tokenisation of the words, letters, paragraphs, and novels being provided to and by the LLMs.


If Claude was used in the creation of the change, there's usually some dialogue for Claude to use.

FWIW I use Claude to help with code changes, then give the diff to Gemini to review/create meaningful commit messages.


This is where the "Trunk based development" people live - I personally believe that commits should be atomic, because git bisect on smaller meaningful commits is a hang of a lot better than a monster 90 file change commit


For now - the law of enshittification means that the free/cheap access to AI will be curtailed soon enough.


Pretty much any open-source, locally runnable LLM can generate this stuff.

