I still code daily without any AI assistance, mostly because I believe this is the way to avoid forgetting how things are done, even trivial things.
My main point against using AI is that I do not want to depend on basically anything when I'm in front of the screen (obviously not counting documentation, books, SO and the like).
I see up close people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.
Giving that away, at least for me, means becoming a dependent zombie. Knowledge comes basically from manual trial/error almost daily.
If technology has shown us anything, it's that we can be pushed and manipulated in every single conceivable way. And in my opinion, depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of a human being: to think and wonder about things.
Recently, after a month of heavily AI assisted programming, I spent a few days programming the good old fashioned way.
I spent most of my 7-hour session confused and frustrated, straining painfully against the problem, but the task was successfully completed.
But I was startled by the difficulty. I began to worry that I had given myself some kind of brainrot from disuse. Then I remembered: my goodness, it always felt that way when I was doing something new. That's just what it feels like, grappling with a problem you haven't seen before.
It was always as hard as that, I was just no longer used to the feeling. You get used to the difficulty, and then it feels normal.
Or indeed: you get used to its absence, and then it suddenly feels overwhelming and "wrong"!
I think maintaining the capacity to tolerate difficulty and discomfort is a "muscle" well worth preserving.
I've had the "problem" of forgetting syntax before any AI, with IDE autocomplete. It was only ever a problem when switching jobs and being expected to write syntactically correct code on platforms without syntax checks or autocomplete. So I did some exercises on such platforms in preparation for interviews.
In the real world, reliance on syntax autocomplete and checks was never an issue. The important thing has always been understanding the core concepts of the language and the runtime, e.g. how the event loop works with Node.js and how to write asynchronous and event-driven programs.
I'm the opposite: I don't think I've read a single line of code I've shipped in over 6 months.
I'd say it's far more tiring working that way though; you're breaking the satisfaction loop, so you never really get the dopamine you used to get coding by hand. When you had a problem, figuring it out was like solving a puzzle, and you felt satisfaction at the end of it. With AI it feels like most of my day is spent being a QA rather than a puzzle solver, and it's exhausting. Even when it solves difficult problems for me, the LLM slot machine is far less satisfying than if I'd figured it out myself.
Agree with you for my day job (which is coding a corporate web app), for sure. I'm still letting A.I. drive more nowadays, but it does feel less fulfilling than it used to.
But for my personal projects, I work on games, and by offloading a lot of the coding work to A.I., my puzzle solving is no longer 'how to fix this stupid library spitting stupid errors at me' or 'how to get this shader working' or 'why is this upgrade breaking all the things' and more 'what does this game need in order to be fun and good?', which I find a lot more fulfilling.
It's also why I switched my focus to board game design for the longest time. I didn't have to fight my tools or frequently learn some new API or library. And if I wanted to try a new mechanic, I didn't need to spend 20 minutes or 2 hours or 2 days implementing it; I could write something on an index card in five seconds and shift mid-game most of the time.
A.I. just brought video games closer to that experience, which actually has made them more fun to work on again, because board games come with the immense challenge of getting physical games published (financial/logistical if self-publishing, or social/networking if attempting to get published through a publisher).
The puzzle thought was mostly me trying to figure out why AI coding was more emotionally tiring when I'm literally doing less and creating more, maybe it's something else.
I find this interesting as someone who does primarily DevOps: my satisfaction has increased with AI. For me the code isn't the puzzle; it's an annoying inconvenience in the way of completing the entire system. For me, QA is a big part of solving the puzzle.
DevOps is a huge part of my job as a systems engineer and I too have found increased satisfaction with AI.
I think the reason (for me, at least) is that my markers of success were always perched precariously atop a mountain of systems that I had varying levels of understanding of anyway. Seeing a pipeline "doing the thing" is satisfying regardless of how I sorted it out.
What does "fair" have to do with anything? This is exactly the issue the author is writing about. Take the easy way, reap the profits, then someone suffers the obviously predictable consequences at some point in the unforeseeable future... likely not you! "Fair" is not relevant.
The original author points to the consolidation of military suppliers as a major issue, but the truth is that the economies of the western world have been massively dependent on this sort of consolidation and outsourcing for a large portion of the "growth" that they have achieved for a generation.
It would be convenient to think that the real question is "how do we climb back out of this hole?" but I feel the more pressing question is actually, "when and why will we start trying?"
The profit motive simply does not drive society in this direction.
The crises are catastrophic and perhaps even existential, but they are not profitable. You have to be a really lucky market timer to bet on crisis and win.
Avoiding crisis over the longer term is simply not investable.
"Fair" is not a relevant or useful conception in this context.
Not wasting other people’s time when they expect your work to at least pass a cursory check. Failing that is selfish and disrespectful. It reflects poorly on you. I don’t know about all that other stuff you wrote, but it’s not really what I’m talking about, so I’ll clarify.
I don’t know what your high school/college was like, but we used to trade papers for editing. It was universally considered bad practice to send rough/first drafts. It’s disrespectful and wastes the time of people who are being generous with it for you. You’re offloading your work in a selfish way.
Simply put: If I want an LLM’s raw results, I’ll prompt it myself. Why are you involved if I don’t want your work? Your expertise? Want to use an LLM then go for it but don’t just wipe its muddy boots on my work. At least look at the results.
Unfortunately, this is becoming even more common with LLMs. I have no problem confronting people about it, because 100% of the time they don’t want it done to them. It’s not even an argument; it’s catching them being selfish, and they know it.
Are the people paying your paycheck being fair to you? Are the executives of your company paid orders of magnitude more than you are? Fairness starts from there. Your job is to be as unexploited as possible. I hope my coworkers also have this goal.
What does my relationship with the c-suite/my work have to do with a colleague dumping their unedited chatgpt crap on to me? I legitimately do not understand what point you’re trying to make. There seems to be a lot of assumptions here and I’m not sure what they are.
Sending your unedited LLM outputs to me is not sticking it to the execs. If you really want to play that game, you go ahead and ship that or hand it to someone who deals with the final output. That’s your prerogative and you can face the consequences. I am not here to clean up your AI slop. That’s not my job. At that point you are the problem, not the c-suite.
All I hear from AI evangelists is “it’s a tool! It’s not the problem! It’s people using it wrong!” Ok, then the people using it are the problem if something is wrong. So if you act this way, which is clearly not a productive use of the tool, you are the problem.
Edit: let me just ask you a somewhat multi-faceted question. If you ask me for a summary of something and I simply hand you what ChatGPT gave me, would you say “thanks” and be satisfied? Is that what you wanted me to do? Is there a reason you asked me to do it instead of prompting ChatGPT yourself?
What if I did this every time I had to write anything? Every email. Every summary. Every report. Just prompt, copy, paste, send to you.
> If you ask me for a summary of something and I simply hand you what ChatGPT gave me, would you say “thanks” and be satisfied?
Yes. Again my job is to stay unexploited. Saying yes is the easiest option. I'll leave the worrying to the people making an order of magnitude more money than me.
It seems you are either very unhappy at your job or just anti-work, that’s fine you do you/sorry if your work sucks, but there is a huge gradient between “completely not caring and doing the bare minimum to collect a paycheck” and “sacrificing everything for a company that does not care about me.” Many of us fall in that gradient. We do decent work and clock out when we’re done.
If you want to phone it in or act your wage or whatever go ahead but don’t make it my problem. You’re not sticking it to your employer. You’re actively making your workplace worse for everyone else. Your decisions impact others.
This is like working in the service industry and simply not doing your job. Management doesn’t suffer and they’ll just fire you. The people you work with have to do your job for you. What have you actually accomplished?
First of all, I don't agree with your implication that AI-produced code is bad. It's as good as the developer prompting it, in my experience. Secondly, yes, I'm anti-work. Capitalism does not allow for what you are desiring. Capitalism is configured such that capital seeks maximum return for minimal cost (my pay). I am incentivized to do the opposite. Wealth inequality is a multiplier on how hard I'm going to try to achieve my goal.
> First of all, I don't agree with your implication that AI produced code is bad.
Never said that. I said generating code with an LLM then not looking at it at all and pushing it (which is what started this whole comment thread) is a selfish and lazy decision.
Not everyone subscribes to a strict anti-work stance. Most people don’t, in fact. So we’re at quite an impasse, and it doesn’t change the fact that your decisions become your colleagues’ problems and do nothing to deconstruct/fight capitalism. I feel sorry for anyone who works with you, if this is not an internet routine and reflects how you actually operate.
> Like I said most code that's written by AI is better than code written by human
1) this is an arbitrary bar that needs more qualifiers (all code? All people?) and 2) citation needed.
I don’t care how it was generated. I want you to vet the results at some point with your knowledge and not send me whatever it spits out with no consideration for the results. You’re not sticking it to capitalism when you pass the buck to me. You’re being selfish.
I think we are just too far apart on this to be productive unfortunately. I just urge you to consider the impact of your choices. See my accessibility comment from a different part of the thread:
> I also may be staring at consequences you are not. It’s passing the buck with no regard for who is left to deal with the results at the end.
>What if we are working on, say, accessibility tasks? If I see your work won’t actually help those in society who seriously need these features, what am I supposed to do? My kneejerk is 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands, which clearly don’t see this as an issue, or 3) send it up the chain (or laterally) where someone else has to ask these questions or - worse - it gets shipped and the people who need this stuff are screwed. This is basic ethics.
Is it correct? Is it any good? Should I subject another person to this? Is it profoundly rude to not even read their email and just have a robot respond automatically?
The slopmonger does not engage with the question at all, because they never cared.
My boss gets annoyed if I try to do things without AI, so eventually I caved. But I don't see the point in reading the output if that's the culture being pushed at the company.
Also, anyone else dealing with it is just gonna be dealing with it via AI, so it doesn't really matter.
If I worked somewhere where the CEO cared about hand written code I would be writing it and reading it but I don't.
Because you can’t assume everyone else is as indifferent about wasting people’s time as you are. Some of us don’t want to actively make our colleagues/customers miserable. That decision forces me to decide if I will be a part of the problem even if I generally do good work I can stand behind. You’re forcing me into a decision making process purely out of your desire to not do the bare minimum when working. That’s not right.
I also may be staring at consequences you are not. It’s passing the buck with no regard for who is left to deal with the results at the end.
What if we are working on, say, accessibility tasks? If I see your work won’t actually help those in society who seriously need these features, what am I supposed to do? My kneejerk is 1) fix it (more work for me, selfish on your part), 2) kick it back to your lazy hands, which clearly don’t see this as an issue, or 3) send it up the chain where someone else has to ask these questions or - worse - it gets shipped and the people who need this stuff are screwed. This is basic ethics.
I generally don't have as much time (or patience / fucks) anymore in my day. So, I use AI 3 days a week. On the other two days, I don't use assistants to code, just ask them to review my work after it's done.
At work we are literally forced to use AI and it’s part of our performance review. Even though I really like coding by hand, I have to use AI now so I can keep my job. I will try this out though: 2 days per week using AI and the rest handcoding, enough to stave off the inevitable layoff, perhaps.
Surely it can’t be hard to token-max at work the same fucking way people have gamed Jira metrics for years and years.
If I’m ever in that position (everything I work on is air-gapped, so it’ll never happen), I would make it a priority to figure out how to game that bullshit metric so I could get on with solving actual problems.
I imagine a lot of people do this. Metric becomes a target, etc.
I have always had a problem, worse than most I think, where if I’m away from a language for a bit I lose my ability to write it quickly and competently, real quick.
It doesn’t matter if I was quite competent in it… the mechanical bits fade fast.
Doing LLM-assisted work is going to be like pouring bleach on my brain. I can feel it. The more I use it, the worse it will be for me.
I can still formulate what I need, and problem solve just fine, but all the nuts and bolts evaporate.
> Knowledge comes basically from manual trial/error almost daily.
This is the important statement, although I'd swap the word "knowledge" for "experience" here. You can gain "knowledge" from books, but only trial & error will give you experience to know "which" knowledge to use in which situations.
And what's important about this in the context of working with AI is the "error" part.
You have to experience errors to become truly experienced. And part of the experience is to recognize when you're about to make an error - to avoid it.
AI-driven processes mess up our natural trial & error learning curve in multiple ways:
- the AI push forces us to ship features faster (cause if we don't, our competitors will), reviews are sloppier, we discover errors later on, the feedback loop gets longer...
- using AI to debug and fix errors means we spend less time understanding what the error was about, which means we learn less about how to avoid the error in the first place...
- AI itself sounds overly confident, so reading its outputs without previous experience you may be less likely to recognize when it's making an error, which makes it harder for you to recognize when you're making an error trusting it...
On the other hand, this last point I tried to make is also why I don't think avoiding AI completely is a good strategy. Whether we like it or not, AI is becoming a part of developer's workflow. And as such, we also need to learn the trial & error process of using AI - what makes AI make errors and how to prompt it to avoid that.
I really don't understand the people who use it for everything.
It's become my first stop for search because it does it in bulk: it reads 50 results and leads me to something useful.
But I just got Claude MCP connected to my personal email/calendar/etc and I can't figure out what to do with it. It wrote a summary of my inbox that took as long to read as flipping through my inbox. And since it makes no sense to delegate decision making, I'm not sure what the actual work I'm supposed to give it would be.
> Giving that away, at least for me, means becoming a dependent zombie.
I suppose people felt the same way in the agrarian revolution and later again with inventions like the plough. Suddenly a lot of people offloaded their food independence onto the work of a few.
What might it open up in our lives to be free of knowledge?
That said, these machines don't run themselves, if we disengage our minds we might get stuck in a dead end with them.
For me AI mostly reduces time effort. AI types code faster than I do, looks up stuff on the internet faster than I do, debugs faster than I do, but doing those never required much "brain effort" from me.
What does require "brain effort" from me is making educated decisions. Mostly during planning to figure out which pros/cons of each possible approach are actually relevant for our situation - AI does this poorly, makes lots of wrong assumptions if you don't steer it correctly, and noticing these + correcting AI on them requires "brain effort" too. Then the part of code review where you think about what can go wrong. AI still sucks at figuring out edge cases. It doesn't "know" the entire codebase like I do, its context only has "the parts of codebase deemed relevant".
Before AI I could jump from 30 minutes of hard thinking into an hour of coding during which my brain essentially rested, before returning to hard thinking again. Nowadays those hour-long coding sessions turn into 5-10 minutes of watching AI do something.
So for me using AI doesn't "free up brain juice", it instead makes me use my "brain effort" more, and in a workplace environment gives me less time to rest and makes me more tired, cause nowadays bosses expect us to work faster + colleagues working faster means more review requests.
I've got one. I'm working on a cryptographic identity system in Rust. One of the stricter iterations of it demanded creating a public version and a private version of each type. The best way to accomplish this is a procedural macro. I don't know if you've written proc macros by hand in Rust. I have, years ago, and it was somewhat torturous. I didn't want to relearn it all over again and spend what would have been weeks (this is a side project) gaining a skill I will easily forget in a month or so. So I had an LLM code it for me. This is a really great use for it: it's not building any strong logic or doing any IO, it's simply writing code that generates other code, and is entirely verifiable and testable. It built it for me so I could spend those weeks working on higher-level logic and p2p syncing protocol stuff that actually matters for the project.
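For the curious, this is roughly the shape of the thing. A stripped-down sketch, not the actual code: the PublicPrivate derive name and the secret_ field convention are stand-ins I made up for illustration, and it assumes the usual proc-macro crate setup with syn/quote/proc-macro2.

    // lib.rs of a proc-macro crate (Cargo.toml needs proc-macro = true
    // plus dependencies on syn, quote, and proc-macro2).
    use proc_macro::TokenStream;
    use quote::{format_ident, quote};
    use syn::{parse_macro_input, Data, DeriveInput, Fields};

    // Derives a Public<Name> twin of a struct, dropping sensitive fields.
    #[proc_macro_derive(PublicPrivate)]
    pub fn derive_public_private(input: TokenStream) -> TokenStream {
        let input = parse_macro_input!(input as DeriveInput);
        let name = &input.ident;
        let public_name = format_ident!("Public{}", name);

        // Only handle structs with named fields; the real macro does more.
        let fields = match &input.data {
            Data::Struct(data) => match &data.fields {
                Fields::Named(named) => &named.named,
                _ => panic!("PublicPrivate requires named fields"),
            },
            _ => panic!("PublicPrivate only supports structs"),
        };

        // Made-up convention: fields prefixed secret_ stay private-only.
        let public_fields: Vec<_> = fields
            .iter()
            .filter(|f| !f.ident.as_ref().unwrap().to_string().starts_with("secret_"))
            .collect();

        let defs = public_fields.iter().map(|f| {
            let (ident, ty) = (&f.ident, &f.ty);
            quote! { pub #ident: #ty }
        });
        let inits = public_fields.iter().map(|f| {
            let ident = &f.ident;
            quote! { #ident: value.#ident.clone() }
        });

        // Emit the public twin plus a conversion from the private type.
        quote! {
            pub struct #public_name { #(#defs,)* }

            impl From<&#name> for #public_name {
                fn from(value: &#name) -> Self {
                    Self { #(#inits,)* }
                }
            }
        }
        .into()
    }

And this is why it's such a good LLM task: the output is mechanical, you can eyeball everything it generates with cargo expand, and you can unit-test the generated types like any other code.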
I want to make it clear that I'm an LLM luddite. I mostly find the things distasteful and obnoxious. But there are definitely use-cases where they can do what's essentially bitch work and save a lot of time that would otherwise be a waste. It's a tool that can be used for specific things. I don't use them for everything.
Did it become noticeably better because you used an LLM to make a proc macro, thereby freeing up your creative and cognitive powers to deliver something much better than you would have by writing this macro yourself?
I spent a week hand-generating charts and graphs detailing my caloric intake vs. creative output in case I needed to convince someone online that my side quest was a success. Are you stupid?
I measured it in time and personal energy. I spent time working on something creative as opposed to something procedural. If I'd had to write the macros by hand, it would have sucked a lot of the joy out of the project and would probably have delayed me much longer than just the time it took to write the macros.
Your line of questioning is obnoxious and indicative that your original suspicions are false, yet you're too pigheaded to just let it go and admit that you are wrong.
I'll bite. I've been writing music for decades but I can't sing. With AI I can write lyrics and generate AI vocals, then separate the stems and extract the vocals, throwing away the rest.
Then I add the vocals to my DAW and create the rest the way I want.
Saying it's a great work of art is subjective, but for me, I can now make music I couldn't before.
The parent suggests a perspective where using AI frees up the "brain juice" to utilize elsewhere. What you describe is AI mitigating limitations that prevented you from trying something. So, not the same.
Sidebar: learn to sing. Singing well and “finding your voice” are in my mind equivalent. Every time I become a more confident person I get better at singing. Every time my singing gets better through practice I feel more confident. “Speak with your chest” didn’t make sense until a few years ago. Now it’s obvious to me when someone is incapable of it.
On my side, I can give many examples of random software that became significantly worse since the AI trend started.
Trainline is practically unusable for purchasing itineraries that go across multiple European countries. GitHub Actions now contains a bunch of extremely frustrating random bugs. Grammarly somehow gives worse copy recommendations.
So this is the classic tension between the "coding for the love of code" vs the "coding to solve problems" mindset. This cultural concept has been around since before AI was on the scene, heck well before software existed (craftsman vs builder).
I'm curious why this is a "vs" at all; can't you have both? I've found that coding for the love of code always accelerated my speed and ability, so that I could also deliver solidly on time and solve the problems too.
Ya, this has been my sentiment. If I need to one-off a quick script that does some processing on data, it’s nice to offload that so I can focus on the pieces of my code that are more important and interesting to me. The context-switching cost is still there though…
I think you're forgetting what programming actually is. It's a way to convert instructions into 1s and 0s. The programming language you're using is irrelevant. If AI can do the same thing with natural language, it's just a matter of time before we drop the "programming language" layer and AI just creates programs using straight-up machine code.
I hear this a lot, but also I'm curious. How can you really forget coding?
It doesn't seem to me a thing that I could suddenly forget?
Without AI I will feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.
My brain effort is also on other things now, such as how to orchestrate guardrails, how to build pipelines to enable multiple agents work on the same thing at the same time, how to understand their weaknesses and strengths, how to automate all of that. So there's definitely a lot of mental effort going into those things.
If you are not practicing an activity consistently, you'll forget some of the finer-grained aspects. When I'm coding, I subconsciously create a continuous logic map. Having someone or something else just generate the code (and generate it so quickly) destroys that and makes it easier for bugs to slip through.
I mean, if AI stopped existing all of a sudden, it's not as if you would have forgotten how to code, right?
You could maybe forget how a certain lib or framework worked, or things like that, or more likely you wouldn't be up to date with all the new ones. But ultimately code can be represented as just functions with inputs and outputs, and that's all there is to it.
As in how could I possibly forget what loops, conditionals or functions are?
I haven't written code myself for over a year (because AI does it), but I feel like I have forgotten absolutely nothing. In fact, I feel like I have learned more about coding, because I see what patterns AI uses vs. what I or other people did, and I am able to witness different patterns either work out or not work out much faster, right in front of my eyes.
A writer will never forget what adjectives, verbs, and nouns are. But if they use LLMs to write for them for years they will be worse at writing on their own.
Well, what I'm trying to say here is that coding is conveying logic; the way you'd evaluate it is how fit it is for its purpose and, if it's long-term code, how well it will scale into the future.
Now writing is something totally different. In some cases writing ability is not about writing, it's about your thoughts and understanding of life and human nature.
You could become a better writer without writing anything, just by observing.
If you are using an LLM to write, what is the purpose of that? Are you writing news articles, or are you writing a story reflecting your observations of human nature with novel insights? In the latter case you couldn't utilize AI in the first place, as you'd have to convey what you are trying to say in your own words; AI would just "average" your prompt or meaning, which takes away from the initial point.
With code, being expected is desirable; good writing is supposed to be unexpectedly insightful. It's completely different.
I would disagree. In fact, if you only do X, I think you will miss a lot of things that could make you better. You can become a better writer by reading other great writing; if you only write yourself, you will not have the full big picture of what is possible. Then you can become better by thinking a lot, imagining a lot, etc... The same goes for most fields, I would argue.
Although we were discussing the decay of skill in something. While in some things the decay is super clear (as in running - pace, not the technique), I think there are many areas where there's no clear decay, where other activities will actually significantly boost the skill, and where any decay there is will be undone by just a few days of practice or remembering.
Are we talking about observational ability, creativity, accuracy of communication or grammar here?
There are many more ways to evaluate a writer's skill, in terms of what they are doing, than there are for coding. Coding can be creative, but in most cases you are not evaluating code the way you evaluate writing, except possibly technical writing, which is still different from coding.
Sure, you may argue that you are becoming a better editor or project manager, but your skill in the craft of programming is decaying if you are not actively typing lines of code into a computer.
I don't know; I'm still seeing all the diffs and thinking: ah, interesting, it did it that way, I would have done it like this or that. So I would rather say it widens my perspective on how to code, since I see so much compared to what I used to do myself. Sometimes it's better, sometimes I feel I would have done it another way, etc. But all the diffs flying by in the terminal still keep me engaged, right?
Coding is a thinking activity. What you’ll be missing is the nimbleness in doing that activity, not the knowledge.
So you may remember all your high school math, but not doing it every day means you are slower than some of the students. Your knowledge of programming will still be there, but you will be slower, because you no longer have the reflexes that come from doing things over and over.
I feel like I have to disagree here. I don't practice multiplication or mental math every day, or really at all anymore, but I feel like I'm just as fast at it as I ever was. In fact, whenever I have tried things like Lumosity or brain-benchmark games that I used to play when I was younger, I'm actually faster than I was back then, despite not having practiced at all. I feel like all the real-world side practice has helped me improve these abilities indirectly; it has all added to my brain's ability to notice novel patterns, see things from different perspectives, and apply new intuitive strategies that I might have missed because I was tunnel-visioning when I was younger.
There are also plenty of things I have retained for life just by having practiced them as a child. Everyone keeps bicycling, I think, but there are also handstands, walking on hands, etc., which I learned as a kid over a few years, and I can still do them even if I only do them once a year. In my view code is exactly the same, and maybe in a way even more straightforward: it's easier than obscure math, since you don't have to memorize any formulas to solve it easily. Although I think a lot of math is great precisely because you don't have to memorize formulas in the first place; you just have to internalize or figure out the logic or idea behind it, and then you have it. I think repetition in math is specifically the wrong way to go about it; it's about understanding, not repetition.
Multiplication is elementary school math which doesn't require any thinking and the learned approach is simple. You can't really compare the simple stuff that's taught to kids, like basic multiplication or riding a bike with stuff that requires domain-specific knowledge and experience.
Think more stuff like "find the angle of lines defined by (x-4y-1=0) and (x-y-2=0)", "write the number 2026 in base 7", "solve an equation sin^2(x) - sin(x) = 0".
I plucked these from our country's high school final exam from this year. Back when I was in high school, I did mine in 60 minutes without an error when the time limit was 150 minutes, and I intuitively knew how to approach each task from the moment I saw it. Also, all the needed formulas are supplied; you don't need to remember any of them.
I plucked these because for these I don't have the immediate "know how" now, I still understand the topics, and could solve them with enough time, but it would require some thinking and thus I would be slower at solving them than when I was in high school, even though I'm pretty sure I could still ace it in the 150 minute time limit.
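For instance, with the base-7 one, I'd now have to fall back on the slow mechanical route (repeated division by 7, reading the remainders bottom-up):

    2026 = 7 \cdot 289 + 3
     289 = 7 \cdot  41 + 2
      41 = 7 \cdot   5 + 6
       5 = 7 \cdot   0 + 5
    \Rightarrow 2026_{10} = 5623_{7}

whereas back then that kind of thing was immediate.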
But reality goes beyond high school... College-level math, like derivatives/integrals, sums, and algebraic proofs, is even harder, and solving some of those could take me hours now when I could do them in minutes back in college.
With code it's the same. I could solve simple Python/Pascal/C++ high school level tasks as fast or faster than when I was in high school, even if I didn't write any code for a couple of years. But we also had assembly class in college, and I would struggle at assembly if I had to code it now, 10+ years later, even though I didn't struggle with it back then.
> Think more stuff like "find the angle of lines defined by (x-4y-1=0) and (x-y-2=0)", "write the number 2026 in base 7", "solve an equation sin^2(x) - sin(x) = 0".
> I plucked these from our country's high school final exam from this year. Back when I was in high school, I did mine in 60 minutes without an error when the time limit was 150 minutes, and I intuitively knew how to approach each task from the moment I saw it. Also, all the needed formulas are supplied; you don't need to remember any of them.
It seems like with just a little bit of doing it again, you'd be back at the level you were at, though. Especially if you can do it with the formulas provided. You would be slower for only a very short amount of time. In my view, all those things are such that if you understood them at some point in your life, you will understand them to the same extent again with just a little reminding. With most of those concepts, I would say it would take less than an hour to be back at a similar level - like, for instance, writing a number in another base.
Depends on the complexity of the task. That's what I tried to hint at by also mentioning college-level math. For the high-school-level tasks, yeah, a couple of hours and I'd be as fast as I was in high school again. For the number in another base it could be as quick as less than an hour, as that's again a simpler task than the other two.
For derivatives/integrals it'd take more time. Less than what it took me to learn them in the first place, for sure. But still a lot more than an hour.
Because I forgot how to "do them" in the first place, which is what the discussion was about. I still know the "theory" behind them, so I can "figure it out" if needed, without needing anyone to "instruct" me or "classes" to learn how to do them. But essentially all the "practice" I had back then is forgotten.
And the same goes for code and technology knowledge.
As a senior developer with 10+ years of experience I've already encountered situations where I needed knowledge I knew I had at some point, but already forgot. In my case as a backend developer working for the same company for 5+ years my favorite example is payments processing. There are tons of special/edge cases - e.g. how a failed recurring payment during a subscription is processed. That's something you set up once, then don't touch for years, and suddenly need to study again if a change is needed. How a subscription goes "past due", what you can do in that case, what your code actually does, how it reverts to correct state once a retried payment follows through, what options you offer a customer if his payment method expired and he wants to switch it, ...
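Even a toy version of just the retry part is already a little state machine. Purely an illustrative sketch (the states and retry count are invented, nothing like our real payments code):

    // Toy model of "past due" handling; states and transitions are
    // invented for illustration, not any real billing provider's rules.
    enum SubscriptionState {
        Active,
        PastDue { retries_left: u8 },
        Canceled,
    }

    fn on_recurring_charge_failed(state: SubscriptionState) -> SubscriptionState {
        match state {
            // First failure: enter a retry window instead of canceling.
            SubscriptionState::Active => SubscriptionState::PastDue { retries_left: 3 },
            // Out of retries: the subscription finally lapses.
            SubscriptionState::PastDue { retries_left: 0 } => SubscriptionState::Canceled,
            // Otherwise burn one retry and stay past due.
            SubscriptionState::PastDue { retries_left } => {
                SubscriptionState::PastDue { retries_left: retries_left - 1 }
            }
            other => other,
        }
    }

    fn on_retry_succeeded(state: SubscriptionState) -> SubscriptionState {
        match state {
            // A successful retry reverts the subscription to its correct state.
            SubscriptionState::PastDue { .. } => SubscriptionState::Active,
            other => other,
        }
    }

    fn main() {
        let s = on_recurring_charge_failed(SubscriptionState::Active);
        let s = on_retry_succeeded(s);
        assert!(matches!(s, SubscriptionState::Active));
    }

And that's before expired payment methods and the customer-facing options around them - exactly the details that blur after a few years of not touching the code.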
And this is also a good example of why "domain knowledge" and "code ownership" are a good thing in larger companies. Because under usual circumstances I don't have to deal with these, as we have a dedicated person who maintains the payment-related code. I only fill in for urgent cases when he's on vacation.
And juniors designing stuff like this AI-first without properly thinking about all these cases won't learn all the edge cases this flow can contain. So if something goes wrong, you end up with nobody who has the "maintainer experience" for that code - you don't have the one person who is knowledgeable about that topic - everyone in your company is in the same spot as me - having to research the topics again to understand them enough to be able to debug the incident which happened.
This, pretty much. It's very hard to forget theory, especially if it was hard-learned with lots of practice. But spend enough time away from the doing, and the details start to blur; the only things left are the intuition behind them and a general sense of how it's done.
But with computer programs, the devil is in those details, so even if you're the senior designing the whole service, you either have someone on staff who's responsible for the implementation gotchas, or you have to get familiar with them yourself.
I always compare AI programming to Google. If that's the case, then without internet, without Google, without Stack Overflow, my abilities would be worse than they were in 2000.
If my internet died in 2020, I would also be useless, because I probably couldn't install/download all the libs/frameworks, etc.
But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions and didn't have any breaking changes, I would argue I'd be much better at it now.
The internet, search, etc. are needed to understand how to set up libs/frameworks/APIs, but logic itself isn't something that I could possibly forget. AI will help get those setups done quicker without me having to search, but arguably it's all useless information that will get out of date and that I really don't even need to know. I don't need to know off the top of my head what the perfect modern tsconfig setup should look like, or what the best monorepo framework is and how to set it up so it would scalably support all the different coding languages for different purposes.
I used to be an expert at PHP, but I haven’t written any in over a decade. I can still read it, but it would take me a little while to get back to where I was (hopefully I’ll never need to). The same thing could easily happen due to AI.
>I see up close people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.
I find myself thinking more and my thinking is of higher quality. Now I have 30 years of fucked up projects experience, so I know all the rakes I could step into.
I relate to the idea of having a different level of thinking now with AI. How would you evaluate whether someone is overestimating themselves?
As in, every little thing that used to be too much effort before, I can now easily get the info or the data for with a prompt. Data analysis of something that might otherwise have taken hours to figure out: I can just have AI write scripts for everything, which allows me to see more data about everything that was previously out of reach. Now you will probably ask, of course, "how do I know the data is accurate?" I can still cross-reference things, and it is still far faster, because even if I had spent hours before trying to access that data, there wouldn't have been similar guarantees that it was accurate.
I am thinking so much more now about things that I couldn't possibly have had time to think about before, because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations, data about everything, being able to study/learn everything quickly, etc.
It sounds like you're optimizing for a system of self-deception. If you never check how the data is collated, but rather whether the collation appears consistent, you will eventually be left only with data that has the appearance of consistency, regardless of how correct it is.
Yeah, I got it. That's what consistency means. But appearing consistent isn't the same as being correct. You can't check the latter without an exhaustive check on the data, but doing that kind of defeats the purpose of off-loading the query to an AI.