Hacker News | nerder92's comments

It was never about the runner; it has always been about technology and innovation. Shoe tech is just one piece of it, alongside better nutrition, novel training techniques, better air quality, etc.

Of course innovation in shoes will have a bigger marginal impact (because physics).


Can't help but think that the wins and records set barefoot and without refreshments in the 1960s say a bit more about the runner than running with essentially spring-loaded shoes, lab-optimized nutrition gels, computer-optimized pacing strategies, and multi-sensor real-time measurement devices.

It's also somewhat ironic for a race supposedly modeling a messenger running the distance in an emergency situation.


Also makes me wonder whether the messenger had done hard cardio training beforehand (meaning the fighting), or some messengering before that.


For sure, but most of that innovation pre-super-shoes was about optimizing the runner, not the shoes.


Of course it's also about the runner, that's why the Kenyan models dominate with their biological innovation.


This is supposed to be a contrarian opinion, yet it's rhetoric repeated non-stop in the “build in public” community. Of course lean is a good approach; it makes sense, and most engineers know this. It's not a new concept; we've been doing this for years in every branch of engineering.

The invented “people start with a k8s cluster for 5 users” scenario doesn’t really exist. It's just a story repeated ad nauseam to fit a narrative that helps them justify their choices. This position is just as dogmatic, if not more so, than the alleged dogma it attempts to disrupt.

Smart technical leaders know that technical decisions only matter in context, never in absolutes. The right answer is always “it depends”.

I can agree that there is a tendency to prematurely optimize infra, as a direct consequence of a lack of measurement, especially in young, busy startups. One could argue that premature optimization might even be the smart choice when you don’t have enough data: in the best-case scenario (your startup does well) you’ve saved some time; in the worst case you’ve lost some money, which, depending on the situation, might be less valuable than the time spent maintaining, and later refactoring, infra.


I've seen A LOT of public sector projects starting out with loads of Azure services and >$3000 monthly bills for applications that could've easily run on a single VM.


I've seen Big Data pipelines (Hive tables, Spark jobs for queries, data engineers setting it all up) for what was ultimately a 5-10 GB dataset.

Companies cargo-culting Big Data stacks when their data is nowhere near big is very prevalent.


This is a structural problem, not an awareness one. It's not like they don’t know they could run it on a $5 VPS; the problem is that there are no incentives to do so. You’d be surprised how much of engineering exists to address organisational challenges rather than technical ones (e.g. micro-services).


Same. They've been sold a bill of goods by big cloud tech and company.


> “people start with a k8s cluster for 5 users” doesn’t really exist

Most people in the BiP (build in public) community these days barely know how to deploy a database or host something with nginx. It's all Vercel, Supabase, AWS, Clerk, yada yada. Cost aside, I think people are addicted to complexity.


> The invented “people start with a k8s cluster for 5 users” doesn’t really exist.

Can confirm it exists, especially with founders self-coding with LLMs now.


I recall reading multiple Ask HN posts about people trying to get attention from a cloud provider because they accidentally ran up thousands of dollars in charges. I've seen large companies do this too: even if you think something is just a dev environment, it's the cloud provider's production environment, and they will charge you per their ToS for everything you use, no matter what the customer's usage profile looks like.


Experienced dev with limited hands-on big-tech infrastructure experience here. Based on the results I get from LLMs in domains I understand, how do they even get this stuff running using AI?


Monkeys and typewriters. Throw enough character input and "It's not working" into an LLM and it will eventually produce... something.

And since it tends to reach for the most web-represented solution, that means infinite redis caches doing the same thing, k8s, and/or Vercel.

Best mental model: imagine something that produces great tactical architecture, with zero strategic architecture, running in a loop.


I can accept this is true; they will for sure exist. Of course, if this reflects their ability to make choices, technical or not, they are completely doomed.


https://pardonned.com/pardon/details/cindy-marie-griffith-fk... How come this ended up in the drug offence category?

The sentence reads: “Distribution of satellite cable television decryption devices”


Good catch. I think the classifier saw the "distribution" in the sentence and assumed it was drug-related. I could probably run each sentence against an AI model to classify things.

I'd probably run it against something small like Haiku so it doesn't cost me an arm and a leg.
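A minimal sketch of what that per-sentence classification could look like. The category list, prompt wording, and function names here are my own assumptions, not taken from the actual site; the model call itself is omitted, since any small model (like Haiku) would slot in where the reply string comes from.

```python
# Hypothetical sketch of LLM-based sentence classification. The category
# list and prompt wording are assumptions, not taken from the real site.
CATEGORIES = ["drug offence", "fraud", "firearms", "intellectual property", "other"]

def build_prompt(sentence: str) -> str:
    """Build a single-label classification prompt for a small model."""
    return (
        "Classify the following criminal sentence into exactly one of: "
        + ", ".join(CATEGORIES)
        + ".\nReply with the category name only.\n\n"
        + f"Sentence: {sentence}"
    )

def parse_category(reply: str) -> str:
    """Normalise the model's reply; fall back to 'other' instead of guessing."""
    cleaned = reply.strip().lower()
    return cleaned if cleaned in CATEGORIES else "other"
```

Constraining the model to a fixed label set and falling back to "other" on anything else avoids the keyword-trap failure mode ("distribution" alone pushing things into the drug bucket).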


We are experimenting with this kind of development style, and from my experience so far it shifts a lot of the complexity of building into the story-writing and manual-testing phases.

Since I need to fully hand over the task and let the agent(s) essentially one-shot the implementation, I need to be way more specific and clear in giving it context and goals; otherwise I’m afraid it will start building code purely by accumulation, creating a pile of unmanageable garbage.

Also, changes which require new UI components tend to need more manual adjustments and detailed testing to reach the level of UX polish our users expect at this stage.

I’m starting to develop a feel for which tasks can be done this way, and I think they represent more or less 20 to 30% of the tasks in a normal sprint. The other 70-80% will have diminishing returns, if not an actually negative return, as I will need to familiarise myself with the code before being able to instruct the AI to improve/fix it.

From your experience building this, what’s your take on:

1. How does your product help reduce the project management/requirements gathering needed for each individual task to be completed with a sufficient level of accuracy?

2. Your strong point seems to be parallelisation, but considering my previous analysis I don’t see how this is a real pain for small teams. Is this intended to be more of a tool for scale-ups with a stable product mostly in maintenance/enhancement mode?

3. Are you imagining a way for this tool to implement some kind of automated e2e testing of the code for each task?


Thanks! What tools have you been experimenting with?

Agreed that this evolution pushes much of the work into describing desired outcomes and giving sufficient context.

To your questions:

Emdash helps reduce the setup cost of each environment by letting you open an isolated git worktree, copying over env variables and other desired context, and then preserving your conversation per task. That said, you still need to write clear goals and point it in the right direction.

I think it's less about team scale and more about individual throughput. My working mode is that I'm actively working on one or two tasks, switching between them as one runs. Then I have a long list of open tasks in the sidebar that are more explorative: quick reviews, issue creation, etc. So for me it's not about one-shotting tasks, but about navigating between them easily as they progress.

Automated e2e testing is tricky, particularly for rendering. I think roborev (https://github.com/roborev-dev/roborev) is moving in the right direction, generating bug reports synchronously per commit and not just once you create a PR. I also think what https://cursor.com shipped today, with computer-use agents testing interfaces, is very interesting.


I found this project from a YouTuber I follow and it seems in line with the values of the HN community.

What do you think about this project? Has something like this been tried before?


the link returns 404 now


I love this. This manifesto is what got me into tech. Thank you for sharing


Is this the lmgtfy of the AI era?


This is more like Let Me Not Google That For You and You Shouldn’t Either


When Trump’s video about the Kirk assassination (https://www.youtube.com/watch?v=2yCu21pL73s) went viral, I noticed a wave of replies aggressively insisting the video was not AI-generated. I suspected those replies were mostly from bots, so I manually checked a few accounts, and most of them weirdly followed the same pattern:

  1. Created in 2013

  2. Have between 7 and 10 subs

  3. Have between 2 and 3 video playlist

  4. Account bio extremely generic

After a few minutes spent manually checking, I decided to build a tool that:

  • Downloads all YouTube comments + replies

  • Runs sentiment analysis on each

  • Detects bot-like behavior using heuristics + LLMs

On this video, over 40% of comments look like bots, and they overwhelmingly argue the video wasn’t AI-generated.

I didn't go as far as trying to understand where these accounts are coming from; my main goal was to confirm whether this was real coordination.

I'm no expert in data nor in Python (I've mostly vibe-coded it). I’d love to get some help from folks who might be interested in this topic.
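For the curious, the heuristic side can be sketched very simply from the signals listed above. The `Account` fields and the generic-bio examples are my own naming and placeholders; only the thresholds (creation year, sub count, playlist count) come from what I observed.

```python
# Sketch of the heuristic bot scoring based on the observed signals.
from dataclasses import dataclass

GENERIC_BIOS = {"", "i love watching videos", "just a regular guy"}  # hypothetical examples

@dataclass
class Account:
    created_year: int
    subs: int
    playlists: int
    bio: str

def bot_score(acc: Account) -> int:
    """Count how many of the observed bot signals an account matches."""
    return sum([
        acc.created_year == 2013,
        7 <= acc.subs <= 10,
        2 <= acc.playlists <= 3,
        acc.bio.strip().lower() in GENERIC_BIOS,
    ])

def looks_like_bot(acc: Account, threshold: int = 3) -> bool:
    return bot_score(acc) >= threshold
```

The LLM pass then only has to adjudicate the borderline cases, which keeps the cost down compared to classifying every commenter.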


This should come from this paper I guess: https://www.biorxiv.org/content/10.1101/704080v3

