
I’ve observed this too. I’m sceptical of the all-in-one builders; I think the most likely route to get there is for LLMs to eat the smaller tasks as part of a developer workflow, with humans wiring them together, and then to expand with specialised agents that move up the stack.

For instance, instead of a web designer AI, start with an agent to generate tests for a human building a web component. Then add an agent to generate components for a human building a design system. Then add an agent to generate a design system using those agents for a human building a web page. Then add an agent to build entire page layouts using a design system for a human building a website.
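As a rough illustration of that shape (purely a hypothetical sketch: the agent names and the human-review step are made up, and each lambda stands in for an LLM call):

    from typing import Callable

    Agent = Callable[[str], str]

    def with_human_review(agent: Agent, label: str) -> Agent:
        # Wrap an agent so a human checkpoint sits between it and the next level up.
        def reviewed(spec: str) -> str:
            draft = agent(spec)
            print(f"[human review] check the {label} output before it feeds the next step")
            return draft  # in practice a person edits or approves the draft here
        return reviewed

    # Stub agents standing in for LLM calls at each rung of the ladder.
    generate_tests = with_human_review(lambda spec: f"tests for {spec}", "test")
    generate_component = with_human_review(lambda spec: f"component for {spec}", "component")
    generate_design_system = with_human_review(lambda spec: f"design system for {spec}", "design system")

    # Today a human wires the rungs together; later, a higher-level agent could.
    spec = "signup form"
    tests = generate_tests(spec)
    component = generate_component(spec)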

Even if there’s a 20% failure rate that needs human intervention, the human only has to step in on one task in five, which is still roughly a 5x productivity gain. When the failure rate gets low enough, move up the stack.
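Back-of-the-envelope arithmetic for that 5x, assuming the agent’s successful tasks cost the human essentially nothing and the failures cost about what the task did before:

    def productivity_multiplier(failure_rate: float) -> float:
        # Human effort per task drops from 1.0 to failure_rate,
        # so the same human hours cover 1 / failure_rate as many tasks.
        return 1.0 / failure_rate

    print(productivity_multiplier(0.20))  # -> 5.0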


I’ve found that getting the AI to write unit tests is almost more useless than getting it to write the code. If I’m writing a test suite, the code is non-trivial, and the edge cases are something I need to think about deeply to make sure I’ve really covered them, which is absolutely not something an LLM will do. Most of the time, it’s only by actually writing the tests that I figure out all of the possible edge cases; if I just handed the job off to an LLM, I’m very confident my defect rate would balloon significantly.



