Hacker News

> A limitation I see with AI for coding is that your problem must be mainstream to get decent results.

Or to phrase it another way: there must be examples of the technique you want to "generate" in the training set, otherwise the horrifically overfitted model cannot work.

Probably a little too cynical, but this matches my experience asking LLMs "unusual" questions.



No, that's about where we are. Today's LLM coding is basically automated Stack Overflow copy/paste - not too bad for anything simple and mainstream, but unable to reason about code from first principles, and fond of making up random shit that looks good but doesn't work.


Seems realistic.

I tried asking an LLM for a full code snippet for the first time yesterday (until now I'd been using them as search engines better than Google, which isn't saying much).

It produced code that compiled but failed to do anything, because the third API call it generated returned an error instead of doing what the LLM said it would do.

I didn't spend time working out what was wrong, because we already had a library that did the same thing; I was mostly testing what the LLM could do.
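The failure mode described above can be sketched in miniature. This is a hypothetical illustration, not the poster's actual code: the `api_*` functions below are stand-ins for a third-party API, with the third one returning an error object instead of the data the LLM claimed it would return. The point is that such code imports and runs fine right up until the bad call, so checking each response is the only way to catch it.

```python
# Hypothetical sketch of LLM-generated code that "compiles" but fails at
# runtime. The api_* functions are invented stand-ins, not a real library.

def api_login(user):
    # Call 1: behaves as the LLM described.
    return {"token": "abc123"}

def api_list_items(token):
    # Call 2: behaves as the LLM described.
    return ["item-a", "item-b"]

def api_fetch_extra(token):
    # Call 3: in reality returns an error object, not the data
    # the LLM said it would.
    return {"error": "unsupported endpoint"}

def run_generated_snippet():
    session = api_login("alice")
    items = api_list_items(session["token"])
    extra = api_fetch_extra(session["token"])
    # Verifying the response instead of trusting the LLM's description:
    if isinstance(extra, dict) and "error" in extra:
        raise RuntimeError("third call failed: " + extra["error"])
    return items, extra
```

Running `run_generated_snippet()` raises at the third call, mirroring the experience above: everything type-checks and executes, but the generated code's claims about the API simply weren't true.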



