
Someone unminified the JS, and it turned out that a bunch of the REST endpoints it knew about were just unverified CRUD endpoints for the site.

https://archive.ph/2025.02.14-132833/https://www.404media.co...
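To make the failure mode concrete, here's a minimal sketch of an "unverified CRUD endpoint" next to the bare-minimum fix. Everything here (the types, the handler names, the bearer-token check) is invented for illustration; the real endpoints weren't published.

```typescript
// Hypothetical sketch: a write handler that never checks who is calling,
// versus one that at least demands a credential. Illustrative only.
type Req = { headers: Record<string, string>; body: string };
type Res = { status: number };

const db = new Map<string, string>([["mission", "original text"]]);

// What the unminified JS implied existed: a write path open to anyone.
function unverifiedUpdate(req: Req, id: string): Res {
  db.set(id, req.body); // no authentication, no authorization
  return { status: 200 };
}

// The minimum you'd expect instead: reject writes without a valid credential.
function verifiedUpdate(req: Req, id: string, validToken: string): Res {
  if (req.headers["authorization"] !== `Bearer ${validToken}`) {
    return { status: 401 };
  }
  db.set(id, req.body);
  return { status: 200 };
}
```

In the unverified version, any anonymous POST rewrites the database row; that's the class of bug being described.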



Smells exactly like an LLM-created solution.


Or just what happens when you hire a bunch of 20-year-olds and let them loose.

That's currently how I model my usage of LLMs in code. A smart veeeery junior engineer that needs to be kept on a veeeeery short leash.


Yes. LLMs are very much like a smart intern you hired with no real experience who is very eager to please you.


IMO, they're worse than that. You can teach an intern things, correct their mistakes, help them become better and your investment will lead to them performing better.

LLMs are an eternal intern that can only repeat what it's gleaned from some articles it skimmed last year or whatever. If your expected response isn't in its corpus, or isn't in it frequently enough, and it can't just regurgitate an amalgamation of the top N articles you'd find on Google anyway, tough luck.


The Age of the Eternal Intern


LLMs are to interns what house cats are to babies. They seem more self sufficient at first, but soon the toddler grows up and you're stuck with an animal who will forever need you to scoop its poops.


And the content online is now written by Fully Automated Eternal September


Today is Friday the 11490th of September 1993.


Without a mechanism to detect output from LLMs, we’re essentially facing an eternal model collapse with each new ingestion of information from academic journals, to blogs, to art. [1][2]

[1] https://en.m.wikipedia.org/wiki/Model_collapse

[2] https://thebullshitmachines.com/lesson-16-the-first-step-fal...


> You can teach an intern things, correct their mistakes, help them become better and your investment will lead to them performing better.

You can't do it the same way you do with a human developer, but you can get a somewhat effective form of it through things like .cursorrules files and the like.
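For anyone who hasn't seen one: a .cursorrules file is just a plain-text standing prompt checked into the repo root. The rules below are made-up examples of the kind of "teaching" you can persist, not anything prescribed by the tool:

```
# .cursorrules (illustrative example)
- Every mutating API route must check authentication and authorization first.
- Use parameterized queries only; never build SQL by string concatenation.
- Escape all user-supplied values before rendering them into HTML.
- Prefer small, reviewed diffs; do not refactor unrelated code.
```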


Even at 20 years old I would not have done this.


The difference is that today's digital natives regard computers as magic and most don't know what's really happening when their framework du jour spits out some "unreadable" text.


So much this. I was interning at a government entity at 20, and I already knew you needed credentials to do shit. Most frameworks give you this by default, for free; we're so incredibly screwed with these folks running rampant and destroying the government.


One who thinks "open source" means blindly copy/pasting code snippets found online.


It's definitely both. A bunch of 20-year-olds were let loose to be "super efficient." So, to be efficient, they use LLMs to implement what should be a major government oversight webpage. Even after the fix, the list is a few half-baked partial document excerpts with a few sentences saying, "look how great we are!" It's embarrassing.


Does it? At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.

Maybe they used Grok ;P


> At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.

Not my experience at all. Every LLM produces lots of trivial SQLi/XSS/other-injection vulnerabilities. Worse, they seem to completely skip authorization, business logic, error handling, and logging even when prompted to do so.
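To make the XSS case concrete, here is the classic shape of that bug and its fix. `escapeHtml` is a minimal hand-rolled helper for illustration; real code would lean on a templating layer that escapes by default:

```typescript
// The injection-prone shape LLM output often produces: user input
// interpolated straight into HTML.
function unsafeGreeting(name: string): string {
  return `<p>Hello, ${name}</p>`; // user input lands in the page verbatim → XSS
}

// Illustrative fix: escape HTML metacharacters before interpolation.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function safeGreeting(name: string): string {
  return `<p>Hello, ${escapeHtml(name)}</p>`;
}
```

With the unsafe version, a `name` of `<script>alert(1)</script>` executes in the visitor's browser; the safe version renders it as inert text.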


Does it, though? The saying goes that we shouldn't attribute to malice what incompetence adequately explains, but that takes more benefit of the doubt than usual for Musk's retinue.

Smells like getting a backdoor in early.


Apparently they get backdoors in as incompetently as they create efficiency.


My first guess is that this is an unauthenticated server action.[0]

0 - https://blog.arcjet.com/next-js-server-action-security/


Maybe DOGE should have used an LLM to generate defenses.


They did, and this is what they got.


Just checked the DOGE website; I'm not so sure about this theory, given that POST requests are blocked and the only API you can find (i.e. /api/offices) only supports GET requests, and it 404s if the UUID doesn't match.

I don't see any CRUD endpoints for modifying the database.


DOGE noticed. They might have "fixed" the vulnerability by now.

https://doge.gov/workforce?orgId=69ee18bc-9ac8-467e-84b0-106... is what's linked to by the "Workforce" header, and it now looks different than the screenshots


Good thing we have the best and brightest at DOGE!


Well, they pay for a blue checkmark, so they _must_ be the cleverest we have.


It's been a while since I last saw a CMS pulling data from a database... It's a miracle the website didn't crumble under the load.


Put a CMS behind a well-configured CDN and it's essentially a static site generator. If you have cache invalidation figured out, you get all the speed and scalability benefits of a static site without ever having to regenerate your content.
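Sketched as HTTP semantics (the header values and paths below are illustrative examples, not any particular CDN's config): the origin tells the edge it may cache rendered pages, and the CMS purges the affected URLs on publish.

```
# Response header the CMS sends so the CDN can cache rendered pages:
Cache-Control: public, s-maxage=86400, stale-while-revalidate=60

# On publish/edit, the CMS invalidates just the changed paths, e.g.:
PURGE /workforce
```

Browsers still revalidate, but the edge serves cached copies for up to a day, so the database is only hit when content actually changes.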


I’m guessing it didn’t have much in front of it because the management endpoints were accessible from the public Internet. I think you mentioning the “well configured CDN” is key here. If there was a CDN in front of it, it wasn’t well configured.

BTW, I spent a lot of my career configuring load balancing, caches, proxies, sharding, and CDNs for Plone (a CMS that’s popular with governments) websites.


Yeah sorry, I didn't mean to imply these folks have any clue what they're doing. I misread your comment as "it's been a while since I saw a CMS-based site, big sites are all static now" instead of "it's been a while since I saw a CMS rawdogging it."




