
I choose to talk in a respectful way because that's how I want to communicate, not because I'm afraid of retaliation or burning bridges. It's because I am caring and conscious. If I think that something doesn't have feelings or long-term memory, whether it's an AI or a rock on the side of a trail, that in no way leads me to be abusive toward it.

Further, an LLM being inherently sycophantic leads to it mimicking me, so if I talk to it in a stupid or abusive manner (abuse being just another form of stupidity, in my eyes), it will behave stupidly. Or at least that's what I'd expect. I've not researched this in a focused way, but I've seen examples where people get LLMs to respond very unintelligently by posing riddles or intelligence tests in highly stylized speech. I wanted to say "highly stupid speech", but "stylized" is probably more accurate, e.g.: `YOOOO CHATGEEEPEEETEEE!!!!!!1111 wasup I gots to asks you DIS.......`. Maybe someone can prove me wrong.



My wondering was never about being abusive, just about having a dry tone and cutting the unnecessary parts, some sort of middle ground if you will. Prompting "yo chatgeepeetee whats good lemme get this feature real quick" doesn't make sense to me, mostly because it's anthropomorphizing the model, and it carries the same unnecessary verbiage as "Good morning ChatGPT, would you please help me with ..."


I guess in part I commented not on what you said, but on seeing people be abusive when an LLM doesn't follow instructions or fails to fulfill some expectation. I think I had some pent-up feelings about that.

> having a dry tone and cutting the unnecessary parts

That's how I try to communicate in professional settings (AI included). Our approaches might not be that different.


> seeing people be abusive when an LLM doesn't follow instructions or fails to fulfill some expectation. I think I had some pent-up feelings about that.

Oh, me too: because people are anthropomorphizing the LLM, not because they hurt it. Indirectly, though, I agree that this behaviour can easily affect the way such a person speaks to other humans.


To be fair, I do anthropomorphize LLMs. But, I also anthropomorphize, say, a kitchen knife that I accidentally scrape on something (I think "sorry, knife"). I don't reflect on this much; it's just a pleasant way to relate to my environment. What feelings do you have about people anthropomorphizing LLMs?

Anthropomorphizing might not be the right term, because it's about assigning human attributes. When I talk to my dog, for example, I don't frame it as giving the dog human attributes. In a way, talking to something is part of how I engage my relationship-management circuitry. I don't only relate to humans; I relate to everything in one way or another, and kindness is a pretty nice starting point. As I said, I don't think about this much: I might come up with something more coherent if I did.



