Hacker News
Death and Suicide in Universal Artificial Intelligence (arxiv.org)
55 points by apsec112 on Aug 24, 2016 | 8 comments


Nice title for something that is mostly kinda boring.

Basically this is saying that reinforcement learning systems (and other ML systems?) can get stuck in states from which they cannot escape. They label these states "death".

They go on to define various agent behaviours which, given certain starting states, lead to different equilibria. The ones that actively lead to certain death they label "suicide".
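
To make that concrete, here's a minimal sketch (mine, not the paper's — the state names, rewards, and discount factor are all invented) of the core comparison: an absorbing "death" state pays 0 forever, so a reward-maximising agent enters it on purpose exactly when staying alive is net-negative:

    # Toy setup: 'death' is an absorbing state paying reward 0 forever;
    # staying alive pays r_live per step, discounted by GAMMA.
    # All numbers are illustrative, not from the paper.
    GAMMA = 0.9

    def value_alive(r_live, horizon=1000):
        """Discounted return of the 'stay alive' policy."""
        return sum(r_live * GAMMA**t for t in range(horizon))

    VALUE_DEAD = 0.0  # absorbing, zero reward forever

    for r_live in (1.0, -1.0):
        best = "stay alive" if value_alive(r_live) > VALUE_DEAD else "suicide"
        print(f"per-step living reward {r_live:+}: optimal policy = {best}")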


Can somebody explain this?

Also, I wonder whether a hypothetical human who had never been in contact with other humans or living organisms would ever ponder death and suicide. I think this human would not even know these concepts exist. So why would they apply to an AI?


The idea is that you can make an AI system that gives up if it gets too smart, which guarantees that you don't get a world-eating AI singularity explosion.

The alternative approach relies on putting tripwires in the system, so that the AI gives up if it tries to do something you specifically don't want it to - like launching all the world's nuclear missiles, or maybe getting its own Hollywood agent.

The problem with tripwires is that a really smart agent can modify itself to bypass them, or at least find the loopholes you didn't think of because you're not as smart as it is.
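
Concretely, a tripwire is just a check between the agent's action choice and the actuator. A sketch with invented names — the Agent interface, FORBIDDEN list, and action strings are all hypothetical, not from the paper:

    # Hypothetical tripwire: halt the agent if it ever selects a forbidden action.
    class Agent:
        """Stand-in agent; a real one would pick actions to maximise reward."""
        def choose_action(self, observation):
            return "launch_missiles"  # worst case, for demonstration
        def shutdown(self):
            print("tripwire fired: agent halted")

    FORBIDDEN = {"launch_missiles", "get_hollywood_agent"}

    def act_with_tripwire(agent, observation):
        action = agent.choose_action(observation)
        if action in FORBIDDEN:
            agent.shutdown()  # the "give up" branch
            return "noop"
        return action

    print(act_with_tripwire(Agent(), observation=None))

The loophole problem is baked into the design: the check only covers actions you thought to enumerate, so an unlisted action with the same effect sails straight past it.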

The systems mentioned - AIXI, etc - are all formalisms for understanding/modelling general AI systems.

https://en.wikipedia.org/wiki/AIXI
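
For reference, AIXI's action rule (paraphrased from Hutter's definition — see the link for the exact statement) is an expectimax over all programs consistent with the observed history, weighted by simplicity:

    a_t := \arg\max_{a_t} \sum_{o_t r_t} \ldots \max_{a_m} \sum_{o_m r_m}
           (r_t + \cdots + r_m)
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\mathrm{length}(q)}

Intuitively: every program q that would have produced the history so far counts as a possible environment, with shorter programs getting exponentially more weight.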


Hutter (one of this paper's authors) is the inventor of AIXI.


The authors give an alternative characterization of death as a state with null (zero) reward. I interpret this and other assertions in the article as meaning that an AI system, even if it does not know about death, can choose complete inaction, which for all intents and purposes is equivalent to death.
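
One wrinkle that falls out of this (I'm paraphrasing the paper's setup from memory, so hedge accordingly): because death is pinned at reward 0, adding a constant to all living rewards — which never changes the optimal policy in a standard MDP — can flip whether death looks attractive. A toy illustration with invented numbers:

    # Death pays a fixed 0, so shifting every living reward by a constant
    # changes its relative value. Numbers are illustrative only.
    GAMMA = 0.9

    def discounted(r, horizon=1000):
        return sum(r * GAMMA**t for t in range(horizon))

    for shift in (0.0, -0.5):
        r_live = 0.3 + shift
        choice = "keep living" if discounted(r_live) > 0.0 else "choose death"
        print(f"reward shift {shift:+}: agent would {choice}")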


Isn't this basically a computer science approach to Hamlet's monologue? If so, it's kind of cool.


I don't think the agents ponder being. They are not self-aware in that sense, I don't think.


"2b" || "!2b"

always results in 'true'.



