>First, I think what we're searching for isn't intelligence so much as consciousness. So I'm going to treat general AI and consciousness as pretty much the same thing.
That's not how I think about it at all. I think you're underestimating how alien AI could be to us, in general. I.e., I think you're anthropomorphising it unnecessarily.
There's an idea called the orthogonality thesis, which says:
The intelligence of an arbitrary agent can vary independently of its goals.
Here, I'm using intelligence to mean just the ability of an agent to generate/evaluate/execute plans to further its goals. A modern chess AI on modern hardware is more intelligent than one from the 1980s.
In principle, you could have any arbitrary utility function attached to any level of intelligence.
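To make that separation concrete, here's a toy sketch (mine, not any real system) of a brute-force planner where the "intelligence" knob (search depth / compute budget) and the "goal" knob (the utility function) are completely independent parameters. The `simulate` and `actions` arguments and the dict-shaped state are just placeholders for the example:

```python
# Toy illustration of orthogonality: the same search routine works for
# any utility function and any compute budget. All names here are made up.

def plan(utility, simulate, actions, state, depth):
    """Score every action sequence up to `depth` by simulating it and
    applying `utility` to the resulting state; return the best sequence."""
    if depth == 0:
        return [], utility(state)
    best_seq, best_score = [], float("-inf")
    for a in actions:
        seq, score = plan(utility, simulate, actions, simulate(state, a), depth - 1)
        if score > best_score:
            best_seq, best_score = [a] + seq, score
    return best_seq, best_score

# Arbitrary goals plug into the same machinery:
count_paperclips = lambda state: state.get("paperclips", 0)
count_staples = lambda state: state.get("staples", 0)
# "More intelligent" here just means a larger `depth` (more compute),
# which says nothing about which utility function is being maximised.
```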
This idea was brought up because often, when people heard about the paperclip maximiser (an agent that tries to maximise the number of paperclips, and thus wants to take apart everything in its lightcone and turn it into paperclips), they said:
"But why would something so clever do something so stupid and pointless? A superintelligence would surely realize it was being stupid and decide to do something smarter."
But there's no reason to think this! When you take away the agent's goal of maximising paperclips, there's no perfect platonic ghost of consciousness left! The paperclip maximiser, as defined, is just an algorithm that predicts the future, searches for actions that maximise paperclips, and executes them.
Given unbounded compute power, it can just look through arbitrarily many counterfactual simulations of the future given different plans, pick the plan that maximises the number of paperclip-like structures in the universe, and execute it.
The rest is implementation details, given that we don't have unbounded computing power, but the point is that such a system doesn't need to be conscious to be powerful.
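For concreteness, the idealised agent described above could be written down roughly like this. It's a hedged sketch that assumes a perfect world model and unbounded compute; `simulate` and `count_paperclips` are hypothetical placeholders, not anything that actually exists:

```python
# Idealised, unbounded-compute version of the paperclip maximiser as described
# above. Everything here is a hypothetical placeholder for illustration.
from itertools import product

def paperclip_maximiser(world, actions, horizon, simulate, count_paperclips):
    # Enumerate every possible plan (action sequence) up to the horizon,
    # counterfactually simulate the future each plan produces, count the
    # paperclip-like structures in that future, and return the argmax.
    return max(
        product(actions, repeat=horizon),
        key=lambda plan: count_paperclips(simulate(world, plan)),
    )
    # The whole agent is this loop. There is no extra component left over
    # that could "step back" and decide that paperclips are a silly goal.
```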
If you have an AI without a utility function, you don't have an AI. 'General intelligence' just means the AI's computing/reasoning/versatility power, in a vague sense. It has nothing to do with whatever particular utility function an agent might be trying to maximize.
The paperclip machine doesn't 'want' to colonize the stars; it's just a system that will do that if it can, because tautologically, it's a system that tries to turn as much as it can into paperclips.
I don't really think there is a Fermi 'paradox'. We don't have solid priors on how often agents arise in the universe, so the fact that there aren't any visible aliens should just lower our estimate of how likely life is. Or maybe there's an anthropic-principle argument to be made here: we probably wouldn't observe a universe where aliens come to visit, because in nearly all possible universes where intelligent life survives and goes from star to star, it eats all the stars very quickly on astronomical timescales, so the odds of agents like us arising while there's still visible evidence of other agents are incredibly slim.
Like I said, we don't even know what consciousness is.
To say there is a platonic consciousness is kind of saying you know what that looks like. And the idea that you could grant something an intelligence capable of solving the problem of colonizing the stars while denying it the ability to choose its own goals is an assumption I'm not willing to grant.
In other words, there is "a reason to think this!" We are that reason. Other life on this planet is reason to think this.