A policy of Mutually Assured Destruction (MAD) would align the multiverse with the Anthropic Principle, maximizing shared survival and filtering out branches with apocalyptic hostilities.
I'm no philosopher, but I don't think it's okay to recommend drunk driving to people even if there are versions of them in alternate realities that survive.
Doing something that is very likely to get us all killed is likely to just plain get us all killed, and then there will be no anthros around to make observations at all. There's nothing about the universe (or multiverse for that matter) that demands that monkeys keep existing in it.
I appreciate your reply, but your drunk-driving analogy is flawed. A key assumption of MAD is that all sides in a conflict act rationally and understand that peace is the only way to avoid catastrophe. Rational actors would not behave like a bunch of intoxicated fools.
In our world, so far presumed sane, MAD ended the roughly twenty-year cycle of World Wars that killed tens of millions. However, if nuclear-armed superpowers were to act irrationally, the stability provided by MAD would vanish. Irrational behavior, driven by factors such as extreme ideologies, could lead to decisions that defy the logic of deterrence.
In most branches of the multiverse, humans may die off. However, that shouldn’t matter, as I’m only concerned with the quality of life in branches populated with living people. The Anthropic Principle suggests that these are the only branches that matter.
Initially, I was optimistic, thinking there was a way to avoid AI doom. But now, I fear there is no multiverse, and we are all doomed. Thank you. You destroyed all my hope, and, for winning this argument, you get 10 points. :-(
I may be out of my depth, but I think Kambhampati has a different focus than you do. He is deeply researching whether current architectures can reason, and his conclusion is "no". He's not as focused on whether they will be able to reason soon. You, on the other hand, are less concerned with today's AI architectures than with what AIs will be capable of soon. I suspect that if you asked Kambhampati "Might AIs in the year 20xx be an existential threat?" he would probably say yes.
Problem is he's conveniently defining "not reasoning" in a useless way.