In 1798, Thomas Malthus published “An Essay on the Principle of Population,” in which he argued that population growth would soon outstrip the food supply, leading to widespread famine and collapse.
The prior failed predictions hardly move me at all, for two reasons:
1. There’s very little distance between “major population collapse” and “civilization as we know it is over,” and we can only witness the latter once.
2. You’re leaving out successful predictions. For example, lots of people predicted COVID-19, World War II, and the 2008 financial collapse. It’s not like literally every major prediction has been wrong.
My argument that we should be scared is as simple as:
1. You can model superintelligent AI as an army of superintelligent humans.
2. Would we be scared of an army of superintelligent humans who aren’t aligned with our values? (Obviously.)
3. Would those superintelligent humans necessarily be aligned? I don’t know, but we have enough trouble with alignment in human populations; this is basically what democracy is trying to achieve, and most people seem to think it’s struggling.
Thanks for the thoughtful feedback.
I do think I could dig the well deeper on catastrophic predictions generally. As you say, some have been successful. So can we know, in advance of confirmation, whether a given prediction is likely to hold up? And by what means?
Re: your argument that we should be scared, your framing makes sense, but I always return to the question of probability vs. possibility. While the scenario you lay out seems possible, it does not seem probable: so far, AI has been deployed with safeguards, and government regulation is coming even in the U.S. (see Biden's EO today: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/).
The question is not whether AI deployed in a vacuum could be unsafe (I agree it could be) but rather whether we ought to, for example, "Pause AI" or implement other draconian policies. By my lights, we don't have any good reason to do that stuff.