Two styles of AI risk arguments

Added 2022-07-02, Modified 2022-07-20

A common failure in introducing AGI risk, and how to do better


When introducing people to AI risk, a common failure mode is to go for too much, making claims too far outside their personal Overton window. A good example of a common reaction to this is given here1.

Another example: I attended a talk given to some high school students that used a similar "shock and awe" approach, and everyone I asked afterwards (around 5 people) had the same reaction as in the image. If anything, overbearing introductions like this inoculate the listeners against the field.

Overall, I model introducing weird AI beliefs as a trade-off between:

  1. Probability they agree with the arguments
  2. How fantastical the claims are

Given the goal of "making people take AI risk seriously enough to consider working on it", it's important to make the right trade-offs2: focus on increasing the first number even if you have to decrease the second. Some (in my opinion) good ideas:

And some bad ideas:

Footnotes

  1. For reference, I actually agree with everything in the image; I simply question the approach of immediately mentioning all of your weird beliefs in rapid succession.

  2. Remember what you are optimizing for! Not "show this person how weird my beliefs are and how little I care about social approval", not "signal to the in-group that I'm one of them", but "maximize the probability that the people I'm talking to will (eventually) take AI risk seriously".

  3. Another nitpick: talking about a human-level AI "doing AI research" makes fewer assumptions than invoking rapid self-improvement, a murkier claim about which there is ample debate. This is another trade-off in which we increase 1 (probability they agree) and decrease 2 (how fantastical the claims are).