Stephen Hawking Talking About AI

Today, while searching for whether Hawking used eye tracking for typing or speaking, I found a YouTube video of a Q&A with Hawking at Imperial College London.

The first question was: “Do you think AI can one day take over humanity?”

Hawking said:

  • The brain of a worm is not qualitatively different from that of a computer.

  • The brain of a worm is not qualitatively different from that of a human.

  • So the brain of a human is not qualitatively different from that of a computer.

  • So AI can definitely take over humanity one day.

  • To avoid this, we need alignment.

I was shocked by his answers because, at first sight, they all seemed right.

However, when I thought about it more deeply, I found something not quite right: it is true that GPT-3.5 is not qualitatively different from GPT-1, GPT-2, or GPT-3, yet GPT-3.5 is far more powerful than its predecessors; the difference that matters is quantitative. I believe we should stop fighting over the definitions of “qualitative”, “quantitative”, “understanding”, “awareness”, “right”, “wrong”, and so on. Just focus on whether we can build more powerful AI, whether we can align AI with human values, and whether we can use AI to help people (say, help paralyzed people walk again).

Stop talking like a philosopher and just do it!

Shut up and calculate!

Another thing: never respect somebody without thinking it through. I don’t think Stephen Hawking knew AI better than Hinton, LeCun, Ilya, Kaiming, Feifei, Andrew, or Jensen Huang. Not even better than me.