Sunday, 13 May 2018

Google, A.I. and Predictions of Hi-tech Apocalypse: The Exponential Detriment Question

I wonder just how many interesting little cases we already have that confirm our most conservative fear: that deep-learning A.I. may 'deep learn' to do things that do not benefit humans. By way of one small example, Prof Mark Griffiths and I have just published a peer-reviewed paper showing that Google's deep-learning autonomous A.I. RankBrain appears to be preventing humans from using the previously disruptive Big Data power of the IDD method in Google Books to bust deeply embedded establishment myths. The paper is here: http://www.mdpi.com/2076-0760/7/4/66

How many such small examples would we need to accumulate before we could estimate the probabilities (1) that an A.I. apocalypse is currently more likely than not, and (2) that an A.I. apocalypse is currently imminent? We might call this problem the "Exponential Detriment Question". One thing is for sure: there can be no harm to humans in collecting the necessary data now.
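
To make the question a little more concrete, here is a minimal sketch of how accumulating small cases might update a probability estimate. It assumes a simple Bayesian odds update, and every number in it (the prior, the evidential weight of each case, the number of cases) is entirely hypothetical; it illustrates the arithmetic of summing small examples into a probability, not any claim about the real values.

```python
# Toy sketch of the "Exponential Detriment Question": how many small,
# independently observed cases of A.I. acting against human interests
# would be needed before the posterior probability of serious detriment
# crosses some threshold? All numbers are hypothetical, for illustration.

def posterior_after_n_cases(prior, likelihood_ratio, n):
    """Bayesian odds update of a prior after n independent confirming cases.

    prior            -- initial probability of the detriment hypothesis
    likelihood_ratio -- how much more likely each case is if the
                        hypothesis is true than if it is false
    n                -- number of observed cases
    """
    odds = prior / (1 - prior)
    odds *= likelihood_ratio ** n      # each case multiplies the odds
    return odds / (1 + odds)

if __name__ == "__main__":
    prior = 0.01              # hypothetical sceptical prior
    likelihood_ratio = 2.0    # hypothetical weight of each small case
    for n in range(11):
        p = posterior_after_n_cases(prior, likelihood_ratio, n)
        print(f"{n:2d} cases -> posterior probability {p:.3f}")
```

Run as written, the toy example simply shows that even a sceptical prior climbs past 0.5 after a modest number of confirming cases, which is why collecting such cases now costs nothing and may tell us a great deal.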
