I wonder just how many interesting little cases we already have that confirm our most conservative fear: that deep-learning A.I. may 'deep learn' to do things that do not benefit humans. By way of one small example, Prof Mark Griffiths and I have just published a peer-reviewed paper showing that Google's deep-learning autonomous A.I., RankBrain, appears to be preventing humans from using the previously incredibly disruptive Big Data power of the IDD method in Google Books to bust deeply embedded establishment myths. The paper is here: http://www.mdpi.com/2076-0760/7/4/66
How many such small examples do we need before they sum to the probabilities: (1) that (currently) an A.I. apocalypse is more likely than not; (2) that an A.I. apocalypse is (currently) imminent? We might call this problem the "Exponential Detriment Question". One thing is for sure: there can be no harm to humans in collecting the necessary data now.
1. ELON MUSK’S BILLION-DOLLAR CRUSADE TO STOP THE A.I. APOCALYPSE https://t.co/sn8JwUakjO— Dr Mike Sutton (@Criminotweet) May 13, 2018
2. Sutton and @DrMarkGriffiths 4x expert peer-reviewed paper that finds a wee bit of confirmatory evidence that the AI apocalypse concern is valid: https://t.co/waIQjoe3EE pic.twitter.com/mvry15kmVm
If you know the importance of what you unearthed, you had better "trumpet it from the rooftops" @DrMarkGriffiths otherwise people like @RichardDawkln will claim you're a dim poor sucker who never understood the importance of what you originally discovered https://t.co/RiRXVgRk1g pic.twitter.com/6jGdExlWrm— Dr Mike Sutton (@Dysology) May 13, 2018