Monday 28 May 2018

Demand evidence: What facts do you really know for a "fact"?


Sunday 27 May 2018


Saturday 19 May 2018

Big Data Killed the Darwin Myth

Homeopathic Jokes

Tuesday 15 May 2018

Fact Suppression and Persecution: A case study

Sunday 13 May 2018

Google, A.I. and Predictions of Hi-tech Apocalypse: The Exponential Detriment Question

I wonder just how many interesting little cases we already have that confirm our most conservative fear: that deep-learning A.I. may "deep learn" to do things that do not benefit humans. By way of one small example, Prof Mark Griffiths and I have just published a peer-reviewed paper showing that Google's deep-learning autonomous A.I., RankBrain, appears to be preventing humans from using the previously incredibly disruptive Big Data power of the IDD method in Google Books to bust deeply embedded establishment myths. The paper is here:

How many such small examples do we need to sum to the probabilities: (1) that (currently) an A.I. apocalypse is more likely than not, and (2) that an A.I. apocalypse is (currently) imminent? We might call this problem the "Exponential Detriment Question". One thing is for sure: there can be no harm to humans in collecting the necessary data now.