Thoughts on The Master Algorithm

March 14, 2020  ○   8 min read  ○   1508 words

During this semester I was attending a class called 'Brain, Mind and Cognition', where we were required to read some books and write down our thoughts on them while answering given questions.

For this book the two given questions were:

  • In your opinion, what is the most interesting/most stimulating/most exciting thought/concept/idea you encountered in the book? Please explain your position. If the book did not provide any good idea for you, then you can alternatively describe the least interesting or most revolting thought in the book.
  • In your opinion, which of the five tribes of machine learning is the most promising one to contribute to the "master algorithm"? Please explain your position. If the book did not provide any promising aspect that could contribute to the "master algorithm", then you can alternatively describe the least promising tribe of machine learning in the book.

I won't do a tl;dr of the book, but here goes.

My thoughts on The Master Algorithm

In the book "The Master Algorithm" the author, Pedro Domingos provides reader with useful introduction to the whole Machine Learning space. While primary objective of the book is to give the reader low level explanation of different schools of thought, he does it by constructing an ideal, that all machine learning experts seemingly seek. This ideal is - as title suggests - The Master Algorithm. The author defined it as a self learning algorithm, capable of learning any given task. Throughout the book, author makes many observations and predictions, some justified, some less so I think. In the first part of the essay I am going to address author's remarks on the AI safety, which are unjustified in my opinion. I will continue with my opinion on future of the search for the Master Algorithm and importance of few tribes, that should advance it the most.

One of the more interesting questions the author raises over the course of the book is whether or not we should be afraid of an AI equipped with the Master Algorithm taking over the world. The author, who is one of the leading experts in the field of machine learning, assures us that the chances of an "AI apocalypse" are zero, yet he later states that if we are so foolish as to deliberately program a computer to put itself above us, we may get what we deserve. Knowing that not all people have our best interests at heart, I find his reasoning wildly unsettling.

The first argument the author provides for why fear of AI is unjustified is that, unlike humans, computers are not capable of having a will. As used here, I would define will as one's ability to set their own goals. I would argue that this statement contradicts the author's observation at the beginning of the book, where he states that the Master Algorithm is the last thing we'll ever have to invent, because once we let it loose it will go on to invent everything else that can possibly be invented. Well, if will can be invented, the Master Algorithm will by definition invent it. Furthermore, it is not clear that computers' inability to have a will would prevent AI from taking over. What if an AI were tasked with maximizing the happiness of every person in the world? It might reach a conclusion that requires the Master Algorithm to restrict an individual's personal freedom. You might say that if that day ever comes we could just turn it off, but that conclusion is reached too quickly. Surely the Master Algorithm would be able to learn how to take control of any electronic system, and it would probably have the ability to self-replicate. With countless interconnected devices, I don't think this is a risk we should be willing to take.

The second argument made by the author is that even if an AI developed a will of its own, it would eventually be stopped by humans being unable to provide it with enough computational power to operate at dangerous levels. Scholars' opinions on computational complexity as a limit on AI tend to be divided. I believe the classification of computational complexity does not provide a good enough picture of either AI or human capabilities, and that the author (and other experts) apply the computational complexity argument outside its intended domain. At most, one can argue that computational complexity proves that neither AI nor humans could be omnipotent. It cannot justify the belief that human brains are capable of more computational power than AI. Furthermore, it cannot guarantee that the computational complexity barrier even needs to be overcome by an AI in order for it to take control of the world.

To further my argument, I would add that we are on the verge of developing new computational technologies, namely quantum computing, that might enhance AI in unforeseeable ways. Another field of research that is advancing rapidly is genetic manipulation (e.g. CRISPR). I don't think it is unreasonable to expect that one day we will be able to create a brain-like biological organism with far greater computational abilities than our own.

Closing my observations on the author's statements about AI safety, I would add that his downplaying of the risks surprised me. While I understand that experts' opinions on the topic vary greatly, I think it is in our best interest to be aware of the threats. How we should act in the face of these threats is of course open to debate, but mocking the idea while at the same time arguing that we will eventually discover the Master Algorithm is unprofessional at best and deceiving at worst.

The premise of the book, as the title suggests, is Pedro Domingos' search for the Master Algorithm. The author also makes educated guesses about the future and development of the Master Algorithm. He believes that all five tribes of machine learning will be involved in the construction of said algorithm.

I think this is a sound argument, since it is clear that each of the five tribes (Symbolists, Connectionists, Evolutionaries, Bayesians and Analogizers) has certain strengths as well as weaknesses. Out of these five tribes, I feel the Analogizers are the least likely to contribute significantly to the development of the Master Algorithm, since not everything can be derived from comparisons with past knowledge. After all, future findings are not guaranteed to be analogous to past discoveries.
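To make concrete what I mean by "comparisons with past knowledge": the Analogizers' core method is essentially nearest-neighbour reasoning, where a new case is judged purely by its similarity to stored examples. The sketch below is my own minimal Python illustration of that idea, not something taken from the book; the data points and labels are invented.

```python
# Minimal 1-nearest-neighbour classifier: the Analogizers' core idea of judging
# a new case purely by its similarity to previously seen cases.
# The example points and labels are made up for illustration.
import math

def nearest_neighbour(query, examples):
    """Return the label of the stored example closest to `query`."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    closest = min(examples, key=lambda ex: distance(query, ex[0]))
    return closest[1]

# "Past knowledge": a few labelled points in a 2D feature space.
past_cases = [
    ((1.0, 1.0), "benign"),
    ((1.2, 0.8), "benign"),
    ((4.0, 4.5), "malignant"),
    ((4.2, 3.9), "malignant"),
]

# A new case is classified only by analogy to what was seen before.
print(nearest_neighbour((1.1, 0.9), past_cases))    # -> "benign"
print(nearest_neighbour((10.0, 10.0), past_cases))  # still forced onto a past label
```

The limitation is visible in the last line: a query far from anything previously seen is still mapped onto one of the past labels, which is exactly why I doubt this style of learning alone can produce genuinely new knowledge.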

Concluding from the author's own comments that the real-life (non-artificial) Master Algorithms are evolution and the brain, I think the Connectionists and the Evolutionaries will most likely be the two most contributing tribes of the five. My reasoning is based on our understanding of evolution and the human brain so far. If we can say that real-life examples of the Master Algorithm are evolution and our brain, then it follows that we will develop an artificial Master Algorithm sooner rather than later. However, my conclusion rests on a few assumptions. The first is that the Evolutionaries will be able to simulate evolution well enough. The second is that the Connectionists' model of neural networks is also accurate. And most importantly, that given enough resources (e.g. time and energy), evolution or the brain would be able to solve, as per the definition of the Master Algorithm, any given task.
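As a toy picture of how these two tribes' ideas fit together, the sketch below uses an evolutionary loop (the Evolutionaries' tool) to search for the weights of a tiny neural network (the Connectionists' tool) until it learns the XOR function. This is only my own illustration of the combination, not a method proposed in the book; the network size, population size and mutation scale are arbitrary choices.

```python
# Toy neuroevolution sketch: an evolutionary search (Evolutionaries) over the
# weights of a tiny 2-2-1 neural network (Connectionists) until it fits XOR.
# Purely illustrative; all constants are arbitrary choices.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math range errors
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    """Feed-forward pass; `w` is a flat list of 9 weights (incl. biases)."""
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    """Negative squared error over the XOR table (higher is better)."""
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def mutate(w, scale=0.5):
    """Evolution's only operator here: small random changes to the weights."""
    return [wi + random.gauss(0, scale) for wi in w]

# Keep a population of candidate networks, select the fittest, mutate them.
population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
for x, y in XOR:
    print(x, "->", round(forward(best, x), 2), "(target", y, ")")
```

Of course, scaling this kind of search from a nine-weight network to anything brain-like is exactly where the computational barriers I mention below come in.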

While I do believe that the Evolutionaries and Connectionists will contribute the most to the development of the Master Algorithm, I also believe there are still some pieces of the puzzle that we are missing. Eventually we will hit computational barriers. One answer to those barriers might be quantum machine learning, but even with quantum computers there are proven computational limits we are aware of. Another answer might be some kind of biological computing with genetically engineered neurons, whose limitations we don't yet know, but my guess is that research in this domain is still in its infancy.

In conclusion, I would like to acknowledge that I am painting a bleak future for machine learning, but in this case I think my worries are justified in both domains: AI safety as well as the possibility of the Master Algorithm with our current technologies. I would also add that even though I am disputing the author's remarks about AI safety, I am not convinced of the Master Algorithm's eventual existence. We are aware of the limits of classical computers, and we are also aware of quantum computing's inability to overcome some of them. Furthermore, we are aware of theoretical limits on data compression, from which I conclude that the Master Algorithm, as defined by the author, is only wishful thinking. Having said that, I think we are left with two choices: either we change the definition of the Master Algorithm, or we are doomed to wait and hope for new, more powerful ways of computing. As a closing remark, I would like to add that even though I have arrived at the opposite conclusion from the author, I acknowledge that I might be missing some critical information that would otherwise change my outlook on the subject.

Simon Sekavcnik