A log entry of the Mad Machine Learning Scientist of Cybercom


2016-06-07, 12:36 Posted by: Tero Keski-Valkama

We have a reference implementation of a deep reinforcement learning machine: the human brain.

The brain does not consume gigawatts of power, and it is not made of exotic and expensive materials; in fact, it is made of fats and proteins. That is, essentially chicken, as opposed to silicon and rare earth elements like neodymium, or hafnium (which is not quite a rare earth element, but rare regardless: its reserves on Earth are expected to last under 10 years).

That reference implementation still outperforms our best computers and algorithms in some tasks, although the scope of those tasks is getting smaller. At the very least, it shows that this kind of computing is possible in principle with minimal energy and material requirements. However, the human brain is exceptionally bad at certain kinds of computation, like computing SHA-2 hashes.
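
To make the asymmetry concrete, here is a small illustration of my own in Python (not part of the original comparison): a computer produces a SHA-256 digest in microseconds, a task no cortex can carry out by introspection.

```python
import hashlib

# A task that is trivial for silicon and hopeless for the cortex:
# computing the SHA-256 digest of a short message.
message = b"The Mad Machine Learning Scientist of Cybercom"
digest = hashlib.sha256(message).hexdigest()
print(digest)  # 64 hex characters, produced in microseconds
```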

To simplify greatly, the human brain consists of a general unsupervised learning machine, the cortex, which has received a great deal of attention, and a very human-specific reinforcement learning machinery, the reward-punishment system (largely based on dopamine and cortisol). The latter guides the former on what is important for the long-term survival of the genes. For some reason the latter has received less research attention, even though it is of crucial importance for making sure the unsupervised part does meaningful things. Lately reinforcement learning in general has gathered more interest, but it has not yet been implemented in a fashion analogous to the human brain, and that remains an open problem.
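
As a loose sketch of that division of labour, here is a toy example in Python with numpy (entirely my own illustration, not a model of the actual brain): an unsupervised Hebbian-style feature learner standing in for the cortex, and a reward-driven policy standing in for the dopamine system that tells it what matters.

```python
import numpy as np

rng = np.random.RandomState(0)

# "Cortex": an unsupervised feature learner. A toy random projection with an
# Oja-style Hebbian update; it learns whatever structure the input has,
# with no notion of what is important.
class UnsupervisedFeatures:
    def __init__(self, n_inputs, n_features):
        self.W = rng.randn(n_features, n_inputs) * 0.1

    def encode(self, x):
        return np.tanh(self.W @ x)

    def update(self, x):
        h = self.encode(x)
        self.W += 0.01 * (np.outer(h, x) - (h ** 2)[:, None] * self.W)

# "Reward system": a reinforcement learner that tells the features what
# matters, via a scalar reward (the dopamine analogue). REINFORCE-style
# softmax policy update.
class RewardDrivenPolicy:
    def __init__(self, n_features, n_actions):
        self.V = rng.randn(n_actions, n_features) * 0.1

    def act(self, h):
        logits = self.V @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return rng.choice(len(p), p=p), p

    def update(self, h, action, p, reward):
        # Reward-weighted log-likelihood gradient of the chosen action.
        grad = -p[:, None] * h[None, :]
        grad[action] += h
        self.V += 0.1 * reward * grad

# Toy loop: the "environment" rewards action 0 whenever the first input
# dimension is positive, and punishes it otherwise.
features = UnsupervisedFeatures(n_inputs=4, n_features=8)
policy = RewardDrivenPolicy(n_features=8, n_actions=2)
for step in range(2000):
    x = rng.randn(4)
    features.update(x)               # unsupervised part learns regardless
    h = features.encode(x)
    action, p = policy.act(h)
    reward = 1.0 if (action == 0) == (x[0] > 0) else -1.0
    policy.update(h, action, p, reward)  # reward signal shapes behaviour
```

The point of the sketch is only the division of labour: one component learns structure without any goal, the other injects the goal through a scalar reward.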

As the technology progresses we are faced with the big, old philosophical problems: What is consciousness, really? We can dismiss out of hand the old thinkers like Searle as simplistically inane, but the problem remains. The Mad Machine Learning Scientist of Cybercom of course has his own hypothesis:

[Image: Evolution diagram]

It is not likely that evolution has found, in the mammalian brain, the only computing architecture that is sentient. It is more likely that genes simply harness the implicit capacity for computation in the physical substrate in which life constructs itself, generation by generation. There is no boundary between an atom constituting a neuron in the brain and an atom constituting the ground the mammal stands on.

It just happens that this capacity for computation in the physical world does not exist externally: there is no external calculator that computes the equations and writes the results into physical reality. Physical reality computes itself, and is the computation. Information is one of the fundamental and real quantities we have in modern physics.

[Image: Human brain]

Somehow the noisy, divisible, medically interferable, resonating group of signals in the human brain, made out of proteins and slime, actually perceives things like qualia (for example the color blue). From the perspective of genes and evolution, this is coincidental. If the genes could construct a brain/computer that helps their vessel survive in a changing environment, it would be all the same to the genes whether that brain actually perceives the color blue or just fakes it. In my view, faking is either not possible, or at least not a likely outcome for a system designed for such a purpose. Aliens with no "souls", evolved along alternative evolutionary paths, cannot exist. So, how do computers see the world? How do traffic lights and inanimate rocks see the world? Come to Cybercom to discuss these things with the Mad Machine Learning Scientist of Cybercom. Being mad is not required, but you might want to be at least slightly eccentric.

Back to immediate and concrete:

While human capabilities are being matched and surpassed on all fronts, replacing humans is not the goal. Regardless, the number of machines, the amount of information and the number of potential actions per person increase all the time. We need more intelligent systems to help us cope, and even succeed, in this age of information and speed.

In the past, machine learning was about special algorithms for special purposes. If you needed statistical regression for a phenomenon exhibiting a certain distribution, you could not blindly use the methods appropriate for other kinds of distributions. Every application was specially tailored by experts to function well for that single purpose. You needed a special algorithm and a special system for reading each different kind of barcode.

Now we are increasingly migrating towards general algorithms. Sure, special algorithms still have their place, but in many, albeit not all, cases the general algorithms outperform the special algorithms made for a single purpose only.

[Image: Recurrent neural network]

The same algorithms that beat humans at Go are used to predict stock markets. The same algorithms that beat humans at Jeopardy are used in cancer and legal research. The same control algorithms that make bipedal robots walk are used to fly drones.
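
As a hedged illustration of the same point on a much smaller scale (my own example; scikit-learn and its bundled datasets are assumptions, not something mentioned in this post): one general-purpose learner can be pointed at two unrelated problems without any task-specific tailoring.

```python
from sklearn.datasets import load_digits, load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# One general-purpose learner, two unrelated tasks: handwritten digit
# recognition and tumour diagnosis. No task-specific feature engineering.
for name, dataset in [("digits", load_digits()), ("cancer", load_breast_cancer())]:
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(model, dataset.data, dataset.target, cv=5)
    print(name, round(scores.mean(), 3))
```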

The number of tools in our toolbox of general machine learning algorithms is slowly increasing. It is important to take note of and give due attention to the spectacular successes in the field, but also not to become monomaniacs about a specific buzzword, and to remain open to new approaches.

At Cybercom we are all about the Internet of Things, modern web technologies and service design. Parallel to these, we also love deep, paradigm-shifting technologies like machine learning and blockchain. We do not only follow trends; we are them.

The Mad Machine Learning Scientist of Cybercom is currently researching industrial applications for LSTM neural networks as part of his doctoral studies. He is the guild master of the machine learning guild in Cybercom, and one of the many background ideologues behind the Machinebook concept.
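
For the curious, here is a minimal sketch of the kind of model referred to above: an LSTM predicting the next value of a sensor-like time series. Keras is an assumed dependency and the data is synthetic, so treat this as an illustration of the technique rather than the actual research.

```python
import numpy as np
from tensorflow import keras

# Synthetic "industrial" signal: a noisy sine wave standing in for sensor data.
rng = np.random.RandomState(0)
t = np.arange(0, 200, 0.1)
signal = np.sin(t) + 0.1 * rng.randn(len(t))

# Frame the series as supervised learning: 50 past samples -> the next sample.
window = 50
X = np.array([signal[i:i + window] for i in range(len(signal) - window)])
y = signal[window:]
X = X[..., None]  # shape (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# One-step-ahead prediction from the most recent window of observations.
next_value = model.predict(X[-1:], verbose=0)
print(float(next_value))
```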

The images used are from Wikimedia Commons with attribution. [1][2][3]

