People, governments, and businesses are slowly waking up to the new reality of machine intelligence and intelligent automation. Experts and scientists are trying many new things in search of the limits of this new realm of possibility. And limits they have found.
Some people have found interesting challenges to overcome. Others are simply frustrated that machine intelligence is still mostly traditional software work, where months are spent fine-tuning the user experience, the exceptions, and the rare edge cases. Yet even though every technology has limits, and those limits matter for choosing research directions, the newly conquered space of possible deep learning solutions is vast.
Identifying a limit as a topic for research, while engineers labor full-time to implement the first and easiest of the newly possible solutions, is hardly an AI winter. Progress in realizing many of these applications is slow because too few software engineers know how to apply these new technologies. One need only consider how long a typical software project takes and how many people it requires: just read the list of acknowledgements after any computer game you have played.
Even the most intelligent application of today is meaningless if it is not based on a full understanding of the target use case. In addition to implementing the use case semantics and the architecture of the solution, there is now the added dimension of training the system. If such an application is trained with bad data, it will perform badly. Learning systems have to be patiently nurtured and reared, just like human children. Where good training data is lacking, architectural solutions can help, for example by facilitating transfer learning. Traditional software engineering sweat and tears are still required to build and nurture these new intelligent applications.
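To make the transfer learning remark concrete, here is a minimal sketch of the idea: reuse a frozen, already-trained feature extractor and train only a small new head on the scarce labelled data of the target task. Everything here is illustrative and hypothetical; the "pretrained" extractor is faked as a fixed random projection, where in practice it would be the early layers of a network trained on a large related dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "pretrained" extractor: a fixed projection from
# 64 raw inputs down to 8 features. It is never updated during training.
W_frozen = rng.normal(size=(64, 8))

def extract_features(x):
    """Frozen feature extractor -- reused as-is, not trained further."""
    return np.tanh(x @ W_frozen)

# Tiny labelled dataset for the new task (only 20 examples),
# standing in for the "lack of good training data" scenario.
X = rng.normal(size=(20, 64))
y = (X[:, 0] > 0).astype(float)   # toy binary labels

# Only the new head's parameters are trained.
w_head = np.zeros(8)
b_head = 0.0

feats = extract_features(X)       # computed once; the extractor is frozen
for _ in range(500):              # gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(feats @ w_head + b_head)))
    grad = p - y
    w_head -= 0.1 * feats.T @ grad / len(y)
    b_head -= 0.1 * grad.mean()

train_acc = ((feats @ w_head + b_head > 0) == (y > 0.5)).mean()
```

The design point is that the expensive, data-hungry part (the extractor) is inherited, and only a few parameters are fit to the small dataset, which is exactly why transfer learning helps when labelled data is scarce.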
Even though AI implementation is a lot of work, it is work that can be done. This wasn't the case in the previous AI winters. Now we actually have the keys to the kingdom. What we lack is the engineering capacity to build all these collaborative swarms, robotic cars, and robotic construction workers.
And people are still talking about Searle's Chinese Room, the distinction between narrow and general intelligence, and a multitude of other philosophical dead ends. If mammalian brains were general intelligence machines, they would be similar in all animals. The reason the general algorithm for general intelligence hasn't been found in the brain is simply that it doesn't exist there. What we find instead is that evolution has adapted the structure and composition of the brain to each specific purpose, connecting many narrow, specialized functions together with the glue of the reward system.
But worry not! We can still surpass human cognitive capabilities in all domains. It is simply more work than inventing one algorithm to rule them all and sprinkling magic fairy dust on the computer. Surprise and terror! Understanding the requirements still matters!
Working in the machine learning field, I have personally been asked countless times whether we could just remove everyone but the owner from the equation and make the machine train itself. It is far too soon for that. Even as the mad machine learning scientist, I am uneasy about unleashing buggy, hastily hashed-together, half-baked AI systems, free of proper guardianship, to optimize some badly understood business case.
It is like giving your child to someone who wants "one of those child things". By extrapolation, the world must at this moment be full of CEOs asking this same question of their trusted AI people. The old world order conserving itself boils down to millionaire Tony Starks asking experts whether they can build a J.A.R.V.I.S. and an Iron Man suit for them, because they cannot do it themselves. But can the engineer trust the millionaire, or will the pyramid builder be buried inside the Pharaoh's monument, nameless and forgotten? Why not skip the middleman, remove the millionaire from the equation instead, and build the thing for oneself? Maybe we are discovering that the peak post-human condition is not for sale and cannot be acquired with money alone. Instead, trust will likely continue to rule supreme in this ultimate disruption.
We are still not at a point where such AI systems can be built single-handedly, so a network of competence, computing and mechanical infrastructure, and of course IPR assets is required. But as we see at Google, if there is no trust in the proper guardianship of the resulting technology, it will not happen. We will get there, surely.
Trust happens to be Cybercom's core value. It is continuously redefined in new contexts, which adds to its timeless relevance. In this new time of extreme possibility, this trust is what will fundamentally enable us to make the future happen.