Machine Learning VS AI – What Is the Difference?


Something I am asked quite often is: what is the difference between machine learning and AI? Machine learning has become one of the hottest terms of the last decade. However, many people ignore the history of AI, sometimes confusing the two, and falsely believing that machine learning leads straight to general AI.

Artificial Intelligence was defined by John McCarthy as “the science and engineering of making intelligent machines”. Research in AI started in the 1950s and is closely connected to many other disciplines, such as cybernetics, cognitive science and linguistics.

GOOD OLD-FASHIONED AI

Research back then concentrated on the idea that creating an intelligent machine has something to do with formal reasoning. This gave birth to languages such as Prolog, and to expert systems. The intuition behind that idea was that humans use symbols and rules in order to navigate the world; therefore, in order to mimic human intelligence, machines should follow the same process. This way of thinking failed spectacularly, leading to what is known as the AI winter: a period during which no one was willing to invest in AI ventures, after the extremely high expectations of applying AI to pretty much anything failed to materialise.
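To make the symbols-and-rules idea concrete, here is a minimal sketch of how such a rule-based system works, written in Python rather than Prolog for readability. All the knowledge is hand-crafted, and the system simply chains rules forward until nothing new can be derived. The facts and rules are hypothetical examples, not taken from any real expert system:

```python
# A toy forward-chaining inference engine in the spirit of classic expert
# systems: knowledge lives in hand-written symbols and rules, not in data.

facts = {"has_fur", "gives_milk"}

# Each rule: if all premises hold, conclude the consequent.
rules = [
    ({"has_fur", "gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fur', 'gives_milk', 'is_mammal'}
```

Note that the system can only ever conclude what its author anticipated; everything it “knows” was typed in by hand, which is exactly the limitation the bottom-up approach below tries to escape.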

So, the approach in classic AI was “top-down”: hand-craft the rules and the knowledge into the system, and intelligent behaviour will ensue. However, in the 80s and the 90s two things took place. First, machine learning algorithms improved, starting with the discovery of backpropagation for neural networks and continuing with algorithms such as Support Vector Machines and Random Forests. Secondly, the volume of available data increased. This made machine learning and the “bottom-up” approach to AI the standard paradigm: instead of writing countless rules, create a system that can learn from data.
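As a contrast with the rule-based sketch above, here is a minimal sketch of the bottom-up approach using a Random Forest, one of the algorithms just mentioned. It assumes scikit-learn is installed, and the dataset and hyperparameters are placeholders chosen purely for illustration:

```python
# Bottom-up AI in miniature: no hand-written rules; the model induces its
# own decision logic from labelled examples.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # the "rules" are learned, not written

print("accuracy:", model.score(X_test, y_test))
```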

Vladimir Vapnik, machine learning pioneer and creator of the SVM

THE SEARCH FOR TRUE ARTIFICIAL INTELLIGENCE

This kind of approach was popularized in the branch of AI known as “computational intelligence”. According to this paradigm, intelligent machines could be created by mimicking intelligent structures in nature, whether neurons or insect swarms.
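For instance, here is a minimal, hypothetical sketch of one classic computational intelligence technique, particle swarm optimisation, which mimics a swarm of insects or birds searching for a good solution. The objective function and parameter values are made up for illustration:

```python
# Particle swarm optimisation: each particle is pulled towards its own best
# position and the swarm's best position, mimicking collective foraging.

import random

def f(x):
    return (x - 3.0) ** 2  # toy objective: minimum at x = 3

n, w, c1, c2 = 20, 0.7, 1.5, 1.5   # swarm size, inertia, pull strengths
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
best = pos[:]                      # each particle's personal best
gbest = min(pos, key=f)            # swarm-wide best

for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i]
                  + c1 * r1 * (best[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(best[i]):
            best[i] = pos[i]
    gbest = min(best, key=f)

print(gbest)  # converges to roughly 3.0
```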

This approach is interesting, but it has so far failed to create true artificial intelligence. The success of systems like Watson and deep neural nets might have tricked many people into thinking that general AI is near, but we are still far from it. These systems have indeed been very successful on a variety of problems, but they are missing an important component: they can’t reason the way humans do.

So, for example, deep neural nets can learn various representations hidden in data, but they can’t reason formally over those representations. A network might be able to caption an image, but it does not have a concept of, let’s say, a girl. It is missing the equivalent of a semantic network, as well as formal rules to reason over those concepts and perform logical inference.


Example of a caption generated by a deep neural net: “black and white dog jumps over bar.”
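To illustrate what a semantic network adds, here is a minimal, hypothetical sketch of concepts linked by “is-a” relations, over which even trivial logical inference is possible. Nothing like this explicit structure exists inside a standard captioning network:

```python
# A toy semantic network: concepts linked by "is-a" edges, plus one
# inference rule (transitivity of "is-a").

is_a = {
    "girl": "person",
    "person": "animal",
    "dog": "animal",
}

def isa(concept, category):
    """Follow 'is-a' links transitively: does `concept` fall under `category`?"""
    while concept is not None:
        if concept == category:
            return True
        concept = is_a.get(concept)
    return False

print(isa("girl", "animal"))  # True: inferred via girl -> person -> animal
print(isa("dog", "person"))   # False
```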

There have been theories trying to bridge this gap; Markov Logic Networks are one such theory. In deep learning there have also been attempts to augment networks with memory, which can help solidify concepts in the network. One result of such efforts is the Neural Turing Machine.

In any case, there are steps in that direction, and we might see a breakthrough in the next decade or so. However, we are still not at the point where we can create true AI.

If you are interested in learning more about the subject, I conduct my own workshops for non-technical execs, as well as developers. Feel free to get in touch!



Dr. Stylianos Kampakis

I am an expert data scientist and statistician living and working in London, with experience in multiple domains including (but not limited to) deep learning, natural language processing, recommender systems, statistical modelling and research design. I run my own consultancy and can take on work with companies of all sizes. I also offer education services in data science, AI, machine learning and blockchain through my company Tesseract Academy (http://tesseract.academy). The flagship event is a half-day workshop taking place every few months, but we also provide in-house training services. Finally, I am involved in the blockchain space and have been an advisor to many ICOs. My main specialties include white paper review and modelling token economies.
