Where Does the Machine Learning and AI Megatrend Come From?


Nowadays, the ideas of machine learning (ML), neural networks, and artificial intelligence (AI) are trending topics that seem to be the focus of discussion everywhere. In this article, we briefly summarize the development of machine learning over the last ten years and explain why this trend will be applied increasingly across all economic sectors.

Trend increase of the term "Machine Learning"

Background

In the 1940s, Warren McCulloch and Walter Pitts laid the foundations of machine learning with their publication “A Logical Calculus of the Ideas Immanent in Nervous Activity” on the topics of neurons and nerve networks.

In 1957, Frank Rosenblatt developed the perceptron algorithm, a simplified model of a biological neuron. Three years later, Bernard Widrow and Marcian Hoff developed ADALINE, an early artificial neural network in which, for the first time, the weights of the inputs could be learned by the network itself.

However, the publication of the book “Perceptrons” by Marvin Minsky and Seymour Papert in 1969 meant that, after the initial euphoria about machine learning, the topic lost importance and the field fell into the so-called “AI winter”. The book presents not only the strengths but also the serious limitations of perceptrons, such as the XOR problem. The XOR problem was such a hurdle because a classical single-layer perceptron can only learn linearly separable functions, and the XOR function is not linearly separable: no single straight line can divide its positive and negative examples.
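
To make the limitation concrete, here is a minimal sketch (added here for illustration, not taken from the original publications) of how a second layer of threshold units represents XOR, something a single linear threshold unit cannot do:

```python
import numpy as np

# XOR truth table: no single straight line separates the 1-outputs from the
# 0-outputs, so a single-layer perceptron cannot learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def step(z):
    return (z > 0).astype(int)

# Hidden layer: the first unit fires for "x1 OR x2", the second for "x1 AND x2"
W_hidden = np.array([[1, 1],
                     [1, 1]])
b_hidden = np.array([-0.5, -1.5])

# Output unit: fires for "OR but not AND", which is exactly XOR
w_out = np.array([1, -1])
b_out = -0.5

hidden = step(X @ W_hidden + b_hidden)
print(step(hidden @ w_out + b_out))  # -> [0 1 1 0]
```

The hidden units compute OR and AND of the inputs; the output unit fires only when OR is true and AND is false, which is exactly XOR.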

New Revivals

David Rumelhart, Geoffrey Hinton, and Ronald Williams laid the foundation for deep learning with their 1986 work on backpropagation, and they solved the XOR problem by applying backpropagation to multi-layer neural networks.
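
As an illustration of the idea (a rough NumPy sketch, not the original 1986 setup), a small multi-layer network can learn XOR by backpropagating the error through its hidden layer:

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR inputs and targets
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with a few units; weights start at small random values
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 0.5

for _ in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error back through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent update
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0]
```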

Another big step in machine learning was the use of deep learning. Deep learning refers to a class of machine learning algorithms that can solve non-linear problems thanks to their large number of layers. Each layer processes the data passed on from the layer above, abstracting the data layer by layer.

Machine Learning Today 

The Influence of AlexNet on Machine Learning

From the 2012 AlexNet paper by Krizhevsky, Sutskever, and Hinton; the source is linked below.

In the last decade, the topic gained popularity again. In 2012 in particular, Geoffrey Hinton, Alex Krizhevsky, and Ilya Sutskever caused quite a stir with their convolutional neural network AlexNet.

Success in the Large Scale Visual Recognition Challenge

With AlexNet, they achieved an outstanding result using deep learning methods at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which has been held annually since 2010. The aim of the challenge is to design the most accurate image recognition software possible using the free ImageNet database. In the first year, the best result was an error rate of 28.2%; by the second year, the best error rate was still 25.7%, and the second-best result in 2012 still had an error rate of 26.2%. The AlexNet team, in contrast, achieved an error rate of just 16.4%. This result quickly made a big impact in the professional world and rekindled the hype around, and the importance of, machine learning.

Reasons for the Success of AlexNet

On the one hand, this result can be attributed to advances in the theory of machine learning algorithms. For example, the use of the so-called rectified linear unit (ReLU) as activation function has greatly increased the efficiency and speed of deep learning algorithms. Among other things, ReLU mitigates the vanishing gradient problem, in which the gradients in the early layers of a deep network shrink toward zero during training, so that parts of the network effectively stop learning and, in the worst case, the network can no longer be trained at all.
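
The effect is easy to see in a toy calculation (our own sketch, not from the AlexNet paper): chaining many sigmoid units multiplies derivatives that are at most 0.25, while ReLU passes the gradient through unchanged wherever its input is positive:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

depth = 20          # number of stacked activations
a_sig = a_relu = 0.5
grad_sig = grad_relu = 1.0

for _ in range(depth):
    a_sig = sigmoid(a_sig)
    grad_sig *= a_sig * (1.0 - a_sig)        # sigmoid' is at most 0.25
    grad_relu *= 1.0 if a_relu > 0 else 0.0  # ReLU' is exactly 0 or 1
    a_relu = max(a_relu, 0.0)

print(f"gradient through {depth} sigmoid units: {grad_sig:.1e}")
print(f"gradient through {depth} ReLU units:    {grad_relu:.1e}")
```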

Unlike previous competitors, Hinton's team used graphics cards instead of CPUs, thanks to the CUDA technology released by Nvidia in 2007, which allows graphics cards to be used for general-purpose computations. In a 2009 study, Rajat Raina, Anand Madhavan, and Andrew Ng showed that using graphics cards instead of CPUs could speed up the training of neural networks by a factor of up to 15.

Development After AlexNet

After the success of AlexNet, the potential behind these methods was increasingly recognized, which is why even big companies like Google started to engage with machine learning. For example, because of their ability to solve non-linear problems, machine learning algorithms can be used to develop self-driving cars (e.g., Waymo). From this trend, various program libraries emerged, such as Google's TensorFlow, Keras, and Theano, the latter developed by the University of Montreal.
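
These libraries hide most of the low-level details. As a minimal sketch (the layer sizes and the commented-out training data are placeholders, not tied to any example in this article), defining and compiling a small network with TensorFlow's Keras API looks like this:

```python
import tensorflow as tf

# A small fully connected classifier; layer sizes are arbitrary placeholders
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5)  # x_train/y_train are placeholders
```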

Why is it applicable today?

Machine learning methods have recently found broad applicability because of the tools above and more widely available computing power. The prices of graphics cards have fallen relative to their computing power in recent years, as the following table and chart show.

Graphics Card              | GFLOPS | Price ($) | Release Year | GFLOPS/$
Nvidia GeForce GTX 680     |  3,090 |       500 |         2012 |      6.2
Nvidia GeForce GTX 780     |  3,977 |       499 |         2013 |      6.1
Nvidia GeForce GTX 780 Ti  |  5,046 |       699 |         2013 |      7.2
Nvidia GeForce GTX 980     |  4,612 |       549 |         2014 |      8.4
Nvidia GeForce GTX 980 Ti  |  5,632 |       649 |         2015 |      8.7
Nvidia GeForce GTX 1080    |  8,228 |       499 |         2017 |     16.5
Nvidia GeForce GTX 1080 Ti | 10,609 |       699 |         2017 |     15.2
Nvidia GeForce RTX 2080    |  8,920 |       699 |         2018 |     12.8
Nvidia GeForce RTX 2080 Ti | 11,750 |       999 |         2018 |     11.8

Chart: graphics card prices have fallen relative to computing power in recent years.
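
The last column of the table is simply the throughput divided by the launch price; a quick check for the first and last card (values taken from the table above):

```python
# GFLOPS per dollar for two of the cards listed in the table
cards = {
    "GeForce GTX 680 (2012)": (3090, 500),
    "GeForce RTX 2080 Ti (2018)": (11750, 999),
}
for name, (gflops, price_usd) in cards.items():
    print(f"{name}: {gflops / price_usd:.1f} GFLOPS per dollar")
```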

Development of the Most Powerful Graphics Cards for Machine Learning Applications

Google's Tensor Processing Units (TPUs), introduced in 2016, accelerated machine learning applications, and the later generations from 2017 and 2018 also accelerated the training of neural networks. Also helpful in applying neural networks is the ability to rely on GPU clusters, which allow fast training of the networks. Today it is not even necessary to perform the calculations on your own computer; instead, they can be run at very reasonable prices in the cloud (ImageNet Benchmark).

Charts from the ImageNet benchmark: the cost and time of GPU training have fallen in recent years.

Areas of Application

Computer vision is one of the most important areas of application for machine learning algorithms. The term describes enabling a computer to gain a general understanding of images or videos in order to extract information from them. Another area of application is speech analysis and the evaluation of texts. In speech analysis, the computer is taught to understand spoken language and, for example, convert it into written text. In text analysis, the computer is supposed to extract information from arbitrary text.
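
For the computer-vision case, here is a hedged sketch of what such a system can look like in practice today: classifying an image with a network pretrained on ImageNet from the Keras applications module (the file name example.jpg is a placeholder):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

# Load a network pretrained on ImageNet and classify a single image
model = MobileNetV2(weights="imagenet")
img = tf.keras.utils.load_img("example.jpg", target_size=(224, 224))  # placeholder file
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
print(decode_predictions(model.predict(x), top=3)[0])  # top-3 predicted labels
```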

All of these areas result in exciting use cases such as the evaluation of satellite data, the enhancement of image searches, the analysis of public sentiment, or self-driving cars.

Do only International IT Companies Benefit from this Development?

Applicability of neural networks in practice, 1957-2012: the evolution of the past decade has made neural networks practically and widely applicable.

The affordable availability of computing power, open-source tools, and the data generated by today's digital processes allow almost any company to use machine learning methods. Companies that benefit from this development often start with small projects that help them better understand the technology, the way they handle data, and the changes needed in their own processes.

Use cases where good results can be achieved quickly include:

  • Automatic evaluation of images or video recordings
  • Predicting key figures (demand, inventory levels, etc.) so that quicker and better decisions can be made
  • Knowledge extraction from documents and large text bodies
  • Automatic classification of frequently occurring business transactions (for example, in banking, insurance, or other audit cases) into requests that can be accepted automatically and those that still require manual post-processing (see the sketch after this list).
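
As a minimal sketch of the last use case (the example texts and labels below are invented placeholders), a simple text classifier with scikit-learn might look like this:

```python
# Classify short request texts into "auto-approve" vs. "needs manual review"
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data; in practice this comes from labeled past cases
texts = ["standard address change", "routine card replacement",
         "suspected fraud on account", "large unusual wire transfer"]
labels = ["auto", "auto", "manual", "manual"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["please replace my lost card"]))
```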
