

How do machines learn?

 

Machine learning is increasingly being applied across industries, from medicine, where it is used to tackle vast amounts of patient data and help researchers predict an individual’s probability of developing certain diseases, to journalism, where it is used to organise and archive incoming news material, providing context to journalists.

Machine learning technology has been around for decades and is embedded in the fabric of our lives, affecting how we work, communicate and socialise. Take Siri and Alexa, for instance, our beloved personal assistants. These voice-activated programs are able to interpret our voice commands and then retrieve and present the relevant information. Other examples include search engines, recommendation engines and recognising people in images, to name a few.

Machine learning is “the field of study that gives computers the ability to learn without being explicitly programmed” (Arthur Samuel, 1959). Rather than providing the computer with a set of rules detailing how to execute a task (computer programming), machine learning techniques provide the computer with examples of the task and leave it to figure out the rules all by itself.

If computer programming is the automation of manual tasks (i.e. scripting a set of rules that instructs a computer), then machine learning is the process of “automating the automation process” (Sebastian Raschka).
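To make the contrast concrete, here is a minimal, hypothetical Python sketch; the phrases, emails and labels are invented purely for illustration, not taken from a real filter:

```python
# With explicit programming we write the rule ourselves; with machine
# learning we only supply labelled examples and let the model work the
# rule out for itself (the training step comes later in this post).

def rule_based_spam_filter(email_text):
    # Explicit programming: a human decides which phrases mean "spam".
    suspicious_phrases = ["make $", "100% satisfied"]
    return any(phrase in email_text.lower() for phrase in suspicious_phrases)

# Machine learning starts from examples and labels instead of hand-written rules.
example_emails = ["Make $ fast with this one weird trick", "Agenda for Monday's meeting"]
example_labels = ["spam", "not spam"]

print(rule_based_spam_filter(example_emails[0]))  # True
```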

How do machines learn?

Machine learning systems use historical data to build a model that can predict the likelihood of an input producing a given output. For example, a spam filter would use old emails to build a model that learns how to label email as spam or non-spam.

To develop a spam filter, we would take a large dataset of old emails and split it into two parts. The first part would be used to train the model (the training set) and the second to test it (the test set).
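As a rough sketch of that split in Python, using scikit-learn’s train_test_split (the library choice and the tiny email dataset below are our own, purely for illustration):

```python
from sklearn.model_selection import train_test_split

# A toy stand-in for the large dataset of old, already-labelled emails.
emails = [
    "Make $ fast with this one weird trick",
    "100% Satisfied or your money back",
    "You have been selected to Make $ from home",
    "Agenda for Monday's meeting",
    "Minutes from Tuesday's catch-up",
    "Can you review the attached report?",
]
labels = ["spam", "spam", "spam", "not spam", "not spam", "not spam"]

# Hold back a third of the emails as the test set; the rest is the training set.
train_emails, test_emails, train_labels, test_labels = train_test_split(
    emails, labels, test_size=1/3, random_state=42, stratify=labels
)
```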

[Figure: training model diagram]

The training set is fed into the learning model, which processes the old emails and estimates a set of rules that can be used to label emails as spam or non-spam. For instance, it may detect that certain words such as ‘Make $’ or ‘100% Satisfied’ are highly correlated with spam emails. It would then scan new emails for such words and use them to decide whether the email is spam or non-spam. This set of rules is known as the predictor function: it predicts the likelihood of a given outcome for each input.
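Continuing the hypothetical sketch above (reusing its train_emails and train_labels), one common way to learn such word-based rules is a Naive Bayes classifier over word counts; the specific model and library are again our own illustrative choices, not the only way to do it:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Turn each training email into word counts, then let a Naive Bayes
# classifier estimate which words are associated with the spam label.
# The fitted model plays the role of the predictor function.
vectorizer = CountVectorizer()
train_features = vectorizer.fit_transform(train_emails)
model = MultinomialNB().fit(train_features, train_labels)

# Apply the learned rules to a new, unseen email.
new_email = ["You will be 100% Satisfied - Make $ today"]
print(model.predict(vectorizer.transform(new_email)))  # expected: ['spam']
```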

To check the validity of the learning model, we pass the test set into it, and it labels the emails as spam or non-spam. This output is then compared with the actual labels (which are known), giving us an error function. This is then used to fine-tune the model so that the number of mistakes it makes is minimised (i.e. reducing the number of spam emails that are labelled as non-spam and vice versa).
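Continuing the same sketch (reusing the vectorizer, model and held-back test emails from above), the comparison might look like this; accuracy and a confusion matrix are just two common ways of expressing the error:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Label the held-back test emails with the trained model...
predicted_labels = model.predict(vectorizer.transform(test_emails))

# ...and compare against the known answers. The error rate (1 - accuracy)
# is what we try to drive down when fine-tuning the model.
print("error rate:", 1 - accuracy_score(test_labels, predicted_labels))

# The confusion matrix splits the mistakes into spam labelled as non-spam
# and non-spam labelled as spam.
print(confusion_matrix(test_labels, predicted_labels, labels=["spam", "not spam"]))
```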

Supervised vs Unsupervised

What we described above is known as supervised learning. In this method of training, the learning algorithm receives both inputs and outputs and it uses these to determine the underlying correlations. We can think of it as learning in a classroom with a teacher present for guidance.

Another training method often used is unsupervised learning. In this method, the learning model is given a dataset containing inputs only. It is then expected to find patterns or similarities within the data that it can use to group it into subsets (clusters).
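As a small, hypothetical illustration of clustering (again using scikit-learn, with made-up emails and no labels this time):

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# No labels here: the model only sees the texts and groups them
# by how similar their word usage is.
unlabelled_emails = [
    "Make $ fast with this one weird trick",
    "100% Satisfied or your money back",
    "Agenda for Monday's meeting",
    "Minutes from Tuesday's catch-up",
]

features = TfidfVectorizer().fit_transform(unlabelled_emails)
clusters = KMeans(n_clusters=2, random_state=42, n_init=10).fit_predict(features)
print(clusters)  # e.g. [0 0 1 1]: promotional-looking mail in one cluster, meeting mail in the other
```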

As the learning model is exposed to new data, it is able to adapt its set of rules independently, allowing it to produce reliable, repeatable decisions and results. The field has gained momentum over the last two decades as organisations have seen growth in the volume and variety of available data, cheaper and more powerful computational processing, and affordable data storage.

The result is that such models can be developed faster and can analyse large datasets at pace, providing quicker insights. This ability to “learn” and adapt, rather than follow explicit instructions, is akin to the way we learn.

The human brain learns to process millions of inputs per second, and studying how it does so has helped neuroscientists understand the computations that occur within it. But we are still many years away from a genuinely artificially intelligent machine that can act in the same way. As the Raschka quote above highlights, we are now at a stage where we can automate the tasks that computers have previously done; the next step is to give a machine a task and ask it to learn it without any prior experience.

If you want to ask us humans at Station10 about what tasks you might want to automate within your business, please get in touch.