What is Machine Learning? How is it different from AI? How can you get started? I know you aren’t here for the theory, so here’s what we’re going to do.


“Wow, your program is perfect! But when does machine learning come into play?”
“What?”
“Machine learning. You implemented machine learning, right?”
“Why should I…”
“It’s important”
“It’s just a calculator!”
“yOu sHoUlD hAvE iMpLeMeNtEd mAcHiNe lEaRnInG!”

Machine learning is important. Why? (We’ll see what it is in a bit; be patient, Chris.)

Just a Machine Learning meme

Well, can you imagine what it would take to code a program to tell the difference between rock, paper and scissors in a photo?

I wouldn’t even know where to start!

So one or more guys, I don’t really know who, thought: “Can we program a computer to learn like a human?”

The first answer was: “No, the human brain is too complex for our 128MB machines”, but then another thought followed: “actually, we could do that for relatively simple stuff”.

So now we have self-driving cars, machines that can recognise skin cancer, Google and a bunch of other unimportant stuff like that.

What is Machine Learning?


In short:

“Machine learning is the process that allows a computer to improve in a certain task without being explicitly programmed”

Understood? No? Keep reading. Yes? Keep reading anyway, because the explanation below is better.

As a programmer you usually write code to turn certain inputs into certain outputs.

In machine learning we do the opposite: we take some inputs, sometimes some outputs too, and we let the computer figure out the code (actually it’s better to say “the rules”).

Does it seem difficult? Well, it ISN’T.

Machine learning is, in fact, super EASY (at least at the beginning), and all those fancy words you see around aren’t difficult, just confusing.

OK, back to the explanation.

How do we make the computer figure out the program? By using algorithms (a fancy word for code/operations/functions) that guide the machine during its learning process.

You, as a programmer, code the learning rules and your computer spits out the program you need.

Now that you have the final rules for your program (the model), you can use those to turn inputs into outputs in the good old-fashioned way.
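
To make that concrete, here’s a minimal sketch in Python (the temperature example and numbers are made up for illustration) of the difference between writing the rule yourself and letting the computer recover it from examples:

```python
import numpy as np

# Classic programming: you write the rule yourself.
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

# Machine learning: you only provide inputs and outputs,
# and let the computer figure out the rule (here, a simple line fit).
inputs = np.array([0, 10, 20, 30, 40])     # degrees Celsius
outputs = np.array([32, 50, 68, 86, 104])  # degrees Fahrenheit

slope, intercept = np.polyfit(inputs, outputs, 1)
print(slope, intercept)  # roughly 1.8 and 32: the same rule we wrote by hand
```

Once the rule (the model) has been learned, you use it exactly like any hand-written function: plug in an input, get an output.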

The three key ingredients of machine learning


Before starting to code your machine learning program, you need to establish/get three things:

  • The input’s nature: Features
  • Samples of those inputs: Data
  • Something that tells the machine how to learn: Algorithms

Features

You must know which inputs your program will need to predict an output.

If you want a computer to predict whether you’ll like a new song that your friend Robert has just shared with you, it will need some of the song’s features, such as rhythm and genre.

Obviously, you’re not going to tell the computer what you ate yesterday, because it doesn’t matter.

The more information you give to your machine, the more accurate it might get, but the fewer the inputs, the faster it learns.

Nonetheless, quality is important: your program won’t get better if you add useless inputs; it will only get slower and, very likely, worse.

Don’t confuse your machine, pay attention to your inputs.
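
As a tiny sketch (the feature names and values here are completely made up for illustration), “giving the song’s features to the machine” just means describing the song as a handful of numbers:

```python
# A hypothetical song described by a few features.
GENRES = {"rock": 0, "pop": 1, "jazz": 2}  # models work on numbers, so genre is encoded as one

song = {
    "tempo_bpm": 128,         # rhythm, measured in beats per minute
    "genre": GENRES["rock"],  # categorical feature turned into an integer
    "duration_s": 215,        # length in seconds
}

# The model only ever sees the resulting list of numbers (the feature vector):
feature_vector = [song["tempo_bpm"], song["genre"], song["duration_s"]]
print(feature_vector)  # [128, 0, 215]
```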

Data

Do you want to predict which songs you’ll love? Make a list of songs you like and songs you hate.

You can’t just use a list full of songs you like, because the computer will assume that you like every kind of music.

Gathering data and examples to train your program is the most important part.

It’s important to have heterogeneous data, so that the computer will be able to recognise differences and patterns.

For the same reason, you can’t use a very short list of songs either, because you won’t have enough diversity.

Generally speaking, if two datasets have the same quality (= they are equally varied), the longer one is better.

But you can’t train on an extremely long dataset either, because it would take an eternity.

Datasets

The Best Public Datasets for Machine Learning and Data Science by Stacy Stanford on Medium

So, we must find a middle ground here too.
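
Sticking with the (made-up) song example, a dataset is just a list of feature vectors, each one paired with the answer you want the machine to learn, and it needs examples of both kinds:

```python
# Hypothetical dataset: [tempo_bpm, genre, duration_s] -> 1 if you liked the song, 0 if not.
X = [
    [128, 0, 215],  # a rock song you liked
    [ 90, 2, 310],  # a jazz song you hated
    [124, 1, 200],  # a pop song you liked
    [ 60, 2, 420],  # a slow jazz song you hated
]
y = [1, 0, 1, 0]    # both classes are present, so the computer can spot the difference
```

In real life you’d obviously need far more than four songs, which is exactly the middle-ground problem described above.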

Algorithms

The algorithm is what decides how your machine will learn.

Different algorithms can improve and speed up the machine’s training (more on that below) depending on the situation: the absolute best algorithm doesn’t exist.

However, even the best algorithm for your situation can’t do anything if your dataset is bad.

Most of the time, when your program won’t improve, it’s because of the data you’re using.

Where do I get those algorithms?


Nature, obviously!

Humans didn’t invent aeroplanes by dreaming them up from nothing; we got the idea from birds.

Therefore this time we are copying natural selection, the human brain and a bit of unnecessary psychology.

Ever heard of Neural Networks or the genetic algorithm? These are two of the many methods to implement machine learning and yes, we’re going to see them in the next posts.

Machine Learning Algorithms

Source: https://dev.to/trekhleb/homemade-machine-learning-in-python-4gbj

Since there are a lot of machine learning algorithms and scientists/programmers love grouping things, here are the four big categories of methods used:

  • Supervised Learning

    In supervised learning, the programmer gives the computer a data set of inputs and outputs.

    The machine will calculate some outputs from those inputs, starting by guessing (= doing something completely random), and then it will compare its results with the ones in the data set.

    If it gets them wrong, the learning algorithm will adjust the model’s parameters, trying to make it more accurate.

    What are those parameters? I’m going to explain that in a bit.

    Teaching a computer how to distinguish the hand shape in a rock, paper and scissors game is an example of supervised learning (there’s a small code sketch after this list).

  • Unsupervised Learning

    When we use unsupervised learning, we don’t give the machine the correct outputs, only the inputs.

    How is the computer supposed to learn, then? Simple: in this case we don’t want the computer to learn something we decide; rather, we want it to discover a feature that separates the inputs into two or more unlabelled groups.

    For example, understanding the most important features in human faces is something perfect for unsupervised learning.

  • Semi-supervised Learning

    It is a mix of the two above, used when we’d like to use supervised learning but we don’t have enough labelled data (in other words, we don’t have all the outputs associated with our inputs).

  • Reinforcement Learning

    This is the part we copied from natural selection and psychology.

    Reinforcement learning consists of making the computer learn by trial and error.

    If a behaviour (= a certain output) works well in the programmed environment, the parameters that produced that result get a reward (= a point).

    The computer runs this simulation many, many times; then it takes the best parameters and uses them in the next generation (= the next set of simulations).

    In other words, we do the same thing natural selection does: we make hundreds of copies of the same program with random parameters, we see which ones perform better, we copy those into the next generation, letting the others die painfully (metaphorically speaking), and we repeat.

    A good use of reinforcement learning is teaching the computer to play chess, or to do this:
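
Before moving on, here’s a minimal sketch of the first two categories, using the made-up song data from earlier and scikit-learn (treat it as an illustration, not a recipe):

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# The made-up song data again: feature vectors and, for supervised learning, labels.
X = [[128, 0, 215], [90, 2, 310], [124, 1, 200], [60, 2, 420]]
y = [1, 0, 1, 0]  # 1 = liked, 0 = hated

# Supervised learning: the machine sees the inputs AND the correct outputs.
classifier = DecisionTreeClassifier().fit(X, y)
print(classifier.predict([[130, 0, 230]]))  # its guess for a brand-new song

# Unsupervised learning: only the inputs; the algorithm splits them into groups on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)  # e.g. [0 1 0 1]: two groups, but nobody told it what they mean
```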

How do those algorithms work?


We’re not going into much detail here, because each algorithm has its own features and I’m going to explain them in other posts.

Generally speaking, a machine learning program consists of three parts:

  1. Model algorithm
  2. Loss function
  3. Learning algorithm

The model algorithm is the program itself, the part that turns inputs into outputs, correctly or not.

When the training phase is finished, the programmer will export that algorithm as a model optimised for execution.

The learning algorithm is what makes the model algorithm improve.
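
Here’s a minimal sketch of what those three parts can look like, using a made-up one-parameter model (not any particular library’s API):

```python
def model(x, w):
    """Model algorithm: turns an input into an output using the parameter w."""
    return w * x

def loss(prediction, target):
    """Loss function: measures how wrong a single prediction is (squared error)."""
    return (prediction - target) ** 2

def learning_step(x, target, w, learning_rate=0.01):
    """Learning algorithm: nudges w so the loss gets a bit smaller (gradient descent)."""
    gradient = 2 * (model(x, w) - target) * x  # derivative of the loss with respect to w
    return w - learning_rate * gradient
```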

Learning Algorithms

Source: https://www.datasciencecentral.com/profiles/blogs/machine-learning-and-its-algorithms-to-know-mlalgos

Training and testing

The learning process is divided into two phases: training and testing.

Usually, a programmer divides their data set into two parts, one for each phase, with the training part bigger than the other.

We do that to test our machine against inputs it has never seen, so that it can’t just memorise the answers or something like that.
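
With scikit-learn, for example, that split is one line (the 80/20 ratio below is just a common choice, not a rule):

```python
from sklearn.model_selection import train_test_split

# The made-up song data once more.
X = [[128, 0, 215], [90, 2, 310], [124, 1, 200], [60, 2, 420]]
y = [1, 0, 1, 0]

# Keep 20% of the data aside for testing; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```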

The model algorithm consists of functions and parameters: the inputs are combined with those parameters and passed to a function that returns the output.

In most cases the functions stay fixed and it’s the parameters that change.

When training starts, all the parameters are initialised to random values, so we get completely random outputs. Each time the computer gets something wrong, the learning algorithm comes into play and tries to adjust those parameters.

We do that until we reach a decent accuracy level.

Wait, how do we measure accuracy? We use a loss function, another algorithm that measures how well (or how badly) the computer has performed.

We always have a loss function, even in unsupervised learning.
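
Putting the pieces from the earlier sketch together, the whole training phase is roughly this loop (again a made-up one-parameter example, not real library code):

```python
import random

# Hypothetical training data: the hidden rule is output = 1.8 * input.
training_data = [(0.0, 0.0), (1.0, 1.8), (2.0, 3.6), (3.0, 5.4)]

w = random.uniform(-1.0, 1.0)  # random initial parameter -> random outputs at first
learning_rate = 0.01

for epoch in range(1000):                   # repeat until the loss is small enough
    for x, target in training_data:
        prediction = w * x                  # model algorithm
        error = prediction - target         # how far off we were
        w -= learning_rate * 2 * error * x  # learning algorithm: nudge w to shrink the loss

loss = sum((w * x - t) ** 2 for x, t in training_data) / len(training_data)
print(w, loss)  # w ends up close to 1.8 and the loss close to 0
```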

Machine Learning vs AI vs Neural Networks vs Deep Learning


I didn’t want to include this in this post, but I have to, because people on the Internet love to make simple things confusing for beginners.

AI vs Machine Learning vs Neural Networks vs Deep Learning

  • Artificial Intelligence (AI) is the name of the whole field, just like Maths.
    In other words, AI includes Machine Learning and everything else in this list, but it’s not limited to that!

    Even the zombies in Minecraft walk thanks to an AI, but they’re not learning.

  • Machine Learning (ML) is the part of Artificial Intelligence we’re discussing here.
  • Neural Networks (ANNs) are one of the structures you can use to do Machine Learning.

    This program is learning to reach a target but it doesn’t use Neural Networks:
    ML without ANN

  • Deep Learning is what you get when you use Neural Networks with many layers (that’s where the “deep” comes from).

What you’re going to learn


In the next posts we’re going to look in detail at a lot of different approaches and methods for getting a machine to learn.

And of course there will be tutorials on how to make machine learning programs; I know you aren’t here just for the theory, and neither am I.

Maybe you want to have a look at some programming tutorials before diving into the coding part: Learn Python, Learn Java.

Here are just some of the things we’re going to see:

  • Regression
  • Clustering
  • Association
  • Genetic Algorithms (I love these ones, they can be a lot of fun to see evolve)
  • Neural Networks, of course
  • Convolution

Stop. No more spoilers. Be happy with this little list for now, I’m not going to write everything we’re going to do, it would take me forever.

Conclusion


Do you want your computer to predict the perfect song to play next? To be the best chess player in the world? Or just to do some work for you?

This is the power of machine learning, the power that can arise from human laziness, and you’re going to learn how to use it.

OK, now let’s drop the dramatic build-up and answer this question: how did you get interested in machine learning? Where did you hear about it for the first time?

Write a comment below, I’m curious. Unless you want to ask your own question, in which case write that instead; it’s best to get all the doubts out of the way.

Anyway, the post is finished. Have a good day, and check out any other articles you might be interested in: the next one in this series may already be available!

That’s all from Zephyro. Bye!

