Playing with Machine Learning Algorithms

Data Science Jun 17, 2019

Machine Learning is something everyone talks about, but few people know exactly what it is and even fewer know how to implement it well enough to tap its full potential. If you read about it on any website or in books, either it seems as if Machine Learning is very difficult to implement (covering the maths part, believe me, even I am scared of it) or it reads like a fairy tale in which Machine Learning algorithms are "Bahubali" and their applications are the over-fantasized "Jarvis" of Iron Man (we love you 3000). They will tell you that this field is very hot at the moment, with the highest placements. Maybe that is true; that's why I am here.

I thought of writing this blog so that anyone reading it comes to know, at a broad level, what algorithms exist in Machine Learning. The algorithms are not discussed in great detail, as that would defeat the purpose.

What is the need for creating Artificial Intelligence?

Simply because we are human beings and we are lazy. We don't have powerful hands, legs, or even the motivation to work in many instances, but we have amazing minds, evolved over God knows how long. With these minds we are making machines do our work for us. To do that work, machines need to learn things. These "things" are application-specific, but the learning can be categorised into various "Machine Learning algorithms".

Let us take the example of "Oggy" (yes, I love Oggy and the Cockroaches).

Oggy's House

Oggy is a teenager and he has to sell his house (a very traditional example in Machine Learning), but as we know, he is very lazy and he doesn't know at what price he should sell it. He asked the cockroaches for help, and they collected data containing information about houses and the prices at which they were sold.
Now, as Oggy is very smart, he discovered a pattern: the houses that were big were costly, and the houses that were new were costly. He also saw, ironically, that the houses that didn't have cockroaches were costly.

What Oggy was doing is known as "regression" in the world of Machine Learning: learning from historical data. Now, if he could make a formula depending on these "parameters", he would be done. He could simply feed in the values of those parameters for his own house. But he is lazy and he wants a machine to do it. He wants the machine to make the same analysis that he made, with the data provided by him.

Now machines have memory, and we are providing them with computation, thus ideally converting them into a "human mind". This was the beginning of Machine Learning, where the machine learns the patterns and provides you with a model into which you can feed newer values and get a result that was not known earlier. Amazing, isn't it?
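The idea can be sketched in a few lines of Python: fitting a straight line, price = w * size + b, to historical (size, price) pairs using ordinary least squares. The house sizes and prices below are made-up toy numbers, not real data.

```python
# Minimal sketch of what Oggy is doing: learn a line from historical
# (size, price) pairs, then predict the price of a new house.

def fit_line(xs, ys):
    """Return slope w and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

sizes  = [50, 70, 100, 120]     # square metres (toy data)
prices = [110, 150, 210, 250]   # thousands (toy data)

w, b = fit_line(sizes, prices)
predicted = w * 80 + b          # estimate for an 80 m2 house -> 170.0
```

Feed in any new size and the fitted line hands back a price estimate, which is exactly the shortcut Oggy wanted.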

Elements of Machine Learning

So Machine Learning gives the machine a brain. It means that we give it an input, apply an algorithm, and thus create a function, the "hypothesis" (as it is called in the machine learning world), which provides us with the output. This complete process has three components:

1. Data: It is the key element of Machine Learning. All the algorithms depend on data; data is like the fuel of the solution. Without data, neither can the hypothesis be made nor will Oggy be able to sell his house. If you want to sell houses, collect data on many houses and their prices. If you want to predict the weather, collect data about the weather in past times. If you want to classify an image as dog or cat, feed in millions of images of dogs and cats.
Data can be found in many ways: download the data dumps provided by good people, or keep track of the activities on popular sites, say Facebook (no, Mark, stop it!).
This data can reveal the thought process of a person, and algorithms built on it can alter their future thought process. Business sectors make use of this and thus try to exploit data. As a Machine Learning engineer, one has to spend most of one's time finding data, finding meaning in data, and cleaning the data.

Google Opinions is paying you for a simple survey. Think: why?

2. Features: Features are the parameters that have to be given weights by the algorithm. In the case of the house, we have the size of the house, the number of cockroaches, the locality, and whether it is old or new. These features are the factors on which our final answer will depend, but how that happens is purely a question of the features. When the data is stored in tables, this is easy, as the features are simply the names of the columns. But what are the features when the data is present in the form of pictures? This is the job of the Machine Learning engineer. You will see later in this series of blogs that we can treat each pixel as a feature, then combine some of those, drop some of those, and come back with a completely new, smaller, more sophisticated set of features.

3. Algorithms: If you are a computer science engineer, then you must definitely have been waiting for this. Obviously there is a need for an algorithm. Any problem can be solved in many ways, by many algorithms; that is what we are going to discuss in this blog. The only thing that changes our selection is the accuracy, precision, or performance that we are willing to gain or lose. Whichever algorithm strikes the right balance between these is the one we would implement for our problem statement. But if the data you are feeding in is garbage, then even if the algorithm is very good, you will not get the correct, desired answers. The only way to deal with this is to have good data.
Always remember that a problem can be solved by many algorithms. Neural networks could be used for most tasks, but they come with the cost of graphics cards, remember?
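The pixels-as-features idea from point 2 can be sketched in a couple of lines: a tiny made-up 3x3 grayscale "image" flattened into the feature vector an algorithm would actually consume.

```python
# Each pixel of this toy 3x3 grayscale image becomes one feature.
image = [
    [0, 255, 0],
    [255, 0, 255],
    [0, 255, 0],
]

# Flatten rows into a single feature vector: 9 pixels -> 9 features.
features = [pixel for row in image for pixel in row]

# Scaling to [0, 1] is a common preprocessing step before feeding
# the vector to an algorithm.
normalized = [p / 255 for p in features]
```

A real photo works the same way, just with millions of pixels instead of nine, which is exactly why engineers then combine and drop features to get a smaller set.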

Types of Machine Learning

  1. Classical Machine Learning: used whenever we have simple data with clear features. When data is heterogeneous, like a set of columns where each column means something different, we can't just mix everything into a neural network and hope it will work. For those problems, classical machine learning is still the way to go.
    1.1 Supervised Learning
    1.1.1 Regression: Linear Regression, Polynomial Regression, Lasso Regression
    1.1.2 Classification: Decision Trees, SVM, Logistic Regression, Naive Bayes, KNN
    1.2 Unsupervised Learning
    1.2.1 Clustering: K-Means, Fuzzy C-Means
    1.2.2 Pattern Search: FP-Growth, Eclat
    1.2.3 Dimensionality Reduction: LDA, LSA, SVD, PCA, t-SNE
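To make one classical algorithm concrete, here is a minimal pure-Python sketch of KNN from the classification list above. The 2-D points and the "cheap"/"costly" labels are invented toy data.

```python
from collections import Counter
import math

def knn_predict(train, labels, point, k=3):
    """Label `point` by majority vote among its k nearest neighbours."""
    # Sort training points by Euclidean distance to the query point.
    dists = sorted(
        (math.dist(p, point), lbl) for p, lbl in zip(train, labels)
    )
    # Count the labels of the k closest points and pick the majority.
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Two toy clusters of houses plotted by (size, newness).
train  = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["cheap", "cheap", "cheap", "costly", "costly", "costly"]

label = knn_predict(train, labels, (2, 2))   # lands near the "cheap" cluster
```

No training step at all: KNN just memorises the data and votes at prediction time, which is why it is often the first classifier people try.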

2. Ensembles: The goal of any machine learning problem is to find the single model that best predicts our desired outcome. Rather than building one model and hoping it is the best, most accurate predictor we can make, ensemble methods take a myriad of models into account and combine them to produce one final model. Take decision trees as an example: instead of building a single decision tree, we can build multiple decision trees and average them out, and this combination of decision trees is known as a Random Forest.
    2.1 Boosting: XGBoost, CatBoost
    2.2 Bagging: Random Forests
    2.3 Stacking
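The bagging idea above can be sketched with the simplest possible ensemble: many weak "models" (here each one is just the mean of a bootstrap sample) whose predictions are averaged into one final answer. The prices are toy numbers.

```python
import random

def bagged_mean(data, n_models=100, seed=0):
    """Average the predictions of n_models trained on bootstrap samples."""
    rng = random.Random(seed)
    predictions = []
    for _ in range(n_models):
        # Bootstrap sample: draw with replacement, same size as the data.
        sample = [rng.choice(data) for _ in data]
        predictions.append(sum(sample) / len(sample))  # one weak model
    # Ensemble output: the average over all the weak models.
    return sum(predictions) / len(predictions)

prices = [110, 150, 210, 250]        # toy house prices (thousands)
estimate = bagged_mean(prices)
```

A Random Forest does the same thing with decision trees in place of the sample mean: each tree sees a different bootstrap sample, and their outputs are averaged.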

3. Reinforcement Learning: Whenever we do not have any data but we do have an environment we can interact with, reinforcement learning techniques are used. The agent receives rewards for performing correctly and penalties for performing incorrectly. The agent learns without human intervention by maximizing its reward and minimizing its penalty.
    3.1 Genetic Algorithms
    3.2 Deep Q-Network
    3.3 Q-Learning
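Here is a toy sketch of tabular Q-Learning from the list above: an agent in a five-cell corridor must learn to walk right to reach a reward in the last cell. The environment, rewards, and hyperparameters are all made up for illustration.

```python
import random

N_STATES = 5                          # cells 0..4; the reward sits in cell 4
ACTIONS = [-1, +1]                    # step left / step right
alpha, gamma, eps = 0.5, 0.9, 0.3     # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(300):                               # training episodes
    s = rng.randrange(N_STATES - 1)                # start in a random cell
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)      # clamp to the corridor
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge Q toward reward + discounted best future value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right (+1) from every non-terminal cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

Notice that nobody ever showed the agent the answer: it discovered "always go right" purely from the reward signal, which is the whole point of reinforcement learning.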

4. Deep Learning and Neural Networks: Deep Learning aims to solve the "representation problem", i.e. how to represent data as a vector. When the data is homogeneous, like images, video, or sound, this problem is very important, and deep learning is the state of the art for solving it. In deep learning, the complete process is a black box; we have to wait for a miracle to happen, and then we get our result.
    4.1 Convolutional Neural Networks (CNN)
    4.2 Recurrent Neural Networks (RNN)
    4.3 Generative Adversarial Networks (GAN)
    4.4 Perceptrons
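The smallest building block in that list, the perceptron, fits in a few lines. Here is a sketch of a single perceptron learning the logical AND function; real deep learning stacks millions of such units, but the learning rule below is the same basic idea.

```python
def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

# Training data: the logical AND truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                              # training epochs
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out                       # perceptron learning rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b    += lr * err

predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
```

AND is linearly separable, so the perceptron is guaranteed to converge here; XOR is not, which is historically what pushed the field toward multi-layer networks.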

If you have any doubts or suggestions, please let me know in the comments section. Also, you will be getting a series of blogs on this topic very soon.
Details of these algorithms will come in future posts of this series. Till then, stay tuned :)

P.S.: Please suggest topics in the comment box that you want to explore further.

Abhishek Agarwal

A tech blogger who tries new things and blogs about them to help readers.