Machine Learning and Algorithms

Deepak Kumar
6 min read · Nov 19, 2020

In his book “On Intelligence”, published in 2004, Jeff Hawkins defined intelligence as the ability to predict the future, for example, the weight of a glass we are going to lift or the reaction of others to our actions, based on patterns stored in the memory (the memory-prediction framework). This same principle is behind Machine Learning.

What is Machine Learning?

Machine Learning is a discipline within the field of Artificial Intelligence which, by means of algorithms, provides computers with the ability to identify patterns in large volumes of data in order to make predictions. This learning approach allows computers to perform specific tasks autonomously, that is, without being explicitly programmed for each one.

Processes involved in Machine Learning

  1. Data Gathering
  2. Data Pre-Processing
  3. Choose Model
  4. Train Model
  5. Test Model
  6. Tune Model
  7. Deployment for Predictions
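
The end-to-end workflow above can be sketched in a few lines of scikit-learn. This is a minimal illustration, not a prescribed recipe: the toy dataset, the choice of logistic regression, and the hyperparameter grid are all assumptions made for the example.

```python
# Minimal sketch of the 7-step workflow using scikit-learn and a toy dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline

# 1-2. Gather and pre-process the data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 3. Choose a model (feature scaling + logistic regression)
model = Pipeline([("scale", StandardScaler()),
                  ("clf", LogisticRegression(max_iter=1000))])

# 4. Train the model
model.fit(X_train, y_train)

# 5. Test the model on held-out data
print("Test accuracy:", model.score(X_test, y_test))

# 6. Tune the model (small illustrative grid over the regularization strength C)
grid = GridSearchCV(model, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)
print("Best C:", grid.best_params_)

# 7. Deploy: the fitted estimator can now be saved and used for new predictions
print("Prediction:", grid.predict(X_test[:1]))
```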

Machine Learning Types

Supervised Learning: Supervised learning is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data.

In supervised learning, we are given the data sets and already know what our correct output should look like.

(Image source: bigdata-madesimple.com)

Supervised Learning can be further divided into Regression and Classification Algorithms:

  • Regression: In regression, we fit a continuous function to the training data and predict results within a continuous output, meaning we are trying to map the input variables to some continuous function. For example, predicting the price of a house from its size: the price is a continuous function of the size of the house, so this is a regression problem (see the sketch after this list).
  • Classification: In classification, outputs are predicted as discrete values such as yes or no, true or false, 0 or 1, diabetic or not, male or female, positive or negative, etc. For example, predicting from health data whether a person has diabetes is a classification problem.
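
To make the contrast concrete, here is a rough sketch of both problem types with scikit-learn. The house-size and glucose figures below are invented purely for illustration.

```python
# Regression (continuous output) vs. classification (discrete output).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: price as a continuous function of house size
sizes = np.array([[50], [80], [120], [160]])          # square metres
prices = np.array([100_000, 160_000, 240_000, 320_000])
reg = LinearRegression().fit(sizes, prices)
print("Predicted price for 100 sqm:", reg.predict([[100]])[0])

# Classification: diabetes yes/no (1/0) from a single health measurement
glucose = np.array([[85], [90], [150], [165]])
has_diabetes = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(glucose, has_diabetes)
print("Predicted class for glucose 140:", clf.predict([[140]])[0])
```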

Some popular examples of supervised machine learning algorithms are:

  • Linear regression for regression problems.
  • K Nearest Neighbours for classification.
  • Random forest for classification and regression problems; it is similar to a decision tree but usually more accurate.

Unsupervised Learning: Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labeled responses.

Unlike in supervised learning, we provide the data sets without saying what the labels are (what the data actually represents) and ask the algorithm to find structure in the given data.

(Image source: bigdata-madesimple.com)

Unsupervised Learning can be further divided into Clustering and Association Algorithms:

  • Clustering: It involves grouping data based on the similarity between data instances. A typical approach iteratively finds cluster centers, called centroids, and assigns each data point to the nearest centroid (see the sketch after this list).
  • Association: Association rules are used to identify new and interesting relationships between different objects in a set, such as frequent patterns in transactional data or any other sort of relational database.
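
A minimal sketch of the centroid idea behind clustering, using scikit-learn's KMeans on a handful of made-up 2-D points; the number of clusters is an assumption for the example.

```python
# K-means finds centroids and assigns each point to its nearest centroid.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1, 2], [1.5, 1.8], [5, 8], [8, 8], [1, 0.6], [9, 11]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print("Cluster centers (centroids):", kmeans.cluster_centers_)
print("Point assignments:", kmeans.labels_)
```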

Some popular examples of unsupervised learning algorithms are:

  • k-means for clustering problems.
  • Hierarchical Clustering for Spam and Fraud Detection.
  • Relational association rules to estimate the probability of the occurrence of an illness.

Reinforcement Learning: In reinforcement learning, agents are trained through a reward and punishment mechanism. The agent is rewarded for correct moves and punished for wrong ones. In doing so, the agent tries to minimize wrong moves and maximize right ones.

For example, consider teaching a dog a new trick: you cannot tell it what to do, but you can reward/punish it if it does the right/wrong thing. It has to figure out what it did that made it get the reward/punishment, which is known as the credit assignment problem.
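As a rough illustration of the reward/punishment idea, here is a tiny tabular Q-learning sketch. The one-dimensional "world", the reward values, and the hyperparameters are all assumptions made for the example, not part of any particular library or paper.

```python
# Tiny tabular Q-learning sketch: an agent on a 1-D line of 5 cells tries to
# reach the rightmost cell (+1 reward) and is penalised slightly for each step.
import random

n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != n_states - 1:
        # explore occasionally, otherwise take the action with the highest Q-value
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01   # reward / punishment
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print("Preferred move in each state:",
      [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states)])
```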

(Image source: bigdata-madesimple.com)

Reinforcement Learning can be further divided into Positive and Negative RL Algorithms:

  • Positive: Positive reinforcement occurs when an event, occurring as a consequence of a particular behavior, increases the strength and frequency of that behavior. In other words, it has a positive effect on the behavior.
  • Negative: Negative reinforcement is the strengthening of a behavior because a negative condition is stopped or avoided as a consequence of that behavior; this also increases the behavior.

Some popular examples of Reinforcement learning algorithms are:

  • In gaming, AlphaGo Zero.
  • Reinforcement Learning in news recommendation.
  • Discord bots that learn from and adapt to a user's behavior.

Some commonly used Algorithms in ML :

Linear Regression: Linear regression is a linear model, i.e., a model that assumes a linear relationship between the input variables (x) and the single output variable (y). More specifically, y can be calculated from a linear combination of the input variables (x).
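
A minimal sketch of fitting such a linear model y = b0 + b1·x with scikit-learn; the data points below are invented for illustration.

```python
# Fit a straight line to five noisy points and read off intercept and slope.
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([[1], [2], [3], [4], [5]])
y = np.array([2.1, 4.2, 5.9, 8.1, 9.8])      # roughly y = 2x

model = LinearRegression().fit(x, y)
print("Intercept (b0):", model.intercept_)
print("Coefficient (b1):", model.coef_[0])
print("Prediction for x = 6:", model.predict([[6]])[0])
```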

Some applications of Linear Regression:

  • Studying engine performance from test data in automobiles.
  • OLS regression can be used in weather data analysis.

Logistic Regression: It uses a sigmoid function for classification problems, which is an S-shaped curve that can take any real-valued number and map it into a value between 0 and 1, but never exactly at those limits.
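
A short sketch of that sigmoid function: any real-valued score is squashed into the open interval (0, 1), which is what lets the model's output be read as a probability.

```python
# The logistic (sigmoid) function maps any real number into (0, 1).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [-10, -1, 0, 1, 10]:
    print(f"sigmoid({z}) = {sigmoid(z):.4f}")
```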

Some applications of Logistic regressions are:

  • Fraud detection: Detection of credit card frauds or banking fraud is the objective of this use case.
  • Image segmentation, recognition, and classification.
  • Object detection.

Decision Tree: Decision trees are constructed via an algorithmic approach that identifies ways to split a data set based on different conditions. It is one of the most widely used and practical methods for supervised learning.
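
Here is a small sketch of a decision tree learning split conditions from data; the iris dataset and the depth limit are illustrative choices.

```python
# Fit a shallow decision tree and print the split conditions it learned.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree, feature_names=load_iris().feature_names))
```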

Some applications of the Decision Tree Algorithm:

  • Selecting a flight to travel.
  • Recommendations on Dating Sites.

Random Forest: Random forest is a supervised learning algorithm. The "forest" it builds is an ensemble of decision trees, usually trained with the "bagging" method. In other words, it is a collection of decision trees whose combined output improves the overall accuracy.
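
A rough comparison of a single decision tree against a bagged ensemble of trees on a standard dataset; the dataset and settings are illustrative assumptions, and exact accuracies will vary.

```python
# Compare one decision tree with a random forest (an ensemble of 100 trees).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("Single tree accuracy:", tree.score(X_test, y_test))
print("Random forest accuracy:", forest.score(X_test, y_test))
```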

Some common applications of Random Forest Algorithms are:

  • In the stock market, a random forest algorithm can be used to analyze stock trends and estimate expected loss and profit.
  • The random forest can be used for recommending products in e-commerce.

K Nearest Neighbor: It is one of the simplest and most widely used classification algorithms. It stores all the available cases and classifies new data or cases based on a similarity measure. Since the algorithm relies on distance for classification, normalizing the training data can improve its accuracy dramatically.
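
A sketch of k-nearest-neighbours classification with and without normalizing the features, to illustrate the point about distance-based methods; the dataset and value of k are illustrative choices.

```python
# k-NN with raw features vs. k-NN with standardized features.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw_knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
scaled_knn = make_pipeline(StandardScaler(),
                           KNeighborsClassifier(n_neighbors=5)).fit(X_train, y_train)

print("Accuracy without scaling:", raw_knn.score(X_test, y_test))
print("Accuracy with scaling:   ", scaled_knn.score(X_test, y_test))
```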

Some common applications of the K-NN Algorithm:

  • Object detection (e.g., cat or dog).
  • Recommendation Systems.

Sources

  • simplilearn.com
  • neptune.ai
  • towardsdatascience.com
  • bigdata-madesimple.com
