Machine Learning (ML) Explained
What is Machine Learning (ML)?
The term “Machine Learning” was coined in 1959 by Arthur Samuel, a prominent figure in the early days of Artificial Intelligence as a formal field of research. As the field developed, Machine Learning drifted away from the more symbolic and “fantastical” ideas of Artificial Intelligence and edged closer to practical methods grounded in statistics and probability theory. Machine Learning not only established itself as its own field of research, but also gave rise to subfields of its own, such as Deep Learning and Computer Vision.
Machine Learning is similar to Data Mining, and the two are sometimes confused. The difference is that Data Mining is the practice of discovering unknown properties embedded in sets of data, while Machine Learning is a means of producing predictions based on already known properties of data sets.
Additionally, Machine Learning has very strong ties to Optimization. In fact, the goal of Machine Learning is often to minimize a certain loss function: the discrepancy between a prediction and the actual result. Put simply, Machine Learning attempts to optimize the way its predictions are made.
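The idea of minimizing a loss function can be made concrete with a small sketch. The toy data, the squared-error loss, and the learning rate below are all illustrative choices, not anything specified in this article; the point is only to show a loss being driven down by gradient descent.

```python
# A minimal sketch (toy data, illustrative constants): fitting a line
# y = w * x by minimizing a squared-error loss with gradient descent.

# Toy data: y is roughly 2 * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]

def loss(w):
    # Mean squared discrepancy between predictions (w * x) and actual results.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0                      # initial guess
learning_rate = 0.01
for _ in range(1000):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

print(round(w, 2))  # w ends up close to 2
```

Each step nudges `w` in the direction that reduces the loss, which is exactly the "optimize the predictions" loop described above.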
There are many different approaches to Machine Learning. Among the most popular today are:
Decision Tree Learning
In the context of Machine Learning, these decision “trees” are a set of points of information referred to as “nodes,” connected by edges referred to as “branches.” A decision tree, then, is a tree that outlines criteria in each node that lead the machine to the best option found in the bottom node of the path taken—this bottom node is referred to as a “leaf.”
A decision tree can be “learned” by a machine or program by splitting the data set into subsets based on the decision criteria at each point along the path, thereby partitioning the data. Once the tree has been formed, the decision process is simple: the machine follows a path down the tree based on the relevant criterion at each node.
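The follow-a-path step can be sketched in a few lines. The tree below is hypothetical (the "outlook"/"humidity" criteria and the decisions are invented for illustration): inner nodes test one criterion, branches carry the possible values, and the leaves at the bottom hold the final decision.

```python
# A minimal sketch (hypothetical criteria): a decision tree as nested dicts,
# where each inner node names a criterion, each branch is a possible value,
# and each leaf (a plain string) is the final decision.

tree = {
    "criterion": "outlook",
    "branches": {
        "sunny": {
            "criterion": "humidity",
            "branches": {"high": "stay in", "normal": "play outside"},
        },
        "rainy": "stay in",
        "overcast": "play outside",
    },
}

def decide(node, example):
    # Follow branches down the tree until we reach a leaf.
    while isinstance(node, dict):
        value = example[node["criterion"]]
        node = node["branches"][value]
    return node

print(decide(tree, {"outlook": "sunny", "humidity": "normal"}))  # play outside
```

Learning such a tree amounts to choosing, at each node, the criterion that best partitions the remaining data; classifying is just the walk shown in `decide`.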
Artificial Neural Networks
Artificial Neural Networks are inspired by the biological neural networks found in our brains, and are the essential means of “learning” by massive association (forming connections and learning relationships without being explicitly told how to do so).
An easy example of this is image recognition. If you have a smartphone, chances are you’re able to search for certain objects, and your phone will show you the images you have that contain that object. This ability can be learned, for example, by giving a program loads of images with and without apples, manually categorized as such. The program analyzes all the images and sees what they have in common and how they differ, building connections between objects that are apples and objects that aren’t, eventually learning to tell images that contain apples from those that don’t.
While Deep Learning can take other forms, it most commonly takes the form of layered Artificial Neural Networks. The “layers” refer to how many times the data is analyzed and transformed into information that the machine or system can use. In image processing, for example, these layers can mean breaking an image down into pixels, then shapes, then recognizable objects, etc.
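A tiny sketch can show what "layered" means in code. The layer sizes, the random weights, and the sigmoid squashing function below are all illustrative choices, not a prescribed architecture: the only point is that each layer transforms the output of the previous one.

```python
import math
import random

# A minimal sketch (illustrative sizes and weights): a two-layer network
# where each layer re-analyzes and transforms its input.

random.seed(0)  # fixed seed so the sketch is reproducible

def layer(inputs, weights):
    # Each unit computes a weighted sum of the inputs, squashed into (0, 1)
    # by the sigmoid function; one unit per row of weights.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# Layer 1 turns 3 raw features into 2 intermediate features;
# layer 2 turns those 2 features into a single score.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [[random.uniform(-1, 1) for _ in range(2)]]

hidden = layer([0.5, -0.2, 0.1], w1)   # first transformation
score = layer(hidden, w2)[0]           # second transformation
print(0.0 < score < 1.0)  # prints True: the sigmoid keeps scores in (0, 1)
```

Training such a network means adjusting the weight matrices so the final score matches the labeled examples; stacking more layers gives the pixels-to-shapes-to-objects progression described above.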
Support Vector Machines
Support Vector Machines, also known as SVMs, are fed data on examples of items that fit into certain categories. This acts as machine “training,” teaching the system to differentiate between these categories. This method is useful in image and text categorization.
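A linear SVM can be sketched with a handful of labeled examples. The 1-D toy points, the +1/-1 labels, and the learning-rate and regularization constants below are invented for illustration; the training rule is sub-gradient descent on the hinge loss, one common way to fit a linear SVM.

```python
# A minimal sketch (toy 1-D data, illustrative constants): a linear SVM
# trained with sub-gradient descent on the hinge loss. Each example is a
# point x with a category label of +1 or -1.

data = [(-2.0, -1), (-1.5, -1), (-1.0, -1), (1.0, 1), (1.5, 1), (2.0, 1)]

w, b = 0.0, 0.0
lr, reg = 0.1, 0.01
for _ in range(200):
    for x, y in data:
        margin = y * (w * x + b)
        if margin < 1:                 # point inside the margin: push it out
            w += lr * (y * x - reg * w)
            b += lr * y
        else:                          # classified with room to spare: only shrink w
            w -= lr * reg * w

def classify(x):
    return 1 if w * x + b >= 0 else -1

print(classify(-1.2), classify(1.8))  # -1 1
```

After training, the sign of `w * x + b` assigns new points to one category or the other; the hinge loss is what pushes the boundary to leave a margin between the two groups.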
Cluster Analysis

Cluster Analysis, or Clustering, groups data into different “clusters” based on similarities and differences found in the data. These clusters are then analyzed to “learn” how the elements within each cluster are related and how they differ. This method is useful in genetic research as well as crime and climate analysis.
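One common clustering method, k-means, fits this description directly: points are grouped by similarity (distance) and each group's center is refined in turn. The 1-D toy data and the choice of two clusters below are illustrative, not from the article.

```python
# A minimal sketch (toy 1-D data, k fixed at 2): k-means clustering.
# Points near 1 and points near 8 should end up in separate clusters.

data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]

centers = [data[0], data[3]]            # crude initial guesses
for _ in range(10):
    # Step 1: assign each point to its nearest center.
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centers[i]))
        clusters[nearest].append(x)
    # Step 2: move each center to the mean of its cluster.
    centers = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 1) for c in centers))  # [1.0, 8.1]
```

The two steps alternate until the centers stop moving; the resulting clusters are what an analyst would then inspect to learn how the grouped elements relate.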