Introduction To Machine Learning
Chapter 1: Machine Learning
Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. Machine learning explores the construction and study of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions, rather than following strictly static program instructions.
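The idea of building a model from example inputs can be sketched concretely. The following is a minimal illustration (the data points and function names are invented for this sketch, not taken from the text): it fits a straight line to example (x, y) pairs by ordinary least squares, then uses the fitted line, rather than any hand-coded rule, to make predictions on new inputs.

```python
def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares; return (a, b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Example inputs: noisy observations of a roughly linear relationship
# (made-up data, purely for illustration).
xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

a, b = fit_line(xs, ys)

def predict(x):
    """The learned, data-driven model: no rule for y was hand-coded."""
    return a * x + b
```

Here the "model" is just the pair (a, b) estimated from the examples; the program's predictions are driven by the data rather than by explicit, static instructions, which is exactly the distinction the paragraph draws.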
Machine learning is closely related to, and often overlaps with, computational statistics, a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which delivers methods, theory, and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR), search engines, and computer vision. Machine learning is sometimes conflated with data mining, although the latter focuses more on exploratory data analysis.
Machine learning and pattern recognition “can be viewed as two facets of the same field.”
When employed in industrial contexts, machine learning methods may be referred to as predictive analytics or predictive modelling.
In 1959, Arthur Samuel defined machine learning as a “Field of study that gives computers the ability to learn without being explicitly programmed”.
Tom M. Mitchell provided a widely quoted, more formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E”.
This definition is notable for defining machine learning in fundamentally operational rather than cognitive terms, thus following Alan Turing's proposal, in his paper “Computing Machinery and Intelligence”, that the question “Can machines think?” be replaced with the question “Can machines do what we (as thinking entities) can do?”.
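Mitchell's definition can be made concrete with a toy sketch (the data and the tiny learner below are invented for illustration): the task T is deciding whether a point lies above 0.5, the experience E is a list of labeled examples, and the performance measure P is accuracy on held-out points. The program improves at T, as measured by P, as it sees more of E.

```python
def learn_threshold(examples):
    """Learn a decision threshold: the midpoint of the two class means."""
    neg = [x for x, y in examples if y == 0]
    pos = [x for x, y in examples if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

def performance(threshold, test_set):
    """Performance measure P: accuracy of the threshold rule on test_set."""
    return sum((x > threshold) == y for x, y in test_set) / len(test_set)

# Experience E: labeled examples of the task (true boundary at 0.5);
# the first two examples happen to be unrepresentative.
E = [(0.00, 0), (0.60, 1), (0.40, 0), (0.90, 1),
     (0.45, 0), (0.70, 1), (0.48, 0), (0.55, 1)]

# Held-out instances of task T: classify x as above/below 0.5.
test_set = [(i / 100, i / 100 > 0.5) for i in range(100)]

p_little = performance(learn_threshold(E[:2]), test_set)  # little experience
p_more = performance(learn_threshold(E), test_set)        # all of E

# Performance at T, as measured by P, improves with experience E.
assert p_more > p_little
```

The learner here is deliberately trivial; the point is only the shape of the definition: a fixed task, a stream of experience, and a performance measure under which more experience yields better behavior.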
