Language
  • Python 2
Reading time
  • Approximately 38 days
What you will learn
  • Machine Learning and AI
Author
  • Peter Harrington
Published
  • 2012
Packages you will be introduced to
  • numpy
  • matplotlib

Summary

Machine Learning in Action is a unique book that blends the foundational theories of machine learning with the practical realities of building tools for everyday data analysis. You'll use the flexible Python programming language to build programs that implement algorithms for data classification, forecasting, recommendations, and higher-level features like summarization and simplification.
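To give a flavor of what implementing one of these algorithms in Python looks like, here is a minimal k-nearest-neighbors classifier built on numpy. This is an illustrative sketch, not code from the book; the function and variable names are our own.

    import numpy as np

    def knn_classify(query, data, labels, k=3):
        """Return the majority label among the k training points closest to query."""
        # Euclidean distance from the query point to every training point
        distances = np.sqrt(((data - query) ** 2).sum(axis=1))
        # Indices of the k nearest neighbors
        nearest = distances.argsort()[:k]
        # Vote: the most common label among those neighbors wins
        votes = {}
        for i in nearest:
            votes[labels[i]] = votes.get(labels[i], 0) + 1
        return max(votes, key=votes.get)

    # Toy dataset: two clusters of 2-D points
    data = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
    labels = ['A', 'A', 'B', 'B']
    print(knn_classify(np.array([0.9, 0.8]), data, labels, k=3))  # -> 'A'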

About the Book

A machine is said to learn when its performance improves with experience. Learning requires algorithms and programs that capture data and ferret out the interesting or useful patterns. Once the specialized domain of analysts and mathematicians, machine learning is becoming a skill needed by many.

Machine Learning in Action is a clearly written tutorial for developers. It avoids academic language and takes you straight to the techniques you'll use in your day-to-day work. Many (Python) examples present the core algorithms of statistical data processing, data analysis, and data visualization in code you can reuse. You'll understand the concepts and how they fit in with tactical tasks like classification, forecasting, recommendations, and higher-level features like summarization and simplification.
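As a small, hypothetical example of the kind of reusable data-visualization code this involves (not taken from the book), the following numpy/matplotlib snippet plots two labeled groups of points:

    import numpy as np
    import matplotlib.pyplot as plt

    # Generate two synthetic clusters of 2-D points (purely illustrative data)
    rng = np.random.RandomState(0)
    group_a = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(50, 2))
    group_b = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(50, 2))

    # Scatter-plot each group with its own marker so the class structure is visible
    plt.scatter(group_a[:, 0], group_a[:, 1], marker='o', label='class A')
    plt.scatter(group_b[:, 0], group_b[:, 1], marker='x', label='class B')
    plt.xlabel('feature 1')
    plt.ylabel('feature 2')
    plt.legend()
    plt.show()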

Readers need no prior experience with machine learning or statistical processing. Familiarity with Python is helpful.

Purchase of the print book comes with an offer of a free PDF, ePub, and Kindle eBook from Manning. All code from the book is also available.

What's Inside
  • A no-nonsense introduction
  • Examples showing common ML tasks
  • Everyday data analysis
  • Implementing classic algorithms like Apriori and AdaBoost

Table of Contents
    PART 1 CLASSIFICATION
  1. Machine learning basics
  2. Classifying with k-Nearest Neighbors
  3. Splitting datasets one feature at a time: decision trees
  4. Classifying with probability theory: naïve Bayes
  5. Logistic regression
  6. Support vector machines
  7. Improving classification with the AdaBoost meta-algorithm
    PART 2 FORECASTING NUMERIC VALUES WITH REGRESSION
  8. Predicting numeric values: regression
  9. Tree-based regression
    PART 3 UNSUPERVISED LEARNING
  10. Grouping unlabeled items using k-means clustering
  11. Association analysis with the Apriori algorithm
  12. Efficiently finding frequent itemsets with FP-growth
    PART 4 ADDITIONAL TOOLS
  13. Using principal component analysis to simplify data
  14. Simplifying data with the singular value decomposition
  15. Big data and MapReduce

About the Author
  • Has worked at Intel