Details
Title | Machine Learning for OpenCV: A practical introduction to the world of machine learning and image processing using OpenCV and Python
---|---
Creators | Beyeler, Michael
Imprint | Birmingham - Mumbai: Packt Publishing, 2017
Electronic publication | Saint Petersburg, 2025
Collection | Electronic books of foreign publishers; General collection
Subjects | Upress
Document type | Other
File type |
Language | English
Rights | Password-based access from the Internet (reading)
Additionally | New arrival
Record key | RU\SPSTU\edoc\75740
Record create date | 4/16/2025
Allowed Actions
–
The 'Read' action will be available if you log in or access the site from another network.
Group | Anonymous
---|---
Network | Internet
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Network | User group | Action
---|---|---
ILC SPbPU Local Network | All |
Internet | Authorized users SPbPU |
Internet | Anonymous |
- Cover
- Copyright
- Credits
- Foreword
- About the Author
- About the Reviewers
- www.PacktPub.com
- Customer Feedback
- Table of Contents
- Preface
- Chapter 1: A Taste of Machine Learning
- Getting started with machine learning
- Problems that machine learning can solve
- Getting started with Python
- Getting started with OpenCV
- Installation
- Getting the latest code for this book
- Getting to grips with Python's Anaconda distribution
- Installing OpenCV in a conda environment
- Verifying the installation
- Getting a glimpse of OpenCV's ML module
- Summary
- Chapter 2: Working with Data in OpenCV and Python
- Understanding the machine learning workflow
- Dealing with data using OpenCV and Python
- Starting a new IPython or Jupyter session
- Dealing with data using Python's NumPy package
- Importing NumPy
- Understanding NumPy arrays
- Accessing single array elements by indexing
- Creating multidimensional arrays
- Loading external datasets in Python
- Visualizing the data using Matplotlib
- Importing Matplotlib
- Producing a simple plot
- Visualizing data from an external dataset
- Dealing with data using OpenCV's TrainData container in C++
- Summary
- Chapter 3: First Steps in Supervised Learning
- Understanding supervised learning
- Having a look at supervised learning in OpenCV
- Measuring model performance with scoring functions
- Scoring classifiers using accuracy, precision, and recall
- Scoring regressors using mean squared error, explained variance, and R squared
- Using classification models to predict class labels
- Understanding the k-NN algorithm
- Implementing k-NN in OpenCV
- Generating the training data
- Training the classifier
- Predicting the label of a new data point
- Using regression models to predict continuous outcomes
- Understanding linear regression
- Using linear regression to predict Boston housing prices
- Loading the dataset
- Training the model
- Testing the model
- Applying Lasso and ridge regression
- Classifying iris species using logistic regression
- Understanding logistic regression
- Loading the training data
- Making it a binary classification problem
- Inspecting the data
- Splitting the data into training and test sets
- Training the classifier
- Testing the classifier
- Summary
- Chapter 4: Representing Data and Engineering Features
- Understanding feature engineering
- Preprocessing data
- Standardizing features
- Normalizing features
- Scaling features to a range
- Binarizing features
- Handling the missing data
- Understanding dimensionality reduction
- Implementing Principal Component Analysis (PCA) in OpenCV
- Implementing Independent Component Analysis (ICA)
- Implementing Non-negative Matrix Factorization (NMF)
- Representing categorical variables
- Representing text features
- Representing images
- Using color spaces
- Encoding images in RGB space
- Encoding images in HSV and HLS space
- Detecting corners in images
- Using the Scale-Invariant Feature Transform (SIFT)
- Using Speeded Up Robust Features (SURF)
- Summary
- Chapter 5: Using Decision Trees to Make a Medical Diagnosis
- Understanding decision trees
- Building our first decision tree
- Understanding the task by understanding the data
- Preprocessing the data
- Constructing the tree
- Visualizing a trained decision tree
- Investigating the inner workings of a decision tree
- Rating the importance of features
- Understanding the decision rules
- Controlling the complexity of decision trees
- Using decision trees to diagnose breast cancer
- Loading the dataset
- Building the decision tree
- Using decision trees for regression
- Summary
- Chapter 6: Detecting Pedestrians with Support Vector Machines
- Understanding linear support vector machines
- Learning optimal decision boundaries
- Implementing our first support vector machine
- Generating the dataset
- Visualizing the dataset
- Preprocessing the dataset
- Building the support vector machine
- Visualizing the decision boundary
- Dealing with nonlinear decision boundaries
- Understanding the kernel trick
- Knowing our kernels
- Implementing nonlinear support vector machines
- Detecting pedestrians in the wild
- Obtaining the dataset
- Taking a glimpse at the histogram of oriented gradients (HOG)
- Generating negatives
- Implementing the support vector machine
- Bootstrapping the model
- Detecting pedestrians in a larger image
- Further improving the model
- Summary
- Chapter 7: Implementing a Spam Filter with Bayesian Learning
- Understanding Bayesian inference
- Taking a short detour on probability theory
- Understanding Bayes' theorem
- Understanding the naive Bayes classifier
- Implementing your first Bayesian classifier
- Creating a toy dataset
- Classifying the data with a normal Bayes classifier
- Classifying the data with a naive Bayes classifier
- Visualizing conditional probabilities
- Classifying emails using the naive Bayes classifier
- Loading the dataset
- Building a data matrix using Pandas
- Preprocessing the data
- Training a normal Bayes classifier
- Training on the full dataset
- Using n-grams to improve the result
- Using tf-idf to improve the result
- Summary
- Chapter 8: Discovering Hidden Structures with Unsupervised Learning
- Understanding unsupervised learning
- Understanding k-means clustering
- Implementing our first k-means example
- Understanding expectation-maximization
- Implementing our own expectation-maximization solution
- Knowing the limitations of expectation-maximization
- First caveat: No guarantee of finding the global optimum
- Second caveat: We must select the number of clusters beforehand
- Third caveat: Cluster boundaries are linear
- Fourth caveat: k-means is slow for a large number of samples
- Compressing color spaces using k-means
- Visualizing the true-color palette
- Reducing the color palette using k-means
- Classifying handwritten digits using k-means
- Loading the dataset
- Running k-means
- Organizing clusters as a hierarchical tree
- Understanding hierarchical clustering
- Implementing agglomerative hierarchical clustering
- Summary
- Chapter 9: Using Deep Learning to Classify Handwritten Digits
- Understanding the McCulloch-Pitts neuron
- Understanding the perceptron
- Implementing your first perceptron
- Generating a toy dataset
- Fitting the perceptron to data
- Evaluating the perceptron classifier
- Applying the perceptron to data that is not linearly separable
- Understanding multilayer perceptrons
- Understanding gradient descent
- Training multi-layer perceptrons with backpropagation
- Implementing a multilayer perceptron in OpenCV
- Preprocessing the data
- Creating an MLP classifier in OpenCV
- Customizing the MLP classifier
- Training and testing the MLP classifier
- Getting acquainted with deep learning
- Getting acquainted with Keras
- Classifying handwritten digits
- Loading the MNIST dataset
- Preprocessing the MNIST dataset
- Training an MLP using OpenCV
- Training a deep neural net using Keras
- Preprocessing the MNIST dataset
- Creating a convolutional neural network
- Fitting the model
- Summary
- Chapter 10: Combining Different Algorithms into an Ensemble
- Understanding ensemble methods
- Understanding averaging ensembles
- Implementing a bagging classifier
- Implementing a bagging regressor
- Understanding boosting ensembles
- Implementing a boosting classifier
- Implementing a boosting regressor
- Understanding stacking ensembles
- Combining decision trees into a random forest
- Understanding the shortcomings of decision trees
- Implementing our first random forest
- Implementing a random forest with scikit-learn
- Implementing extremely randomized trees
- Using random forests for face recognition
- Loading the dataset
- Preprocessing the dataset
- Training and testing the random forest
- Implementing AdaBoost
- Implementing AdaBoost in OpenCV
- Implementing AdaBoost in scikit-learn
- Combining different models into a voting classifier
- Understanding different voting schemes
- Implementing a voting classifier
- Summary
- Chapter 11: Selecting the Right Model with Hyperparameter Tuning
- Evaluating a model
- Evaluating a model the wrong way
- Evaluating a model in the right way
- Selecting the best model
- Understanding cross-validation
- Manually implementing cross-validation in OpenCV
- Using scikit-learn for k-fold cross-validation
- Implementing leave-one-out cross-validation
- Estimating robustness using bootstrapping
- Manually implementing bootstrapping in OpenCV
- Assessing the significance of our results
- Implementing Student's t-test
- Implementing McNemar's test
- Tuning hyperparameters with grid search
- Implementing a simple grid search
- Understanding the value of a validation set
- Combining grid search with cross-validation
- Combining grid search with nested cross-validation
- Scoring models using different evaluation metrics
- Choosing the right classification metric
- Choosing the right regression metric
- Chaining algorithms together to form a pipeline
- Implementing pipelines in scikit-learn
- Using pipelines in grid searches
- Summary
- Chapter 12: Wrapping Up
- Approaching a machine learning problem
- Building your own estimator
- Writing your own OpenCV-based classifier in C++
- Writing your own scikit-learn-based classifier in Python
- Where to go from here?
- Summary
- Index