NLP in Python is one of the most sought-after skills among data scientists. With code and relevant case studies, this book shows how you can use industry-grade tools to implement NLP programs capable of learning from relevant data. We will explore many modern methods, ranging from spaCy to word vectors, that have reinvented NLP.
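As a small taste of the pre-processing the book covers, word tokenization can be sketched with a simple regex before reaching for spaCy. This is only an illustrative sketch (the sample sentence and the `tokenize` helper are ours, not from the book):

```python
import re

def tokenize(text):
    """Extract word tokens by matching runs of word characters.

    Pulling out \\w+ runs avoids the usual pitfall of splitting on
    whitespace, which leaves punctuation attached to words.
    """
    return re.findall(r"\w+", text.lower())

print(tokenize("NLP in Python: from spaCy to word vectors!"))
# ['nlp', 'in', 'python', 'from', 'spacy', 'to', 'word', 'vectors']
```

Libraries such as spaCy go far beyond this, handling contractions, punctuation, and language-specific exceptions, which is why the book moves from hacks like this one to proper tokenizers.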
- Cover
- Title Page
- Copyright and Credits
- About Packt
- Contributors
- Table of Contents
- Preface
- Chapter 1: Getting Started with Text Classification
- What is NLP?
- Why learn about NLP?
- You have a problem in mind
- Technical achievement
- Do something new
- Is this book for you?
- NLP workflow template
- Understanding the problem
- Understanding and preparing the data
- Quick wins – proof of concept
- Iterating and improving
- Algorithms
- Pre-processing
- Evaluation and deployment
- Evaluation
- Deployment
- Example – text classification workflow
- Launchpad – programming environment setup
- Text classification in 30 lines of code
- Getting the data
- Text to numbers
- Machine learning
- Summary
- Chapter 2: Tidying your Text
- Bread and butter – most common tasks
- Loading the data
- Exploring the loaded data
- Tokenization
- Intuitive – split by whitespace
- The hack – splitting by word extraction
- Introducing Regexes
- spaCy for tokenization
- How does the spaCy tokenizer work?
- Sentence tokenization
- Stop words removal and case change
- Stemming and lemmatization
- spaCy for lemmatization
- -PRON-
- Case-insensitive
- Conversion – meeting to meet
- spaCy compared with NLTK and CoreNLP
- Correcting spelling
- FuzzyWuzzy
- Jellyfish
- Phonetic word similarity
- What is a phonetic encoding?
- Runtime complexity
- Cleaning a corpus with FlashText
- Summary
- Chapter 3: Leveraging Linguistics
- Linguistics and NLP
- Getting started
- Introducing textacy
- Redacting names with named entity recognition
- Entity types
- Automatic question generation
- Part-of-speech tagging
- Creating a ruleset
- Question and answer generation using dependency parsing
- Visualizing the relationship
- Leveling up – question and answer
- Putting it together and the end
- Summary
- Chapter 4: Text Representations - Words to Numbers
- Vectorizing a specific dataset
- Word representations
- How do we use pre-trained embeddings?
- KeyedVectors API
- What is missing in both word2vec and GloVe?
- How do we handle Out Of Vocabulary words?
- Getting the dataset
- Training fastText embeddings
- Training word2vec embeddings
- fastText versus word2vec
- Document embedding
- Understanding the doc2vec API
- Negative sampling
- Hierarchical softmax
- Data exploration and model evaluation
- Summary
- Chapter 5: Modern Methods for Classification
- Machine learning for text
- Sentiment analysis as text classification
- Simple classifiers
- Optimizing simple classifiers
- Ensemble methods
- Getting the data
- Reading data
- Logistic regression
- Removing stop words
- Increasing ngram range
- Multinomial Naive Bayes
- Adding TF-IDF
- Removing stop words
- Changing fit prior to false
- Support vector machines
- Decision trees
- Random forest classifier
- Extra trees classifier
- Optimizing our classifiers
- Parameter tuning using RandomizedSearch
- GridSearch
- Ensembling models
- Voting ensembles – simple majority (aka hard voting)
- Voting ensembles – soft voting
- Weighted classifiers
- Removing correlated classifiers
- Summary
- Chapter 6: Deep Learning for NLP
- What is deep learning?
- Differences between modern machine learning methods
- Understanding deep learning
- Puzzle pieces
- Model
- Loss function
- Optimizer
- Putting it all together – the training loop
- Kaggle – text categorization challenge
- Getting the data
- Exploring the data
- Multiple target dataset
- Why PyTorch?
- PyTorch and torchtext
- Data loaders with torchtext
- Conventions and style
- Knowing the field
- Exploring the dataset objects
- Iterators
- BucketIterator
- BatchWrapper
- Training a text classifier
- Initializing the model
- Putting the pieces together again
- Training loop
- Prediction mode
- Converting predictions into a pandas DataFrame
- Summary
- Chapter 7: Building your Own Chatbot
- Why chatbots as a learning example?
- Why build a chatbot?
- Quick code means word vectors and heuristics
- Figuring out the right user intent
- Use case – food order bot
- Classifying user intent
- Bot responses
- Better response personalization
- Summary
- Chapter 8: Web Deployments
- Web deployments
- Model persistence
- Model loading and prediction
- Flask for web deployments
- Summary
- Other Books You May Enjoy
- Index