How to Text mine in R using NLP techniques

Introduction:

Text mining, also known as intelligent text analysis, text data mining, or knowledge discovery in text (KDT), is the process of extracting significant patterns and meaning from unstructured text data. Natural language processing (NLP), data mining, information retrieval (IR), and analytical techniques are employed to transform text into structured data for analysis. It follows the sequence of text pre-processing (syntactic/semantic text analysis), feature generation, feature selection (simple counting statistics), text data mining (supervised/unsupervised learning), and analysis of the results.

Problems in Text Mining:

  • Noisy Data: Noisy data is often corrupt data. It is the data that has a considerable amount of meaningless information and is not easily comprehensible by machines.

  • Word ambiguity and context sensitivity: Ambiguous words lead to vagueness and confusion, and context sensitivity means that a word's meaning depends on the context or circumstances in which it appears. For example, Apple (the company) versus apple (the fruit).

  • Complex and subtle relationships between concepts in the text, which are difficult for machines to capture.

Natural Language Processing (NLP):

NLP allows communication between humans and computers. It gives the machines the ability to read, understand, and interpret languages used by people. It includes tasks like text tokenization, stemming, sentiment analysis, and lemmatization.

Text Mining:

Text mining is the process of extracting important patterns from large amounts of unstructured text data. Unstructured data has no predefined format and is typically obtained from natural language. For example, comments on Facebook, tweets on Twitter, and opinions or reviews of products or services are all unstructured data.

Tokenization:

Tokenization is the process of dividing a text into distinct words, or tokens. Since tokenization transforms human-readable text into a format that machine learning algorithms can easily process, it is a crucial step in the Natural Language Processing (NLP) process.
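
As a quick illustration, the tokenize_words() function from the tokenizers package (used later in this article) splits text into lowercase word tokens:

library(tokenizers)

# Split a sentence into lowercase word tokens
tokenize_words("Text mining transforms unstructured text into tokens.")
# [[1]] "text" "mining" "transforms" "unstructured" "text" "into" "tokens"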

Term Frequency (TF):

Term frequency is calculated by dividing the number of times a word appears in a document by the total number of words in that document. The frequency with which a term appears in a document indicates its importance.
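
For example, term frequency can be computed from a vector of tokens with base R:

tokens <- c("text", "mining", "text", "analysis")

# Term frequency: count of each word divided by the total word count
tf <- table(tokens) / length(tokens)
tf
# analysis   mining     text
#     0.25     0.25     0.50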

Tidy Text Principles:

In tidy text format, each row holds a single token (a word or term), typically alongside an identifier for the document it came from, with one variable per column. This one-token-per-row structure organizes text data neatly for analysis with tidy tools.
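
A minimal sketch of converting a data frame of documents into the tidy one-token-per-row format with the tidytext package:

library(tidytext)

docs <- data.frame(doc = c(1, 2),
                   text = c("Tidy text is simple", "One token per row"))

# unnest_tokens(data, output_column, input_column) tokenizes and reshapes
unnest_tokens(docs, word, text)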

Sentiment Analysis:

Sentiment analysis is also referred to as emotion AI, opinion mining, or subjectivity analysis. It is useful in understanding a speaker's or writer's attitude toward a particular topic. The attitude could be their judgment or evaluation, their affective state (the writer or speaker's emotional state), their reaction to an event, incident, interaction, document, and so on, or their intended emotional communication. It uses NLP, text analysis, computational linguistics, and biometrics.
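
As a small illustration, tidytext ships with the Bing sentiment lexicon, which labels words as positive or negative and can be joined against tokenized text:

library(tidytext)
library(dplyr)

review <- data.frame(text = "The food was great but the service was poor")
tokens <- unnest_tokens(review, word, text)

# Keep only the tokens found in the lexicon, e.g. "great" (positive), "poor" (negative)
inner_join(tokens, get_sentiments("bing"), by = "word")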

Topic Modeling:

A type of statistical model that finds the abstract "topics" that are contained in a collection of documents. Similar to clustering on numerical data, this unsupervised document classification method locates naturally occurring groups of objects even when we are unsure of our exact search parameters.

Apriori algorithm:

The Apriori algorithm is a seminal algorithm for mining frequent itemsets. It iteratively generates candidate itemsets and checks their support in the transaction database. Its main challenge is computational complexity, especially for large datasets with many items. Hash-based techniques, transaction reduction, partitioning, sampling, and dynamic itemset counting are some methods used to improve its efficiency: hash-based techniques reduce the number of candidate itemsets generated and checked, transaction reduction shrinks the number of transactions scanned in later iterations, and partitioning divides the database into smaller partitions that can be mined independently.
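
Apriori is not part of the tm text-mining workflow, but as a brief sketch, frequent itemsets can be mined in R with the arules package:

library(arules)

# Toy transaction data: each element is one "basket" of items
baskets <- list(c("bread", "milk"),
                c("bread", "butter", "milk"),
                c("milk", "butter"))
trans <- as(baskets, "transactions")

# Mine itemsets that appear in at least 50% of the transactions
itemsets <- apriori(trans,
                    parameter = list(supp = 0.5, target = "frequent itemsets"))
inspect(itemsets)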

Text Mining in R

R provides the "tm" package for text mining, which supplies a framework for text-mining applications within R. The main structure for managing documents in tm is the corpus.

A corpus represents a collection of text documents in R. Corpus is an abstract concept, implemented in tm as a virtual base class with several concrete implementations. The most common is VCorpus (Volatile Corpus), which creates a corpus held entirely in memory: when the R object is destroyed, the whole corpus is lost.

Here is a basic syntax of the VCorpus function:

VCorpus(DataframeSource(dataframe))

Where:

  • dataframe is a data frame containing the text documents; tm expects a doc_id column followed by a text column.

  • DataframeSource(dataframe) creates a Source object from the data frame.
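
A minimal usage example:

library(tm)

df <- data.frame(doc_id = c("doc1", "doc2"),
                 text = c("First document.", "Second document."),
                 stringsAsFactors = FALSE)

corp <- VCorpus(DataframeSource(df))
inspect(corp)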

Steps Needed:

Text Mining Workflow in R with NLP

  • Install and Load Libraries:
  1. tm: For text mining operations.

  2. SnowballC: For text stemming.

  3. wordcloud: For creating word clouds.

  4. tokenizers: For tokenization.

  5. tidytext: For sentiment analysis (optional).

  6. topicmodels: For topic modeling (optional).

install.packages(c("tm", "SnowballC", "wordcloud", "tokenizers", "tidytext", "topicmodels"))
library(tm)
library(SnowballC)
library(wordcloud)
library(tokenizers)
library(tidytext)
  • Read and preprocess text data:

Create a sample dataset with text documents.

# Sample data with text documents
documents <- c("Text mining is the process of deriving meaningful information from natural language text.",
               "R is a programming language and free software environment for statistical computing and graphics.",
               "The tm package provides functions for text mining in R.")

# Create a corpus from the documents
corpus <- Corpus(VectorSource(documents))
  • Text Cleaning:
  1. Convert text to lowercase.

  2. Remove special characters, numbers, and punctuation.

  3. Remove extra white spaces.

# Text cleaning
clean_corpus <- tm_map(corpus, content_transformer(tolower))
clean_corpus <- tm_map(clean_corpus, removeNumbers)
clean_corpus <- tm_map(clean_corpus, removePunctuation)
clean_corpus <- tm_map(clean_corpus, stripWhitespace)
  • Tokenization:
# Tokenization: split each document into word tokens, then rejoin them
# so that later tm steps (stemming, DTM construction) receive plain text
clean_corpus <- tm_map(clean_corpus, content_transformer(function(x) paste(unlist(tokenize_words(x)), collapse = " ")))
  • Stemming: Reduce each word to its root form.
# Stemming
clean_corpus <- tm_map(clean_corpus, stemDocument)
  • Creating a Document-Term Matrix (DTM): Convert the corpus into a document-term matrix.
# Create a document-term matrix
dtm <- DocumentTermMatrix(clean_corpus)
  • Exploratory Data Analysis (EDA): Get word frequencies and visualize using a word cloud.
# Get word frequencies
word_freq <- colSums(as.matrix(dtm))

# Create a word cloud
wordcloud(names(word_freq), word_freq, min.freq = 1, scale = c(3, 0.5), colors = brewer.pal(8, "Dark2"))
  • Sentiment Analysis (Optional): Determine the sentiment of the text.
# Sample sentiment lexicon
sentiment_lexicon <- data.frame(word = c("good", "bad", "great", "poor"),
                                sentiment = c("positive", "negative", "positive", "negative"))
sentiment_analysis <- lapply(clean_corpus, function(x) {
  # unnest_tokens(data, output, input): tokenize the text into a 'word' column
  tokens <- tidytext::unnest_tokens(data.frame(text = as.character(x)), word, text)
  dplyr::inner_join(tokens, sentiment_lexicon, by = "word")
})
  • Topic Modeling (Optional): Identify topics within the text documents.
# Topic modeling with Latent Dirichlet Allocation (LDA)
# Note: LDA() comes from the 'topicmodels' package, not 'tm'
library(topicmodels)
topics <- LDA(dtm, k = 2, control = list(seed = 1234))
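To see which words characterize each fitted topic, the terms() helper from the topicmodels package lists the most probable terms per topic:
# Show the five most probable terms in each topic
terms(topics, 5)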
  • Display the results:
# Display the results
print(clean_corpus)
print(dtm)
print(word_freq)

Output:

[Screenshot: output in the RStudio console]

[Screenshot: output in the RStudio environment]

These techniques and tools provide a strong basis for advanced text analytics, including information retrieval, topic modeling, and sentiment analysis.

Text mining in R with NLP techniques is an effective way to extract useful insights from unstructured text data. It enables researchers, data scientists, and analysts to identify patterns, trends, and hidden information within large amounts of text, resulting in better decision-making and insights across a wide range of industries and applications.