R text-processing advice for data to be trained and tested with Naive Bayes

I have started using R to apply machine learning to sentiment analysis. Currently I have a CSV of tweets, each labelled positive or negative, as my corpus. The corpus is cleaned, turned into a document-term matrix, filtered down to frequent terms, and then used to train and test a Naive Bayes (NB) classifier.

CODE

library(tm)          # corpus handling and document-term matrices
library(RTextTools)  # loaded but not used below
library(e1071)       # naiveBayes()
library(dplyr)       # glimpse() and %>%
library(caret)       # confusionMatrix()
library(SnowballC)   # stemming support for tm (not applied below)


df<- read.csv("C:/Users/Suki/Projects/RProject/tweetsFixed.csv", stringsAsFactors = FALSE)

set.seed(1)
df <- df[sample(nrow(df)), ]  # shuffle rows (one shuffle is enough)
glimpse(df)
df$class <- as.factor(df$class)

corpus <- Corpus(VectorSource(df$text))
corpus
inspect(corpus[1:3])

corpus.clean <- corpus %>%
  tm_map(content_transformer(tolower)) %>% 
  tm_map(removePunctuation) %>%
  tm_map(removeNumbers) %>%
  # stopwords("en") only needs removing once; "stopword.txt" here is stripped as a
  # literal token -- removeWords does not read files; "unâ€" targets a
  # mangled-encoding token in the tweets
  tm_map(removeWords, c(stopwords("en"), "rt", "handle", "url", "unâ€")) %>%
  tm_map(stripWhitespace)
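One likely bug above: inside `removeWords`, the string "stopword.txt" is removed as a literal token, not read as a file. If the intent was to pull extra stopwords from a file, read it into a character vector first. A sketch, assuming one word per line (a temp file stands in for stopword.txt here):

```r
# Hypothetical: read custom stopwords from a file (one word per line) so the
# words themselves, not the filename, are passed to removeWords
stop_file <- tempfile()                    # stands in for "stopword.txt"
writeLines(c("rt", "handle", "url"), stop_file)
custom_stops <- readLines(stop_file)
custom_stops  # character vector for tm_map(corpus.clean, removeWords, custom_stops)
```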



# Spot-check the cleaning on a single tweet
as.character(corpus.clean[[229]])
df$text[229]


dtm <- DocumentTermMatrix(corpus.clean)
inspect(dtm[40:50, 10:15])

df.train <- df[1:200,]
df.test <- df[201:400,]

dtm.train <- dtm[1:200,]
dtm.test <- dtm[201:400,]

corpus.clean.train <- corpus.clean[1:200]
corpus.clean.test <- corpus.clean[201:400]
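Note that the hard-coded 1:200 / 201:400 indices assume the file has exactly 400 rows; deriving the split from the actual row count is safer. A sketch, where `n` stands in for `nrow(df)`:

```r
# Hypothetical: 50/50 split driven by the row count instead of magic numbers
n <- 400                              # stands in for nrow(df)
train_idx <- seq_len(floor(0.5 * n))
test_idx  <- setdiff(seq_len(n), train_idx)
c(train = length(train_idx), test = length(test_idx))  # 200 each when n = 400
```

The same indices would then be used for `df`, `dtm`, and `corpus.clean` so all three stay aligned.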

dim(dtm.train)
fivefreq <- findFreqTerms(dtm.train, 5)  # terms occurring at least 5 times in the training data
length(fivefreq)
dtm.train.nb <- DocumentTermMatrix(corpus.clean.train, control = list(dictionary = fivefreq))
dim(dtm.train.nb)
dtm.test.nb <- DocumentTermMatrix(corpus.clean.test, control = list(dictionary = fivefreq))
dim(dtm.test.nb)
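`findFreqTerms(dtm.train, 5)` keeps terms whose total count across the training documents is at least 5, and using it as the `dictionary` for both DTMs restricts train and test to the same vocabulary. The selection logic, sketched on a plain matrix (toy numbers, not your data):

```r
# Toy stand-in for a document-term matrix: rows = documents, columns = terms
counts <- matrix(c(6, 1, 0,
                   2, 2, 0,
                   0, 1, 7),
                 nrow = 3, byrow = TRUE,
                 dimnames = list(NULL, c("good", "bad", "great")))
# Keep terms with overall frequency >= 5, as findFreqTerms(dtm, 5) does
freq_terms <- colnames(counts)[colSums(counts) >= 5]
freq_terms  # "good" (8 occurrences) and "great" (7) survive; "bad" (4) is dropped
```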

# Convert term counts to presence/absence factors ("No"/"Yes")
convert_count <- function(x) {
  y <- ifelse(x > 0, 1, 0)
  y <- factor(y, levels = c(0, 1), labels = c("No", "Yes"))
  y
}
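Since `naiveBayes()` treats factor columns as categorical, this presence/absence conversion effectively gives a Bernoulli-style model over term occurrence rather than one over raw counts. A quick check of the helper on a toy vector (the function is repeated so the snippet stands alone):

```r
# Zero counts map to "No"; any positive count maps to "Yes"
convert_count <- function(x) {
  y <- ifelse(x > 0, 1, 0)
  y <- factor(y, levels = c(0, 1), labels = c("No", "Yes"))
  y
}
as.character(convert_count(c(0, 1, 4)))  # "No" "Yes" "Yes"
```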

trainNB <- apply(dtm.train.nb, 2, convert_count)
testNB <- apply(dtm.test.nb, 2, convert_count)

system.time( classifier <- naiveBayes(trainNB, df.train$class, laplace = 1) )

system.time( pred <- predict(classifier, newdata=testNB) )
table("Predictions"= pred,  "Actual" = df.test$class )

conf.mat <- confusionMatrix(pred, df.test$class)
conf.mat
conf.mat$byClass
conf.mat$overall
conf.mat$overall['Accuracy']
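`confusionMatrix()` reports accuracy as correct predictions over the total; recomputing it by hand from a toy table (hypothetical counts, not your results) makes the relationship to the `table()` output explicit:

```r
# Hypothetical confusion table: the diagonal holds the correct predictions
tab <- matrix(c(80, 20,
                15, 85),
              nrow = 2, byrow = TRUE,
              dimnames = list(Predictions = c("negative", "positive"),
                              Actual      = c("negative", "positive")))
sum(diag(tab)) / sum(tab)  # (80 + 85) / 200 = 0.825
```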

Am I right in believing that NLP text processing has been applied in this system through the use of the document-term matrix and frequent-term filtering, after the tweets have been cleaned? If not, can I get some advice on which text-processing approaches I should take, or what further text processing I should apply in this system?

Thanks for your time and help.