Size mismatch between input and target samples when training a neural network on TF-IDF features

I am using a neural network to classify IMDB sample reviews as positive or negative feedback. For vectorization I use scikit-learn's TfidfVectorizer.
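For context, the four DataFrames used below come from a loading step roughly like this (a rough sketch, not my exact code: the pd.read_csv calls and file names are placeholders; only the Text and Sentiment columns matter for what follows):

import pandas as pd

# Hypothetical loading step: each frame is assumed to hold one review per row,
# with the review text in "Text" and the label (0 = negative, 1 = positive) in "Sentiment".
positive_train_data = pd.read_csv("pos_train.csv")   # placeholder file name
negative_train_data = pd.read_csv("neg_train.csv")   # placeholder file name
positive_test_data = pd.read_csv("pos_test.csv")     # placeholder file name
negative_test_data = pd.read_csv("neg_test.csv")     # placeholder file name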

But I get an error because the number of input samples does not match the number of target samples:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers
from keras.callbacks import EarlyStopping

data = pd.concat([positive_train_data, negative_train_data, positive_test_data, negative_test_data], ignore_index=True)
x = data.Text
y = data.Sentiment

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.20, random_state = 0)
print( "Train set has total {0} entries with {1:.2f}% negative, {2:.2f}% positive".format(len(x_train),
                                                                             (len(x_train[y_train == 0]) / (len(x_train)*1.))*100,
                                                                            (len(x_train[y_train == 1]) / (len(x_train)*1.))*100))

print ("Test set has total {0} entries with {1:.2f}% negative, {2:.2f}% positive".format(len(x_test),
                                                                             (len(x_test[y_test == 0]) / (len(x_test)*1.))*100,
                                                                            (len(x_test[y_test == 1]) / (len(x_test)*1.))*100))


# Fit the TF-IDF vectorizer on the training text only, then transform both splits
tvec1 = TfidfVectorizer(max_features=10000, ngram_range=(1, 3))
tvec1.fit(x_train)
x_train_tfidf = tvec1.transform(x_train)
x_test_tfidf = tvec1.transform(x_test).toarray()


# Simple feed-forward classifier on top of the TF-IDF features
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=100000))
model.add(Dense(1, activation='sigmoid'))
model.summary()
optimiz = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='binary_crossentropy', optimizer=optimiz, metrics=['accuracy'])
hist = model.fit(x_train_tfidf, y_train,
                 validation_data=(x_test_tfidf, y_test),
                 epochs=5,
                 callbacks=[EarlyStopping(patience=2, monitor='val_loss')],
                 batch_size=256)

Error:

Input arrays should have the same number of samples as target arrays. Found 25000 input samples and 24082 target samples
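A shape check along these lines (a minimal sketch reusing the variables above; the isnull check is only a guess at a common cause) should show which arrays disagree:

# Compare the number of feature rows passed to model.fit() with the number of labels;
# the error says these two counts differ (25000 vs 24082).
print(x_train_tfidf.shape, y_train.shape)
print(x_test_tfidf.shape, y_test.shape)

# Check whether some rows have a missing Sentiment label, since rows dropped on one
# side only would make the counts drift apart.
print(data.Sentiment.isnull().sum())

Why do the input and target sample counts differ, and how can I make them match?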