How to use an Embedding layer instead of one-hot encoding input before passing it to fit

I am able to train my seq2seq model when one-hot encoded input is passed to the fit function. How would I achieve the same thing if the input is not one-hot encoded?

The following code works:

from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

def seqModel():

    latent_dim = 256  # Latent dimensionality of the encoding space.

    encoder_inputs = Input(shape=(None, num_encoder_tokens))
    decoder_inputs = Input(shape=(None, num_decoder_tokens))

    encoder = LSTM(latent_dim, return_state=True)
    encoder_outputs, state_h, state_c = encoder(encoder_inputs)
    encoder_states = [state_h, state_c]

    decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
    decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
    decoder_dense = Dense(num_decoder_tokens, activation='softmax')
    decoder_outputs = decoder_dense(decoder_outputs)

    model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
    return model

def train(data):
    model = seqModel()
    # compile the model and prepare the data, then:
    model.fit([to_one_hot(input_texts, num_encoder_tokens),
               to_one_hot(target_text, num_decoder_tokens)],
              outputs, batch_size=3, epochs=5)
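For context, `to_one_hot` is assumed to behave roughly like this sketch (the actual helper isn't shown above; it takes equal-length integer-encoded sequences and expands each token id into a one-hot vector):

```python
import numpy as np

def to_one_hot(sequences, vocab_size):
    # sequences: list of integer-encoded sequences, all of the same length
    batch = np.zeros((len(sequences), len(sequences[0]), vocab_size), dtype="float32")
    for i, seq in enumerate(sequences):
        for t, token in enumerate(seq):
            batch[i, t, token] = 1.0  # mark the position of each token id
    return batch
```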

I am asked not to one-hot encode in the train method. How would I do it in the seqModel method? Is an Embedding layer the right way to handle this instead of one-hot encoding?
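This is a sketch of what I think the Embedding variant of seqModel would look like. Note that Embedding does not one-hot encode; it maps integer token ids directly to learned dense vectors, so the inputs would become 2-D integer tensors instead of 3-D one-hot tensors. The vocab sizes here are placeholders:

```python
from tensorflow.keras.layers import Input, LSTM, Dense, Embedding
from tensorflow.keras.models import Model

latent_dim = 256
num_encoder_tokens = 71  # placeholder vocabulary sizes
num_decoder_tokens = 93

# Inputs are now sequences of integer token ids, shape (batch, timesteps)
encoder_inputs = Input(shape=(None,))
decoder_inputs = Input(shape=(None,))

# Embedding replaces the one-hot step: each id becomes a dense vector
enc_emb = Embedding(num_encoder_tokens, latent_dim)(encoder_inputs)
dec_emb = Embedding(num_decoder_tokens, latent_dim)(decoder_inputs)

encoder = LSTM(latent_dim, return_state=True)
_, state_h, state_c = encoder(enc_emb)
encoder_states = [state_h, state_c]

decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(dec_emb, initial_state=encoder_states)
decoder_outputs = Dense(num_decoder_tokens, activation='softmax')(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
```

With this version, fit would be called with the raw integer sequences and the loss switched to sparse_categorical_crossentropy (or the targets kept one-hot, since only the inputs are embedded). Is this the right direction?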