How do we compile a multiple-output Keras model?
As the title says, how do we compile a Keras functional model with multiple outputs?
# Multiple Outputs
from keras.utils import plot_model
from keras.models import Model
from keras.layers import Input
from keras.layers import Dense
from keras.layers.recurrent import LSTM
from keras.layers.wrappers import TimeDistributed
# input layer
visible = Input(shape=(4,2))
# feature extraction
extract = LSTM(10, return_sequences=True)(visible)
# classification output
class11 = LSTM(10)(extract)
class12 = Dense(8, activation='relu')(class11)
class13 = Dense(8, activation='relu')(class12)
output1 = Dense(9, activation='softmax')(class13)
# sequence output
output2 = TimeDistributed(Dense(1, activation='tanh'))(extract)
# output
model = Model(inputs=visible, outputs=[output1, output2])
# summarize layers
print(model.summary())
There are two output branches with two different types of output values. The first output is a dense layer with a softmax activation function, and the other is a time-distributed layer with tanh activation.
How should we compile this model? I tried this way:
model.compile(optimizer=['rmsprop', 'adam'],
              loss=['categorical_crossentropy', 'mse'],
              metrics=['accuracy'])
But it's giving this error:
ValueError: ('Could not interpret optimizer identifier:', ['rmsprop', 'adam'])
1 answer

The problem is that you are trying to set two separate optimizers, which is not possible in Keras. A model is trained with a single optimizer; per-output settings are only supported for loss and metrics. You need to choose either rmsprop or adam as the main optimizer.
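A minimal sketch of a compile call that should work for the two-output model from the question: one optimizer for the whole model, and one loss per output, listed in output order (the loss_weights values here are purely illustrative):

```python
from keras.models import Model
from keras.layers import Input, Dense, LSTM, TimeDistributed

# same two-output architecture as in the question
visible = Input(shape=(4, 2))
extract = LSTM(10, return_sequences=True)(visible)
class11 = LSTM(10)(extract)
class12 = Dense(8, activation='relu')(class11)
class13 = Dense(8, activation='relu')(class12)
output1 = Dense(9, activation='softmax')(class13)
output2 = TimeDistributed(Dense(1, activation='tanh'))(extract)
model = Model(inputs=visible, outputs=[output1, output2])

# one optimizer; losses (and optional loss_weights) are per output
model.compile(optimizer='rmsprop',
              loss=['categorical_crossentropy', 'mse'],
              loss_weights=[1.0, 0.5],  # illustrative weighting
              metrics=['accuracy'])
```

The total loss minimized by the single optimizer is the (weighted) sum of the per-output losses.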
See also questions close to this topic

Python to append data from a text file to an xlsx file
My text file contains data that has to be appended to an Excel file.
My Main.txt:
Sample1 Sample2 Sample3 Sample4 Sample5
My Excel file contains:

A        numbers
Sample1  1
Sample2  1
Sample3  3

A and numbers are columns in Excel. Now I need to append the extra content from the text file to the Excel sheet (a copy operation).
My output Excel file should look like:

A        numbers
Sample1  1
Sample2  1
Sample3  3
Sample4  20
Sample5  21

So whenever I update my text file, the Python script should copy the contents and append them to the A column. I also need to update the numbers column when data is appended to the A column.
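A minimal sketch of the copy/append step with openpyxl, assuming hypothetical file names Main.txt and data.xlsx, a header row `A | numbers`, and a placeholder value for the new numbers (the question doesn't say where 20 and 21 come from):

```python
from openpyxl import load_workbook

def append_missing_samples(txt_path, xlsx_path):
    # read whitespace-separated sample names from the text file
    with open(txt_path) as f:
        samples = f.read().split()
    wb = load_workbook(xlsx_path)
    ws = wb.active
    # names already present in column A (skipping the header row)
    existing = {row[0].value for row in ws.iter_rows(min_row=2)}
    for name in samples:
        if name not in existing:
            ws.append([name, 0])  # 0 is a placeholder for the numbers column
    wb.save(xlsx_path)
```

Re-running the script after editing Main.txt appends only the rows that are not yet in the sheet.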

How to change inches to meters?
Here's the code; how do I change inches to meters?
box = cv2.boxPoints(marker)
cv2.drawContours(image, [box.astype(int)], -1, (0, 255, 0), 2)
cv2.putText(image, "%.2fft" % (inches / 12),
            (image.shape[1] - 200, image.shape[0] - 20),
            cv2.FONT_HERSHEY_SIMPLEX, 2.0, (0, 255, 0), 3)
cv2.imshow("image", image)
cv2.waitKey(0)
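Since the code prints feet (`inches / 12`), converting to meters is just a different constant: one inch is defined as exactly 0.0254 m. A minimal sketch:

```python
METERS_PER_INCH = 0.0254  # 1 inch is exactly 0.0254 m by definition

def inches_to_meters(inches):
    return inches * METERS_PER_INCH

# the putText format string would then become, e.g.:
# "%.2fm" % inches_to_meters(inches)
```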

What is better: an RFID reader built with a Raspberry Pi + MFRC522, or an ACR122U reader?
Good day, fellas. I'm currently arguing this with my professors. I am creating a web-based attendance system using RFID, and I proposed to my professor a custom-built scanner using:
1. Raspberry Pi 3 Model B
2. MFRC522 reader/writer chip
My reason is that the Raspberry Pi is also a computer (of sorts), which makes it easy to customize the system application when validating data against the database and when moving data from the cards to the database.
Then my professor said it's better to use a ready-made reader, so I searched and found the ACR122U reader/writer. But based on the reviews I have read, its SDK is not that good and it is not that easy to use for passing data. So I want to ask you guys to help me reason out my problem: which of the two is better?
Cannot run load_model (keras) on raspberry pi 3
I'm facing an error when trying to load the model 'small_model.h5'. The code is as follows:
small_model = load_model('small_model.h5')
The error message is as follows:
AttributeError: module 'tensorflow' has no attribute 'global_variables'
I've tried to upgrade tensorflow and keras on the board, but it didn't help. I've also tried another way of reading the file by using
small_model = 'small_model.h5'
small_modl = h5py.File(small_model, 'r')
and that didn't help either.
Any idea how to solve this? Thank you in advance.
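`tf.global_variables` was removed in TensorFlow 2.x, and standalone Keras releases that still call it only work against TF 1.x, so this usually points to a version mismatch rather than a corrupt file. A hypothetical helper sketching that check (the version boundary is the only assumption):

```python
def tf1_style_keras_compatible(tf_version):
    """Return True if a Keras build that calls tf.global_variables
    (a TF 1.x-only API) can run on this TensorFlow version."""
    major = int(tf_version.split(".")[0])
    return major < 2

# On the Pi you could compare:
#   import tensorflow as tf
#   tf1_style_keras_compatible(tf.__version__)
# If it returns False, either downgrade TensorFlow to 1.x or load the
# model with tf.keras instead:
#   from tensorflow.keras.models import load_model
```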

Second derivative in Keras
For a custom loss function for a NN I need to use the equation . The function u, given a pair (t, x), is the output of my NN. The problem is that I'm stuck on how to compute the second derivative using K.gradients (K being the TensorFlow backend):

def custom_loss(input_tensor, output_tensor):
    def loss(y_true, y_pred):
        # so far, I can only get this right, naturally:
        gradient = K.gradients(output_tensor, input_tensor)
        # here I'm failing badly:
        # d_t = K.gradients(output_tensor, input_tensor)[0]
        # dd_x = K.gradients(K.gradients(output_tensor, input_tensor),
        #                    input_tensor[1])
        return gradient  # obviously not useful, just for it to work
    return loss

All my attempts, based on Input(shape=(2,)), were variations of the commented lines in the snippet above, mainly trying to find the right indexing of the resulting tensor. Sure enough, I lack knowledge of how exactly tensors work. By the way, I know that in TensorFlow itself I could simply use tf.hessians, but I noticed it's just not present when using TF as a backend.
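The usual trick is that a second derivative is just the gradient of the first gradient, i.e. in graph-mode Keras nesting two calls: `K.gradients(K.gradients(u, x)[0], x)`. With TensorFlow 2.x the same idea is written with nested GradientTapes; a small sketch on u(x) = x³ (a stand-in for the network output, not the asker's actual PDE term), whose second derivative is 6x:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        u = x ** 3                  # stand-in for the network output u(t, x)
    du_dx = inner.gradient(u, x)    # first derivative: 3x^2 = 27 at x = 3
d2u_dx2 = outer.gradient(du_dx, x)  # second derivative: 6x = 18 at x = 3
```

The outer tape records the computation performed by the inner tape's gradient call, which is exactly the "gradient of a gradient" nesting the commented lines in the question were aiming at.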
Load Alexnet weights with keras=1.1.0 using theano backend
I encounter a problem, "Exception: You are trying to load a weight file containing 11 layers into a model with 8 layers.", when I load AlexNet weights with keras = 1.1.0 using the theano backend. The code is:

from keras.models import Model
from keras.layers import Flatten, Dense, Dropout, Input
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras import backend as K
from keras.utils.layer_utils import convert_all_kernels_in_model

def alexnet_model(weights_path=None):
    K.set_image_dim_ordering('th')
    inputs = Input(shape=(3, 227, 227))
    x = Conv2D(96, 11, 11, subsample=(4, 4), activation='relu', border_mode='valid', name='conv1')(inputs)
    x = MaxPooling2D((3, 3), strides=(2, 2), name='pool1')(x)
    x = Conv2D(256, 5, 5, subsample=(1, 1), activation='relu', border_mode='same', name='conv2')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), name='pool2')(x)
    x = Conv2D(384, 3, 3, subsample=(1, 1), activation='relu', border_mode='same', name='conv3')(x)
    x = Conv2D(384, 3, 3, subsample=(1, 1), activation='relu', border_mode='same', name='conv4')(x)
    x = Conv2D(256, 3, 3, subsample=(1, 1), activation='relu', border_mode='same', name='conv5')(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), name='pool5')(x)
    x = Flatten(name='flatten')(x)
    x = Dense(4096, activation='relu', name='fc1')(x)
    x = Dropout(0.5)(x)
    x = Dense(4096, activation='relu', name='fc2')(x)
    x = Dropout(0.5)(x)
    x = Dense(1000, activation='softmax', name='predictions')(x)
    model = Model(inputs, x)
    weights_path = 'alexnet_weights.h5'
    model.load_weights(weights_path)
    convert_all_kernels_in_model(model)
    return model

if "__main__" == __name__:
    model = alexnet_model(weights_path='alexnet_weights.h5')

The file 'alexnet_weights.h5' was downloaded from 'http://files.heuritech.com/weights/alexnet_weights.h5', and in my keras.json file 'backend' is 'theano' and 'image_dim_ordering' is 'th'. Doesn't my AlexNet model have 11 layers (5 convolutional layers, 3 pooling layers and 3 fully connected layers)? How can I solve this error? Thanks a lot in advance.
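The mismatch suggests the weight file stores layers the single-branch model above doesn't define (the original AlexNet splits some convolutions into grouped branches, which adds extra layers). One way to check is to list the layer names recorded in the HDF5 file and compare them with model.layers; a sketch with h5py, using the file path from the question:

```python
import h5py

def saved_layer_names(weights_path):
    """List the layer names a Keras HDF5 weight file claims to contain."""
    with h5py.File(weights_path, 'r') as f:
        # Keras writes a 'layer_names' attribute; fall back to the
        # top-level groups if it is absent
        return [n.decode() if isinstance(n, bytes) else n
                for n in f.attrs.get('layer_names', list(f.keys()))]

# e.g. saved_layer_names('alexnet_weights.h5') versus
# [l.name for l in model.layers] shows exactly which layers differ
```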

Custom Layer behaves differently when inside keras model
I'm working on a Permutational Equivariant Layer for Keras based on this paper https://arxiv.org/pdf/1612.04530.pdf and previous work by Josef Ondrej found here.
The layer itself is a Keras Model consisting of multiple layers:
from keras import backend as K
from keras import losses
from keras.layers import Average, Add, Concatenate, Maximum, Input, Dense, Lambda
from keras.models import Model
from keras.engine.topology import Layer

def PermutationEquivariant(input_shape, layer_size, tuple_dim = 2, reduce_fun = "sum", dense_params = {}):
    """
    Implements a permutation equivariant layer.
    Each batch in our data consists of `input_shape[0]` observations
    each with `input_shape[1]` features.

    Args:
        input_shape -- A pair of `int` -- (number of observations in one batch x number of features of each observation). The batch dimension is not included.
        layer_size -- `int`. Size of dense layer applied to each tuple of observations.
        tuple_dim -- A `int`, how many observations to put in one tuple.
        reduce_fun -- A `string`, type of function to "average" over all tuples starting with the same index.

    Returns:
        g -- A keras Model -- the permutation equivariant layer.
        It consists of one tuple layer that creates all possible `tuple_dim`-tuples of observations, sorted on an axis along which the first index is constant.
        The same dense layer is applied on every tuple and then some symmetric pooling function is applied across all tuples with the same first index (for example mean or maximum).
    """
    inputs = Input(shape=input_shape)  ## input_shape: batch_size x row x col

    ## SeperatedTuples layer
    x = SeperatedTuples(tuple_dim, input_shape = input_shape)(inputs)  ## out_shape: batch_size x row x row ** (tuple_dim-1) x tuple_dim*col

    ## Dense layer
    # Use the same dense layer for each tuple
    dense_input_shape = (tuple_dim*input_shape[1], )  # batch_size x tuple_dim*col
    dense_layer = Dense(input_shape = dense_input_shape, units = layer_size, **dense_params)

    # iterate through rows
    x_i_list = []
    for i in range(input_shape[0]):
        xi_j_list = []
        # applying the dense layer to each tuple where first index equals i
        # here we could also use a 1x1 convolution. Instead of reusing
        # the dense layer for each tuple, we would be reusing the kernels
        for j in range(input_shape[0] ** (tuple_dim-1)):
            input_ij = Lambda(lambda x : x[:, i, j, :], output_shape = (tuple_dim*input_shape[1],))(x)  ## out_shape: batch_size x tuple_dim*col
            xi_j_list += [dense_layer(input_ij)]  ## xi_j_list shape: row x batch_size x layer_size

        ## Pooling layer
        # Pooling the list of the dense outputs of all the tuples where first index equals i to out_shape: batch_size x layer_size
        # note that axis=0 because in previous step row-axis comes before batch_size-axis
        # Use Lambda wrapper to preserve the output being a Keras tensor
        if reduce_fun == "mean":
            pooling_layer = Average(axis = 1)
            # pooling_layer = Lambda(lambda x : K.mean(x, axis = 0))
        elif reduce_fun == "max":
            pooling_layer = Maximum()
            # pooling_layer = Lambda(lambda x : K.max(x, axis = 0))
        elif reduce_fun == "sum":
            pooling_layer = Add()
            # pooling_layer = Lambda(lambda x : K.sum(x, axis = 0))
        else:
            raise ValueError("Invalid value for argument `reduce_fun` provided. ")

        xi = pooling_layer(xi_j_list)  ## xi shape: batch_size x layer_size
        x_i_list += [xi]

    # Concatenate the results of each row
    x = Lambda(lambda x : K.stack(x, axis = 1), output_shape = (input_shape[0], layer_size))(x_i_list)  ## out_shape: batch_size x row x layer_size

    model = Model(inputs = inputs, outputs = x)
    return model


class SeperatedTuples(Layer):
    """
    Creates all possible tuples of rows of a 2D tensor, with an additional axis
    along which the first elements are constant.

    In the case of tuple_dim = 2, from one input batch:
        x_1, x_2, ... x_n,
    where x_i are rows of the tensor, it creates a 3D output tensor:
        [[x_1 - x_1, x_1 - x_2 ... x_1 - x_n],
         [x_2 - x_1, x_2 - x_2 ... x_2 - x_n],
         ...
         ... x_n - x_n]]

    Args:
        tuple_dim -- A `int`. Dimension of one tuple (i.e. how many rows from the input tensor to combine to create a row in the output tensor)
        input_shape -- A `tuple` of `int`. In the most frequent case where our data has shape (batch_size x num_rows x num_cols) this should be (num_rows x num_cols).
    """

    def __init__(self, tuple_dim = 2, **kwargs):
        self.tuple_dim = tuple_dim
        super(SeperatedTuples, self).__init__(**kwargs)

    def create_indices(self, n, k = 2):
        """
        Creates all integer valued coordinate k-tuples in a k-dimensional hypercube with edge size n.
        For example n = 4, k = 2 returns
        [[0, 0], [0, 1], [0, 2], [0, 3], [1, 0], [1, 1], [1, 2], [1, 3], ... [3, 0], [3, 1], [3, 2], [3, 3]]

        Args:
            n -- A `int`, edge size of the hypercube.
            k -- A `int`, dimension of the hypercube.

        Returns:
            indices_n_k -- A `list` of `list` of `int`. Each inner list represents coordinates of one integer point in the hypercube.
        """
        if k == 0:
            indices_n_k = [[]]
        else:
            indices_n_k_minus_1 = self.create_indices(n, k-1)
            indices_n_k = [[i] + indices_n_k_minus_1[c] for i in range(n) for c in range(n**(k-1))]
        return indices_n_k

    def create_seperated_indices(self, n, k = 2):
        """
        Same as create_indices, just that there is an additional axis along which the first value of the tuples is constant.
        For example n = 4, k = 2 returns
        [[[0, 0], [0, 1], [0, 2], [0, 3]],
         [[1, 0], [1, 1], [1, 2], [1, 3]],
         ...
         [[3, 0], [3, 1], [3, 2], [3, 3]]]
        shape: row x row x k
        """
        indices = self.create_indices(n, k)
        seperated_indices = [indices[i:i + n] for i in range(0, len(indices), n)]
        return seperated_indices

    def build(self, input_shape):
        # Create indexing tuple
        self.gathering_indices = self.create_seperated_indices(input_shape[-2], self.tuple_dim)
        super(SeperatedTuples, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        """
        input_dim : batch_size x rows x cols
        output_dim : batch_size x rows x rows ** (tuple_dim-1) x cols * tuple_dim
        """
        stacks_of_tuples = K.map_fn(
            fn = lambda z :  ## z shape: row x col
                K.stack(
                    [K.concatenate(
                        [K.reshape(
                            K.gather(z, i),  ## shape: tuple_dim x col
                            shape = (1, -1)
                            )  ## shape: 1 x tuple_dim*col
                         for i in indices  # i dim: tuple_dim, indices shape: row x tuple_dim
                         ],  ## shape: row x 1 x tuple_dim*col
                        axis = 0
                        )  ## shape: row x tuple_dim*col
                     for indices in self.gathering_indices  # gathering_indices shape: row x row x tuple_dim
                     ],
                    axis = 0),  ## shape: row x row x tuple_dim*col
            elems = x  ## shape: batch_size x row x col
        )  ## shape: batch_size x row x row x tuple_dim*col
        return stacks_of_tuples

    def compute_output_shape(self, input_shape):
        """
        input_shape: batch_size x rows x cols
        output_shape: batch_size x rows x rows ** (tuple_dim-1) x cols * tuple_dim
        """
        output_shape = list(input_shape)
        output_shape[-1] = output_shape[-1] * self.tuple_dim
        output_shape[-2] = output_shape[-2] ** self.tuple_dim
        return tuple(output_shape)
When testing the PermutationEquivariant layer all alone, everything seems to work fine (run 1). However, when I try to incorporate it in a larger model, the outputs just repeat themselves (run 2).

from keras.models import Model
from keras.layers import Input, Lambda
import numpy as np

# parameters for the PermutationEquivariant layer
input_shape = (2, 5)
dense_params = {'kernel_initializer': 'glorot_normal', 'bias_initializer': 'glorot_normal', 'activation': 'tanh'}
sample = np.random.random((1,) + input_shape)

# run 1: Using only the PermutationEquivariant layer as a model by itself seems to work
model_1 = PermutationEquivariant(input_shape=input_shape, layer_size=10, tuple_dim=2, reduce_fun="sum", dense_params = dense_params)
model_1.compile(optimizer='sgd', loss='categorical_crossentropy')
print("model_1: \n", model_1.predict(sample))
# model_1:
# [[[1.0494264  1.6808903  1.2861781  0.90004706 1.6178854
#    1.6686234  1.5724193  1.2454509  0.3730019  1.4580158 ]
#   [1.3904197  1.467866   1.0848606  1.2094728  1.6304723
#    1.6369174  1.4074551  0.58116794 0.292305   1.7162979 ]]]

# run 2: Incorporating the PermutationEquivariant layer inside another model makes the output constant along the first axis
inputs = Input(shape=input_shape)
x = PermutationEquivariant(input_shape=input_shape, layer_size=10, tuple_dim=2, reduce_fun="sum", dense_params = dense_params)(inputs)
model_2 = Model(inputs=inputs, outputs = x)
model_2.compile(optimizer='sgd', loss='categorical_crossentropy')
print("model_2: \n", model_2.predict(sample))
# model_2:
# [[[ 0.72823656 1.2213255  0.28404936 1.4711846  0.49544945
#     1.7930243  0.7502286  1.892496   1.675402   0.2252224 ]
#   [ 0.72823656 1.2213255  0.28404936 1.4711846  0.49544945
#     1.7930243  0.7502286  1.892496   1.675402   0.2252224 ]]]
I have tried theano and tensorflow as backends, both with the same result. Does anybody have an idea why it behaves differently when inside another model / what am I missing? I appreciate any help!

IndexError: LSTM with "stateful=True"
I tried to use an LSTM network with a reset callback for expected future predictions, as follows:
import numpy as np, pandas as pd, matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.callbacks import LambdaCallback
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler

raw = np.sin(2*np.pi*np.arange(1024)/float(1024/2)).reshape(-1,1)
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(raw)
data = pd.DataFrame(scaled)

window_size = 3
data_s = data.copy()
for i in range(window_size):
    data = pd.concat([data, data_s.shift(-(i+1))], axis = 1)
data.dropna(axis=0, inplace=True)

ds = data.values
n_rows = ds.shape[0]
ts = int(n_rows * 0.8)
train_data = ds[:ts,:]
test_data = ds[ts:,:]

train_X = train_data[:,:-1]
train_y = train_data[:,-1]
test_X = test_data[:,:-1]
test_y = test_data[:,-1]

print (train_X.shape)
print (train_y.shape)
print (test_X.shape)
print (test_y.shape)

batch_size = 3
n_feats = 1

train_X = train_X.reshape(train_X.shape[0], batch_size, n_feats)
test_X = test_X.reshape(test_X.shape[0], batch_size, n_feats)
print(train_X.shape, train_y.shape)

regressor = Sequential()
regressor.add(LSTM(units = 64, batch_input_shape=(1, batch_size, n_feats),
                   activation = 'sigmoid', stateful=True, return_sequences=False))
regressor.add(Dense(units = 1))
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')

resetCallback = LambdaCallback(on_epoch_begin=lambda epoch,logs: regressor.reset_states())
regressor.fit(train_X, train_y, batch_size=1, epochs = 1, callbacks=[resetCallback])

previous_inputs = test_X
regressor.reset_states()

previous_predictions = regressor.predict(previous_inputs, batch_size=1)
previous_predictions = scaler.inverse_transform(previous_predictions).reshape(-1)
test_y = scaler.inverse_transform(test_y.reshape(-1,1)).reshape(-1)
plt.plot(test_y, color = 'blue')
plt.plot(previous_predictions, color = 'red')
plt.show()

inputs = test_X
future_predicitons = regressor.predict(inputs, batch_size=1)
n_futures = 7

regressor.reset_states()
predictions = regressor.predict(previous_inputs, batch_size=1)
print (predictions)

future_predicts = []
currentStep = predictions[:,-1:,:]
for i in range(n_futures):
    currentStep = regressor.predict(currentStep, batch_size=1)
    future_predicts.append(currentStep)
regressor.reset_states()

future_predicts = np.array(future_predicts, batch_size=1).reshape(-1,1)
future_predicts = scaler.inverse_transform(future_predicts).reshape(-1)
all_predicts = np.concatenate([predicts, future_predicts])
plt.plot(all_predicts, color='red')
plt.show()
but I got the following error, and I could not figure out how to solve it for the expected predictions:

currentStep = predictions[:,-1:,:]
IndexError: too many indices for array
PS this code has been adapted from https://github.com/danmoller/TestRepo/blob/master/testing%20the%20blog%20code%20%20train%20and%20pred.ipynb

Keras - Error when checking target: Expected activation_2 to have shape (6,6,512) but got array with shape (192,192,3)
I've been working on a segmentation problem for many days, and after finally finding out how to properly read the dataset, I ran into this problem: "Error when checking target: expected activation_9 to have shape (6, 6, 512) but got array with shape (192, 192, 3)". I used the functional API, since I took the FCNN architecture from: https://github.com/divamgupta/image-segmentation-keras/blob/master/Models/FCN32.py.
It is slightly modified and adapted to my task (IMAGE_ORDERING = "channels_last" (TensorFlow backend) and image dimensions of 192x192). Can anyone please help me? Massive thanks in advance. The architecture below is the FCNN I am trying to implement for the purpose of segmentation.
Here is the architecture (after calling model.summary()).
[1]: https://i.stack.imgur.com/PiSOw.png
[2]: https://i.stack.imgur.com/iartm.png
[3]: The specific error is : https://i.stack.imgur.com/3TIjT.png
[4]: "Importing the dataset" function :https://i.stack.imgur.com/UY2FE.png
[5]: "Fit_Generator method calling": https://i.stack.imgur.com/VskLj.png

IMAGE_ORDERING = 'channels_last'

def getFCN32(nb_classes, input_height=192, input_width=192):
    img_input = Input(shape=(input_height, input_width, 3))

    # Block 1
    x = Convolution2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1', data_format=IMAGE_ORDERING)(img_input)
    x = BatchNormalization()(x)
    x = Convolution2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool', data_format=IMAGE_ORDERING)(x)
    f1 = x

    # Block 2
    x = Convolution2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = Convolution2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool', data_format=IMAGE_ORDERING)(x)
    f2 = x

    # Block 3
    x = Convolution2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = Convolution2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = Convolution2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool', data_format=IMAGE_ORDERING)(x)
    f3 = x

    # Block 4
    x = Convolution2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = Convolution2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = Convolution2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool', data_format=IMAGE_ORDERING)(x)
    f4 = x

    # Block 5
    x = Convolution2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = Convolution2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = Convolution2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3', data_format=IMAGE_ORDERING)(x)
    x = BatchNormalization()(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool', data_format=IMAGE_ORDERING)(x)
    f5 = x

    o = f5
    o = (Convolution2D(4096, (7, 7), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)
    o = (Convolution2D(4096, (1, 1), activation='relu', padding='same', data_format=IMAGE_ORDERING))(o)
    o = Dropout(0.5)(o)
    o = (Convolution2D(20, (1, 1), kernel_initializer='he_normal', data_format=IMAGE_ORDERING))(o)
    o = Convolution2DTranspose(20, kernel_size=(64, 64), strides=(32, 32), use_bias=False, data_format=IMAGE_ORDERING)(o)
    o_shape = Model(img_input, o).output_shape
    outputHeight = o_shape[1]
    print('Output Height is:', outputHeight)
    outputWidth = o_shape[2]
    print('Output Width is:', outputWidth)
    # https://keras.io/layers/core/#reshape
    o = (Reshape((-1, outputHeight*outputWidth)))(o)
    # https://keras.io/layers/core/#permute
    # o = (Permute((2, 1)))(o)
    print("Output shape before flatten is", o_shape)
    o = Flatten(name='flatten')(o)
    print("Output shape before softmax is", o_shape)
    o = (Activation('softmax'))(x)
    print("Output shape after softmax is", o_shape)
    model = Model(inputs=img_input, outputs=o)
    model.outputWidth = outputWidth
    model.outputHeight = outputHeight
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model
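The (6, 6, 512) in the error is exactly the block-5 feature-map shape: 192 halved by five 2x2 pooling stages, with 512 channels. A small sketch of that arithmetic, which also hints at the likely culprit: the final `Activation('softmax')` in the code above is applied to `x` (the pooled VGG features) rather than `o` (the upsampled logits), so the model's real output is the encoder feature map:

```python
# hypothetical helper: track the spatial size through the five 2x2
# max-pool stages of the VGG-style encoder above
def encoder_output_shape(input_size=192, n_pools=5, channels=512):
    size = input_size
    for _ in range(n_pools):
        size //= 2  # each MaxPooling2D((2, 2), strides=(2, 2)) halves H and W
    return (size, size, channels)
```

encoder_output_shape() gives (6, 6, 512), matching the shape in the error message, which is consistent with the softmax being connected to the wrong tensor.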