CNN model with both image data and pre-extracted features
I am trying to implement a CNN model to classify images into their corresponding classes. The images are of size 64x64x3. My dataset consists of 25,000 images, plus a CSV file containing 14 pre-extracted features (color, length, etc.).

I want to build a CNN model that makes use of both the image data and the pre-extracted features for training and prediction. How can I implement such a model in Keras?
I'm going to start out assuming that you can import the data without any issues, that you have already separated the x-data into images and features, and that you have the y-data as the label of each image.
You can use the Keras functional API to build a neural network that takes multiple inputs.
from keras.models import Model
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Input, concatenate

img = Input(shape=(64, 64, 3))
features = Input(shape=(14,))

# Convolutional branch for the image input
encoded = Conv2D(32, (3, 3), activation='relu')(img)
encoded = Conv2D(32, (3, 3), activation='relu')(encoded)
encoded = MaxPooling2D(pool_size=(2, 2))(encoded)
encoded = Flatten()(encoded)

# Merge the flattened image representation with the 14 tabular features.
# (An Embedding layer is not appropriate here: it expects integer indices,
# not continuous feature values.)
x = concatenate([encoded, features])
x = Dense(64, activation='relu')(x)
x = Dense(64, activation='relu')(x)
main_output = Dense(1, activation='sigmoid', name='main_output')(x)

model = Model(inputs=[img, features], outputs=[main_output])
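The fusion step itself is easy to sanity-check outside Keras. A minimal NumPy sketch (all shape numbers are illustrative: a batch of 8, 60x60x32 conv feature maps as produced by two valid 3x3 convolutions on a 64x64 image, and the 14 CSV features):

```python
import numpy as np

# Hypothetical batch of conv feature maps and CSV features
conv_maps = np.random.rand(8, 60, 60, 32)   # output of the conv branch
tabular = np.random.rand(8, 14)             # 14 pre-extracted features

# Flatten the maps per sample, then stack the tabular features alongside
flat = conv_maps.reshape(8, -1)             # one long vector per sample
fused = np.concatenate([flat, tabular], axis=1)
```

This is what Flatten followed by concatenate does inside the model: every sample ends up as one vector containing both the learned image representation and the hand-made features.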
See also questions close to this topic
Invocation of multiprocessing in Python 3.11 on Windows
I have a problem with multiprocessing in Python 3.11 on Windows.
Here is the script:
from multiprocessing import Process
import os
import time
import threading

def info(title):
    print(title)
    print('module name:', __name__)
    if hasattr(os, 'getppid'):  # only available on Unix
        print('parent process:', os.getppid())
    print('process id:', os.getpid())

def f(name):
    info('function f')
    print('--- hello', name)
    time.sleep(5)
    print('--- bye', name)

def some_func():
    print('Running some_func function')

def another_func():
    print('Running another_func function')

def main():
    print('Running main function of try_multi.py')
    print('-------------------------------------')
    some_func()

if __name__ == '__main__':
    info('main line')
    p1 = Process(target=f, args=('bob',))
    p2 = Process(target=f, args=('larry',))
    p1.start()
    p2.start()
    p1.join()
    p2.join()

another_func()
main()
Here is its output:
C:\Scripts> c:\Python31\python.exe try_multi.py
Running main function of try_multi.py
-------------------------------------
Running some_func function
main line
module name: __main__
process id: 12696
Running main function of try_multi.py
-------------------------------------
Running some_func function
Running another_func function
function f
module name: __main__
process id: 14568
--- hello bob
Running main function of try_multi.py
-------------------------------------
Running some_func function
Running another_func function
function f
module name: __main__
process id: 9336
--- hello larry
--- bye bob
--- bye larry
Running another_func function
The problem is that I expect only function "f" to run in the new processes, but it looks like a whole new instance of the parent script is invoked: both "some_func" and "another_func" run.
On Linux with Python 2.7.5 it works as expected:
$ python try_multi.py
Running main function of try_multi.py
-------------------------------------
Running some_func function
main line
('module name:', '__main__')
('parent process:', 1137)
('process id:', 1167)
function f
('module name:', '__main__')
('parent process:', 1167)
('process id:', 1168)
('--- hello', 'bob')
function f
('module name:', '__main__')
('parent process:', 1167)
('process id:', 1169)
('--- hello', 'larry')
('--- bye', 'bob')
('--- bye', 'larry')
Running another_func function
Can I make it work correctly on my platform (Python 3.11 on Windows)? Thank you.
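On Windows, multiprocessing starts children by spawning a fresh interpreter and re-importing the script, so every top-level statement runs again in each child (on Linux with Python 2.7 the child is forked, so nothing is re-executed). The fix is to keep all side effects under the __main__ guard. A minimal sketch (function and variable names are illustrative):

```python
from multiprocessing import Process, Queue

def work(name, q):
    # Only this function's body is meant to run in the child process
    q.put('hello %s' % name)

if __name__ == '__main__':
    # Everything with side effects stays under this guard, so a spawned
    # child that re-imports the module will not re-run it
    q = Queue()
    p = Process(target=work, args=('bob', q))
    p.start()
    p.join()
    print(q.get())
```

In the question's script, moving the trailing another_func() and main() calls inside the `if __name__ == '__main__':` block should give the single execution seen on Linux.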
Atom Editor - python-tools was unable to find your machine's python executable
I can't run Python in Atom; it gives me an odd message saying "python-tools was unable to find your machine's python executable."
At the same time, I can't find the python-tool-coffee file under the python-tool/lib directory.
Is there any solution for this?
Thanks in advance.
python suppress output of subprocess
- There is a hacking challenge.
- I have a password-protected file called "lock".
- Opening the file with the password returns a QR code.
- I need to assign the QR code to a variable.
- I don't want to display the QR code every time.
This works, but it shows output:
var = os.system("./lock %s" % password)
SO says I should use:
var = subprocess.Popen("something.py")
I tried to pass the command like above, but that fails because Popen wants a list or a string. If I concatenate the command into a single string before using Popen, the output is still displayed.
What I have already read (at least):
import sys
import os
import subprocess

def file_len(fname):
    with open(fname) as f:
        for i, l in enumerate(f):
            pass
    return i + 1

lock = "/root/share/lock"
print "Hello"
passfile = raw_input("Enter the password file name: ")
assert os.path.exists(passfile), "I did not find the file at, " + str(passfile)
devnull = open(os.devnull, 'wb')
trys = file_len(passfile)
passfile = open(passfile, 'r+')
cnt = 1
wrong = os.system("./lock penis")
for password in passfile:
    # com = ("./lock %s" % password)
    # var = os.system("./lock %s" % password)
    var = subprocess.Popen("./lock %s" % password, stderr=devnull, stdout=devnull)
    if var == wrong:
        os.system('clear')
        cnt += 1
        print ("Try %s/%s " % (cnt, trys))
        print ("Currently PIN: %s" % password)
    else:
        print "!!!!!!!!!!!!!!!!!"
        print password
Redirecting to devnull doesn't work either: OSError: [Errno 2] No such file or directory: ''
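A sketch of the usual Python 3 approach (the question's code is Python 2, where subprocess.check_output plus Popen.communicate play the same role): pass the command as a list of arguments, capture stdout with a pipe, and silence stderr with DEVNULL. The tiny `sys.executable -c` command here is just a stand-in for `./lock <password>`:

```python
import subprocess
import sys

# Stand-in for "./lock <password>": a small command that prints something
cmd = [sys.executable, '-c', 'print("QR-DATA")']

# stdout is captured into a variable, stderr is discarded, and nothing
# reaches the terminal
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
qr = result.stdout.decode().strip()
```

Also note that os.system returns the command's exit status, while Popen returns a process object, so comparing a Popen object against an os.system return value (as in the loop above) will never match; compare `result.returncode` values instead.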
Getting the details about the boundaries of objects in images using TensorFlow?
I have images something like this: objects placed on a white board. I can use TensorFlow to detect objects in the images, but is it possible to get their boundaries as well, so that I can crop a particular region if I decide a specific object is needed, and then apply my own image-processing techniques?
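Object detectors generally return a bounding box per object rather than just a class label, and cropping to a box is plain array slicing. A hedged NumPy sketch (the image and box coordinates are made-up stand-ins for a detector's output):

```python
import numpy as np

image = np.zeros((100, 100, 3), dtype=np.uint8)   # placeholder image
ymin, xmin, ymax, xmax = 10, 20, 50, 80           # hypothetical detected box

# Crop the region of interest for further image processing
crop = image[ymin:ymax, xmin:xmax]
```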
Questions about AutoEncoders
Good morning.
I have three questions about autoencoders, and I would really appreciate your help:
1- I have noticed that there is a lack of research papers on deep autoencoders (AE), although the concept is explained in plenty of tutorials and examples, and most of the tutorials claim this model is powerful. Is there a reason for the lack of research papers published using AEs, especially in anomaly or novelty detection?
2- In all the tutorials I have seen, a threshold is manually set (hard-coded) for the autoencoder to act as a decision boundary for anomaly detection, by testing several values and selecting the best one. Is there another technique to select the threshold value? In other words, what are the different thresholding mechanisms that can determine the threshold automatically?
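One family of automatic rules fits the threshold to the distribution of reconstruction errors on held-out, assumed-mostly-normal data, e.g. a high percentile or mean plus k standard deviations, instead of hand-tuning it. A toy sketch with made-up error values:

```python
import numpy as np

# Toy reconstruction errors from a validation set; the last one is an outlier
errors = np.array([0.10, 0.12, 0.09, 0.11, 0.50])

# Flag everything above the 95th percentile of the errors as anomalous
threshold = np.percentile(errors, 95)
anomalies = errors > threshold
```

Other common choices include mean + 3*std of the errors, or, when a few labelled anomalies exist, picking the threshold that maximizes F1 on a validation set.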
machine learning multiclass classification performance
I have three classes A, B, and C to classify (with a CNN). When I classify two of them at a time, the results are:
Classifying A and B: at the point where B's true rate is 50%, A's fake rate (A mistaken for B) is 0.1%.
Classifying A and C: at the point where C's true rate is 50%, A's fake rate is 0.1%.
Classifying B and C: at the point where C's true rate is 50%, B's fake rate is 2%.
So I think classes B and C are more similar to each other, while class A is more distinct. Is that a good conclusion?
Then I try to classify all three of them:
Classifying A, B, and C: at the point where C's true rate is 50%, B's fake rate is 2% and A's fake rate is 2% as well.
Is this increase in A's fake rate normal? Why does it happen? Can I decrease the fake rate without changing the model? I tried, for example, using more class-A data than for the other two classes, but it didn't help.
Passing beta and gamma into batch normalization
I have a very specific use case where I need to manually pass in the parameters of the affine transform for batch norm (gamma and beta). As far as I can tell, neither tf.layers.batch_normalization nor tf.contrib.layers.batch_norm allows this (I believe the accepted answer to this related question is incorrect, at least for recent versions of TensorFlow: How to give beta and gamma in tf.contrib.layers.batch_norm).
Is there any way to accomplish this without manually defining a custom batch-norm op that uses tf.nn.batch_normalization? (I'd like TensorFlow to take care of maintaining the moving averages of the mean and variance, if possible.)
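For reference, the transform itself is simple; a NumPy sketch of batch norm with caller-supplied gamma and beta (this only shows the math, it ignores the moving averages that the question wants TensorFlow to keep maintaining):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch, then apply the supplied
    # scale (gamma) and shift (beta) instead of learned variables
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```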
conda install fails, conda info package
I am on Windows, using Anaconda with Python 2.7. I want to install TensorFlow, but when I try to install it with:
conda install tensorflow
I get this error message:
Solving environment: -
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Users\nesri\Anaconda2\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "C:\Users\nesri\Anaconda2\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\Users\nesri\Anaconda2\lib\site-packages\conda\common\io.py", line 342, in _start_spinning
    self.fh.write('\b' * self._indicator_length)
  File "C:\Users\nesri\Anaconda2\lib\site-packages\colorama\ansitowin32.py", line 40, in write
    self.__convertor.write(text)
  File "C:\Users\nesri\Anaconda2\lib\site-packages\colorama\ansitowin32.py", line 141, in write
    self.write_and_convert(text)
  File "C:\Users\nesri\Anaconda2\lib\site-packages\colorama\ansitowin32.py", line 169, in write_and_convert
    self.write_plain_text(text, cursor, len(text))
  File "C:\Users\nesri\Anaconda2\lib\site-packages\colorama\ansitowin32.py", line 174, in write_plain_text
    self.wrapped.write(text[start:end])
IOError: [Errno 0] Error
failed

UnsatisfiableError: The following specifications were found to be in conflict:
  - ipaddress
  - tensorflow
Use "conda info <package>" to see the dependencies for each package.
So what should I do?
Is there an easy way to time PyTorch code (Python) while running on the GPU (not the CPU)?
Are there any easy ways to time PyTorch code (Python) while it runs on the GPU (not the CPU)? I can simply use the time module built into Python to time code on the CPU. However, is there an easy way to use functions to time code on the GPU? Thanks.
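The catch is that CUDA calls are asynchronous, so a plain timer can stop before the kernel has actually finished; you have to synchronize around the measurement (in PyTorch, torch.cuda.synchronize(), or torch.cuda.Event with event-based timing). A CPU-only sketch of the wall-clock pattern, where the squared-sum loop is just a stand-in for the GPU workload:

```python
import time

start = time.perf_counter()
total = sum(i * i for i in range(100000))   # stand-in for the GPU op
# For GPU code you would call torch.cuda.synchronize() here, before
# reading the clock, so the kernel is guaranteed to have completed
elapsed = time.perf_counter() - start
```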
Calling fit multiple times in keras not working
Based on this post : Calling "fit" multiple times in Keras
I tried to call model.fit() multiple times with different chunks of training data and then check the model's accuracy, but the model only kept the training from the last chunk of data: it overwrote all previously fitted coefficients, weights, intercepts (biases), etc.
I have 800 GB of training data and developed a segmentation CNN (with the Adam batch optimizer), set up to work with 64 GB of RAM and a GTX 1080 Ti GPU.
for i in range(noofImgChunks):
    tngData = np.load(str(int(i)) + ".npy")
    tngMaskData = np.load(str(int(i)) + ".npy")
    model_checkpoint = ModelCheckpoint(modelPath, monitor='val_loss',
                                       verbose=2, save_best_only=True)
    history = model.fit(tngData, tngMaskData, batch_size=5, nb_epoch=10,
                        validation_split=0.2, verbose=1,
                        callbacks=[model_checkpoint])
Some help understanding this would be much appreciated.
Which merge layers to use in keras?
Keras has many different ways of merging inputs, such as Add, Multiply, Average, and Concatenate.
Do they all have the same effect or are there situations where one is preferable?
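Their effects do differ; a NumPy sketch of the two main families (elementwise merges versus concatenation):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])

# Elementwise merges (Add, Multiply, Average) keep the shape and
# require matching input shapes...
added = a + b
multiplied = a * b

# ...while Concatenate stacks the features instead, growing the dimension
stacked = np.concatenate([a, b])
```

Roughly: elementwise merges suit branches that compute comparable representations (residual connections use Add, for example), while Concatenate is the usual default when the branches carry different kinds of information and you want the following layers to see both.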
Use data as input for CNN
I have an EEG data structure as shown (3D data).
I want to use it as input for my CNN model. Should I append the data together into one variable, or keep this structure as the input? If the data is appended, will it lose some important information?
40 is the number of trials, 64 is the number of channels, and 1500 is the number of timepoints. In these 40 trials I have data at frequencies from 8 Hz to 15.8 Hz in 0.2 Hz steps, so I have 40 trials. I want to train my model to identify a given signal as 8 Hz, 8.2 Hz, or whichever frequency was given as input.
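A hedged NumPy sketch of keeping the trial structure (random values stand in for the recordings): each of the 40 trials stays one labelled sample, and a trailing channel axis is added for a 2-D convolution over channels x time. Appending the trials end-to-end into one long array would lose the trial boundaries that the frequency labels are attached to.

```python
import numpy as np

eeg = np.random.rand(40, 64, 1500)   # trials x channels x timepoints
freqs = np.arange(8.0, 16.0, 0.2)    # one frequency label per trial

# Keep each trial as its own sample; add a channel axis for Conv2D input
x = eeg[..., np.newaxis]
```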
Kernel dies with successive conv layers in Keras
I am using the following convolutional network in Keras.
def CNN_model():
    # Create model
    model = Sequential()
    model.add(Conv2D(10, (3, 3), input_shape=(1, 256, 256), activation='elu'))
    model.add(Conv2D(10, (3, 3), activation='elu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(10, (3, 3), activation='elu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(10, (3, 3), activation='elu'))
    model.add(MaxPooling2D(pool_size=(10, 10)))
    model.add(Flatten())
    model.add(Dropout(0.3))
    model.add(Dense(64, activation='elu'))
    model.add(Dropout(0.3))
    model.add(Dense(16, activation='elu'))
    model.add(Dense(nb_models, activation='softmax'))
    # Compile model
    my_adam_optimizer = Adam(lr=initial_learning_rate, decay=decay_rate)
    model.compile(loss='categorical_crossentropy',
                  optimizer=my_adam_optimizer,
                  metrics=['accuracy'])
    return model
I don't understand why the kernel stops each time I try to train my model. If I just remove the second convolutional layer (remove model.add(Conv2D(10, (3, 3), activation='elu'))), then the model fits.
Does anyone know why it is impossible for me to put two convolutional layers in a row?
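One likely culprit is memory rather than the layer itself: with no pooling between them, the second conv layer keeps another full-resolution activation map (plus gradients) alive. A rough estimate, assuming the intended channels-first 1x256x256 layout, float32 activations, and a hypothetical batch of 32:

```python
# Activation memory in MB: batch * height * width * channels * 4 bytes
def activation_mb(batch, h, w, channels):
    return batch * h * w * channels * 4 / 1e6

# 'valid' 3x3 convolutions shrink each spatial side by 2
first = activation_mb(32, 254, 254, 10)    # after the first Conv2D
second = activation_mb(32, 252, 252, 10)   # after the second Conv2D
```

If memory is the issue, a MaxPooling2D between the first two convs, a smaller batch size, or smaller input images usually stops the kernel from dying. Also double-check that input_shape=(1, 256, 256) matches your data_format setting (channels_first), since the Keras default is channels_last.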
How to feed the output of an RNN into a CNN using TensorFlow?
I am trying to figure out a way to take the output of a recurrent neural network and feed it into a CNN using TensorFlow, for text prediction.
I have an RNN that predicts text, and its output shape is (100, 32).
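Shape-wise, the usual trick is to add a channel axis so the (100, 32) RNN output becomes valid 1-D convolution input (in Keras/TensorFlow this is a Reshape layer or tf.expand_dims; the NumPy sketch below only demonstrates the shapes):

```python
import numpy as np

rnn_out = np.random.rand(100, 32)   # (timesteps, features) from the RNN

# Add a trailing channel axis so a 1-D convolution can slide over it
cnn_in = rnn_out[..., np.newaxis]
```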
Keras CNN too few parameters
I am trying to recreate the following tutorial CNN, with 3 inputs and sigmoid activation functions, in Keras:
So the number of parameters should be 7 (assuming 1 filter of size 2 convolved over 2 locations (either the top 2 inputs or the 2 lower inputs), 2 shared weights (shown as 1.0's on the synapses), and no padding in the Conv1D layer). When I write the following in Keras:
When I check, I only get 5 parameters.
What do I need to do to get the correct number of parameters? There are probably several things wrong in my code, since I'm new to Keras.
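A quick way to sanity-check the count is the Conv1D formula: kernel_size * in_channels * filters weights, plus one bias per filter; because of weight sharing, the two positions the filter slides over add nothing. A small helper (names are illustrative):

```python
# Trainable parameters of a Conv1D layer
def conv1d_params(kernel_size, in_channels, filters, use_bias=True):
    weights = kernel_size * in_channels * filters
    biases = filters if use_bias else 0
    return weights + biases
```

Applying this layer by layer to the model usually shows exactly where the expected 7 and the reported 5 diverge, e.g. a layer built without a bias, or a Dense head smaller than the diagram implies.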