neural-network - how to choose the best model
I am using nftool to train my data (90 pieces of data), and my training algorithm is "Bayesian Regularization". I am going to try different numbers of neurons to compare and find which one gives the best model. Is there an appropriate procedure I should follow? Do I need to retrain again and again until I get good model performance for each number of neurons?
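The loop the question describes (retrain several times per candidate size, then compare) can be sketched generically. This is a language-agnostic sketch in Python, not nftool itself: train_once is a hypothetical stand-in for a single nftool/trainbr training run, and its error values are fabricated purely for illustration.

```python
import random
import statistics

# Generic sketch of the selection loop: for each candidate hidden-layer
# size, retrain several times (random weight initialisation changes each
# run) and compare the average error. train_once is a hypothetical
# stand-in for one nftool/trainbr run; its numbers are made up.
def train_once(n_hidden, seed):
    random.seed(seed)
    # pretend the sweet spot is near 10 neurons, plus run-to-run noise
    return abs(n_hidden - 10) * 0.01 + random.uniform(0, 0.05)

def select_best(sizes, repeats=5):
    scores = {n: statistics.mean(train_once(n, s) for s in range(repeats))
              for n in sizes}
    return min(scores, key=scores.get), scores

best, scores = select_best([5, 10, 15, 20])
```

Averaging over several retrains per size matters because each run starts from different random weights, so a single run per size can mislead the comparison.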
See also questions close to this topic
Matlab - Display common features
I'm working on a project for university, which currently takes a reference image of a coin and produces 100 drawn images of it, with different brightness settings to capture as many features as possible.
So for example, here is a reference coin and a selection of the images the application has produced.
What I want to achieve is an addition to my current application, to go through these 100 drawn images and automatically pick out the dominant features that appear multiple times.
How it could work
- Application grabs the 100 reference images.
- If a line on the image is repeated/displayed in 40-50 of them, add it to the final drawn image.
- At the end, the final drawn image is generated, this time displaying all the dominant features of this coin.
I hope I've made clear what I have in my head, but since I'm a newcomer to Matlab, I'm completely unsure how to detect repetitive features in multiple images like those shown above. If anyone can point me in the right direction or illustrate a solution, I'd greatly appreciate it.
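The voting scheme described in the steps above can be sketched briefly. This is a sketch in Python/NumPy rather than Matlab, and it assumes the 100 drawn images are available as equally sized binary (edge on/off) arrays; the function name is my own.

```python
import numpy as np

# Sketch of the voting idea: stack the drawn images, count how often
# each pixel is "on" across them, and keep only the pixels that appear
# in at least min_votes of the images.
def dominant_features(images, min_votes=40):
    votes = np.sum(np.stack(images).astype(bool), axis=0)  # per-pixel count
    return votes >= min_votes                              # boolean final image
```

The same accumulate-and-threshold idea translates directly to Matlab with a sum over a 3D array followed by a comparison.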
How to use a function like Matlab's 'fread' in Python?
This is a .dat file.
In Matlab, I can read it with this code:
lonlatfile = 'NOM_ITG_2288_2288(0E0N)_LE.dat';
f = fopen(lonlatfile, 'r');
lat_fy = fread(f, [2288*2288, 1], 'float32');
lon_fy = fread(f, [2288*2288, 1], 'float32') + 86.5;
lon = reshape(lon_fy, 2288, 2288);
lat = reshape(lat_fy, 2288, 2288);
Here are some results from Matlab:
How can I do this in Python to get the same result?
PS: My code is this:
def fromfileskip(fid, shape, counts, skip, dtype):
    """
    fid : file object, should be an open binary file.
    shape : tuple of ints, the desired shape of each data block.
        For a 2d array with xdim, ydim = 3000, 2000 and xdim the
        fastest dimension, shape = (2000, 3000).
    counts : int, number of times to read a data block.
    skip : int, number of bytes to skip between reads.
    dtype : np.dtype object, type of each binary element.
    """
    data = np.zeros((counts,) + shape)
    for c in range(counts):
        block = np.fromfile(fid, dtype=np.float32, count=np.product(shape))
        data[c] = block.reshape(shape)
        fid.seek(fid.tell() + skip)
    return data

fid = open(r'NOM_ITG_2288_2288(0E0N)_LE.dat', 'rb')
data = fromfileskip(fid, (2288, 2288), 1, 0, np.float32)
loncenter = 86.5  # footpoint of FY2E
latcenter = 0
lon2e = data + loncenter
lat2e = data + latcenter
Lon = lon2e.reshape(2288, 2288)
Lat = lat2e.reshape(2288, 2288)
But, the result is different from that of Matlab.
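Two things in the Matlab snippet may explain the mismatch: it reads two consecutive float32 blocks (lat first, then lon + 86.5), while the Python code reads only one; and Matlab's reshape fills column-major, whereas NumPy's default is row-major. A minimal sketch that mirrors the Matlab code under those assumptions:

```python
import numpy as np

# Minimal sketch mirroring the Matlab snippet: read two consecutive
# float32 blocks (lat, then lon with the +86.5 offset), and reshape
# with order='F' because Matlab's reshape is column-major.
def read_lonlat(path, n=2288):
    with open(path, 'rb') as f:
        lat = np.fromfile(f, dtype=np.float32, count=n * n)
        lon = np.fromfile(f, dtype=np.float32, count=n * n) + 86.5
    return lon.reshape(n, n, order='F'), lat.reshape(n, n, order='F')
```

This is a sketch against the file layout implied by the Matlab code, not a verified description of the .dat format.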
contour of ear in profile face image
Hello, I have applied Canny edge detection to my 2D profile face image. I want to model the ear as a set of contours. Can you please help? Thank you.
What activation function to pick to have a binary output layer with multiple 0s and 1s?
The question is related to neural network output estimation. My input is a vector of size 100, in which the elements [0.5, 1, 1.5] are randomly distributed across the 100 indexes; the rest of the vector elements are zeros, i.e.,
input = [0, 0, 0, 0.5, 1, 0, 0, 1.5, 0, .5, 0 ... , 0; 1, 0, 1, 1.5, 1.5, 0, 0, 0, 1.5, 0, 1 ... , 0]
and my output consists of $1$'s at the indexes where the input element is not $0$, i.e., for the above input:
output = [0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0 ... , 0; 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1 ... , 0]
My network structure is 100-10-100 (input, hidden-layer, and output unit sizes respectively). I use tansig as the activation function of my hidden layer nodes and softmax for my output layer.
1. I attached a figure (https://i.stack.imgur.com/Pt29V.png) showing a sample input (green), the corresponding target (blue), and the estimated output (red).
Why aren't the output elements $0$'s and $1$'s, but instead numbers in $[0, 1)$? How can I simply get an estimate of $1$'s and $0$'s? Since the sum of all elements in the output vector is one, is this a probability density estimate? How can I turn this into $0$'s and $1$'s?
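The behaviour in the question follows from softmax itself: it normalises the outputs to sum to 1, so with many "on" targets no single element can approach 1. A small NumPy sketch contrasting softmax with an element-wise logistic output thresholded at 0.5 (the usual choice for multi-label 0/1 targets); z holds hypothetical pre-activations, not values from the asker's network.

```python
import numpy as np

# softmax forces outputs to sum to 1, so per-element values shrink when
# several targets are "on"; an independent logistic per output plus a
# 0.5 threshold yields the desired hard 0/1 vector. z is hypothetical.
def softmax(z):
    e = np.exp(z - z.max())   # shift for numerical stability
    return e / e.sum()

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([4.0, -3.0, 4.0, -3.0])
p_softmax = softmax(z)                   # sums to 1; elements stay below 0.5 here
p_logistic = logistic(z)                 # independent per-element probabilities
binary = (p_logistic > 0.5).astype(int)  # hard 0/1 decision
```

With two equally large pre-activations, each softmax output is pushed toward 0.5 or below, whereas the logistic outputs can both sit near 1 independently.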
Output for an RNN
I'm taking the Coursera course Neural Networks for Machine Learning hosted by Geoffrey Hinton from the University of Toronto and there is a quiz question in week 7 for which my answer differs from the right one.
One question is: how should I get a probability between 0 and 1 if the Whh weight is negative and the logistic h unit gives values between 0 and 1? Given the above, their linear combination will always be negative.
A second question would be: do we also have to use backpropagation in order to get the right answer?
The way I've started to tackle this question is the following:
h0 = 1/(1 + exp(-(Whh * hbias + Wxh * x0)))
h1 = 1/(1 + exp(-(Whh * h0 + Wxh * x1)))
y1 = Why * h1
Which of my assumptions are incorrect?
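The forward pass written out above can be run directly. A sketch with hypothetical weight and input values (the quiz's actual numbers are not reproduced here):

```python
import math

# Runnable version of the forward pass above. All numeric values are
# assumed examples, not the course's quiz values. Note the output y1 is
# a plain linear combination: nothing constrains it to (0, 1).
def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

Whh, Wxh, Why = -0.7, 0.5, 1.2   # assumed example weights
hbias, x0, x1 = 1.0, 0.3, 0.6    # assumed bias and inputs

h0 = logistic(Whh * hbias + Wxh * x0)
h1 = logistic(Whh * h0 + Wxh * x1)
y1 = Why * h1   # linear output layer, as in the asker's equations
```

The hidden units always land in (0, 1) because of the logistic, regardless of the sign of Whh; only the final linear step is unbounded.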
Python: how to install neuralpy
Hello, if I try to install neuralpy with sudo pip install neuralpy, I get this kind of error:
Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-ziqu6pe6/neuralpy/