Neural network: how to choose the best model
I am using nftool to train my data (90 pieces of data), and my training algorithm is Bayesian Regularization. I am going to try different numbers of neurons to compare and find which one gives the best model. Are there any appropriate steps I need to follow? Do I need to retrain again and again until I get good model performance for each neuron count?
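For what it's worth, here is a minimal sketch of one common procedure (assuming a fitting network built with fitnet and the trainbr algorithm; the inputs x, targets t, candidate sizes, and repeat count are placeholders). Because results vary with the random initialization, each candidate size is retrained several times, and the models are compared on held-out test data; note that trainbr does not use a validation set.

% Sketch: compare hidden-layer sizes under Bayesian Regularization (trainbr).
% x (inputs) and t (targets) stand in for your 90 samples.
hiddenSizes = [5 10 15 20];            % candidate neuron counts (placeholder)
nRepeats    = 5;                       % retrain per size: random init varies
bestPerf = Inf;
for h = hiddenSizes
    for r = 1:nRepeats
        net = fitnet(h, 'trainbr');
        net.divideParam.trainRatio = 0.85;   % trainbr uses no validation set
        net.divideParam.valRatio   = 0;
        net.divideParam.testRatio  = 0.15;
        [net, tr] = train(net, x, t);
        perf = perform(net, t(tr.testInd), net(x(:, tr.testInd)));
        if perf < bestPerf
            bestPerf = perf; bestNet = net; bestH = h;
        end
    end
end
fprintf('Best model: %d neurons, test MSE = %g\n', bestH, bestPerf);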
See also questions close to this topic
About functions in MATLAB: Error using myfun(), not enough input arguments
I have created a function in the following way:
I have opened an empty Notepad file and written the following statement:
function y = myfun(x)
y = 2*x + 1;
and then I have saved the file as myfun.m.
Then I have imported this .m file into MATLAB and run the file by pressing the Run button, but I have obtained the following error:
What was interesting is that when I wrote the function call in the Command Window, I obtained the correct value: 5.
My question: How should I interpret the above error?
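A likely interpretation (my assumption, based on standard MATLAB behavior): the Run button executes myfun with no input arguments, so x is never assigned inside the function. Functions with inputs are meant to be called with arguments, for example from the Command Window:

y = myfun(2)    % returns 5, exactly as you observed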
How to run a .m file in MATLAB cloud using a REST API
I have a MATLAB cloud account and have a .m file there. Now I am able to run the file using MATLAB Mobile. I need to run that file using an API call or something like that, without using MATLAB Mobile. Please help me.
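One hedged sketch, assuming the function can be deployed to MATLAB Production Server (which exposes a RESTful API); the endpoint, archive name myArchive, and function name myfun below are hypothetical placeholders:

# Hypothetical sketch: calling a function deployed to MATLAB Production Server
# via its RESTful API. Endpoint, archive ("myArchive"), and function ("myfun")
# are placeholders for your own deployment.
import requests

url = "http://localhost:9910/myArchive/myfun"   # placeholder endpoint
payload = {"nargout": 1, "rhs": [2]}            # call myfun(2), expect 1 output
resp = requests.post(url, json=payload)
print(resp.json()["lhs"])                       # list of returned values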
reading a binary movie file: how to skip every n-th frame
I want to read in a binary video file using fread. However, after hours of fiddling around, how the skip parameter works is still not clear to me from the MathWorks explanations / Google results.
I want to change this pseudocode to read every 4th frame of the video. What already worked for me is storing every 4th frame in the data variable, and performing fread for all other frames (without storing the results). However, the question is how to do the same thing directly using fread.
Here's the pseudocode that should be self-explanatory:
for ind = 1:data_length
    data(:,:,ind) = fread(fi, [y_res x_res], 'int16=>int16');
end
I have tried
data(:,:,ind) = fread(fi, [y_res x_res], 'int16=>int16*4');                    % should skip every 4th frame?
data(:,:,ind) = fread(fi, [y_res x_res], 'int16=>int16', sum([y_res, x_res])); % should skip every 4th frame?
without success. If possible, please also explain what is meant by 'specify the skip value in a scalar form' (as mentioned in the MathWorks documentation).
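For reference, 'scalar form' appears to mean that skip is a single number of bytes (not a [rows cols] pair), skipped after each block of values when the precision has the 'N*source=>output' form. A sketch of two approaches, assuming int16 frames of y_res*x_res values and a placeholder frame count n_kept:

% Sketch (untested): keep one frame out of every four.
frame_vals  = y_res * x_res;       % int16 values per frame
frame_bytes = 2 * frame_vals;      % int16 = 2 bytes per value

% (1) scalar byte skip: with an 'N*source=>output' precision, fread skips
% 'skip' bytes after every block of N values it reads.
precision = sprintf('%d*int16=>int16', frame_vals);
raw  = fread(fi, frame_vals * n_kept, precision, 3 * frame_bytes);
data = reshape(raw, y_res, x_res, []);

% (2) equivalent and easier to reason about: read one frame, seek past three.
for ind = 1:n_kept
    data(:,:,ind) = fread(fi, [y_res x_res], 'int16=>int16');
    fseek(fi, 3 * frame_bytes, 'cof');
end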
How to exclude boilerplate information in information retrieval and information extraction?
I am wondering if there is a general way to automatically recognize non-boilerplate information on a webpage.
For example, "Support Teams" appears in the following webpage. But such a term is clearly not specific to Ahrens Lab, as the webpages of other labs at Janelia also have "Support Teams". Thus, it should be considered boilerplate information. For the purposes of information retrieval, such text should be excluded.
Google apparently cannot exclude the boilerplate information. This is demonstrated by the fact that searching site:janelia.org intext:"Support Teams" Ahrens returns the above URL.
For a webpage from a specific website, a person can define a set of rules manually to exclude boilerplate information. However, this solution is not generalizable.
Is there a general solution (probably involving some machine learning / deep learning techniques and data preprocessing) to exclude boilerplate information? Thanks.
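One crude, general-purpose sketch (an assumption on my part, not a standard library): treat text blocks that recur across many sibling pages of the same site as boilerplate and drop them by document frequency.

# Sketch: frequency-based boilerplate filtering. Text blocks that appear on
# most sibling pages of the same site are treated as boilerplate.
from collections import Counter

def split_blocks(page_text):
    """Split a page into candidate text blocks (here: non-empty lines)."""
    return [ln.strip() for ln in page_text.splitlines() if ln.strip()]

def filter_boilerplate(pages, threshold=0.5):
    """Drop blocks occurring in more than `threshold` of the pages."""
    counts = Counter()
    for page in pages:
        counts.update(set(split_blocks(page)))
    cutoff = threshold * len(pages)
    return [[b for b in split_blocks(p) if counts[b] <= cutoff] for p in pages]

# Usage idea: pages = [ahrens_lab_text, other_janelia_lab_text, ...]
# A block like "Support Teams" occurs on every lab page and gets filtered out.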
Tensor sizes CrossEntropyLoss
In the snippet:
criterion = nn.CrossEntropyLoss()
raw_loss = criterion(output.view(-1, ntokens), targets)
output size is torch.Size([5, 5, 8967]), targets size is torch.Size(), and ntokens is 8967
After modifying the code, my output size is torch.Size([5, 8967]) and my targets size is torch.Size(), which raises dimensionality issues when computing the loss.
Is it sensible to increase the size of my Linear activation that produces the output by 5, so that I can resize the output later to be of the size torch.Size([5, 5, 8967])?
The problem with increasing the size of the tensor is that ntokens can become quite large and I can easily run out of memory because of that. Is there an alternative approach?
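One alternative sketch (assuming the usual language-model layout, where targets hold one class index per output row): flatten the targets alongside the output instead of enlarging the Linear layer, since CrossEntropyLoss expects input of shape (N, C) and targets of shape (N,):

# Sketch: flatten both tensors instead of growing the Linear layer.
import torch
import torch.nn as nn

ntokens = 8967
criterion = nn.CrossEntropyLoss()

output  = torch.randn(5, 5, ntokens)          # (seq_len, batch, vocab) as in the question
targets = torch.randint(0, ntokens, (5, 5))   # assumed matching class indices

loss = criterion(output.view(-1, ntokens),    # -> (25, 8967)
                 targets.view(-1))            # -> (25,)
print(loss.item())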
How to identify issue with image classification model and softmax probability?
I am trying to build an image classifier using transfer learning with the VGG16 model in Keras. I acquired a very small data set of 200 images for each class and used 10 images for validation (I know the data set is small, but the requirement was for a controlled environment). When predicting, the model favors a particular class, and the probabilities of the other classes are almost zero (2e-35). I can't figure out the exact issue. Some classes contain very different sets of images (for example, the vehicle class contains cars, bikes, etc.). My question is: what is wrong with the model? Is it the low number of images in the data set, or the variation of images within a class?
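For what it's worth, overconfident softmax outputs (probabilities near 0 or 1) on 200 images per class often point to overfitting. A minimal Keras sketch, under the assumption that freezing the VGG16 base and augmenting the data is the first thing to try; num_classes and the augmentation parameters are placeholders:

# Sketch: freeze the VGG16 base, train only a small head, and augment the data.
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, Flatten
from keras.preprocessing.image import ImageDataGenerator

num_classes = 3    # placeholder: set to your number of classes

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False               # train only the new head

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(num_classes, activation='softmax')(x)
model = Model(base.input, out)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Augmentation stretches 200 images per class much further.
train_gen = ImageDataGenerator(rescale=1./255, rotation_range=20,
                               horizontal_flip=True, zoom_range=0.2)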