Where can I find complex and detailed examples for training Tensorflow in C++?
I want to build and train a graph in C++ with TensorFlow that consists of two layers, and to feed it with a given matrix as an input.
I haven't yet found any comprehensive tutorial on building and optimizing a TensorFlow graph in C++.
So far I have found only these two relevant pieces of code:
An official C++ example without real training or optimization
An important answer about feed_dict but without examples
Is it possible that the "input" argument for Session->Run() in the first link has a bug? It does not contain the string "x:0" as suggested in the second link.
Thank you!
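For what it's worth, "x:0" is just TensorFlow's tensor-naming convention: it means output 0 of the operation named "x", and a bare op name like "x" is accepted as shorthand for its first output, which is why both spellings can refer to the same feed. A minimal, TensorFlow-free sketch of how such a name decomposes (the helper below is illustrative, not a TensorFlow API):

```python
def parse_tensor_name(name):
    """Split a TensorFlow-style tensor name "op:index" into its parts.

    A bare op name like "x" is shorthand for its first output "x:0",
    so feeding "x" and feeding "x:0" address the same tensor.
    """
    op, sep, index = name.partition(":")
    return op, int(index) if sep else 0

print(parse_tensor_name("x"))    # ("x", 0)
print(parse_tensor_name("x:0"))  # ("x", 0)
```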
See also questions close to this topic

Getting WA on BYECAKES
I am using the correct approach but still getting WA. I just take the maximum ratio and add all the ingredients according to it. I have also tried to check all the edge cases, but I am still getting a wrong answer.
Question link: http://www.spoj.com/problems/BYECAKES/
BYECAKES - Bye Bye Cakes
John is moving to a different city and he wants to use all his perishable food before doing it, to avoid wasting. Luckily all he has now is eggs, flour, sugar and milk, so he is going to make his famous cakes and give them to his friends as a goodbye gift. John only knows how to make an entire cake and not half a cake, a third of a cake, or any other portion. So, he will buy whatever is needed of each ingredient so that he can make an integer number of cakes and have nothing left. Of course, he wants to spend as little money as possible. You must help John to decide how much he should buy of each ingredient.
#include <bits/stdc++.h>
using namespace std;

struct data{ int rt, rm, val; } arr[5];

data find_mx()
{
    data mx;
    mx.rt = 1;
    if(arr[0].rt > mx.rt){ mx.rt = arr[0].rt; mx.rm = arr[0].rm; }
    if(arr[1].rt > mx.rt){ mx.rt = arr[1].rt; mx.rm = arr[1].rm; }
    if(arr[2].rt > mx.rt){ mx.rt = arr[2].rt; mx.rm = arr[2].rm; }
    if(arr[3].rt > mx.rt){ mx.rt = arr[3].rt; mx.rm = arr[3].rm; }
    return mx;
}

int main()
{
    while(1)
    {
        int A, B, C, D, a, b, c, d;
        scanf("%d %d %d %d %d %d %d %d", &A, &B, &C, &D, &a, &b, &c, &d);
        if(A == -1) break;
        arr[0].rt = A / a; arr[0].rm = A % a; arr[0].val = a;
        arr[1].rt = B / b; arr[1].rm = B % b; arr[1].val = b;
        arr[2].rt = C / c; arr[2].rm = C % c; arr[2].val = c;
        arr[3].rt = D / d; arr[3].rm = D % d; arr[3].val = d;
        data mx = find_mx();
        if(mx.rm != 0) mx.rt++;
        for(int i = 0; i < 4; i++)
            printf("%d ", ((mx.rt - arr[i].rt) * arr[i].val) - arr[i].rm);
        printf("\n");
    }
    return 0;
}
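For reference, the usual fix for this kind of WA is to take the ceiling per ingredient before taking the maximum (the code above takes the maximum of the floored ratios and rounds up once at the end, which misses an ingredient whose floor ties with the maximum but still has a remainder). A hedged Python sketch of that approach, with variable names of my choosing:

```python
def to_buy(have, need):
    """How much of each ingredient John must buy.

    have: units already owned, per ingredient
    need: units required per cake, per ingredient
    The number of cakes is the maximum over ingredients of
    ceil(have[i] / need[i]); John buys the shortfall for each.
    """
    cakes = max((h + n - 1) // n for h, n in zip(have, need))
    return [cakes * n - h for h, n in zip(have, need)]

# With 2, 4, 3, 3 units on hand and 1, 2, 1, 1 per cake, the binding
# ingredient forces 3 cakes, so John buys the difference per ingredient.
print(to_buy([2, 4, 3, 3], [1, 2, 1, 1]))  # → [1, 2, 0, 0]
```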

Porting from QtWebKit to QtWebEngine (from Qt4.8 to Qt5.6)
I need to port my program to a newer version of Qt (from 4.8 to 5.6), but I don't know how to port the QWebFrame class, because QWebFrame has been merged into QWebEnginePage. I used this guide: http://doc.qt.io/qt-5/qtwebenginewidgets-qtwebkitportingguide.html
I have in my program this code:
connectToLambda( ui->view->page()->mainFrame(), SIGNAL(javaScriptWindowObjectCleared()), this, [&]{ ui->view->page()->mainFrame()->addToJavaScriptWindowObject("widget", this); });
Where ui->view is a QWebEngineView (previously QWebView).
How can I port this to Qt 5.6.3? I can't find the javaScriptWindowObjectCleared() signal or the addToJavaScriptWindowObject() function.

Determining if a pack is empty or not
Can someone explain why this static assertion fails? How can I redefine is_empty so that it works the way I want (without changing the syntax)? A type that is not a pack should evaluate to false by default (e.g. is_empty<int>::value shall be false).

#include <type_traits>

template <typename Pack> struct is_empty : std::false_type {};

template <typename T, template <T...> class Z, T... Is>
struct is_empty<Z<Is...>> : std::true_type {};

template <typename T, template <T...> class Z, T First, T... Rest>
struct is_empty<Z<First, Rest...>> : std::false_type {};

template <int...> struct Z;

int main()
{
    static_assert(is_empty<Z<>>::value);
}

Empty factor levels were dropped for columns when using MLR package
I have a question: when I try to use makeClassifTask from the mlr package to do an SVM, I get the warning "Empty factor levels were dropped for columns". My code is:
install.packages("mlr")
library(mlr)
set.seed(1)
sample = sample(2, nrow(cleaned_caravan_train), replace = T)
train = cleaned_caravan_train[sample == 1, ]
test = cleaned_caravan_train[sample == 2, ]
makeClassifTask(data = train, target = "CARAVAN")
An example from the MLR package works very well:
install.packages("mlbench")
library(mlbench)
data("BostonHousing")
data("Ionosphere")
makeClassifTask(data = iris, target = "Species")
I don't understand the difference between the two.

Dialogflow: selecting specific value for the action while training
Is it possible to select a specific value for the action in the training or intent tab?
For example: I have an entity PLACES with a lot of places in the city, and I try to keep tons of synonyms for each.
Let's say there is a place called City Museum and synonyms are "museum, city museum, cit mus, meseum" and so on, with mistakes or other aliases.
Currently, I have to add them manually, since while training there is no way to select a specific value for the entity. I can select the proper intent and then the entity, but Dialogflow creates a new value for words it doesn't yet know, rather than adding them to the existing list.
Is there any way to do it this way?

What is wrong with my cosine similarity? TensorFlow
I want to use cosine similarity in my neural network instead of the standard dot product.
I've had a look at the dot product and at the cosine similarity.
In the example above they use
a = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_a")
b = tf.placeholder(tf.float32, shape=[None], name="input_placeholder_b")
normalize_a = tf.nn.l2_normalize(a, 0)
normalize_b = tf.nn.l2_normalize(b, 0)
cos_similarity = tf.reduce_sum(tf.multiply(normalize_a, normalize_b))
sess = tf.Session()
cos_sim = sess.run(cos_similarity, feed_dict={a: [1, 2, 3], b: [2, 4, 6]})
However, I tried doing it my own way
x = tf.placeholder(tf.float32, [None, 3], name='x')   # input has 3 features
w1 = tf.placeholder(tf.float32, [10, 3], name='w1')   # 10 nodes in the first hidden layer
cos_sim = tf.divide(tf.matmul(x, w1), tf.multiply(tf.norm(x), tf.norm(w1)))
with tf.Session() as sess:
    sess.run(cos_sim, feed_dict={x: np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]),
                                 w1: np.random.uniform(0, 1, size=(10, 3))})
Is my way wrong? Also, what is going on in the matrix multiplication? Are we actually multiplying the weights of one node for the inputs of different samples (within one feature)?
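For comparison, here is a hedged NumPy sketch of a per-pair cosine similarity between input rows and weight rows: each sample vector and each weight vector is normalized individually (per row), rather than dividing one big matrix product by a single global norm as in the snippet above. The function name is mine:

```python
import numpy as np

def cosine_similarity_matrix(x, w):
    """Cosine similarity between every row of x and every row of w.

    x: (n_samples, n_features), w: (n_nodes, n_features)
    Returns an (n_samples, n_nodes) matrix whose entry [i, j] is the
    cosine of the angle between sample i and weight vector j.
    """
    x_norm = x / np.linalg.norm(x, axis=1, keepdims=True)
    w_norm = w / np.linalg.norm(w, axis=1, keepdims=True)
    return x_norm @ w_norm.T

x = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
w = np.array([[1.0, 2.0, 3.0]])
print(cosine_similarity_matrix(x, w))  # both rows are parallel to w, so both entries are 1.0
```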

TensorFlow (r1.3, r1.4) build error with CUDA on MacBook Pro
apple:tensorflow apple$ bazel build $flags --verbose_failures --action_env PATH --action_env LD_LIBRARY_PATH --action_env DYLD_LIBRARY_PATH //tensorflow/tools/pip_package:build_pip_package
INFO: Found 1 target... ... ...
tensorflow/core/kernels/split_lib_gpu.cu.cc(122): error: specified alignment (4) is different from alignment (2) specified on a previous declaration detected during: instantiation of "void tensorflow::split_v_kernel(const T *, tensorflow::CudaDeviceArrayStruct, IntType, IntType, tensorflow::CudaDeviceArrayStruct) [with T=float, IntType=tensorflow::int32, useSmem=true]" (231): here instantiation of "void tensorflow::SplitVOpGPULaunch::Run(const Eigen::GpuDevice &, __nv_bool, const T *, int, int, const tensorflow::CudaDeviceArrayStruct &, const tensorflow::CudaDeviceArrayStruct &) [with T=float, IntType=tensorflow::int32]" (251): here
tensorflow/core/kernels/split_lib_gpu.cu.cc(122): error: specified alignment (4) is different from alignment (2) specified on a previous declaration detected during: instantiation of "void tensorflow::split_v_kernel(const T *, tensorflow::CudaDeviceArrayStruct, IntType, IntType, tensorflow::CudaDeviceArrayStruct) [with T=float, IntType=tensorflow::int32, useSmem=false]" (236): here instantiation of "void tensorflow::SplitVOpGPULaunch::Run(const Eigen::GpuDevice &, __nv_bool, const T *, int, int, const tensorflow::CudaDeviceArrayStruct &, const tensorflow::CudaDeviceArrayStruct &) [with T=float, IntType=tensorflow::int32]" (251): here
.........
16 errors detected in the compilation of "/var/folders/p3/rl3d8_690qx5jz2xvmkssscc0000gn/T//tmpxft_0000484e_000000006_split_lib_gpu.cu.cpp1.ii". ERROR: /Users/apple/temp/tensorflow/tflatest/tensorflow/tensorflow/core/kernels/BUILD:387:1: output 'tensorflow/core/kernels/_objs/split_lib_gpu/tensorflow/core/kernels/split_lib_gpu.cu.pic.o' was not created. ERROR: /Users/apple/temp/tensorflow/tflatest/tensorflow/tensorflow/core/kernels/BUILD:387:1: not all outputs were created or valid. Target //tensorflow/tools/pip_package:build_pip_package failed to build INFO: Elapsed time: 8.731s, Critical Path: 7.62s
Any tips?

Keras vs. TensorFlow: hyperparameters not identical?
I'm converting a Keras script to my own pure TensorFlow script.
Unfortunately, something is wrong. Training with my script is a lot faster (so fast that there has to be an error somewhere), and the loss is not improving. I think it has something to do with my hyperparameters, but I can't figure it out.
I was a bit confused by the terminology (samples_per_epoch etc.), but I thought I had it. Is the second for loop right? Does anyone have an idea what's different? Is it something under the hood of Keras?
Original code:
def gen(hwm, host, port):
    for tup in client_generator(hwm=hwm, host=host, port=port):
        X, Y, _ = tup
        Y = Y[:, -1]
        if X.shape[1] == 1:  # no temporal context
            X = X[:, -1]
        yield X, Y

def get_model(time_len=1):
    ch, row, col = 3, 160, 320  # camera format
    model = Sequential()
    model.add(Lambda(lambda x: x/127.5 - 1.,  # Normalize
                     input_shape=(ch, row, col),
                     output_shape=(ch, row, col)))
    model.add(Convolution2D(16, 8, 8, subsample=(4, 4), border_mode="same"))
    model.add(ELU())
    model.add(Convolution2D(32, 5, 5, subsample=(2, 2), border_mode="same"))
    model.add(ELU())
    model.add(Convolution2D(64, 5, 5, subsample=(2, 2), border_mode="same"))
    model.add(Flatten())
    model.add(Dropout(.2))
    model.add(ELU())
    model.add(Dense(512))
    model.add(Dropout(.5))
    model.add(ELU())
    model.add(Dense(1))
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    tf.reset_default_graph()
    sess = tf.Session()
    model = get_model()
    model.fit_generator(
        gen(20, args.host, port=args.port),  # Batch size 200
        samples_per_epoch=10000,
        nb_epoch=args.epoch,
        validation_data=gen(20, args.host, port=args.val_port),  # Batch size 200
        nb_val_samples=1000
    )
    init = tf.global_variables_initializer()
    sess.run(init)
My code:
def gen(hwm, host, port):
    for tup in client_generator(hwm=hwm, host=host, port=port):
        X, Y, _ = tup
        Y = Y[:, -1]
        if X.shape[1] == 1:  # no temporal context
            X = X[:, -1]
        yield X, Y

def conv2d(x, W, s):
    return tf.nn.conv2d(x, W, strides=[1, s, s, 1], padding="SAME")

def weight_variable(shape):
    return tf.Variable(tf.truncated_normal(shape, stddev=0.1))

def bias_variabel(shape):
    return tf.Variable(tf.constant(0.1, shape=shape))

def sig2d(x):
    return tf.nn.sigmoid(x, name='Sigmoidnormalization')

if __name__ == "__main__":
    tf.reset_default_graph()
    sess = tf.Session()

    xs = tf.placeholder(tf.float32, [None, 153600])
    x_image = tf.reshape(xs, [-1, 160, 320, 3])
    ys = tf.placeholder(tf.float32, [None, 1])
    x_image = tf.nn.sigmoid(x_image, name='Sigmoidnormalization')

    # conv1
    W_conv1 = weight_variable([8, 8, 3, 16])
    b_conv1 = bias_variabel([16])
    h_conv1 = tf.nn.elu(conv2d(x_image, W_conv1, 4) + b_conv1)
    # conv2
    W_conv2 = weight_variable([5, 5, 16, 32])
    b_conv2 = bias_variabel([32])
    h_conv2 = tf.nn.elu(conv2d(h_conv1, W_conv2, 2) + b_conv2)
    # conv3
    W_conv3 = weight_variable([5, 5, 32, 64])
    b_conv3 = bias_variabel([64])
    h_conv3 = tf.nn.elu(conv2d(h_conv2, W_conv3, 2) + b_conv3)
    # flat1
    shape = h_conv3.get_shape().as_list()
    flat1 = tf.reshape(h_conv3, [-1, shape[1] * shape[2] * shape[3]])
    # drop1
    drop1 = tf.nn.dropout(flat1, 0.2)
    # elu1
    elu1 = tf.nn.elu(drop1)
    # dense1
    dense1 = tf.layers.dense(elu1, 512)
    # drop2
    drop2 = tf.nn.dropout(dense1, 0.5)
    # elu2
    elu2 = tf.nn.elu(drop2)
    # dense2 (output)
    output = tf.layers.dense(elu2, 1)

    loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(ys, output))))
    train = tf.train.AdamOptimizer().minimize(loss)

    sess.run(tf.global_variables_initializer())

    for i in range(args.epoch):
        for j in range(args.epochsize / args.batch_size):
            batch_xs, batch_ys = next(gen(20, args.host, port=args.port))  # Batch size 200
            batch_xs = np.reshape(batch_xs, (-1, 153600))
            sess.run(train, feed_dict={xs: batch_xs, ys: batch_ys})
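One place where the two scripts silently disagree, worth checking when the hyperparameters look identical: Keras's Dropout(0.2) takes the fraction of units to drop, while TF1's tf.nn.dropout(0.2) takes keep_prob, the fraction to keep. A small NumPy sketch of inverted dropout under the keep_prob convention (the helper is mine, not either library's API):

```python
import numpy as np

def dropout(x, keep_prob, rng=np.random.default_rng(0)):
    """Inverted dropout: keep each unit with probability keep_prob,
    scaling survivors by 1/keep_prob so the expected activation is unchanged."""
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

x = np.ones(10)
# Keras Dropout(0.2) corresponds to keep_prob = 0.8 here; passing 0.2
# straight to tf.nn.dropout would instead keep only 20% of the units.
y = dropout(x, keep_prob=0.8)
```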

Generating a pb file in TensorFlow
I'm trying to generate a pb file using the method given in this tutorial:
http://cv-tricks.com/how-to/freeze-tensorflow-models/
import tensorflow as tf

saver = tf.train.import_meta_graph('/Users/pr/tensorflow/dogscatsmodel.meta', clear_devices=True)
graph = tf.get_default_graph()
input_graph_def = graph.as_graph_def()
sess = tf.Session()
saver.restore(sess, "./dogscatsmodel")
When I try to run this code I get this error:
DataLossError (see above for traceback): Unable to open table file ./dogscatsmodel: Data loss: file is too short to be an sstable: perhaps your file is in a different file format and you need to use a different restore operator?
When I googled this error, most answers recommended generating the meta file using the version 2 format. Is that the right approach?
TensorFlow version used: 1.3.0
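That DataLossError can appear when the prefix passed to saver.restore() doesn't point at a valid checkpoint: for V2-format checkpoints the prefix refers to companion files (prefix.index and, for a single shard, prefix.data-00000-of-00000) plus the .meta graph file; the bare prefix itself is not a file on disk. A small sketch, independent of TensorFlow, that checks for those companion files before restoring (the helper name is mine):

```python
import os

def missing_checkpoint_files(prefix):
    """Return the V2-checkpoint companion files missing for this prefix.

    saver.restore(sess, prefix) expects prefix.index and (for a
    single-shard checkpoint) prefix.data-00000-of-00000 to exist;
    the bare prefix is not itself a file.
    """
    expected = [prefix + ".index", prefix + ".data-00000-of-00000"]
    return [path for path in expected if not os.path.exists(path)]

missing = missing_checkpoint_files("./dogscatsmodel")
if missing:
    print("Missing checkpoint files:", missing)
```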