Conditional probability table to JSON
I am looking for an elegant way to turn conditional probability tables (CPTs) into JSON format.
e.g. turn
X Y | z    ¬z
1 1 | 0.4  0.6
1 0 | 0.7  0.3
0 1 | 0.1  0.9
0 0 | 1.0  0.0
into
[
  [
    [
      0.4, // P(z|x,y)
      0.6  // P(¬z|x,y)
    ],
    [
      0.7, // P(z|x,¬y)
      0.3  // P(¬z|x,¬y)
    ]
  ],
  [
    [
      0.1, // P(z|¬x,y)
      0.9  // P(¬z|¬x,y)
    ],
    [
      1, // P(z|¬x,¬y)
      0  // P(¬z|¬x,¬y)
    ]
  ]
]
As this looks like a common problem to me, I am hoping someone can set me up with some example code.
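Since the question doesn't name a language, here is a minimal Python sketch of one way to do it; the dictionary keyed by (x, y) assignments is an assumed intermediate representation, and note that plain JSON cannot carry the // comments shown above:

```python
import json

# Hypothetical representation of the table: each (x, y) assignment
# maps to the row [P(z|x,y), P(not z|x,y)].
cpt = {
    (1, 1): [0.4, 0.6],
    (1, 0): [0.7, 0.3],
    (0, 1): [0.1, 0.9],
    (0, 0): [1.0, 0.0],
}

# Nest by x first, then y, with the "true" assignment before
# "false", matching the desired output above.
nested = [[cpt[(x, y)] for y in (1, 0)] for x in (1, 0)]

print(json.dumps(nested, indent=2))
```

The same comprehension generalizes to more parents by adding one nesting level per variable.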
See also questions close to this topic

Pretty Print a JSON string in Rust
I have a String
let str = String::from_utf8(data.to_vec()).unwrap();
How do I pretty-print it with newlines and tabs as JSON?
Essentially I want to do the Rust equivalent of the JavaScript
JSON.stringify(JSON.parse(my_string), null, 4);
This is different from the existing question because I want to parse and then prettify an existing String of JSON, rather than an existing struct.

Android: cz.msebera.android.httpclient.entity.ByteArrayEntity required: org.apache.http.HttpEntity
I am using loopj AsyncHttpClient to call web services. I am trying to register a user, so I need to send JSON data to the web service.

ByteArrayEntity entity = new ByteArrayEntity(json.toString().getBytes("UTF-8"));
entity.setContentEncoding(new BasicHeader(HTTP.CONTENT_TYPE, "application/json"));
client.post(getApplicationContext(), "http://10.0.3.2:8080/WebService/rest/user/insert", entity, new JsonHttpResponseHandler(){
When I put the cursor on the entity argument in the client.post line, it gives this error:

cz.msebera.android.httpclient.entity.ByteArrayEntity required: org.apache.http.HttpEntity
The example that I am trying is also from Stack Overflow: Send JSON as a POST request to server by AsyncHttpClient.
Libraries that I am using:

compile files('libs/android-async-http-1.4.4.jar')
compile 'cz.msebera.android:httpclient:4.3.6'
Can anybody help me? Thanks in advance.

Couchbase Lite 2 + JsonConvert
The following code sample writes a simple object to a Couchbase Lite (version 2) database and reads all objects back afterwards. This is what you can find in the official documentation here.

This is quite a lot of manual typing, since every property of every object must be transferred to the MutableDocument:

class Program
{
    static void Main(string[] args)
    {
        Couchbase.Lite.Support.NetDesktop.Activate();
        const string DbName = "MyDb";
        var db = new Database(DbName);
        var item = new Item { Name = "test", Value = 5 };

        // Serialization HERE
        var doc = new MutableDocument();
        doc.SetString("Name", item.Name);
        doc.SetInt("Value", item.Value);
        db.Save(doc);

        using (var qry = QueryBuilder.Select(SelectResult.All())
                                     .From(DataSource.Database(db)))
        {
            foreach (var result in qry.Execute())
            {
                var resultItem = new Item
                {
                    // Deserialization HERE
                    Name = result[DbName].Dictionary.GetString("Name"),
                    Value = result[DbName].Dictionary.GetInt("Value")
                };
                Console.WriteLine(resultItem.Name);
            }
        }
        Console.ReadKey();
    }

    class Item
    {
        public string Name { get; set; }
        public int Value { get; set; }
    }
}
From my research, Couchbase Lite uses JsonConvert internally, so there might be a way to simplify all that with the help of JsonConvert. Anything like:

var json = JsonConvert.SerializeObject(item);
var doc = new MutableDocument(json); // No overload to provide raw JSON

or maybe:

var data = JsonConvert.SerializeToDict(item); // JsonConvert does not provide this
var doc = new MutableDocument(data);
Does anything like this exist, or is the manual approach intended, perhaps as some kind of optimization?

What is returned by the call wacky(4, 6)? Also, how do I trace this recursion?
public static int wacky(int x, int y) {
    if (x <= 1) {
        return y;
    } else {
        return wacky(x - 1, y - 1) + y;
    }
}
I had a test a while ago, but I still don't know how to work through a recursion step by step. I remember this question from memory; on the test I guessed, thinking that maybe you calculate it like 4 + 6 + 1 + 4 + 4 - 1 + 6 - 1 + 6, but that was not one of the answer choices, so I realize I was doing something wrong. My teacher doesn't help me and doesn't seem to care; I've tried to get help, but he doesn't know how to explain this.
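Assuming the recursive call in the Java method is wacky(x - 1, y - 1) + y, the call can be unwound by hand; here is a Python translation with the trace written out as comments:

```python
def wacky(x, y):
    # Python translation of the Java method above.
    if x <= 1:
        return y
    return wacky(x - 1, y - 1) + y

# Unwinding the calls:
#   wacky(4, 6) = wacky(3, 5) + 6
#   wacky(3, 5) = wacky(2, 4) + 5
#   wacky(2, 4) = wacky(1, 3) + 4
#   wacky(1, 3) = 3              <- base case, x <= 1
# Summing back up the chain: 3 + 4 + 5 + 6 = 18
print(wacky(4, 6))  # 18
```

The pattern: each level subtracts 1 from both arguments and adds the current y on the way back up, until x reaches the base case.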

Handle self-references when flattening a dictionary
Given some arbitrary dictionary
mydict = { 'first': { 'second': { 'third': { 'fourth': 'the end' } } } }
I've written a small routine to flatten it in the process of writing an answer to another question.
def recursive_flatten(mydict):
    d = {}
    for k, v in mydict.items():
        if isinstance(v, dict):
            for k2, v2 in recursive_flatten(v).items():
                d[k + '.' + k2] = v2
        else:
            d[k] = v
    return d
It works, giving me what I want:
new_dict = recursive_flatten(mydict)
print(new_dict)
# {'first.second.third.fourth': 'the end'}
And it should work for just about any arbitrarily structured dictionary. Unfortunately, it does not:
mydict['new_key'] = mydict
Now
recursive_flatten(mydict)
will run until I run out of stack space. I'm trying to figure out how to gracefully handle self-references (basically, ignore or remove them). To complicate matters, self-references may occur for any sub-dictionary, not just at the top level. How would I handle self-references elegantly? I can think of a mutable default argument, but there should be a better way... right? Pointers appreciated, thanks for reading. I welcome any other suggestions/improvements to
recursive_flatten
if you have them.

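One possible sketch of the idea: track the id()s of the dicts on the current recursion path and skip any value that points back to an ancestor. The _path_ids parameter is an assumed internal helper, not part of the original routine:

```python
def recursive_flatten(mydict, _path_ids=None):
    # _path_ids (an assumed internal parameter) holds the id()s of the
    # dicts on the current recursion path; a value whose id is already
    # on the path is a self-reference and gets skipped.
    if _path_ids is None:
        _path_ids = set()
    _path_ids.add(id(mydict))
    d = {}
    for k, v in mydict.items():
        if isinstance(v, dict):
            if id(v) in _path_ids:
                continue  # ignore the self-reference
            for k2, v2 in recursive_flatten(v, _path_ids).items():
                d[k + '.' + k2] = v2
        else:
            d[k] = v
    _path_ids.discard(id(mydict))
    return d

mydict = {'first': {'second': {'third': {'fourth': 'the end'}}}}
mydict['new_key'] = mydict  # the problematic self-reference
print(recursive_flatten(mydict))  # {'first.second.third.fourth': 'the end'}
```

Removing the id after the recursive call keeps this cycle-specific: the same dict may still legitimately appear under two different siblings.
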
Efficiently accessing a multi-level dictionary with dot notation
Suppose I have a multilevel dictionary like this
mydict = { 'first': { 'second': { 'third': { 'fourth': 'the end' } } } }
I'd like to access it like this
test = get_entry(mydict, 'first.second.third.fourth')
What I have so far is
def get_entry(d, keyspec):
    # note: index into result, not the outer dict, at each step
    keys = keyspec.split('.')
    result = d[keys[0]]
    for key in keys[1:]:
        result = result[key]
    return result
Are there more efficient ways to do it? According to %timeit, the runtime of the function is 1.26 µs, while accessing the dictionary the standard way, like this:
foo = mydict['first']['second']['third']['fourth']
takes 541 ns. I'm looking for ways to trim it to the 800 ns range if possible.
Thanks
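For comparison, the same walk can be written with functools.reduce; this is a sketch, and it is unlikely to beat plain chained indexing on speed, since the split and the function-call overhead remain:

```python
from functools import reduce
from operator import getitem

def get_entry(d, keyspec):
    # reduce(getitem, ['first', 'second', ...], d) performs the same
    # chained indexing as d['first']['second']...
    return reduce(getitem, keyspec.split('.'), d)

mydict = {'first': {'second': {'third': {'fourth': 'the end'}}}}
print(get_entry(mydict, 'first.second.third.fourth'))  # the end
```

If the same keyspec is looked up repeatedly, splitting it once and reusing the key list saves most of the per-call cost.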

Performing a function on my data query using matrix and vectors
I am creating a function called pagerank that takes as input the set edges, a teleport probability
a
and a positive integeriters
and computes the transition probability matrix.the edges being defined as
edges =[[0,1], [1,1], [2,0], [2,2], [2,3], [3,3], [3,4], [4,6], [5,5], [6,6], [6,3]]
It then starts from an arbitrary probability vector (say one full of 1/Ns where N is the number of all states) and then multiplies this vector with the transition probability matrix iters times.
The function should return the resulting vector. Does anyone know how I can create a function to handle the multiplication of the matrix with the vector?
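A possible sketch in Python/NumPy. The row-stochastic convention and the exact way the teleport probability a is mixed in are assumptions, since the question doesn't pin them down:

```python
import numpy as np

def pagerank(edges, a, iters):
    # Assumptions: rows of the matrix are normalized to sum to 1,
    # teleportation mixes in a uniform jump with probability a,
    # and the start vector is uniform (1/N everywhere).
    n = 1 + max(max(i, j) for i, j in edges)
    m = np.zeros((n, n))
    for i, j in edges:
        m[i, j] = 1.0
    m /= m.sum(axis=1, keepdims=True)  # row-stochastic (every node here has an out-edge)
    m = (1 - a) * m + a / n            # teleport with probability a
    v = np.full(n, 1.0 / n)            # arbitrary start: uniform vector
    for _ in range(iters):
        v = v @ m                      # one vector-matrix multiplication per iteration
    return v

edges = [[0, 1], [1, 1], [2, 0], [2, 2], [2, 3], [3, 3], [3, 4],
         [4, 6], [5, 5], [6, 6], [6, 3]]
print(pagerank(edges, 0.15, 100))
```

The result stays a probability vector because every row of the mixed matrix sums to (1 - a) + a = 1. Note the row normalization would divide by zero for a node with no outgoing edges; this edge list has none.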

Draw small sample from large set with discrete distribution efficiently
I have two lists, both the same size, let's call them
elements
andweights
. I want to choose one element of theelements
list with discrete probability distribution given byweights
.weight[i]
corresponds to the probability of choosingelements[i]
.elements
never changes, but after every sample taken,weights
changes (only the values, not the size).I need an efficient way to do this with large lists.
I have an implementation in Python with

numpy.random.choice(elements, p=weights)

but taking a sample of size k from a set of size n where k << n is extremely inefficient. An implementation in any language is welcome, but I am working primarily in Python.

(This is used in a social network simulation with networkx. I have a weighted graph and a node i, and I want to choose a node j from i's neighbors, where the probability for each node is proportional to the weight of the edge between i and the given node. If I set the probability to 0 for non-neighbors, I don't have to generate the list of neighbors every time; I just need a list of all nodes.)

It will be used like this:
elements = [...]
weights = [...]
for ...:
    element = sample(elements, weights)
    # some calculation with element, changing the values of weights
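One simple sketch of such a sample function in plain Python: build prefix sums of the weights and binary-search them with bisect. Each draw is still O(n) because of the prefix-sum rebuild, so this only improves constants over numpy.random.choice; for true k << n efficiency with changing weights, a Fenwick tree over the weights would give O(log n) updates and draws.

```python
import bisect
import itertools
import random

def sample(elements, weights):
    # One draw: prefix sums + binary search.  Rebuilding the prefix
    # sums is O(n) per call, which is unavoidable here since the
    # weights may have changed since the last draw.
    cumulative = list(itertools.accumulate(weights))
    r = random.random() * cumulative[-1]
    return elements[bisect.bisect_right(cumulative, r)]

elements = ['a', 'b', 'c']
weights = [0.2, 0.5, 0.3]
print(sample(elements, weights))
```

bisect_right finds the first prefix sum strictly greater than r, so each element is chosen with probability proportional to its weight, and zero-weight elements are (up to floating-point ties) never chosen.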

Independence of random variables
Let U = the number of trials needed to get the first head and V = the number of trials needed to get two heads in repeated tosses of a fair coin. Are U and V independent random variables?
I would say they are dependent, since u = the number of trials before the first head appears and v = the number of trials to get the second head after the event u has occurred.
Please help me understand this better.

Laplace smoothing for Bayesian networks in bnlearn
I'm trying to work with Bayesian networks using R, and currently I am using the bnlearn framework. I'm trying to use score-based structural learning from data and to try different algorithms and approaches.

I would like to know whether Laplace smoothing is implemented in bnlearn or not. I could not find any information about it in the documentation. Am I missing something? Does anyone know?
How to choose order of parameters in KL divergence?
Since KL divergence is not symmetric, how do I choose which distribution is q and which one is p in the formula KL(q||p)?