Python polynomial with degree and coefficients from user
I'm writing a program in Python and I need to create a polynomial with a degree (n) and coefficients (a, b, c) supplied by the user. I've created it, but I don't know how to use it like a function with an argument, e.g. polynomial(x) = some value. How can I solve this?
1 answer

You can specify a polynomial using the numpy package: https://docs.scipy.org/doc/numpy/reference/routines.polynomials.html.
As an alternative you can use sympy's poly function: http://docs.sympy.org/latest/modules/polys/reference.html to get a polynomial in symbolic form. To evaluate it for a given x, see http://docs.sympy.org/latest/modules/evalf.html
numpy.roots will find all the roots of a polynomial, given the coefficients: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.roots.html
(It might be useful to know, if you don't happen to have them from input, that you can obtain the coefficients from a sympy poly using the all_coeffs function: http://docs.sympy.org/0.7.1/modules/polys/reference.html#sympy.polys.polytools.Poly.all_coeffs)
If you want to implement it from first principles then I suggest looking at the reference in https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.roots.html.
NB: a degree-zero polynomial (although I'm not sure if you meant that) is a constant and has no root unless it equals zero.
Some hints when writing your program:
 Prompt for and read the user input: the polynomial order, then each coefficient. Store the coefficients in a list. Also prompt for the x value at which to evaluate the polynomial.
 If using sympy, construct your polynomial object from the list.
 If using sympy, evaluate your polynomial at the x value using evalf. If using numpy, call a function that takes the list and the x value and evaluates the polynomial with the numpy library.
 Then call numpy.roots with your list of coefficients.
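The hints above can be sketched as follows with numpy (names and prompts are illustrative; coefficients are stored highest power first, which is the order numpy.polyval and numpy.roots expect):

```python
import numpy as np

def read_poly():
    """Prompt for the degree n and the n+1 coefficients, highest power first."""
    n = int(input("Polynomial degree: "))
    return [float(input(f"Coefficient of x^{n - i}: ")) for i in range(n + 1)]

def polynomial(coeffs, x):
    """Evaluate the polynomial at x (numpy.polyval uses Horner's rule)."""
    return np.polyval(coeffs, x)

# Example with x^2 - 3x + 2:  polynomial([1, -3, 2], 3.0) gives 2.0,
# and np.roots([1, -3, 2]) returns the roots (2 and 1).
```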
See also questions close to this topic

Python Cassandra Driver: encoding issue during insertion
I'm developing a simple Python module that reads data from a tsv file and loads it into a Cassandra keyspace table.
I started by looking at the examples given by Datastax and everything seemed to be ok, so at that point I began to code.
The program reads the tsv file correctly and translates it into a list of rows, and I verified that every element of each row has the right type for its destination column. But when I try to insert a row into a table, the terminal says:
AttributeError: 'float' object has no attribute 'encode'
This is the code:
#Upload data to Cassandra DB (cassandra_df is a Pandas dataframe)
session.set_keyspace(data_ks)
cassandra_df_list = cassandra_df.values.tolist()
query = "INSERT INTO table_str (rowid,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,aa,ab,ac,ad,ae,af,ag,ah,ai,aj,ak,al,am,an,ao,ap,aq,ar,as,at,au,av,aw,ax,ay,az,ba,bb,bc,bd) VALUES (uuid(),?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)"
prepared = session.prepare(query)
for row in cassandra_df_list:
    prepared.bind(row)
    session.execute(prepared)
cluster.shutdown()
I made a lot of changes trying to solve the problem, but I got new issues, or the same error with 'int' instead of 'float'. I also read other questions here and tried using str(row) and repr(row) in prepared.bind(), but I got other errors.
I'm new to Python and I'm not able to find other solutions; what would you do?
Thanks in advance!
Edit: Sorry, I forgot to give details about the DB table. Here is the creation statement:
CREATE TABLE prova.table_str (
    rowid uuid PRIMARY KEY,
    a text, aa text, ab text, ac text, ad text, ae text, af text, ag text,
    ah text, ai text, aj double, ak double, al double, am text, an double,
    ao double, ap double, aq double, ar double, as double, at double,
    au double, av double, aw double, ax double, ay double, az double,
    b text, ba double, bb text, bc text, bd text, c text, d text, e int,
    f text, g text, h text, i text, j text, k double, l int, m text,
    n double, o int, p int, q text, r text, s text, t text, u text,
    v int, w text, x text, y text, z text
)
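Two things worth checking in the snippet above (assumptions on my part, not confirmed fixes): prepared.bind(row) returns a new bound statement rather than mutating prepared, so the loop should do session.execute(prepared, row); and the error suggests a float value is reaching a text column, which can happen when the DataFrame column order differs from the INSERT's column list. A minimal sketch of a coercion helper, with an illustrative subset of the schema:

```python
# Coerce each value to its CQL column's Python type before binding, so the
# driver never tries to encode a float as text. COLUMN_TYPES is illustrative,
# covering only three columns of the real schema.
COLUMN_TYPES = {"a": str, "e": int, "aj": float}

def coerce_row(columns, row, types=COLUMN_TYPES):
    """Convert row values to the types expected by the matching columns."""
    return [types.get(col, str)(val) for col, val in zip(columns, row)]

# hypothetical use:  session.execute(prepared, coerce_row(df_columns, row))
```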

How to calculate the rise of tilted vectors in 3D?
I have data which shows the coordinates of the start and end of vectors in 3D space, oriented around a 3-fold screw axis:
    x          y           z
0   38.522003   5.600998  129.203995  # start of v1
1   23.854996  66.576996  112.487000  # end of v1
2    4.417000  40.182999  121.309998  # start of v2
3   65.761993  27.550995  104.285004  # end of v2
4   50.272003  56.473999  112.857010  # ...
5   12.574997   6.202995   96.598007
6   45.192993   8.042999  105.147995
7   15.934998  63.490005   88.347992
8    3.613998  33.112991   97.102997
9   66.244003  35.949997   80.309006
10  44.052994  59.996002   89.057999
11  19.916000   2.125000   72.294998
12  51.201996  11.974998   81.044998
13   9.035995  58.367996   64.238998
14   4.529999  25.854996   72.759003
15  64.563004  44.283997   56.357998
16  37.153000  62.003998   65.026001
17  28.061996   0.000000   48.126995
for i in range(xyz_coords.shape[0]):
    if i == 0:
        ax.plot(xyz_coords['x'].loc[0:1], xyz_coords['y'].loc[0:1], xyz_coords['z'].loc[0:1])
    elif i % 2 == 0 and i != 0:
        ax.plot(xyz_coords['x'].loc[i:i+1], xyz_coords['y'].loc[i:i+1], xyz_coords['z'].loc[i:i+1])
I would like to calculate the rise and the angle between each vector's start and end positions in relation to the next vector's, and those values should be similar for each vn and vn+1. The problem is that the vector shifts are not parallel to the z axis; otherwise determining the rise would be a very simple task. Loading the data above as a pandas DataFrame, the distance between the starting points of v1 and v2 is:
d = np.sqrt((xyz_coords['x'][0] - xyz_coords['x'][2])**2 + (xyz_coords['y'][0] - xyz_coords['y'][2])**2 + (xyz_coords['z'][0] - xyz_coords['z'][2])**2)
You might also notice that the z difference is ~8.95, but as said before, the z difference is not the proper rise value because it is influenced by the tilt. If I knew how to determine the tilt angle, I would be able to calculate the rise just by using
z_prop = sin(tilt_angle) * d
Is there any easy way to correct for the tilt and get the proper z value?
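If the direction of the screw axis is known, or estimated (for example by averaging the start-to-start displacements over one turn), the rise is just the projection of each displacement onto that direction. A small sketch with illustrative names:

```python
import numpy as np

def rise_along_axis(p_start, p_next_start, axis):
    """Project the displacement between consecutive vector origins onto the
    (not necessarily unit-length) screw-axis direction; the component along
    the axis is the rise."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    d = np.asarray(p_next_start, dtype=float) - np.asarray(p_start, dtype=float)
    return float(d @ axis)
```

The rise equals |d| * cos(theta), where theta is the angle between the displacement and the axis, so no explicit tilt angle is needed.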
Feature Contribution of one sample in decision tree
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           n_classes=2, random_state=0, shuffle=False)

# Creating a dataFrame
df = pd.DataFrame({'Feature 1': X[:,0], 'Feature 2': X[:,1], 'Feature 3': X[:,2],
                   'Feature 4': X[:,3], 'Feature 5': X[:,4], 'Feature 6': X[:,5],
                   'Class': y})
y_train = df['Class']
X_train = df.drop('Class', axis=1)

dt = DecisionTreeClassifier(random_state=42)
dt.fit(X_train, y_train)

# Plot the top important features
imp_feat_rf = pd.Series(dt.feature_importances_, index=X_train.columns).sort_values(ascending=False)
print(imp_feat_rf)
This prints out the feature importances computed over the whole dataset.
How can I modify the code to print the feature contributions for a single sample using the decision tree?
Besides LIME and treeinterpreter, can the decision tree itself show us the feature contributions?
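It can, up to a point: sklearn exposes decision_path, and a treeinterpreter-style attribution credits each split's change in predicted probability to the feature split on. A sketch of that idea (my own construction, not a library API):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=6, n_informative=3,
                           n_classes=2, random_state=0, shuffle=False)
dt = DecisionTreeClassifier(random_state=42).fit(X, y)

def sample_contributions(tree, x):
    """Per-sample, per-feature contributions: along x's root-to-leaf path,
    credit each change in the class-1 probability to the feature split on."""
    t = tree.tree_
    path = tree.decision_path(x.reshape(1, -1)).indices   # node ids, root first
    probs = [t.value[n][0] / t.value[n][0].sum() for n in path]
    contrib = np.zeros(x.shape[0])
    for parent, p_par, p_child in zip(path, probs, probs[1:]):
        contrib[t.feature[parent]] += p_child[1] - p_par[1]
    return contrib

# The root probability (bias) plus the contributions telescopes to predict_proba
c = sample_contributions(dt, X[0])
```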

6th Degree Polynomial and Chebyshev minmax Matlab
I have to find the 6th-degree polynomial approximation of the function $f(x) = xe^x$. After that, using the Chebyshev minimax approach, I have to find the least-degree polynomial with error at most 0.01 on $[-1, 1]$.
I don't think I can use Taylor or Maclaurin series. Any ideas? I can only use Matlab.
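One common reading of the task, sketched here in Python as a prototype (the same steps translate to Matlab): expand f in a Chebyshev series, then truncate to the smallest degree whose dropped tail stays below the tolerance. The degree 20 and node count are illustrative choices, not part of the assignment:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebfit, chebval

f = lambda x: x * np.exp(x)

# Interpolate at Chebyshev nodes to get a near-best high-degree approximation
nodes = np.cos(np.pi * (np.arange(21) + 0.5) / 21)
c = chebfit(nodes, f(nodes), 20)

# Since |T_k(x)| <= 1 on [-1, 1], the summed magnitudes of dropped coefficients
# bound the extra error; keep the smallest degree whose tail is below 0.01.
deg = next(k for k in range(len(c)) if np.abs(c[k + 1:]).sum() < 0.01)
```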

How can i do integration of a Gaussian function with combination of Chebyshev polynomials?
Mathematica code:
sigma = 1.0;
(1./(Sqrt[2*pi]*sigma))*Integrate[Exp[-(x^2)/(2*sigma^2)]*Cos[n*ArcCos[x]], {x, -inf, inf}]
Unfortunately, Mathematica (version 11) is unable to integrate it. Thank you.
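If a numeric value is acceptable, the integral is easy to evaluate with scipy. Note that Cos[n*ArcCos[x]] is the Chebyshev polynomial T_n(x) and is real only on [-1, 1], so I restrict the limits there (my assumption; the infinite limits in the original are likely why Mathematica gives up):

```python
import numpy as np
from scipy.integrate import quad

sigma = 1.0

def gauss_cheb_integral(n):
    """Integrate the normal density times T_n(x) = cos(n*arccos(x)) over [-1, 1]."""
    integrand = lambda x: (np.exp(-x**2 / (2 * sigma**2))
                           / (np.sqrt(2 * np.pi) * sigma)) * np.cos(n * np.arccos(x))
    value, _err = quad(integrand, -1.0, 1.0)
    return value
```

For n = 0 this is just the Gaussian probability mass on [-1, 1] (about 0.6827), and odd n give 0 by symmetry, which is a handy sanity check.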

Efficient (dense) multivariate polynomial multiplication
Suppose we have two polynomials in many variables, which need not both contain the same variables. But for the variables a polynomial does contain, it contains all monomials (up to some degree).
Example up to degree 2
Polynomial A: 1+x+y+x^2+xy+y^2
Polynomial B: 1+e+f+e^2+ef+f^2
Both A and B are of degree 2, and they are both dense, meaning they contain all monomials of their respective variables.
Note that in the above, the sets of variables A_s=(x,y) and B_s=(e,f) had an empty intersection, but that need not be the case.
Does anybody know of an efficient algorithm for computing their product? As both polynomials are dense, there must be an algorithm that takes advantage of this fact.
I have an algorithm that is efficient if the above sets of variables are equal, A_s == B_s. The trivial solution of simply expanding the sets, giving zero coefficients, e.g.
A: 1+x+y+0e+0f+...
is not a valid one, as one would constantly need to construct new arrays (polynomial representations).
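For dense representations, a standard trick is to store a polynomial as a coefficient tensor with one axis per variable (axis i holding the degrees of variable i). Both cases then reduce to ordinary array operations; a sketch of that framing (mine, not from the question), using scipy:

```python
import numpy as np
from scipy.signal import fftconvolve

def mul_same_vars(a, b):
    """Product when both polynomials share the same variable set:
    an n-dimensional convolution of the two coefficient tensors."""
    return fftconvolve(a, b)

def mul_disjoint_vars(a, b):
    """Product when the variable sets are disjoint: every monomial of A pairs
    with every monomial of B, i.e. a tensor (outer) product, no zero padding."""
    return np.tensordot(a, b, axes=0)

# e.g. 1 + x + y + x^2 + xy + y^2 as a 2-D tensor:
# a[i, j] is the coefficient of x^i * y^j
```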

Rate of Convergence
Suppose {x_n} (n = 0, 1, 2, ...) is a sequence that converges to p, with x_n ≠ p for all n. If finite positive constants λ and α exist with

lim_{n→∞} |x_{n+1} − p| / |x_n − p|^α = λ,

then {x_n} converges to p of order α. For each of the following sequences, give the order of convergence α:

1) x_n = e^{−3n} → 0

2) x_n = e^{−3^n} → 0
I know that the answer for the first one is α = 1 and for the second one α = 3. But that is just the answer; I want to know why. I've been reading up on this but cannot seem to find an explanation. Any help is appreciated, thank you!
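For reference, a sketch of the computation the definition asks for (assuming, as the stated answers suggest, that the second sequence is x_n = e^{−3^n} rather than e^{−3n}):

```latex
% Sequence 1: x_n = e^{-3n}, p = 0
\frac{|x_{n+1}|}{|x_n|^{\alpha}} = \frac{e^{-3(n+1)}}{e^{-3n\alpha}}
  = e^{-3}\, e^{-3n(1-\alpha)},
% which has a finite nonzero limit only for \alpha = 1, with
% \lambda = e^{-3} (linear convergence).

% Sequence 2: x_n = e^{-3^n}, p = 0
\frac{|x_{n+1}|}{|x_n|^{\alpha}} = \frac{e^{-3^{n+1}}}{e^{-\alpha\, 3^n}}
  = e^{-(3-\alpha)\,3^n},
% which has a finite nonzero limit only for \alpha = 3, with
% \lambda = 1 (cubic convergence).
```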

Practice Midterm (Design an algorithm)
So my professor posted a practice midterm and has yet to post solutions. I'm stuck on this problem.
Design an algorithm to evaluate f(x) = (e^x − 1 − x)/x^2 in IEEE double-precision arithmetic, to 12-digit accuracy, for all machine numbers |x| ≤ 1.
I know it's floating-point arithmetic and I think I should use Taylor's theorem, but I'm not completely sure; I've been stuck for the past hour. I just need a jumpstart on this problem.
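A sketch of the standard approach (my own; the threshold 0.1 and term count are illustrative, not tuned error bounds): the direct formula cancels catastrophically near 0, so switch to the Taylor series (e^x − 1 − x)/x^2 = 1/2 + x/3! + x^2/4! + ... for small |x|:

```python
import math

def f(x):
    """Evaluate (e^x - 1 - x) / x^2 without catastrophic cancellation near 0.

    For small |x| the direct formula loses most significant digits, so we use
    the Taylor series  sum_{k>=2} x^(k-2)/k! = 1/2 + x/6 + x^2/24 + ...
    """
    if x == 0.0:
        return 0.5                      # limit as x -> 0
    if abs(x) < 0.1:
        term, total = 0.5, 0.5          # term k=2 is 1/2! = 0.5
        for k in range(3, 20):          # term_k = x^(k-2)/k! = term_{k-1} * x/k
            term *= x / k
            total += term
        return total
    return (math.exp(x) - 1.0 - x) / (x * x)
```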

How to avoid floating point exception when using pow?
I use a C library which uses the pow function on two double values: double a = pow(b, c). At the moment I have b = 0.62 and c = 1504, which means that a should be nearly 0 (3.6e-312). But I get a floating point exception. How can I avoid it and directly return 0? Can we anticipate this case?
I use Debian 9, 64-bit, and I compile with gcc 6.3. The library is c-cmaes and here is the problematic line:
https://github.com/CMA-ES/c-cmaes/blob/eda8268ee4c8c9fbe4d2489555ae08f8a8c949b5/src/cmaes.c#L893
I have used gdb, so I know the floating point exception does not come from the division (t->chiN = 2.74).
If I try to reproduce it with the values present when the FPE occurs, I have no problem (compilation options: -fopenmp -O3 -DNDEBUG -fPIC -Wall -Wextra -Wno-long-long -Wconversion -o, like the library):
#include <math.h>
#include <stdio.h>

int main() {
    double psxps = 5.6107247793270769;
    double cs = 0.37564049253818982;
    int gen = 752;
    double chiN = 2.7421432615656891;
    int N = 8;
    double foo = sqrt(psxps) / sqrt(1. - pow(1. - cs, 2 * gen)) / chiN < 1.4 + 2. / (N + 1);
    printf("%lf\n", foo);
}
Result: 1.00000000000

Computing hazard ratios and confidence intervals using a polynomial (coxph)
I've developed a model using Cox regression (coxph from the survival package) with my predictor x as a polynomial, adjusted for several factors (age, sex, smoking and non-HDL cholesterol). This is a (fake) sample that gives a similar result to my real dataset:
d <- data.frame(
  c(1.21,1.3,1.33,1.34,1.6,1.8,2.0,2.2,2.4,2.8,2.87,2.9,2.95,3.0,3.2,3.25,3.3,3.4,3.6,3.7,3.87,3.94,4.02,4.35,4.49,4.78,4.89),
  c(67,64,62,73,75,75,72,84,72,75,86,83,73,86,82,73,72,85,84,81,80,75,78,87,69,70,72),
  c(0,0,1,0,1,1,0,0,0,1,1,0,1,0,1,1,0,1,0,0,1,0,1,0,0,0,1),
  c(0,0,0,1,0,1,1,0,0,1,1,0,1,0,0,1,0,1,1,0,1,0,0,0,1,0,0),
  c(3.52,3.44,3.99,3.82,3.33,3.86,3.87,3.34,2.68,4.01,3.46,3.31,3.13,3.86,2.96,3.58,3.55,2.54,3.27,3.66,3.72,2.79,3.67,3.79,3.31,2.60,4.28),
  c(1,1,1,1,1,1,0,0,1,0,0,0,1,0,1,0,0,1,1,0,1,1,1,0,1,1,0),
  c(0.9,1.1,0.9,1.2,1.3,0.8,2.2,1.8,1.3,1.2,2.5,2.2,2.6,1.9,2,1.8,1.7,1.7,1.6,2,1.5,1.4,1,1.5,0.8,1.1,1.3))
colnames(d) <- c("x","age","sex","smoking","nonhdl","event","followup")
cox <- coxph(Surv(followup, event == 1) ~ pol(x, 2) + age + sex + smoking + nonhdl, data = d)
summary(cox)
This is the result:
Call:
coxph(formula = Surv(followup, event == 1) ~ pol(x, 2) + age +
    sex + smoking + nonhdl, data = d)

  n= 27, number of events= 16

                  coef exp(coef)  se(coef)      z Pr(>|z|)
pol(x, 2)x   -6.886939  0.001021  2.401935 -2.867  0.00414 **
pol(x, 2)x^2  1.094643  2.988115  0.384752  2.845  0.00444 **
age          -0.053374  0.948025  0.059271 -0.901  0.36785
sex           0.548018  1.729820  0.592089  0.926  0.35467
smoking       0.450827  1.569609  0.653909  0.689  0.49055
nonhdl       -1.605715  0.200746  0.633126 -2.536  0.01121 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

             exp(coef) exp(-coef) lower .95 upper .95
pol(x, 2)x    0.001021   979.3985 9.215e-06    0.1131
pol(x, 2)x^2  2.988115     0.3347 1.406e+00    6.3518
age           0.948025     1.0548 8.441e-01    1.0648
sex           1.729820     0.5781 5.420e-01    5.5206
smoking       1.569609     0.6371 4.357e-01    5.6546
nonhdl        0.200746     4.9814 5.804e-02    0.6943

Concordance= 0.832  (se = 0.085 )
Rsquare= 0.528   (max possible= 0.959 )
Likelihood ratio test= 20.25  on 6 df,   p=0.002496
Wald test            = 14.46  on 6 df,   p=0.02493
Score (logrank) test = 21.75  on 6 df,   p=0.001343
Now my problem is that the results for such a polynomial are difficult to interpret. You cannot simply give a hazard ratio for each unit increase of x as you could with a linear term.
I would like to give the hazard ratio for x=1 vs x=2.5 and for x=4 vs x=2.5, keeping all other variables constant, and give the confidence intervals around these estimates, because that is much easier to interpret (and to explain to clinicians in the medical field). How would I go about computing these values? Thank you so much in advance! I hope this is enough information to answer the question; otherwise let me know.
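The point estimates only need the two x coefficients: with a quadratic log-hazard term b1*x + b2*x^2, HR(x=a vs x=b) = exp(b1*(a - b) + b2*(a^2 - b^2)). A quick arithmetic check of that formula (shown in Python; the confidence intervals additionally need the covariance matrix of the two coefficients, e.g. via the delta method):

```python
import math

# Coefficients read off the coxph output above
b1, b2 = -6.886939, 1.094643

def hazard_ratio(a, b):
    """HR comparing x = a to x = b with other covariates held constant,
    under the quadratic log-hazard term b1*x + b2*x^2."""
    return math.exp(b1 * (a - b) + b2 * (a**2 - b**2))

print(hazard_ratio(1.0, 2.5), hazard_ratio(4.0, 2.5))
```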

How to encode Ostrowski's method for systems of polynomial equations in Python?
How do I implement Ostrowski's method, and the Ostrowski homotopy continuation method, for systems of polynomial equations in Python?
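Ostrowski's method usually refers to the two-point, fourth-order root-finding iteration sketched below for the scalar case (whether this is the intended variant is my assumption). For systems, the divisions by f'(x) become linear solves with the Jacobian, and the homotopy part tracks the roots of H(x, t) = t*f(x) + (1 - t)*g(x) as t moves from 0 to 1:

```python
def ostrowski_step(f, df, x):
    """One Ostrowski iteration: a Newton predictor followed by a corrector
    that reuses f'(x), giving fourth-order convergence for simple roots."""
    fx = f(x)
    y = x - fx / df(x)                           # Newton step
    fy = f(y)
    return y - (fy / df(x)) * fx / (fx - 2.0 * fy)

# Finding sqrt(2) as the root of x^2 - 2:
x = 1.5
for _ in range(4):
    x = ostrowski_step(lambda t: t * t - 2.0, lambda t: 2.0 * t, x)
```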

Sage: Polynomial ring over a finite field, inverting a polynomial with a non-prime modulus
I'm trying to recreate the wiki's example procedure, available here:
https://en.wikipedia.org/wiki/NTRUEncrypt
I've run into an issue while attempting to invert the polynomials.
The SAGE code below seems to work fine for the given p=3, which is a prime number.
However, the representation of the polynomial in the ring generated by q=32 ends up wrong, because it behaves as if the modulus were 2.
Here's the code in play:
F = PolynomialRing(GF(32), 'a')
a = F.gen()
Ring = F.quotient(a^11 - 1, 'x')
x = Ring.gen()
pollist = [-1, 1, 1, 0, -1, 0, 1, 0, 0, 1, -1]
fq = Ring(pollist)
print(fq)
print(fq^(-1))
The Ring is described as follows:
Univariate Quotient Polynomial Ring in x over Finite Field in z5 of size 2^5 with modulus a^11 + 1
And the result:
x^10 + x^9 + x^6 + x^4 + x^2 + x + 1
x^5 + x + 1
I've tried to replace the Finite Field with IntegerModRing(32), but the inversion ends up demanding a field, as implied by the message:
NotImplementedError: The base ring (=Ring of integers modulo 32) is not a field
Any suggestions as to how I could obtain the correct inverse of f (mod q) would be greatly appreciated.
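One way around it (the standard NTRU trick, though not confirmed by this thread) is to invert f mod 2 first (which your GF(32) computation effectively did, giving 1 + x + x^5) and then Hensel-lift that inverse to mod 32 with g <- g*(2 - f*g). A self-contained sketch in plain Python; an IntegerModRing version in Sage would be analogous:

```python
# Assumed approach: NTRU-style inversion mod Q = 2^k by lifting the mod-2 inverse.
N, Q = 11, 32

def mul(u, v):
    """Multiply two length-N coefficient lists in Z_Q[x]/(x^N - 1)."""
    w = [0] * N
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            w[(i + j) % N] = (w[(i + j) % N] + ui * vj) % Q
    return w

def lift(f, g):
    """Lift g = f^{-1} (mod 2) to f^{-1} (mod Q), for Q a power of two."""
    one = [1] + [0] * (N - 1)
    for _ in range(6):                               # precision doubles each pass
        r = mul(f, g)
        if r == one:
            return g
        t = [(2 - c) % Q if k == 0 else (-c) % Q for k, c in enumerate(r)]
        g = mul(g, t)                                # g <- g * (2 - f*g)
    raise ValueError("g is not an inverse of f modulo 2")

# Wikipedia's f = -1 + x + x^2 - x^4 + x^6 + x^9 - x^10, and its mod-2 inverse
f = [c % Q for c in [-1, 1, 1, 0, -1, 0, 1, 0, 0, 1, -1]]
fq = lift(f, [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0])      # 1 + x + x^5 from the Sage run
```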