Using the Runge-Kutta integration method in a system
h = 0.005;
x = 0:h:40;
y = zeros(1, length(x));
y(1) = 0;
F_xy = ;
for i = 1:(length(x)-1)
    k_1 = F_xy(x(i), y(i));
    k_2 = F_xy(x(i)+0.5*h, y(i)+0.5*h*k_1);
    k_3 = F_xy(x(i)+0.5*h, y(i)+0.5*h*k_2);
    k_4 = F_xy(x(i)+h, y(i)+k_3*h);
    y(i+1) = y(i) + (1/6)*(k_1 + 2*k_2 + 2*k_3 + k_4)*h;
end
I have the following code, which I think is right. I know there are parts missing in F_xy; that is my follow-up question.
I have dx/dt = −x(2 − y) with t_0 = 0, x(t_0) = 1
and dy/dt = y(1 − 2x) with t_0 = 0, y(t_0) = 2.
My question is that I don't know how to get these equations into the code. All help is appreciated.
Is F_xy your derivative function?
If so, simply write it as a helper function or function handle. For example,
F_xy = @(t, u) [-u(1)*(2 - u(2)); u(2)*(1 - 2*u(1))];
Also note that your k_1, k_2, k_3, k_4 and y(i) are all two-dimensional. You need to re-size your y and rewrite the indices in your iterating steps accordingly.
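The re-sizing the answer describes can be sketched in Python/NumPy, keeping the state as a length-2 vector (the analogue of making y two-dimensional in Matlab; the initial conditions x(0) = 1, y(0) = 2 are the ones from the question):

```python
import numpy as np

def f(t, u):
    # right-hand side of the system, with u = [x, y]
    return np.array([-u[0] * (2.0 - u[1]),
                      u[1] * (1.0 - 2.0 * u[0])])

h = 0.005
t = np.arange(0.0, 40.0 + h, h)
u = np.zeros((len(t), 2))
u[0] = [1.0, 2.0]            # x(0) = 1, y(0) = 2

# classical fourth-order Runge-Kutta, vectorised over the 2D state
for i in range(len(t) - 1):
    k1 = f(t[i], u[i])
    k2 = f(t[i] + 0.5 * h, u[i] + 0.5 * h * k1)
    k3 = f(t[i] + 0.5 * h, u[i] + 0.5 * h * k2)
    k4 = f(t[i] + h, u[i] + h * k3)
    u[i + 1] = u[i] + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```

This predator-prey-like system conserves the quantity 2x − ln x + y − 2 ln y along trajectories, which gives a quick sanity check on the integration.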
See also questions close to this topic
Combining two ode45 function answers into one plot
I am trying to use Matlab to solve a problem which has two separate differential equations, and I want to plot the values they return over time in one giant plot.
For Example the first portion I want to do is:
ainitial = 0;
arange = [0 2];
[a, A] = ode45(@rkfunc, arange, ainitial);
Then I would like to start the next ode45 portion based on the last A value, so I try to set it as binitial.
binitial = A(end);
brange = [2 4];
[b, B] = ode45(@rkfunc, brange, binitial);
Then I would like to combine and plot the answers from [a,A] and [b,B] together into one giant plot, but I'm not sure how to go about doing that.
Any help would be appreciated.
Note: Edited to show binitial = A(end) instead of B(end), which fixes the numbers not overlapping between A(end) and the start of B.
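The general pattern — integrate the first interval, seed the second with the final value of the first, then concatenate before plotting — can be sketched in Python (a fixed-step RK4 stands in for ode45, and dy/dt = −y is a made-up stand-in for rkfunc):

```python
import numpy as np

def integrate(f, trange, y0, n=200):
    # fixed-step classical RK4, standing in for ode45
    t = np.linspace(trange[0], trange[1], n)
    y = np.zeros(n)
    y[0] = y0
    h = t[1] - t[0]
    for i in range(n - 1):
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, y[i] + h / 2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, y

f = lambda t, y: -y                  # stand-in for rkfunc
a, A = integrate(f, [0, 2], 1.0)     # first segment
b, B = integrate(f, [2, 4], A[-1])   # second segment, seeded with A(end)
t_all = np.concatenate([a, b])       # one combined curve for a single plot
y_all = np.concatenate([A, B])
```

In Matlab itself the combined plot should then simply be plot([a;b], [A;B]), since ode45 returns column vectors.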
How to generate a random DNA sequence in matlab
I am new to Matlab and I am trying to create a random DNA sequence generator. So far I have only found a way to generate A, C, G, T with equal probabilities. How do I assign a different probability to each letter? All probabilities must add up to 1.
total_bp = 10;
%open file
SeqLength = 100;
bases = repmat('ACGT', 1, SeqLength/4);
for i = 1:total_bp
    %random DNA sequence
    Seq = bases(randperm(SeqLength));
    display(num2str(Seq))
end
fclose('all');
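A sketch of a weighted generator in Python/NumPy — the probabilities (0.3, 0.2, 0.2, 0.3) are made-up example weights, which numpy.random.choice takes through its p argument:

```python
import numpy as np

def random_dna(length, probs=(0.3, 0.2, 0.2, 0.3)):
    """Random DNA string with P(A), P(C), P(G), P(T) given by probs."""
    assert abs(sum(probs) - 1.0) < 1e-12, "probabilities must add up to 1"
    bases = np.array(list('ACGT'))
    return ''.join(np.random.choice(bases, size=length, p=probs))

seq = random_dna(100)
```

In Matlab, the Statistics Toolbox function randsample should do the weighted sampling directly, e.g. randsample('ACGT', SeqLength, true, [.3 .2 .2 .3]).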
How to change gridline color
In the current version of Matlab (2017), the gridline property commands from earlier versions are not working. How do I set the gridline color (major and minor)? How do I change the line weight and line style?
set(gca, 'GridColor', [0.7 0.2 0.1])       % the axes property is GridColor, not GridLineColor
set(gca, 'MinorGridColor', [0.7 0.2 0.1])  % minor grid
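For anyone after the matplotlib analogue rather than Matlab (not part of the original answer; the color values are just examples), the equivalent properties go through Axes.grid:

```python
import matplotlib
matplotlib.use('Agg')                # render off-screen; no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.grid(color=(0.7, 0.2, 0.1), linestyle='--', linewidth=1.5)   # major grid
ax.minorticks_on()
ax.grid(which='minor', color=(0.8, 0.8, 0.8), linewidth=0.5)    # minor grid
```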
Integrating pseudo code to create Iterative Refinement Function Using PYTHON
I am trying to create an iterative refinement method to solve linear systems of equations using Python, based on the book's pseudocode. EDIT: However, my code always reports "unsuccessful" regardless of the matrix. Also, how should I define t? The book says "t-digit arithmetic", but I am unsure how to define t based on the decimal places of the matrices passed in.
def Iterative_Ref(A, b, tolerance, N, t):
    # declarations
    n = len(A)
    xx0 = np.empty_like(b)
    r = np.empty_like(b)
    t = .0001
    x = np.linalg.solve(A, b)                       # step 0
    k = 1.0                                         # step 1
    while (k <= N):                                 # step 2
        for i in range(0, n):                       # step 3
            r = b[i] - np.dot(A, b)
        y = np.linalg.solve(A, r)                   # step 4
        for i in range(0, n):                       # step 5
            xx0[i] = y[i] + x[i]
        if (k == 1.0):                              # step 6
            cond = (np.linalg.norm(y)/np.linalg.norm(xx0))*10**t
        if ((np.linalg.norm(x-xx0)) < tolerance):   # step 7
            print(xx0)
            print(cond)
            print("The procedure was successful")
        k = k + 1.0                                 # step 8
        for i in range(0, n):                       # step 9
            x[i] = xx0[i]
    print("Maximum number of iterations exceeded")  # step 10
    print(cond)
    print("Procedure was unsuccessful")
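For comparison, a minimal working sketch of the refinement loop (not the book's exact pseudocode): the residual must be computed from the current iterate x, whereas the code above computes r = b[i] - np.dot(A, b), which never involves x at all — that is the most likely reason the tolerance test in step 7 never passes.

```python
import numpy as np

def iterative_refinement(A, b, tol=1e-12, max_iter=10):
    """Solve Ax = b, then repeatedly correct x via the residual equation."""
    x = np.linalg.solve(A, b)        # initial solve
    for _ in range(max_iter):
        r = b - A @ x                # residual of the CURRENT iterate
        y = np.linalg.solve(A, r)    # correction from A y = r
        x = x + y
        if np.linalg.norm(y) < tol * np.linalg.norm(x):
            return x, True           # converged
    return x, False                  # ran out of iterations

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x, ok = iterative_refinement(A, b)
```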
Python/Numpy numerical precision issues on Windows system
We are developing an algorithm using Python 3.6 and Numpy (1.14.2) to solve a numerical PDE (partial differential equation) problem. What surprises us is the same code gets different results on Mac and Windows.
It is an iterative algorithm. On Mac OS 10.13.4 (High Sierra), 5000 iterations reach a mean squared error of 10^-11, while on Windows 10 we get a mean squared error of 0.003 (both the Intel and Anaconda distributions were tested). Does anyone know of anything in Python/Numpy that could cause such a numerical precision problem on Windows?
We found that the likely cause of our problem is the system's scheduling of processes. I'm putting it here in case someone doing asynchronous computing runs into the same problem.
On Windows, the processes (threads) do not seem to be initialized all at once. We recorded which process makes each update in the iteration and plotted it in the following chart. The timeline is wrapped because it is too long; it runs from bottom to top and left to right, with a different color for each process. We can see that at the beginning only the "blue" process is running, and then the other processes "gradually" start to participate in the iteration (orange, green, purple, ...).
This has a significant impact on parallel iterative algorithms, where each process works on its own part based on updates provided by others. On Windows, at the beginning only one or two processes start working, and they receive no updates from the others (because the others haven't even started), so all these early iterations are very inaccurate. By the time the remaining processes start, the early processes have already stopped working due to their iteration limit, which further causes unbalanced evaluation.
We solved the problem by simply asking the processes to wait for all other to start. After that, no synchronization is needed and Windows gets the same high accuracy as Mac.
On Windows, processes start one by one, with huge delays...
While on Mac, all four processes we used start almost at the same time. Some still run faster than others, but that does not matter much because the iterations already converge.
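The fix described above can be sketched with a barrier; threading.Barrier is used here for brevity, and the same pattern exists as multiprocessing.Barrier for processes:

```python
import threading

NUM_WORKERS = 4
barrier = threading.Barrier(NUM_WORKERS)
started_after_barrier = []

def worker(wid):
    barrier.wait()                       # block until ALL workers exist
    started_after_barrier.append(wid)    # only now begin iterating

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(NUM_WORKERS)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```

After the barrier releases, every worker has started, so no worker iterates on stale, never-updated state.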
Firedrake to solve and plot nonlinear system of PDEs
I have been trying to run a numerical simulation to find the steady state distribution of the following system:
And I have this code in Python Firedrake:
element = fdr.VectorElement('P', 'triangle', 1, dim=3)
V = fdr.FunctionSpace(mesh, element)
u = fdr.Function(V)
tS, tP, tF = fdr.TestFunctions(V)
S, P, F = fdr.split(u)

a = [fdr.dot(fdr.grad(i), fdr.grad(i)) * fdr.dx for i in [(S, tS), (P, tP), (F, tF)]]

Ds = fdr.Constant(5)
Df = fdr.Constant(1)
Dp = fdr.Constant(0.02)
dc = fdr.Constant(1.6)
v = fdr.Constant(0)
ds = fdr.Constant(0.2)
dp = fdr.Constant(0)
df = fdr.Constant(5)
rs = fdr.Constant(1600)
rf = fdr.Constant(0)
rp = fdr.Constant(0)
n = fdr.Constant(2)

srcS = (rs * (F ** n / (F ** n + 1) - (ds - dc * P*P) * S)) * tS * fdr.dx
srcP = (rp - dp * P + (v - 2 * dc) * P*P*S) * tP * fdr.dx
srcF = (rf/((P*P*S) ** n + 1) - df * F) * tF * fdr.dx

L = a + srcS + a + srcP + a + srcF

bcs = fdr.DirichletBC(V, 0.0, "on_boundary")
problem = fdr.NonlinearVariationalProblem(L, u, bcs=bcs)
solver = fdr.NonlinearVariationalSolver(problem)
solver.solve()
I am trying to plot this solution, but I can't figure out how to do so.
I tried this, to no avail:
xplot = mesh.coordinates.dat.data[:, 0]
yplot = mesh.coordinates.dat.data[:, 1]
nPlotGrid = len(xplot)
p_list = [u.at(point, tolerance=1e-12) for point in zip(xplot, yplot)]
pmat = np.array(p_list)
There seem to be two issues. The bigger one is that nothing actually seems to be happening in the simulation, i.e. everything remains identically 0 on the domain.
How do I address this issue? Then, how do I plot?
Thanks in advance.
Normalization of integrand for numerical integration in Matlab
First off, I'm not sure if this is the best place to post this, but since there isn't a dedicated Matlab community I'm posting this here.
To give a little background, I'm currently prototyping a plasma physics simulation which involves a triple integration. The innermost integral can be done analytically, but for the outer two this is just impossible. I always thought it best to work with values close to unity, and thus normalized my innermost integral so that it is unitless and usually takes values close to unity. However, compared to an earlier version of the code, where this innermost integral evaluated to values of the order of 1e-50, the numerical double integration, which uses the native Matlab function integral2 with a target relative tolerance of 1e-6, now requires around 1000 times more function evaluations to converge. As a consequence my simulation now takes roughly 12 h instead of the previous 20 minutes.
So my questions are:
- Is it possible that the faster convergence in the older version is simply due to the additional evaluations vanishing as roundoff errors, and that the results thus aren't trustworthy even though they pass the 1e-6 relative tolerance? In the few tests I ran, the results seemed to be the same in both versions, though.
- What is the best practice concerning the normalization of the integrand for numerical integration?
- Is there some way to improve the convergence of numerical integrals, especially if the integrand might have singularities?
I'm thankful for any help or insight, especially since I don't fully understand the inner workings of Matlab's integral2 function and what should be paid attention to when using it. If I didn't know any better, I would actually conclude that an integrand of the order of 1e-50 works far better than one of, say, the order of 1e+0, but that doesn't seem to make sense. Is there some numerical reason why this could actually be the case?
TL;DR: when multiplying the function integrated numerically by Matlab's integral2 by a factor of 1e-50, and then multiplying the result in turn by a factor of 1e+50, the integral gives the same result but converges much faster, and I don't understand why.
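One way to see how an absolute-error floor in adaptive quadrature can make a 1e-50-scaled integrand "converge" almost immediately is a toy adaptive Simpson rule (a sketch of the general mechanism, not of Matlab's actual integral2 implementation):

```python
import math

def adaptive_simpson(f, a, b, abstol, reltol, counter):
    """Recursive adaptive Simpson; counter[0] counts function evaluations."""
    def simpson(lo, hi):
        mid = 0.5 * (lo + hi)
        counter[0] += 3
        return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

    whole = simpson(a, b)
    mid = 0.5 * (a + b)
    left, right = simpson(a, mid), simpson(mid, b)
    err = abs(left + right - whole)
    # accept when the estimate beats the absolute OR the relative target
    if err < max(abstol, reltol * abs(whole)):
        return left + right
    return (adaptive_simpson(f, a, mid, abstol / 2, reltol, counter)
            + adaptive_simpson(f, mid, b, abstol / 2, reltol, counter))

def count_evals(scale):
    counter = [0]
    val = adaptive_simpson(lambda x: scale * math.exp(-x * x), 0.0, 5.0,
                           abstol=1e-10, reltol=1e-6, counter=counter)
    return val, counter[0]

v_big, n_big = count_evals(1.0)      # O(1) integrand: refines normally
v_tiny, n_tiny = count_evals(1e-50)  # accepted at once: est. error < abstol
```

If integral2 applies any absolute-error criterion alongside the relative one, the 1e-50 version may be terminating early for exactly this reason: the extra evaluations in the rescaled code could be genuine work that the tiny-scale version silently skipped.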
How to integrate object space acceleration to world space position (2D)
I want to double integrate 2D acceleration data in object coordinates to get 2D position in world coordinates. The object always points in the direction of velocity (assume e.g. a train).
So I tried to numerically integrate the acceleration values with velocity Verlet integration, rotating them at each step into the direction of the current velocity in world coordinates:
import numpy as np
from math import sqrt
from matplotlib import pyplot as plt

def rotate(a, newXAxis):
    # rotate vector a into the frame whose x-axis is newXAxis
    r = newXAxis
    normX = r / sqrt(np.dot(r, r))
    normY = np.array([-normX[1], normX[0]])   # perpendicular to normX
    return np.dot(np.array([normX, normY]).T, a)

def isMoving(deltaXPosition, deltaYPosition, deltaTime, fasterThankmh=1.0):
    """Return True if v > 1 km/h, or any speed given."""
    if deltaTime == 0.:
        return False
    speed = float(fasterThankmh)
    # (1 km/h)^2 = (1/3.6 m/s)^2, i.e. 0.077160 (m/s)^2
    dx2dy2 = deltaXPosition**2 + deltaYPosition**2
    return dx2dy2 / (deltaTime*deltaTime) > 0.077160*speed*speed

def velocity_verlet_integration(Xacc, Yacc, x0=0., y0=0., vx_0=0., vy_0=0.,
                                forward=np.array([1.0, 0.0])):
    n = len(Xacc)
    vx, vy = np.zeros(n), np.zeros(n)
    x, y = np.zeros(n), np.zeros(n)
    x[0], y[0] = x0, y0
    vx[0], vy[0] = vx_0, vy_0
    for i in range(n - 1):
        dt = Xacc[i+1] - Xacc[i]
        a = rotate(Yacc[i, :], forward)
        x[i+1] = x[i] + vx[i]*dt + 0.5*a[0]*dt*dt
        y[i+1] = y[i] + vy[i]*dt + 0.5*a[1]*dt*dt
        if isMoving(x[i+1]-x[i], y[i+1]-y[i], dt):
            forward = np.array([x[i+1]-x[i], y[i+1]-y[i]])
        aNext = rotate(Yacc[i+1, :], forward)
        vx[i+1] = vx[i] + dt*(a[0] + aNext[0])/2
        vy[i+1] = vy[i] + dt*(a[1] + aNext[1])/2
    return x, y
Testing this with a simple circular motion with:
"""test circle""" centripetal=-0.2 N = 0.01 xCircle = np.array(range(int(100*10**N)))/float(10**N) yCircle = np.array([[0.0, centripetal] for i in xCircle]) xvvi, yvvi = velocity_verlet_integration(xCircle, yCircle, 0., 0., 1., 0.) #plot it plt.plot(xvvi, yvvi, ".-", label='position with "velocity verlet" integration')
This results in a drift outwards, because the current direction is based on the last velocity, which is obviously a bad approximation.
Can anyone point me to a better solution?
Compute stream function from x- and y- velocities by integration in python
I'm trying to compute the stream function of a 2D flow given the x- and y- velocity components. I'm using this definition of stream function:
And I tried the method suggested here, which (if I understood correctly) integrates one row of the v-component, integrates the u-component at all places, and adds them up.
Here is my code:
from scipy import integrate
import numpy

# make some data
y = numpy.linspace(0, 10, 40)
x = numpy.linspace(0, 10, 50)
X, Y = numpy.meshgrid(x, y)

# a velocity field that is non-divergent
u = 3*Y**2 - 3*X**2
v = 6*X*Y

# integrate
intx = integrate.cumtrapz(v, X, axis=1, initial=0)
inty = integrate.cumtrapz(u, Y, axis=0, initial=0)
psi1 = -intx + inty

intx2 = integrate.cumtrapz(v, X, axis=1, initial=0)
inty2 = integrate.cumtrapz(u, Y, axis=0, initial=0)[:, 0][:, None]
psi2 = -intx2 + inty2

psi = (psi1 + psi2)/2.

u2 = numpy.gradient(psi, axis=0)
v2 = -numpy.gradient(psi, axis=1)
dx = numpy.gradient(X, axis=1)
dy = numpy.gradient(Y, axis=0)
u2 = u2/dy
v2 = v2/dx
My problem is that the re-computed v is quite close to the original, but u always has a slight offset (0.09861933 in this setup). Is this error related to the way the integration is computed? What's the recommended way of computing a stream function from x- and y-velocities?