Commit 7eb5a51f authored by Thibaut.Lunet

TL: finished scipy examples

parent 346a78d1
# Example directory
Contains different sub-directories dedicated to particular points of Python learning.
Preferably in order of increasing complexity ;).
## exercise1
This exercise deals with list comprehension.
In particular, the reader has to replace a function that filters positive numbers from a list using a loop, so that the filtering is done in one line with a list comprehension.
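The transformation the exercise describes can be sketched as follows (function names and data are illustrative, not taken from the exercise files):

```python
def filter_positive_loop(numbers):
    """Filter positive numbers with an explicit loop (the starting point)."""
    result = []
    for n in numbers:
        if n > 0:
            result.append(n)
    return result


def filter_positive(numbers):
    """Same filtering, rewritten as a one-line list comprehension."""
    return [n for n in numbers if n > 0]


print(filter_positive([-2, 3, 0, 7, -1]))  # [3, 7]
```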
## exercise2
This exercise deals with file input/output.
It is an extension of exercise1, but now the numbers have to be read from a file, and the output written to another file.
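A possible sketch of the task, assuming one number per line in the input file (file names and format are illustrative; the exercise files may differ):

```python
import os
import tempfile


def filter_positive_file(inputPath, outputPath):
    """Read numbers (one per line), write only the positive ones."""
    with open(inputPath) as fIn:
        numbers = [float(line) for line in fIn if line.strip()]
    with open(outputPath, 'w') as fOut:
        for n in numbers:
            if n > 0:
                fOut.write('{}\n'.format(n))


# Small self-contained demonstration in a temporary directory
tmpDir = tempfile.mkdtemp()
inFile = os.path.join(tmpDir, 'input.txt')
outFile = os.path.join(tmpDir, 'output.txt')
with open(inFile, 'w') as f:
    f.write('-2\n3\n0\n7\n-1\n')
filter_positive_file(inFile, outFile)
with open(outFile) as f:
    print(f.read().split())  # ['3.0', '7.0']
```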
## 01-example1
Script containing all the code examples presented in the presentation.
This example deals with ...
In particular:
* This wonderful thing
* And also this thing ...
## [numpy](numpy/)
Contains scripts that use the numpy python package for specific applications.
## [scipy](scipy/)
Contains scripts that use the scipy python package for specific applications.
See also the MarkDown documentation for the general syntax of *.md* files.
# Scipy examples
This directory contains several scripts that show how scipy can be used for particular scientific applications.
You can also look at an extended list of Scipy tutorials to go further.
## Non-linear regression
This script performs a regression using non-linear least-squares optimization to extract a behavior law from noisy data.
In particular, it uses the `minimize` function of the scipy package.
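As a standalone illustration of how `scipy.optimize.minimize` is called (a toy function here, not the script's actual regression problem):

```python
import numpy as np
import scipy.optimize as sco


# Minimize f(x, y) = (x - 1)^2 + (y + 2)^2, whose minimum is at (1, -2)
def f(p):
    return (p[0] - 1)**2 + (p[1] + 2)**2


# First argument: function of a coefficient vector; second: starting guess
res = sco.minimize(f, [0.0, 0.0])
print(res.x)  # close to [1, -2]
```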
## Advection Jacobian
This script focuses on the Jacobian matrix of the advection operator:
f(u) = c_x \frac{\partial u}{\partial x}
It computes it in 1D, 2D and 3D, and extracts the eigenvalues for each case.
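A minimal sketch of the 1D case (a hypothetical reconstruction assuming a 1st order upwind scheme and periodic boundary conditions; the actual script supports several schemes and dimensions):

```python
import numpy as np
import scipy.linalg as spl

nX, cX, dX = 64, 1.0, 1.0/64

# Discretize f(u) = c_x du/dx with 1st order upwind differences,
# (u_i - u_{i-1}) / dx, and periodic boundary conditions
A = np.zeros((nX, nX))
for i in range(nX):
    A[i, i] = cX/dX              # coefficient of u_i
    A[i, (i - 1) % nX] = -cX/dX  # coefficient of u_{i-1} (periodic wrap)

# The operator is linear, so A is the Jacobian itself; for this scheme
# its eigenvalues lie on a circle of radius c/dx centered at c/dx
# in the complex plane
eigVals = spl.eigvals(A)
```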
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Mar 19 09:49:39 2018

@author: lunet

Perform a non-linear regression for noisy data, using the optimize module
of scipy
"""
import numpy as np
import scipy.optimize as sco
import matplotlib.pyplot as plt

# Matplotlib: do not fill markers in plots
plt.rcParams['markers.fillstyle'] = 'none'

# Define an exponential law for x data
alphaTh = 0.6
betaTh = 0.9


def expoLaw(x, alpha=alphaTh, beta=betaTh):
    """
    Define an exponential law of the form

    .. math::
        f(x) = \\alpha e^{\\beta x}

    Parameters
    ----------
    x : numpy.ndarray, or float, or int
        The x to evaluate, eventually in vectorial form
    alpha : float
        The :math:`\\alpha` coefficient, default value is defined
        above in the script
    beta : float
        The :math:`\\beta` coefficient, default value is defined
        above in the script
    """
    return alpha*np.exp(beta*x)


# Build noisy data
nMeasure = 50
noiseAmplitude = 0.09
xMeasure = np.linspace(0, 1, num=nMeasure)
yMeasure = expoLaw(xMeasure) + np.random.randn(nMeasure)*noiseAmplitude


def functionToMinimize(x):
    """
    Define a function to minimize in order to determine the
    alpha, beta coefficients for the exponential law, given the
    vector of noisy data **yMeasure** (:math:`y_m`) and the x data coordinates
    **xMeasure** (:math:`x_m`):

    .. math::
        f(\\alpha, \\beta) = || \\alpha e^{\\beta x_m} - y_m||_2
    """
    alpha = x[0]
    beta = x[1]
    return np.linalg.norm(expoLaw(xMeasure, alpha, beta) - yMeasure)


res = sco.minimize(functionToMinimize, [1.0, 1.0])

# The minimize function returns an OptimizeResult object, which can be
# accessed like a dictionary.
# Get alpha and beta obtained from the minimization
alphaExp = res['x'][0]
betaExp = res['x'][1]

# Plot exact, noisy and regressed data
plt.figure('Data regression')
x = np.linspace(0, 1, num=200)
plt.plot(x, expoLaw(x), '-', label='Exact data')
plt.plot(xMeasure, yMeasure, 'o', label='Noisy data')
plt.plot(x, expoLaw(x, alphaExp, betaExp), 's--',
         label='Regression data', markevery=0.1)
plt.legend()
plt.show()
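As a follow-up note: scipy also provides `curve_fit`, which wraps the same non-linear least-squares problem and returns the fitted coefficients directly. A self-contained sketch using the same exponential law as the script above (seeded noise so the result is reproducible; this is an alternative, not the script's method):

```python
import numpy as np
import scipy.optimize as sco


def expoLaw(x, alpha, beta):
    """Exponential law alpha * exp(beta * x), as in the script above."""
    return alpha*np.exp(beta*x)


# Rebuild noisy measurements around the theoretical law (0.6, 0.9)
rng = np.random.RandomState(42)
xMeasure = np.linspace(0, 1, num=50)
yMeasure = expoLaw(xMeasure, 0.6, 0.9) + rng.randn(50)*0.09

# curve_fit returns the optimal coefficients and their covariance matrix
popt, pcov = sco.curve_fit(expoLaw, xMeasure, yMeasure, p0=[1.0, 1.0])
alphaFit, betaFit = popt
print(alphaFit, betaFit)  # close to 0.6 and 0.9
```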
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Tue May 23 13:29:54 2017

@author: t.lunet

Compute the eigenvalues of the 1D, 2D and 3D advection operator when a finite
difference scheme is used to compute the space derivative
"""
import numpy as np
import scipy.linalg as spl

# [...]

u01D = np.ones(nX)
u02D = np.ones((nX, nY)).ravel()
u03D = np.ones((nX, nY, nZ)).ravel()

# Stencils definition
stencilU1 = [-1, 1, 0]
stencilC2 = [-0.5, 0, 0.5]
stencilU3 = [1./6, -6./6, 3./6, 2./6, 0]

# [...]

dicoStencil = {'U1': ['$1^{st}$ order Upwind', stencilU1],
               'C2': ['$2^{nd}$ order Centered', stencilC2],
               'U3': ['$3^{rd}$ order Upwind', stencilU3]}

# Set stencils
stencilXName = dicoStencil[schemeX][0]
stencilYName = dicoStencil[schemeY][0]
stencilZName = dicoStencil[schemeZ][0]

# [...]


def rhs1D(u):
    """
    [...]

    .. math::
        f(u) = c_x \\frac{\\partial u}{\\partial x}

    The space derivative is approximated by a finite difference scheme,
    set depending on the value of the variable **schemeX** above in the
    script:

    - schemeX='U1': :math:`1^{st}` order upwind,

      .. math::
          \\frac{\\partial u_i}{\\partial x} \\simeq
          \\frac{u_i - u_{i-1}}{\\Delta x}

    - schemeX='C2': :math:`2^{nd}` order centered,

      .. math::
          \\frac{\\partial u_i}{\\partial x} \\simeq
          \\frac{u_{i+1} - u_{i-1}}{2\\Delta x}

    - schemeX='U3': :math:`3^{rd}` order upwind,

      .. math::
          \\frac{\\partial u_i}{\\partial x} \\simeq
          \\frac{u_{i-2} - 6u_{i-1} + 3u_i + 2u_{i+1}}{6\\Delta x}

    Parameters
    ----------
    u : numpy.ndarray
        [...]

# [...]


def rhs2D(u):
    """
    [...]

    .. math::
        f(u) = c_x \\frac{\\partial u}{\\partial x} +
        c_y \\frac{\\partial u}{\\partial y}

    See rhs1D for a description of the space discretization

    Parameters
    ----------
    u : numpy.ndarray
        [...]

# [...]


def rhs3D(u):
    """
    [...]

    .. math::
        f(u) = c_x \\frac{\\partial u}{\\partial x} +
        c_y \\frac{\\partial u}{\\partial y} +
        c_z \\frac{\\partial u}{\\partial z}

    See rhs1D for a description of the space discretization

    Parameters
    ----------
    u : numpy.ndarray
        [...]