lv_sens.sensitivity

Functions

J(theta, xdata, dt, N, x0)

Compute the cost function for a given parameter theta by first solving the forward problem.

grad_adj(theta, xdata, dt, N, x0)

Compute the cost function gradient via the adjoint.

grad_fd(theta, xdata, dt, N, x0[, h])

Approximate the cost function gradient with finite differences.

optimize(theta0, x_data, dt, N, x0, grad_type)

Solve the inverse problem by minimizing the cost function.

lv_sens.sensitivity.J(theta: ndarray[Any, dtype[float64]], xdata: ndarray[Any, dtype[float64]], dt: float, N: int, x0: ndarray[Any, dtype[float64]]) → float[source]

Compute the cost function for a given parameter theta by first solving the forward problem.

Parameters:
  • theta – parameter vector

  • xdata – “measurement” data

  • dt – time step size

  • N – number of timesteps

  • x0 – initial condition of forward solve

Returns:

cost function value
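For orientation, a minimal sketch of what J computes, assuming an explicit-Euler forward solve and a hypothetical lv_rhs helper standing in for the Lotka-Volterra right-hand side LV(); the actual integrator and parameter ordering in lv_sens may differ:

    import numpy as np

    def lv_rhs(x, theta):
        # Hypothetical Lotka-Volterra right-hand side; stands in for LV().
        # Assumed parameter ordering: theta = (alpha, beta, gamma, delta).
        alpha, beta, gamma, delta = theta
        prey, pred = x
        return np.array([alpha * prey - beta * prey * pred,
                         delta * prey * pred - gamma * pred])

    def J_sketch(theta, xdata, dt, N, x0):
        # Forward solve with explicit Euler, accumulating the
        # least-squares misfit against the measurements xdata.
        x = np.asarray(x0, dtype=float)
        cost = 0.0
        for i in range(N):
            x = x + dt * lv_rhs(x, theta)
            cost += 0.5 * np.sum((x - xdata[i]) ** 2)
        return cost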

lv_sens.sensitivity.grad_adj(theta: ndarray[Any, dtype[float64]], xdata: ndarray[Any, dtype[float64]], dt: float, N: int, x0: ndarray[Any, dtype[float64]]) → ndarray[Any, dtype[float64]][source]

Compute the cost function gradient via the adjoint.

Parameters:
  • theta – parameter vector

  • xdata – “measurement” data

  • dt – time step size

  • N – number of timesteps

  • x0 – initial condition of forward solve

Returns:

cost function gradient
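As a sketch of the technique, the discrete adjoint of the explicit-Euler scheme from the J_sketch above: one forward pass stores the trajectory, one backward pass propagates the adjoint variable, and the gradient is assembled from the parameter Jacobian of the right-hand side. The Jacobian helpers below match the hypothetical lv_rhs and are not part of lv_sens:

    def lv_jac_x(x, theta):
        # d(lv_rhs)/dx for the hypothetical right-hand side above.
        alpha, beta, gamma, delta = theta
        prey, pred = x
        return np.array([[alpha - beta * pred, -beta * prey],
                         [delta * pred, delta * prey - gamma]])

    def lv_jac_theta(x, theta):
        # d(lv_rhs)/dtheta for the hypothetical right-hand side above.
        prey, pred = x
        return np.array([[prey, -prey * pred, 0.0, 0.0],
                         [0.0, 0.0, -pred, prey * pred]])

    def grad_adj_sketch(theta, xdata, dt, N, x0):
        # Forward pass: store the full trajectory x_0, ..., x_N.
        xs = [np.asarray(x0, dtype=float)]
        for k in range(N):
            xs.append(xs[-1] + dt * lv_rhs(xs[-1], theta))
        # Backward pass: lam_N = x_N - y_N, then for k = N-1, ..., 1
        # lam_k = (x_k - y_k) + (I + dt * df/dx(x_k))^T lam_{k+1}.
        lam = xs[N] - xdata[N - 1]
        grad = dt * lv_jac_theta(xs[N - 1], theta).T @ lam
        for k in range(N - 1, 0, -1):
            lam = (xs[k] - xdata[k - 1]) + lam + dt * lv_jac_x(xs[k], theta).T @ lam
            grad += dt * lv_jac_theta(xs[k - 1], theta).T @ lam
        return grad

The single backward sweep costs about as much as one extra forward solve, independent of the number of parameters, which is the reason to prefer the adjoint over finite differences when theta is large.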

lv_sens.sensitivity.grad_fd(theta: ndarray[Any, dtype[float64]], xdata: ndarray[Any, dtype[float64]], dt: float, N: int, x0: ndarray[Any, dtype[float64]], h: float = 1e-07) → ndarray[Any, dtype[float64]][source]

Approximate the cost function gradient with finite differences.

Parameters:
  • theta – parameter vector

  • xdata – “measurement” data

  • dt – time step size

  • N – number of timesteps

  • x0 – initial condition of forward solve

  • h – finite difference step size

Returns:

cost function gradient approximation
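A sketch of a one-sided finite-difference approximation against the J_sketch above; whether the library uses forward or central differences is not stated here, so forward differences are assumed:

    def grad_fd_sketch(theta, xdata, dt, N, x0, h=1e-7):
        # Perturb one parameter component at a time; each entry of
        # the gradient costs one extra forward solve.
        base = J_sketch(theta, xdata, dt, N, x0)
        grad = np.zeros(theta.size)
        for j in range(theta.size):
            theta_h = np.asarray(theta, dtype=float).copy()
            theta_h[j] += h
            grad[j] = (J_sketch(theta_h, xdata, dt, N, x0) - base) / h
        return grad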

lv_sens.sensitivity.optimize(theta0: ndarray[Any, dtype[float64]], x_data: ndarray[Any, dtype[float64]], dt: float, N: int, x0: ndarray[Any, dtype[float64]], grad_type: str) → OptimizeResult[source]

Solve the inverse problem

\[\hat\theta = \operatorname*{argmin}_{\theta}\, J(\theta)\]

for the cost function

\[J(\theta) = \frac{1}{2}\sum_{i=1}^N \|x_i(\theta) - y_i\|^2,\]

where \(x_i\) is the solution of the ODE LV() at time step \(i\) and \(y_i\) is a noisy measurement of \(x_i\).

Parameters:
  • theta0 – initial guess for the parameter vector

  • x_data – “measurement” data array

  • dt – time step size

  • N – number of time steps

  • x0 – initial condition of forward solve

  • grad_type – which gradient routine to use for the cost function (adjoint or finite differences)

Returns:

scipy OptimizeResult object
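A sketch of how such a routine can wrap scipy.optimize.minimize, dispatching on grad_type and reusing the sketch functions above; the accepted string values and the chosen optimizer are assumptions, not the documented API:

    from scipy.optimize import minimize

    def optimize_sketch(theta0, x_data, dt, N, x0, grad_type):
        # 'adj' / 'fd' are assumed labels for the two gradient routines.
        if grad_type == 'adj':
            jac = lambda th: grad_adj_sketch(th, x_data, dt, N, x0)
        else:
            jac = lambda th: grad_fd_sketch(th, x_data, dt, N, x0)
        return minimize(lambda th: J_sketch(th, x_data, dt, N, x0),
                        theta0, jac=jac, method='L-BFGS-B')

The returned OptimizeResult carries the estimated parameters in res.x and convergence diagnostics in res.success and res.message.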