A memory-safe interior point optimizer written in Rust, inspired by Ipopt.
ripopt solves nonlinear programming (NLP) problems of the form:
```text
min    f(x)
s.t.   g_l <= g(x) <= g_u
       x_l <=  x   <= x_u
```
It implements a primal-dual interior point method with a barrier formulation, similar to the algorithm described in the Ipopt papers. The solver is written entirely in Rust (~14,200 lines) with no external C/Fortran dependencies.
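To make the barrier formulation concrete, here is a minimal sketch (not ripopt's internal code) of the log-barrier objective for lower-bounded variables: the bounds are replaced by logarithmic penalty terms weighted by the barrier parameter mu, which is driven toward zero as the solve progresses.

```rust
/// Log-barrier objective phi_mu(x) = f(x) - mu * sum(ln(x_i - x_l_i)).
/// Illustrative sketch only; upper bounds get a symmetric -mu*ln(x_u_i - x_i) term.
fn barrier_objective<F: Fn(&[f64]) -> f64>(f: F, x: &[f64], x_l: &[f64], mu: f64) -> f64 {
    let barrier: f64 = x
        .iter()
        .zip(x_l)
        .filter(|(_, &l)| l.is_finite()) // infinite bounds contribute no barrier term
        .map(|(&xi, &l)| (xi - l).ln())
        .sum();
    f(x) - mu * barrier
}

fn main() {
    // f(x) = x^2 with x >= 0, evaluated at x = 1 with mu = 0.1:
    // phi = 1 - 0.1 * ln(1) = 1.0
    let phi = barrier_objective(|x| x[0] * x[0], &[1.0], &[0.0], 0.1);
    println!("{phi}");
}
```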
- Primal-dual interior point method with logarithmic barrier
- Dense LDL^T factorization via Bunch-Kaufman pivoting with inertia detection
- Sparse multifrontal LDL^T factorization (via `rmumps` with SuiteSparse AMD ordering) for larger problems (n+m >= 110)
- Banded LDL^T solver for problems with detected banded structure (e.g., PDE discretizations)
- Dense condensed KKT (Schur complement) for tall-narrow problems (m >> n, n <= 100)
- Sparse condensed KKT for reducing system size when m > 0
- Filter line search with switching condition and Armijo criterion
- Second-order corrections (SOC) for improved step acceptance
- Mehrotra predictor-corrector with Gondzio centrality corrections (enabled by default)
- Adaptive and monotone barrier parameter strategies with Mehrotra sigma-guided mu updates
- Fraction-to-boundary rule for primal and dual step sizes
- Support for equality constraints, inequality constraints, and variable bounds
- Warm-start initialization
- Two-phase restoration: fast Gauss-Newton + full NLP restoration subproblem
- Multi-attempt recovery with systematic barrier landscape perturbation
- Watchdog strategy for escaping narrow feasible corridors
- Automatic NE-to-LS reformulation for overdetermined nonlinear equation systems
- Convergence polishing: Newton polish for NE-to-LS, complementarity snap for IPM
- NLP scaling (gradient-based objective and constraint scaling)
- Local infeasibility detection for inconsistent constraint systems
- Early stall detection: bail out fast when stuck in early iterations to trigger fallbacks
- Preprocessing: Automatic elimination of fixed variables, redundant constraints, and bound tightening from single-variable linear constraints
- Near-linear constraint detection: Automatically identifies linear constraints and skips their Hessian contribution
- Limited-memory Hessian approximation: L-BFGS-in-IPM mode (`hessian_approximation_lbfgs`) replaces the exact Hessian with L-BFGS curvature pairs, eliminating the need for second-derivative callbacks
- Multi-solver fallback architecture: L-BFGS, Augmented Lagrangian, SQP, and explicit slack reformulation
- Parametric sensitivity analysis: sIPOPT-style post-optimal sensitivity (`ds/dp = -M⁻¹ · Nₚ`) for computing how the optimal solution changes under parameter perturbations, plus reduced Hessian extraction for covariance estimation
- C API mirroring the Ipopt C interface for direct linking from C/C++/Python/Julia
- AMPL NL interface with Pyomo integration via `SolverFactory('ripopt')`, with `--help` listing all options
- GAMS solver link enabling `option nlp = ripopt;` in GAMS models via the GMO API
- Julia/JuMP interface (`Ripopt.jl`) via MathOptInterface, enabling `Model(Ripopt.Optimizer)` with full JuMP support
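The sensitivity step listed above reduces to one linear solve: differentiate the KKT conditions with respect to the parameter p and solve `M · ds/dp = -Nₚ`, where M is the KKT matrix at the solution. A toy sketch with a 2x2 system and made-up numbers (not ripopt's API):

```rust
/// Sketch of the sIPOPT-style sensitivity step ds/dp = -M^{-1} * N_p
/// for a 2x2 KKT matrix M. Cramer's rule is fine at this toy scale;
/// the real solver reuses the factored KKT system.
fn sensitivity_2x2(m: [[f64; 2]; 2], n_p: [f64; 2]) -> [f64; 2] {
    let det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
    assert!(det.abs() > 1e-12, "M must be nonsingular at the solution");
    let b = [-n_p[0], -n_p[1]]; // right-hand side is -N_p
    [
        (b[0] * m[1][1] - b[1] * m[0][1]) / det,
        (m[0][0] * b[1] - m[1][0] * b[0]) / det,
    ]
}

fn main() {
    // M = [[2, 0], [0, 4]], N_p = [1, -2]  =>  ds/dp = [-0.5, 0.5]
    let ds_dp = sensitivity_2x2([[2.0, 0.0], [0.0, 4.0]], [1.0, -2.0]);
    println!("{ds_dp:?}");
}
```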
| Metric | ripopt | Ipopt (native, MUMPS) |
|---|---|---|
| Problems solved | 118/120 (98.3%) | 116/120 (96.7%) |
| Optimal | 118 | 116 |
| ripopt only | 2 | -- |
| Ipopt only | -- | 0 |
On 116 commonly-solved problems: 12.9x geometric mean speedup, ripopt faster on 114/116 (98%).
| Metric | ripopt | Ipopt (C++ with MUMPS) |
|---|---|---|
| Total solved | 569/727 (78.3%) | 556/727 (76.5%) |
| Both solve | 524 | 524 |
| ripopt only | 45 | -- |
| Ipopt only | -- | 32 |
On 524 commonly-solved problems:
| Metric | Value |
|---|---|
| Geometric mean speedup | 10.2x |
| Median speedup | 23.9x |
| Problems where ripopt is faster | 439/524 (84%) |
| ripopt 10x+ faster | 336/524 (64%) |
| Problems where Ipopt is faster | 85/524 (16%) |
Interpreting the speed numbers. Most CUTEst problems are small (n < 10) and solve in microseconds for ripopt, while Ipopt has a ~1-3ms floor from internal initialization. The per-iteration speedup on small problems comes from stack allocation, the absence of C/Fortran interop, and cache-efficient dense linear algebra. On larger problems, ripopt switches to sparse multifrontal LDL^T with SuiteSparse AMD ordering, and Ipopt's Fortran MUMPS has a per-factorization advantage.
The speed advantage comes from:
- Lower per-iteration overhead. ripopt's dense Bunch-Kaufman factorization avoids sparse symbolic analysis and has minimal allocation. For small-to-medium problems (n < 50), this gives 2-5x per-iteration speedup.
- Dense condensed KKT for tall-narrow problems. When m >> n with n <= 100, ripopt reduces an (n+m)x(n+m) sparse factorization to an nxn dense solve, giving 100-800x speedup on problems like EXPFITC (n=5, m=502) and OET3 (n=4, m=1002).
- Mehrotra predictor-corrector with Gondzio corrections. Enabled by default, reducing iteration counts on many problems.
- Fewer iterations on some problems. NE-to-LS reformulation, two-phase restoration, and multi-solver fallback recover problems that Ipopt cannot solve.
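The condensed-KKT reduction above can be sketched in a few lines. Instead of factoring the full (n+m)x(n+m) system, one forms an n x n matrix of the form `H + Aᵀ·D·A` with D diagonal. This is an illustrative sketch under that assumption; ripopt's actual condensation also handles slacks, bounds, and regularization.

```rust
/// Form the n x n condensed matrix K = H + A^T * D * A from an n x n
/// Hessian block H, an m x n Jacobian A, and m diagonal weights D.
/// For tall-narrow problems (m >> n) this is far cheaper to factor.
fn condensed_matrix(h: &[Vec<f64>], a: &[Vec<f64>], d: &[f64]) -> Vec<Vec<f64>> {
    let n = h.len();
    let m = a.len();
    let mut k = h.to_vec();
    for i in 0..n {
        for j in 0..n {
            for r in 0..m {
                k[i][j] += a[r][i] * d[r] * a[r][j];
            }
        }
    }
    k
}

fn main() {
    // n = 1, m = 3 (tall-narrow): H = [2], A = [[1], [1], [1]], D = I
    // => K = [2 + 3] = [5]
    let k = condensed_matrix(&[vec![2.0]], &[vec![1.0], vec![1.0], vec![1.0]], &[1.0, 1.0, 1.0]);
    println!("{k:?}");
}
```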
Where Ipopt is faster:
- Large sparse problems. Ipopt's Fortran MUMPS is ~10-15x faster per factorization than rmumps on 50K-100K systems.
- Some medium constrained problems. A handful of problems (CORE1, HAIFAM, NET1) have high per-iteration cost in ripopt's line search or fallback cascade.
- Some difficult nonlinear problems. Ipopt's mature barrier parameter tuning gives it an edge on specific hard problems.
Both solvers receive the exact same NlpProblem struct via the Rust trait interface, ensuring a fair comparison. ripopt uses rmumps (pure Rust multifrontal LDL^T with SuiteSparse AMD ordering); Ipopt uses MUMPS (Fortran).
| Problem | n | m | ripopt | time | Ipopt | time | speedup |
|---|---|---|---|---|---|---|---|
| Rosenbrock 500 | 500 | 0 | Optimal | 0.002s | Optimal | 0.196s | 85.7x |
| Bratu 1K | 1,000 | 998 | Optimal | 0.002s | Optimal | 0.002s | 1.1x |
| SparseQP 1K | 500 | 500 | Optimal | 0.008s | Optimal | 0.004s | 0.4x |
| OptControl 2.5K | 2,499 | 1,250 | Optimal | 0.006s | Optimal | 0.002s | 0.4x |
| Rosenbrock 5K | 5,000 | 0 | NumericalError | 17.234s | Failed | 3.624s | 0.2x |
| Poisson 2.5K | 5,000 | 2,500 | Optimal | 0.026s | Optimal | 0.009s | 0.4x |
| Bratu 10K | 10,000 | 9,998 | Optimal | 0.125s | Optimal | 0.012s | 0.1x |
| OptControl 20K | 19,999 | 10,000 | Optimal | 0.192s | Optimal | 0.019s | 0.1x |
| Poisson 50K | 49,928 | 24,964 | Optimal | 1.724s | Optimal | 0.121s | 0.1x |
| SparseQP 100K | 50,000 | 50,000 | Optimal | 4.736s | Optimal | 0.309s | 0.1x |
ripopt solves 9/10 (Ipopt: 9/10). Both fail on Rosenbrock 5K. On large constrained problems, Ipopt's Fortran MUMPS is ~10-15x faster per factorization. ripopt dominates on unconstrained problems via L-BFGS fallback.
Run the benchmarks yourself: `make benchmark`
| Suite | Problems | ripopt | Ipopt | Notes |
|---|---|---|---|---|
| Electrolyte thermodynamics | 13 | 13/13 (100%) | 12/13 (92.3%) | 23.7x geo mean speedup; ripopt uniquely solves seawater speciation |
| AC Optimal Power Flow | 4 | 4/4 (100%) | 4/4 (100%) | Ipopt faster on OPF (0.1x geo mean) |
| CHO parameter estimation | 1 | 0/1 | 0/1 | Large-scale (n=21,672, m=21,660); both hit iteration limit |
Run all benchmarks: `make benchmark`
ripopt is written in Rust. You need the Rust toolchain (compiler + Cargo build tool) installed.
Install Rust via rustup (the official installer, works on macOS, Linux, and WSL):
```sh
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

Follow the prompts (the defaults are fine). Then restart your shell or run:

```sh
source "$HOME/.cargo/env"
```

Verify the installation:

```sh
rustc --version
cargo --version
```

For other installation methods (Homebrew, distro packages, Windows), see the official Rust installation guide.
From crates.io:
```sh
cargo install ripopt
```

This installs the `ripopt` AMPL solver binary to `~/.cargo/bin/`.
To use ripopt as a library dependency in your Rust project:
```sh
cargo add ripopt
```

From source (clone the repository and run `make install`):
```sh
git clone https://github.com/jkitchin/ripopt.git
cd ripopt
make install
```

This does three things:
- Builds the optimized release binary and shared library
- Installs the `ripopt` AMPL solver binary to `~/.cargo/bin/` (which rustup already added to your `$PATH`)
- Copies the shared library (`libripopt.dylib` on macOS, `libripopt.so` on Linux) to `~/.local/lib/`
PATH check: The `ripopt` binary is installed to `~/.cargo/bin/`. If `ripopt --version` doesn't work after installation, make sure `~/.cargo/bin` is on your `$PATH` by adding this to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.):

```sh
export PATH="$HOME/.cargo/bin:$PATH"
```

Then restart your shell or run `source ~/.bashrc` (or `~/.zshrc`).
Shared library: If you use the C API, ensure `~/.local/lib` is in your library path:

```sh
export LD_LIBRARY_PATH="$HOME/.local/lib:$LD_LIBRARY_PATH"
```
After installation, verify it works:
```sh
ripopt --version
```

Once ripopt is on your `$PATH` (the `make install` step above handles this), install the Pyomo solver plugin:

```sh
pip install ./pyomo-ripopt
```

This registers ripopt as a named solver with Pyomo's `SolverFactory`:
```python
from pyomo.environ import *

model = ConcreteModel()
# ... define your model ...
solver = SolverFactory('ripopt')
result = solver.solve(model, tee=True)
```

Note: If you skip the `pip install` step, you can still use ripopt via the generic AMPL interface with `SolverFactory('asl:ripopt')`, as long as the `ripopt` binary is on your `$PATH`.
Add to your project's Cargo.toml:
```toml
[dependencies]
ripopt = { git = "https://github.com/jkitchin/ripopt" }
```

Or for a local checkout:

```toml
[dependencies]
ripopt = { path = "/path/to/ripopt" }
```

After `make install`, the shared library is at `~/.local/lib/libripopt.dylib` (macOS) or `~/.local/lib/libripopt.so` (Linux). The C header `ripopt.h` is in the repository root.
```sh
# Compile a C program against the installed library
# (use $HOME rather than ~ here: tilde expansion does not apply inside -L~/...)
cc my_program.c -I/path/to/ripopt -L"$HOME/.local/lib" -lripopt -lm
```

Or link directly from the build directory without installing:

```sh
cargo build --release
cc my_program.c -I. -Ltarget/release -lripopt \
   -Wl,-rpath,$(pwd)/target/release -o my_program -lm
```

ripopt includes a GAMS solver link (`gams/`) that bridges between GAMS's GMO API and ripopt's C API. This allows GAMS models to use ripopt as an NLP solver via `option nlp = ripopt;`.
Build and install (requires a GAMS installation):
```sh
cargo build --release
make -C gams
sudo make -C gams install   # copies libs to GAMS sysdir, registers in gmscmpun.txt
```

Use in a GAMS model:
```gams
option nlp = ripopt;
Solve mymodel using nlp minimizing obj;
```

Solver options are set via a `ripopt.opt` file (same key-value format as Ipopt):

```
tol 1e-8
max_iter 1000
print_level 5
```
GAMS iteration and resource limits (`option iterlim`, `option reslim`) are automatically forwarded. The solver link supports NLP, DNLP, and RMINLP model types. When the analytical Hessian is not available (e.g., DNLP models), it automatically falls back to L-BFGS approximation.
Test:

```sh
sudo make -C gams test   # solves HS071 and checks the result
```

Ripopt.jl provides a MathOptInterface (MOI) wrapper so ripopt can be used as a drop-in JuMP optimizer.
Prerequisites: Julia ≥ 1.9, JuMP, and the ripopt shared library.
Install once (adds Ripopt.jl to your global Julia environment):
```sh
cargo build --release   # build libripopt.dylib / libripopt.so
julia -e '
import Pkg
Pkg.develop(path="Ripopt.jl")   # or Pkg.add(url="...") for a remote install
'
```

Use in a script or notebook:
```julia
ENV["RIPOPT_LIBRARY_PATH"] = "/path/to/ripopt/target/release"
using JuMP, Ripopt

model = Model(Ripopt.Optimizer)
set_silent(model)
@variable(model, 1 <= x[1:4] <= 5)
set_start_value.(x, [1.0, 5.0, 5.0, 1.0])
@NLobjective(model, Min, x[1]*x[4]*(x[1]+x[2]+x[3]) + x[3])
@NLconstraint(model, x[1]*x[2]*x[3]*x[4] >= 25)
@NLconstraint(model, x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 == 40)
optimize!(model)

println(termination_status(model))  # LOCALLY_SOLVED
println(objective_value(model))     # ≈ 17.014
println(value.(x))                  # ≈ [1.0, 4.743, 3.821, 1.379]
```

Or run the provided examples from the repo root:
```sh
RIPOPT_LIBRARY_PATH=target/release julia --project=@v1.12 Ripopt.jl/examples/jump_hs071.jl
RIPOPT_LIBRARY_PATH=target/release julia --project=@v1.12 Ripopt.jl/examples/jump_rosenbrock.jl
RIPOPT_LIBRARY_PATH=target/release julia --project=@v1.12 Ripopt.jl/examples/c_wrapper_hs071.jl
```

Solver options use the same names as Ipopt:
```julia
set_optimizer_attribute(model, "tol", 1e-10)
set_optimizer_attribute(model, "max_iter", 500)
set_optimizer_attribute(model, "mu_strategy", "adaptive")
set_time_limit_sec(model, 60.0)
```

Switching between ripopt and Ipopt requires only changing the optimizer constructor; the rest of the model is identical:
```julia
# With ripopt
model = Model(Ripopt.Optimizer)

# With Ipopt (if installed)
using Ipopt
model = Model(Ipopt.Optimizer)
```

To uninstall:

```sh
make uninstall
```

Implement the `NlpProblem` trait:
```rust
use ripopt::NlpProblem;

struct Rosenbrock;

impl NlpProblem for Rosenbrock {
    fn num_variables(&self) -> usize { 2 }
    fn num_constraints(&self) -> usize { 0 }

    fn bounds(&self, x_l: &mut [f64], x_u: &mut [f64]) {
        // Unconstrained: use infinity bounds
        for i in 0..2 {
            x_l[i] = f64::NEG_INFINITY;
            x_u[i] = f64::INFINITY;
        }
    }

    fn constraint_bounds(&self, _g_l: &mut [f64], _g_u: &mut [f64]) {}

    fn initial_point(&self, x0: &mut [f64]) { x0[0] = -1.0; x0[1] = 1.0; }

    fn objective(&self, x: &[f64]) -> f64 {
        100.0 * (x[1] - x[0] * x[0]).powi(2) + (1.0 - x[0]).powi(2)
    }

    fn gradient(&self, x: &[f64], grad: &mut [f64]) {
        grad[0] = -400.0 * x[0] * (x[1] - x[0] * x[0]) - 2.0 * (1.0 - x[0]);
        grad[1] = 200.0 * (x[1] - x[0] * x[0]);
    }

    fn constraints(&self, _x: &[f64], _g: &mut [f64]) {}
    fn jacobian_structure(&self) -> (Vec<usize>, Vec<usize>) { (vec![], vec![]) }
    fn jacobian_values(&self, _x: &[f64], _vals: &mut [f64]) {}

    fn hessian_structure(&self) -> (Vec<usize>, Vec<usize>) {
        (vec![0, 1, 1], vec![0, 0, 1]) // lower triangle
    }

    fn hessian_values(&self, x: &[f64], obj_factor: f64, _lambda: &[f64], vals: &mut [f64]) {
        vals[0] = obj_factor * (-400.0 * (x[1] - 3.0 * x[0] * x[0]) + 2.0);
        vals[1] = obj_factor * (-400.0 * x[0]);
        vals[2] = obj_factor * 200.0;
    }
}
```

Then solve it:

```rust
use ripopt::{SolverOptions, solve};

let problem = Rosenbrock;
let options = SolverOptions::default();
let result = solve(&problem, &options);

println!("Status: {:?}", result.status);
println!("Objective: {:.6e}", result.objective);
println!("Solution: {:?}", result.x);
println!("Iterations: {}", result.iterations);
```

Key options (all have Ipopt-matching defaults):
| Option | Default | Description |
|---|---|---|
| `tol` | 1e-8 | Convergence tolerance |
| `max_iter` | 3000 | Maximum iterations |
| `acceptable_tol` | 1e-4 | Acceptable (less strict) tolerance |
| `acceptable_iter` | 10 | Consecutive acceptable iterations needed |
| `mu_init` | 0.1 | Initial barrier parameter |
| `print_level` | 5 | Output verbosity (0=silent, 5=verbose) |
| `mu_strategy_adaptive` | true | Adaptive vs monotone barrier update |
| `max_soc` | 4 | Maximum second-order correction steps |
| `max_wall_time` | 0.0 | Wall-clock time limit in seconds (0=no limit) |
| `warm_start` | false | Enable warm-start initialization |
| `constr_viol_tol` | 1e-4 | Constraint violation tolerance |
| `dual_inf_tol` | 1.0 | Dual infeasibility tolerance |
| `enable_preprocessing` | true | Eliminate fixed variables and redundant constraints |
| `detect_linear_constraints` | true | Skip Hessian for linear constraints |
| `enable_sqp_fallback` | true | SQP fallback for constrained problems |
| `hessian_approximation_lbfgs` | false | Use L-BFGS Hessian approximation (no exact Hessian needed) |
| `enable_lbfgs_hessian_fallback` | true | Auto-retry with L-BFGS Hessian when exact Hessian fails |
| `mehrotra_pc` | true | Mehrotra predictor-corrector for better centering |
| `gondzio_mcc_max` | 3 | Maximum Gondzio centrality corrections per iteration |
| `early_stall_timeout` | 10.0 | Max seconds for first 3 iterations (0=off) |
| `linear_solver` | direct | KKT solver: direct, iterative (MINRES), or hybrid |
`SolveResult` contains:

- `x` -- optimal primal variables
- `objective` -- optimal objective value f(x*)
- `constraint_multipliers` -- Lagrange multipliers for constraints (y)
- `bound_multipliers_lower` / `bound_multipliers_upper` -- bound multipliers (z_L, z_U)
- `constraint_values` -- constraint values g(x*)
- `status` -- one of: `Optimal`, `Acceptable`, `Infeasible`, `LocalInfeasibility`, `MaxIterations`, `NumericalError`, `Unbounded`, `RestorationFailed`, `InternalError`
- `iterations` -- number of IPM iterations
ripopt exposes a C API that mirrors the Ipopt C interface, enabling direct linking from C, C++, Python (ctypes/cffi), Julia, and any language with C FFI support — without the subprocess/file overhead of the NL interface. If you have existing Ipopt C code, migrating to ripopt requires only header/function renaming; the callback signatures are identical.
```sh
cargo build --release
# produces target/release/libripopt.dylib (macOS) or libripopt.so (Linux)
```

Include `ripopt.h` (repo root) in your C project. It defines version macros, callback typedefs, return status codes, and all public functions:

```c
#include "ripopt.h"

// Check version at compile time
printf("ripopt %s\n", RIPOPT_VERSION);  // "0.3.0"
```

The five callback types are identical to the Ipopt C interface. All callbacks return 1 on success, 0 on error (the solver will abort if a callback returns 0):
```c
typedef int (*Eval_F_CB)     (int n, const double *x, int new_x,
                              double *obj_value, void *user_data);
typedef int (*Eval_Grad_F_CB)(int n, const double *x, int new_x,
                              double *grad_f, void *user_data);
typedef int (*Eval_G_CB)     (int n, const double *x, int new_x,
                              int m, double *g, void *user_data);
typedef int (*Eval_Jac_G_CB) (int n, const double *x, int new_x,
                              int m, int nele_jac,
                              int *iRow, int *jCol, double *values,
                              void *user_data);
typedef int (*Eval_H_CB)     (int n, const double *x, int new_x,
                              double obj_factor,
                              int m, const double *lambda, int new_lambda,
                              int nele_hess,
                              int *iRow, int *jCol, double *values,
                              void *user_data);
```

Two-call protocol for Jacobian and Hessian: When `values == NULL`, fill `iRow`/`jCol` with the sparsity pattern (0-based indexing); when `values != NULL`, fill numerical values in the same element order as the pattern. The Hessian uses the lower triangle only.
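The two-call protocol can be mirrored in safe Rust, using `Option` in place of the nullable `values` pointer. This is a sketch of the calling convention, not ripopt's actual FFI code; the toy callback is the Jacobian of g(x) = [x0 * x1]:

```rust
/// One callback, two roles: pattern query (values = None) and
/// numerical evaluation (values = Some), in the SAME element order.
fn eval_jac_g(x: &[f64], irow: &mut [i32], jcol: &mut [i32], values: Option<&mut [f64]>) {
    match values {
        None => {
            // First call: fill the sparsity pattern (0-based indexing)
            irow[0] = 0; jcol[0] = 0; // dg0/dx0
            irow[1] = 0; jcol[1] = 1; // dg0/dx1
        }
        Some(vals) => {
            // Second call: numerical values at x, same order as the pattern
            vals[0] = x[1]; // dg0/dx0 = x1
            vals[1] = x[0]; // dg0/dx1 = x0
        }
    }
}

fn main() {
    let (mut irow, mut jcol) = ([0i32; 2], [0i32; 2]);
    eval_jac_g(&[3.0, 4.0], &mut irow, &mut jcol, None); // pattern call
    let mut vals = [0.0f64; 2];
    eval_jac_g(&[3.0, 4.0], &mut irow, &mut jcol, Some(&mut vals)); // values call
    println!("{vals:?}"); // [4.0, 3.0]
}
```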
Sign convention: ripopt uses the Ipopt convention L = f(x) + y^T g(x). The Hessian callback receives obj_factor and lambda and should compute obj_factor * ∇²f + Σ lambda[i] * ∇²g_i.
```c
#include "ripopt.h"

// 1. Create handle
RipoptProblem nlp = ripopt_create(
    n, x_l, x_u,          // variable bounds (use ±1e30 for ±∞)
    m, g_l, g_u,          // constraint bounds (g_l == g_u for equality)
    nele_jac, nele_hess,  // number of nonzeros
    eval_f, eval_grad_f, eval_g, eval_jac_g, eval_h);

// 2. Set options (Ipopt-compatible key names)
ripopt_add_int_option(nlp, "print_level", 5);
ripopt_add_num_option(nlp, "tol", 1e-8);
ripopt_add_str_option(nlp, "mu_strategy", "adaptive");

// 3. Solve (x: in = initial point, out = solution)
double obj_val;
int status = ripopt_solve(nlp, x, NULL, &obj_val,
                          NULL, NULL, NULL, NULL);
// status == 0 → RIPOPT_SOLVE_SUCCEEDED

// 4. Free
ripopt_free(nlp);
```

For unconstrained problems, pass `m=0` and `NULL` for `g_l`/`g_u`.
Infinity bounds: Use HUGE_VAL (from <math.h>) for "no bound". Internally, any value beyond ±1e19 is treated as unbounded. Avoid using finite large values like 1e30 — they may cause numerical issues.
All output pointers except x are optional (pass NULL to skip). Here is how to extract the full solution including Lagrange multipliers and bound multipliers:
```c
double x[4] = {1.0, 5.0, 5.0, 1.0};         // initial point
double obj_val = 0.0;
double g[2]       = {0.0, 0.0};             // constraint values at solution
double mult_g[2]  = {0.0, 0.0};             // constraint multipliers (lambda)
double mult_xl[4] = {0.0, 0.0, 0.0, 0.0};   // lower bound multipliers (z_L)
double mult_xu[4] = {0.0, 0.0, 0.0, 0.0};   // upper bound multipliers (z_U)

int status = ripopt_solve(nlp, x, g, &obj_val,
                          mult_g, mult_xl, mult_xu,
                          NULL);  // user_data

// At the solution:
// - x[] contains the optimal primal variables
// - obj_val is f(x*)
// - g[] contains g(x*) — verify constraints are satisfied
// - mult_g[] contains the Lagrange multipliers for constraints
//   (nonzero for active constraints)
// - mult_xl[] contains z_L (positive when x is at its lower bound)
// - mult_xu[] contains z_U (positive when x is at its upper bound)
```

The `user_data` pointer is forwarded to every callback unchanged — use it to pass problem-specific data (e.g., model parameters) without globals.
| Code | Enum constant | Meaning |
|---|---|---|
| 0 | `RIPOPT_SOLVE_SUCCEEDED` | Converged to optimal solution |
| 1 | `RIPOPT_ACCEPTABLE_LEVEL` | Converged to acceptable (less strict) tolerance |
| 2 | `RIPOPT_INFEASIBLE_PROBLEM` | Problem is locally infeasible |
| 5 | `RIPOPT_MAXITER_EXCEEDED` | Reached iteration limit |
| 6 | `RIPOPT_RESTORATION_FAILED` | Feasibility restoration failed |
| 7 | `RIPOPT_ERROR_IN_STEP_COMPUTATION` | Numerical difficulties |
| 10 | `RIPOPT_NOT_ENOUGH_DEGREES_OF_FREEDOM` | Problem has too few free variables |
| 11 | `RIPOPT_INVALID_PROBLEM_DEFINITION` | Problem appears unbounded |
| -1 | `RIPOPT_INTERNAL_ERROR` | Internal error |
Status 0 and 1 indicate a successful solve. All others indicate failure — check your problem formulation, initial point, or try adjusting options.
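A caller can encode the success rule from the table in one helper. This is a hypothetical convenience function (not part of the shipped `ripopt.h`), shown here to make the "0 and 1 are success, everything else is failure" convention explicit:

```rust
/// Hypothetical helper: only status codes 0 (SOLVE_SUCCEEDED) and
/// 1 (ACCEPTABLE_LEVEL) count as a successful solve.
fn solve_succeeded(status: i32) -> bool {
    matches!(status, 0 | 1)
}

fn main() {
    assert!(solve_succeeded(0));   // RIPOPT_SOLVE_SUCCEEDED
    assert!(solve_succeeded(1));   // RIPOPT_ACCEPTABLE_LEVEL
    assert!(!solve_succeeded(5));  // RIPOPT_MAXITER_EXCEEDED
    assert!(!solve_succeeded(-1)); // RIPOPT_INTERNAL_ERROR
    println!("status convention checks pass");
}
```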
Option-setting functions return 1 on success, 0 if the keyword is unknown. All option keywords match Ipopt naming conventions.
Numeric options (`ripopt_add_num_option`):

| Option | Default | Description |
|---|---|---|
| `tol` | 1e-8 | Convergence tolerance |
| `acceptable_tol` | 1e-4 | Acceptable convergence tolerance |
| `acceptable_constr_viol_tol` | 1e-2 | Acceptable constraint violation |
| `acceptable_dual_inf_tol` | 1e10 | Acceptable dual infeasibility |
| `acceptable_compl_inf_tol` | 1e-2 | Acceptable complementarity |
| `mu_init` | 0.1 | Initial barrier parameter |
| `mu_min` | 1e-11 | Minimum barrier parameter |
| `bound_push` | 1e-2 | Initial bound push |
| `bound_frac` | 1e-2 | Initial bound fraction |
| `constr_viol_tol` | 1e-4 | Constraint violation tolerance |
| `dual_inf_tol` | 100.0 | Dual infeasibility tolerance |
| `compl_inf_tol` | 1e-4 | Complementarity tolerance |
| `max_wall_time` | 0.0 | Wall-clock time limit in seconds (0 = no limit) |
| `warm_start_bound_push` | 1e-3 | Warm-start bound push |
| `warm_start_bound_frac` | 1e-3 | Warm-start bound fraction |
| `warm_start_mult_bound_push` | 1e-3 | Warm-start multiplier push |
| `nlp_lower_bound_inf` | -1e19 | Threshold for -infinity bounds |
| `nlp_upper_bound_inf` | 1e19 | Threshold for +infinity bounds |
| `kappa` | 10.0 | Adaptive mu divisor |
| `constr_mult_init_max` | 1000.0 | Max initial constraint multiplier |
| `barrier_tol_factor` | 10.0 | Barrier tolerance factor |
Integer options (`ripopt_add_int_option`):

| Option | Default | Description |
|---|---|---|
| `max_iter` | 3000 | Maximum iterations |
| `print_level` | 5 | Output verbosity (0 = silent, 5 = verbose, 12 = debug) |
| `acceptable_iter` | 10 | Consecutive acceptable iterations for convergence |
| `max_soc` | 4 | Maximum second-order correction steps |
| `sparse_threshold` | 110 | KKT dimension threshold for sparse solver |
| `restoration_max_iter` | 200 | Max iterations in NLP restoration subproblem |
String options (`ripopt_add_str_option`):

| Option | Default | Values | Description |
|---|---|---|---|
| `mu_strategy` | "adaptive" | "adaptive", "monotone" | Barrier parameter update strategy |
| `warm_start_init_point` | "no" | "yes", "no" | Enable warm-start initialization |
| `mu_allow_increase` | "yes" | "yes", "no" | Allow barrier parameter increase |
| `least_squares_mult_init` | "yes" | "yes", "no" | LS estimate for initial multipliers |
| `enable_slack_fallback` | "yes" | "yes", "no" | Slack reformulation fallback |
| `enable_lbfgs_fallback` | "yes" | "yes", "no" | L-BFGS fallback for unconstrained |
| `enable_al_fallback` | "yes" | "yes", "no" | Augmented Lagrangian fallback |
| `enable_preprocessing` | "yes" | "yes", "no" | Preprocessing (fixed vars, redundant constraints) |
| `detect_linear_constraints` | "yes" | "yes", "no" | Skip Hessian for linear constraints |
| `enable_sqp_fallback` | "yes" | "yes", "no" | SQP fallback for constrained problems |
| `hessian_approximation` | "exact" | "exact", "limited-memory" | Use L-BFGS Hessian approximation |
| `enable_lbfgs_hessian_fallback` | "yes" | "yes", "no" | Auto-retry with L-BFGS Hessian on failure |
- Callback errors: If any callback returns `0`, the solver aborts and returns `RIPOPT_ERROR_IN_STEP_COMPUTATION` (7). Always return `1` from callbacks unless you detect a problem (e.g., NaN in inputs).
- Unknown options: `ripopt_add_*_option` returns `0` for unrecognized keywords. Check the return value if you want to detect typos.
- NULL safety: `ripopt_free(NULL)` is a no-op (safe to call). All output pointers in `ripopt_solve` except `x` may be `NULL`.
- Memory: The problem handle owns all internal memory. Call `ripopt_free()` once when done. Do not use the handle after freeing.
If you have existing Ipopt C code, the migration is straightforward:
- Header: `#include "IpStdCInterface.h"` → `#include "ripopt.h"`
- Handle type: `IpoptProblem` → `RipoptProblem` (both are `void*`)
- Functions: Rename `CreateIpoptProblem` → `ripopt_create`, `FreeIpoptProblem` → `ripopt_free`, `AddIpoptNumOption` → `ripopt_add_num_option`, etc.
- Callbacks: No changes required — signatures are identical
- Status codes: Similar semantics but different enum names (e.g., `Solve_Succeeded` → `RIPOPT_SOLVE_SUCCEEDED`)
- Infinity: Ipopt uses ±2e19 by default; ripopt uses ±1e30 in bounds and ±1e19 for `nlp_*_bound_inf`
- Linking: `-lipopt` → `-lripopt`
```sh
cargo build --release

# HS071 — constrained NLP with inequality + equality constraints
cc examples/c_api_test.c -I. -Ltarget/release -lripopt \
   -Wl,-rpath,$(pwd)/target/release -o c_api_test -lm
./c_api_test

# Rosenbrock — unconstrained optimization
cc examples/c_rosenbrock.c -I. -Ltarget/release -lripopt \
   -Wl,-rpath,$(pwd)/target/release -o c_rosenbrock -lm
./c_rosenbrock

# HS035 — bound-constrained QP with inequality
cc examples/c_hs035.c -I. -Ltarget/release -lripopt \
   -Wl,-rpath,$(pwd)/target/release -o c_hs035 -lm
./c_hs035

# Full multiplier extraction and options demonstration
cc examples/c_example_with_options.c -I. -Ltarget/release -lripopt \
   -Wl,-rpath,$(pwd)/target/release -o c_example_with_options -lm
./c_example_with_options
```

Rust examples:

```sh
# Rosenbrock function (unconstrained with bounds)
cargo run --example rosenbrock

# HS071 (constrained NLP with inequalities)
cargo run --example hs071

# Benchmark timing across 5 problems
cargo run --release --example benchmark

# Parametric sensitivity analysis
cargo run --release --example sensitivity
```

See "Compile and run the examples" above for build instructions. The C examples are:
| Example | Problem type | Demonstrates |
|---|---|---|
| `c_api_test.c` | HS071 (constrained) | Basic usage, all 5 callbacks |
| `c_rosenbrock.c` | Rosenbrock (unconstrained) | No constraints, no bounds |
| `c_hs035.c` | HS035 (bounds + inequality) | Bound multipliers, constraint multiplier |
| `c_example_with_options.c` | HS071 (multiple solves) | Options tuning, multiplier extraction, status interpretation |
```sh
cargo test
```

230 tests total:
- 131 unit tests: Dense LDL factorization, convergence checking, filter line search, fraction-to-boundary, KKT assembly, restoration, preprocessing, linearity detection, SQP, linear solver, autodiff, L-BFGS, sensitivity analysis
- 12 C API tests: FFI integration tests
- 29 integration tests: Rosenbrock, SimpleQP, HS071, HS035, PureBoundConstrained, MultipleEqualityConstraints, NE-to-LS reformulation, augmented Lagrangian, NL file parsing, IPM code paths, parametric sensitivity, and more
- 15 HS regression tests: Selected Hock-Schittkowski problems for regression checking
- 14 coverage tests: Augmented Lagrangian convergence paths, NL parser/solver pipeline, autodiff tape operations, IPM preprocessing/condensed KKT/unbounded detection
ripopt uses cargo-llvm-cov for code coverage measurement:
```sh
# Run tests with coverage and print summary
cargo llvm-cov test

# Detailed line-by-line report
cargo llvm-cov test --text

# HTML report (opens in browser)
cargo llvm-cov test --html && open target/llvm-cov/html/index.html
```

Current coverage by module:
| Module | Line Coverage |
|---|---|
| slack_formulation.rs | 99% |
| options.rs | 100% |
| kkt.rs | 91% |
| filter.rs | 96% |
| restoration_nlp.rs | 93% |
| sqp.rs | 92% |
| warmstart.rs | 98% |
| sensitivity.rs | 89% |
| preprocessing.rs | 91% |
| convergence.rs | 90% |
| c_api.rs | 91% |
| dense.rs (linear solver) | 90% |
| restoration.rs | 90% |
| nl/header.rs | 92% |
| banded.rs | 92% |
| linear_solver/mod.rs | 87% |
| sparse.rs | 84% |
| lbfgs.rs | 76% |
| nl/autodiff.rs | 74% |
| nl/parser.rs | 72% |
| multifrontal.rs | 70% |
| augmented_lagrangian.rs | 70% |
| iterative.rs | 76% |
| linearity.rs | 58% |
| ipm.rs | 57% |
| nl/problem_impl.rs | 59% |
| nl/expr.rs | 38% |
| hybrid.rs | 36% |
Overall: 61% line coverage (230 tests)
```text
src/
  lib.rs                    Public API (solve function, re-exports)
  c_api.rs                  C FFI layer (extern "C" functions, ripopt.h)
  problem.rs                NlpProblem trait definition
  options.rs                SolverOptions with Ipopt-matching defaults
  result.rs                 SolveResult and SolveStatus
  ipm.rs                    Main IPM loop, barrier updates, line search, NE-to-LS detection, NLP scaling
  kkt.rs                    KKT system assembly, solution, and inertia correction
  convergence.rs            Convergence checking (primal/dual/complementarity)
  filter.rs                 Filter line search mechanism
  restoration.rs            Gauss-Newton restoration phase with adaptive LM regularization
  restoration_nlp.rs        Full NLP restoration subproblem (Phase 2)
  lbfgs.rs                  L-BFGS solver for unconstrained/bound-constrained problems
  augmented_lagrangian.rs   Augmented Lagrangian fallback for constrained problems
  sqp.rs                    SQP fallback for constrained problems
  sensitivity.rs            Parametric sensitivity analysis (sIPOPT-style)
  slack_formulation.rs      Explicit slack reformulation fallback
  preprocessing.rs          Fixed variable elimination, redundant constraint removal, bound tightening
  linearity.rs              Near-linear constraint detection
  warmstart.rs              Warm-start initialization
  linear_solver/
    mod.rs                  LinearSolver trait, SymmetricMatrix, KktMatrix
    dense.rs                Dense LDL^T (Bunch-Kaufman) factorization
    banded.rs               Banded LDL^T for problems with small bandwidth
    multifrontal.rs         Multifrontal sparse LDL^T via rmumps (default, SuiteSparse AMD ordering)
    sparse.rs               Sparse LDL^T via faer (optional)
    iterative.rs            MINRES iterative solver with incomplete LDL^T preconditioner
    hybrid.rs               Hybrid direct/iterative solver with automatic switching
tests/
  correctness.rs            Integration tests (22 NLP problems)
  ipm_paths.rs              IPM code path tests (condensed KKT, unbounded, NE-to-LS, preprocessing)
  sensitivity.rs            Parametric sensitivity integration tests
  hs_regression.rs          HS suite regression tests (15 problems)
  c_api.rs                  C API integration tests (12 tests via FFI)
  lbfgs_ipm.rs              L-BFGS Hessian approximation tests
  iterative_solvers.rs      Iterative/hybrid solver tests
  large_scale.rs            Large-scale correctness tests (up to 100K variables)
  large_scale_benchmark.rs  Large-scale ripopt vs Ipopt comparison
  nl_integration.rs         NL file parsing and solving tests
gams/
  gams_ripopt.c             GAMS solver link (GMO API → ripopt C API bridge)
  Makefile                  Build, install, and test targets
  install.sh                Registration script for gmscmpun.txt
  test_hs071.gms            Smoke test (HS071 via `option nlp = ripopt`)
examples/
  rosenbrock.rs             Unconstrained optimization
  hs071.rs                  Constrained NLP
  sensitivity.rs            Parametric sensitivity analysis demo
  benchmark.rs              Timing benchmark
  c_api_test.c              HS071 via the C API
  c_rosenbrock.c            Unconstrained Rosenbrock via C API
  c_hs035.c                 Bound-constrained QP via C API
  c_example_with_options.c  Options and multiplier extraction demo
```
Before solving, ripopt automatically analyzes the problem to reduce its size:
- Fixed variable elimination: Variables with `x_l == x_u` are removed and set to their fixed values in all evaluations
- Redundant constraint removal: Duplicate constraints (same Jacobian structure, values, and bounds) are eliminated
- Bound tightening: Single-variable linear constraints are used to tighten variable bounds
The reduced problem is solved and the solution is mapped back to the original dimensions. Disable with `enable_preprocessing: false`.
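The index bookkeeping behind fixed-variable elimination can be sketched in a few lines (illustrative only, not ripopt's internals): split the indices into free and fixed, solve over the free ones, then scatter the reduced solution back.

```rust
/// Partition variable indices into free and fixed (x_l == x_u).
fn split_fixed(x_l: &[f64], x_u: &[f64]) -> (Vec<usize>, Vec<usize>) {
    let mut free = Vec::new();
    let mut fixed = Vec::new();
    for i in 0..x_l.len() {
        if x_l[i] == x_u[i] { fixed.push(i) } else { free.push(i) }
    }
    (free, fixed)
}

/// Map a reduced solution back to the original dimensions.
fn expand(free: &[usize], fixed: &[usize], x_l: &[f64], x_red: &[f64]) -> Vec<f64> {
    let mut x = vec![0.0; free.len() + fixed.len()];
    for (k, &i) in free.iter().enumerate() { x[i] = x_red[k]; }
    for &i in fixed { x[i] = x_l[i]; } // fixed value: x_l[i] == x_u[i]
    x
}

fn main() {
    // Three variables; the middle one is fixed at 2 (x_l == x_u == 2)
    let (x_l, x_u) = ([0.0, 2.0, -1.0], [5.0, 2.0, 1.0]);
    let (free, fixed) = split_fixed(&x_l, &x_u);
    // Suppose the reduced solve returned [3.0, 0.5] for the two free variables
    let x = expand(&free, &fixed, &x_l, &[3.0, 0.5]);
    println!("{x:?}"); // [3.0, 2.0, 0.5]
}
```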
The Jacobian is evaluated at two points to identify linear constraints (where all Jacobian entries remain constant). For linear constraints, the Hessian contribution lambda[i] * nabla^2 g_i is exactly zero and is skipped, reducing computation in the Hessian evaluation.
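The two-point test can be sketched as follows (a simplification of what the solver does; real code compares all rows of the sparse Jacobian at once):

```rust
/// A constraint is flagged linear when its Jacobian row is numerically
/// identical at two distinct sample points.
fn is_linear(
    jac_row: impl Fn(&[f64]) -> Vec<f64>,
    x_a: &[f64],
    x_b: &[f64],
    tol: f64,
) -> bool {
    jac_row(x_a)
        .iter()
        .zip(jac_row(x_b))
        .all(|(a, b)| (a - b).abs() <= tol)
}

fn main() {
    // g0(x) = 2*x0 + 3*x1 is linear: its gradient [2, 3] is constant
    let lin = |_x: &[f64]| vec![2.0, 3.0];
    // g1(x) = x0^2 is nonlinear: its gradient [2*x0, 0] varies with x
    let nonlin = |x: &[f64]| vec![2.0 * x[0], 0.0];
    println!(
        "{} {}",
        is_linear(lin, &[0.0, 0.0], &[1.0, -1.0], 1e-12),
        is_linear(nonlin, &[0.0, 0.0], &[1.0, -1.0], 1e-12)
    ); // true false
}
```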
The solver follows the primal-dual barrier method from the Ipopt papers (Wachter & Biegler, 2006). At each iteration it:
- Assembles and factors the KKT system using dense LDL^T (Bunch-Kaufman), sparse multifrontal LDL^T (rmumps), or dense condensed Schur complement for tall-narrow problems
- Computes inertia of the factorization and applies regularization if needed
- Applies Mehrotra predictor-corrector with Gondzio centrality corrections (default on)
- Computes search directions with iterative refinement (up to 3 rounds)
- Applies second-order corrections (SOC) when the initial step is rejected
- Uses a filter line search with backtracking to ensure sufficient progress
- Updates the barrier parameter adaptively using Mehrotra sigma-guided updates
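The sigma-guided barrier update in the last step follows the standard Mehrotra heuristic: the affine (predictor) step's complementarity `mu_aff` relative to the current `mu` sets the centering parameter `sigma = (mu_aff / mu)^3`. A minimal sketch (not ripopt's exact rule):

```rust
// Hypothetical sketch of a Mehrotra-style barrier target: a good
// predictor step (mu_aff << mu) yields a small sigma and hence an
// aggressive reduction of the barrier parameter.
fn mehrotra_mu_target(mu: f64, mu_aff: f64) -> f64 {
    let sigma = (mu_aff / mu).powi(3);
    sigma * mu
}

fn main() {
    // mu_aff at 10% of mu cuts the target to 0.1% of mu.
    let target = mehrotra_mu_target(1.0, 0.1);
    assert!((target - 1e-3).abs() < 1e-12);
    println!("next barrier target: {target}");
}
```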
When the primary IPM fails, ripopt automatically tries alternative solvers:
- L-BFGS: Tried first for unconstrained problems (m=0, no bounds); used as fallback for bound-constrained problems
- L-BFGS Hessian approximation: Retries the IPM with L-BFGS curvature pairs replacing the exact Hessian (helps when the Hessian is ill-conditioned or buggy)
- Augmented Lagrangian: PHR penalty method for constrained problems, with the IPM solving each AL subproblem
- SQP: Equality-constrained Sequential Quadratic Programming with l1 merit function line search
- Explicit slack reformulation: Converts g(x) to g(x)-s=0 with bounds on s, stabilizing multiplier oscillation at degenerate points
- Best-du tracking: Throughout the solve, tracks the iterate with lowest dual infeasibility and recovers it at max iterations
When hessian_approximation_lbfgs = true, the IPM replaces exact Hessian evaluations with an L-BFGS curvature approximation. This eliminates the need for hessian_values() callbacks entirely.
How it works:
- Each IPM iteration, after accepting a step, the solver computes `s_k = x_{k+1} - x_k` and `y_k = ∇L(x_{k+1}) - ∇L(x_k)` from Lagrangian gradient differences
- Powell damping ensures positive curvature (`s^T y > 0`)
- An explicit dense B_k matrix is formed from the L-BFGS pairs and used in KKT assembly
- Up to 10 curvature pairs are stored (O(n·k) memory, where k = 10 is the number of stored pairs)
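The Powell damping step can be sketched as follows (hypothetical code, with B_k simplified to the identity for brevity): when the raw curvature `s^T y` is too small relative to `s^T B s`, the pair is blended toward `B s` so the damped pair stays positive.

```rust
// Hypothetical sketch of Powell damping with B approximated by the
// identity: blend y with B s = s until s^T y_damped > 0 holds.
fn powell_damp(s: &[f64], y: &[f64]) -> Vec<f64> {
    let dot = |a: &[f64], b: &[f64]| a.iter().zip(b).map(|(x, y)| x * y).sum::<f64>();
    let sty = dot(s, y);
    let stbs = dot(s, s); // s^T B s with B = I
    if sty >= 0.2 * stbs {
        return y.to_vec(); // curvature is already acceptable
    }
    let theta = 0.8 * stbs / (stbs - sty);
    s.iter()
        .zip(y)
        .map(|(si, yi)| theta * yi + (1.0 - theta) * si) // B s = s here
        .collect()
}

fn main() {
    // Negative-curvature pair: s = (1, 0), y = (-1, 0), so s^T y = -1.
    let s = [1.0, 0.0];
    let y = [-1.0, 0.0];
    let yd = powell_damp(&s, &y);
    let sty: f64 = s.iter().zip(&yd).map(|(a, b)| a * b).sum();
    assert!(sty > 0.0); // damped pair has positive curvature
}
```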
When to use it:
- Neural networks in NLPs (dense Hessian, O(n²) memory prohibitive)
- Problems where second derivatives are unavailable or expensive
- Rapid prototyping (skip Hessian derivation)
- When the exact Hessian is ill-conditioned or buggy
Automatic fallback: By default (enable_lbfgs_hessian_fallback = true), if the exact-Hessian IPM fails with MaxIterations, NumericalError, or RestorationFailed, the solver automatically retries with L-BFGS Hessian approximation. This helps when the user-provided Hessian is inaccurate.
Example:
let options = SolverOptions {
hessian_approximation_lbfgs: true,
..SolverOptions::default()
};
let result = ripopt::solve(&problem, &options);

See examples/lbfgs_hessian.rs for complete working examples.
ripopt provides sIPOPT-style post-optimal sensitivity analysis: after solving an NLP, compute how the optimal solution changes when problem parameters are perturbed, without re-solving. This is useful for:
- What-if analysis: How does the optimal design change if a constraint bound shifts?
- Uncertainty propagation: Map parameter uncertainty to solution uncertainty via the reduced Hessian
- Real-time optimization: Update the solution for small disturbances at near-zero cost
The core equation is ds/dp = -M⁻¹ · Nₚ — one backsolve using the already-factored KKT matrix.
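A minimal numeric sketch of this backsolve (hypothetical code, using a hand-written 2x2 inverse in place of the reused KKT factorization):

```rust
// Hypothetical sketch of the sensitivity backsolve ds/dp = -M^{-1} N_p
// on a 2x2 system M, solved via the explicit 2x2 inverse.
fn sensitivity_2x2(m: [[f64; 2]; 2], n_p: [f64; 2]) -> [f64; 2] {
    let det = m[0][0] * m[1][1] - m[0][1] * m[1][0];
    let rhs = [-n_p[0], -n_p[1]]; // right-hand side is -N_p
    [
        (m[1][1] * rhs[0] - m[0][1] * rhs[1]) / det,
        (-m[1][0] * rhs[0] + m[0][0] * rhs[1]) / det,
    ]
}

fn main() {
    // M = diag(2, 4), N_p = (1, 2)  =>  ds/dp = (-0.5, -0.5)
    let ds_dp = sensitivity_2x2([[2.0, 0.0], [0.0, 4.0]], [1.0, 2.0]);
    assert!((ds_dp[0] + 0.5).abs() < 1e-12);
    assert!((ds_dp[1] + 0.5).abs() < 1e-12);
}
```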
Usage: Implement the ParametricNlpProblem trait (extends NlpProblem with parameter derivative methods), then call solve_with_sensitivity():
use ripopt::{ParametricNlpProblem, SolverOptions};
// Implement ParametricNlpProblem for your problem type...
// (adds num_parameters, jacobian_p_*, hessian_xp_* methods)
let mut ctx = ripopt::solve_with_sensitivity(&problem, &options);
// Compute sensitivity for a parameter perturbation Δp
let dp = [0.1]; // perturbation vector
let sens = ctx.compute_sensitivity(&problem, &[&dp]).unwrap();
// Predict perturbed solution: x(p + Δp) ≈ x* + dx
let x_new: Vec<f64> = ctx.result.x.iter()
.zip(sens.dx_dp[0].iter())
.map(|(x, dx)| x + dx)
.collect();
// Extract reduced Hessian for covariance estimation
let cov = ctx.reduced_hessian().unwrap();

On the HS071 problem with a parametric constraint bound, the sensitivity prediction matches a full re-solve to within 1e-5:
Predicted x(p=40.1): (1.000000, 4.751642, 3.824904, 1.375540)
Actual x(p=40.1): (1.000000, 4.751634, 3.824896, 1.375553)
Prediction errors: 8.4e-6, 8.0e-6, 1.4e-5
See examples/sensitivity.rs for a complete working example.
For problems where the number of constraints m exceeds 2n, the solver automatically uses a condensed (Schur complement) formulation. This reduces the factorization cost from O((n+m)^3) to O(n^2 m + n^3), enabling efficient handling of problems with many constraints and few variables.
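A rough flop-count comparison illustrates the savings (a back-of-the-envelope sketch, ignoring constant factors):

```rust
// Hypothetical sketch comparing asymptotic factorization costs:
// full KKT O((n+m)^3) vs condensed Schur complement O(n^2 m + n^3).
fn full_kkt_flops(n: f64, m: f64) -> f64 {
    (n + m).powi(3)
}

fn condensed_flops(n: f64, m: f64) -> f64 {
    n * n * m + n.powi(3)
}

fn main() {
    // Tall-narrow problem: 50 variables, 5000 constraints.
    let (n, m) = (50.0, 5000.0);
    let speedup = full_kkt_flops(n, m) / condensed_flops(n, m);
    assert!(speedup > 1000.0); // orders of magnitude cheaper
    println!("approximate speedup: {:.0}x", speedup);
}
```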
When the solver detects an overdetermined nonlinear equation system (m >= n, f(x) = 0, all equality constraints, starting point not already feasible), it automatically reformulates the problem as unconstrained least-squares minimization:
min 0.5 * ||g(x) - target||^2
using a full Hessian (J^T J + sum of r_i * nabla^2 g_i). If the residual is small at the solution, the original system is consistent and Optimal is reported. Otherwise, LocalInfeasibility is reported with the best least-squares solution.
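A minimal sketch of the reformulated objective and its Gauss-Newton gradient (hypothetical code) on an inconsistent 2-equation, 1-variable system:

```rust
// Hypothetical sketch of the NE-to-LS objective 0.5 * ||g(x) - target||^2
// and its gradient J^T r for a tiny overdetermined system.
fn ls_objective(r: &[f64]) -> f64 {
    0.5 * r.iter().map(|ri| ri * ri).sum::<f64>()
}

fn ls_gradient(jac: &[Vec<f64>], r: &[f64]) -> Vec<f64> {
    let n = jac[0].len();
    (0..n)
        .map(|j| jac.iter().zip(r).map(|(row, ri)| row[j] * ri).sum::<f64>())
        .collect()
}

fn main() {
    // g(x) = (x, x) with targets (1, 2): inconsistent; best x = 1.5.
    let x = 1.5;
    let r = [x - 1.0, x - 2.0];
    let jac = vec![vec![1.0], vec![1.0]];
    let grad = ls_gradient(&jac, &r);
    // At the least-squares solution the gradient vanishes but the
    // residual does not, so LocalInfeasibility would be reported.
    assert!(grad[0].abs() < 1e-12);
    assert!(ls_objective(&r) > 0.0);
}
```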
When the filter line search fails:
- Phase 1 (Gauss-Newton): Fast feasibility solver minimizing ||violation||^2 with gradient descent fallback
- Phase 2 (NLP restoration): Full barrier subproblem with positive/negative slack variables p and n (Ipopt formulation)
- Multi-attempt recovery: Up to 6 attempts cycling barrier parameter perturbations [10x, 0.1x, 100x, 0.01x, 1000x, 0.001x] with x perturbation
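The perturbation cycling can be sketched as follows (hypothetical code mirroring the factor list above):

```rust
// Hypothetical sketch: cycle barrier-parameter perturbation factors
// across recovery attempts.
fn perturbed_mu(mu: f64, attempt: usize) -> f64 {
    const FACTORS: [f64; 6] = [10.0, 0.1, 100.0, 0.01, 1000.0, 0.001];
    mu * FACTORS[attempt % FACTORS.len()]
}

fn main() {
    let mu = 1e-2;
    let schedule: Vec<f64> = (0..6).map(|k| perturbed_mu(mu, k)).collect();
    // First attempt inflates mu by 10x, last deflates by 1000x.
    assert!((schedule[0] - 0.1).abs() < 1e-12);
    assert!((schedule[5] - 1e-5).abs() < 1e-15);
    println!("{:?}", schedule);
}
```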
The solver implements a watchdog mechanism that temporarily relaxes the filter acceptance criteria when progress stalls due to shortened steps. This helps escape narrow feasible corridors where strict Armijo conditions are too conservative.
ripopt includes per-iteration phase timing instrumentation. When print_level >= 5 (the default), a summary table is printed at the end of each solve showing where CPU time is spent:
Phase breakdown (47 iterations):
Problem eval 0.234s (45.2%)
KKT assembly 0.089s (17.2%)
Factorization 0.156s (30.1%)
Direction solve 0.012s (2.3%)
Line search 0.021s (4.1%)
Other 0.006s (1.1%)
Total 0.518s
To suppress timing output, set print_level: 0 in SolverOptions.
Release builds include debug symbols (debug = true in [profile.release]), so external profilers can show function names. samply provides flamegraph visualization on macOS and Linux:
cargo install samply
cargo build --release --bin hs_suite
samply record target/release/hs_suite

This opens a Firefox Profiler UI in the browser with a full call tree and flamegraph. Look for wide bars under solve_ipm to identify dominant functions.
On macOS, Instruments (Xcode) also works without any additional setup:
cargo build --release --bin hs_suite
xcrun xctrace record --template "Time Profiler" --launch target/release/hs_suite

ripopt uses the Ipopt convention where the Lagrangian is:
L = f(x) + y^T g(x)
For inequality constraints g(x) >= g_l, the multiplier y is negative at optimality. For equality constraints, y can be positive or negative.
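A quick check of this convention (hypothetical sketch) on min x subject to g(x) = x >= 1: stationarity of L = f(x) + y·g(x) gives 1 + y = 0, so the inequality multiplier is y = -1.

```rust
// Hypothetical sketch: solve the 1-D stationarity condition
// grad_f + y * grad_g = 0 for the multiplier y.
fn inequality_multiplier(grad_f: f64, grad_g: f64) -> f64 {
    -grad_f / grad_g
}

fn main() {
    // min x  s.t.  x >= 1: f'(x) = 1, g'(x) = 1 at the optimum x = 1.
    let y = inequality_multiplier(1.0, 1.0);
    assert_eq!(y, -1.0);
    // The inequality multiplier is negative at optimality, as stated.
    assert!(y < 0.0);
}
```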
EPL-2.0 (Eclipse Public License 2.0), consistent with Ipopt.
The rmumps workspace member (pure Rust multifrontal solver) is licensed under CeCILL-C, a LGPL-compatible free software license.
