install

install by apt

install on Ubuntu 20.04 or newer

sudo apt install libceres-dev

install from source

enable eigen_metis during ceres compilation

prerequisites

  1. Ceres version ≥ 2.1.0 (METIS support added in 2.1).
  2. Eigen ≥ 3.3.0
  3. METIS library installed (required by EIGEN_METIS).

Pass these options when configuring Ceres with CMake: -DEIGENSPARSE=ON -DEIGEN_METIS=ON

enable cuda

Prerequisites

  1. CUDA Toolkit ≥ 9.0
  2. CMake ≥ 3.5
  3. g++ ≥ 7

Pass this option when configuring Ceres with CMake: -DCERES_USE_CUDA=ON

In the CMake output (or CMakeCache.txt), ensure:

-- CUDA support     : YES

check ceres

check version of ceres

cat /usr/local/include/ceres/version.h   # or /usr/include/ceres/version.h for the apt package

construct problem

problem

Problem problem;

// Add residual terms to the problem, using the autodiff wrapper to get the derivatives automatically.
// F1 is a user-defined cost functor; x1 and x2 are scalar parameter blocks.
problem.AddResidualBlock(new AutoDiffCostFunction<F1, 1, 1, 1>(new F1), nullptr, &x1, &x2);
// Template arguments: AutoDiffCostFunction<functor, number of residuals, size of first parameter block, size of second parameter block>

parameter

API                                                   meaning
AddParameterBlock(values, size)                       register a parameter block
SetParameterBlockConstant(values)                     hold that block constant during optimization
SetParameterization(values, local_parameterization)   tell Ceres how to update the block (e.g., enforce quaternion normalization)
double rotation[4] = {1, 0, 0, 0};  // quaternion
double translation[3] = {0, 0, 0};  // xyz

problem.AddParameterBlock(rotation, 4, new ceres::EigenQuaternionParameterization());
problem.AddParameterBlock(translation, 3);

options

ceres::Solver::Options options;

// Do not print the iteration table to stdout
options.minimizer_progress_to_stdout = false;

// Do not log per-iteration progress via glog
options.logging_type = ceres::SILENT;

// If you added any callbacks that print, remove them.
// options.callbacks.clear();

robust loss function

Loss Function   Behavior for Large Residuals   Notes
HuberLoss       Linear growth                  Mild robustness
SoftLOneLoss    Very smooth, gentle            Often a good default
CauchyLoss      Logarithmic growth             Strong outlier suppression
TukeyLoss       Hard rejection                 Non-convex, aggressive

CauchyLoss

Interpretation of the scale parameter

Small:

  1. Strong outlier suppression
  2. Risk of down-weighting valid but large residuals

Large:

  1. Behavior closer to standard least squares
  2. Weaker robustness

A common practical choice is to set the scale parameter close to the expected inlier residual magnitude. For SLAM / bundle adjustment, Cauchy is often better than Huber when mismatched correspondences exist.

HuberLoss

information and query

int num_parameters = problem.NumParameters();          // total scalar parameters (sum of sizes)
int num_parameter_blocks = problem.NumParameterBlocks();  // number of parameter blocks

/* The "free" counts exclude parameter blocks that were
   set constant via problem.SetParameterBlockConstant(ptr). */
int free_blocks = problem.NumFreeParameterBlocks();
int free_parameters = problem.NumFreeParameters();

convergence

convergence condition

If any one of the three conditions (gradient_tolerance, parameter_tolerance, function_tolerance) is satisfied, the solver declares CONVERGENCE.

Gradient tolerance

gradient_tolerance (default: 1e-10)

If the maximum absolute value of the gradient is less than this tolerance, Ceres declares convergence.

Meaning: we are close to a stationary point.

Parameter (step) tolerance

parameter_tolerance (default: 1e-8)

If the relative change in the parameter vector (update step) is below this threshold, Ceres declares convergence.

Meaning: parameters are not moving significantly anymore.

Function (cost) tolerance

function_tolerance (default: 1e-6)

If the relative reduction in the cost function between two iterations is less than this, Ceres declares convergence.

Meaning: cost function value is not improving enough.

Iteration and time limits

max_num_iterations (default: 50)

max_solver_time_in_seconds (default: 1e9, effectively unlimited)

If a limit is exceeded, the solver stops, but this is reported as NO_CONVERGENCE: an early termination rather than convergence.

Custom convergence check

struct MyConvergenceCallback : public ceres::IterationCallback {
    double tolerance_translation;
    double tolerance_rotation;
    double* parameters;  // pointer to parameter block; only kept up to date
                         // if options.update_state_every_iteration is true

    MyConvergenceCallback(double* params,
                          double tol_t, double tol_r)
        : tolerance_translation(tol_t),
          tolerance_rotation(tol_r),
          parameters(params) {}

    ceres::CallbackReturnType operator()(const ceres::IterationSummary& summary) override {
        // summary.step_norm gives the global step norm; per-component changes
        // would require tracking the previous parameter values.
        // For demonstration, assume params[0..2] are translation and
        // params[3..5] are rotation, and just check raw magnitudes:
        if (std::abs(parameters[0]) < tolerance_translation &&
            std::abs(parameters[1]) < tolerance_translation &&
            std::abs(parameters[2]) < tolerance_translation &&
            std::abs(parameters[3]) < tolerance_rotation &&
            std::abs(parameters[4]) < tolerance_rotation &&
            std::abs(parameters[5]) < tolerance_rotation) {
            return ceres::SOLVER_TERMINATE_SUCCESSFULLY;
        }
        return ceres::SOLVER_CONTINUE;
    }
};

/**
 * Compute recommended parameter_tolerance for Ceres
 * so that the average per-parameter update is less than eps.
 *
 * @param n    Number of parameters
 * @param xNorm Approximate Euclidean norm of parameter vector
 * @param eps   Desired per-parameter step threshold (default 1e-4)
 * @return      Recommended parameter_tolerance
 */
// needs <cmath> and <stdexcept>
double computeParameterTolerance(int n, double xNorm, double eps = 1e-4) {
    if (n <= 0) {
        throw std::invalid_argument("Number of parameters must be positive");
    }
    return (eps * std::sqrt(n)) / (xNorm + 1.0);
}

options.update_state_every_iteration = true;
options.callbacks.push_back(new MyConvergenceCallback(params, 1e-6, 1e-8));

double expected_step = 1e-4;  // desired per-parameter step threshold
int num_parameters = 10;
double x_norm = 1.0;          // approximate ||x|| for the problem at hand
options.parameter_tolerance = computeParameterTolerance(num_parameters, x_norm, expected_step);

information during iteration

Iteration:   0
Cost:        1.234567e+03
Gradient:    1.234567e+02
Step:        1.234567e-01
Tr Ratio:    9.876543e-01
item                       meaning
Iteration                  iteration number, starts at 0 (before any update)
Cost                       reported as (1/2)‖residual‖²
Gradient                   maximum norm (∞-norm) of the gradient vector
Step                       size of the parameter update (‖Δx‖)
Trust Region Ratio         only relevant when using trust-region methods (Levenberg–Marquardt, Dogleg)
linear_solver_iterations   iterations inside the linear solver per outer iteration
regularization (λ)         damping parameter in Levenberg–Marquardt
step_norm                  actual norm of the update step (sometimes reported separately)
time (s)                   cumulative runtime until that iteration

Here the step size is the norm of the parameter update vector.

a simple example

#include <iostream>

#include "ceres/ceres.h"
#include "glog/logging.h"

int main(int argc, char** argv) {
    google::InitGoogleLogging(argv[0]);

    // Build your problem as usual
    ceres::Problem problem;

    // ... add residuals and parameter blocks ...

    // Configure solver options
    ceres::Solver::Options options;

    // Convergence conditions
    options.gradient_tolerance   = 1e-10;  // Stop if gradient is small
    options.parameter_tolerance  = 1e-8;   // Stop if parameter updates are small
    options.function_tolerance   = 1e-6;   // Stop if cost function change is small

    // Iteration and time limits
    options.max_num_iterations = 100;      // maximum iterations
    options.max_solver_time_in_seconds = 60.0; // optional time limit

    // Other useful options
    options.linear_solver_type = ceres::DENSE_QR;
    options.minimizer_progress_to_stdout = true;

    // Solve
    ceres::Solver::Summary summary;
    ceres::Solve(options, &problem, &summary);

    std::cout << summary.FullReport() << "\n";

    return 0;
}