Robust Incremental Smoothing and Mapping (riSAM)#2409
DanMcGann wants to merge 20 commits into borglab:develop from
Conversation
This commit includes:
- Adding Dogleg Line Search (DLLS)
- Adding Bayes Tree traversal
- Adding one-step look-ahead to ISAM2
The sequence of actions in isam2.update could cause a structural mismatch between the steepest-descent direction vector and the Gauss-Newton descent direction vector, which are required to have the same structure in DoglegOptimizerImpl::ComputeBlend.
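As a rough illustration of the invariant involved (hypothetical names and plain `std::vector`s, not GTSAM's actual delta types), the dogleg blend point lies on the segment between the steepest-descent step and the Gauss-Newton step, which is only well defined when both vectors share the same layout:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch only: dx(tau) = dx_u + tau * (dx_n - dx_u), where dx_u is the
// steepest-descent step and dx_n the Gauss-Newton step. The blend is
// meaningless unless both vectors have identical structure/ordering.
std::vector<double> blend(const std::vector<double>& dx_u,
                          const std::vector<double>& dx_n, double tau) {
  // The structural invariant that ComputeBlend relies on:
  assert(dx_u.size() == dx_n.size() && "structural mismatch between directions");
  std::vector<double> out(dx_u.size());
  for (std::size_t i = 0; i < dx_u.size(); ++i)
    out[i] = dx_u[i] + tau * (dx_n[i] - dx_u[i]);
  return out;
}
```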
- Formats riSAM code to the gtsam standard
- Switches to std rather than boost to meet the GTSAM 4.3 convention
- Removes some unneeded functions
Awesome! I'll wait to review thoroughly until the other PR is merged. But please check the naming convention: we're Google style, except we camelCase variables and non-static methods. Also, please format everything with clang-format, Google style, if that was not already done. Finally, warnings are treated as errors, so please try to compile with that flag locally.
Could you merge in develop so the PR diff is up to date?
Pull request overview
This PR introduces the core Robust Incremental Smoothing and Mapping (riSAM) implementation into GTSAM by adding the RISAM solver and its graduated robust-kernel factor wrappers, plus supporting incremental tooling in iSAM2/BayesTree and accompanying unit tests.
Changes:
- Add `RISAM` (drop-in ISAM2-like update interface) and the graduated-kernel infrastructure (`RISAMGraduatedKernel`, `RISAMGraduatedFactor`).
- Add incremental "look-ahead" and traversal helpers (`ISAM2::predictUpdateInfo`, `BayesTree::traverseTop`) used by riSAM.
- Add/extend tests for Dogleg Line Search, traversal/look-ahead behavior, and riSAM integration.
Reviewed changes
Copilot reviewed 17 out of 17 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| gtsam/sam/RISAM.h / gtsam/sam/RISAM.cpp | Adds the riSAM algorithm wrapper around iSAM2, housekeeping, and GNC-style iterations. |
| gtsam/sam/RISAMGraduatedKernel.h / .cpp | Adds graduated robust kernel interface + SIGKernel implementation. |
| gtsam/sam/RISAMGraduatedFactor.h / .cpp | Adds factor wrapper enabling graduated robust weighting during linearization. |
| gtsam/sam/tests/testRISAM.cpp | New unit/integration tests validating kernel math, factor behavior, and end-to-end riSAM behavior. |
| gtsam/nonlinear/ISAM2.h / gtsam/nonlinear/ISAM2.cpp | Adds predictUpdateInfo and Dogleg Line Search support inside updateDelta. |
| gtsam/nonlinear/ISAM2Params.h | Introduces ISAM2DoglegLineSearchParams and adds it to the optimization params variant. |
| gtsam/nonlinear/DoglegOptimizerImpl.h | Implements DoglegLineSearchImpl::Iterate (line search over the dogleg arc). |
| gtsam/inference/BayesTree.h / gtsam/inference/BayesTree-inst.h | Adds traverseTop (and helpers) for "contaminated" traversal. |
| tests/testGaussianISAM2.cpp | Adds slamlike regression tests using Dogleg Line Search; adds predict_update_info test. |
| tests/testDoglegOptimizer.cpp | Adds a test for DoglegLineSearchImpl::Iterate and a regression test for ComputeBlend mismatch. |
| gtsam/symbolic/tests/testSymbolicBayesTree.cpp | Adds tests validating traverseTop behavior. |
| gtsam/nonlinear/nonlinear.i | Exposes Dogleg Line Search params and predictUpdateInfo to the wrapper layer. |
Comments suppressed due to low confidence (10)
gtsam/sam/RISAMGraduatedFactor.h:130
`vblock` is created but never used, which will trigger unused-variable warnings (and can fail builds when warnings are treated as errors). Remove it, or use it if it was intended for something else.

```cpp
size_t d = current_estimate.at(key).dim();
gtsam::Matrix vblock = gtsam::Matrix::Zero(output_dim, d);
Ablocks.push_back(A.block(0, idx_start, output_dim, d));
idx_start += d;
```
gtsam/sam/RISAMGraduatedFactor.h:31
`GraduatedFactor` has out-of-line methods (see `RISAMGraduatedFactor.cpp`) but is not marked `GTSAM_EXPORT`. Consider exporting it to avoid missing symbols on Windows shared-library builds, especially since this type is part of the public riSAM API.

```cpp
/// @brief Graduated Factor for riSAM base class
class GraduatedFactor {
  /** TYPES **/
 public:
  typedef std::shared_ptr<GraduatedFactor> shared_ptr;
  /** FIELDS **/
```
gtsam/sam/RISAMGraduatedKernel.h:24
`GraduatedKernel` is a non-template class with out-of-line virtual methods; consider adding `GTSAM_EXPORT` to the class declaration to ensure it is exported correctly on Windows shared-library builds.

```cpp
/** @brief Base class for graduated kernels for riSAM
 * Advanced users can write their own kernels by inheriting from this class
 */
class GraduatedKernel {
  /** TYPES **/
```
gtsam/nonlinear/DoglegOptimizerImpl.h:334
This header uses `std::numeric_limits<double>::epsilon()` but does not include `<limits>`. Add the missing include to keep the header self-contained and avoid relying on transitive includes.

```cpp
// Search Increase delta
double eps = std::numeric_limits<double>::epsilon();
while (step < max_step - eps) {
```
gtsam/nonlinear/ISAM2Params.h:149
The constructor argument order is `(..., wildfire_threshold, sufficient_decrease_coeff, ...)`, but the member order/docs and call sites in this PR pass `(..., 1e-3, 1e-4, ...)`, which reads like `(sufficient_decrease_coeff, wildfire_threshold)`. This is very easy to misuse and likely results in swapped parameter values; consider reordering the constructor parameters to match the field order (or using named setters at call sites).

```cpp
ISAM2DoglegLineSearchParams(double min_delta = 0.02, double max_delta = 0.5,
                            double step_size = 1.5,
                            double wildfire_threshold = 1e-4,
                            double sufficient_decrease_coeff = 1e-3,
                            bool verbose = false)
```
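One way to sidestep the positional-swap hazard entirely is to set the fields by name. A minimal sketch (hypothetical stand-in struct whose field names mirror this PR's discussion, not necessarily the final API):

```cpp
#include <cassert>

// Sketch only: a params struct with defaults, configured via named member
// assignment instead of a long positional constructor.
struct DoglegLineSearchParamsSketch {
  double minDelta = 0.02;
  double maxDelta = 0.5;
  double stepSize = 1.5;
  double wildfireThreshold = 1e-4;
  double sufficientDecreaseCoeff = 1e-3;
  bool verbose = false;
};

DoglegLineSearchParamsSketch makeParams() {
  DoglegLineSearchParamsSketch p;
  p.wildfireThreshold = 1e-4;        // unambiguous, unlike positional args
  p.sufficientDecreaseCoeff = 1e-3;  // cannot be swapped by accident
  return p;
}
```

With named assignment the two near-identical magnitudes (1e-3 vs 1e-4) can no longer be silently transposed at call sites.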
tests/testGaussianISAM2.cpp:328
These arguments appear to be passed as `(min_delta, max_delta, step_size, sufficient_decrease_coeff, wildfire_threshold, verbose)`, but the `ISAM2DoglegLineSearchParams` constructor is declared as `(min_delta, max_delta, step_size, wildfire_threshold, sufficient_decrease_coeff, verbose)`. This likely swaps `wildfire_threshold` and `sufficient_decrease_coeff` in the test configuration.

```cpp
ISAM2 isam = createSlamlikeISAM2(
    &fullinit, &fullgraph,
    ISAM2Params(ISAM2DoglegLineSearchParams(0.1, 1.0, 3, 1e-3, 1e-4, false),
                0.0, 0, false));
```
gtsam/sam/tests/testRISAM.cpp:178
This call likely swaps `wildfire_threshold` and `sufficient_decrease_coeff`: `ISAM2DoglegLineSearchParams` takes `(..., wildfire_threshold, sufficient_decrease_coeff, ...)` but the passed literals read like `(..., sufficient_decrease_coeff, wildfire_threshold, ...)`. Please confirm and reorder arguments (or switch to setters) so the intended values are applied.

```cpp
RISAM::Parameters params;
params.isam2_params = ISAM2Params(
    ISAM2DoglegLineSearchParams(0.02, 1.0, 1.5, 1e-3, 1e-4, false));
RISAM risam(params);
```
gtsam/sam/RISAMGraduatedKernel.h:110
`SIGKernel` has out-of-line method definitions in `RISAMGraduatedKernel.cpp` but the class is not marked `GTSAM_EXPORT`. For Windows shared-library builds, export the class (or its methods) so it is usable from downstream code.

```cpp
class SIGKernel : public GraduatedKernel {
  /** TYPES **/
 public:
  /// @brief Shortcut for shared pointer
  typedef std::shared_ptr<SIGKernel> shared_ptr;
  /// @brief Function type for mu update sequence
  typedef std::function<double(double, double, size_t)> MuUpdateStrategy;
```
gtsam/nonlinear/ISAM2Params.h:203
`ISAM2Params::print()` currently only handles Gauss-Newton and Dogleg; after adding `ISAM2DoglegLineSearchParams` to the `OptimizationParams` variant, printing will fall into the "{unknown type}" branch. Update the print logic to handle the new optimization params type.

```cpp
typedef std::variant<ISAM2GaussNewtonParams, ISAM2DoglegParams,
                     ISAM2DoglegLineSearchParams>
    OptimizationParams;  ///< Either ISAM2GaussNewtonParams or
                         ///< ISAM2DoglegParams or
                         ///< ISAM2DoglegLineSearchParams
```
gtsam/sam/RISAM.h:27
`RISAM` is a non-template public class implemented in a `.cpp`, but it is not marked with `GTSAM_EXPORT`. On Windows shared-library builds this can prevent the class from being visible to external users; consider declaring it as `class GTSAM_EXPORT RISAM`.

```cpp
class RISAM {
  /** TYPES **/
```
```cpp
bool is_sufficient_delta = delta.norm() / delta.size() <
                           params_.outlier_mu_avg_var_convergence_thresh;
```

`delta.size()` can be zero (e.g., if `RISAM::update()` is called before any variables exist), leading to division by zero in `delta.norm() / delta.size()`. Guard against an empty delta before computing an average-norm convergence check. Suggested change:

```cpp
bool is_sufficient_delta = false;
if (delta.size() > 0) {
  is_sufficient_delta = delta.norm() / delta.size() <
                        params_.outlier_mu_avg_var_convergence_thresh;
}
```
```cpp
// Validate
if (result.details()->variableStatus.size() == size_t(i + 1)) {
  EXPECT(predicted.second)
```

`EXPECT(predicted.second)` is missing a trailing semicolon, which will cause this test file to fail compilation. Add the semicolon (and keep the EXPECT macro usage consistent with the rest of the file). Suggested change:

```cpp
EXPECT(predicted.second);
```
```cpp
this->traversePath(traversedKeys, typename sharedClique::shared_ptr(
                                      clique->parent_.lock()));
```

`traversePath` tries to construct `typename sharedClique::shared_ptr(...)`, but `sharedClique` is already a `std::shared_ptr<Clique>` typedef and has no nested `shared_ptr` type. This will not compile; just pass `clique->parent_.lock()` (which already returns a `sharedClique`) to the recursive call. Suggested change:

```cpp
this->traversePath(traversedKeys, clique->parent_.lock());
```
```cpp
/// @brief Returns 0.5 \rho(r)^2
double error(const gtsam::Values& values) const override {
  return 0.5 * std::pow(robustResidual(values), 2);
```

`GraduatedKernel::error()` is documented as returning the robust cost \rho(r), but `GenericGraduatedFactor::error()` returns `0.5 * robustResidual(values)^2`, which squares that cost again. This makes the factor's reported nonlinear error inconsistent with the kernel definition and GTSAM's usual robust loss conventions; consider returning the robust cost directly (or rename/adjust `robustResidual()` to return a true residual-like quantity). Suggested change:

```cpp
/// @brief Returns 0.5 \rho(r)
double error(const gtsam::Values& values) const override {
  return 0.5 * robustResidual(values);
```
```cpp
removed_factors.push_back(factors_.at(fidx));
factors_.remove(fidx);
mu_.at(fidx).reset();
mu_inits_.at(fidx).reset();
```

Removed factors are set to nullptr in `factors_`, but `factors_to_check_status_` is not updated when factors are removed. If a factor index in `factors_to_check_status_` is removed before `incrementMuInits()` runs, `factors_[fidx]` will be null and `grad_factor->residual(...)` will dereference a null pointer. Remove deleted indices from `factors_to_check_status_` (and/or skip null factors in `incrementMuInits`). Suggested change:

```cpp
mu_inits_.at(fidx).reset();
factors_to_check_status_.erase(fidx);
```
This is the second of 3 PRs that support the integration of riSAM into GTSAM. This PR adds the actual riSAM algorithm along with unit tests to validate its functionality. In addition to the unit tests, I ran an independent test that confirmed that the implementation here exactly matches our internal riSAM implementation!

NOTE: This is blocked by PR #2408. Additionally, due to GitHub forks, this PR currently appears to replicate the changes contained in #2408. Once that PR is merged this issue will go away. Until then, I have marked this as a draft pull request to help ensure the merge order is maintained.
Overview
This PR adds and tests the `RISAM` class and its helpers `RISAMGraduatedKernel` and `RISAMGraduatedFactor`. RISAM is a drop-in replacement for ISAM2 with the same update interface. However, users can wrap any potential outlier in a `RISAMGraduatedFactor`, and riSAM will solve updates that involve these factors using an incrementalized version of Graduated Non-Convexity.

TODO - Python Interface

In addition to being blocked by #2408, this PR also needs to be updated to include its python interface. Unfortunately, I was unable to find an effective way to implement such an interface. The template for `RISAMGraduatedFactor` allows it to wrap ANY gtsam factor type. To define the python interface for this, it appears that we would need to explicitly enumerate every factor type with all of their possible template combinations. This seemed like an unmaintainable definition. Thus, before adding this interface, I wanted to check in to see if the maintainers had any suggestions for the best way to approach the implementation.

Final Notes

This is PR 2/3 for adding RISAM. Once the python interface above is fixed, the final PR will contain a Jupyter notebook with an example / tutorial on using riSAM!
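For readers unfamiliar with the Graduated Non-Convexity idea mentioned above, here is a standalone background sketch. This is the standard GNC weight for the Geman-McClure kernel (as in Yang et al., "Graduated Non-Convexity for Robust Spatial Perception"), NOT this PR's SIG kernel or its incrementalized schedule:

```cpp
#include <cmath>

// GNC anneals a control parameter mu from a near-convex surrogate toward
// the true robust kernel. For the Geman-McClure kernel with threshold
// barC^2, the per-factor weight is
//     w = ( mu * barC^2 / (r^2 + mu * barC^2) )^2,
// so inliers (r near 0) keep weight ~1 while large residuals are
// progressively down-weighted as mu shrinks.
double gncGemanMcClureWeight(double residual, double mu, double barCSq) {
  const double r2 = residual * residual;
  const double w = (mu * barCSq) / (r2 + mu * barCSq);
  return w * w;
}
```

riSAM's contribution, per the description above, is running this kind of graduated schedule incrementally inside an ISAM2-style update rather than in batch.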