Pass the -numba-debug flag to libnvvm#681

Merged
gmarkall merged 8 commits into NVIDIA:main from mmason-nvidia:mmason-nvidia/feat/numba-debug-flag
Jan 22, 2026
Conversation

@mmason-nvidia
Contributor

When using the CUDA Toolkit release 13.1 or later for debug builds, we need to pass the -numba-debug flag to libnvvm in order to enable enhanced debug information.

Closes #679

@copy-pr-bot

copy-pr-bot bot commented Dec 19, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@greptile-apps
Contributor

greptile-apps bot commented Dec 19, 2025

Greptile Summary

This PR adds support for the -numba-debug flag introduced in CUDA Toolkit 13.1 to enable enhanced debug information for Numba CUDA programs.

Key changes:

  • Added check_options method to NVVM class that tests whether specified compiler options are supported by compiling a minimal test program
  • Modified CompilationUnit.__init__ to conditionally add -numba-debug flag when: (1) building with debug enabled (g in options), and (2) libnvvm supports the flag
  • Uses feature detection approach rather than version checking, making the code resilient to future CUDA Toolkit changes
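The feature-detection approach can be illustrated with a minimal sketch. This is not the actual nvvm.py implementation; `check_options` here takes a stand-in compile function, and `toy_compile` is a hypothetical substitute for libnvvm that rejects unknown options:

```python
# Sketch of the feature-detection pattern: probe whether a set of
# compiler options is supported by compiling a trivial test program
# and treating any failure as "unsupported".

def check_options(compile_fn, options):
    """Return True if compiling a minimal test program with `options`
    succeeds, False on any failure (graceful fallback)."""
    minimal_program = ""  # stand-in for the minimal NVVM IR module
    try:
        compile_fn(minimal_program, options)
        return True
    except Exception:
        return False


# Toy compiler standing in for libnvvm: rejects unknown options.
def toy_compile(source, options):
    supported = {"-g", "-opt=3"}
    unknown = [o for o in options if o not in supported]
    if unknown:
        raise ValueError(f"unknown options: {unknown}")
```

With this pattern, a toolkit whose libnvvm understands `-numba-debug` passes the probe and the flag is added; an older toolkit fails the probe and the flag is simply omitted, with no version table to maintain.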

Implementation quality:

  • Exception handling is correct - check_options catches exceptions and returns False for graceful fallback
  • Resource cleanup is handled properly with finally block that destroys test programs
  • Backward compatible - only adds the flag when supported, no breaking changes
  • The approach of testing option support rather than checking specific toolkit versions is more robust

Confidence Score: 5/5

  • This PR is safe to merge with minimal risk
  • The implementation is clean and well-designed. The new check_options method properly tests if compiler options are supported before using them, avoiding compatibility issues. Error handling is correct with proper exception catching and resource cleanup in finally blocks. The feature only activates for debug builds when libnvvm supports the flag, making it backward compatible.
  • No files require special attention

Important Files Changed

Filename: numba_cuda/numba/cuda/cudadrv/nvvm.py
Overview: adds check_options method to test compiler option support and conditionally enables -numba-debug flag for debug builds; implementation is solid with proper error handling and cleanup

…bnvvm.

As suggested by leofang, the original approach of determining whether the -numba-debug flag should be used is unreliable. Instead, use the pattern from one of the tests: compile a test program and examine its PTX output version to deduce the CUDA Toolkit version. The result is cached in the NVVM singleton.
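The version-deduction idea described above can be sketched as follows. Assumptions: the `_PTX_TO_MIN_CUDA` mapping below is illustrative only (the real correspondence comes from the CUDA bindings and the PTX ISA documentation), and in practice `ptx_text` would be the PTX emitted by libnvvm for a trivial test program:

```python
# Sketch: deduce the CUDA Toolkit version from the PTX ISA version
# found in compiled output. A PTX module declares its ISA version in
# a leading `.version` directive.

def parse_ptx_version(ptx_text):
    """Extract (major, minor) from the PTX `.version` directive."""
    for line in ptx_text.splitlines():
        line = line.strip()
        if line.startswith(".version"):
            major, minor = line.split()[1].split(".")
            return (int(major), int(minor))
    raise ValueError("no .version directive found in PTX")


# Illustrative (ptx_major, ptx_minor) -> minimum CUDA version entries;
# NOT an authoritative table.
_PTX_TO_MIN_CUDA = {
    (8, 0): (12, 0),
    (8, 3): (12, 3),
    (9, 0): (13, 0),
}


def get_minimal_required_cuda_ver_from_ptx_ver(ptx_version):
    """Return the minimum CUDA Toolkit version implied by a PTX ISA
    version, or raise ValueError for an unknown version."""
    try:
        return _PTX_TO_MIN_CUDA[ptx_version]
    except KeyError:
        raise ValueError(f"unknown PTX ISA version: {ptx_version}")
```

Raising `ValueError` for unknown versions matches the review discussion below: an unknown mapping is an expected failure, while any other exception would indicate a genuine bug.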
Contributor

@greptile-apps greptile-apps bot left a comment

Additional Comments (3)

  1. numba_cuda/numba/cuda/cudadrv/nvvm.py, line 254-255 (link)

    syntax: Incorrect type for options parameter. nvvmVerifyProgram expects POINTER(c_char_p) but receives a Python list.

  2. numba_cuda/numba/cuda/cudadrv/nvvm.py, line 257 (link)

    syntax: Same issue: incorrect type for options parameter. Must use option_ptrs instead.

  3. numba_cuda/numba/cuda/cudadrv/nvvm.py, line 274-275 (link)

    logic: Unconditional destruction of potentially uninitialized program handle. If nvvmCreateProgram fails at line 239, program remains an empty c_void_p() and destroying it could cause issues.

1 file reviewed, 3 comments

Edit Code Review Agent Settings | Greptile
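The ctypes issues flagged above come down to how a list of Python option strings must be marshalled for a C API taking `const char **options, size_t numOptions`. A standalone sketch (helper name is hypothetical; the real nvvm.py wrapper may differ):

```python
# Sketch: marshal Python option strings for a C `const char **` /
# `size_t` parameter pair. A plain Python list is not accepted where
# POINTER(c_char_p) is expected, and str values must be bytes.
from ctypes import c_char_p, c_size_t


def encode_options(options):
    """Encode str options to bytes and pack them into a C char* array."""
    encoded = [o.encode("utf-8") for o in options]  # ctypes wants bytes
    option_ptrs = (c_char_p * len(encoded))(*encoded)
    num_options = c_size_t(len(encoded))  # size_t, not c_int
    return option_ptrs, num_options
```

An instance of `c_char_p * n` is accepted where `POINTER(c_char_p)` is expected, so `option_ptrs` can be passed directly to the foreign function.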

Contributor

@greptile-apps greptile-apps bot left a comment

Additional Comments (3)

  1. numba_cuda/numba/cuda/cudadrv/nvvm.py, line 262 (link)

    syntax: use c_size_t() instead of c_int() to match function signature

  2. numba_cuda/numba/cuda/cudadrv/nvvm.py, line 255-256 (link)

    syntax: options must be encoded to bytes for ctypes

  3. numba_cuda/numba/cuda/cudadrv/nvvm.py, line 277-279 (link)

    logic: calling check_error in finally block can raise exception during error handling, masking the original error. wrap in try-except or check error without raising

1 file reviewed, 3 comments

Edit Code Review Agent Settings | Greptile
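The third comment above concerns a classic `finally`-block hazard: raising from cleanup replaces any in-flight exception. A sketch of the fix, with illustrative function names rather than the actual nvvm.py code:

```python
# Sketch: guard cleanup in `finally` so a failing destroy call never
# masks the original compilation error.

def compile_with_cleanup(compile_fn, destroy_fn):
    try:
        return compile_fn()
    finally:
        try:
            destroy_fn()  # best-effort cleanup of the test program
        except Exception:
            pass  # never let cleanup mask the original exception
```

If both `compile_fn` and `destroy_fn` raise, the caller still sees the compilation error, which is the one worth reporting.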

Contributor

@greptile-apps greptile-apps bot left a comment

Additional Comments (1)

  1. numba_cuda/numba/cuda/cudadrv/nvvm.py, line 283-285 (link)

    logic: re-raising the exception defeats the graceful fallback logic. If test program compilation fails, _libnvvm_cuda_version stays None but the exception propagates, preventing the caller from using the fallback behavior. Consider removing the raise to allow silent failure and return None.

    Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

1 file reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

@gmarkall
Copy link
Contributor

gmarkall commented Jan 6, 2026

/ok to test 6764681

Comment on lines +290 to +293
try:
    self.check_error(err, "Failed to destroy test program.")
except Exception:
    pass
Contributor

I think there's no point checking the error if we're going to swallow the exception that the check would raise anyway.

Suggested change
try:
    self.check_error(err, "Failed to destroy test program.")
except Exception:
    pass

    err = self.nvvmGetCompiledResult(program, ptx_data)
    self.check_error(err, "Failed to get test program compiled result.")
except Exception as exception:
    print(f"Exception compiling test program: {exception}")
Contributor

This should probably be a warning rather than a print:

Suggested change
print(f"Exception compiling test program: {exception}")
warnings.warn(
    f"Exception compiling test program: {exception}",
    category=NvvmWarning
)

    self.check_error(err, "Failed to get test program compiled result.")
except Exception as exception:
    print(f"Exception compiling test program: {exception}")
    raise exception
Contributor

I don't think we should re-raise the exception, just let it pass - otherwise I'd expect it to propagate all the way back to the user, which we may not want.

Suggested change
raise exception

    self._libnvvm_cuda_version = (
        get_minimal_required_cuda_ver_from_ptx_ver(ptx_version)
    )
except Exception:
Contributor

It seems that the only likely exception we'd expect from the PTX version functions is a ValueError - anything else is a bit more surprising so we should let it manifest to expose the underlying bug instead:

Suggested change
except Exception:
except ValueError:

# pass in the -numba-debug flag.
if "g" in options:
    ctk_version = self.driver.get_cuda_version()
    if ctk_version is None or ctk_version >= (13, 1):
Contributor

If we couldn't determine the version of the CTK, that could be because the version is 12.x because the necessary PTX version functions weren't present in the CUDA bindings for 12.x. So I think it'd be safer to assume we don't pass the -numba-debug flag if we can't determine the version:

Suggested change
    if ctk_version is None or ctk_version >= (13, 1):
    if ctk_version is not None and ctk_version >= (13, 1):
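The guard suggested above can be distilled into a small sketch. The helper name is illustrative, not from the codebase; the point is that an undeterminable toolkit version defaults to not passing the flag:

```python
# Sketch: only opt in to -numba-debug when debugging AND the toolkit
# version is positively known to be >= 13.1. An unknown version (the
# probe may fail on 12.x bindings) falls back to the safe default.

def should_pass_numba_debug(debug, ctk_version):
    return debug and ctk_version is not None and ctk_version >= (13, 1)
```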

Contributor

@gmarkall gmarkall left a comment

I made some comments on the diff, but I also saw that the CI is failing.

If you set up commit signing (so that your commits show as "Verified" rather than "Unverified") you should be able to trigger the CI yourself by commenting /ok to test as well.

Contributor

@greptile-apps greptile-apps bot left a comment

1 file reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

Comment on lines +283 to +285
except Exception as exception:
print(f"Exception compiling test program: {exception}")
raise exception
Contributor

logic: catching the exception, printing it, and re-raising defeats the purpose of graceful fallback. the code at lines 299-305 expects exceptions to be silently caught, allowing _libnvvm_cuda_version to remain None. this re-raise will prevent the function from returning None on error.

Suggested change
except Exception as exception:
    print(f"Exception compiling test program: {exception}")
    raise exception
except Exception:
    pass

… compiler options.

Add a check_options() method to the NVVM class to determine if a given combination of compiler options is supported. This is done by compiling a short test program with the provided options, and returning True/False depending on the result.

Add a check to the CompilationUnit class for the -numba-debug flag when compiling with debugging enabled.
@mmason-nvidia
Contributor Author

/ok to test f32c2a5

@gmarkall gmarkall added 4 - Waiting on reviewer Waiting for reviewer to respond to author and removed 4 - Waiting on author Waiting for author to respond to review labels Jan 21, 2026
@gmarkall
Contributor

/ok to test f32c2a5

@gmarkall
Contributor

/ok to test 56b553b

Contributor

@gmarkall gmarkall left a comment

This looks good to me. I just pushed a fix so that the pre-commit checks should pass.

@gmarkall gmarkall enabled auto-merge (squash) January 21, 2026 12:51
@gmarkall gmarkall added 4 - Waiting on CI Waiting for a CI run to finish successfully and removed 4 - Waiting on reviewer Waiting for reviewer to respond to author labels Jan 21, 2026
@gmarkall gmarkall merged commit a2b254d into NVIDIA:main Jan 22, 2026
105 checks passed
gmarkall added a commit to gmarkall/numba-cuda that referenced this pull request Jan 27, 2026
- Add Python 3.14 to the wheel publishing matrix (NVIDIA#750)
- feat: swap out internal device array usage with `StridedMemoryView` (NVIDIA#703)
- Fix max block size computation in `forall` (NVIDIA#744)
- Fix prologue debug line info pointing to decorator instead of def line (NVIDIA#746)
- Fix kernel return type in DISubroutineType debug metadata (NVIDIA#745)
- Fix missing line info in Jupyter notebooks (NVIDIA#742)
- Fix: Pass correct flags to linker when debugging in the presence of LTOIR code (NVIDIA#698)
- chore(deps): add cuda-pathfinder to pixi deps (NVIDIA#741)
- fix: enable flake8-bugbear lints and fix found problems (NVIDIA#708)
- fix: Fix race condition in CUDA Simulator (NVIDIA#690)
- ci: run tests in parallel (NVIDIA#740)
- feat: users can pass `shared_memory_carveout` to @cuda.jit (NVIDIA#642)
- Fix compatibility with NumPy 2.4: np.trapz and np.in1d removed (NVIDIA#739)
- Pass the -numba-debug flag to libnvvm (NVIDIA#681)
- ci: remove rapids containers from conda ci (NVIDIA#737)
- Use `pathfinder` for dynamic libraries (NVIDIA#308)
- CI: Add CUDA 13.1 testing support (NVIDIA#705)
- Adding `pixi run test` and `pixi run test-par` support (NVIDIA#724)
- Disable per-PR nvmath tests + follow same test practice (NVIDIA#723)
- chore(deps): regenerate pixi lockfile (NVIDIA#722)
- Fix DISubprogram line number to point to function definition line (NVIDIA#695)
- revert: chore(dev): build pixi using rattler (NVIDIA#713) (NVIDIA#719)
- [feat] Initial version of the Numba CUDA GDB pretty-printer (NVIDIA#692)
- chore(dev): build pixi using rattler (NVIDIA#713)
- build(deps): bump the actions-monthly group across 1 directory with 8 updates (NVIDIA#704)
@gmarkall gmarkall mentioned this pull request Jan 27, 2026
kkraus14 pushed a commit that referenced this pull request Jan 28, 2026
- Add Python 3.14 to the wheel publishing matrix (#750)
- feat: swap out internal device array usage with `StridedMemoryView` (#703)
- Fix max block size computation in `forall` (#744)
- Fix prologue debug line info pointing to decorator instead of def line (#746)
- Fix kernel return type in DISubroutineType debug metadata (#745)
- Fix missing line info in Jupyter notebooks (#742)
- Fix: Pass correct flags to linker when debugging in the presence of LTOIR code (#698)
- chore(deps): add cuda-pathfinder to pixi deps (#741)
- fix: enable flake8-bugbear lints and fix found problems (#708)
- fix: Fix race condition in CUDA Simulator (#690)
- ci: run tests in parallel (#740)
- feat: users can pass `shared_memory_carveout` to @cuda.jit (#642)
- Fix compatibility with NumPy 2.4: np.trapz and np.in1d removed (#739)
- Pass the -numba-debug flag to libnvvm (#681)
- ci: remove rapids containers from conda ci (#737)
- Use `pathfinder` for dynamic libraries (#308)
- CI: Add CUDA 13.1 testing support (#705)
- Adding `pixi run test` and `pixi run test-par` support (#724)
- Disable per-PR nvmath tests + follow same test practice (#723)
- chore(deps): regenerate pixi lockfile (#722)
- Fix DISubprogram line number to point to function definition line (#695)
- revert: chore(dev): build pixi using rattler (#713) (#719)
- [feat] Initial version of the Numba CUDA GDB pretty-printer (#692)
- chore(dev): build pixi using rattler (#713)
- build(deps): bump the actions-monthly group across 1 directory with 8 updates (#704)


Labels

4 - Waiting on CI Waiting for a CI run to finish successfully

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[FEA] Pass -numba-debug flag to libnvvm for debug builds using the 13.1 or later release of the CUDA Toolkit

3 participants