
introduce deterministic mode for reproducible LLM-based question generation #666

Open
piyush06singhal wants to merge 2 commits into AOSSIE-Org:main from piyush06singhal:feature/deterministic-llm-generator

Conversation


piyush06singhal commented Apr 14, 2026

Summary

This PR introduces a deterministic mode for LLM-based question generation to enable reproducible outputs for identical inputs.

Currently, LLM-generated responses are non-deterministic due to stochastic sampling, which makes debugging, testing, and result comparison difficult. This change adds an optional mechanism to produce consistent outputs when required.


What was implemented

  • Added support for an optional deterministic flag in LLM-based API endpoints
  • Updated llm_generator.py to conditionally control generation parameters
  • When deterministic mode is enabled:
    • temperature is set to 0.0
    • top_p is set to 1.0
    • seed is derived from input text using: hash(input_text) % (2**32)
  • Default behavior remains unchanged (temperature = 0.7); a sketch of this parameter selection follows below
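
A minimal sketch of the parameter selection described above, assuming an OpenAI-style chat-completions client; the helper name build_generation_params is illustrative, not the actual code in llm_generator.py:

```python
def build_generation_params(input_text: str, deterministic: bool = False) -> dict:
    """Choose sampling parameters for the LLM call (illustrative helper)."""
    if not deterministic:
        # Default behavior is unchanged: stochastic sampling.
        return {"temperature": 0.7}
    # Deterministic mode: greedy-like decoding plus an input-derived seed.
    # Caveat: Python's built-in hash() is salted per process for strings
    # (PYTHONHASHSEED), so this seed is only stable within a single run;
    # the review comment further down suggests a SHA-256-based seed instead.
    seed = hash(input_text) % (2**32)
    return {"temperature": 0.0, "top_p": 1.0, "seed": seed}
```

The resulting dict can then be expanded into the client call (e.g. `create(..., **params)`), matching the **params refactor noted in the walkthrough below.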

How this improves the system

  • Enables reproducible outputs for identical inputs
  • Simplifies debugging and issue tracking
  • Improves reliability of testing workflows
  • Provides controlled generation behavior without affecting existing functionality

Scope

In Scope

  • Deterministic generation for LLM-based endpoints
  • Parameter control within existing generator methods

Out of Scope

  • Retry logic
  • Advanced configuration systems
  • Architectural changes
  • Frontend updates

Screenshots/Recordings:

Not applicable (backend enhancement)


Additional Notes:

This feature is optional and backward compatible. Deterministic mode prioritizes reproducibility over diversity and is intended for debugging, testing, and consistent evaluation scenarios.


AI Usage Disclosure:

We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact. AI slop is strongly discouraged and may lead to banning and blocking. Do not spam our repos with AI slop.

Check one of the checkboxes below:

  • This PR does not contain AI-generated code at all.
  • This PR contains AI-generated code. I have read the AI Usage Policy and this PR complies with this policy. I have tested the code locally and I am responsible for it.

I have used the following AI models and tools: ChatGPT (for guidance and review)


Checklist

  • My PR addresses a single issue, fixes a single bug or makes a single improvement.
  • My code follows the project's code style and conventions
  • If applicable, I have made corresponding changes or additions to the documentation
  • If applicable, I have made corresponding changes or additions to tests
  • My changes generate no new warnings or errors
  • I have joined the Discord server and I will share a link to this PR with the project maintainers there
  • I have read the Contribution Guidelines
  • Once I submit my PR, CodeRabbit AI will automatically review it and I will address CodeRabbit's comments.
  • I have filled this PR template completely and carefully, and I understand that my PR may be closed without review otherwise.

Summary by CodeRabbit

Release Notes

  • New Features

    • Added optional deterministic mode for question generation, enabling reproducible results for identical inputs across all question types (multiple choice, boolean, and short-answer).
  • Bug Fixes

    • Enhanced input validation for question generation endpoints to gracefully handle invalid inputs.


coderabbitai Bot commented Apr 14, 2026

Warning

Rate limit exceeded

@piyush06singhal has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 44 minutes and 37 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 44 minutes and 37 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1c171528-216d-4f13-9c10-348ee7bec0db

📥 Commits

Reviewing files that changed from the base of the PR and between de492bd and ab3a375.

📒 Files selected for processing (2)
  • backend/Generator/llm_generator.py
  • backend/server.py
📝 Walkthrough


This PR adds optional deterministic question generation to the LLM question generator and corresponding backend endpoints. A new deterministic parameter (default False) enables reproducible outputs by computing a seed from input text hash and applying fixed generation parameters (temperature=0.0, top_p=1.0) when enabled. The changes also introduce input validation to return empty results for invalid inputs.

Changes

Cohort / File(s) | Summary

Deterministic Generation Support
backend/Generator/llm_generator.py
Added a deterministic=False parameter to four generation methods (generate_short_questions, generate_mcq_questions, generate_boolean_questions, generate_all_questions). Each method now validates input, computes a seed from the hashed input when deterministic mode is enabled, and passes deterministic parameters (temperature=0.0, top_p=1.0, seed) to the LLM API. Parameter passing was refactored to use **params expansion.

Endpoint Integration
backend/server.py
Updated four LLM-backed endpoints (/get_shortq_llm, /get_mcq_llm, /get_boolq_llm, /get_problems_llm) to read the deterministic field from incoming requests and propagate it to the corresponding generator methods, as illustrated in the example request below.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 A seed from hashes, temperature cold,
Deterministic rabbits, predictable and bold!
No more randomness in our question quest,
Every generation now stands the test. ✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check | ✅ Passed | The title accurately describes the main change: introducing deterministic mode for reproducible LLM-based question generation, which aligns with the changeset.
Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/Generator/llm_generator.py`:
- Around line 50-52: Replace uses of Python's built-in hash(...) for seed
computation with a stable, cross-process hash: when computing seed_value (the
variable set when deterministic is true, e.g., the "Compute seed BEFORE text
truncation" block and the other seed computations around the same logic at the
other locations), generate a SHA-256 (or similar cryptographic) digest of the
input_text bytes and convert it to an integer, then take mod 2**32 for the seed;
keep the deterministic flag and None behavior unchanged. Locate the seed_value
assignment and any other places that currently use hash(input_text) and swap in
this stable hashing approach so identical input_text produces the same seed
across process restarts.
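
A sketch of the stable seed computation this comment asks for; only the idea (a SHA-256 digest taken mod 2**32) is from the comment, and the function name is illustrative:

```python
import hashlib

def stable_seed(input_text: str) -> int:
    """Derive a 32-bit seed that is identical across process restarts."""
    digest = hashlib.sha256(input_text.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % (2**32)
```

Unlike the built-in hash(), which is randomized per process for strings, this value depends only on the input bytes.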

In `@backend/server.py`:
- Line 104: The deterministic assignment may treat string values like "false" as
truthy; add a small helper function (e.g., parse_bool(value, default=False)) in
server.py that accepts values of types bool, str, int and returns a strict
boolean (accept "true"/"1" case-insensitively as True, "false"/"0" as False, and
fall back to default), then replace the four occurrences that set deterministic
= data.get("deterministic", False) with deterministic =
parse_bool(data.get("deterministic", None), False) so each endpoint uses the
normalized boolean; ensure the helper is reused by all endpoints that reference
deterministic.
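
A sketch of the suggested helper, following the accepted values the comment spells out; this is one possible implementation rather than the autofix output:

```python
def parse_bool(value, default: bool = False) -> bool:
    """Normalize bool/str/int request values to a strict boolean."""
    if isinstance(value, bool):
        return value
    if isinstance(value, int):
        return value != 0
    if isinstance(value, str):
        lowered = value.strip().lower()
        if lowered in ("true", "1"):
            return True
        if lowered in ("false", "0"):
            return False
    return default

# In each endpoint, replacing the raw lookup:
# deterministic = parse_bool(data.get("deterministic", None), False)
```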

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5ecabd62-6862-44e9-9daf-5f869a87ee2c

📥 Commits

Reviewing files that changed from the base of the PR and between 2038116 and de492bd.

📒 Files selected for processing (2)
  • backend/Generator/llm_generator.py
  • backend/server.py

