Feat: Improve OWASP Nest member ranking by introducing calculated score–based algorithm #4266
mrkeshav-05 wants to merge 13 commits into OWASP:main from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable, and the review-management commands can be used to resume.
Walkthrough: Adds a persisted composite member score: new scoring module, model field + index and migration, scoring applied during GitHub sync, GraphQL/Algolia/API exposure, default API ordering switched to use calculated_score, and tests/infrastructure updates.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
9 issues found across 14 files
Confidence score: 2/5
- There are concrete scoring regressions in backend/apps/github/scoring.py and backend/apps/github/management/commands/github_update_users.py (inactive users and non-leader memberships being overcounted), which can materially inflate member rankings and user-facing scores.
- The rollout path in backend/apps/github/migrations/0045_user_calculated_score.py introduces a practical regression risk: existing users remain tied at 0.0 until an external recomputation runs, and backend/apps/api/rest/v0/member.py uses non-unique ordering (-calculated_score) that can make pagination unstable when ties occur.
- This is why the merge risk is high rather than moderate: multiple medium-to-high severity issues are high-confidence and affect core ranking correctness, not just test hygiene or refactoring details.
- Pay close attention to backend/apps/github/scoring.py, backend/apps/github/management/commands/github_update_users.py, backend/apps/github/migrations/0045_user_calculated_score.py, backend/apps/api/rest/v0/member.py, and backend/tests/apps/github/management/commands/github_update_users_test.py: scoring logic, ranking rollout behavior, pagination determinism, and weakened test coverage around the new score path.
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="docker-compose/local/compose.yaml">
<violation number="1" location="docker-compose/local/compose.yaml:70">
P2: Renaming the Postgres named volume changes volume identity, causing local DB state reset and orphaning previous developer data.</violation>
</file>
<file name="backend/apps/github/migrations/0045_user_calculated_score.py">
<violation number="1" location="backend/apps/github/migrations/0045_user_calculated_score.py:16">
P2: New score-based ranking is enabled without migration-time backfill, causing existing users to be tied at 0.0 until an external recomputation runs.</violation>
</file>
<file name="backend/apps/github/scoring.py">
<violation number="1" location="backend/apps/github/scoring.py:149">
P1: Consistency scoring incorrectly treats date presence as activity and ignores zero contribution counts, inflating consistency for inactive users.</violation>
</file>
<file name="backend/tests/apps/github/scoring_test.py">
<violation number="1" location="backend/tests/apps/github/scoring_test.py:329">
P2: Rounding test is self-referential and cannot reliably detect regressions to lower rounding precision.</violation>
</file>
<file name="backend/apps/api/rest/v0/member.py">
<violation number="1" location="backend/apps/api/rest/v0/member.py:93">
P2: Default paginated ordering uses only `-calculated_score` (non-unique), making page results unstable when scores tie.</violation>
</file>
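The pagination concern above can be addressed by appending a unique column as a final tiebreaker. A minimal sketch, assuming a Django-style order_by API; the helper and the specific tiebreaker columns are illustrative:

```python
def stable_ordering(primary: str) -> tuple[str, ...]:
    """Append deterministic tie-breakers to a user-supplied ordering key.

    Ending with a unique column ("-id" here) yields a total order, so pages
    cannot shuffle rows between requests when many scores tie at the same
    value (e.g. the 0.0 default before a backfill runs).
    """
    tiebreakers = ("-created_at", "-id")
    return (primary, *(t for t in tiebreakers if t != primary))


# Usage with a Django-style queryset:
#   users = User.objects.order_by(*stable_ordering("-calculated_score"))
```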
<file name="backend/apps/github/management/commands/github_update_users.py">
<violation number="1" location="backend/apps/github/management/commands/github_update_users.py:95">
P2: `distinct_project_count` is incorrectly populated from `project_leader` count, causing the same metric to be used for both breadth and leadership scoring.</violation>
<violation number="2" location="backend/apps/github/management/commands/github_update_users.py:132">
P1: Leadership aggregation incorrectly counts non-leader chapter/project memberships as leader counts, inflating calculated member scores.</violation>
</file>
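The P1 finding above boils down to grouping membership rows without the role column. A plain-Python sketch of role-aware aggregation; the row shape, Role enum, and key names are assumptions standing in for the EntityMember queryset:

```python
from collections import defaultdict
from enum import Enum


class Role(Enum):
    LEADER = "leader"
    MEMBER = "member"


def leadership_counts(rows: list[tuple[int, str, Role]]) -> dict[int, dict[str, int]]:
    """Aggregate (member_id, entity_type, role) rows into leader counts.

    Only LEADER rows feed the chapter/project leader keys; plain MEMBER
    rows are dropped so they cannot inflate a member's calculated score.
    """
    key_by_entity = {"chapter": "chapter_leader", "project": "project_leader"}
    result: dict[int, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for member_id, entity_type, role in rows:
        if role is not Role.LEADER:
            continue
        if key := key_by_entity.get(entity_type):
            result[member_id][key] += 1
    return result
```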
<file name="backend/tests/apps/github/management/commands/github_update_users_test.py">
<violation number="1" location="backend/tests/apps/github/management/commands/github_update_users_test.py:35">
P2: Command tests neutralize leadership input in all scenarios and never verify `calculated_score`, leaving the new score-based update path insufficiently covered.</violation>
<violation number="2" location="backend/tests/apps/github/management/commands/github_update_users_test.py:72">
P2: Test coverage was weakened: it no longer verifies `values()` is called with `"user_id"`, so aggregation grouping regressions may go undetected.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/tests/apps/api/rest/v0/member_test.py (1)
58-71: ⚠️ Potential issue | 🟡 Minor: Default ordering test updated correctly, but new ordering options lack test coverage.
The test for default ordering is correctly updated to expect -calculated_score. However, the parametrized test_list_members_with_ordering (lines 73-84) doesn't include the new calculated_score and -calculated_score ordering options that were added to the endpoint.
📝 Suggested fix to add missing ordering options to the parametrized test:

```diff
 @pytest.mark.parametrize(
     "ordering",
     [
+        "calculated_score",
+        "-calculated_score",
         "created_at",
         "-created_at",
         "updated_at",
         "-updated_at",
         "name",
         "-name",
         "login",
         "-login",
     ],
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In backend/tests/apps/api/rest/v0/member_test.py around lines 58-71, the parametrized test test_list_members_with_ordering is missing the new ordering options; update its parameter list to include "calculated_score" and "-calculated_score" so the list_members ordering behavior is fully covered. Locate the test_list_members_with_ordering function and add those two strings to the existing ordering values in the `@pytest.mark.parametrize` decorator, ensuring the test still asserts that mock_user_model.objects.order_by is called with the provided ordering and that mock_filters.filter is applied to the ordered queryset.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/apps/github/management/commands/github_update_users.py`:
- Around line 127-142: The aggregation currently groups only by member_id and
entity_type_id, so MEMBER rows for chapter/project get mapped to leader keys;
update the query in memberships
(EntityMember.objects.filter(...).values(...).annotate(...)) to include "role"
in .values(...) and in the loop check item["role"] == EntityMember.Role.LEADER
before assigning leader counts (or alternately add a conditional filter for
role==LEADER when entity_type_id corresponds to chapter/project). Use the
existing symbols memberships, EntityMember.Role, item["role"], entity_type_map
and result to implement the fix.
- Around line 92-103: The call to calculate_member_score wrongly uses
leadership_id.get("project_leader") for both distinct_project_count and
project_leader_count, causing a single leadership record to affect both breadth
and leadership signals; update the distinct_project_count argument in the
calculate_member_score call (near the user.calculated_score assignment) to use a
dedicated project-participation source (e.g., a distinct_project_counts mapping
or repo_item/project_participation key) or set it to 0 until such data exists,
leaving project_leader_count as leadership_id.get("project_leader", 0).
- Around line 84-86: The progress label uses mixed bases: numerator uses the
global index (idx + offset + 1) while denominator uses the remaining count
(active_users_count - offset), causing impossible values when --offset is set;
update the prefix in the loop that iterates active_users (the for idx, user in
enumerate(active_users[offset:].iterator()) block) so both sides use the same
basis — e.g., make the denominator active_users_count (or alternatively make the
numerator start at 1 by using idx + 1 and keep denominator active_users_count -
offset) so the prefix calculation (currently assigned to prefix) prints
consistent "X of Y" values.
In `@docker-compose/local/compose.yaml`:
- Line 70: Revert the local-branch volume rename by replacing the volume
reference "db-data-issue-4200" with the canonical "db-data" in the
docker-compose volume mapping; locate the occurrence(s) of "db-data-issue-4200"
in compose.yaml (the volume mapping lines) and change them back to "db-data" so
main uses the standard shared volume name.
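The progress-label fix described above reduces to using one basis on both sides of "X of Y". A sketch (function name hypothetical):

```python
def progress_prefix(idx: int, offset: int, total: int) -> str:
    """Render "X of Y" with both sides on the same (global) basis.

    Mixing a global numerator (idx + offset + 1) with a remaining-count
    denominator (total - offset) can print impossible values like
    "150 of 90" when --offset is set.
    """
    return f"{idx + offset + 1} of {total}"
```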
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b4198979-ea41-49d9-ad4f-09e24f54bb64
⛔ Files ignored due to path filters (1)
frontend/src/types/__generated__/graphql.ts is excluded by !**/__generated__/**
📒 Files selected for processing (13)
- backend/apps/api/rest/v0/member.py
- backend/apps/github/api/internal/nodes/user.py
- backend/apps/github/index/registry/user.py
- backend/apps/github/management/commands/github_update_users.py
- backend/apps/github/migrations/0045_user_calculated_score.py
- backend/apps/github/models/mixins/user.py
- backend/apps/github/models/user.py
- backend/apps/github/scoring.py
- backend/tests/apps/api/rest/v0/member_test.py
- backend/tests/apps/github/management/commands/github_update_users_test.py
- backend/tests/apps/github/models/mixins/user_test.py
- backend/tests/apps/github/scoring_test.py
- docker-compose/local/compose.yaml
4 issues found across 6 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="backend/apps/github/management/commands/github_update_users.py">
<violation number="1" location="backend/apps/github/management/commands/github_update_users.py:95">
P2: `distinct_project_count` is hardcoded to 0, which disables project-breadth scoring for all users and causes a ranking regression.</violation>
<violation number="2" location="backend/apps/github/management/commands/github_update_users.py:143">
P1: Committee membership totals are overwritten (not accumulated) when a user has both committee leader and member roles, producing incorrect ranking inputs.</violation>
</file>
<file name="backend/apps/api/rest/v0/member.py">
<violation number="1" location="backend/apps/api/rest/v0/member.py:94">
P2: Pagination ordering is still nondeterministic because the new secondary `-created_at` key is not unique; add a final unique tiebreaker.</violation>
</file>
<file name="backend/tests/apps/github/scoring_test.py">
<violation number="1" location="backend/tests/apps/github/scoring_test.py:329">
P2: Rounding test was weakened from a numeric precision assertion to a string-format check, reducing regression protection for the 4-decimal contract.</violation>
</file>
Actionable comments posted: 2
♻️ Duplicate comments (1)
backend/apps/github/management/commands/github_update_users.py (1)
92-103: ⚠️ Potential issue | 🟠 Major: distinct_project_count is still effectively disabled.
Line 95 hard-codes the project-breadth input to 0, so _score_breadth() only reflects repository count in persisted scores. That under-ranks members whose impact spans multiple OWASP projects rather than many repos. Please feed a real distinct-project metric here, or remove this factor until the data exists.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In backend/apps/github/management/commands/github_update_users.py around lines 92-103, the call to calculate_member_score is passing distinct_project_count=0, which disables project breadth; replace the hard-coded 0 with the real distinct-project metric (for example user.distinct_project_count or a lookup like project_counts.get(user.id, 0) if per-user project counts are computed elsewhere) so _score_breadth() can consider multi-project impact, or if that metric truly doesn't exist yet, remove/neutralize the factor by updating the call and/or the calculate_member_score signature to accept None or omit the parameter.
🧹 Nitpick comments (1)
backend/tests/apps/github/management/commands/github_update_users_test.py (1)
35-35:_get_leadership_data()still has no direct coverage.These patches stub the new aggregation path out of every command test, so bugs in the scoring inputs can slip through unnoticed. A small focused test for mixed committee
LEADER/MEMBERrows would lock down the contract here.Also applies to: 90-90, 137-137, 177-177, 215-215, 246-246
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/apps/github/management/commands/github_update_users.py`:
- Around line 127-145: The loop that builds result from memberships overwrites
committee counts when the same member has multiple role buckets; update the
logic in the for loop that iterates over memberships (the block referencing
memberships, entity_type_map, EntityMember.Role.LEADER, and role_key
"committee_member") so that when role_key == "committee_member" you
accumulate/add item["count"] to any existing
result[item["member_id"]]["committee_member"] instead of assigning, and for
other role_keys preserve the current assign behavior; ensure you initialize the
key to 0 if missing before adding.
In `@backend/apps/github/scoring.py`:
- Around line 123-133: The loop over contribution_data must validate and
sanitize each count before using it: in the function that iterates
contribution_data.items() (the block using date_str, count, datetime.strptime,
RECENCY_HALF_LIFE_DAYS and accumulating into total_score) ensure count is
coerced to a numeric (int/float) and skip or treat as 0 when it's None, a
non-numeric string, or negative; e.g., try to cast to float and continue on
ValueError/TypeError or clamp negatives to 0 so decay math uses only
non-negative numbers; after the loop, guard the math.log1p(total_score) call by
ensuring total_score is >= 0 (or at least > -1) and clamp to 0 if necessary so a
bad JSON entry cannot cause log1p to throw.
---
Duplicate comments:
In `@backend/apps/github/management/commands/github_update_users.py`:
- Around line 92-103: The call to calculate_member_score is passing
distinct_project_count=0 which disables project breadth; replace the hard-coded
0 with the real distinct-project metric (for example use
user.distinct_project_count or a lookup like project_counts.get(user.id, 0) if
you compute per-user project counts elsewhere) so _score_breadth() can consider
multi-project impact, or if that metric truly doesn't exist yet
remove/neutralize the factor by updating the call and/or calculate_member_score
signature to accept None/omit the parameter.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 84549d73-b991-4de8-9387-8cfc67d186ab
📒 Files selected for processing (6)
- backend/apps/api/rest/v0/member.py
- backend/apps/github/management/commands/github_update_users.py
- backend/apps/github/scoring.py
- backend/tests/apps/api/rest/v0/member_test.py
- backend/tests/apps/github/management/commands/github_update_users_test.py
- backend/tests/apps/github/scoring_test.py
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/tests/apps/api/rest/v0/member_test.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/apps/github/management/commands/github_update_users.py`:
- Around line 64-68: user_release_counts is counting all authored releases while
contributions_count is OWASP-scoped, causing calculated_score mismatch; change
the User queryset that builds user_release_counts to apply the same OWASP filter
used for contributions_count (i.e., restrict created_releases to the OWASP scope
rather than just created_releases__isnull) so the annotated release_count only
counts OWASP releases (update the
User.objects.filter(...).annotate(...).values_list(...) query to include the
same OWASP filter condition on created_releases as used when computing
contributions_count).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 023bea08-7c5f-47fc-8373-d569f85ac004
📒 Files selected for processing (1)
backend/apps/github/management/commands/github_update_users.py
2 issues found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="backend/tests/apps/github/management/commands/github_update_users_test.py">
<violation number="1">
P2: `github_update_users` tests were updated without covering new `calculated_score` behavior, leaving score wiring regressions in the command undetected.</violation>
</file>
<file name="backend/apps/github/scoring.py">
<violation number="1" location="backend/apps/github/scoring.py:125">
P2: Recency scoring accepts NaN/Infinity counts and can emit non-finite member scores.</violation>
</file>
♻️ Duplicate comments (1)
backend/apps/github/scoring.py (1)
157-163: ⚠️ Potential issue | 🟠 Major: Harden consistency scoring against non-numeric contribution_data values.
On line 158, raw count is compared directly (count <= 0). If a value is None or a string, this raises TypeError and can abort scoring for the sync run. Align this path with _score_recency by coercing to float first.
Proposed fix:

```diff
-    for date_str, count in contribution_data.items():
-        if count <= 0:
-            continue
+    for date_str, raw_count in contribution_data.items():
+        try:
+            numeric_count = float(raw_count)
+        except (ValueError, TypeError):
+            continue
+
+        if numeric_count <= 0:
+            continue
         try:
             date = datetime.strptime(date_str, "%Y-%m-%d").replace(tzinfo=UTC)
         except (ValueError, TypeError):
             continue
-        if date >= one_year_ago:
+        if one_year_ago <= date <= now:
             year, week, _ = date.isocalendar()
             active_weeks.add((year, week))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In backend/apps/github/scoring.py around lines 157-163, the loop over contribution_data currently does a raw numeric comparison (if count <= 0) which raises TypeError for non-numeric values; change the logic in the loop that iterates contribution_data (the block using date_str, count) to coerce count to float (like _score_recency does), catching ValueError/TypeError around the float conversion and skipping entries when conversion fails or the result is <= 0, so non-numeric or None values are safely ignored and scoring continues.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@backend/apps/github/scoring.py`:
- Around line 157-163: The loop over contribution_data currently does a raw
numeric comparison (if count <= 0) which raises TypeError for non-numeric
values; change the logic in the loop that iterates contribution_data (the block
using date_str, count) to coerce count to float (like _score_recency does),
catching ValueError/TypeError around the float conversion and skipping entries
when conversion fails or result is <= 0, so non-numeric or None values are
safely ignored and scoring continues.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9c8a7e51-6c68-4a85-89f7-961a005d53b9
📒 Files selected for processing (2)
- backend/apps/github/scoring.py
- backend/tests/apps/github/management/commands/github_update_users_test.py
2 issues found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="backend/tests/apps/github/management/commands/github_update_users_test.py">
<violation number="1" location="backend/tests/apps/github/management/commands/github_update_users_test.py:328">
P2: The new leadership test is too weak: it only checks `calculated_score > 0`, which can be satisfied by contributions/repo breadth alone, so regressions that ignore leadership data would still pass.</violation>
</file>
<file name="backend/tests/apps/github/scoring_test.py">
<violation number="1" location="backend/tests/apps/github/scoring_test.py:330">
P2: The rounding test is weakened by default relative tolerance in `pytest.approx`, allowing non-4-decimal results to pass.</violation>
</file>
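On the tolerance issue flagged above: pytest.approx defaults to a relative tolerance (1e-6 of the expected value), so for scores much above 1 the relative window can exceed the 5e-5 rounding quantum and let coarser rounding pass. A sketch of an absolute-tolerance check that pins the 4-decimal contract (helper name hypothetical):

```python
import math


def is_rounded_score(computed: float, expected_exact: float) -> bool:
    """Check the 4-decimal rounding contract with an absolute tolerance.

    Comparing the computed score against round(expected_exact, 4) with an
    abs_tol far below 5e-5 fails both for unrounded extra precision and for
    coarser 2- or 3-decimal rounding, unlike a relative-tolerance or
    self-referential check.
    """
    return math.isclose(computed, round(expected_exact, 4), abs_tol=1e-9)
```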
Force-pushed d47ef86 to 59db468
1 issue found across 3 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="backend/tests/apps/github/models/user_test.py">
<violation number="1" location="backend/tests/apps/github/models/user_test.py:236">
P2: Index test is incomplete: it checks only index name and does not verify `-calculated_score` descending field configuration.</violation>
</file>
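The index finding above suggests asserting the descending field, not just the name. A dependency-free sketch using a minimal stand-in for django.db.models.Index, which stores descending order as a leading "-" in its fields list:

```python
class Index:
    """Minimal stand-in for django.db.models.Index (name plus ordered fields)."""

    def __init__(self, *, fields: list[str], name: str) -> None:
        self.fields = list(fields)
        self.name = name


# Hypothetical index mirroring the migration in this PR.
calculated_score_index = Index(fields=["-calculated_score"], name="idx_calculated_score")


def check_index(index: Index) -> None:
    """Assert the name *and* the descending field, not just the name.

    A name-only test would keep passing even if the index silently
    became ascending, defeating the -calculated_score ordering it backs.
    """
    assert index.name == "idx_calculated_score"
    assert index.fields == ["-calculated_score"], "index must be descending"
```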
Force-pushed 8d9bc99 to 2f7be3b
Force-pushed 2f7be3b to d692892
Force-pushed 0f50239 to 069f12e
Pull request overview
This PR introduces a composite, calculated-score–based member ranking for OWASP Nest, replacing the previous contribution-count–centric ordering with a more holistic scoring model (contributions, leadership, breadth, releases, recency, consistency) and propagating it through API/search layers.
Changes:
- Add calculated_score to the GitHub User model (with DB index + migration) and expose it to GraphQL and Algolia indexing.
- Implement a scoring module (apps.github.scoring) with unit tests, and update the GitHub user sync command to compute/persist the score.
- Update REST member listing to default-order by -calculated_score with deterministic tie-breakers.
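The signals listed in this overview could combine along these lines; the weights, the log1p damping, and the 4-decimal rounding shown here are illustrative assumptions, not the module's actual formula:

```python
import math

# Illustrative weights; the scoring module defines its own values.
WEIGHTS = {
    "contributions": 0.30,
    "leadership": 0.25,
    "breadth": 0.15,
    "releases": 0.10,
    "recency": 0.10,
    "consistency": 0.10,
}


def composite_score(signals: dict[str, float]) -> float:
    """Weighted sum of log-damped signals, rounded to 4 decimals.

    log1p keeps one huge raw count (e.g. thousands of contributions) from
    drowning out every other signal; negative inputs are clamped to zero.
    """
    total = sum(
        weight * math.log1p(max(signals.get(name, 0.0), 0.0))
        for name, weight in WEIGHTS.items()
    )
    return round(total, 4)
```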
Reviewed changes
Copilot reviewed 14 out of 15 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| frontend/src/types/__generated__/graphql.ts | Adds calculatedScore to generated GraphQL user types. |
| docker-compose/local/compose.yaml | Renames local DB volume (potential dev workflow impact). |
| backend/apps/github/scoring.py | New scoring algorithm implementation. |
| backend/tests/apps/github/scoring_test.py | Unit tests covering scoring components and weights. |
| backend/apps/github/models/user.py | Adds calculated_score field + descending index. |
| backend/apps/github/migrations/0045_user_calculated_score.py | Migration adding field + index. |
| backend/apps/github/models/mixins/user.py | Adds idx_calculated_score for indexing. |
| backend/apps/github/index/registry/user.py | Adds idx_calculated_score field + uses it for Algolia ranking. |
| backend/apps/github/api/internal/nodes/user.py | Exposes calculated_score via internal GraphQL node fields. |
| backend/apps/github/management/commands/github_update_users.py | Computes and persists calculated_score during sync. |
| backend/tests/apps/github/management/commands/github_update_users_test.py | Updates command tests for iterator/batching and scoring integration. |
| backend/tests/apps/github/models/user_test.py | Tests default/storage + presence of calculated_score index. |
| backend/tests/apps/github/models/mixins/user_test.py | Ensures idx_calculated_score is indexable. |
| backend/apps/api/rest/v0/member.py | Adds ordering by calculated_score and deterministic default ordering. |
| backend/tests/apps/api/rest/v0/member_test.py | Updates member listing tests for new default ordering behavior. |
Files not reviewed (1)
- frontend/src/types/__generated__/graphql.ts: Language not supported
```python
leadership_id = leadership_data.get(user.id, {})
user.calculated_score = calculate_member_score(
    contributions_count=user.contributions_count,
    distinct_repository_count=repo_item.get("repo_count", 0),
    distinct_project_count=leadership_id.get("project_leader", 0),
    release_count=user_release_counts.get(user.id, 0),
    chapter_leader_count=leadership_id.get("chapter_leader", 0),
    project_leader_count=leadership_id.get("project_leader", 0),
    committee_member_count=leadership_id.get("committee_member", 0),
```
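Relative to the snippet above, where leadership_id.get("project_leader") feeds both distinct_project_count and project_leader_count, a sketch of keeping the two signals separate; project_counts is a hypothetical per-user map of distinct projects contributed to, computed independently of leadership roles:

```python
def score_inputs(
    user_id: int,
    contributions_count: int,
    repo_item: dict,
    leadership: dict,
    release_counts: dict,
    project_counts: dict,
) -> dict:
    """Assemble calculate_member_score kwargs with breadth kept separate.

    distinct_project_count comes from project participation (project_counts),
    not from the leader count, so one leadership record no longer moves two
    independent scoring signals at once.
    """
    return {
        "contributions_count": contributions_count,
        "distinct_repository_count": repo_item.get("repo_count", 0),
        "distinct_project_count": project_counts.get(user_id, 0),
        "release_count": release_counts.get(user_id, 0),
        "chapter_leader_count": leadership.get("chapter_leader", 0),
        "project_leader_count": leadership.get("project_leader", 0),
        "committee_member_count": leadership.get("committee_member", 0),
    }
```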
```python
if count <= 0:
    continue
```
```diff
 result = list_members(mock_request, mock_filters, ordering=ordering)

-mock_user_model.objects.order_by.assert_called_once_with(ordering)
+mock_user_model.objects.order_by.assert_called_once_with(ordering, "-created_at", "-id")
```
Proposed change
This PR improves the OWASP Nest member ranking system by introducing a calculated score–based algorithm that better reflects the overall impact of a community member within the OWASP ecosystem.
Previously, member ranking relied primarily on the number of contributions, which did not capture other meaningful aspects of participation such as leadership roles, breadth of involvement across OWASP initiatives, and contribution recency.
This change introduces a
calculated_scorefield that aggregates multiple community-related signals to produce a more balanced and meaningful ranking.Resolves: #4200
Checklist
- Ran make check-test locally: all warnings addressed, tests passed