An end-to-end system for turning recurring product meetings into structured project memory, accountable action items, and context-aware Minutes of Meeting (MoM).
This project is not just an AI note taker. The goal is to build an
AI Product Manager that remembers what happened in previous meetings, tracks
open questions and deadlines, and helps teams review delivery continuity over
time.
If you want the fastest path through the project, use this order:
- README.md
- docs/PROJECT_STATUS.md
- docs/REVIEW_GUIDE.md
- docs/SUBMISSION_GUIDE.md
- benchmark/scenarios/onboarding_growth_initiative/transcripts/README.md
If you only need the essentials:
- What the project is and how to run it: README.md
- What is working and what is still in progress: docs/PROJECT_STATUS.md
- Where the code and dataset links are: docs/SUBMISSION_GUIDE.md
- How to use the benchmark dataset and read the score: benchmark/scenarios/onboarding_growth_initiative/transcripts/README.md
If you want to share just one link, this README is intended to be enough on its own.
- Repository: https://github.com/KumarSashank/AI-Product-Manager
- ZIP download: https://github.com/KumarSashank/AI-Product-Manager/archive/refs/heads/main.zip
Prerequisites:
- Node.js 20+
- pnpm 8+
- Docker Desktop or Docker Engine with Compose
- OPENAI_API_KEY
Minimal Docker env:

```
OPENAI_API_KEY=sk-your-openai-key
```

Run the application:

```bash
cp .env.docker.example .env.docker
docker compose --env-file .env.docker up --build -d
```

Open:

- Web app: http://localhost:3001
- API health: http://localhost:3002/api/v1/health
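Once the containers are up, a quick way to confirm the backend is alive from a terminal (a sketch; assumes the default port above and that `curl` is installed):

```shell
# Ping the backend health endpoint; print a notice instead of failing if the stack is down
curl -fsS http://localhost:3002/api/v1/health 2>/dev/null \
  || echo "API not reachable yet (is the Docker stack running?)"
```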
Use this path for the most reliable walkthrough:
- Start the app with Docker.
- Create a project.
- Upload a transcript.
- Generate the Minutes of Meeting.
- Review extracted items and accountability.
- Upload the next transcript into the same project to show continuity.
Direct dataset links:
- Scenario folder: https://github.com/KumarSashank/AI-Product-Manager/tree/main/benchmark/scenarios/onboarding_growth_initiative
- Transcripts folder: https://github.com/KumarSashank/AI-Product-Manager/tree/main/benchmark/scenarios/onboarding_growth_initiative/transcripts
- Scenario JSON: https://github.com/KumarSashank/AI-Product-Manager/blob/main/benchmark/scenarios/onboarding_growth_initiative/scenario.json
The benchmark transcript files are:
- 001_week1_kickoff.txt
- 002_week2_status.txt
- 003_week3_scope_risk.txt
- 004_week4_replan.txt
- 005_week5_launch_readiness.txt
Benchmark typecheck:
```bash
pnpm benchmark:typecheck
```

Run our stateful method:

```bash
pnpm benchmark:longitudinal -- benchmark/scenarios/onboarding_growth_initiative/scenario.json
```

Run the transcript-only baseline:

```bash
pnpm benchmark:longitudinal -- --system transcript_only benchmark/scenarios/onboarding_growth_initiative/scenario.json
```

Compare both methods:

```bash
pnpm benchmark:compare
```

Benchmark reports are written to:

- Host runs: benchmark/reports/
- Docker backend container runs: /app/benchmark/reports/
The benchmark compares:
- `current_system`: our method, with project memory and accountability carry-forward
- `transcript_only`: the normal baseline, without prior project memory
Expected result for the built-in scenario:
- `current_system`: 38 passed / 0 failed
- `transcript_only`: 32 passed / 6 failed
Interpretation:
- higher `passed` is better
- lower `failed` is better
- if `transcript_only` matches or beats `current_system`, the longitudinal reasoning has regressed
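The gap is easier to read as pass rates; this is plain arithmetic over the expected numbers above, nothing project-specific:

```shell
# Expected pass rates: 38/38 checks vs 32/38 checks
awk 'BEGIN { printf "current_system:  %.1f%%\n", 100*38/38 }'   # prints "current_system:  100.0%"
awk 'BEGIN { printf "transcript_only: %.1f%%\n", 100*32/38 }'   # prints "transcript_only: 84.2%"
```

A six-check gap on this scenario is roughly a 16-percentage-point difference in pass rate.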
Most reliable:
- transcript upload
- contextual MoM generation
- project-level action items
- benchmark evaluation
Still in progress:
- bot-based meeting joining reliability
- Chrome extension multi-speaker transcript attribution
- audio transcription experimentation
The system supports three capture paths:
- **Transcript upload**: the most reliable path today. Upload an existing transcript and generate a contextual MoM, extracted items, and longitudinal accountability.
- **Join with bot**: a Playwright-based Google Meet bot that can join meetings and capture live captions. Useful for demos and experimentation, but still reliability-limited by meeting permissions, waiting rooms, and Google auth flows.
- **Chrome extension**: a browser-based Meet capture path under active development. Audio capture is supported; speaker-attributed multi-person transcript extraction is still being improved.
Typical AI meeting assistants summarize a single meeting well, but they often lose continuity across weeks. This system is designed to answer harder project questions:
- What was promised last week, and is it still open?
- Which owner missed a deadline without giving a status update?
- Which product question is still unresolved?
- Did the team actually close the launch blocker they discussed earlier?
- Is the project becoming more ready, or just generating more meeting notes?
Core capabilities:

- Persistent meeting memory across a project
- Context-aware MoM generation
- Structured extraction of action items, decisions, blockers, and open questions
- Accountability tracking by owner, team, priority, and due date
- A benchmark harness that compares the stateful system against a transcript-only baseline
What is strongest today:
- Transcript upload workflow
- Context-aware MoM generation
- Project-level item tracking and accountability
- Benchmark and research evaluation scaffolding
- Dockerized local setup
What is still in progress:
- Bot-based meeting joining reliability
- Chrome extension multi-speaker attribution
- Audio-transcription experimentation beyond transcript/caption capture
Usage note:
- For the most reliable flow, use transcript upload.
- Treat bot join and extension capture as experimental/preview features.
See docs/PROJECT_STATUS.md for a fuller summary of what has been achieved and what is still being improved.
```
AI-Product-Manager/
├── packages/
│   ├── ai-backend/        Fastify API, AI pipelines, database access
│   ├── web/               Next.js dashboard and project workspace
│   ├── bot-runner/        Playwright-based Google Meet bot
│   ├── chrome-extension/  Browser-based Meet capture
│   └── shared/            Shared schemas, contracts, and types
├── benchmark/             Longitudinal benchmark harness
├── docs/                  Project, research, and operational docs
├── docker/                Dockerfiles and runtime setup
└── scripts/               Utility scripts
```
Tech stack:

- Next.js and React for the dashboard
- Fastify and TypeScript for the backend
- PostgreSQL and Drizzle for persistence
- OpenAI for MoM generation and structured extraction
- Playwright for the bot-based Meet capture path
- Chrome Extension APIs for in-browser Meet capture
Prerequisites:

- Node.js 20+
- pnpm 8+
- Docker Desktop or Docker Engine with Compose support
- OPENAI_API_KEY
This is the easiest path for local setup and evaluation.
```bash
cp .env.docker.example .env.docker
docker compose --env-file .env.docker up --build -d
```

Then open:

- Web app: http://localhost:3001
- API: http://localhost:3002
- Health check: http://localhost:3002/api/v1/health
Useful commands:
```bash
pnpm docker:up
pnpm docker:down
docker compose --env-file .env.docker logs -f
docker compose --env-file .env.docker up -d --build ai-backend
docker compose --env-file .env.docker up -d --build web
```

Full guide: docs/DOCKER_RUN.md
```bash
pnpm install
pnpm dev
```

Common package-level commands:

```bash
pnpm --filter @meeting-ai/ai-backend dev
pnpm --filter @meeting-ai/web dev
pnpm --filter @meeting-ai/bot-runner dev
```

Local environment details: docs/ENVIRONMENT.md
Required:

| Variable | Required | Purpose |
|---|---|---|
| `OPENAI_API_KEY` | Yes | Generates contextual MoM and structured extraction |

Auth secrets:

| Variable | Required | Purpose |
|---|---|---|
| `JWT_SECRET` | No for local Docker | JWT signing secret |
| `COOKIE_SECRET` | No for local Docker | Cookie signing secret |

Optional integrations:

| Variable | Required | Purpose |
|---|---|---|
| `GOOGLE_EMAIL` | Optional | Used by the bot-runner for Google auth |
| `GOOGLE_PASSWORD` | Optional | Used by the bot-runner for Google auth |
| `DEEPGRAM_API_KEY` | Optional | Only needed for audio transcription experimentation |
Minimal Docker env:

```
OPENAI_API_KEY=sk-your-openai-key
```

You can leave the dev-only `JWT_SECRET` and `COOKIE_SECRET` defaults as-is for local Docker runs.
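Putting the tables above together, a minimal `.env.docker` for local evaluation might look like the sketch below. The secret values are placeholders made up for illustration, not defaults shipped in the repo:

```shell
# .env.docker sketch -- only OPENAI_API_KEY is strictly required locally
OPENAI_API_KEY=sk-your-openai-key
# Dev-only placeholder secrets; fine for local Docker, never for production
JWT_SECRET=local-dev-jwt-secret
COOKIE_SECRET=local-dev-cookie-secret
```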
Repository:

- GitHub: https://github.com/KumarSashank/AI-Product-Manager

Download options:

- Clone: `git clone https://github.com/KumarSashank/AI-Product-Manager.git`
- Download ZIP: https://github.com/KumarSashank/AI-Product-Manager/archive/refs/heads/main.zip
The evaluation dataset used by this project is included in the repository under
benchmark/scenarios/.
Primary benchmark dataset:
- Scenario definition: benchmark/scenarios/onboarding_growth_initiative/scenario.json
- Transcript files: benchmark/scenarios/onboarding_growth_initiative/transcripts/
GitHub paths:
- Scenario folder: https://github.com/KumarSashank/AI-Product-Manager/tree/main/benchmark/scenarios/onboarding_growth_initiative
- Transcripts folder: https://github.com/KumarSashank/AI-Product-Manager/tree/main/benchmark/scenarios/onboarding_growth_initiative/transcripts
- Scenario JSON: https://github.com/KumarSashank/AI-Product-Manager/blob/main/benchmark/scenarios/onboarding_growth_initiative/scenario.json
See docs/SUBMISSION_GUIDE.md for a submission-ready
mapping of code, dataset, and benchmark assets, and
benchmark/scenarios/onboarding_growth_initiative/transcripts/README.md
for a short dataset walkthrough.
Recommended walkthrough:
- Start the app with Docker.
- Sign in to the web app.
- Create a project.
- Upload a transcript through the project workspace.
- Review the generated MoM, extracted items, and project item board.
- Open the next meeting in the same project to show continuity across meetings.
Stable demo path:
Projects -> Upload transcript -> Generate MoM -> Review action items
Preview/experimental paths:
- Join with bot
- Chrome extension capture
See docs/REVIEW_GUIDE.md for a guided walkthrough.
```bash
pnpm test
pnpm typecheck
pnpm lint
pnpm format:check
pnpm pre-push
```

Targeted package checks:

```bash
pnpm --filter @meeting-ai/ai-backend test
pnpm --filter @meeting-ai/ai-backend typecheck
pnpm --filter @meeting-ai/web exec tsc --noEmit
pnpm benchmark:typecheck
```

Expected outcome:

- `pnpm test` should pass
- `pnpm typecheck` should pass
- `pnpm format:check` should pass
- `pnpm pre-push` should pass
- `pnpm lint` may still show warnings in some areas, but should not fail
More detail: docs/TESTING.md
This repo includes a longitudinal benchmark harness that compares:
- `current_system`: the full stateful AI Product Manager workflow with project memory
- `transcript_only`: a baseline that only uses the current meeting transcript
```bash
pnpm benchmark:typecheck
pnpm benchmark:longitudinal
pnpm benchmark:compare
```

Run against the built-in scenario:

```bash
pnpm benchmark:longitudinal -- benchmark/scenarios/onboarding_growth_initiative/scenario.json
pnpm benchmark:longitudinal -- --system transcript_only benchmark/scenarios/onboarding_growth_initiative/scenario.json
```

Report locations:

- Host output: benchmark/reports
- Docker backend container output: /app/benchmark/reports
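If a benchmark run happened inside the backend container, one way to pull the reports back to the host is `docker compose cp` (a sketch; it assumes the backend service is named `ai-backend`, as in the Docker commands elsewhere in this README, and the destination directory name is arbitrary):

```shell
# Copy benchmark reports from the running backend container to the host
docker compose --env-file .env.docker cp \
  ai-backend:/app/benchmark/reports ./benchmark/reports-from-docker \
  2>/dev/null || echo "copy failed (is the ai-backend container running?)"
```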
Expected result for the built-in comparison scenario:
- `current_system`: 38 passed / 0 failed
- `transcript_only`: 32 passed / 6 failed
This benchmark is important because it demonstrates that persistent project memory outperforms raw transcript-only summarization on continuity, accountability, and final project-state checks.
More detail: benchmark/README.md
- Built a multi-package product around recurring-meeting intelligence
- Added contextual MoM generation grounded in prior project state
- Added action-item continuity, accountability, and due-date awareness
- Added a Notion-like item workspace for reviewing and updating work
- Dockerized the stack for easier local setup
- Added a benchmark harness for longitudinal evaluation
- Framed the system as a researchable AI Product Manager problem, not just a meeting-summary tool
- Improve bot reliability under Google auth and waiting-room edge cases
- Improve extension-based multi-speaker caption separation
- Improve audio-to-transcript experimentation beyond raw capture
- Expand the benchmark dataset beyond the initial scenario pack
- Add stronger evidence trails and inspection tooling for why the system marked an item as open, resolved, blocked, or overdue
Start here with the most useful docs:
| Document | Purpose |
|---|---|
| docs/PROJECT_STATUS.md | Achievements, limitations, and roadmap |
| docs/REVIEW_GUIDE.md | Guided walkthrough of the product |
| docs/SUBMISSION_GUIDE.md | Code and dataset access for submission |
| benchmark/scenarios/onboarding_growth_initiative/transcripts/README.md | How to use the benchmark dataset |
| docs/DOCKER_RUN.md | How to run the full app locally |
| docs/ENVIRONMENT.md | Environment variable setup |
| docs/ARCHITECTURE.md | System design and data flow |
| docs/TESTING.md | Test strategy and commands |
| benchmark/README.md | Longitudinal benchmark harness |
| docs/AI_PRODUCT_MANAGER_RESEARCH_PLAN.md | Product and research thesis |
| docs/PAPER_OUTLINE.md | Research paper outline |
| Package | Description |
|---|---|
| packages/ai-backend/README.md | Backend API and AI pipeline |
| packages/web/README.md | Web dashboard and project workspace |
| packages/bot-runner/README.md | Bot capture path |
| packages/chrome-extension/README.md | Browser capture path |
| packages/shared/README.md | Shared contracts and schemas |
If you are reviewing this project academically, the strongest way to evaluate it is not only by “does it summarize one meeting well?” but by “does it preserve continuity across a series of meetings and improve project accountability over time?”