Product Launch Post-Mortem AI Prompts for PMs
Most product launches are followed by a debrief meeting. The team gathers, shares what went well and what did not, and agrees to “do better next time.” Then everyone moves on to the next launch. The lessons are never captured, never applied, and the same mistakes happen again.
A real launch post-mortem is not a debrief. It is a structured analysis designed to extract actionable insights that improve future launches. The difference is rigor. Debriefs are conversations. Post-mortems are investigations.
AI Unpacker provides prompts designed to help product managers conduct post-mortems that produce genuine learning, not just documentation of what happened.
TL;DR
- Post-mortems should happen within 2 weeks of launch, while memories are fresh.
- The goal is improvement, not blame.
- Patterns across multiple launches reveal systemic issues.
- Action items without owners and deadlines are wishes.
- The best post-mortems are uncomfortable — they surface truths teams avoid.
- Insights without dissemination are worthless.
Introduction
Product launches are complex. They involve dozens of teams, hundreds of tasks, and countless decisions. When they go wrong, the reasons are often invisible — buried in miscommunication, underestimating complexity, or organizational dysfunction. When they go right, the reasons are equally invisible — attributed to good luck or hard work rather than systematic excellence.
The post-mortem is the tool that makes launch performance visible. It turns anecdote into evidence, opinion into insight, and “we should have known” into “here is what we will do differently.”
1. Post-Mortem Framework Development
A post-mortem without a framework produces a discussion, not an analysis. The framework structures the investigation and ensures nothing is missed.
Prompt for Post-Mortem Framework Design
Design a post-mortem framework for this launch.
Launch: AI-powered customer health scoring feature
Launch date: 6 weeks ago
Launch type: Feature launch within existing platform
Initial results: Mixed -- adoption higher than expected among power users, lower than expected among casual users
What I know about the launch:
- Went mostly on time (2-week delay on one component)
- Sales enablement was incomplete at launch
- Beta customer feedback was positive but not incorporated until late
- Competitor launched similar feature 2 weeks after us
- Initial customer response has been enthusiastic from some, confused from others
What I want to learn:
1. Did we launch at the right time?
2. Was our positioning correct?
3. Did we invest in the right areas?
4. What should we do differently next time?
Post-mortem framework requirements:
1. What went well (factual, evidence-based)
2. What did not go well (factual, not blame-based)
3. What was surprising (expectations vs. reality)
4. What we would do differently (specific, actionable)
5. What we will commit to changing (owner + timeline)
Participant considerations:
- Who should attend? (launch team? extended stakeholders?)
- How to encourage honest input vs. political input?
- How to prevent groupthink?
Tasks:
1. Design post-mortem structure with specific sections
2. Create facilitation approach to encourage honesty
3. Define output format (document? presentation? both?)
4. Set timeline for completion and dissemination
Generate complete post-mortem framework.
2. Launch Metric Analysis
Launches generate data. The challenge is analyzing that data to understand what actually happened, not just what the data says on the surface.
Prompt for Launch Metrics Analysis
Analyze launch metrics to understand what happened.
Launch: AI-powered customer health scoring feature
Timeline: Launched 6 weeks ago
Available metrics:
- Feature activation rate: 45% (of eligible customers activated within 2 weeks)
- Daily active users of feature: 180 (growing 8% week-over-week)
- Support tickets related to feature: 67 (higher than expected)
- Net Promoter Score change: +4 points (positive but lower than target of +8)
- Customer feedback sentiment: 60% positive, 25% neutral, 15% negative
- Sales enablement completion: 70% (completed training by launch day)
- Beta customer feature usage: 12 of 15 beta customers actively using
What I expected:
- Activation rate: 50% (we targeted 50%+)
- NPS impact: +8 points (ambitious target)
- Support tickets: 30-40 (we planned for this)
- Sales training: 100% (we planned for complete training at launch)
What I do not know:
- Why activation is lower than target
- Whether support tickets are usage issues or education issues
- Whether NPS impact will grow over time or plateau
Analysis tasks:
1. Calculate actual vs. expected for each metric
2. Identify correlations (does sales training completion correlate with customer activation?)
3. Determine what the data suggests vs. what questions it raises
4. Prioritize findings by impact
Generate metric analysis with interpretation and next questions.
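The actual-vs-expected comparison in step 1 is easy to sketch in code. Below is a minimal Python example using the metrics and targets from the prompt above; the variance threshold-free ranking (worst miss first) is an illustrative assumption about how to prioritize, not a fixed methodology.

```python
# Sketch: compare launch metrics against targets and rank the gaps.
# Values are taken from the prompt above; "target" for support tickets
# uses the upper end of the planned 30-40 range (an assumption).

metrics = {
    "activation_rate":     {"actual": 0.45, "target": 0.50},
    "nps_change":          {"actual": 4,    "target": 8},
    "support_tickets":     {"actual": 67,   "target": 40},
    "sales_training_done": {"actual": 0.70, "target": 1.00},
}

def variance_report(metrics):
    """Return each metric's gap vs. target as a fraction, worst miss first."""
    rows = []
    for name, m in metrics.items():
        gap = (m["actual"] - m["target"]) / m["target"]
        rows.append((name, m["actual"], m["target"], gap))
    # Largest absolute miss first, so the post-mortem starts with it
    return sorted(rows, key=lambda r: abs(r[3]), reverse=True)

for name, actual, target, gap in variance_report(metrics):
    print(f"{name}: actual={actual}, target={target}, gap={gap:+.0%}")
```

Running this surfaces that support ticket volume, not activation, is the largest relative miss, which is exactly the kind of reframing a post-mortem should produce.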
3. Root Cause Identification
Metrics tell you what happened. Root cause analysis tells you why. Without root cause analysis, you will treat symptoms and miss the disease.
Prompt for Root Cause Analysis
Identify root causes of launch underperformance.
Launch issue: Feature activation rate of 45% vs. target of 50%
Timeline: 6 weeks post-launch
What I know:
- Sales enablement completion was 70% (some reps not trained at launch)
- In-app notification had a technical issue (delayed by 3 days)
- Customer documentation was incomplete at launch (added 2 weeks post-launch)
- Competitor launched similar feature 2 weeks after us (timing concern)
Possible root causes:
1. Training gap (reps without training could not guide customers)
2. Communication gap (delayed notification meant some missed launch)
3. Documentation gap (customers did not understand how to use)
4. Competitive timing (customer decision to wait for competitor comparison)
5. Product-market fit issue (feature does not solve real customer problem)
What I need to investigate:
- What is the primary driver of underactivation?
- Is this a one-time issue or systemic?
- What would the impact be of fixing each potential cause?
Investigation approach:
1. Survey non-activated customers (why have they not activated?)
2. Analyze support tickets for patterns
3. Interview sales reps on customer feedback
4. Compare to previous launches (is this pattern or anomaly?)
Tasks:
1. Design investigation to identify root cause
2. Prioritize root causes by impact
3. Distinguish between execution issues and strategy issues
4. Generate specific findings
Generate root cause analysis with investigation findings.
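Step 2 of the tasks above (prioritize root causes by impact) can be made explicit with a simple scoring model. The sketch below ranks the five candidate causes from the prompt by estimated impact weighted by confidence in the evidence; the scores themselves are illustrative assumptions you would replace with your own estimates.

```python
# Sketch: rank candidate root causes by estimated impact x confidence.
# Impact is a 1-5 weight, confidence is 0-1; both are hypothetical
# placeholder values for the causes listed in the prompt above.

causes = [
    ("training gap",         {"impact": 3, "confidence": 0.8}),
    ("delayed notification", {"impact": 2, "confidence": 0.9}),
    ("documentation gap",    {"impact": 3, "confidence": 0.7}),
    ("competitive timing",   {"impact": 2, "confidence": 0.4}),
    ("product-market fit",   {"impact": 5, "confidence": 0.2}),
]

def ranked(causes):
    # Expected impact = impact weight x confidence that it is the driver
    return sorted(causes,
                  key=lambda c: c[1]["impact"] * c[1]["confidence"],
                  reverse=True)

for name, s in ranked(causes):
    print(f"{name}: score={s['impact'] * s['confidence']:.1f}")
```

Note what the model does with product-market fit: high impact but low confidence ranks it below the execution gaps, which is a reminder that investigation (the surveys and interviews above) should raise or lower confidence before you act.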
4. Action Item Development
Insights without action are just observations. A good post-mortem produces specific, owned, time-bound commitments to improvement.
Prompt for Action Item Development
Develop actionable improvement plan from post-mortem findings.
Post-mortem findings:
Finding 1: Sales enablement completion at 70% correlated with lower activation
- Root cause: Training scheduled too close to launch date
- Impact: 30% of customers did not receive informed guidance at activation
Finding 2: Documentation incomplete at launch
- Root cause: Documentation team was last-minute addition to launch checklist
- Impact: Support tickets 40% above expectation
Finding 3: Beta feedback not incorporated until late
- Root cause: No structured beta feedback review process
- Impact: Missed opportunity to improve feature before launch
Finding 4: Competitor launch caused some prospects to wait
- Root cause: No competitive response plan
- Impact: Lost some early buying decisions to the competitor
Prioritization criteria:
- High impact: Things that significantly affect customer outcomes
- High feasibility: Things we can actually change with available resources
- Fast fix: Things that can be addressed before next launch
Action item requirements:
1. Each item needs: Specific action, Owner, Deadline, Success metric
2. Items should be prioritized by impact vs. effort
3. Some items may be "accepted risk" -- identified but not addressed
4. Commitments should be shared with broader team (accountability)
Tasks:
1. Prioritize findings by actionability
2. Generate specific action items for each high-priority finding
3. Assign owners and deadlines
4. Create communication plan for commitments
Generate action plan with specific commitments.
FAQ
How do I prevent post-mortems from becoming blame sessions?
Frame the post-mortem around systems, not individuals. The question is not “who failed?” but “what system allowed this to happen?” If you find yourself in a blame conversation, redirect: “What in our process allowed this situation to develop?” This framing surfaces systemic issues rather than personal failures.
What if the post-mortem reveals that the launch should not have happened?
This is valuable information. If a launch failed because the product was not ready or the market was not receptive, that is an important strategic insight. Do not bury it. Present the findings honestly. The purpose of the post-mortem is learning, not justifying past decisions.
How do I ensure action items actually get done?
Assign owners who are accountable to someone. Make the action items visible. Review progress at a specific future date (e.g., 30 days after the post-mortem). If items are not completed, understand why — was the action item wrong, or was execution lacking? Both are learnable.
Conclusion
A post-mortem is only valuable if it changes future behavior. A document that is written and filed is not a post-mortem. It is an archive. The true measure of a post-mortem’s value is whether the next launch is better than the last one.
AI Unpacker gives you prompts to conduct post-mortems that produce genuine insight. But the willingness to face uncomfortable truths, the discipline to follow through on commitments, and the organizational courage to act on learning — those come from you.
The goal is not a document that looks thorough. The goal is a launch that improves every time.