Ethical Design Checklist: AI Prompts for Product Designers
TL;DR
- Ethical design principles help product teams build AI-driven products that treat users fairly and with respect
- A structured ethical checklist ensures key considerations are addressed before launch, not after harm occurs
- AI prompts assist designers in evaluating bias, transparency, and accountability throughout the design process
- Different product contexts require different ethical considerations—what matters most depends on your specific use case
- Proactive ethical design is more effective than reactive fixes after problems emerge
Introduction
Design has always carried moral weight. Every design decision reflects choices about who benefits, who bears costs, and whose needs matter most. With AI-powered products, these ethical dimensions have become more consequential and more complex. Algorithms can discriminate at scale. Recommendation systems can amplify harmful content. Predictive models can perpetuate historical injustices while appearing neutral and objective.
Yet many product teams approach AI ethics as an afterthought—a compliance checkbox or a PR crisis response. This approach fails users, organizations, and society. Ethical design is not about avoiding AI or refusing to build powerful products. It is about building powerful products responsibly, with clear-eyed awareness of how they can be misused and who can be harmed.
This guide provides AI prompts designed to help product designers embed ethical considerations throughout the design process. The prompts help you ask the right questions, identify potential harms, and develop mitigation strategies before your products reach users. Think of this as a practical toolkit for responsible product design, not an abstract philosophy lecture.
Table of Contents
- Ethical Design Foundations
- Fairness and Bias Evaluation
- Transparency and Explainability
- Privacy and Data Handling
- User Autonomy and Manipulation
- Accessibility and Inclusion
- Accountability and Governance
- Ethical Review Process
- FAQ: Ethical Design
Ethical Design Foundations {#foundations}
Before diving into specific issues, establish a foundational framework for ethical AI product design.
Prompt for Ethical Design Principles:
Develop ethical design principles for our AI-powered product:
PRODUCT CONTEXT:
[DESCRIBE your product, its primary users, and the core AI functionality]
Develop principles that address:
1. FAIRNESS: How will we ensure equitable treatment across different user groups?
2. TRANSPARENCY: How much do users deserve to know about how AI influences their experience?
3. PRIVACY: How will we balance personalization with user privacy?
4. AUTONOMY: How will we preserve user agency rather than manipulating behavior?
5. ACCOUNTABILITY: Who takes responsibility when AI causes harm?
Create:
1. 5-7 core principles with clear, specific language
2. Practical interpretations for each principle in our product context
3. Questions that team members should ask when evaluating design decisions
4. Red lines—design approaches that are unacceptable regardless of business value
These principles should be actionable, not aspirational. Every team member should understand what they mean for daily design decisions.
Prompt for Stakeholder Impact Assessment:
Conduct a stakeholder impact assessment for our AI product:
[DESCRIBE PRODUCT: Core functionality, AI features, user base, business model]
Identify stakeholders across:
1. PRIMARY USERS: Who directly interacts with the AI?
2. AFFECTED PARTIES: Who is impacted by AI decisions even without direct interaction?
3. VULNERABLE POPULATIONS: Who might be particularly susceptible to harm or exclusion?
4. THIRD PARTIES: Who outside our direct relationship might be affected?
For each stakeholder group:
- What benefits does this product create?
- What harms could this product cause?
- How might their experience differ based on demographics, abilities, or circumstances?
- Whose interests are most likely to be overlooked in design decisions?
This assessment surfaces considerations that should inform every subsequent ethical analysis.
Fairness and Bias Evaluation {#fairness-bias}
AI systems often encode historical inequities or amplify existing biases. Systematic bias evaluation helps identify problems before they harm users.
Prompt for Bias Risk Identification:
Identify potential bias risks in our AI-powered feature:
FEATURE DESCRIPTION:
[DESCRIBE the AI feature and what it does]
DATA CONTEXT:
[DESCRIBE the training data, user data inputs, and algorithmic logic]
Examine bias risks across:
1. TRAINING DATA BIAS:
- Does historical data reflect discriminatory patterns?
- Are certain groups systematically underrepresented in data?
- Could the data reinforce societal inequities?
2. FEATURE DESIGN BIAS:
- Do input variables correlate with protected characteristics?
- Could the feature work differently for different demographic groups?
- Are there proxy variables that encode discrimination?
3. OUTPUT BIAS:
- Does the AI systematically favor certain groups over others?
- Are error rates consistent across different populations?
- Could outputs perpetuate stereotypes or historical inequities?
4. ACCESS BIAS:
- Can all user groups access and use this feature equally?
- Does language, literacy, or technology access create disparities?
Provide specific hypotheses about where bias might exist and how to test each one.
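One concrete way to test an output-bias hypothesis like those above is the "four-fifths" disparate impact ratio, a screening heuristic drawn from US employment-selection guidance. The sketch below is a minimal illustration with hypothetical selection counts, not a legal determination or a complete testing methodology.

```python
# Sketch of a disparate impact check using the "four-fifths" rule.
# The group names and selection counts below are hypothetical.

def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher one.

    Values below 0.8 are the conventional flag for possible
    disparate impact (a screening heuristic, not a verdict).
    """
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical example: an AI feature approves 45% of group A
# applicants but only 30% of group B applicants.
ratio = disparate_impact_ratio(selected_a=45, total_a=100,
                               selected_b=30, total_b=100)
print(ratio)        # lower rate divided by higher rate
print(ratio < 0.8)  # True here, so the gap warrants investigation
```

A ratio below 0.8 does not prove discrimination on its own, but it is a cheap, repeatable signal that a hypothesis deserves deeper statistical testing.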
Prompt for Fairness Testing Protocol:
Design a fairness testing protocol for our AI system:
[DESCRIBE the AI system and its decision-making or personalization capabilities]
Develop:
1. PROTECTED CLASSES to evaluate (consider: race, gender, age, disability, socioeconomic status, geography, language)
2. TESTING METHODOLOGY:
- What metrics will you use to evaluate fairness?
- How will you obtain representative test data?
- What statistical tests establish disparate impact?
3. ACCEPTABLE THRESHOLDS:
- What difference in outcomes is unacceptable?
- How will you handle tensions between different fairness metrics?
4. TESTING SCHEDULE:
- When in the development process should testing occur?
- How frequently should ongoing monitoring happen?
- What triggers additional testing cycles?
Include specific metrics like equal opportunity, demographic parity, and predictive parity where appropriate.
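Two of the metrics named in the prompt above, demographic parity and equal opportunity, can be computed directly from predictions and group labels. The following is a minimal sketch with hypothetical binary predictions for two groups; a real protocol would use representative samples and significance testing.

```python
# Minimal sketch of two fairness metrics from the protocol above.
# Group labels, outcomes, and predictions are hypothetical.

def demographic_parity_diff(preds, groups):
    """Gap in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

def equal_opportunity_diff(preds, labels, groups):
    """Gap in true-positive rates among qualified (label == 1) users."""
    tprs = {}
    for g in set(groups):
        pos = [p for p, l, gr in zip(preds, labels, groups)
               if gr == g and l == 1]
        tprs[g] = sum(pos) / len(pos)
    return max(tprs.values()) - min(tprs.values())

# Hypothetical test data: binary predictions for two groups.
groups = ["A"] * 5 + ["B"] * 5
labels = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0]

print(demographic_parity_diff(preds, groups))         # selection-rate gap
print(equal_opportunity_diff(preds, labels, groups))  # TPR gap
```

Note that these two metrics can disagree: a system can satisfy demographic parity while failing equal opportunity, which is exactly the kind of tension the "acceptable thresholds" step of the protocol has to resolve.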
Prompt for Inclusive Design Audit:
Audit our product for inclusive design gaps that might compound with AI bias:
[DESCRIBE the product, its accessibility features, and user diversity]
Evaluate:
1. ACCESSIBILITY ALIGNMENT:
- Does our AI work with assistive technologies?
- Are outputs accessible to users with visual, auditory, cognitive, or motor impairments?
2. LANGUAGE AND CULTURE:
- Does our AI perform consistently across languages?
- Are cultural contexts and local norms respected?
3. LITERACY AND COMPLEXITY:
- Can users with varying literacy levels understand AI outputs?
- Are explanations provided at appropriate complexity levels?
4. TECHNOLOGY ACCESS:
- Does requiring smartphone access, high-speed internet, or modern devices exclude users?
5. COGNITIVE LOAD:
- Does AI functionality overwhelm users with excessive choices or information?
Identify specific gaps and recommend remediation approaches.
Transparency and Explainability {#transparency}
Users deserve to understand when AI influences their experience, even if they cannot understand the full technical complexity.
Prompt for Transparency Requirements Analysis:
Analyze transparency requirements for our AI product:
[DESCRIBE AI features and their impact on users]
Determine what users should know about:
1. AI INVOLVEMENT:
- When is AI being used versus human decision-making?
- What can AI do that humans cannot?
- What are AI limitations?
2. DECISION EXPLANATION:
- What decisions does AI make that affect users?
- What information does AI consider in its decisions?
- Why did AI make a specific recommendation or decision?
3. DATA USAGE:
- What data does the AI use about this user?
- How was this data collected, and is it accurate?
- Can users access or correct their data?
4. EXPECTATIONS:
- How reliable should users expect the AI to be?
- What happens when the AI is wrong?
- How can users provide feedback or challenge AI decisions?
For each transparency need, recommend specific UI/UX approaches for disclosure.
Prompt for Explainability Design:
Design an explainability framework for our AI system:
[DESCRIBE AI functionality and user base]
Develop explainability approaches:
1. APPROPRIATE EXPLANATION LEVELS:
- What do casual users need to understand?
- What do power users or administrators need to understand?
- What do regulators or auditors need to understand?
2. EXPLANATION CONTENT TYPES:
- What factors influenced this decision?
- Why was this recommendation made for me?
- What would have happened with different inputs?
- How confident is the AI in this output?
3. EXPLANATION DELIVERY:
- When should explanations be proactively offered versus requested?
- How should explanations be presented in context?
- What interfaces support explanation exploration without overwhelming?
4. TRUST CALIBRATION:
- How do we prevent overtrust in AI outputs?
- How do we prevent undertrust that undermines useful AI?
Create specific UI patterns and copy recommendations for explanation interfaces.
Privacy and Data Handling {#privacy-data}
AI products often require extensive user data. Ethical design ensures privacy is respected while enabling valuable functionality.
Prompt for Privacy Impact Assessment:
Conduct a privacy impact assessment for our AI product:
[DESCRIBE data inputs, AI functionality, and user data handling]
Assess:
1. DATA MINIMIZATION:
- What data does the AI genuinely need versus what is nice-to-have?
- Can we achieve functionality with less personal data?
- Are we collecting data beyond what users expect?
2. CONSENT AND CONTROL:
- Do users understand what data we collect and why?
- Can users opt out of data collection without losing all functionality?
- Are consent mechanisms genuine choices or dark patterns?
3. DATA SECURITY:
- What are the consequences if this data is breached?
- How is data protected throughout its lifecycle?
- Who has access to training data and user data?
4. DATA RETENTION:
- How long is data kept, and why?
- Can users request deletion, and what are the consequences?
- What happens to data if we change AI vendors or discontinue products?
5. THIRD-PARTY SHARING:
- What data leaves our systems, and who receives it?
- Do third parties share data with their own AI systems?
- How do we ensure third parties meet our privacy standards?
Provide specific recommendations for each identified risk.
Prompt for Privacy-Preserving Design:
Design privacy-preserving approaches for our AI functionality:
[CURRENT DATA COLLECTION: DESCRIBE]
Explore alternative approaches:
1. FEDERATED LEARNING:
- Could we train models without centralizing raw data?
- What functionality would be lost with this approach?
2. ON-DEVICE PROCESSING:
- Could AI inference happen locally rather than in the cloud?
- What limitations would this create?
3. AGGREGATION AND ANONYMIZATION:
- Could we use aggregated rather than individual data?
- How much functionality would be preserved?
4. DIFFERENTIAL PRIVACY:
- Could we add noise to data to preserve privacy while maintaining utility?
- How would this affect AI accuracy?
5. DATA TRANSFORMATION:
- Could we use transformed or encoded representations of personal data?
- What would need to change in our training pipelines?
Recommend the most privacy-preserving approach that still achieves acceptable functionality.
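The differential privacy option above is often illustrated with the Laplace mechanism: a counting query has sensitivity 1 (any single user changes the count by at most 1), so adding Laplace noise with scale 1/epsilon gives epsilon-differential privacy. The sketch below uses hypothetical numbers and standard inverse-CDF sampling; production systems should use a vetted DP library rather than hand-rolled noise.

```python
import math
import random

def dp_count(true_count, epsilon):
    """Counting query released via the Laplace mechanism.

    Sensitivity of a count is 1, so noise scale 1/epsilon
    yields epsilon-differential privacy for this query.
    """
    u = random.random() - 0.5   # uniform in [-0.5, 0.5)
    if u == -0.5:               # avoid log(0) at the endpoint
        u = 0.0
    sign = 1.0 if u >= 0 else -1.0
    noise = -(1.0 / epsilon) * sign * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical example: report how many users enabled a feature,
# with privacy budget epsilon = 0.5. Smaller epsilon -> more noise,
# stronger privacy, lower utility.
reported = dp_count(1200, epsilon=0.5)
print(round(reported))
```

The epsilon parameter makes the privacy-utility trade-off explicit, which is useful when recommending thresholds: the team can debate a number instead of an abstraction.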
User Autonomy and Manipulation {#user-autonomy}
AI can influence user behavior in ways that benefit users or in ways that exploit them. Ethical design preserves genuine choice.
Prompt for Manipulation Risk Assessment:
Assess our product for potential user manipulation risks:
[DESCRIBE AI features, engagement mechanisms, and behavioral influence tactics]
Examine for manipulation patterns:
1. VARIABLE REWARD SCHEDULES:
- Are we using unpredictable rewards that create compulsive behavior?
- Do notification patterns exploit psychological vulnerabilities?
2. ARTIFICIAL SCARCITY:
- Are we creating false urgency that pressures poor decisions?
- Do countdown timers or limited availability reflect reality?
3. SOCIAL PROOF MANIPULATION:
- Are fake or misleading social signals influencing user behavior?
- Are we manufacturing false consensus?
4. DEFAULT BIAS EXPLOITATION:
- Are we making it too easy to agree and too hard to disagree?
- Do cancellation and opt-out paths require excessive effort?
5. CONFIRMATION BIAS AMPLIFICATION:
- Are we pushing users toward extremes by showing only what they want to see?
- Are we trapping users in filter bubbles?
For each identified risk, recommend specific design changes to preserve genuine user autonomy.
Prompt for Persuasive Design Ethics:
Evaluate whether our engagement tactics are ethically acceptable:
[CURRENT ENGAGEMENT FEATURES: DESCRIBE user engagement mechanisms, notification strategies, and behavioral prompts]
For each feature, assess:
1. USER BENEFIT: Does this genuinely help users, or primarily benefit us?
2. INFORMED CONSENT: Do users understand what they are agreeing to?
3. REVERSIBILITY: Can users easily reverse decisions made under influence?
4. COMPULSION RISK: Could this feature create unhealthy dependencies?
5. ALTERNATIVE PATHS: Could the same business outcomes be achieved through less manipulative means?
Develop ethical guidelines specific to engagement features, not just core functionality.
Accessibility and Inclusion {#accessibility-inclusion}
AI products must work for all users, including those with disabilities, limited technology access, or different cultural backgrounds.
Prompt for AI Accessibility Audit:
Audit our AI-powered features for accessibility:
[DESCRIBE AI features and user interfaces]
Evaluate:
1. SCREEN READER COMPATIBILITY:
- Can users with visual impairments access AI outputs?
- Are image-based AI explanations or visualizations accessible?
2. COGNITIVE ACCESSIBILITY:
- Can users with cognitive disabilities understand AI interactions?
- Are AI explanations provided at appropriate complexity levels?
- Do AI conversational interfaces support users with processing difficulties?
3. MOTOR ACCESSIBILITY:
- Can users with motor impairments complete AI-mediated tasks?
- Are voice interfaces available as alternatives to visual interfaces?
4. AUDITORY ACCESSIBILITY:
- Are audio AI outputs captioned or transcribed?
- Do voice-based AI interfaces have visual alternatives?
5. NEUROLOGICAL ACCESSIBILITY:
- Do AI interactions accommodate users with ADHD, autism, or anxiety?
- Can users control pacing and timing of AI interactions?
Provide specific remediation recommendations for each identified accessibility gap.
Prompt for Global Usability Assessment:
Assess our AI product for usability across different global contexts:
[DESCRIBE product, languages supported, and target markets]
Examine:
1. LANGUAGE COVERAGE:
- Are AI capabilities available in all supported languages?
- Does translation preserve meaning and nuance?
2. CULTURAL CONTEXT:
- Do AI outputs respect local cultural norms?
- Are date, time, and format conventions localized?
3. INFRASTRUCTURE REQUIREMENTS:
- Does the product work on lower-bandwidth connections?
- Is it functional on older devices common in emerging markets?
4. LOCAL REGULATIONS:
- Does data handling comply with local privacy laws?
- Are there content restrictions based on local regulations?
Identify where global rollout requires design adaptation.
Accountability and Governance {#accountability}
When AI causes harm, someone must be responsible. Ethical design establishes clear accountability structures.
Prompt for AI Governance Framework:
Develop an AI governance framework for our organization:
CURRENT STATE:
[DESCRIBE existing governance structures, team composition, and oversight mechanisms]
Create:
1. ACCOUNTABILITY STRUCTURES:
- Who owns AI ethics decisions?
- Who is accountable when AI causes harm?
- How are ethical concerns escalated?
2. DECISION-MAKING PROCESSES:
- What ethical review is required before launch?
- Who approves ethical trade-offs?
- How are disagreements resolved?
3. OVERSIGHT MECHANISMS:
- How is AI behavior monitored post-launch?
- What triggers ethical review of live AI systems?
- How are users protected when AI behaves unexpectedly?
4. DOCUMENTATION REQUIREMENTS:
- What records must teams maintain about AI decisions?
- How are ethical considerations documented?
- What audit trails are required?
This framework should integrate with existing organizational governance, not create parallel structures.
Prompt for AI Incident Response:
Design an AI incident response process:
AI INCIDENT TYPES TO ADDRESS:
- Bias or discrimination detected in live AI
- AI causing user harm (emotional, financial, physical)
- AI failure affecting safety-critical functions
- Privacy breach involving AI training or user data
- AI being manipulated for malicious purposes
For each incident type:
1. DETECTION: How will we know an incident occurred?
2. TRIAGE: How do we assess severity and prioritize response?
3. CONTAINMENT: What immediate steps prevent further harm?
4. INVESTIGATION: How do we understand root cause?
5. REMEDIATION: How do we fix the problem and support affected users?
6. COMMUNICATION: How do we inform users and stakeholders?
7. PREVENTION: How do we prevent similar incidents?
Create a process document that teams can follow during high-stress incidents.
Ethical Review Process {#review-process}
Systematic ethical review catches problems before launch and maintains standards over time.
Prompt for Pre-Launch Ethical Review:
Create a pre-launch ethical review checklist for AI features:
LAUNCHING FEATURE:
[DESCRIBE AI feature, its capabilities, and user impact]
Required review components:
1. FAIRNESS REVIEW:
- Completed bias testing and results
- Fairness metrics within acceptable thresholds
- Mitigation strategies for identified risks
2. TRANSPARENCY REVIEW:
- User-facing disclosures complete and accurate
- Explanation interfaces tested with users
- Transparency meets regulatory requirements
3. PRIVACY REVIEW:
- Privacy impact assessment completed
- User consent mechanisms verified
- Data handling compliant with policies and regulations
4. ACCESSIBILITY REVIEW:
- Accessibility testing with assistive technologies
- User testing with diverse populations
- Remediation of identified barriers
5. MANIPULATION REVIEW:
- Engagement tactics evaluated for ethics
- User autonomy safeguards in place
- Dark patterns eliminated
6. INCIDENT PLANNING:
- Monitoring for failure modes identified
- Incident response process documented
- User support prepared for AI-related issues
Provide sign-off requirements and escalation paths for each component.
Prompt for Ongoing AI Monitoring:
Design an ongoing monitoring program for our AI systems:
[DESCRIBE AI systems requiring ongoing monitoring]
Monitor:
1. BEHAVIORAL DRIFT:
- How will we detect when AI behavior changes over time?
- What metrics indicate unintended behavior shifts?
2. BIAS EMERGENCE:
- How frequently should we re-test for bias?
- What triggers additional bias testing outside regular cycles?
3. USER FEEDBACK ANALYSIS:
- How do we collect and analyze user concerns about AI?
- What volume or pattern of complaints triggers review?
4. REGULATORY CHANGES:
- How do we track evolving AI regulations?
- What process updates our practices when laws change?
Create monitoring dashboards, review cadences, and escalation triggers.
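One common way to quantify the behavioral drift called out above is the Population Stability Index (PSI), which compares a live output distribution against a launch-time baseline. The sketch below uses hypothetical score distributions; the 0.1/0.25 thresholds are an industry convention, not a standard, and should be tuned per product.

```python
import math

def population_stability_index(expected, actual):
    """PSI between a baseline distribution and a live distribution.

    Both inputs are lists of bin proportions summing to 1.
    Conventional rule of thumb (tune per product):
    PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard sparse bins against log(0)
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Hypothetical score distributions: baseline at launch vs. this week.
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current  = [0.05, 0.15, 0.35, 0.25, 0.20]
print(round(population_stability_index(baseline, current), 3))
```

A metric like this gives the monitoring dashboard a concrete escalation trigger: a PSI crossing the "investigate" threshold can automatically open a review ticket rather than waiting for user complaints.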
FAQ: Ethical Design {#faq}
How do we balance ethical considerations with business viability?
Ethical design and business success are not opposed. Products that harm users eventually face regulatory action, reputation damage, and user abandonment. However, some ethical requirements may reduce short-term engagement or complicate business models. The key is making informed trade-offs with full awareness of long-term risks rather than ignoring ethics because they create friction. Document trade-off decisions so leadership understands the choices being made.
Who should be responsible for ethical design decisions?
Ethical design is a team sport. Product designers, engineers, data scientists, legal counsel, and user researchers all contribute to ethical outcomes. However, organizations need clear ownership—someone who can escalate concerns, halt launches if necessary, and is accountable for ethical failures. This might be a dedicated AI ethics role, an existing leader with expanded scope, or a cross-functional committee with genuine authority.
How do we handle situations where different ethical principles conflict?
Ethical principles sometimes pull in different directions—privacy versus personalization, transparency versus security, fairness versus accuracy. There is no formula that resolves these tensions automatically. What works is transparent decision-making: acknowledge the trade-off, consider all stakeholders, make a defensible choice, and document reasoning. Review decisions when circumstances change or new information emerges.
What if we discover bias in our AI after launch?
When bias is discovered, act quickly to contain harm while developing remediation. Understand the scope—who is affected and how severely. Communicate honestly with affected users. Fix the problem, even if expensive. Investigate how the bias was missed and improve processes to catch similar issues earlier. Delayed response or defensive deflection makes situations worse.
How do we train teams on ethical design?
Training should move beyond abstract principles to practical application. Use case studies from real products—both ethical successes and failures. Run workshops where teams evaluate their own products for ethical issues. Create easy-to-use checklists and review templates. Make ethical review part of the standard design process, not an additional burden. Celebrate teams that raise ethical concerns.
Conclusion
Ethical design is not about refusing to build powerful AI products. It is about building powerful AI products responsibly, with clear awareness of potential harms and genuine commitment to user wellbeing. The prompts in this guide help you integrate ethical considerations into every stage of the design process, from initial concept through ongoing monitoring after launch.
Key Takeaways:
- Establish ethical principles early—clear principles guide decisions before specific ethical dilemmas arise.
- Bias evaluation is ongoing, not one-time—AI systems can develop new biases over time or reveal biases not visible in initial testing.
- Transparency builds trust—users who understand AI are better equipped to use it effectively and appropriately.
- Privacy requires active protection—default to minimal data collection and strong security, not post-hoc privacy fixes.
- User autonomy matters—persuasive design should inform and empower, not manipulate.
Next Steps:
- Review your current AI products against the bias and fairness prompts
- Develop ethical design principles specific to your organization
- Establish pre-launch ethical review requirements
- Create ongoing monitoring for your AI systems
- Train your team on ethical design practices
Building ethical AI is not just the right thing to do—it is essential for long-term business success in a world where users, regulators, and society increasingly demand responsible technology.