Validity Threat Audit AI Prompt
Copy this prompt template, run it in your AI tool, and use related prompts to continue the workflow.
Audit my study design for threats to internal and external validity.
Study description: {{study_description}}
Apply the four validity frameworks systematically.
1. Internal validity (did the treatment really cause the observed outcome?):
Check for each threat:
- History: did any external event occur during the study that could explain the outcome?
- Maturation: could participants have changed naturally over the study period independent of the treatment?
- Testing: could repeated measurement itself change participants' responses?
- Instrumentation: did the measurement tools or procedures change during the study?
- Regression to the mean: were extreme scorers selected? Their scores would likely move toward the mean naturally.
- Selection bias: were treatment and control groups systematically different at baseline?
- Attrition / mortality: did participants drop out differentially across conditions?
- Contamination: did control participants receive elements of the treatment inadvertently?
2. Construct validity (are you measuring and manipulating what you think you are?):
- Construct underrepresentation: does your operationalization miss important aspects of the construct?
- Construct-irrelevant variance: does your measure capture things other than the construct of interest?
- Manipulation check: how do you know the treatment actually changed what it was intended to change?
3. Statistical conclusion validity (are your statistical inferences correct?):
- Low statistical power: are you likely to detect a real effect if it exists?
- Multiple comparisons: are you testing many outcomes without adjustment?
- Assumption violations: do the data meet the assumptions of your planned analyses?
- Fishing and flexibility in data analysis: are analysis decisions made post-hoc after seeing results?
4. External validity (do results generalize?):
- Population validity: how similar is your sample to the population of interest?
- Ecological validity: how similar are your study conditions to real-world conditions?
- Temporal validity: are results likely to hold at other time points?
- Treatment variation: does your treatment represent how it would actually be delivered in practice?
5. For each identified threat:
- Severity: how likely is this threat to bias results and in what direction?
- Mitigation: what design features address this threat?
- Residual risk: what threat remains after mitigation?
- Disclosure: how will this be acknowledged in the limitations section?
Return: validity audit table (threat, severity, mitigation, residual risk), overall validity assessment, and limitations section draft.
When to use this prompt
Use it when you want a more consistent structure for AI output across projects or datasets.
Use it when you want prompt-driven work to turn into a reusable notebook or repeatable workflow later.
Use it when you want a clear next step into adjacent prompts in Experimental Design and Methodology or the wider Research Scientist library.
What the AI should return
The AI should return a structured result covering the requested outputs: a validity audit table (threat, severity, mitigation, residual risk), an overall validity assessment, and a draft limitations section. The final answer should stay clear, actionable, and easy to review inside an experimental design and methodology workflow for research scientist work.
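Some checklist items, such as low statistical power under statistical conclusion validity, can also be sanity-checked numerically before data collection. A minimal Monte Carlo sketch (illustrative sample size and effect size, not a substitute for a formal power analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def estimated_power(n_per_group, effect_size, alpha=0.05, n_sims=2000):
    """Monte Carlo estimate of two-sample t-test power.

    Simulates many studies with the assumed standardized effect size
    and counts how often the test reaches significance.
    """
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(treatment, control)
        hits += p_value < alpha
    return hits / n_sims

# Assumed design: 64 participants per group, standardized effect d = 0.5.
power = estimated_power(64, 0.5)
print(f"Estimated power: {power:.2f}")  # roughly 0.8 for this design
```

If the estimate falls well below 0.8, the "low statistical power" threat should be flagged as high severity in the audit table.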
How to use this prompt
Open your data context
Load your dataset, notebook, or working environment so the AI can operate on the actual project context.
Copy the prompt text
Use the copy button above and paste the prompt into the AI assistant or prompt input area.
Review the output critically
Check whether the result matches your data, assumptions, and desired format before moving on.
Chain into the next prompt
Once you have the first result, continue deeper with related prompts in Experimental Design and Methodology.
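Before chaining further, it can help to demonstrate a flagged threat on synthetic data. For example, the regression-to-the-mean threat from the audit can be illustrated with a quick simulation (assuming a stable latent ability plus independent measurement noise; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True ability is stable; each observed score = ability + measurement noise.
ability = rng.normal(0.0, 1.0, n)
score_t1 = ability + rng.normal(0.0, 1.0, n)
score_t2 = ability + rng.normal(0.0, 1.0, n)

# Select "extreme scorers" on the first measurement (top 10%).
extreme = score_t1 > np.quantile(score_t1, 0.9)

# With no treatment at all, their second score drifts toward the mean.
print(f"Extreme group, time 1: {score_t1[extreme].mean():.2f}")
print(f"Extreme group, time 2: {score_t2[extreme].mean():.2f}")
```

The drop between the two printouts occurs with no intervention, which is exactly why selecting extreme scorers threatens internal validity.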
Frequently asked questions
What does the Validity Threat Audit prompt do?
It gives you a structured starting point for experimental design and methodology work, helping research scientists move faster than starting from a blank page.
Who is this prompt for?
It is designed for research scientist workflows and marked as intermediate, so it works well as a guided starting point for that level of experience.
What type of prompt is this?
Validity Threat Audit is a single prompt. You can copy it as-is, adapt it, or use it as one step inside a larger workflow.
Can I use this outside MLJAR Studio?
Yes. The prompt text works in other AI tools too, but MLJAR Studio is the best fit when you want local execution, visible Python code, and reusable notebooks.
What should I open next?
Natural next steps from here are Confound Identification, Control Condition Designer, and Full Study Design Chain.