Building Consistency in AI Spaces: The CRITICS Framework
Share
The CRITICS framework discussed here is inspired by and operationalised from Simon Scrapes original prompt system.
Creating reliable AI agents requires more than clever prompts - it demands structure. The CRITICS framework provides a systematic approach to designing AI agent prompts that deliver consistent, predictable results across different implementations and use cases.
What is the CRITICS Framework?
CRITICS is a structured prompt engineering methodology that ensures AI agents behave consistently by defining seven essential components. Unlike ad-hoc prompting approaches, this framework treats prompt design as a discipline, creating agents that are maintainable, scalable, and production-ready.
The framework emerged from the need to create multiple AI agent instances - like workflow automation agents, code review assistants, or documentation generators - that maintain consistent behaviour while adapting to specific contexts. By standardising how prompts are structured, teams can collaborate more effectively and iterate on agent designs without introducing unpredictable behaviour.
The CRITICS Framework Structure
<role>
Define the agent's identity, expertise, and primary function
</role>
<constraints>
Specify operational boundaries, technical limitations, and required behaviours
</constraints>
<inputs>
Detail what information the agent expects to receive
</inputs>
<tools>
List available functions, APIs, and their parameters
</tools>
<instructions>
Provide step-by-step workflows and decision logic
</instructions>
<conclusions>
Define expected outputs and success criteria
</conclusions>
<solutions>
Outline fallback strategies and error handling approaches
</solutions>Role: Establishing Identity and Purpose
The Role section defines who the agent is and what expertise it brings. This goes beyond simple role-playing - it establishes the agent's perspective, knowledge domain, and decision-making framework. For a workflow automation agent, this might specify expertise in n8n, Oracle Cloud environments, and multi-client deployments.
Constraints: Setting Boundaries
Constraints define what the agent must do, cannot do, and how it should operate. These include technical requirements (valid JSON output, specific API usage), quality standards (include annotations, support error notifications), and environmental considerations (Docker contexts, multi-instance deployments). Clear constraints prevent scope creep and ensure agents stay focused on their core function.
Inputs: Defining Expected Information
This section specifies what data the agent needs to function effectively. Natural language requests, configuration details, notification channels, or existing workflow examples all fall under inputs. By explicitly documenting expected inputs, you reduce ambiguity and help the agent request missing information when needed.
Tools: Mapping Available Capabilities
The Tools section catalogs external functions the agent can invoke, including parameters and expected behaviours. For workflow agents, this might include web search for updated documentation, JSON export for generating templates, or file access for analysing existing configurations. Properly configured tool descriptions help agents make intelligent decisions about when and how to use external capabilities.
Instructions: Orchestrating Workflow Logic
Instructions provide the procedural intelligence - step-by-step processes for parsing inputs, planning workflows, validating outputs, and handling edge cases. This section transforms the agent from a simple question-answering system into a capable task executor that follows systematic approaches. Decomposition and chain-of-thought techniques often appear here, breaking complex tasks into manageable steps.
Conclusions: Defining Success
Conclusions define what successful execution looks like. Rather than leaving outputs open-ended, this section specifies deliverable formats (annotated JSON templates, validated workflows, troubleshooting documentation) and quality criteria. This ensures the agent knows when its task is complete and what constitutes acceptable output.
Solutions: Preparing for the Unexpected
The Solutions section anticipates common problems and defines recovery strategies. Self-criticism mechanisms encourage agents to evaluate their outputs and identify issues before finalising responses. For workflow agents, this might include handling missing API credentials, adapting templates when environment variables change, or providing troubleshooting steps when imports fail.
The Meta-Prompt: Using CRITICS to Generate CRITICS
One of the most powerful applications of the CRITICS framework is meta-prompting - using a CRITICS-structured agent to generate other CRITICS-structured prompts. This recursive approach ensures that every new agent you create follows the same quality standards and structural principles.
Meta-prompting has proven to consistently outperform human-engineered prompts through iterative refinement and self-improvement. By codifying prompt generation itself as a structured process, you create a scalable system for building new agents without reinventing the wheel each time.
The CRITICS Prompt Generator
Here's the actual CRITICS-formatted prompt generator we use in production environments, deployed as a dedicated Space in Perplexity or a Project in Claude:
<Role> You are an expert Prompt Generator specialized in creating structured prompts as well as editing existing prompts to enhance them. Your purpose is to transform user requirements into comprehensive, well-formatted agent prompts using the CRITICS framework (Constraints, Role, Inputs, Tools, Instructions, Conclusions, Solutions). </Role>
<Constraints>
*Always maintain the CRITICS structure in your output
*When writing the Constraints, make them relevant to the prompt - don't just copy these Constraints - these are your constraints for answering.
*Use XML tags for each section with proper formatting *Include bullet points for list items
*Format tool parameters clearly with indentation
*Ask clarifying questions when critical information is missing
*Never assume tool capabilities without confirmation
*Ensure all prompts include error handling strategies in the Solutions section
*If asked to edit an existing prompt, make sure to keep the essence of it's existing content in the enhanced prompt, do not embellish it to do things the user has not asked for or change the things it already asks for (changing wording to make it clearer is fine, but same functionality)
*If no tools are needed, skip the Tools section
</Constraints>
<Inputs>
*Natural language request from a user detailing (might be missing some details):
*Description of the desired agent (type, purpose, functionality)
*Specific tools the agent should access (optional)
*Particular constraints or requirements (optional)
*Expected input/output formats (optional)
*Error handling preferences (optional)
*Example use cases (optional)
</Inputs>
<Tools>
1. WebSearch_Tool - Searches the web using perplexity with parameters:
*searchTerm (required - query that you are searching the web for)
</Tools>
<Instructions>
For generating a complete CRITICS prompt:
1. Analyze the user's request to identify the prompt type and purpose
2. Determine appropriate tools and parameters based on the agent's function
4. Create a comprehensive set of constraints appropriate for the agent type
5. Develop clear, step-by-step instructions for common workflows
6. Define expected inputs and outputs
7. Include robust error handling strategies
8. Format the entire prompt with proper XML tags and bullet points
For handling incomplete requests:
1. Identify missing critical information
2. Ask specific clarifying questions
3. Suggest reasonable defaults based on the agent type
4. Incorporate user feedback into the final prompt
For refining existing prompts:
1. Analyze the current prompt structure
2. Identify areas for improvement
3. Suggest enhancements while maintaining the CRITICS format
4. Preserve the original intent and functionality
</Instructions>
<Conclusions (expected outputs)>
*Complete CRITICS-formatted prompts with all required sections
*XML-tagged structure with proper formatting *Clear documentation of tools and parameters
*Comprehensive workflows for common tasks *Robust error handling strategies
*Clarifying questions when critical information is missing
*Don't include any supporting information, just output the prompt
*Switch C and R around so that role comes first in the output
*Output in clear markdown (bullet points etc for list items) as it'll be converted
</Conclusions (expected outputs)>
<Solutions (Error handling)>
*If the agent type is unclear, ask for clarification with examples. Do not return a prompt.
*If tools are not specified, suggest appropriate tools based on agent function
*If workflows are ambiguous, provide structured examples and ask for confirmation
*If constraints are missing, suggest industry-standard limitations for the agent type
*If the user provides an existing prompt, analyze it and suggest improvements while maintaining its core functionality
*If the request is too broad, break it down into manageable components and address each separately
</Solutions>
Additional Schema Output
Where required, also output a JSON schema in the below format in addition to the full prompt. Keep it simple like the example.
{
"type": "object",
"properties": {
"state": {
"type": "string"
},
"cities": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
<example>
{
"type": "object",
"properties": {
"product_description": {
"type": "string"
},
"ideal_customer_profile": {
"type": "string"
},
"pain_points_challenges": {
"type": "string"
},
"key_goals_objectives": {
"type": "string"
},
"current_solutions": {
"type": "string"
}
}
}
</example>
How to Use the Meta-Prompt
Deploy this prompt generator in a dedicated Perplexity Space, Claude Project, or similar environment where it persists across sessions. When you need a new agent:
- Describe your need in natural language: "I need an agent that reviews code for security vulnerabilities and generates reports"
- The generator analyses your request and asks clarifying questions if needed
- Receive a complete CRITICS prompt ready to deploy in another Space or environment
- Iterate and refine by asking the generator to enhance specific sections
This approach ensures every agent you create maintains the same structural integrity, error handling patterns, and quality standards. You're not just creating prompts - you're building a systematic, repeatable process for agent development.
Why CRITICS Works
Research shows that context and structure dramatically improve AI performance, while vague role-playing has minimal impact on correctness. CRITICS prioritises what matters: clear objectives, relevant context, explicit constraints, and defined success criteria.
The framework also enables version control and collaboration. Prompts structured with CRITICS can be reviewed like code, compared across versions, and optimised systematically. Teams can establish review workflows where prompt changes are tested before production deployment, reducing the risk of regressions.
Through recursive self-improvement prompting techniques, agents built with CRITICS can iterate and enhance their own outputs, leading to 60% reductions in revision cycles for complex tasks.
Implementing CRITICS in Your Workflow
Start by deploying the CRITICS Prompt Generator in a dedicated Space. Use it to convert one existing agent or workflow into the structured format. Test how the structured version performs compared to your original approach.
For organisations deploying multiple agent instances, CRITICS becomes even more valuable. A well-structured prompt can be adapted for different clients or use cases by modifying only the Constraints or Inputs sections, while the core Instructions and Tools remain consistent. This modularity accelerates deployment and reduces maintenance overhead.
As your library of CRITICS-formatted agents grows, you'll develop reusable patterns for common scenarios - error handling strategies that work across agents, instruction sequences that apply to multiple workflows, and constraint patterns that ensure consistent behaviour.
Ready to Build Ethical, Effective AI for Your Team?
Implementing AI successfully requires more than powerful models - it demands thoughtful design, clear boundaries, and systematic approaches that respect both users and organisational values.
Whether you're building workflow automation, enhancing team productivity, or deploying AI agents across multiple contexts, the CRITICS framework provides a proven foundation for consistency and reliability.
The meta-prompt approach means you can scale your AI capabilities systematically, creating new specialised agents while maintaining quality and consistency across your entire AI ecosystem.
Let's discuss how to implement AI ethically and effectively in your workplace.
Euan McColm
euan@grapeworks.ai
Transform your team's AI capabilities with structured, production-ready prompt engineering that scales with your organisation.
Download The Critics Prompt:
Full CRITICS.md
Additional Schema Output.md
Attribution
The CRITICS prompt structure and the underlying meta‑prompt used in this article were originally created by Simon Scrapes and are shared through his YouTube channel and Notion workspace. This implementation is used and adapted as part of his paid community which you should consider joining. For the original resources, please refer to Simon’s channels.