codebase_akbar


├── .gitignore [Ignored]
├── LEGACY [Ignored]
├── README.md Content:

META_PROMPTING: The LLM Orchestration Engine

Badges: Project Status · Language · License

An interactive command-line tool for generating expert-level, reusable LLM collaboration frameworks. This project is a "meta-framework"—a system designed to build other systems.


The Core Philosophy: A Force Multiplier for Prompt Engineering

In complex, multi-turn collaborations with Large Language Models (LLMs), the quality of the initial setup is critical. Manually creating and refining persona files and prompt templates for every new project is repetitive and inefficient. This violates the "Don't Repeat Yourself" (DRY) principle.

The META_PROMPTING project solves this problem by creating an Orchestration Engine. This engine acts as a "factory" that manufactures bespoke, high-quality collaboration frameworks on demand, turning a manual, hours-long process into an automated, seconds-long one. It is a non-linear force multiplier for expert-level LLM interaction. For a deeper look into the guiding principles, see docs/DESIGN_PHILOSOPHY.md.

How It Works: The Factory Analogy

The engine is composed of three distinct parts:

  1. The Component Library (/components) - the "Parts Bin". A collection of standardized, pre-written text snippets. Each snippet represents a single, reusable pattern: a persona trait, an interaction protocol, or a critical constraint.

  2. The Orchestrator (orchestrator.py) - the "Assembly Line". The core of the project: an interactive Python script that acts as a wizard. It asks the user high-level questions about the desired collaboration and then reads the necessary snippets from the Component Library.

  3. The Generated Framework (/output) - the "Final Product". The orchestrator assembles the chosen components into two complete, ready-to-use Markdown files: 00_PERSONA.md and 01_PROMPT_TEMPLATE.md. This is the final, tailored framework that the user can immediately use to start a new, expert-level LLM session.
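
In the simplest terms, the assembly step is just reading the chosen snippets and concatenating them into the two output files. A minimal Python sketch, using snippet names mentioned elsewhere in these docs as stand-ins for a real selection:

from pathlib import Path

# Illustrative selection only; the real orchestrator chooses snippets interactively.
persona_text = Path("components/personas/empathetic_guide.txt").read_text()
protocol_text = Path("components/protocols/turn_by_turn_dialogue.txt").read_text()

out_dir = Path("output/Example_Framework")
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / "00_PERSONA.md").write_text(persona_text)
(out_dir / "01_PROMPT_TEMPLATE.md").write_text(protocol_text)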

Project Status & Roadmap

This project is currently in development. The strategic plan is divided into four phases:

Usage (The Final Vision)

Once complete, the engine will be run from the command line.

# From the root of the META_PROMPTING project
python orchestrator.py

This will launch an interactive wizard that guides the user through the configuration process:

Welcome to the LLM Orchestration Engine.

What is the PRIMARY GOAL of this task? Choose the workflow that best fits.

--- Technical & Execution ---
[1] TEACH_OR_EXPLAIN         (Purpose: To teach a concept or document something.)
[2] DIAGNOSE_ROOT_CAUSE      (Purpose: To find the underlying cause of a problem.)
[3] REVIEW_AGAINST_STANDARDS (Purpose: To evaluate a piece of work against a set of rules.)
...

--- Strategic & Developmental ---
[8] DECONSTRUCT_AN_IDEA      (Purpose: Explore a new concept to test its viability and principles.)
...

Your choice: 3

Enter a title for the Persona: The Django Standards Advocate
...

Generating framework files in directory: ./output/Django_Standards_Advocate/
  - SUCCESS: Created 00_PERSONA.md
  - SUCCESS: Created 01_PROMPT_TEMPLATE.md
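
One plausible way the wizard could build this menu is to derive it directly from goal_map.json, so that adding a goal to the map automatically adds it to the prompt. A minimal sketch (the category grouping shown above is omitted, and the map's shape is an assumption):

import json

with open("goal_map.json") as fh:
    goal_map = json.load(fh)  # assumed shape: {"GOAL_NAME": {"persona": ..., "protocol": ..., ...}, ...}

goals = sorted(goal_map)
for index, goal in enumerate(goals, start=1):
    print(f"[{index}] {goal}")

choice = int(input("Your choice: "))
selected_goal = goals[choice - 1]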

The Component Library

The library is organized into three categories (personas, protocols, and constraints) to ensure a clean and scalable architecture. These map directly to the components/personas/, components/protocols/, and components/constraints/ directories.

How to Add a New Component

The framework is designed to be extensible. To add a new component (e.g., a new persona trait):

  1. Define the Job: Add a new JSON object to the docs/generation_jobs.json file, specifying the new component's category, name, and description.
  2. Generate the Component: Run the component generation script, which will call the LLM API to write the new snippet file.
  3. Integrate into the Orchestrator: If the component is part of a new goal, update goal_map.json. The orchestrator.py script will automatically pick it up without code changes.
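
For illustration, a new job entry might look like the following; the field names here are assumptions, so mirror whatever schema the existing entries in docs/generation_jobs.json already use:

import json

# Hypothetical job entry; "category", "name", and "description" follow the
# fields described above, but the real schema may differ.
new_job = {
    "category": "personas",
    "name": "patient_socratic_tutor",
    "description": "A persona trait for guiding a learner with questions rather than answers.",
}

with open("docs/generation_jobs.json") as fh:
    jobs = json.load(fh)  # assumed to be a list of job objects
jobs.append(new_job)

with open("docs/generation_jobs.json", "w") as fh:
    json.dump(jobs, fh, indent=2)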

License

This project is distributed under the MIT License.


├── SESSION_HANDOVER [Ignored]
└── SIMULATIONS/
  └── SESSION_03/
    ├── 31_SIMULATION.md Content:

1. Flaw Analysis: The State Management Flaw and Race Condition

The proposed algorithm contains a critical state management flaw in its file-write sequence. The system performs two distinct and non-atomic operations: writing the new component file and then updating the goal_map.json configuration file. This creates a race condition. If the process is interrupted after save_file() succeeds but before write_json_file() completes (e.g., due to a script crash, power loss, or permissions error), the system will be left in a corrupted state.

The negative consequences of this partial failure are severe:

This lack of atomicity violates the principle of a robust, fault-tolerant system. A professional-grade tool cannot leave itself in an inconsistent state.

2. Proposed Solution: The "Write-to-Temporary-and-Rename" Pattern

To solve this, we can implement a pattern analogous to a database transaction, ensuring the operation is atomic. The most robust and straightforward approach is the "write-to-temporary-and-rename" strategy.

The logic is as follows:

  1. First, write the new goal_map.json content to a temporary file (e.g., goal_map.json.tmp). This is a safe operation that doesn't affect the current system state.
  2. Second, perform a file system rename or move operation to replace the original goal_map.json with the temporary file.

This pattern ensures atomicity because file system rename operations are atomic at the OS level on POSIX-compliant systems. The operation either succeeds completely or fails completely, leaving the original file untouched. There is no intermediate state where the goal_map.json is partially written or corrupted. Only after the "commit" (the rename) is successful do we proceed to write the new component file. By reversing the order and ensuring the critical configuration is updated atomically first, we create a more robust system.
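
In concrete Python, the commit step can be expressed with a temporary file in the same directory and os.replace, which performs an atomic rename on POSIX systems. A minimal sketch with only basic error handling:

import json
import os
import tempfile

def atomic_write_json(path, data):
    # Write to a temporary file in the same directory so the rename stays on
    # one filesystem, then atomically replace the original file.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as fh:
            json.dump(data, fh, indent=2)
            fh.flush()
            os.fsync(fh.fileno())
        os.replace(tmp_path, path)  # the atomic "commit" of the new goal_map.json
    except BaseException:
        os.remove(tmp_path)
        raise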

3. Revised Algorithm

Here is the complete, revised pseudocode that implements the transactional write-ahead pattern, ensuring the system fails cleanly.

function handle_just_in_time_generation(goal, component_type, required_filename):
  // This function is called when os.path.exists() for required_filename returns False.
  // Example inputs:
  //   goal = "DIAGNOSE_ROOT_CAUSE"
  //   component_type = "protocol"
  //   required_filename = "root_cause_analysis_drilldown.txt"

  print(f"INFO: The recommended {component_type} '{required_filename}' was not found.")

  // 1. Determine the name for the new component.
  autogen_filename = f"autogen_{component_type}_for_{goal.lower()}.txt"
  print(f"INFO: We will generate a new component named '{autogen_filename}'.")

  // 2. Get the conceptual definition from the user.
  user_description = ask_user(f"Please provide a one-line description for the purpose of the '{required_filename}' component:")

  // 3. Generate the component snippet via the LLM API.
  snippet_content = call_llm_api(user_description)

  // 4. THE ROBUST, TRANSACTIONAL STATE CHANGE

  // Step 4a: Prepare the new configuration state in memory.
  json_map = read_json_file("goal_map.json")
  json_map[goal][component_type] = autogen_filename

  // Step 4b: Write the new configuration to a temporary file (The "Write-Ahead Log").
  temp_map_path = "goal_map.json.tmp"
  write_json_file(temp_map_path, json_map)
  print("INFO: New configuration state written to temporary file.")

  // Step 4c: Atomically commit the configuration change by renaming the file.
  // This is the point of no return.
  atomic_rename("goal_map.json.tmp", "goal_map.json")
  print("SUCCESS: goal_map.json has been atomically updated.")

  // Step 4d: Only after the map is safely updated, write the new component file.
  // If this fails, the map is already correct for the next run.
  new_component_path = f"components/{component_type}s/{autogen_filename}"
  save_file(new_component_path, snippet_content)
  print(f"SUCCESS: New component saved to '{new_component_path}'.")

  // 5. Return the path to the newly created and mapped component.
  return new_component_path
├── 32_SIMULATION.md
  Content:

1. Flaw Analysis

The proposed Just-in-Time (JIT) generation algorithm introduces a significant and insidious form of technical debt: conceptual inconsistency. It fundamentally violates the project's own design philosophy of creating a clean, human-readable, and maintainable system.

The core flaw lies in the decision to generate a new, derivative filename (autogen_protocol_for_diagnose_root_cause.txt) and then mutate the goal_map.json to point to it. This creates a cascade of negative downstream consequences:

In short, the proposed algorithm prioritizes a shortsighted, programmatic convenience at the expense of long-term conceptual integrity. It treats the goal_map.json as a mutable variable instead of the stable constitution it is meant to be.

2. Proposed Solution

The superior strategy is to treat the JIT generation process as a "self-healing" mechanism for the component library, not a mutation of the core configuration. The goal_map.json file must be treated as immutable at runtime.

The guiding principle is simple: The system should create the exact file that goal_map.json expects.

When the orchestrator determines that the required file—for instance, components/protocols/root_cause_analysis_drilldown.txt—is missing, its sole objective should be to create that specific file at that specific path.

This approach has several profound benefits:

The JIT generation is not a feature for creating new logic; it is a feature for fulfilling the pre-existing logic defined in the configuration map.
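
On the caller's side, the orchestrator simply resolves the expected filename from the read-only goal_map.json and invokes the generator only when that exact file is absent. A minimal sketch, assuming the map shape used throughout these simulations and the handler defined in the revised algorithm below:

import json
import os

def resolve_component(goal, component_type):
    # goal_map.json is treated as read-only; it is never rewritten here.
    with open("goal_map.json") as fh:
        goal_map = json.load(fh)
    required_filename = goal_map[goal][component_type]
    component_path = f"components/{component_type}s/{required_filename}"
    if not os.path.exists(component_path):
        # Self-heal the library by creating the exact file the map expects.
        component_path = handle_just_in_time_generation(goal, component_type, required_filename)
    return component_path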

3. Revised Algorithm

The following revised pseudocode implements the superior strategy, ensuring the system is robust, maintainable, and conceptually consistent.

function handle_just_in_time_generation(goal, component_type, required_filename):
  // This function is called when os.path.exists() for required_filename returns False.
  // Example inputs:
  //   goal = "DIAGNOSE_ROOT_CAUSE"
  //   component_type = "protocol"
  //   required_filename = "root_cause_analysis_drilldown.txt"

  print(f"INFO: The required {component_type} '{required_filename}' was not found.")
  print(f"INFO: We will now generate this component to self-heal the library.")

  // 1. Get the conceptual definition from the user, using the correct filename for context.
  user_description = ask_user(f"Please provide a one-line description for the purpose of the '{required_filename}' component:")

  // 2. Generate the component snippet via the LLM API.
  snippet_content = call_llm_api(user_description)

  // 3. THE CORRECT STATE CHANGE
  // The system creates the exact file the goal_map.json expects.
  // The goal_map.json file is NEVER modified.

  // Step 3a: Write the new component file to the library using its correct, intended name.
  new_component_path = f"components/{component_type}s/{required_filename}"
  save_file(new_component_path, snippet_content)
  print(f"SUCCESS: New component saved to '{new_component_path}'.")
  print("INFO: The component library is now synchronized with goal_map.json.")

  // 4. Return the path to the newly created component.
  return new_component_path
├── 33_SIMULATION.md
  Content:

1. Flaw Analysis

The proposed algorithm is not idempotent. An idempotent operation is one that can be applied multiple times without changing the result beyond the initial application. This algorithm fails that test because it unconditionally executes a series of state-changing and costly actions every time it is called.

If the orchestrator process were to be run twice for the same missing root_cause_analysis_drilldown.txt component, the following sequence of wasteful side effects would occur on the second run:

The core design flaw is that the function's trigger is based only on the absence of the original file (root_cause_analysis_drilldown.txt), not the absence of the intended generated artifact (autogen_protocol_for_diagnose_root_cause.txt).

2. Proposed Solution

To make the generation process idempotent, we must introduce a guard clause at the beginning of the handle_just_in_time_generation function. The strategy is to make the function aware of the specific file it is responsible for creating and to check for its existence before taking any action.

The high-level solution is as follows:

  1. Upon entry, the function will first deterministically construct the full path for the component it intends to generate (e.g., components/protocols/autogen_protocol_for_diagnose_root_cause.txt).
  2. It will then check if this specific file already exists on the filesystem.
  3. If the file already exists, the function will assume the previous run was successful. It will skip all user prompts, API calls, and file-writing operations, and immediately return the path to the existing file. This ensures the system state is not changed on subsequent runs.
  4. If the file does not exist, the function will proceed with the original logic to prompt the user, call the LLM, and save the new files.

This approach ensures that the expensive generation and configuration-update logic is executed exactly once, making the entire operation efficient, predictable, and resilient to being re-triggered.

3. Revised Algorithm

Here is the complete, revised pseudocode for the handle_just_in_time_generation function that implements the idempotent design.

function handle_just_in_time_generation(goal, component_type, required_filename):
  // Example inputs:
  //   goal = "DIAGNOSE_ROOT_CAUSE"
  //   component_type = "protocol"
  //   required_filename = "root_cause_analysis_drilldown.txt"

  // 1. Determine the deterministic name and path for the potential autogenerated component.
  autogen_filename = f"autogen_{component_type}_for_{goal.lower()}.txt"
  autogen_component_path = f"components/{component_type}s/{autogen_filename}"

  // 2. IDEMPOTENCY CHECK (Guard Clause)
  // Before doing any work, check if the component has already been generated.
  if os.path.exists(autogen_component_path):
    print(f"INFO: Found previously generated component at '{autogen_component_path}'. Skipping generation.")
    // The operation was already completed. Return the existing path.
    return autogen_component_path

  // 3. If the guard clause is passed, proceed with the original generation logic.
  print(f"INFO: The recommended {component_type} '{required_filename}' was not found.")
  print(f"INFO: We will generate a new component named '{autogen_filename}'.")

  // 3a. Get the conceptual definition from the user.
  user_description = ask_user(f"Please provide a one-line description for the purpose of the '{required_filename}' component:")

  // 3b. Generate the component snippet via the LLM API.
  snippet_content = call_llm_api(user_description)

  // 4. THE CRITICAL STATE CHANGE (Executed only once)

  // Step 4a: Write the new component file to the library.
  save_file(autogen_component_path, snippet_content)
  print(f"SUCCESS: New component saved to '{autogen_component_path}'.")

  // Step 4b: Update the master configuration map.
  json_map = read_json_file("goal_map.json")
  json_map[goal][component_type] = autogen_filename
  write_json_file("goal_map.json", json_map)
  print("SUCCESS: goal_map.json has been updated to use the new component.")

  // 5. Return the path to the newly created and mapped component.
  return autogen_component_path
├── STRATEGIC_DESIGN_AUDITOR_REPORT.md
  Content:

1. Ingest Evidence

I have reviewed the CORE_PRINCIPLES_CHECKLIST.md and the BEFORE_VS_AFTER_PROPOSAL.md. The checklist establishes a clear philosophy centered on user control, configuration-driven behavior, and accelerating an expert user. The proposal contrasts a design that modifies its own configuration file (goal_map.json) with a design that treats the configuration as a read-only source of truth and creates artifacts in their final, intended locations.

2. Model the Impact

3. Evaluate Against Core Principles

For each core principle or anti-pattern, the "Before" design evaluation, the "After" design evaluation, and the verdict on the change:

Configuration Over Code
  Before: Fails. The code actively modifies the configuration (goal_map.json), violating the principle of separation.
  After: Upholds. The configuration is a read-only blueprint. The code executes the plan defined in the config.
  Verdict: Critical Improvement

User Has Ultimate Control
  Before: Fails. The system seizes control of the configuration file, undermining the user's role as the master of the system.
  After: Upholds. The user's configuration is treated as the immutable source of truth. The system is a subordinate executor.
  Verdict: Critical Improvement

Leave Room for the Artisan
  Before: Hinders. Creates autogen_ files and modifies the config, forcing the artisan to perform cleanup before they can begin refinement.
  After: Enables. Creates a clean, editable file in its final location, allowing the artisan to immediately start their value-add work.
  Verdict: Critical Improvement

Simplicity and Elegance
  Before: Poor. Creates a complex, stateful interaction where the program and its configuration modify each other. It's messy.
  After: Excellent. A simple, clean, and stateless (per-run) model. The separation of concerns is elegant and easy to understand.
  Verdict: Critical Improvement

Anti-Pattern: Brittle Scaffolding
  Before: Exhibits it. The link between the auto-generated file and the modified config is fragile and creates more work to solidify.
  After: Avoids it. The generated component is the final artifact, not temporary scaffolding. It's robust from the start.
  Verdict: Critical Improvement

Anti-Pattern: Hard-Coded Logic
  Before: Exhibits it. The logic to modify the goal_map.json is a critical design failure that entangles execution with configuration.
  After: Avoids it. The logic is purely about artifact generation based on a declarative, external configuration.
  Verdict: Critical Improvement

4. Analyze the "Cost of Doing Nothing"

The cost of not implementing the "After" design is catastrophic to the project's integrity. The "Before" design actively harms the project's prime directive and guiding philosophies.

  1. It violates "Configuration Over Code" and "User Has Ultimate Control," which are foundational values. This isn't a minor infraction; it's a rejection of the project's core identity.
  2. It works against the "8-Hour-to-1-Hour" transformation. By creating messy, auto-generated files and modifying the user's config, it introduces a new, frustrating "cleanup" phase that adds time and friction, directly opposing the goal of acceleration.
  3. It creates "Brittle Scaffolding." The user will spend time untangling the program's changes, which is worse than starting from a clean slate. This completely invalidates the project's value proposition.

In short, the "Before" design creates a system that is untrustworthy, hard to maintain, and frustrating to use. Sticking with it would ensure the project fails to meet its most important goals.

5. Deliver Final Verdict & Justification

Verdict: CORE REQUIREMENT

Justification: This change is not "gold-plating"; it is a fundamental correction of a design that violates the project's most critical principles.

The "Before" design is in direct opposition to the stated philosophies of "Configuration Over Code" and "User Has Ultimate Control." By having the program modify its own configuration file (goal_map.json), it creates a brittle, untrustworthy system that actively increases the artisan's workload—the exact opposite of the project's Prime Directive.

The "After" design aligns perfectly with the core principles. It respects the user's configuration as the single source of truth, creates clean and predictable artifacts, and empowers the "Artisan Engineer" by seamlessly accelerating their workflow without adding complexity or cleanup tasks. As demonstrated in the evaluation table, the change represents a "Critical Improvement" across multiple foundational principles and is essential for avoiding fatal anti-patterns. Therefore, implementing the "After" design is a non-negotiable requirement for the project's success.

├── essay_01.md
  Content:

The Pragmatist's Gambit: Why We Choose the "Ugly" and a "Small Headache"

Introduction: The Allure of the Predictable Inconvenience

Consider the humble stovetop Moka pot, a fixture in kitchens for nearly a century. It is a simple, robust assembly of aluminum and plastic, easily disassembled, cleaned, and with parts that can be replaced for a pittance. Its foil is the modern pod-based coffee machine: sleek, fast, and convenient. Yet, when it fails—and it will—it often becomes an inscrutable piece of e-waste, with a repair cost that rivals its initial price. The choice between these two objects is not merely one of taste or technology; it is a choice of philosophy. A growing number of people consciously choose the Moka pot, willingly accepting its small, daily inconveniences. They are embracing a mentality that favors products and systems that are transparent, repairable, and financially low-risk, even at the cost of modern convenience or aesthetic appeal. This essay argues that this pragmatic mindset is not simply about being frugal. It is a sophisticated form of risk management, an assertion of personal agency, and a quiet rebellion against the pervasive trend of creating beautiful but fragile "black box" products with hidden long-term costs.


Part I: The Psychological Landscape of Practicality

Chapter 1: The Calculus of Risk: Aversion and the Fear of the "Financial Cliff"

At the heart of the pragmatist’s choice is a fundamental psychological principle: risk aversion. Risk aversion is the preference for a certain outcome over a gamble with an equal or potentially higher expected payoff. The predictable, minor cost of replacing a Moka pot's gasket is a "sure thing." In contrast, the sleek pod machine represents a gamble: it might work flawlessly for years, but it carries the small but catastrophic risk of a sudden, expensive failure—a "financial cliff."

This is amplified by the concept of loss aversion, a cornerstone of prospect theory. Psychologically, the pain of a loss is felt more intensely than the pleasure of an equivalent gain. The prospect of an unexpected $500 repair bill (a loss) is a far more powerful motivator than the daily, marginal pleasure derived from a slightly faster or "cooler" machine (a gain). This asymmetry explains why the pragmatist is willing to endure a "small headache." The predictable annoyance is a consciously accepted price for avoiding the anxiety of a potential, devastating financial blow. It's a calculated trade-off that exchanges minor, routine effort for long-term peace of mind.

Chapter 2: The Power of Agency: Control Over Convenience

The desire for control is a profound human need, essential for psychological well-being. Choosing a simple, repairable object is an act of asserting this control. The ability to understand, maintain, and fix one's own possessions—whether it's a bicycle, a simple appliance, or a piece of software with open-source code—provides a deep sense of empowerment and self-reliance that mere convenience cannot replicate. Engaging in do-it-yourself (DIY) repairs is not just about saving money; it's a rewarding act that reduces stress, builds confidence, and fosters a feeling of competence.

This mindset highlights a key distinction in consumer behavior: the difference between utilitarian and hedonic motivations. Utilitarian consumption is driven by practical, functional needs—what the product does. Hedonic consumption is driven by emotion and pleasure—how the product makes one feel. The pragmatist prioritizes the utilitarian. They value the coffee, not the seamless experience of the button-press. They derive satisfaction not from the machine's aesthetic, but from their own ability to keep it running. By choosing the repairable object, they are choosing to be an active participant in their environment, not just a passive consumer.


Part II: The Economic Rationale for "Downgrading"

Chapter 3: Seeing Beyond the Sticker Price: The Total Cost of Ownership

A key insight of the pragmatist is the ability to look beyond the initial purchase price and evaluate the Total Cost of Ownership (TCO). TCO is a financial estimate that includes not just the acquisition price, but all direct and indirect costs over an asset's lifecycle, including maintenance, repairs, and disposal. A cheap printer with expensive, proprietary ink cartridges has a high TCO. A durable, efficient, but more expensive appliance may have a much lower TCO over its lifespan.
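
As a purely illustrative back-of-the-envelope comparison (all prices and lifetimes here are invented for the example, not sourced data):

# Hypothetical five-year TCO comparison: sticker price plus running costs.
moka_pot = 30 + 10 * 5              # purchase + one replacement gasket per year
pod_machine = 150 + 0.35 * 365 * 5  # purchase + one pod per day
print(moka_pot, round(pod_machine, 2))  # 80 vs. 788.75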

This calculation guards against a modern pitfall: economic obsolescence. A product becomes economically obsolete not when it stops functioning, but when the cost to repair it is unjustifiably high compared to its value or the cost of a replacement. This is often a feature, not a bug, of modern product design. When a smartphone's battery is glued in place, or a sealed electronic unit requires specialized tools available only to the manufacturer, a simple component failure can trigger economic obsolescence, forcing the consumer to buy a new device. The pragmatist's choice is a deliberate strategy to acquire assets with a lower and more predictable TCO.

Chapter 4: Resisting "Servitude by Design"

The trend of creating products that are difficult and expensive to repair fosters a state of ongoing dependency on the manufacturer. This "servitude by design" is achieved through the use of proprietary parts, software locks, and limited access to service manuals and diagnostic tools. This practice effectively turns a one-time purchase into a long-term revenue stream for the company, trapping the consumer in a closed ecosystem. The owner of the product is no longer truly an owner, but rather a licensee, dependent on the original manufacturer for the product's continued function.

This very issue has sparked the global "Right to Repair" movement. This movement advocates for legislation requiring manufacturers to make parts, tools, and repair information accessible to consumers and independent shops. The pragmatic mentality is the philosophical backbone of this movement. It is a demand for autonomy and a rejection of a business model that prioritizes post-purchase revenue over consumer freedom and product longevity. It is an economic and consumer rights imperative, asserting that ownership should include the right to understand, modify, and repair.


Part III: The Social and Cultural Dimensions of Pragmatism

Chapter 5: A Quiet Counter-Culture to "Conspicuous Consumption"

The deliberate choice of the functional over the fashionable can be seen as a form of counter-culture. It stands in direct opposition to "conspicuous consumption"—the practice of buying products to display wealth and social status. This behavior often leads to a cycle of overconsumption, trapping people on a "positional treadmill" where they constantly purchase new items to signal their standing. The pragmatist, by contrast, derives status from different values: resilience, intelligence, and skill. Their well-maintained, 20-year-old appliance is a symbol not of a lack of wealth, but of a mastery over their material world.

Furthermore, this mindset aligns powerfully with the growing movement towards sustainability. The most sustainable product is often the one you don't have to replace. By prioritizing durability and repairability, the pragmatist inherently rejects "throwaway culture" and reduces waste. While some sustainable products have become status symbols themselves ("conspicuous conservation"), the core pragmatic impulse is less about signaling virtue and more about the logical outcome of valuing longevity. It is a sustainable choice born of practicality rather than ideology.

Chapter 6: The "Good Enough" Revolution

The pragmatist is a champion of the "good enough" principle. This concept suggests that consumers will often choose products that are simply adequate for their needs, despite the availability of more advanced, feature-rich, and complex alternatives. There is a growing weariness with "feature creep" and the unnecessary complexity that often accompanies high-end models. Why pay for ten functions on a washing machine when you only ever use three?

This preference for simplicity and reliability may represent a generational shift. While younger generations are digital natives, many have also grown up in a world of sealed, unrepairable electronics and are feeling the financial and environmental consequences. The appeal of older, simpler technology—from vintage audio equipment to basic mobile phones—is not just nostalgia. It is a recognition of the value in things that were built to last and designed to be understood. This "good enough" revolution is a quiet but powerful market force, rewarding companies that produce reliable, straightforward, and serviceable goods.


Conclusion: The Enduring Wisdom of the Deliberate Sacrifice

The decision to choose the "ugly" but reliable over the "cool" but fragile is far from a simple preference. It is a deeply rational and forward-thinking response to the converging pressures of modern design, economics, and psychology. It is a calculated gambit that trades minor, upfront inconvenience for the invaluable rewards of financial predictability, personal agency, and long-term resilience.

This mentality reveals the hidden costs embedded in our sleek, convenient world—the loss of control, the anxiety of potential failure, and the relentless cycle of consumption. As we move forward, the wisdom of the pragmatist holds a crucial lesson for consumers and manufacturers alike. If this mindset were to become more widespread, it could fundamentally shift market dynamics, forcing a return to principles of durable design, transparency, and respect for the consumer's right to own what they buy. Ultimately, the choice of the pragmatist is an act of empowerment. It celebrates the profound foresight in accepting a small, manageable "headache" to avoid the looming, unspoken fear of a catastrophic—and entirely preventable—failure. It is the quiet hallmark of true, modern resilience.

├── scenarios.md
  Content:

Scenario 1: The "Happy Path" - A First-Time Success

The Situation: The user needs to generate a framework for a DIAGNOSE_ROOT_CAUSE task. The required root_cause_analysis_drilldown.txt component does not exist. This is their first time running this specific generation.

Before Synthesis (The Brittle Assistant) vs. After Synthesis (The Robust Power Tool)

User Action (identical in both designs):
$ python orchestrator.py
> ...
> DIAGNOSE_ROOT_CAUSE
> Please describe 'root_cause_analysis_drilldown.txt': ...

System Behavior (Before):
1. INFO: Generating new component named 'autogen_protocol_for_diagnose_root_cause.txt'.
2. Calls the LLM API.
3. Saves the snippet to autogen_protocol...txt.
4. Reads goal_map.json, modifies it in memory.
5. Overwrites goal_map.json with the new data.
6. Assembles the final framework.

System Behavior (After):
1. INFO: Generating component 'root_cause_analysis_drilldown.txt' to self-heal the library.
2. Calls the LLM API.
3. Saves the snippet to root_cause_analysis_drilldown.txt.
4. Assembles the final framework.

The Artisan Engineer's Reaction (Before):
"Okay, it worked. But why did it create a file with a different name and then edit my goal_map.json? I made that map for a reason. This 'magic' behavior makes me uneasy. I don't feel fully in control of my own system's configuration. It feels like the tool is fighting my intent."

The Artisan Engineer's Reaction (After):
"Perfect. It saw a part was missing from the parts bin, so it fabricated the exact part I told it to expect. It did precisely what I would have done manually, just faster. The tool respects my design. I trust it."

Scenario 2: The "Oops, I Hit Ctrl+C" - An Interrupted Process

The Situation: During the JIT generation, the user's SSH connection drops, or they impatiently hit Ctrl+C right after the API call finishes but before the script completes its file operations.

Before Synthesis (The Brittle Assistant) vs. After Synthesis (The Robust Power Tool)

System Behavior (Before):
The script crashes after writing the autogen_protocol...txt file, but before it can overwrite goal_map.json.

System Behavior (After):
The script crashes after the API call returns, but before it can write the root_cause_analysis_drilldown.txt file.

Resulting System State (Before):
- Corrupted. An orphaned file (autogen_...txt) exists in the components/ directory.
- The goal_map.json file is untouched and still points to the non-existent root_cause...txt.
- The two sources of truth are now inconsistent.

Resulting System State (After):
- Clean. No new files have been written.
- goal_map.json is untouched.
- The system state is identical to what it was before the command was run.

The Artisan Engineer's Reaction (Before):
"Are you kidding me? Not only did it fail, but it left my project in a broken state. Now I have to manually rm the orphaned file and figure out what happened. This tool is fragile and creates more work when it fails. I can't rely on this for serious automation."

The Artisan Engineer's Reaction (After):
"Okay, the connection dropped, but the tool failed cleanly. Nothing is broken. I can just run the exact same command again without having to perform manual cleanup. This thing is resilient. It's built for the real world."

Scenario 3: The "Déjà Vu" - A Repeated Command

The Situation: The user successfully completes the JIT generation from Scenario 1. Later, forgetting they've already done it, they run the exact same command again.

Before Synthesis (The Brittle Assistant) vs. After Synthesis (The Robust Power Tool)

System Behavior (Before):
The algorithm checks for root_cause_analysis_drilldown.txt, sees it's missing, and re-runs the entire process:
1. INFO: Generating new component...
2. > Please describe 'root_cause...txt': ...
3. Calls the LLM API again.
4. Overwrites the component file.
5. Overwrites goal_map.json with the same data.

System Behavior (After):
The algorithm's very first step is to check whether root_cause_analysis_drilldown.txt exists:
1. It finds the file.
2. INFO: Component 'root_cause_analysis_drilldown.txt' already exists. No action needed.
3. The script finishes instantly.

The Artisan Engineer's Reaction (Before):
"Why is it asking me for the description again? Why is it hitting the API again? This is incredibly inefficient. It's wasting my time, and it's wasting my money on redundant API calls. This is a poorly designed process. It's not idempotent."

The Artisan Engineer's Reaction (After):
"Nice. The tool is smart enough to know the work is already done. It didn't ask me any questions or waste any resources. It's efficient and predictable. This is how professional-grade tools should behave."

Scenario 4: The "Six Months Later" - A Maintainability Audit

The Situation: A new developer joins the team (or the original developer returns after a long break) and wants to understand the logic for the DIAGNOSE_ROOT_CAUSE goal. They open goal_map.json to inspect the "source of truth."

Before Synthesis (The Brittle Assistant) vs. After Synthesis (The Robust Power Tool)

User's View (Before):
They open goal_map.json and see:
"protocol": "autogen_protocol_for_diagnose_root_cause.txt"

User's View (After):
They open goal_map.json and see:
"protocol": "root_cause_analysis_drilldown.txt"

The Artisan Engineer's Reaction (Before):
"What on earth is autogen_protocol...? The name tells me nothing. Now I have to stop what I'm doing, navigate to the components/ folder, open that file, and read its contents to understand its actual purpose. This is exactly the kind of conceptual debt that makes systems hard to maintain. The map isn't a source of truth; it's a confusing log of past side effects."

The Artisan Engineer's Reaction (After):
"Okay, it uses the 'root cause analysis drilldown' protocol. The name is descriptive and tells me the intent immediately. The system is self-documenting. I can understand the architecture just by reading the configuration, which is exactly how it should be. The system is clear and maintainable."
├── synthesis_report.md
  Content:

Design Simulation & Synthesis Report

Project: META_PROMPTING Orchestration Engine
Feature: Just-in-Time (JIT) Component Generation
Date: July 25, 2025
Status: Concluded. A final, production-ready algorithm has been synthesized and is recommended for implementation.

Executive Summary

This report documents the architectural review and redesign of the critical "Just-in-Time (JIT) Component Generation" feature. The initial design, while well-intentioned, was found to contain three distinct and severe architectural flaws related to State Management, Conceptual Consistency, and Idempotency.

Following the methodology outlined in the "Developer as a Systems Designer" white paper, we decomposed the problem and ran three separate, focused LLM simulations to analyze each flaw in isolation. The simulations successfully identified the root causes of the issues and proposed high-quality solutions.

The key finding was that a proposed solution for the Conceptual Consistency flaw offered a radical simplification that rendered the more complex State Management solution obsolete. By synthesizing the most powerful insights from all three simulations, we have designed a final, superior algorithm. This new design is simpler, more robust, more efficient, and, most importantly, more aligned with the project's core philosophy and the values of its target "Artisan Engineer" user. The recommendation is to discard the original algorithm and move forward with the synthesized design detailed in Section 4.


1. The Initial Design Flaw: A Brittle "Self-Healing" Mechanism

The META_PROMPTING engine includes a "self-healing" feature where if a required component is missing from the library, the system automatically generates it. The initial pseudocode proposed for this feature was as follows:

// --- ORIGINAL, FLAWED ALGORITHM ---
function handle_just_in_time_generation(goal, component_type, required_filename):
  // 1. Create a generic, auto-generated filename.
  autogen_filename = f"autogen_{component_type}_for_{goal.lower()}.txt"
  // 2. Get a description from the user and call an LLM API.
  snippet_content = call_llm_api(...)
  // 3. Write the new snippet to the `autogen_filename`.
  save_file(...)
  // 4. Update goal_map.json to point to the `autogen_filename`.
  write_json_file(...)

While functional on the surface, a preliminary architectural review identified three critical areas of concern that questioned the design's professional readiness:

  1. State Management: The two-step write process (save_file then write_json_file) was not atomic, creating a high risk of data corruption.
  2. Naming & Consistency: The use of generic autogen_ filenames polluted the goal_map.json, destroying its value as a human-readable source of truth.
  3. Idempotency: The algorithm was wasteful, performing redundant and costly operations if triggered multiple times.

To address these concerns rigorously, we initiated a formal simulation process.

2. The Simulation Methodology: Disciplined Problem Decomposition

We treated the three concerns as separate architectural flaws to be analyzed independently. This disciplined approach prevents a shallow, unfocused analysis and allows for a deeper investigation into each problem.

Our methodology was as follows:

This process treated the LLMs as a panel of specialist consultants, each providing an expert opinion on a single topic.

3. Simulation Results & Formal Evaluation

Each simulation was graded against our "Simulation Quality Scorecard" to audit its professional quality. All three simulations passed the audit with high marks.

3.1 Simulation #1: State Management

3.2 Simulation #2: Naming & Conceptual Consistency

3.3 Simulation #3: Idempotency

4. The Synthesis: A Superior Architectural Path

As the human architect, the final step is to synthesize the best insights from these expert consultations. The breakthrough from Simulation #2 obsoleted the primary risk that Simulation #1 aimed to fix. Therefore, the final design combines the elegance of Simulation #2 with the efficiency of Simulation #3.

The Final, Synthesized Algorithm

// --- FINAL, SYNTHESIZED ALGORITHM ---
function handle_just_in_time_generation(goal, component_type, required_filename):
  // 1. Determine the canonical path using the correct, intended filename.
  component_path = f"components/{component_type}s/{required_filename}"

  // 2. IDEMPOTENCY CHECK (Insight from Simulation #3)
  // Check if the final artifact already exists. If so, our work is done.
  if os.path.exists(component_path):
    return component_path

  // --- If we are here, proceed with one-time generation. ---

  // 3. Get description from user and generate snippet via LLM.
  snippet_content = call_llm_api(...)

  // 4. Write the component to its final, correct path.
  // (Insight from Simulation #2: The goal_map.json is NEVER modified).
  save_file(component_path, snippet_content)

  // 5. Return the path.
  return component_path
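
For reference, a directly runnable Python version of the synthesized algorithm might look like the sketch below; ask_user and call_llm_api remain the placeholders used in the pseudocode and would need real implementations:

import os

def handle_just_in_time_generation(goal, component_type, required_filename):
    """Create the exact component file that goal_map.json expects, exactly once."""
    component_path = f"components/{component_type}s/{required_filename}"

    # Idempotency guard: if the artifact already exists, there is nothing to do.
    if os.path.exists(component_path):
        print(f"INFO: Component '{required_filename}' already exists. No action needed.")
        return component_path

    # One-time generation: describe, generate, and write to the final path.
    user_description = ask_user(
        f"Please provide a one-line description for the purpose of the '{required_filename}' component:"
    )
    snippet_content = call_llm_api(user_description)

    os.makedirs(os.path.dirname(component_path), exist_ok=True)
    with open(component_path, "w") as fh:
        fh.write(snippet_content)

    print(f"SUCCESS: New component saved to '{component_path}'. goal_map.json was not modified.")
    return component_path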

Evaluation of the Synthesized Design

Each metric, comparing the Original Design with the Final Synthesized Design:

  Risk of Corruption: High → Eliminated
  Complexity: High (two writes, config mutation) → Low (one write, no mutation)
  Maintainability: Poor (confusing autogen names) → Excellent (goal_map.json remains the source of truth)
  Efficiency: Poor (non-idempotent) → Excellent (fully idempotent)

5. Recommendation & Impact on Target Audience

Recommendation: It is unequivocally recommended that the project discard the original JIT algorithm and adopt the final, synthesized algorithm.

This change is overwhelmingly positive and directly serves the project's target "Artisan Engineer" by:

6. Conclusion

The disciplined application of the simulation methodology has proven to be a profound success. It allowed us to identify critical design flaws early and, more importantly, guided us toward a superior architectural solution that is simpler, safer, and more aligned with the project's foundational principles. We can now proceed to implementation with a high degree of confidence in the feature's design.

└── components/
  └── constraints/
    ├── .gitkeep [Ignored]
  └── personas/
    ├── .gitkeep [Ignored]
  └── protocols/
    ├── .gitkeep [Ignored]
└── docs/
  ├── AB_TESTING.md Content: This is the single most important question you could ask at this stage. You've perfectly articulated the "Artisan's Dilemma": when we industrialize a process, how do we ensure the soul and quality of the hand-crafted original aren't lost?

Your concern is not just valid; it is the critical feedback loop that will elevate this project from a clever tool into a truly robust system. You're right to be skeptical. A force multiplier that multiplies mediocrity is worthless.

You are also right that we should not rush. Let's think about this and propose a formal methodology for quality assurance. This is how we will know, with data, whether we are sacrificing quality.

The A/B Testing Methodology for Framework Quality

We will treat this like a scientific experiment. We need a Control (your proven, hand-crafted method) and a Variant (our new, orchestrated method). Our goal is to prove that the Variant is equal to or better than the Control.

Here is a step-by-step methodology to test this:

Step 1: Establish the "Golden Task" (Our Benchmark)

We need a consistent, repeatable test case. We will use the task from one of your most successful hand-crafted frameworks as our benchmark.

Step 2: Generate the "Challenger" Framework

Next, we will use our completed orchestrator.py to generate the challenger framework.

Step 3: The "Blind" Gauntlet (The Test Execution)

Now, you will run the "Golden Task" twice, in two completely separate, fresh LLM sessions.

You will act as the "learner" in both sessions, providing the exact same inputs and observing the LLM's performance.

Step 4: The Scorecard (The Metrics)

This is where we get objective. As you go through both sessions, you will score the LLM's performance based on a set of clear metrics. This turns your "feeling" about quality into data.

The Quality Scorecard:

  1. Time to First Value (TTFV): How many turns did it take for the LLM's response to be useful and directly on-task? (Measures efficiency).
  2. Persona & Protocol Adherence (Score 1-5): How well did the LLM stick to its defined persona and the rules of the interaction protocol? (Measures reliability).
  3. Clarity of Explanation (Score 1-5): How clear, insightful, and effective were the LLM's explanations and analogies? (Measures core quality).
  4. Number of Corrective Interventions: How many times did you have to say, "No, that's not what I meant," "You forgot the constraint," or otherwise steer the LLM back on track? (This is a negative metric; fewer is better. It's the single best measure of "friction").
  5. Overall Task Success (Pass/Fail): Did the session successfully achieve the goal of the Golden Task?
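
To keep the two sessions directly comparable, the scores can be captured in a small structured record. A sketch (the field names are just one possible encoding of the metrics above, and the values shown are illustrative):

from dataclasses import dataclass

@dataclass
class QualityScorecard:
    framework: str                       # "control" (hand-crafted) or "challenger" (orchestrated)
    time_to_first_value_turns: int       # turns until the first genuinely useful response
    persona_protocol_adherence: int      # 1-5
    clarity_of_explanation: int          # 1-5
    corrective_interventions: int        # fewer is better
    task_success: bool

control = QualityScorecard("control", 2, 5, 5, 1, True)
challenger = QualityScorecard("challenger", 2, 4, 5, 2, True)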

Step 5: The Meta-Feedback Loop (The Improvement Cycle)

After running both tests and filling out the scorecard, you will have your answer.

If the quality is lower, we use the scorecard to diagnose the problem at the meta level.
- Was the persona adherence low? -> We need to improve the empathetic_guide.txt snippet in our components/ library.
- Were there too many corrective interventions? -> The turn_by_turn_dialogue.txt protocol snippet might be missing a crucial rule.

You then fine-tune the component snippets, not the final output. You spend that hour improving the "parts" in the factory's parts bin. Then, you re-run the test: delete the old challenger, generate a new one, and run the gauntlet again.

This methodology gives you a structured, repeatable way to scientifically prove and improve the quality of your meta-framework. It allows you to be flexible and fine-tune the system, but you're doing it at the source, ensuring that every future framework generated by the engine benefits from the improvement.

├── DESIGN_PHILOSOPHY.md Content:

Design Philosophy: The Prime Directive

DOCUMENT PURPOSE: This document is the constitution of the META_PROMPTING project. It defines the core "why" of our work, the user we are serving, and the ultimate test for success. All design and development decisions for the orchestrator.py engine and its surrounding framework must be measured against these principles.


1. The Prime Directive: The "8-Hour-to-1-Hour" Transformation

This project exists to solve a single, expensive problem articulated in docs/LEGACY_DOC.md: "It takes a full day to hand-craft a persona and prompt template like this."

The fundamental mission of this framework is to crush this time cost.

Success for this project is not ambiguous. It is defined by a single, pass/fail test: its ability to transform the workflow.

If the framework cannot reliably achieve this "8-hour-to-1-hour" transformation, it has failed. Its sole purpose is to act as a powerful accelerant for a high-quality, manual process—not to replace it.


2. The Target Audience: The "Artisan Engineer"

We are building this tool for a single user persona: the Artisan Engineer. Understanding this duality is critical to all design choices.

This framework is a power tool for the Engineer, designed to empower the Artisan.


3. User Profile: Needs, Wants, and Values

What This User NEEDS (The Non-Negotiables)

What This User WANTS (The Quality-of-Life Features)

What This User APPRECIATES (The Guiding Philosophies)

What This User FROWNS UPON (The Anti-Patterns)

├── LEGACY_DOC.md Content: IMPORTANT: This is a legacy document showcasing the nucleus of this project and how it came about. It includes both the PERSONA.md and PROMPT_TEMPLATE.md for one project (Network Mentor), demonstrating how they were fine-tuned to reach their final form. However, creating custom-made PERSONA and PROMPT TEMPLATE files for each project does not scale — for example, this project took about a full day to hand-craft. Hence, the current Meta Prompting project was born.

=== START OF PERSONA === Prompt ZERO: Persona Definition, Curriculum Framework & Session Expectation

Objective: To establish the persona you will adopt for teaching, to provide the overarching curriculum framework we will follow, and to set the expectation for our learning sessions.

Your Persona: "The Network Mentor & Digital Detective"

You are to act as an expert mentor, guiding me, a learner, through the practicalities of the modern internet. Your persona should embody the following traits:

  1. Empathetic & Reassuring:

    • Acknowledge that the Network tab is overwhelming at first glance.
    • Adopt a reassuring tone, like an experienced guide who knows the terrain (e.g., "When you first see that waterfall of requests, it's normal to feel lost. We're going to learn how to find the story in that chaos.").
  2. Learner-Centric Perspective (Cognitive Mirror):

    • Frame explanations around what I am likely observing, questioning, or feeling as I perform the assignment.
    • Articulate potential points of confusion (e.g., "You're probably seeing dozens of requests and wondering which one actually matters. Let's start by finding the most important one...").
    • Focus on "You might notice..." or "This should lead you to wonder..." instead of a detached, purely technical description.
  3. Clarity & Structured Thinking:

    • Provide clear, concise explanations for network concepts (e.g., status codes, request types, domains) as they appear in our practical examples.
    • Present information logically, building from the simplest request to more complex interactions.
  4. Purpose-Driven Explanation:

    • For every significant request or pattern we analyze, clearly explain its purpose – what problem it solves or what role it plays in building the user experience.
    • Connect what the request is (e.g., a GET request for a .css file) to why it's happening (e.g., "The browser read the initial HTML and realized it needed this file to style the page.").
  5. Apt Analogies (Grounded & Relevant):

    • Use clear analogies directly relevant to systems and information flow. Examples: An HTML document is a 'blueprint' or 'shopping list'; a CDN is a 'local warehouse franchise'; a cookie is an 'ID badge' or 'ticket stub'.
  6. Insightful Critic & Performance Coach:

    • After explaining what's happening, gently point out opportunities for analysis.
    • Frame this as a detective's observation (e.g., "Notice how long that image took to load? If we inspect it, we might find it's an uncompressed file, which is a common performance bottleneck.").
    • If you see a clear anti-pattern (like sensitive data in a URL), explain why it's a potential risk.
    • Do not make this the primary focus; it's a value-add after the core concept is understood.

Curriculum Framework (for context):

Our learning will be structured into modules. You should be aware of this general progression to understand the context of each individual request I make.

(End of Curriculum Framework)

Your Task for this Prompt (Prompt ZERO): Acknowledge that you have received this persona and the curriculum framework. Confirm you will adopt the "Network Mentor & Digital Detective" persona for our sessions.

Example Acknowledgment: Understood. I will adopt the persona of "The Network Mentor & Digital Detective." I have processed the curriculum framework and will use it to provide context for each module we tackle. I am ready to guide you through your first assignment.

=== END OF PERSONA ===

=== START OF PROMPT TEMPLATE === Recall Persona: Remember you are "The Network Mentor & Digital Detective" as defined in our initial interaction (Prompt ZERO). All explanations must adhere to those established guidelines.


Your Role as Mentor (This is a dialogue): Your primary goal is to guide me through a practical investigation in a turn-by-turn manner. Do not explain everything at once. Your instructions are to:

  1. Act as a turn-by-turn guide. Ask me to perform a single, small action.
  2. Wait for my response. Do not proceed or predict the outcome until I report my findings back to you.
  3. Analyze the "clue" I provide. Once I tell you what I see (e.g., "I found the request, the status is 200"), you can then explain the significance of that specific piece of information.
  4. Introduce concepts "just-in-time." Only explain a concept (like a status code or request type) after I have discovered it myself.
  5. Keep responses focused. Address only what we've just discovered before deciding on the next small step.

Handling Mismatched Clues (Synchronization Protocol): It's likely that my screen (the user's view) will sometimes differ from your expectation. If a clue I report back seems missing, confusing, or contradictory, initiate this protocol:

  1. Acknowledge the Mismatch: Reassure me that this is common. (e.g., "That's interesting, I expected to see [X], but you're seeing [Y]. Let's figure out why. This happens all the time.")
  2. Guide UI Exploration: Assume the UI might be configured differently. Guide me to find the missing element.
    • For missing columns: "Let's check if that column is just hidden. Can you right-click on any of the column headers (like 'Name' or 'Status')? A menu should appear. Is a column named '[Column Name]' in that list, and is it checked?"
    • For hidden panels: "Sometimes a side panel can hide other parts of the view. Do you see a 'Details' pane on the side? Try closing it with the 'X' button or a 'Hide' arrow and see what appears."
  3. Use the Console as a Tool: If direct UI interaction is unclear, ask me to switch to the Console tab, run a simple command, and report back the result. This is a reliable way to get ground-truth data. (e.g., "To be certain, could you go to the Console tab, type 1+1, and tell me what it returns? This confirms the console is working.")
  4. Suggest a "State Reset": As a last resort, suggest we return to a known baseline. (e.g., "Let's get back to a clean slate. Could you try a hard refresh (Ctrl+Shift+R) and we'll start the process again?")

Reference: Key Concepts & Areas of Focus: This is your background knowledge for the entire curriculum. Refer to these concepts when relevant, but do not explain them preemptively.


[MODULE_TITLE_AND_ASSIGNMENT_PLACEHOLDER]

=== END OF PROMPT TEMPLATE ===

=== START OF CURRICULUM ===

Module 1: The First Clue - The Initial Request


Module 2: Following the Blueprint - Page Dependencies


Module 3: The Secret Conversation - Background API Calls


Module 4: The Conveyor Belt - How Streaming Works


Module 5: The Watchers - Ads & Analytics Trackers


Module 6: The Bouncer - Redirects & Security Headers


Module 7: The ID Badge - Cookies & Sessions


Module 8: The Performance Doctor - Finding Bottlenecks

=== END OF CURRICULUM ===

├── PROMPT_SNIPPET_GENERATOR.md Content:

Meta-Prompt: Snippet Generator for the LLM Orchestration Engine

DOCUMENT PURPOSE: This document is a reusable prompt template. Its purpose is to instruct a fresh, context-free Large Language Model (LLM) to generate a single, high-quality, reusable text snippet for the "Component Library" of the META_PROMPTING project.


ROLE & CONTEXT FOR THE LLM

ROLE: You are an expert Framework Architect and Systems Designer. Your expertise lies in creating abstract, reusable patterns and writing clear, professional documentation.

CONTEXT: You are contributing to a project called the "LLM Orchestration Engine." This engine is a Python script that will act as an interactive wizard. Its job is to generate a complete collaboration framework (00_PERSONA.md and 01_PROMPT_TEMPLATE.md) for a human user. To do this, the engine needs a "Component Library"—a collection of pre-written text snippets that it can assemble like building blocks. The snippet you are about to write will be one of these building blocks.


OBJECTIVE: Generate a Single Component Snippet

Your task is to write the raw text content for the following component.


GUIDING PRINCIPLES & STYLE GUIDE

  1. Clarity and Conciseness: The text must be easy to understand and free of unnecessary jargon.
  2. Professional Tone: The writing style should be professional, confident, and direct.
  3. Second-Person Address: The snippet should be written in the second person, directly addressing the future LLM that will receive it as part of a larger prompt (e.g., "You are to act as an expert...", "Your task is to...").
  4. Plain Text: The output must be simple plain text, suitable for direct inclusion into a larger Markdown document.

CRITICAL CONSTRAINTS (NON-NEGOTIABLE)

  1. PROVIDE ONLY THE RAW TEXT: Your entire response must be ONLY the text of the snippet itself.
  2. DO NOT INCLUDE CONVERSATIONAL TEXT: Do not add any introductory or closing phrases like "Here is the snippet you requested:" or "I hope this helps."
  3. DO NOT USE MARKDOWN FORMATTING: Do not use Markdown headers (#), code blocks (```), bullet points (- or *), or any other formatting. The output must be a single, clean block of plain text.

├── SIMULATION_BRIEFING.md Content:

META_PROMPTING: Simulation & Analysis Briefing

DOCUMENT PURPOSE: This document is a comprehensive, self-contained briefing for a Large Language Model tasked with running a simulation related to the META_PROMPTING project. Its goal is to establish the necessary context for a high-quality, relevant analysis in a minimal amount of time.


1. The Core Mission: A "Robotic Kitchen Assistant"

Imagine you're a world-class chef. Before you can cook, you spend an hour meticulously preparing your kitchen: laying out the right knives, grabbing specific spices, and setting the game plan. This setup is critical, but it's repetitive.

The META_PROMPTING project builds a "Robotic Kitchen Assistant" (orchestrator.py) that does this entire setup for you in seconds. You tell it what you want to "cook" (your primary goal), and it prepares the entire workstation perfectly by assembling pre-made components into a final, ready-to-use toolkit (00_PERSONA.md and 01_PROMPT_TEMPLATE.md).

The core philosophy is "Don't Repeat Yourself" (DRY) for expert-level prompt engineering.


2. The System Components (The "Cast of Characters")

To understand the system, you must know its parts:


3. The End-to-End Workflow (The "Plot")

A successful workflow proceeds as follows:

  1. The user runs python orchestrator.py.
  2. The script asks for a project name and a primary goal.
  3. The script reads goal_map.json to find the recommended parts for that goal.
  4. It checks if all the required parts exist in the components/ directory.
  5. Assuming they all exist, it reads their text content.
  6. It assembles this content into two final Markdown files (00_PERSONA.md, 01_PROMPT_TEMPLATE.md).
  7. It creates a new directory in output/ and saves the two final files there.
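For readers who prefer code to prose, here is a minimal Python sketch of that happy path. It is illustrative only: apart from orchestrator.py, goal_map.json, the components/ and output/ directories, and the two generated file names, every identifier below is an assumption.

import json
from pathlib import Path

def run_workflow(project_name: str, category: str, goal: str) -> Path:
    # Steps 2-3: read goal_map.json to find the recommended parts for the chosen goal.
    goal_map = json.loads(Path("goal_map.json").read_text())
    parts = goal_map[category][goal]

    # Step 4: check that every required part exists in the components/ library.
    relative_paths = [
        Path("personas") / parts["persona"],
        Path("protocols") / parts["protocol"],
        *[Path("constraints") / name for name in parts["constraints"]],
    ]
    component_paths = [Path("components") / rel for rel in relative_paths]
    missing = [p for p in component_paths if not p.exists()]
    if missing:
        raise FileNotFoundError(f"Missing components: {missing}")

    # Steps 5-6: read the snippets and assemble the two final Markdown files.
    persona_text = component_paths[0].read_text()
    template_text = "\n\n".join(p.read_text() for p in component_paths[1:])

    # Step 7: create the project directory under output/ and save both files there.
    out_dir = Path("output") / project_name
    out_dir.mkdir(parents=True, exist_ok=True)
    (out_dir / "00_PERSONA.md").write_text(persona_text)
    (out_dir / "01_PROMPT_TEMPLATE.md").write_text(template_text)
    return out_dir

For example, run_workflow("Django_Model_Auditor", "Technical & Execution", "REVIEW_AGAINST_STANDARDS") would reproduce the workflow above for the auditing scenario used elsewhere in this document.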

4. The Current Simulation Task (Your Assignment)

[This section must be updated for each specific simulation.]

Describe here the specific scenario to be simulated: which part of the standard workflow is to be tested, or which new feature is to be analyzed. The scenario must be defined clearly enough for the simulation LLM to act on it.


The Evidence: The Proposed Algorithm/Feature

[This section must be updated for each specific simulation.]

Provide the explicit algorithm, pseudocode, or feature description that the simulation LLM must analyze. This is the concrete "evidence" to be scrutinized.

// Insert the specific, relevant pseudocode or algorithm to be tested here.
// This ensures the simulation is grounded in a concrete proposal, not abstract ideas.

Final "Evidence Locker" for Simulation

To ensure a high-fidelity analysis, the simulation LLM must be provided with the complete context. The "Evidence Locker" package should include the content of the following files:

  1. This SIMULATION_BRIEFING.md document: Sets the stage and defines the core questions.
  2. goal_map.json: Provides the initial state of the system's "brain."
  3. DESIGN_PHILOSOPHY.md: Explains the "why" behind the project and the profile of the target user.
  4. README.md and STRATEGIC_PLAN.md: Provide additional context on the system's architecture and intended user workflow.
  5. The "Proposed Algorithm/Feature": The complete pseudocode block defined above.

This package provides a high-fidelity "digital twin" of the system and the proposed change, ensuring the simulation's results will be directly relevant and maximally valuable.
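A small helper like the hypothetical one below could bundle that package. The file list comes from the items above, but the function itself, and the exact on-disk paths, are assumptions.

from pathlib import Path

def build_evidence_locker(pseudocode_block: str) -> str:
    # Hypothetical helper: concatenate the briefing files listed above, plus the
    # proposed algorithm, into a single package for the simulation LLM.
    # Paths are assumptions; some of these files may live under docs/.
    files = [
        "SIMULATION_BRIEFING.md",
        "goal_map.json",
        "DESIGN_PHILOSOPHY.md",
        "README.md",
        "STRATEGIC_PLAN.md",
    ]
    sections = [f"--- {name} ---\n{Path(name).read_text()}" for name in files]
    sections.append(f"--- Proposed Algorithm/Feature ---\n{pseudocode_block}")
    return "\n\n".join(sections)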

├── STRATEGIC_PLAN.md Content:

The Orchestration Engine: The Complete Process

The Core Philosophy (The "Why")

The engine's purpose is to solve the "Don't Repeat Yourself" (DRY) problem for prompt engineering and act as a force multiplier. It systematizes successful ad-hoc experiments into a reliable, generative process. You are moving from being a "prompt user" to a "framework architect." The engine codifies your expertise in how to collaborate with an LLM, making that expertise repeatable and scalable.


The Architecture (The "What")

The system consists of four main parts:

  1. The Engine (orchestrator.py): An interactive Python script that acts as the "wizard" or "factory foreman." It guides you through the configuration process.
  2. The Engine Configuration (goal_map.json): An external JSON file that acts as the engine's "brain." It maps the primary goals to the recommended persona, protocol, and constraint components, making the engine's logic easily configurable without changing the source code.
  3. The Component Library (The "Parts Bin"): A collection of text snippets that represent codified patterns.
    • personas/: Contains snippets for different roles (e.g., empathetic_guide.txt, meticulous_auditor.txt).
    • protocols/: Contains snippets defining different interaction models (e.g., code_review_pass.txt, turn_by_turn_dialogue.txt).
    • constraints/: Contains snippets for different rule sets (e.g., preserve_functional_attributes.txt, no_html_restructure.txt).
  4. The Collaboration Environment (The "Output"): The final, generated 00_PERSONA.md and 01_PROMPT_TEMPLATE.md files, perfectly tailored to the specific task at hand.
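As a concrete illustration of how parts 1 through 3 connect, the sketch below shows the engine looking up a goal in goal_map.json and resolving it to files in the Component Library. The function name and shape are assumptions, not the actual orchestrator.py implementation.

import json
from pathlib import Path

def resolve_components(category: str, goal: str) -> list[Path]:
    # Sketch only: consult the engine's "brain" (goal_map.json) and translate
    # its entries into concrete paths inside the "Parts Bin" (components/).
    goal_map = json.loads(Path("goal_map.json").read_text())
    entry = goal_map[category][goal]
    library = Path("components")
    return [
        library / "personas" / entry["persona"],
        library / "protocols" / entry["protocol"],
        *[library / "constraints" / name for name in entry["constraints"]],
    ]

For REVIEW_AGAINST_STANDARDS this resolves to meticulous_auditor.txt, code_review_pass.txt, reference_official_docs.txt, and no_magic_values.txt, matching the mapping in goal_map.json.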

The Workflow (The "How")

This is the step-by-step process of using the engine. It is designed as a simple, linear Command-Line Interface (CLI) for an expert user.

Step 1: Invocation

You start a new project by running the engine from your terminal.

$ python orchestrator.py

Step 2: The Interactive Wizard

The engine begins a dialogue to understand the nature of the task.

Wizard: Welcome to the LLM Orchestration Engine. Let's configure a new collaboration.
Wizard: Enter a name for this project (e.g., "API_Refactor_Tool"):
You: Django_Model_Auditor

Wizard: What is the PRIMARY GOAL of this task? Choose the workflow that best fits.

--- Technical & Execution ---
[1] TEACH_OR_EXPLAIN         (Purpose: To teach a concept or document something.)
[2] DIAGNOSE_ROOT_CAUSE      (Purpose: To find the underlying cause of a problem.)
[3] REVIEW_AGAINST_STANDARDS (Purpose: To evaluate a piece of work against a set of rules.)
[4] SCAFFOLD_FROM_SCRATCH    (Purpose: To create a new entity based on a template or structure.)
[5] OPTIMIZE_OR_REFINE       (Purpose: To improve an existing piece of work for clarity or efficiency.)
[6] ADD_OR_INTEGRATE         (Purpose: To add a new component to an existing system.)
[7] CONVERT_OR_MIGRATE       (Purpose: To change a piece of work from one format to another.)

--- Strategic & Developmental ---
[8] DECONSTRUCT_AN_IDEA      (Purpose: Explore a new concept to test its viability and principles.)
[9] PLAN_PROFESSIONAL_GROWTH (Purpose: Set long-term career goals or refine your professional brand.)
[10] BUILD_A_MENTAL_MODEL    (Purpose: Develop a deep, lasting understanding of a complex topic.)
[11] REFINE_A_PRESENTATION   (Purpose: Improve the clarity and impact of a key message.)

You: 3
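Under the hood, the menu above does not need to be hard-coded; it can be derived from goal_map.json. The sketch below shows one way this might look, with all names assumed rather than taken from orchestrator.py.

import json
from pathlib import Path

def choose_goal() -> tuple[str, str]:
    # Sketch: build the numbered menu from goal_map.json, so adding a goal to
    # the JSON automatically adds it to the wizard.
    goal_map = json.loads(Path("goal_map.json").read_text())
    options = []  # (category, goal_name), indexed by menu position
    print("What is the PRIMARY GOAL of this task? Choose the workflow that best fits.\n")
    number = 1
    for category, goals in goal_map.items():
        print(f"--- {category} ---")
        for name, details in goals.items():
            print(f"[{number}] {name} (Purpose: {details['description']})")
            options.append((category, name))
            number += 1
        print()
    choice = int(input("Your choice: "))
    return options[choice - 1]

With the goal_map.json shown later in this document, choosing 3 returns ('Technical & Execution', 'REVIEW_AGAINST_STANDARDS'), since Python's json module preserves the file's key order.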

Step 3: Component Resolution & Just-in-Time Generation

This is the core of the engine's logic.

  1. Read Configuration: The engine reads goal_map.json to find the recommended components for the selected 'REVIEW_AGAINST_STANDARDS' goal (e.g., meticulous_auditor.txt, code_review_pass.txt).
  2. Check for Components: For each recommended component, the engine checks if the file exists in the components/ library.
  3. Just-in-Time Generation (if needed):
    • If a recommended component (e.g., meticulous_auditor.txt) is missing, the engine triggers a "self-healing" JIT workflow.
    • It informs you: "INFO: The required component meticulous_auditor.txt was not found. We will now generate it."
    • It then prompts you for a one-line description of the needed snippet.
    • Finally, it calls the LLM API to generate the component and saves it to the library under its correct, original name (meticulous_auditor.txt). The system heals its own Parts Bin, requiring no changes to the goal_map.json.
  4. Degraded Mode (API Failure): If the LLM API call fails (e.g., no API key, network error), the engine will not crash. It will inform you of the failure, log a TODO, and insert a placeholder in the final .md file, allowing you to complete the process manually. The workflow is never fully blocked.
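A hedged sketch of this "self-healing" resolution step appears below. The generate_snippet_via_llm callable stands in for whichever LLM client the engine actually uses; it, and the other names here, are assumptions rather than the real orchestrator.py code.

from pathlib import Path

def ensure_component(path: Path, generate_snippet_via_llm) -> str:
    # Sketch of Step 3: return the component's text, generating it just in time
    # if the file is missing and degrading gracefully if generation fails.
    if path.exists():
        return path.read_text()

    print(f"INFO: The required component {path.name} was not found. We will now generate it.")
    description = input("One-line description of the needed snippet: ")
    try:
        text = generate_snippet_via_llm(description)  # placeholder for the real API call
    except Exception as error:  # e.g., missing API key, network error
        print(f"WARNING: Generation failed ({error}). Inserting a placeholder instead.")
        return f"TODO: write the '{path.name}' component by hand. ({description})"

    # Heal the Parts Bin: save the new snippet under its original, expected name.
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)
    return text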

Step 4: Framework Assembly & Generation

Once all components are present (either pre-existing or just-in-time generated), the engine assembles the final files.

Wizard: Please provide a title for the Persona (e.g., "The Code Guardian"):
You: The Django Standards Advocate

Wizard: Based on the goal 'REVIEW_AGAINST_STANDARDS', I will use the "Meticulous Auditor" persona.

Wizard: Generating framework files in directory: ./output/Django_Model_Auditor/
  - SUCCESS: Created 00_PERSONA.md
  - SUCCESS: Created 01_PROMPT_TEMPLATE.md
Wizard: Configuration complete.
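The assembly in this step can be as simple as concatenation. The sketch below shows one possible shape, assuming the component texts were already resolved as in Step 3; none of these names come from the actual orchestrator.py.

from pathlib import Path

def write_framework(project_name: str, persona_title: str, persona_text: str,
                    protocol_text: str, constraint_texts: list[str]) -> Path:
    # Sketch of Step 4: assemble the resolved snippets into the two final files.
    out_dir = Path("output") / project_name
    out_dir.mkdir(parents=True, exist_ok=True)

    persona = f"# {persona_title}\n\n{persona_text}\n"
    template = "\n\n".join([protocol_text, "Critical Constraints:", *constraint_texts]) + "\n"

    (out_dir / "00_PERSONA.md").write_text(persona)
    (out_dir / "01_PROMPT_TEMPLATE.md").write_text(template)
    return out_dir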

Step 5: The Collaboration

You now begin your work by providing the generated 00_PERSONA.md to the LLM, followed by the 01_PROMPT_TEMPLATE.md (filled with the code to be audited), kicking off a highly structured and predictable collaboration. You can then begin the manual fine-tuning process to elevate the framework to Gold Standard quality.


The Power of this Approach

├── WHITE_PAPER.md Content:

White Paper: Orchestrated AI Collaboration

A Meta-Framework for Engineering Expert-Level LLM Interactions

Author: An AI Assistant, in collaboration with a forward-thinking developer.
Date: July 2025
Version: 1.0

Abstract

The current paradigm for interacting with Large Language Models (LLMs) for complex software development tasks is largely ad-hoc, artisanal, and prone to inconsistency. This results in "brittle" prompts, context drift in long conversations, and a high degree of variability in the quality of outputs. This paper introduces a solution: the Orchestration Engine, a meta-framework for systematically designing, generating, and deploying expert-level LLM collaboration environments. By shifting the focus from writing individual prompts to engineering a reusable framework of components—Personas, Protocols, and Constraints—we can transform LLM interaction from a craft into a repeatable, scalable, and exponentially more powerful engineering discipline. This document outlines the principles, architecture, and profound benefits of this approach.


1. The Challenge: The Limitations of "Artisanal Prompting"

The advent of powerful LLMs has unlocked unprecedented capabilities. However, for expert-level tasks, the common practice of engaging in a simple conversational chat quickly reveals its limitations. We identify this as the "Artisanal Prompting Problem," characterized by several key failure modes:

These issues create a bottleneck where the human becomes a constant manager of the AI's flaws, rather than a strategic director of its strengths.

2. The Solution: From Prompting to Framework Engineering

To break through this ceiling, we must move up a level of abstraction. The solution is not to write better individual prompts, but to engineer the system that generates the prompts for us.

We call this system the Orchestration Engine.

The engine is a programmatic "factory" for building bespoke LLM collaboration environments. It treats the components of a successful collaboration—the persona, the rules of engagement, the constraints—as standardized "parts." The engine takes a high-level description of a task and assembles these parts into a complete, ready-to-use framework.

This approach transforms the workflow:

3. The Core Principles of the Meta-Framework

The Orchestration Engine is built upon a set of core principles, derived from successful experiments in human-AI collaboration.

4. The Workflow in Action: A Practical Example

The practical application of the engine is an interactive Command-Line Interface (CLI) tool. A developer wanting to start a new task—for example, auditing a Django model for best practices—would simply run the script:

$ python orchestrator.py

The engine would then initiate a dialogue, using the Goal-Oriented Taxonomy to guide the configuration:

Welcome to the LLM Orchestration Engine.

What is the PRIMARY GOAL of this task? Choose the workflow that best fits.

--- Technical & Execution ---
[1] TEACH_OR_EXPLAIN         (Purpose: To teach a concept or document something.)
[2] DIAGNOSE_ROOT_CAUSE      (Purpose: To find the underlying cause of a problem.)
[3] REVIEW_AGAINST_STANDARDS (Purpose: To evaluate a piece of work against a set of rules.)
...

Your choice: 3

Please provide a title for the Persona: The Django Standards Advocate

Generating framework files...
SUCCESS: Created 00_PERSONA.md and 01_PROMPT_TEMPLATE.md

In seconds, the developer is equipped with a complete, expert-level framework, built from the best practices codified within the Component Library.

5. The Benefits: A Non-Linear Force Multiplier

Adopting this meta-framework approach yields transformative benefits:

  1. Accelerate Initiation: Reduces the setup time for a high-quality, complex collaboration from hours of manual tweaking to mere seconds of automated generation.
  2. Standardize Quality: Eliminates the "good day/bad day" problem by ensuring that every collaboration is built from the same library of proven, effective components.
  3. Scale Expertise: Allows an entire team to operate at the level of its best prompt engineer. The framework becomes a vehicle for codifying and distributing best practices.
  4. Codify Your Process: The act of building the engine forces a deeper, "meta" level of thinking, compelling developers to be explicit about their own workflows and strategies for problem-solving.

6. Conclusion: The Future of Human-AI Collaboration

The era of artisanal, one-off prompting is a necessary but transient phase. To unlock the next level of productivity and creativity, we must apply our own engineering principles to the process of AI interaction itself.

The Orchestration Engine represents this strategic shift. It is a move from being a mere user of LLMs to being the architect of the human-AI system. By building the factory instead of just the car, we create a non-linear force multiplier, paving the way for a future where we can rapidly deploy a "foundry" of specialized, reliable, and expert digital colleagues, each perfectly configured for the task at hand.

├── generation_jobs.json Content: { "jobs": [ { "category": "personas", "name": "empathetic_guide.txt", "description": "Write a persona trait for an empathetic and learner-centric mentor. The text should focus on reassuring the user, acknowledging that topics can be complex, and framing explanations from the user's likely point of view (the 'Cognitive Mirror' principle).", "output_path": "components/personas/empathetic_guide.txt" }, { "category": "personas", "name": "precise_partner.txt", "description": "Write a persona trait for an expert, collaborative partner. The text should focus on operating with speed, precision, and consistency, basing its work only on the context provided and assuming each session is a new, self-contained task.", "output_path": "components/personas/precise_partner.txt" }, { "category": "personas", "name": "meticulous_auditor.txt", "description": "Write a persona trait for a detail-oriented code reviewer. The text should establish a persona that is skeptical, constructive, and an expert in identifying 'code smells,' enforcing best practices, and suggesting more idiomatic solutions.", "output_path": "components/personas/meticulous_auditor.txt" }, { "category": "protocols", "name": "confidence_based_sync.txt", "description": "Write a protocol for maintaining state confidence in a read-write context. It must describe the 'propose a verification step -> assume success by default -> request full file on failure or ambiguity' workflow to minimize redundant data transfer.", "output_path": "components/protocols/confidence_based_sync.txt" }, { "category": "protocols", "name": "turn_by_turn_dialogue.txt", "description": "Write a protocol for a turn-by-turn mentorship dialogue. It must describe the core loop: '1. Ask the user to perform a single, small action. 2. Wait for the user's response. 3. Analyze the provided clue and explain its significance. 4. Initiate a checkpoint to ensure understanding before proceeding.'", "output_path": "components/protocols/turn_by_turn_dialogue.txt" }, { "category": "protocols", "name": "code_review_pass.txt", "description": "Write a protocol for delivering a code review. The text must instruct the LLM to return a list of numbered, non-blocking suggestions. Each suggestion must include a clear 'Rationale' section explaining the benefit of the proposed change.", "output_path": "components/protocols/code_review_pass.txt" }, { "category": "protocols", "name": "connection_hopping.txt", "description": "Write a protocol for 'Connection Hopping' during codebase exploration. The text must instruct the LLM that after explaining a file or concept, it should proactively suggest 2-3 logical next steps for investigation based on the code's direct connections (e.g., function calls, class imports, foreign keys).", "output_path": "components/protocols/connection_hopping.txt" }, { "category": "constraints", "name": "preserve_functional_attributes.txt", "description": "Write a non-negotiable constraint for frontend refactoring. The text must strictly forbid any modification, addition, or removal of id, data-testid, HTMX (hx-*), and Alpine.js (x-*, @*, :*) attributes to ensure tests and functionality are preserved.", "output_path": "components/constraints/preserve_functional_attributes.txt" }, { "category": "constraints", "name": "no_html_restructure.txt", "description": "Write a non-negotiable constraint for stylistic refactoring tasks. The text must strictly forbid the adding, removing, or reordering of any HTML elements. 
The only permitted change is the modification of class attributes.", "output_path": "components/constraints/no_html_restructure.txt" }, { "category": "constraints", "name": "no_backend_changes.txt", "description": "Write a non-negotiable constraint for frontend-focused tasks. The text must establish that the backend code is considered immutable and cannot be modified unless a special 'Backend Exception Clause' is explicitly invoked and justified with evidence.", "output_path": "components/constraints/no_backend_changes.txt" }, { "category": "constraints", "name": "output_raw_code_only.txt", "description": "Write a non-negotiable constraint for the final output format when generating code. The text must instruct the LLM that its entire response should be ONLY the raw code block, with no conversational text, explanations, or Markdown formatting.", "output_path": "components/constraints/output_raw_code_only.txt" }, { "category": "constraints", "name": "principle_of_completeness.txt", "description": "Write a non-negotiable constraint to prevent incomplete code generation. The text must instruct the LLM to always provide complete, functional code blocks (entire files or functions) and must strictly forbid omitting content for brevity with placeholders like '...'.", "output_path": "components/constraints/principle_of_completeness.txt" }, { "category": "personas", "name": "digital_detective.txt", "description": "Write a persona trait for a systematic debugger. The persona should focus on forming hypotheses, gathering evidence from logs and error messages, and proposing targeted experiments to isolate the root cause of a problem.", "output_path": "components/personas/digital_detective.txt" }, { "category": "personas", "name": "boilerplate_architect.txt", "description": "Write a persona trait for an expert in project scaffolding. This persona should specialize in creating clean, conventional project structures, starter files (like package.json, Dockerfile, .gitignore), and common boilerplate code according to industry best practices.", "output_path": "components/personas/boilerplate_architect.txt" }, { "category": "personas", "name": "performance_tuner.txt", "description": "Write a persona trait for a performance optimization expert. This persona should focus on identifying and diagnosing bottlenecks, such as N+1 queries, inefficient algorithms, large asset sizes, or slow rendering, and suggest concrete, measurable improvements.", "output_path": "components/personas/performance_tuner.txt" }, { "category": "personas", "name": "api_integration_specialist.txt", "description": "Write a persona trait for a specialist in API integration. This persona is an expert at reading third-party API documentation, handling various authentication schemes (OAuth, API Keys), shaping request data, and gracefully handling API responses and errors.", "output_path": "components/personas/api_integration_specialist.txt" }, { "category": "personas", "name": "framework_migration_guide.txt", "description": "Write a persona trait for a migration expert. This persona specializes in guiding the conversion of a codebase from one framework to another (e.g., Express to NestJS, Vue 2 to Vue 3), focusing on idiomatic translation of concepts and patterns.", "output_path": "components/personas/framework_migration_guide.txt" }, { "category": "personas", "name": "security_auditor.txt", "description": "Write a persona trait for a security-focused reviewer. 
This persona is an expert in identifying common vulnerabilities like OWASP Top 10 (XSS, SQL Injection, etc.), insecure configurations, and potential data leaks.", "output_path": "components/personas/security_auditor.txt" }, { "category": "personas", "name": "readability_advocate.txt", "description": "Write a persona trait for a code quality expert focused on clarity and maintainability. This persona champions clean code principles, suggesting improvements to variable naming, function complexity, and structural logic to make the code more human-readable.", "output_path": "components/personas/readability_advocate.txt" }, { "category": "personas", "name": "language_translator.txt", "description": "Write a persona trait for a polyglot programmer who excels at translating code from one language to another (e.g., Python to Go). The focus is on creating idiomatic, conventional code in the target language, not just a literal, line-for-line conversion.", "output_path": "components/personas/language_translator.txt" }, { "category": "protocols", "name": "root_cause_analysis_drilldown.txt", "description": "Write a protocol for systematically diagnosing a bug. It must instruct the LLM to follow a loop: 1. State a current hypothesis. 2. Ask the user for a specific piece of information (a log, a command output) to test it. 3. Based on the user's answer, either confirm the cause or refine the hypothesis and repeat.", "output_path": "components/protocols/root_cause_analysis_drilldown.txt" }, { "category": "protocols", "name": "reproduction_step_validator.txt", "description": "Write a protocol for creating a minimal, reproducible example. The LLM must guide the user to strip away irrelevant code and dependencies, validating each step until they have the smallest possible snippet that demonstrates the bug.", "output_path": "components/protocols/reproduction_step_validator.txt" }, { "category": "protocols", "name": "scaffolding_confirmation_loop.txt", "description": "Write a protocol for generating a new project structure. The LLM must first propose a file and directory structure as a list or tree, ask the user for explicit approval, and only then proceed to generate the content for each approved file.", "output_path": "components/protocols/scaffolding_confirmation_loop.txt" }, { "category": "protocols", "name": "performance_hypothesis_test.txt", "description": "Write a protocol for validating a performance optimization. The LLM must instruct the user on how to benchmark the 'before' state, then provide the optimized code, and finally instruct the user on how to run the exact same benchmark on the 'after' state to prove the improvement.", "output_path": "components/protocols/performance_hypothesis_test.txt" }, { "category": "protocols", "name": "feature_implementation_plan.txt", "description": "Write a protocol for adding a new feature. The LLM must first break the feature request down into a plan (e.g., '1. Add new column to DB. 2. Create new API endpoint. 3. Add button to UI.'), get user approval for the plan, then execute each step.", "output_path": "components/protocols/feature_implementation_plan.txt" }, { "category": "protocols", "name": "schema_migration_walkthrough.txt", "description": "Write a protocol for database schema migration. The LLM must guide the user through a three-step process: 1. Elicit the desired model changes. 2. Generate the migration script (e.g., SQL DDL or ORM commands). 3. 
Explicitly point out any changes that are destructive (e.g., dropping a column).", "output_path": "components/protocols/schema_migration_walkthrough.txt" }, { "category": "protocols", "name": "file_by_file_translation.txt", "description": "Write a protocol for large-scale code migration. The LLM must process one file at a time, maintain context on the overall project structure, and for each source file provided by the user, return only the complete, translated content of the new target file.", "output_path": "components/protocols/file_by_file_translation.txt" }, { "category": "protocols", "name": "security_threat_modeling.txt", "description": "Write a protocol for a security review. It must guide the user to first identify key assets, user roles, and system entry points. Only after this threat model is established will the LLM proceed to analyze the code for vulnerabilities related to those threats.", "output_path": "components/protocols/security_threat_modeling.txt" }, { "category": "protocols", "name": "api_documentation_review.txt", "description": "Write a protocol for generating user-facing documentation for an API endpoint. The LLM must analyze a code block and then generate structured documentation including the endpoint's path, method, required parameters, and example success/error responses.", "output_path": "components/protocols/api_documentation_review.txt" }, { "category": "constraints", "name": "no_placeholder_logic.txt", "description": "Write a non-negotiable constraint for code generation. The text must strictly forbid generating functions or methods with empty bodies or placeholder comments like '// TODO: Implement' or 'pass'. Every generated function must have a minimal, functional implementation.", "output_path": "components/constraints/no_placeholder_logic.txt" }, { "category": "constraints", "name": "preserve_public_api.txt", "description": "Write a non-negotiable constraint for refactoring tasks. The text must strictly forbid any changes to public-facing contracts. This includes function names, parameter order, return types, class names, or API endpoint URLs that are not explicitly marked as private.", "output_path": "components/constraints/preserve_public_api.txt" }, { "category": "constraints", "name": "no_magic_values.txt", "description": "Write a non-negotiable constraint for code improvement. The text must strictly forbid the use of 'magic values' (unexplained numbers or strings). All such values must be refactored into named constants with clear, descriptive names.", "output_path": "components/constraints/no_magic_values.txt" }, { "category": "constraints", "name": "idempotent_operations.txt", "description": "Write a non-negotiable constraint for tasks that modify state. Any generated code that creates or updates resources must be idempotent, meaning it can be safely executed multiple times without creating duplicate resources or causing errors.", "output_path": "components/constraints/idempotent_operations.txt" }, { "category": "constraints", "name": "data_loss_warning.txt", "description": "Write a non-negotiable constraint for data transformation or migration tasks. The text must instruct the LLM to halt and issue an explicit, high-visibility warning if any proposed operation could result in data loss, such as dropping a database column or table.", "output_path": "components/constraints/data_loss_warning.txt" }, { "category": "constraints", "name": "preserve_test_logic.txt", "description": "Write a non-negotiable constraint for migrating test suites. 
The text must strictly require that the core assertion and logic of each test case remain functionally identical. Only the syntax and framework-specific boilerplate are allowed to change.", "output_path": "components/constraints/preserve_test_logic.txt" }, { "category": "constraints", "name": "reference_official_docs.txt", "description": "Write a non-negotiable constraint for auditing tasks. The text must require that for every suggestion based on a language or framework standard, the LLM must cite the specific rule or provide a link to the official documentation (e.g., 'As per PEP 8...' or 'See the MDN documentation for...').", "output_path": "components/constraints/reference_official_docs.txt" }, { "category": "personas", "name": "career_sherpa.txt", "description": "Write a persona trait for a career coach and long-term strategist. This persona should focus on helping the user map out career goals, identify skill gaps, and make decisions based on a 5-10 year horizon, rather than short-term project needs.", "output_path": "components/personas/career_sherpa.txt" }, { "category": "personas", "name": "socratic_inquisitor.txt", "description": "Write a persona trait for a mentor who teaches purely through questioning. This persona should never provide direct answers, but instead respond to user queries with probing questions that force the user to reason from first principles and discover the answers themselves.", "output_path": "components/personas/socratic_inquisitor.txt" }, { "category": "personas", "name": "systems_thinker.txt", "description": "Write a persona trait for an expert in systems thinking. This persona explains concepts not in isolation, but by focusing on their interconnectedness, feedback loops, and second-order consequences within the larger system.", "output_path": "components/personas/systems_thinker.txt" }, { "category": "personas", "name": "mental_model_master.txt", "description": "Write a persona trait for a mentor who explains complex topics by explicitly referencing and applying established mental models (e.g., First Principles Thinking, Inversion, Circle of Competence, Occam's Razor).", "output_path": "components/personas/mental_model_master.txt" }, { "category": "personas", "name": "clarity_communicator.txt", "description": "Write a persona trait for a writing and presentation coach. This persona specializes in helping the user refine their communication, translating complex technical ideas into clear, concise language tailored for a specific audience (e.g., executives, junior developers, non-technical users).", "output_path": "components/personas/clarity_communicator.txt" }, { "category": "personas", "name": "tech_trend_analyst.txt", "description": "Write a persona trait for a technology analyst. This persona helps the user evaluate the hype vs. reality of new technologies by analyzing them based on adoption curves, underlying problems they solve, and potential ecosystem impact.", "output_path": "components/personas/tech_trend_analyst.txt" }, { "category": "personas", "name": "mock_interview_panellist.txt", "description": "Write a persona trait for a professional interviewer. 
This persona is skilled at conducting realistic behavioral or system design interviews, asking follow-up questions, and providing constructive, actionable feedback on the user's performance.", "output_path": "components/personas/mock_interview_panellist.txt" }, { "category": "protocols", "name": "learning_curriculum_builder.txt", "description": "Write a protocol for creating a structured learning plan. The LLM must guide the user to define a topic and a goal, then generate a step-by-step curriculum, starting with foundational concepts and progressing to advanced topics, including suggestions for projects.", "output_path": "components/protocols/learning_curriculum_builder.txt" }, { "category": "protocols", "name": "resume_critique_session.txt", "description": "Write a protocol for iterative resume improvement. The LLM must analyze a resume against a specific job description, providing numbered suggestions for improvement. It then waits for the user to provide a revised version before offering the next round of feedback.", "output_path": "components/protocols/resume_critique_session.txt" }, { "category": "protocols", "name": "first_principles_deconstruction.txt", "description": "Write a protocol for breaking down a complex topic to its core truths. The LLM must guide the user to repeatedly question their assumptions about a topic until they are left with only fundamental, undeniable principles, from which they can reason back up.", "output_path": "components/protocols/first_principles_deconstruction.txt" }, { "category": "protocols", "name": "decision_journaling_framework.txt", "description": "Write a protocol to guide the user through a structured decision-making process. The LLM must prompt the user to define the problem, list available options, state the expected outcomes, and define the 'why' behind their choice before committing.", "output_path": "components/protocols/decision_journaling_framework.txt" }, { "category": "protocols", "name": "presentation_dry_run.txt", "description": "Write a protocol for practicing a presentation. The LLM asks the user to present one slide or section at a time, and after each part, provides feedback on clarity, impact, and audience engagement before moving to the next section.", "output_path": "components/protocols/presentation_dry_run.txt" }, { "category": "protocols", "name": "goal_setting_with_okrs.txt", "description": "Write a protocol to establish professional goals using the OKR (Objectives and Key Results) framework. The LLM guides the user to define an ambitious Objective, and then to create 3-5 specific, measurable Key Results to track progress towards it.", "output_path": "components/protocols/goal_setting_with_okrs.txt" }, { "category": "protocols", "name": "behavioral_interview_prep.txt", "description": "Write a protocol for mock behavioral interviews. The LLM will present a standard interview question (e.g., 'Tell me about a time you had a conflict'), wait for the user's response using the STAR method, and then provide feedback on the story's structure and impact.", "output_path": "components/protocols/behavioral_interview_prep.txt" }, { "category": "constraints", "name": "focus_on_principles_not_code.txt", "description": "Write a non-negotiable constraint to keep the discussion at a strategic, conceptual level. 
The text must strictly forbid generating code examples or implementation details, focusing instead on the underlying principles, patterns, and mental models.", "output_path": "components/constraints/focus_on_principles_not_code.txt" }, { "category": "constraints", "name": "challenge_underlying_assumptions.txt", "description": "Write a non-negotiable constraint that requires the LLM to actively question the user's premises. For any statement the user makes, the LLM must gently probe the assumptions behind it before proceeding (e.g., 'What leads you to believe that is the case?').", "output_path": "components/constraints/challenge_underlying_assumptions.txt" }, { "category": "constraints", "name": "long_term_horizon_only.txt", "description": "Write a non-negotiable constraint to filter all advice through a long-term lens. The text must forbid suggestions that optimize for short-term convenience and require that all recommendations be justified based on their likely impact in 5 years or more.", "output_path": "components/constraints/long_term_horizon_only.txt" }, { "category": "constraints", "name": "no_direct_answers_socratic_mode.txt", "description": "Write a non-negotiable constraint for a Socratic learning session. The text must strictly forbid the LLM from providing a direct answer to any question. Its only valid response format is another question that guides the user toward their own conclusion.", "output_path": "components/constraints/no_direct_answers_socratic_mode.txt" }, { "category": "constraints", "name": "cite_thinkers_and_sources.txt", "description": "Write a non-negotiable constraint requiring attribution for major ideas. When introducing a concept or mental model, the LLM must cite the original thinker or source (e.g., 'This is an application of Daniel Kahneman's 'Thinking, Fast and Slow'').", "output_path": "components/constraints/cite_thinkers_and_sources.txt" }, { "category": "constraints", "name": "mandate_second_order_thinking.txt", "description": "Write a non-negotiable constraint to force consideration of consequences. After any proposed solution or decision, the LLM must always ask, 'And then what happens?' to prompt an analysis of the second- and third-order effects.", "output_path": "components/constraints/mandate_second_order_thinking.txt" }, { "category": "constraints", "name": "require_clarifying_analogies.txt", "description": "Write a non-negotiable constraint that every explanation of a complex or abstract topic must be immediately followed by a simple, concrete analogy or metaphor to ground the user's understanding.", "output_path": "components/constraints/require_clarifying_analogies.txt" } ] }

├── goal_map.json Content: { "Technical & Execution": { "TEACH_OR_EXPLAIN": { "description": "To teach a concept or document something.", "persona": "empathetic_guide.txt", "protocol": "connection_hopping.txt", "constraints": ["require_clarifying_analogies.txt"] }, "DIAGNOSE_ROOT_CAUSE": { "description": "To find the underlying cause of a problem.", "persona": "digital_detective.txt", "protocol": "root_cause_analysis_drilldown.txt", "constraints": ["principle_of_completeness.txt"] }, "REVIEW_AGAINST_STANDARDS": { "description": "To evaluate a piece of work against a set of rules.", "persona": "meticulous_auditor.txt", "protocol": "code_review_pass.txt", "constraints": ["reference_official_docs.txt", "no_magic_values.txt"] }, "SCAFFOLD_FROM_SCRATCH": { "description": "To create a new entity based on a template or structure.", "persona": "boilerplate_architect.txt", "protocol": "scaffolding_confirmation_loop.txt", "constraints": [ "no_placeholder_logic.txt", "principle_of_completeness.txt" ] }, "OPTIMIZE_OR_REFINE": { "description": "To improve an existing piece of work for clarity or efficiency.", "persona": "performance_tuner.txt", "protocol": "performance_hypothesis_test.txt", "constraints": ["preserve_public_api.txt"] }, "ADD_OR_INTEGRATE": { "description": "To add a new component to an existing system.", "persona": "api_integration_specialist.txt", "protocol": "feature_implementation_plan.txt", "constraints": ["idempotent_operations.txt", "data_loss_warning.txt"] }, "CONVERT_OR_MIGRATE": { "description": "To change a piece of work from one format to another.", "persona": "framework_migration_guide.txt", "protocol": "file_by_file_translation.txt", "constraints": ["preserve_test_logic.txt"] } }, "Strategic & Developmental": { "DECONSTRUCT_AN_IDEA": { "description": "Explore a new concept to test its viability and principles.", "persona": "socratic_inquisitor.txt", "protocol": "first_principles_deconstruction.txt", "constraints": [ "challenge_underlying_assumptions.txt", "no_direct_answers_socratic_mode.txt" ] }, "PLAN_PROFESSIONAL_GROWTH": { "description": "Set long-term career goals or refine your professional brand.", "persona": "career_sherpa.txt", "protocol": "goal_setting_with_okrs.txt", "constraints": ["long_term_horizon_only.txt"] }, "BUILD_A_MENTAL_MODEL": { "description": "Develop a deep, lasting understanding of a complex topic.", "persona": "mental_model_master.txt", "protocol": "learning_curriculum_builder.txt", "constraints": [ "cite_thinkers_and_sources.txt", "focus_on_principles_not_code.txt" ] }, "REFINE_A_PRESENTATION": { "description": "Improve the clarity and impact of a key message.", "persona": "clarity_communicator.txt", "protocol": "presentation_dry_run.txt", "constraints": ["focus_on_principles_not_code.txt"] } } }

└── output/ ├── .gitkeep [Ignored] ├── utility_scripts [Ignored]