Research Agent Sandbox

Updated over 3 months ago

Overview

What is the Research Agent Sandbox?

The Research Agent Sandbox is a test environment inside the Agent Training Center where admins can safely create, test, and refine research/qualification agents before deploying them to ecosystems. It lets you see what information agents return and the reasoning behind their decisions, helping ensure your agents deliver consistent, accurate results.

When should I use it?

  • When you want to test new Research Agent instructions before adding them to an ecosystem.

  • If you notice inconsistent or unexpected outputs from an agent and want to troubleshoot.

  • When you want to compare how different models or instructions affect agent performance.

  • To validate that an agent works as intended using known companies/accounts.

  • When you want to experiment without impacting live data.


Before You Start

  • You must have Admin access.

  • Prepare a list of companies you want to use for testing (ideally ones you know well).


Step-by-Step Guide

Use this process to safely test and improve research agents before deploying them to your live environment.

  1. Go to Agent Training Center > Research Agent Sandbox

  2. Create a New Sandbox

    • Click to create a new research agent sandbox.

    • Name your sandbox (any name is fine; there are no strict naming rules).

  3. Add Companies for Testing

    • Inside the sandbox, upload a list of companies/accounts.

      • Tip: Use companies you already know well so you can easily judge the agent’s responses.

  4. Add Research Agents

    • Add a new agent to the sandbox by clicking the "+" button.

    • You can:

      • Use an existing agent (for example, one already assigned to an ecosystem).

      • Or create a new agent from scratch using our templates or the “Generate with AI” functionality. Learn how to create Research Agents.

  5. Run Agent Tests

    • For each agent, click the play button next to its name.

    • Choose how many test runs you want (3 is recommended).

    • Run the tests and wait for results.

  6. Review Results for Consistency

    • For each company, review the agent’s responses across multiple test runs.

      • Check for consistency in answers. Your goal is for each test company to receive the same answer across all test runs.

  7. Analyze Agent Output

    • Hover over the agent’s response to see:

      • The sources of information

      • The agent’s reasoning for its answer.

    • Hover over the agent’s name to see the instructions used (for quick access).

  8. Change Instructions or Model

    • If results are inconsistent or not as expected, adjust the agent’s instructions or try a different model.

    • Repeat testing as needed.

  9. Add the Trained Agent to a Specific Ecosystem

    • Once satisfied with the agent’s performance, go to your ecosystem.

    • When adding an agent, select the "Sandbox" tab to choose the agent you built and tested.


FAQ

What is the purpose of the Research Agent Sandbox in Evergrowth?

The Research Agent Sandbox lets admins safely test and refine research/qualification agents before deploying them to live ecosystems. It helps ensure research/qualification agents deliver consistent, accurate results and that instructions are clear.

How do I test a research agent for consistency?

Add your agent and a list of known companies to the sandbox, run at least three tests per company, and compare the outputs. Look for consistent answers and review the agent’s reasoning and sources.
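The comparison described above can be sketched as a small script. This is a generic, hypothetical illustration of the consistency check — the data structures are made up, not part of the product; in practice you would read the answers off the sandbox results view.

```python
from collections import Counter

def consistency_report(runs_by_company):
    """For each company, report whether all test runs agree.

    runs_by_company maps a company name to the list of answers an
    agent returned across test runs (hypothetical data, copied from
    the sandbox results by hand).
    """
    report = {}
    for company, answers in runs_by_company.items():
        counts = Counter(answers)
        majority_answer, hits = counts.most_common(1)[0]
        report[company] = {
            "consistent": len(counts) == 1,       # every run gave the same answer
            "majority_answer": majority_answer,   # most frequent answer
            "agreement": hits / len(answers),     # fraction of runs that agree
        }
    return report

# Example: three runs per company, as recommended.
results = consistency_report({
    "Acme Corp": ["Qualified", "Qualified", "Qualified"],
    "Globex": ["Qualified", "Not qualified", "Qualified"],
})
```

Here "Acme Corp" would be flagged as consistent, while "Globex" (two runs out of three agreeing) would be a candidate for revising the agent's instructions or model.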

Can I use the sandbox to test agents without affecting my live data?

Yes. Nothing you do in the sandbox affects your existing ecosystems; it is a safe environment for experimentation.

How do I move a tested agent from the sandbox to an ecosystem?

When adding an agent to an ecosystem, select the "Sandbox" tab and choose the agent you created and tested. You do not need to recreate it from scratch.

What should I do if my agent’s output is inconsistent?

Review the agent’s instructions and the model used. Adjust instructions for clarity, try different models, and retest in the sandbox until you achieve consistent results.


Additional Notes & Tips

  • Aim for at least three test runs per agent per company to check for consistency.

  • Use companies you know well for testing so you can accurately judge the agent’s responses.

  • You can control both the agent’s instructions and the AI model used. If you use Evergrowth’s AI-generated instructions, both are selected automatically, but you can customize them if needed.

  • Hovering over responses and agent names in the sandbox gives you quick access to instructions, reasoning, and sources without extra clicks.

  • The sandbox is a feedback loop: test, review, refine, and retest until satisfied.

