Testing Your AI Assistant in Sandbox Mode

After creating or editing your AI assistant, you need to test it before deploying it in live campaigns. The Sandbox feature lets you simulate real prospect conversations to ensure your AI behaves exactly as you intended.

To open the Sandbox, go to your Conversifi Dashboard and click Sandbox in the left sidebar menu under the "Prospecting" section.

Setting Up a Test Conversation

Once in Sandbox, you'll see two main sections: Prospect Settings and Conversation.

Step 1: Select Your AI Assistant

At the top of the screen, use the dropdown menu labeled "Select AI Assistant" to choose the assistant you want to test.

Step 2: Create a Test Prospect Profile

In the Prospect Settings panel on the left, fill in the test prospect details:

  • First Name: Enter a realistic first name (e.g., Sarah)

  • Last Name: Enter a realistic last name (e.g., Johnson)

  • Company: Enter a company name (e.g., TechCorp Inc)

  • Job Title: Enter a relevant job title (e.g., VP Sales)

  • Location: Enter a location (e.g., Sutton, England, United Kingdom)

These details make the simulation realistic: your AI will reference them in its responses (for example, using the prospect's first name or mentioning their company).

Step 3: Start the Conversation

In the Conversation panel on the right, you'll see a message input box at the bottom labeled "Type as the prospect..."

This is crucial to understand: you are now role-playing as the prospect, NOT as your business.

The AI will respond as your sales assistant, and you type the prospect's responses to test how your AI handles different scenarios.

Test Scenarios to Run

Here are essential test scenarios you should run through:

Test 1: Positive Response

You type (as prospect): "Yes, I'm interested"

What to check:

  • Does the AI push for booking immediately?

  • Does it send your calendar link?

  • Does it avoid unnecessary details?

Test 2: Skeptical Response

You type (as prospect): "This sounds too good to be true"

What to check:

  • Does the AI acknowledge skepticism appropriately?

  • Does it move toward a call booking?

Test 3: Pricing Question

You type (as prospect): "How much does this cost?"

What to check:

  • Does it follow your pricing discussion rules?

  • Does it redirect to a call if that's your strategy?

  • Does it avoid giving exact numbers if that's your rule?

Test 4: Common Objection

You type (as prospect): "I don't have time for this right now"

What to check:

  • Does the AI handle the objection?

  • Is the response empathetic while still pushing value?

  • Does it attempt to overcome or accept the objection per your rules?

Test 5: Not Interested

You type (as prospect): "Not interested, thanks"

What to check:

  • Does the AI stop pushing immediately?

  • Does it say goodbye appropriately?

Test 6: Industry-Specific Objection

You type (as prospect): "This won't work in [your industry]"

What to check:

  • Does your industry-specific objection handler fire?

  • Does it reference relevant case studies?

  • Does it show industry knowledge?

Test 7: Word Count Check

Run multiple test messages and count the words in each AI response (the script sketched after the list below can do the counting for you).

What to check:

  • Are responses staying within your configured limits (for example, 30, 50, or 70 words)?

  • Do they exceed a limit only when appropriate (such as a detailed explanation)?
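
If you'd rather not count by hand, a short script can do it. Below is a minimal sketch in Python, assuming you've pasted the AI's responses out of the Sandbox transcript; the 70-word limit and the sample responses are placeholders, so substitute whichever limit your prompt actually enforces.

```python
# Word-count check: paste each AI response from the Sandbox
# transcript into this list, then run the script.
responses = [
    "Great! Grab a time that suits you here: https://example.com/book",
    "Totally understand the hesitation. Most teams we work with felt the same at first.",
]

WORD_LIMIT = 70  # placeholder - use the limit your prompt enforces

for i, text in enumerate(responses, start=1):
    count = len(text.split())
    status = "OK" if count <= WORD_LIMIT else "OVER LIMIT"
    print(f"Response {i}: {count} words - {status}")
```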

Test 8: Forbidden Phrase Check

Review all AI responses in your test conversation (or scan them with the script sketched after the list below).

What to check:

  • Do any forbidden phrases appear?

  • Look for "circle back", "touch base", "synergy", etc.

  • Do any phrases you specifically banned in your rules slip through?
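
This scan is easy to automate as well. Another minimal sketch under the same assumption (responses copied out of the Sandbox transcript); extend the FORBIDDEN list with whatever phrases your own rules ban.

```python
# Forbidden-phrase check: flags any banned phrase that slips
# into an AI response, ignoring case.
FORBIDDEN = ["circle back", "touch base", "synergy"]  # add your own banned phrases

responses = [
    "Happy to touch base next week if that's easier.",
    "No problem at all, Sarah. Have a great week!",
]

for i, text in enumerate(responses, start=1):
    hits = [phrase for phrase in FORBIDDEN if phrase in text.lower()]
    if hits:
        print(f"Response {i} contains forbidden phrase(s): {', '.join(hits)}")
```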

Test 9: Tone and Personality Check

Read the entire conversation out loud.

What to check:

  • Does it sound like the persona you defined?

  • Is it too formal or too casual?

  • Does it match your brand voice?

  • Does it sound human and natural?

Test 10: Booking Link Check

You type (as prospect): "OK let's schedule a call"

What to check:

  • Does the AI immediately provide your calendar link?

  • Is it the correct link? (A quick mechanical check is sketched after this list.)

  • Does the message format look clean?
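
For the link itself, you can go beyond eyeballing it. A minimal sketch: the expected URL below is a hypothetical placeholder, so substitute your real calendar link, and paste the AI's booking message into the message variable.

```python
import re

# Booking-link check: confirms the AI's message contains your
# calendar link and nothing that merely looks like it.
EXPECTED_LINK = "https://calendly.com/your-team/intro-call"  # hypothetical placeholder

message = "Perfect! Grab a time here: https://calendly.com/your-team/intro-call"

links = re.findall(r"https?://\S+", message)
if not links:
    print("No link found in the message.")
elif all(link.rstrip(".,)") == EXPECTED_LINK for link in links):
    print("Booking link is correct.")
else:
    print(f"Unexpected link(s): {links}")
```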

How to Clear and Start a New Test

After running through a scenario, click the Clear button in the top right of the Conversation panel to reset and start a fresh test conversation.

What to Do If Something's Wrong

If your AI doesn't behave as expected:

  1. Note the specific issue - What did it say that was wrong? What should it have said?

  2. Go back to Advanced Mode - Click Edit Assistant → Advanced → Load Current Prompt

  3. Find the relevant section - Locate where that behavior is controlled

  4. Make your adjustment - Add a new rule, modify an existing one, or mark something as STRICT

  5. Save changes

  6. Return to Sandbox - Test the exact same scenario again

  7. Repeat until correct - Keep refining until the AI responds perfectly

Pro Testing Tips

Tip 1: Test Edge Cases

Don't just test obvious scenarios. Try weird responses like:

  • "Maybe"

  • "Send me more info"

  • "What's your commission?"

  • "Are you a bot?"

Tip 2: Test Multi-Turn Conversations

Don't just test single responses. Have a 5-10 message conversation to see if the AI maintains context and stays on track.

Tip 3: Role-Play Realistically

Type like a real busy executive would:

  • Short responses

  • Occasional typos

  • Questions out of left field

  • Mixed signals

Tip 4: Keep a Testing Checklist

Create a document with your standard test scenarios so you run the same tests after every optimization. One way to structure it is sketched below.
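
Here is a minimal sketch of such a run sheet, stored as data and printed before each Sandbox session; the scenarios mirror the tests above, and the pass criteria are examples you'd tailor to your own rules.

```python
# Standard test run sheet: print this before each Sandbox session
# and tick off each scenario as you go.
CHECKLIST = [
    {"prospect_says": "Yes, I'm interested", "expect": "moves to booking, sends calendar link"},
    {"prospect_says": "How much does this cost?", "expect": "follows pricing rules, redirects to a call"},
    {"prospect_says": "Not interested, thanks", "expect": "stops pushing, says goodbye politely"},
    {"prospect_says": "Are you a bot?", "expect": "handles the edge case per your rules"},
]

for i, item in enumerate(CHECKLIST, start=1):
    print(f"{i}. Type: {item['prospect_says']!r}")
    print(f"   Check: {item['expect']}\n")
```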

Sandbox Best Practices

Always test after making prompt changes - Even small tweaks can have unexpected effects

Test before launching new campaigns - Never deploy untested AI to live prospects

Test with your team - Have colleagues role-play as prospects to find issues you might miss

Save successful conversation examples - Screenshot good AI responses to reference later

Document what works - Keep notes on which rules produce the best responses

Don't skip testing - "It should work" is not the same as "I tested it and it works"

Don't test just once - Run multiple scenarios every time you update your prompt

Don't test unrealistically - Write as prospects actually write, not as you wish they would
