Create Simulation
Use Playground to define sample inputs that help you test how NEXT AI prepares highlights before you rely on the setup in production workflows.
Creating a simulation gives your team a reusable test case for checking whether the current preparation setup behaves as expected. It is especially useful when you are unsure why a tag was, or was not, assigned by a Tagging Job.
Before You Start
- You must have permission to edit teamspace settings.
- Choose a realistic highlight example that represents the behavior you want to test.
- Decide what a successful AI preparation looks like, e.g. which tags must be assigned.
Steps
- Open Teamspace Settings > Prepare with AI > Playground.
- Select Create.
- Name the simulation so the purpose is obvious to your team.
- Paste the sample highlight text you want to test.
- Add a success criterion, e.g. the tag you expect to be assigned.
When Should I Use Simulations?
- When you want a reusable test case for validating AI preparation behavior.
- When your team is refining the setup and needs a consistent example to check against.
- When you need to understand why a Tagging Job did or did not assign a specific tag.
Tips
- Use realistic examples from customer conversations.
- Give each simulation a name that makes the expected outcome obvious.
- Add the expected tag as a success criterion so Playground can explain why the tag was not assigned.
- Use Playground to inspect the lowest-level understanding that NEXT has of the highlight in the context of a Tagging Job.
Example
This example creates a simulation for a customer asking for a reminder before the next invoice is due.
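Conceptually, a simulation bundles a name, a sample highlight, and success criteria. The sketch below models that bundle for the invoice-reminder example; it is a hypothetical illustration, not NEXT's actual data format, and the tag name "Billing" and the highlight wording are assumptions chosen for this example.

```python
from dataclasses import dataclass, field


@dataclass
class Simulation:
    """Hypothetical model of a Playground simulation (illustration only)."""
    name: str
    highlight_text: str
    expected_tags: set[str] = field(default_factory=set)

    def check(self, assigned_tags: set[str]) -> dict[str, bool]:
        """Report, per expected tag, whether a Tagging Job run assigned it."""
        return {tag: tag in assigned_tags for tag in self.expected_tags}


# The invoice-reminder example from this article; "Billing" is an assumed tag name.
sim = Simulation(
    name="Invoice reminder request, expect 'Billing' tag",
    highlight_text=(
        "Could you send me a reminder a few days before my next "
        "invoice is due? I keep missing the payment date."
    ),
    expected_tags={"Billing"},
)

# Suppose a Tagging Job run assigned these tags:
print(sim.check(assigned_tags={"Billing", "Feature Request"}))
```

A clear name and an explicit expected tag make the simulation self-explanatory: anyone on the team can see at a glance what outcome the test is meant to validate.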
FAQ
Q: What makes a good simulation?
A good simulation uses realistic customer language, a clear name, and success criteria for the expected tag so anyone on the team can understand what outcome the test is meant to validate.
Q: Why should I add the expected tag as a success criterion?
Adding the expected tag as a success criterion gives Playground a clear target. That makes it easier to understand why a tag was not assigned, because the result explains how NEXT interpreted the highlight in the context of the Tagging Job.
Q: What if the Playground result looks correct but the final tag assignment is still different?
Playground shows the lowest-level interpretation of the highlight for the Tagging Job, but additional processing happens afterward. If the Playground result looks correct and the final assignment still does not match, share the Playground deeplink with NEXT Support for further investigation.