The Salesforce Agentforce Testing Center is a tool designed to test AI agents, providing confidence in the actions they will take before deployment.
Further detailed information can be found here.
In this blog post, we will discuss how to access the Testing Center, how to set up Test Cases, and how to interpret the results.
How to Access the Testing Center
The Salesforce Testing Center is available in Lightning Experience, in the Enterprise, Performance, Unlimited, and Developer editions.
The Testing Center can be accessed within your Salesforce instance by following this route:
Setup > Einstein > Einstein Generative AI > Agent Studio > Testing Center
From here, you can add new tests by clicking the New Test button. Once you have tests created, you can view, rerun, or delete the Test Cases. Note: you can also add tests within the Agent Builder by using the ‘Batch Test’ button.
How to Set Up Test Cases
Setting up Tests to run is straightforward. You can either create your own using the testing template provided, or you can ask for Test Cases to be generated for you. If you are testing a specific agent, creating your own Test Cases is our recommended option: you know the Tests are relevant, and you probably already have some scenarios written that you want to cover.

A template is provided in the required format, which you can download and use to write the Tests.
There are four columns within the Test Template that need completing:
- Utterance – The Test Scenario (input into Agentforce).
- Expected Topic – The API name for the Topic (e.g. GeneralCRM).
- Expected Actions – The AI functions (e.g. IdentifyRecordbyName).
- Expected Response – A description of the expected output.
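As a sketch of what a completed template might look like, the snippet below writes a one-row test-case CSV with the four columns listed above. The header names and the sample row values are assumptions based on this description; the actual headers in the template Salesforce provides may differ, so use the downloaded template as the source of truth.

```python
import csv

# Column names assumed from the four template sections described above;
# verify them against the template downloaded from Testing Center.
FIELDS = ["Utterance", "Expected Topic", "Expected Actions", "Expected Response"]

# A single illustrative test case (values are hypothetical).
test_cases = [
    {
        "Utterance": "Find the account named Acme",
        "Expected Topic": "GeneralCRM",
        "Expected Actions": "IdentifyRecordbyName",
        "Expected Response": "The agent returns the details of the Acme account record.",
    },
]

with open("agentforce_test_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(test_cases)
```

Each additional scenario you want covered becomes one more row in `test_cases`.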
To become familiar with the terminology, it is worthwhile generating test cases within the Testing Center using the ‘Generate Test Cases’ button and downloading the file that is created.
To do this, you need to create a Test Name, select the agent you are testing, and enter a description. Selecting Generate Test Cases will start the process.
Once you have created your own tests, you simply upload the CSV file, enter a Test Name, select the Agent, and enter a Description.
Clicking the Save and Run button will execute the Tests. Keep refreshing the page until the status shows as Complete.
How to Interpret the Test Results
Once the Tests have completed, a summary of the percentage passed is displayed, along with a list of the Test Results for each ‘Utterance’ that was created. You can also download the Test Results, which show the Test Case as you created it and the actual result that was returned, alongside a Pass/Fail outcome.
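If you want to recompute the pass percentage yourself from the downloaded results file, a minimal sketch follows. It assumes the export has a Pass/Fail column named "Outcome"; the actual header in the exported CSV may differ, so adjust the name to match your file.

```python
import csv

def pass_rate(path):
    """Return the percentage of passed tests in a downloaded results CSV.

    Assumes a column named "Outcome" holding "Pass" or "Fail";
    the real export's header may be named differently.
    """
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    passed = sum(1 for row in rows if row["Outcome"].strip().lower() == "pass")
    return 100.0 * passed / len(rows) if rows else 0.0
```

For example, a results file with two passes out of three rows would yield a rate of roughly 66.7%.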
It is important to remember when analyzing the test results that, unlike a hands-on manual or automated test with fixed inputs and outputs, AI responses will show some variance, so even where 100% hasn’t been achieved, the actual results may still be acceptable.
Summary
In summary, setting up and running test cases in the Testing Center is relatively straightforward once you understand the required template format. Creating your own Test Cases produces significant and relevant coverage, the execution is simple, and the test results are clear. It is a great way to validate initial AI responses.
