Given an agent, this endpoint generates digital humans to test that agent, based on the requested simulation types.
This endpoint does not accept allow_silence_tool or silence_tool_instructions; those fields are set explicitly when you create or update a digital human. Generated or returned digital_human objects in responses may still include those fields at their model defaults (allow_silence_tool: false, silence_tool_instructions: "default"). Whether the silence tool runs in voice simulations is determined by the execution layer that reads stored test-case data, not by this API alone.

An API key is required to authenticate requests.
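To illustrate the note above, here is a minimal sketch of checking whether a returned digital_human object carries only the model-default silence-tool values. The helper function and the sample object are hypothetical; only the field names and defaults come from the docs.

```python
# Model defaults for the silence-tool fields, per the docs above.
DEFAULT_SILENCE_FIELDS = {
    "allow_silence_tool": False,
    "silence_tool_instructions": "default",
}

def silence_fields_are_defaults(digital_human: dict) -> bool:
    """Return True if a returned digital_human carries only the
    model-default silence-tool values (nothing set explicitly)."""
    return all(
        digital_human.get(field, default) == default
        for field, default in DEFAULT_SILENCE_FIELDS.items()
    )

# A generated digital_human may echo the defaults even though the
# generation request never set them.
generated = {
    "name": "Caller A",
    "allow_silence_tool": False,
    "silence_tool_instructions": "default",
}
print(silence_fields_are_defaults(generated))  # True
```

Remember that even when these fields read as defaults, silence-tool behavior in voice simulations is decided by the execution layer, not by these stored values alone.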
Pydantic model for digital humans (simulation type) request
ID of the agent to be used to generate the digital humans
Optionally attach the digital humans to a simulation
Prompt UUID associated with the simulation
Knowledge base UUID associated with the simulation
List of goal adherence scenarios
Dictionary of workflow IDs and their counts
Dictionary of workflow_v2 IDs and their counts (uses new graph definition format)
List of transcript replays to generate digital humans from
Replay a transcript by providing the transcript text directly.
List of customer personas to be used in the simulation
Number of load testing calls
Number of red teaming calls
List of traits to apply to all generated digital humans
Number of runs per digital human per simulation run (run count).
Constraints: x >= 1

Successful Response
List of digital humans created for the simulation
Status of the response
Whether the generation succeeded
Scenario view of created digital humans (includes original_transcript and formatted_transcript for transcript replays)
Warning message if the request was modified (e.g. truncated because it exceeded the maximum limit)
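The request and response fields above can be sketched as a round trip. The field names, sample values, and response shape below are illustrative assumptions mapped from the field descriptions; the real API may use different names.

```python
import json

# Request payload mirroring the documented request fields
# (all key names are assumptions, not confirmed by the docs).
payload = {
    "agent_id": "agent_123",          # agent used to generate the digital humans
    "simulation_id": "sim_456",       # optionally attach the digital humans to a simulation
    "prompt_uuid": "prompt_789",      # prompt associated with the simulation
    "goal_adherence_scenarios": ["ask for a refund"],
    "workflows": {"wf_1": 2},         # workflow ID -> count
    "transcript_replays": [
        {"transcript": "Agent: Hello!\nCustomer: Hi, I need help."}
    ],
    "load_testing_calls": 0,
    "red_teaming_calls": 0,
    "traits": ["impatient"],          # applied to all generated digital humans
    "run_count": 1,                   # constrained to x >= 1
}
assert payload["run_count"] >= 1, "run_count must be at least 1"
body = json.dumps(payload)

# Response shape mirroring the documented response fields.
response = {
    "digital_humans": [{"name": "Caller A"}],
    "status": "completed",
    "success": True,
    "scenarios": [
        {"original_transcript": "...", "formatted_transcript": "..."}
    ],
    "warning": None,                  # set if the request was modified
}
if response["warning"]:
    print("Request was modified:", response["warning"])
```

Checking the warning field after every call is a cheap way to detect silent truncation when a request exceeds the maximum limit.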