PUT /v1/update-digital-human/{digital_human_id}
Update Digital Human
curl --request PUT \
  --url https://api.getbluejay.ai/v1/update-digital-human/{digital_human_id} \
  --header 'Content-Type: application/json' \
  --header 'X-API-Key: <x-api-key>' \
  --data '
{
  "intent": "<string>",
  "success_criteria": "<string>",
  "name": "<string>",
  "tag": "<string>",
  "language": "en",
  "accent": "multilingual",
  "gender": "male",
  "background_noise": "none",
  "voice_speed": "normal",
  "audio_quality": "high",
  "fluency": "beginner",
  "verbosity": "low",
  "phone_number": "<string>",
  "outbound_text_number": "<string>",
  "background_noise_volume": 0.8,
  "expected_tool_calls": [
    {
      "name": "<string>",
      "parameters": {},
      "output": "<unknown>"
    }
  ],
  "allow_end_call_tool": true,
  "allow_silence_tool": true,
  "silence_tool_instructions": "<string>",
  "endpointing_delay": 1.5,
  "creativity": 0.7,
  "hangup_phrases": [
    "<string>"
  ],
  "hangup_instructions": "<string>",
  "silence_timeout": 16,
  "simulation_ids": [
    123
  ],
  "traits": [
    {
      "trait_name": "<string>",
      "trait_data_type": "BOOLEAN",
      "value": "<unknown>",
      "is_sip_header": false
    }
  ],
  "interruptions": {
    "type": "none"
  },
  "scripted_responses": [
    {
      "match_type": "exact",
      "match_phrase": "<string>",
      "response_type": "phrase",
      "occurrence_mode": "always",
      "response_value": "<string>",
      "occurrence_n": 1,
      "silence_duration": 1
    }
  ],
  "role_description": "<string>",
  "speaks_first_config": {
    "speaks_first": true,
    "mode": "custom",
    "message": "<string>"
  },
  "original_transcript": "<string>",
  "formatted_transcript": [
    {}
  ],
  "enriched_playback": [
    {}
  ],
  "num_runs": 2,
  "livekit_metadata": {},
  "always_on_mode": true,
  "always_on_active": true,
  "override_conflict": false
}
'
{
  "digital_human": {
    "intent": "<string>",
    "success_criteria": "<string>",
    "id": 123,
    "tag": "<string>",
    "name": "<string>",
    "language": "en",
    "accent": "multilingual",
    "gender": "male",
    "background_noise": "<string>",
    "voice_speed": "<string>",
    "audio_quality": "<string>",
    "fluency": "<string>",
    "verbosity": "<string>",
    "phone_number": "<string>",
    "outbound_text_number": "<string>",
    "websocket_url": "<string>",
    "background_noise_volume": 123,
    "expected_tool_calls": [
      {
        "name": "<string>",
        "parameters": {},
        "output": "<unknown>"
      }
    ],
    "allow_end_call_tool": true,
    "allow_silence_tool": true,
    "silence_tool_instructions": "default",
    "endpointing_delay": 123,
    "creativity": 123,
    "hangup_phrases": [
      "<string>"
    ],
    "hangup_instructions": "<string>",
    "silence_timeout": 123,
    "role_description": "<string>",
    "created_at": "2023-11-07T05:31:56Z",
    "traits": [
      {
        "trait_name": "<string>",
        "trait_data_type": "BOOLEAN",
        "value": "<unknown>",
        "is_sip_header": false
      }
    ],
    "interruptions": {},
    "scripted_responses": [
      {
        "match_type": "exact",
        "match_phrase": "<string>",
        "response_type": "phrase",
        "occurrence_mode": "always",
        "response_value": "<string>",
        "occurrence_n": 1,
        "silence_duration": 1
      }
    ],
    "speaks_first_config": {
      "speaks_first": true,
      "mode": "custom",
      "message": "<string>"
    },
    "original_transcript": "<string>",
    "formatted_transcript": [
      {}
    ],
    "enriched_playback": [
      {}
    ],
    "workflow_v2_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
    "workflow_path_index": 123,
    "livekit_metadata": {}
  },
  "simulation_ids": [
    123
  ],
  "simulation_id": 123
}
Integration Prompt for AI Agents
# Bluejay — Testing & Monitoring Platform for Conversational AI Agents

You are a senior backend engineer integrating the Bluejay API. Think step-by-step: first understand the endpoint, then plan the integration, then implement with minimal changes.

## Update Digital Human — PUT /v1/update-digital-human/{digital_human_id}

> **What this endpoint does:** Update a digital human by ID. Returns 404 if the digital human does not exist or belongs to a different organization.
>
> - **Patch-style body:** omit a field to leave it unchanged; include `allow_silence_tool` and/or `silence_tool_instructions` to update them (use the string `"default"` for built-in silence-tool behavior).
> - **Effective tag:** if the body includes `tag`, workflow rules use that tag; otherwise they use the existing stored tags.
> - **Workflow-tagged (tag lowercased contains `workflow`):** returns **400** if `intent`, `success_criteria`, `role_description`, `original_transcript`, or `formatted_transcript` **change** from stored values. Sending the same values as stored (an idempotent full PUT) is OK. `enriched_playback` remains updatable.
> - **Transcript update behavior (three cases):**
>   - `original_transcript` only: the middleware calls an LLM formatter to produce `formatted_transcript`; both are stored.
>   - Both `original_transcript` and `formatted_transcript`: LLM formatting is skipped; both values are stored as-is and the intent/description is derived from the formatted transcript.
>   - `formatted_transcript` only: LLM formatting is skipped; `formatted_transcript` is stored, `original_transcript` is set to null in the DB, and the intent/description is derived from the formatted transcript.
>
> See docs: Workflow tags & enriched playback.
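The three transcript cases above can be sketched as payload shapes. Field names come from this endpoint's schema; the transcript text and the `role` values (`"agent"`, `"caller"`) are illustrative assumptions, not mandated by the API.

```python
# Case 1: original_transcript only -> the server runs the LLM formatter
# and stores both the original and the generated formatted_transcript.
payload_original_only = {
    "original_transcript": "Agent: Hi, how can I help?\nCaller: I'd like to reschedule.",
}

# Case 2: both fields -> LLM formatting is skipped; both stored as-is,
# intent/description derived from the formatted transcript.
payload_both = {
    "original_transcript": "Agent: Hi, how can I help?\nCaller: I'd like to reschedule.",
    "formatted_transcript": [
        {"role": "agent", "utterance": "Hi, how can I help?"},
        {"role": "caller", "utterance": "I'd like to reschedule."},
    ],
}

# Case 3: formatted_transcript only -> original_transcript is set to null
# in the DB; intent/description derived from the formatted transcript.
payload_formatted_only = {
    "formatted_transcript": [
        {"role": "agent", "utterance": "Hi, how can I help?"},
        {"role": "caller", "utterance": "I'd like to reschedule."},
    ],
}
```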

**Endpoint:** PUT `https://api.getbluejay.ai/v1/update-digital-human/{digital_human_id}`
**Auth:** `X-API-Key` header
**Content-Type:** application/json

### Required Parameters
| Name | Type | Description |
|------|------|-------------|
| digital_human_id | integer |  |
| X-API-Key | string | API key required to authenticate requests. |

Review the full parameter list at https://docs.getbluejay.ai/api-reference/endpoint/update-digital-human and include any optional parameters (e.g., `intent`, `success_criteria`, `name`, `tag`, `language`, `accent`) that serve your integration's use case and align with Bluejay's testing and monitoring capabilities.
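Because body fields are patch-style, a minimal update sends only the fields you want to change. A small sketch, assuming a hypothetical `build_update_request` helper (the URL format and field names are from this page; the ID and values are placeholders):

```python
BASE_URL = "https://api.getbluejay.ai"

def build_update_request(digital_human_id: int, changes: dict) -> tuple[str, dict]:
    """Return the (url, body) pair for a patch-style update.

    Only the fields present in `changes` are sent, so every omitted
    field keeps its stored value on the server.
    """
    url = f"{BASE_URL}/v1/update-digital-human/{digital_human_id}"
    return url, changes

# Rename the digital human and raise verbosity; all other fields untouched.
url, body = build_update_request(123, {"name": "Rescheduling Caller", "verbosity": "medium"})
```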

### Request Body
```json
{
  "intent": "string",
  "success_criteria": "string",
  "name": "string",
  "tag": "string",
  "language": "en",
  "accent": "multilingual",
  "gender": "male",
  "background_noise": "none",
  "voice_speed": "slowest",
  "audio_quality": "high",
  "fluency": "beginner",
  "verbosity": "low",
  "phone_number": "string",
  "outbound_text_number": "string",
  "background_noise_volume": 1.0,
  "expected_tool_calls": [
    {
      "name": "example_name",
      "parameters": {
        "key": "value"
      }
    }
  ],
  "allow_end_call_tool": true,
  "endpointing_delay": 1.0,
  "creativity": 1.0,
  "hangup_phrases": [
    "string"
  ],
  "hangup_instructions": "string",
  "allow_silence_tool": false,
  "silence_tool_instructions": "default",
  "silence_timeout": 123,
  "simulation_ids": [
    123
  ],
  "traits": [
    {
      "trait_name": "example_name",
      "trait_data_type": "BOOLEAN",
      "value": "string",
      "is_sip_header": false
    }
  ],
  "interruptions": {
    "type": "none"
  },
  "scripted_responses": [
    {
      "match_type": "exact",
      "match_phrase": "string",
      "response_type": "phrase",
      "response_value": "string",
      "occurrence_mode": "always",
      "occurrence_n": 123,
      "silence_duration": 123
    }
  ],
  "role_description": "string",
  "speaks_first_config": {
    "speaks_first": true,
    "mode": "custom",
    "message": "string"
  },
  "original_transcript": "string",
  "formatted_transcript": [
    {
      "key": "value"
    }
  ],
  "enriched_playback": [
    {
      "key": "value"
    }
  ],
  "num_runs": 123,
  "livekit_metadata": {
    "key": "value"
  },
  "always_on_mode": true,
  "always_on_active": true,
  "override_conflict": true
}
```

### Example
**PUT with body:**
```python
import requests

def update_digital_human(digital_human_id: int, payload: dict, api_key: str) -> dict:
    url = f"https://api.getbluejay.ai/v1/update-digital-human/{digital_human_id}"
    headers = {"X-API-Key": api_key}
    response = requests.put(url, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()
```

### Constraints
- Minimal changes — only add/change files needed for this integration.
- Match existing codebase patterns (naming, file structure, error handling).
- Include error handling for 400: Workflow-tagged digital human: field changes not allowed, or business-rule violation (e.g. invalid phone number, invalid accent); 404: Digital human not found or belongs to a different organization; 422: Validation Error.
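The documented statuses can be mapped to typed errors before retry/logging logic. A minimal sketch, assuming a hypothetical `BluejayAPIError` class and `check_update_response` helper; the status messages paraphrase the codes listed above:

```python
class BluejayAPIError(Exception):
    """Raised for the documented non-2xx responses of this endpoint."""

    def __init__(self, status: int, detail: str):
        super().__init__(f"{status}: {detail}")
        self.status = status
        self.detail = detail

# Summaries of the documented error statuses for PUT /v1/update-digital-human.
_KNOWN_ERRORS = {
    400: "Workflow-tagged field change not allowed, or business-rule violation",
    404: "Digital human not found or belongs to a different organization",
    422: "Validation error",
}

def check_update_response(status_code: int) -> None:
    """Raise BluejayAPIError for known error statuses; pass otherwise."""
    if status_code in _KNOWN_ERRORS:
        raise BluejayAPIError(status_code, _KNOWN_ERRORS[status_code])
```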

### Integration Checklist
Before writing code, verify:
1. Which module/service owns this API domain in the codebase?
2. What HTTP client and error-handling patterns does the project use?
3. Are there existing types/interfaces to extend?

Then implement the integration, export it, and confirm it compiles/passes lint.
This endpoint allows you to update an existing digital human. Effective tag (body vs stored) can affect validation; see the OpenAPI schema and response codes on this page for update constraints.

Headers

X-API-Key
string
required

API key required to authenticate requests.

Path Parameters

digital_human_id
integer
required

Body

application/json

Request model for updating a digital human - contains only the digital human data.

intent
string | null

Description of the digital human

success_criteria
string | null

Success criteria for the digital human

name
string | null

Name of the digital human

tag
string | null

Tag for categorizing the digital human

language
enum<string> | null
default:en

Language the digital human speaks

Available options:
en,
es,
pt,
ja,
tr,
hi,
ar,
ru,
zh,
ml,
fr,
yue,
vi,
de
accent
enum<string> | null

Accent of the digital human

Available options:
multilingual,
american,
american2,
mature,
southern,
italian,
indian,
british,
australian,
mexican,
spanish,
portuguese,
french,
turkish,
japanese,
hindi,
arabic,
russian,
chinese,
german
gender
enum<string> | null

Gender of the digital human

Available options:
male,
female
background_noise
enum<string> | null
default:none

Type of background noise

Available options:
none,
office,
talking,
traffic,
cafe,
park,
tv
voice_speed
enum<string> | null
default:normal

Speed of the digital human's voice

Available options:
slowest,
slow,
normal,
fast,
fastest
audio_quality
enum<string> | null

Audio quality of the digital human's voice

Available options:
high,
medium,
low,
horrible
fluency
enum<string> | null

Fluency level of the digital human's speech

Available options:
beginner,
intermediate,
native
verbosity
enum<string> | null

Verbosity level of the digital human's responses

Available options:
low,
medium,
high
phone_number
string | null

Phone number for the digital human

outbound_text_number
string | null

Outbound text number

background_noise_volume
number | null
default:0.8

Volume of background noise

Required range: 0 <= x <= 1
expected_tool_calls
ExpectedToolCall · object[] | null

Expected tool call outputs

allow_end_call_tool
boolean | null

Allow the digital human to use the end-call tool to end the call

allow_silence_tool
boolean | null

Allow the digital human to use the silence tool

silence_tool_instructions
string | null

Silence-tool instructions; set to "default" for built-in behavior, or provide custom text

endpointing_delay
number | null
default:1.5

Delay for endpointing

creativity
number | null
default:0.7

How creative the digital human is (Model temperature)

Required range: 0 <= x <= 2
hangup_phrases
string[] | null

Phrases that trigger hangup

hangup_instructions
string | null

Freeform instructions for how/when to hang up

silence_timeout
integer | null

Silence timeout in seconds

Required range: x >= 15
simulation_ids
integer[] | null

Array of simulation IDs to associate with this digital human. If provided, completely replaces existing associations.

traits
Trait · object[] | null

List of traits associated with this digital human. If provided, completely replaces existing traits.

interruptions
Default · object

Simple interruption configuration with predefined levels.

scripted_responses
ScriptedResponse · object[] | null

List of scripted responses for the digital human. If provided, completely replaces existing scripted responses.

role_description
string | null

Description of the role for the digital human

speaks_first_config
SpeaksFirstConfig · object

Speaks first configuration for the digital human. If provided, completely replaces existing speaks first config.

original_transcript
string | null

Original transcript text. If changed from the current value, utterances are re-extracted and the intent is regenerated.

formatted_transcript
Formatted Transcript · object[] | null

Pre-computed structured transcript as [{role, utterance}]. When provided alongside original_transcript, skips the LLM formatting call.

enriched_playback
Enriched Playback · object[] | null

Optional enriched playback stored as JSONB: a list of turn objects

num_runs
integer | null

Number of times this digital human is run per simulation run (run count).

Required range: x >= 1
livekit_metadata
Livekit Metadata · object

LiveKit-specific configuration and metadata for this digital human. If provided, completely replaces existing metadata.

always_on_mode
boolean | null

Whether always-on mode is enabled for this digital human

always_on_active
boolean | null

When true, this DH actively receives calls on phone_number; when false, the number is assigned but inactive

override_conflict
boolean | null
default:false

When true, allows activation to replace an existing always-on active DH using the same phone number.
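The numeric ranges documented above can be pre-checked client-side before sending the request. This is a convenience sketch with a hypothetical `validate_ranges` helper; the server still validates and returns 422 on violations:

```python
def validate_ranges(body: dict) -> list[str]:
    """Check the documented numeric ranges on an update payload.

    Returns a list of human-readable problems; empty means the
    documented ranges are satisfied.
    """
    errors = []
    v = body.get("background_noise_volume")
    if v is not None and not (0 <= v <= 1):
        errors.append("background_noise_volume must be in [0, 1]")
    c = body.get("creativity")
    if c is not None and not (0 <= c <= 2):
        errors.append("creativity must be in [0, 2]")
    t = body.get("silence_timeout")
    if t is not None and t < 15:
        errors.append("silence_timeout must be >= 15")
    n = body.get("num_runs")
    if n is not None and n < 1:
        errors.append("num_runs must be >= 1")
    return errors
```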

Response

Successful Response

Response model for digital human operations with clear separation of digital human data and simulation context.

digital_human
DigitalHumanResponseData · object
required

The digital human data

simulation_ids
integer[] | null

List of simulation IDs associated with this digital human

simulation_id
integer | null
deprecated

ID of the associated simulation. Use simulation_ids instead.
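When reading the response, prefer `simulation_ids` and keep the deprecated `simulation_id` only as a fallback for older payloads. A small sketch with a hypothetical `get_simulation_ids` helper; the field names come from the response schema above:

```python
def get_simulation_ids(response: dict) -> list[int]:
    """Return the associated simulation IDs from an update response.

    Prefers the simulation_ids array; falls back to the deprecated
    singular simulation_id if the array is absent.
    """
    ids = response.get("simulation_ids")
    if ids is not None:
        return ids
    legacy = response.get("simulation_id")
    return [legacy] if legacy is not None else []
```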