POST /chat/completions
curl --request POST \
  --url https://http.llm.model-cluster.on-prem.clusters.yotta-uat.cluster.s9t.link/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --header 'id: <id>' \
  --data '
{
  "model": "deepseek-v3",
  "messages": [
    {
      "role": "user",
      "content": "How do I delete a team member from an org given user and org tables?"
    }
  ],
  "stream": true,
  "max_tokens": 1024,
  "temperature": 0.7,
  "top_p": 1
}
'
{
  "id": "chatcmpl-deepseek-abc123",
  "object": "chat.completion",
  "created": 1699014493,
  "model": "deepseek-v3",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "To delete a team member from an organization, you'll need to handle the relationship between user and org tables. Here's a typical approach:\n\n```sql\n-- Option 1: Delete from junction table\nDELETE FROM user_org_memberships \nWHERE user_id = ? AND org_id = ?;\n\n-- Option 2: Update user record\nUPDATE users \nSET org_id = NULL \nWHERE user_id = ? AND org_id = ?;\n```\n\nThe exact implementation depends on your database schema design."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 156,
    "total_tokens": 174,
    "reasoning_tokens": 0
  },
  "system_fingerprint": "deepseek-v3-20241201"
}
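
The request above returns the complete response in a single JSON object, matching the example shown. To receive tokens incrementally instead, set stream to true and disable curl's output buffering with --no-buffer. The exact chunk format of the streaming option (Option 2 under Response below) is not documented on this page, so the sketch is only a hedged starting point.

curl --request POST \
  --no-buffer \
  --url https://http.llm.model-cluster.on-prem.clusters.yotta-uat.cluster.s9t.link/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --header 'id: <id>' \
  --data '
{
  "model": "deepseek-v3",
  "messages": [
    {
      "role": "user",
      "content": "How do I delete a team member from an org given user and org tables?"
    }
  ],
  "stream": true,
  "max_tokens": 1024,
  "temperature": 0.7,
  "top_p": 1
}
'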

Authorizations

Authorization
string
header
required

JWT token for authentication - Use your API token as the Bearer token

Headers

id
string
default:e04b66b0-e8ac-410e-b028-b25716a91a92
required

Model UUID for the request (DeepSeek V3 model identifier)

Body

application/json
messages
object[]
required

Array of messages in the conversation

model
enum<string>
default:deepseek-v3
required

Model identifier for DeepSeek V3

Available options:
deepseek-v3
stream
boolean
default:false

Whether to stream the response

temperature
number
default:0.7

Sampling temperature (0.0 to 2.0). Lower values suit analytical tasks; higher values suit creative work

Required range: 0 <= x <= 2
max_tokens
integer
default:1024

Maximum number of tokens to generate

Required range: 1 <= x <= 65536
top_p
number
default:1

Nucleus sampling parameter for controlling response diversity

Required range: 0 <= x <= 1
top_k
integer
default:50

Top-k sampling parameter for vocabulary selection

Required range: 1 <= x <= 100
stop
string[] | null

Sequences where the API will stop generating further tokens

Maximum array length: 4
frequency_penalty
number
default:0

Penalty for frequent tokens to reduce repetition

Required range: -2 <= x <= 2
presence_penalty
number
default:0

Penalty for new tokens to encourage topic diversity

Required range: -2 <= x <= 2
repetition_penalty
number
default:1

Penalty for repeating tokens (DeepSeek-specific parameter)

Required range: 0.1 <= x <= 2
do_sample
boolean
default:true

Whether to use sampling for generation

seed
integer | null

Random seed for reproducible outputs

reasoning_mode
boolean
default:false

Enable enhanced reasoning capabilities for complex problems
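
A request that combines several of the optional sampling parameters documented above. The values are illustrative only and stay within the documented ranges; the prompt text is a placeholder.

curl --request POST \
  --url https://http.llm.model-cluster.on-prem.clusters.yotta-uat.cluster.s9t.link/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --header 'id: <id>' \
  --data '
{
  "model": "deepseek-v3",
  "messages": [
    {
      "role": "user",
      "content": "Summarize the trade-offs between soft deletes and hard deletes."
    }
  ],
  "max_tokens": 512,
  "temperature": 0.2,
  "top_p": 0.9,
  "top_k": 40,
  "repetition_penalty": 1.05,
  "frequency_penalty": 0.3,
  "stop": ["\n\n##"],
  "seed": 42,
  "do_sample": true,
  "reasoning_mode": true
}
'

With a fixed seed and a low temperature, repeated calls should produce broadly similar output, although this page does not state that determinism is guaranteed.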

Response

Successful chat completion

  • Option 1: non-streaming chat completion (documented below)
  • Option 2: streaming chat completion

Response for a non-streaming chat completion

id
string

Unique identifier for the completion

Example:

"chatcmpl-deepseek-abc123"

object
enum<string>

Object type

Available options:
chat.completion
Example:

"chat.completion"

created
integer

Unix timestamp of when the completion was created

Example:

1699014493

model
string

The model used for completion

Example:

"deepseek-v3"

choices
object[]

Array of completion choices; each entry carries an index, the assistant message, and a finish_reason

usage
object

Token counts for the request: prompt_tokens, completion_tokens, total_tokens, and reasoning_tokens

system_fingerprint
string | null

System fingerprint for the model version

Example:

"deepseek-v3-20241201"