# Chat SDK
The Chat SDK lets you hold conversations with Grok in a stateless fashion. Stateless in this context means that conversations are not stored on the server, so every request to the API carries the entire conversation history. Use this SDK to build custom applications on top of Grok.
## Getting started

To get started with the Chat SDK, create a new client and access its `chat` property:
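A minimal sketch, assuming the client picks up its credentials from the environment (the exact variable name depends on your SDK version):

```python
import xai_sdk

# Create a client; credentials are typically read from the environment.
client = xai_sdk.Client()

# The chat property exposes the stateless Chat API.
chat = client.chat
```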
## Turn-based conversations

To start a new conversation with Grok via the Chat SDK, call the `create_conversation()` method, then submit user messages with the `Conversation.add_response()` method.
"""A simple example demonstrating text completion."""
import asyncio
import sys
import xai_sdk
async def main():
"""Runs the example."""
client = xai_sdk.Client()
conversation = client.chat.create_conversation()
print("Enter an empty message to quit.\n")
while True:
user_input = input("Human: ")
print("")
if not user_input:
return
token_stream, _ = conversation.add_response(user_input)
print("Grok: ", end="")
async for token in token_stream:
print(token, end="")
sys.stdout.flush()
print("\n")
asyncio.run(main())
### About streaming

The `add_response` method returns a tuple of the form `(token_stream, final_response)`, where `token_stream` is an async iterable and `final_response` is a future that resolves to the final response once it has been fully generated. Consuming `token_stream` is entirely optional; it is only needed when you want to visualize the token-generation process in real time. Note, however, that `final_response` won't resolve unless `token_stream` is consumed. If you are only interested in the final response, use the auxiliary method `add_response_no_stream` instead.
"""A simple example demonstrating text completion without using streams."""
import asyncio
import xai_sdk
async def main():
"""Runs the example."""
client = xai_sdk.Client()
conversation = client.chat.create_conversation()
print("Enter an empty message to quit.\n")
while True:
user_input = input("Human: ")
print("")
if not user_input:
return
response = await conversation.add_response_no_stream(user_input)
print(f"Grok: {response.message}\n")
asyncio.run(main())
## API reference
### `xai_sdk.chat.AsyncChat`

Provides a simple chat API that can be used for products.
Source code in xai_sdk/chat.py
### `xai_sdk.chat.AsyncChat.create_conversation(fun_mode=False, disable_search=False, model_name='')`

Creates a new empty conversation.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `fun_mode` | `bool` | Whether fun mode shall be enabled for this conversation. | `False` |
| `disable_search` | `bool` | If true, Grok will not search X for context. This means Grok won't be able to answer questions that require realtime information. | `False` |
| `model_name` | `str` | Name of the model to use. If empty, the default model will be used. | `''` |

Returns:

| Type | Description |
|---|---|
| `Conversation` | Newly created conversation. |
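As a sketch, the parameters above can be combined like this (note that an empty `model_name` selects the default model):

```python
import xai_sdk

client = xai_sdk.Client()

# Fun mode on, X search off; an empty model_name selects the default model.
conversation = client.chat.create_conversation(
    fun_mode=True,
    disable_search=True,
    model_name="",
)
```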
### `xai_sdk.chat.Conversation`

A conversation held via the stateless Chat API.
### `xai_sdk.chat.Conversation.fun_mode: bool` (property)

Returns true if the conversation happens in fun mode.
### `xai_sdk.chat.Conversation.history: Sequence[stateless_chat_pb2.StatelessResponse]` (property)

Returns the linear conversation history.
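Both properties can be inspected on an existing conversation; since `history` is typed as a `Sequence`, it supports standard operations such as `len()`:

```python
import xai_sdk

client = xai_sdk.Client()
conversation = client.chat.create_conversation(fun_mode=True)

# fun_mode reflects the flag passed at creation time.
print(conversation.fun_mode)

# history starts out empty and grows with each exchange.
print(len(conversation.history))
```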
### `xai_sdk.chat.Conversation.add_response(user_message, *, image_inputs=())`

Adds a new user response to the conversation and samples a model response in return.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `user_message` | `str` | Message the user has entered. | *required* |
| `image_inputs` | `Sequence[str]` | A list of base64-encoded images that are attached to the response. | `()` |

Returns:

| Type | Description |
|---|---|
| `tuple[AsyncGenerator[str, None], Future[StatelessResponse]]` | A tuple of the form `(token_stream, final_response)`. |
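A sketch of attaching an image to a turn. The file path is a placeholder, images must be passed as base64-encoded strings, and this assumes the returned future is awaitable:

```python
import asyncio
import base64

import xai_sdk


async def main():
    client = xai_sdk.Client()
    conversation = client.chat.create_conversation()

    # image_inputs expects base64-encoded images (placeholder file name).
    with open("photo.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    token_stream, final_response = conversation.add_response(
        "What is in this picture?", image_inputs=[image_b64]
    )

    # Consume the stream so that final_response can resolve.
    async for token in token_stream:
        print(token, end="")

    response = await final_response
    print(f"\nFinal message: {response.message}")


asyncio.run(main())
```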
### `xai_sdk.chat.Conversation.add_response_no_stream(user_message, *, image_inputs=())` (async)

Same as `add_response` but doesn't return a token stream.

Use this function if you are only interested in the complete response and don't need to stream the individual tokens.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `user_message` | `str` | Message the user has entered. | *required* |
| `image_inputs` | `Sequence[str]` | A list of base64-encoded images that are attached to the response. | `()` |

Returns:

| Type | Description |
|---|---|
| `StatelessResponse` | The newly generated response. |