API
Mendable Chat API
POST https://api.mendable.ai/v1/mendableChat
This endpoint enables a chat with the Mendable AI assistant, which responds based on the given question, the conversation history, and the project's documentation. The endpoint also accepts a retriever_options object with a num_chunks parameter, which lets you choose how many information chunks the retriever brings back.
Ingesting your documents
For this endpoint to work, your documents have to already be ingested in our system. To do that, go to our platform or schedule an intro call.
We will soon provide a way to ingest documents through our API.
Creating a conversation
A conversation is a series of 'chats' between a user and a Mendable AI assistant. To start chatting, first create a new conversation using the newConversation Mendable API endpoint. The endpoint returns a conversation_id, which you use to start a chat. Here is how to get a conversation_id with cURL:
curl -X POST https://api.mendable.ai/v1/newConversation \
-H "Content-Type: application/json" \
-d '{
"api_key": "YOUR_API_KEY"
}'
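The same call can be made from JavaScript. Below is a minimal sketch, assuming the endpoint returns JSON of the form `{ "conversation_id": <number> }` as described above; the helper name and the injectable `fetchImpl` parameter (handy for testing) are our own additions:

```javascript
// Create a new Mendable conversation and return its id.
// Assumes a JSON response shaped like { "conversation_id": 42 }.
async function createConversation(apiKey, fetchImpl = fetch) {
  const res = await fetchImpl('https://api.mendable.ai/v1/newConversation', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ api_key: apiKey }),
  })
  if (!res.ok) throw new Error(`newConversation failed with status ${res.status}`)
  const data = await res.json()
  return data.conversation_id
}
```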
Creating a chat
Now that you have your conversation_id, you can use it to start chatting with your Mendable AI. You can also specify how many information chunks the retriever should bring back by using the num_chunks parameter inside retriever_options.
Example Usage with Streaming
Note: You must be able to handle Server-Sent Events (SSE) to capture the streamed tokens. See the JavaScript example below the cURL example for how to handle streaming.
Request
Here is an example request using cURL:
curl -X POST https://api.mendable.ai/v1/mendableChat \
-H "Content-Type: application/json" \
-d '{
"api_key": "YOUR_API_KEY",
"question": "How do I create a new project?",
"history": [
{ "prompt" : "How do I create a new project?", "response" : "You can create a new project by going to the projects page and clicking the new project button." }
],
"conversation_id": 12345,
"retriever_options":{
"num_chunks": 4
}
}'
or using JavaScript:
Installing Microsoft Fetch Event Source
To use streaming you need to capture Server-Sent Events (SSE); performing a normal fetch won't work. We therefore recommend the Microsoft Fetch Event Source library, which makes handling SSE straightforward. Here is how to install it:
npm install @microsoft/fetch-event-source
Example Code
import { fetchEventSource } from '@microsoft/fetch-event-source'
const url = 'https://api.mendable.ai/v1/mendableChat'
const data = {
api_key: 'YOUR_API_KEY',
question: 'How do I create a new project?',
history: [
{
prompt: 'How do I create a new project?',
response:
'You can create a new project by going to the projects page and clicking the new project button.',
},
],
conversation_id: 12345,
retriever_options: {
num_chunks: 4
},
}
let fullResponse = ''
let sources = []
let responseMessageId = null

fetchEventSource(url, {
  method: 'POST',
  headers: {
    Accept: 'text/event-stream',
    'Content-Type': 'application/json',
  },
  // Important: keeps the connection open when the tab is not active
  openWhenHidden: true,
  body: JSON.stringify(data),
  async onopen(res) {
    if (res.ok && res.status === 200) {
      console.log('Connection made', res)
    } else if (res.status >= 400 && res.status < 500 && res.status !== 429) {
      console.log('Client-side error', res)
    }
  },
  onmessage(event) {
    const parsedData = JSON.parse(event.data)
    const chunk = parsedData.chunk
    if (chunk === '<|source|>') {
      sources = parsedData.metadata
      return
    }
    if (chunk === '<|message_id|>') {
      responseMessageId = parsedData.metadata
      return
    }
    // All other chunks are parts of the answer
    fullResponse += chunk
  },
  onclose() {
    console.log('Full response:', fullResponse)
  },
  onerror(err) {
    console.error('Error:', err)
    throw err // rethrow to stop the library from retrying
  },
})
Response
data: {"chunk" : "<|source|>" , "metadata": [{"id": "string", "content": "string", "link": "string"}]}
data: {"chunk": "To create a new project in Mendable, follow these steps:"}
data: {"chunk": "\n\n1. Log in to your Mendable account."}
data: {"chunk": "\n2. Click the 'New Project' button in the dashboard."}
data: {"chunk": "\n3. Fill in the required project details and click 'Create Project'."}
data: {"chunk": "<|message_id|>", "metadata" : 12345 }
Note that the first chunk of the response is a special chunk containing the metadata for the response. You can identify it because its chunk value is "<|source|>". The metadata is an array of objects, each containing the id, content, and link of a source used to generate the response.
The last chunk is a <|message_id|> chunk, which carries the id of the message in Mendable. You can ignore this chunk for now.
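Putting the above together, handling a stream boils down to sorting events into sources, the message id, and answer text. Here is a minimal sketch (the function name is ours), assuming each `data:` event has already been JSON-parsed into a `{ chunk, metadata? }` object:

```javascript
// Collect parsed SSE events into sources, a message id, and the answer text.
// <|source|> and <|message_id|> are special chunks; everything else is
// answer text to concatenate in order.
function collectStream(events) {
  let sources = []
  let messageId = null
  let answer = ''
  for (const event of events) {
    if (event.chunk === '<|source|>') {
      sources = event.metadata
    } else if (event.chunk === '<|message_id|>') {
      messageId = event.metadata
    } else {
      answer += event.chunk
    }
  }
  return { sources, messageId, answer }
}
```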
Request Parameters
Parameter | Type | Required | Description |
---|---|---|---|
api_key | string | true | Your unique Mendable API key. |
question | string | true | The user's input or question. |
history | array | true | An array of conversation objects containing prompt and response strings and an optional array of source objects. |
conversation_id | float | false | Id of the conversation, returned by the newConversation endpoint |
temperature | float | false | Controls the randomness of the AI's response (0.0 to 1.0). |
additional_context | string | false | Additional context from your API to be added to the prompt |
relevance_threshold | number | false | This allows you to filter out sources that fall below a specified relevance_threshold. The relevance_threshold is a value between 0 and 1, where 0 filters out no sources and 1 filters out all sources |
where | object | false | This allows you to filter out sources based on metadata. See below for more details. |
retriever_options | object | false | This allows you to specify the number of information chunks the retriever will bring back. |
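As a sketch of how these parameters fit together, the hypothetical helper below assembles a /mendableChat request body and checks the required fields from the table (the function name is ours):

```javascript
// Build a /mendableChat request body, enforcing the required parameters
// (api_key, question, history) and passing optional ones through unchanged.
function buildChatRequest({ api_key, question, history, ...optional }) {
  if (!api_key || !question || !Array.isArray(history)) {
    throw new Error('api_key, question and history are required')
  }
  return { api_key, question, history, ...optional }
}
```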
Stream Response
The response is sent as a series of Server-Sent Events (SSE). Each chunk of the AI-generated response is sent as a data event.
- Chunk <|source|> contains metadata for the sources of the content.
- Chunk <|message_id|> contains the id of the message in Mendable.
- Chunk <|loading_tools|> only shows if "Tools and Actions" is enabled. It contains "true" while the AI is looking for tools to act on the user's request.
- Chunk <|tool_called|> only shows if "Tools and Actions" is enabled. It contains an object with the name of the tool that was called: {name: tool_name}.
- Chunk <|tool_output|> only shows if "Tools and Actions" is enabled. It contains a data object with the output of the tool that was used.
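The "Tools and Actions" chunks can be routed in your onmessage handler before the regular answer chunks. The sketch below assumes the same `{ chunk, metadata }` shape as the other special chunks; the handler names are ours:

```javascript
// Route a parsed SSE event if it is a tool-related special chunk.
// Returns true if the event was handled, false if it should be treated
// as a normal answer (or other) chunk.
function handleToolChunk(event, handlers) {
  switch (event.chunk) {
    case '<|loading_tools|>':
      handlers.onLoadingTools(event.metadata) // "true" while the AI searches for a tool
      return true
    case '<|tool_called|>':
      handlers.onToolCalled(event.metadata) // e.g. { name: 'Web Search' }
      return true
    case '<|tool_output|>':
      handlers.onToolOutput(event.metadata) // the tool's output data
      return true
    default:
      return false
  }
}
```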
Non-Stream Response
If you disable streaming, you will get a normal JSON response. Here is an example:
{
"answer": {
"text": "This is how to deploy it..."
},
"message_id": 123,
"sources": [
{
"id": 866,
"content":"",
"link": "",
"relevance_score": 0.99
}
],
"tools_used": [
{
"id": "eb2b1c99-8e9b-4697-82ca-6151a7f3c0b5",
"internal_run_id": "9263987",
"tool_name": "Web Search",
"has_error": false,
"date": "2024-01-11T22:07:05.932Z"
}
]
}
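With a non-streamed response in hand, pulling out the useful fields is straightforward. A minimal sketch, assuming the response shape shown above (the function name is ours):

```javascript
// Extract the answer text, message id, sources, and tool names from a
// non-streamed /mendableChat response body.
function parseChatResponse(body) {
  return {
    answer: body.answer.text,
    messageId: body.message_id,
    sources: (body.sources || []).map((s) => ({
      id: s.id,
      link: s.link,
      score: s.relevance_score,
    })),
    toolsUsed: (body.tools_used || []).map((t) => t.tool_name),
  }
}
```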
Metadata Filtering (Beta)
You can filter out sources based on metadata. This is useful if you want to filter sources based on the version of the document or any other metadata you have ingested. Metadata ingestion is currently only available through the /ingestDocuments endpoint.
To use metadata filtering, pass the where parameter when querying. Here is an example of what you need to send when calling the /mendableChat endpoint:
{
"api_key": "",
"question": "What does Jack play?",
"history":[],
"conversation_id": 123145,
"shouldStream":false,
"where":{
"version": { "$lte": 6 } ,
"age": {"$eq": 22 }
}
}
The where parameter is an object containing the metadata you want to filter on. Each key is the name of a metadata field, and each value is an object containing the operator and the value to filter on.
Metadata supports the following operators:
- $eq - equal to (string, int, float)
- $ne - not equal to (string, int, float)
- $gt - greater than (int, float)
- $gte - greater than or equal to (int, float)
- $lt - less than (int, float)
- $lte - less than or equal to (int, float)
- $in - matches any value in an array (string, int, float)
It currently does not support and / or operators, but we plan to support them soon.
All the other chunks are parts of the AI-generated response, which you can concatenate to build the full answer.
Disabling Streaming
Alternatively, you can disable streaming by passing the shouldStream parameter as false in the request. With streaming disabled, you get the bot's response directly from the response body.
Here is an example of the request body with streaming disabled:
{
"question": "How to deploy my application?",
"history": [],
"anon_key": "<ANON_KEY>",
"conversation_id": "<convo_id>",
"shouldStream": false
}
And this is the response:
{
"answer": {
"text": "This is how to deploy it..."
},
"message_id": 123,
"sources": [
{
"id": 866,
"content":"",
"link": "",
"relevance_score": 0.99
}
]
}
Additional Tracking
To enhance your analytics capabilities, the mendableChat
API request supports an additional tracking parameter, tracking
.
Tracking allows you to include additional information about the user's interaction with the Mendable AI assistant. This can be used to track the user's role, the user's location, or any other information that you would like to track.
As an example, including roleInfo
as a string value representing the user's role, you can gather more detailed insights into user interactions. This feature is designed to assist in tracking the role of the user more effectively, enabling the generation of more refined analytics.
To use it pass roleInfo
inside tracking
:
{
"api_key": "YOUR_API_KEY",
"question": "How do I create a new project?",
"history": [
{ "prompt" : "How do I create a new project?", "response" : "You can create a new project by going to the projects page and clicking the new project button." }
],
"conversation_id": 12345,
"retriever_options":{
"num_chunks": 4
},
"tracking": {
"roleInfo": "admin"
}
}
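Attaching tracking to an existing request body is a one-liner; a minimal sketch (the helper name is ours), where roleInfo is the field mentioned above and any other tracking keys are up to you:

```javascript
// Return a copy of a chat request body with a `tracking` object attached.
function withTracking(requestBody, trackingInfo) {
  return { ...requestBody, tracking: { ...trackingInfo } }
}
```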