Create eval
Create the structure of an evaluation that can be used to test a model's performance. An evaluation is a set of testing criteria and the config for a data source, which dictates the schema of the data used in the evaluation. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and datasources. For more information, see the Evals guide.
Parameters
body: EvalCreateParams { data_source_config, testing_criteria, metadata, name }
data_source_config: Custom { item_schema, type, include_sample_schema } | Logs { type, metadata } | StoredCompletions { type, metadata } The configuration for the data source used for the evaluation runs. Dictates the schema of the data used in the evaluation.
Custom { item_schema, type, include_sample_schema } A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs.
This schema is used to define the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for each row in the data source.
The type of data source. Always custom.
Whether the eval should expect you to populate the sample namespace (ie, by generating responses off of your data source)
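For illustration, a custom data source config might look like the following (the item_schema shown here is a hypothetical example; yours should describe the rows in your own data source):

```json
{
  "type": "custom",
  "item_schema": {
    "type": "object",
    "properties": {
      "question": { "type": "string" },
      "expected_answer": { "type": "string" }
    },
    "required": ["question", "expected_answer"]
  },
  "include_sample_schema": true
}
```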
Logs { type, metadata } A data source config which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The type of data source. Always logs.
Metadata filters for the logs data source.
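As a sketch, a logs data source config that filters on the usecase metadata key mentioned above might look like:

```json
{
  "type": "logs",
  "metadata": { "usecase": "chatbot" }
}
```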
StoredCompletions { type, metadata } Deprecated in favor of LogsDataSourceConfig.
The type of data source. Always stored_completions.
Metadata filters for the stored completions data source.
testing_criteria: Array<LabelModel { input, labels, model, 3 more } | StringCheckGrader { input, name, operation, 2 more } | TextSimilarity { pass_threshold } | 2 more> A list of graders for all eval runs in this group. Graders can reference variables in the data source using double curly braces notation, like {{item.variable_name}}. To reference the model's output, use the sample namespace (ie, {{sample.output_text}}).
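The double-curly-brace references can be illustrated with a minimal substitution sketch. This is only an illustration of how references like {{item.variable_name}} and {{sample.output_text}} resolve against a data-source row and a model sample, not the service's actual implementation:

```typescript
// Minimal sketch: resolve {{item.*}} and {{sample.*}} references in a
// grader template against one data-source row and one model sample.
function renderTemplate(
  template: string,
  context: { item: Record<string, string>; sample: Record<string, string> },
): string {
  // Each match captures the namespace (item or sample) and the variable name.
  return template.replace(/\{\{\s*(item|sample)\.(\w+)\s*\}\}/g, (_, ns, key) =>
    context[ns as 'item' | 'sample'][key] ?? '',
  );
}

const rendered = renderTemplate(
  'Expected {{item.expected_answer}}, model said {{sample.output_text}}',
  {
    item: { expected_answer: 'Paris' },
    sample: { output_text: 'Paris' },
  },
);
console.log(rendered); // Expected Paris, model said Paris
```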
LabelModel { input, labels, model, 3 more } A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
input: Array<SimpleInputMessage { content, role } | EvalItem { content, role, type }> A list of chat messages forming the prompt or context. May include variable references to the item namespace, ie {{item.name}}.
SimpleInputMessage { content, role }
The content of the message.
The role of the message (e.g. "system", "assistant", "user").
EvalItem { content, role, type } A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
ResponseInputText { text, type } A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
OutputText { text, type } A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
InputImage { image_url, type, detail } An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
ResponseInputAudio { input_audio, type } An audio input to the model.
input_audio: InputAudio { data, format }
Base64-encoded audio data.
format: "mp3" | "wav" The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
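Putting the content variants together, an EvalItem content array mixing the item types described above might look like this (the URL and audio data are placeholders):

```json
[
  { "type": "input_text", "text": "Describe what you see and hear." },
  {
    "type": "input_image",
    "image_url": "https://example.com/photo.png",
    "detail": "auto"
  },
  {
    "type": "input_audio",
    "input_audio": { "data": "<base64-encoded audio>", "format": "wav" }
  }
]
```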
GraderInputs = Array<string | ResponseInputText { text, type } | OutputText { text, type } | 2 more> A list of inputs, each of which may be either an input text, output text, input image, or input audio object.
ResponseInputText { text, type } A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
OutputText { text, type } A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
InputImage { image_url, type, detail } An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
ResponseInputAudio { input_audio, type } An audio input to the model.
input_audio: InputAudio { data, format }
Base64-encoded audio data.
format: "mp3" | "wav" The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
role: "user" | "assistant" | "system" | "developer" The role of the message input. One of user, assistant, system, or developer.
The type of the message input. Always message.
The labels to assign to each item in the evaluation.
The model to use for the evaluation. Must support structured outputs.
The name of the grader.
The labels that indicate a passing result. Must be a subset of labels.
The object type, which is always label_model.
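As an example, a label_model grader built from the fields above might be configured as follows (the model name and label set are illustrative, not prescribed values):

```json
{
  "type": "label_model",
  "name": "sentiment_grader",
  "model": "gpt-4o",
  "input": [
    { "role": "system", "content": "Classify the sentiment of the reply." },
    { "role": "user", "content": "{{sample.output_text}}" }
  ],
  "labels": ["positive", "neutral", "negative"],
  "passing_labels": ["positive", "neutral"]
}
```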
StringCheckGrader { input, name, operation, 2 more } A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
The input text. This may include template strings.
The name of the grader.
operation: "eq" | "ne" | "like" | "ilike" The string check operation to perform. One of eq, ne, like, or ilike.
The reference text. This may include template strings.
The object type, which is always string_check.
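For example, a string_check grader comparing the model's output against a reference answer could be configured as below (this assumes the data source defines an expected_answer field; substitute your own variable name):

```json
{
  "type": "string_check",
  "name": "exact_match",
  "input": "{{sample.output_text}}",
  "operation": "eq",
  "reference": "{{item.expected_answer}}"
}
```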
TextSimilarity extends TextSimilarityGrader { evaluation_metric, input, name, 2 more } { pass_threshold } A TextSimilarityGrader object which grades text based on similarity metrics.
The threshold for the score.
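A sketch of a text similarity grader built from the fields above. The type and evaluation_metric values shown are assumptions; consult the Evals guide for the exact type string and the list of supported metrics:

```json
{
  "type": "text_similarity",
  "name": "answer_similarity",
  "input": "{{sample.output_text}}",
  "reference": "{{item.expected_answer}}",
  "evaluation_metric": "fuzzy_match",
  "pass_threshold": 0.8
}
```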
A PythonGrader object that runs a python script on the input.
The threshold for the score.
A ScoreModelGrader object that uses a model to assign a score to the input.
The threshold for the score.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
The name of the evaluation.
Returns
EvalCreateResponse { id, created_at, data_source_config, 4 more } An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration. Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
- Check if o4-mini is better at my usecase than gpt-4o
Unique identifier for the evaluation.
The Unix timestamp (in seconds) for when the eval was created.
data_source_config: EvalCustomDataSourceConfig { schema, type } | Logs { schema, type, metadata } | EvalStoredCompletionsDataSourceConfig { schema, type, metadata } Configuration of data sources used in runs of the evaluation.
EvalCustomDataSourceConfig { schema, type } A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces.
The response schema defines the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
The JSON schema for the run data source items. Learn how to build JSON schemas here.
The type of data source. Always custom.
Logs { schema, type, metadata } A LogsDataSourceConfig which specifies the metadata property of your logs query.
This is usually metadata like usecase=chatbot or prompt-version=v2, etc.
The schema returned by this data source config is used to define what variables are available in your evals.
item and sample are both defined when using this data source config.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
The type of data source. Always logs.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
EvalStoredCompletionsDataSourceConfig { schema, type, metadata } Deprecated in favor of LogsDataSourceConfig.
The JSON schema for the run data source items. Learn how to build JSON schemas here.
The type of data source. Always stored_completions.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
The name of the evaluation.
The object type.
testing_criteria: Array<LabelModelGrader { input, labels, model, 3 more } | StringCheckGrader { input, name, operation, 2 more } | EvalGraderTextSimilarity { pass_threshold } | 2 more> A list of testing criteria.
LabelModelGrader { input, labels, model, 3 more } A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.
input: Array<Input>
Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.
ResponseInputText { text, type } A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
OutputText { text, type } A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
InputImage { image_url, type, detail } An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
ResponseInputAudio { input_audio, type } An audio input to the model.
input_audio: InputAudio { data, format }
Base64-encoded audio data.
format: "mp3" | "wav" The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
GraderInputs = Array<string | ResponseInputText { text, type } | OutputText { text, type } | 2 more> A list of inputs, each of which may be either an input text, output text, input image, or input audio object.
ResponseInputText { text, type } A text input to the model.
The text input to the model.
The type of the input item. Always input_text.
OutputText { text, type } A text output from the model.
The text output from the model.
The type of the output text. Always output_text.
InputImage { image_url, type, detail } An image input block used within EvalItem content arrays.
The URL of the image input.
The type of the image input. Always input_image.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
ResponseInputAudio { input_audio, type } An audio input to the model.
input_audio: InputAudio { data, format }
Base64-encoded audio data.
format: "mp3" | "wav" The format of the audio data. Currently supported formats are mp3 and wav.
The type of the input item. Always input_audio.
role: "user" | "assistant" | "system" | "developer" The role of the message input. One of user, assistant, system, or developer.
The type of the message input. Always message.
The labels to assign to each item in the evaluation.
The model to use for the evaluation. Must support structured outputs.
The name of the grader.
The labels that indicate a passing result. Must be a subset of labels.
The object type, which is always label_model.
StringCheckGrader { input, name, operation, 2 more } A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
The input text. This may include template strings.
The name of the grader.
operation: "eq" | "ne" | "like" | "ilike" The string check operation to perform. One of eq, ne, like, or ilike.
The reference text. This may include template strings.
The object type, which is always string_check.
EvalGraderTextSimilarity extends TextSimilarityGrader { evaluation_metric, input, name, 2 more } { pass_threshold } A TextSimilarityGrader object which grades text based on similarity metrics.
The threshold for the score.
A PythonGrader object that runs a python script on the input.
The threshold for the score.
A ScoreModelGrader object that uses a model to assign a score to the input.
The threshold for the score.
Create eval
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: process.env['OPENAI_API_KEY'], // This is the default and can be omitted
});
const _eval = await client.evals.create({
data_source_config: {
item_schema: { foo: 'bar' },
type: 'custom',
},
testing_criteria: [
{
input: [{ content: 'content', role: 'role' }],
labels: ['string'],
model: 'model',
name: 'name',
passing_labels: ['string'],
type: 'label_model',
},
],
});
console.log(_eval.id);

Returns Examples
{
"id": "id",
"created_at": 0,
"data_source_config": {
"schema": {
"foo": "bar"
},
"type": "custom"
},
"metadata": {
"foo": "string"
},
"name": "Chatbot effectiveness Evaluation",
"object": "eval",
"testing_criteria": [
{
"input": [
{
"content": "string",
"role": "user",
"type": "message"
}
],
"labels": [
"string"
],
"model": "model",
"name": "name",
"passing_labels": [
"string"
],
"type": "label_model"
}
]
}