
Create eval

EvalCreateResponse evals().create(EvalCreateParams params, RequestOptions requestOptions = RequestOptions.none())
POST /evals

Create the structure of an evaluation that can be used to test a model's performance. An evaluation is a set of testing criteria and the config for a data source, which dictates the schema of the data used in the evaluation. After creating an evaluation, you can run it on different models and model parameters. We support several types of graders and datasources. For more information, see the Evals guide.

Parameters
EvalCreateParams params
DataSourceConfig dataSourceConfig

The configuration for the data source used for the evaluation runs. Dictates the schema of the data used in the evaluation; a builder sketch follows the variant listing below.

class Custom:

A CustomDataSourceConfig object that defines the schema for the data source used for the evaluation runs. This schema is used to define the shape of the data that will be:

  • Used to define your testing criteria, and
  • Required when creating a run
ItemSchema itemSchema

The json schema for each row in the data source.

JsonValue; type "custom"constant"custom"constant

The type of data source. Always custom.

Optional<Boolean> includeSampleSchema

Whether the eval should expect you to populate the sample namespace (i.e., by generating responses from your data source).

class Logs:

A data source config which specifies the metadata property of your logs query. This is usually metadata like usecase=chatbot or prompt-version=v2, etc.

JsonValue; type "logs"constant"logs"constant

The type of data source. Always logs.

Optional<Metadata> metadata

Metadata filters for the logs data source.

class StoredCompletions:

Deprecated in favor of LogsDataSourceConfig.

JsonValue; type "stored_completions"constant"stored_completions"constant

The type of data source. Always stored_completions.

Optional<Metadata> metadata

Metadata filters for the stored completions data source.
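As a hedged illustration of the Custom config above (not an official recipe), the item schema can be supplied as free-form JSON properties, mirroring the builder calls in the full example at the end of this page. It assumes the same imports as that example plus java.util.Map and java.util.List, and that JsonValue.from converts Map and List literals; the question and expected fields are illustrative only.

// Item schema for a custom data source: each row supplies a question and an expected answer.
EvalCreateParams.DataSourceConfig.Custom.ItemSchema itemSchema =
    EvalCreateParams.DataSourceConfig.Custom.ItemSchema.builder()
        .putAdditionalProperty("type", JsonValue.from("object"))
        .putAdditionalProperty("properties", JsonValue.from(Map.of(
            "question", Map.of("type", "string"),
            "expected", Map.of("type", "string"))))
        .putAdditionalProperty("required", JsonValue.from(List.of("question", "expected")))
        .build();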

List<TestingCriterion> testingCriteria

A list of graders for all eval runs in this group. Graders can reference variables in the data source using double curly braces notation, like {{item.variable_name}}. To reference the model's output, use the sample namespace (i.e., {{sample.output_text}}); a sketch using this syntax follows the grader variants below.

class LabelModel:

A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.

List<Input> input

A list of chat messages forming the prompt or context. May include variable references to the item namespace, e.g. {{item.name}}.

Accepts one of the following:
class SimpleInputMessage:
String content

The content of the message.

String role

The role of the message (e.g. "system", "assistant", "user").

class EvalItem:

A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions.

Content content

Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

Accepts one of the following:
String
class ResponseInputText:

A text input to the model.

String text

The text input to the model.

JsonValue; type "input_text"constant"input_text"constant

The type of the input item. Always input_text.

class OutputText:

A text output from the model.

String text

The text output from the model.

JsonValue; type "output_text"constant"output_text"constant

The type of the output text. Always output_text.

class InputImage:

An image input block used within EvalItem content arrays.

String imageUrl

The URL of the image input.

JsonValue; type "input_image"constant"input_image"constant

The type of the image input. Always input_image.

Optional<String> detail

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio:

An audio input to the model.

InputAudio inputAudio
String data

Base64-encoded audio data.

Format format

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
MP3("mp3")
WAV("wav")
JsonValue; type "input_audio"constant"input_audio"constant

The type of the input item. Always input_audio.

Accepts one of the following:
String
class ResponseInputText:

A text input to the model.

String text

The text input to the model.

JsonValue; type "input_text"constant"input_text"constant

The type of the input item. Always input_text.

OutputText
String text

The text output from the model.

JsonValue; type "output_text"constant"output_text"constant

The type of the output text. Always output_text.

InputImage
String imageUrl

The URL of the image input.

JsonValue; type "input_image"constant"input_image"constant

The type of the image input. Always input_image.

Optional<String> detail

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio:

An audio input to the model.

InputAudio inputAudio
String data

Base64-encoded audio data.

Format format

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
MP3("mp3")
WAV("wav")
JsonValue; type "input_audio"constant"input_audio"constant

The type of the input item. Always input_audio.

Role role

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
USER("user")
ASSISTANT("assistant")
SYSTEM("system")
DEVELOPER("developer")
Optional<Type> type

The type of the message input. Always message.

List<String> labels

The labels to assign to each item in the evaluation.

String model

The model to use for the evaluation. Must support structured outputs.

String name

The name of the grader.

List<String> passingLabels

The labels that indicate a passing result. Must be a subset of labels.

JsonValue; type "label_model"constant"label_model"constant

The object type, which is always label_model.

class StringCheckGrader:

A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.

String input

The input text. This may include template strings.

String name

The name of the grader.

Operation operation

The string check operation to perform. One of eq, ne, like, or ilike.

Accepts one of the following:
EQ("eq")
NE("ne")
LIKE("like")
ILIKE("ilike")
String reference

The reference text. This may include template strings.

JsonValue; type "string_check"constant"string_check"constant

The object type, which is always string_check.

class TextSimilarity:

A TextSimilarityGrader object which grades text based on similarity metrics.

double passThreshold

The threshold for the score.

class Python:

A PythonGrader object that runs a python script on the input.

Optional<Double> passThreshold

The threshold for the score.

class ScoreModel:

A ScoreModelGrader object that uses a model to assign a score to the input.

Optional<Double> passThreshold

The threshold for the score.
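As a minimal sketch of the template syntax described above, assuming a data source whose items carry a question field and a run that populates the sample namespace, a label_model criterion could be built as follows. The builder calls mirror the full example at the end of this page; the variable names, labels, and model are illustrative.

// Grades each row by asking a model whether the sampled answer is correct.
EvalCreateParams.TestingCriterion.LabelModel grader =
    EvalCreateParams.TestingCriterion.LabelModel.builder()
        .addInput(EvalCreateParams.TestingCriterion.LabelModel.Input.SimpleInputMessage.builder()
            .content("Question: {{item.question}}\nAnswer: {{sample.output_text}}\nIs the answer correct?")
            .role("user")
            .build())
        .addLabel("correct")
        .addLabel("incorrect")
        .model("gpt-4o-mini") // illustrative; must support structured outputs
        .name("correctness")
        .addPassingLabel("correct")
        .build();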

Optional<Metadata> metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

Optional<String> name

The name of the evaluation.
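A hedged sketch of setting the optional fields on the same builder, reusing the itemSchema and grader fragments sketched earlier. It assumes the params builder exposes name(String) and metadata(Metadata) setters and that Metadata is the shared com.openai.models.Metadata type built via putAdditionalProperty, as elsewhere in this SDK; verify against your SDK version.

EvalCreateParams params = EvalCreateParams.builder()
    .customDataSourceConfig(itemSchema)   // item schema from the earlier sketch
    .addTestingCriterion(grader)          // label_model grader from the earlier sketch
    .name("Chatbot effectiveness Evaluation")
    .metadata(Metadata.builder()          // assumed shared Metadata type
        .putAdditionalProperty("usecase", JsonValue.from("chatbot"))
        .build())
    .build();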

Returns
class EvalCreateResponse:

An Eval object with a data source config and testing criteria. An Eval represents a task to be done for your LLM integration, such as:

  • Improve the quality of my chatbot
  • See how well my chatbot handles customer support
  • Check if o4-mini is better at my use case than gpt-4o
String id

Unique identifier for the evaluation.

long createdAt

The Unix timestamp (in seconds) for when the eval was created.

DataSourceConfig dataSourceConfig

Configuration of data sources used in runs of the evaluation.

Accepts one of the following:
class EvalCustomDataSourceConfig:

A CustomDataSourceConfig which specifies the schema of your item and optionally sample namespaces. The response schema defines the shape of the data that will be:

  • Used to define your testing criteria, and
  • Required when creating a run
Schema schema

The json schema for the run data source items. Learn how to build JSON schemas here.

JsonValue; type "custom"constant"custom"constant

The type of data source. Always custom.

class Logs:

A LogsDataSourceConfig which specifies the metadata property of your logs query. This is usually metadata like usecase=chatbot or prompt-version=v2, etc. The schema returned by this data source config is used to define which variables are available in your evals. item and sample are both defined when using this data source config.

Schema schema

The json schema for the run data source items. Learn how to build JSON schemas here.

JsonValue; type "logs"constant"logs"constant

The type of data source. Always logs.

Optional<Metadata> metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

class EvalStoredCompletionsDataSourceConfig:

Deprecated in favor of LogsDataSourceConfig.

Schema schema

The json schema for the run data source items. Learn how to build JSON schemas here.

JsonValue; type "stored_completions"constant"stored_completions"constant

The type of data source. Always stored_completions.

Optional<Metadata> metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

Optional<Metadata> metadata

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

String name

The name of the evaluation.

JsonValue; object_ "eval"constant"eval"constant

The object type.

List<TestingCriterion> testingCriteria

A list of testing criteria.

Accepts one of the following:
class LabelModelGrader:

A LabelModelGrader object which uses a model to assign labels to each item in the evaluation.

List<Input> input
Content content

Inputs to the model - can contain template strings. Supports text, output text, input images, and input audio, either as a single item or an array of items.

Accepts one of the following:
String
class ResponseInputText:

A text input to the model.

String text

The text input to the model.

JsonValue; type "input_text"constant"input_text"constant

The type of the input item. Always input_text.

class OutputText:

A text output from the model.

String text

The text output from the model.

JsonValue; type "output_text"constant"output_text"constant

The type of the output text. Always output_text.

class InputImage:

An image input block used within EvalItem content arrays.

String imageUrl

The URL of the image input.

JsonValue; type "input_image"constant"input_image"constant

The type of the image input. Always input_image.

Optional<String> detail

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio:

An audio input to the model.

InputAudio inputAudio
String data

Base64-encoded audio data.

Format format

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
MP3("mp3")
WAV("wav")
JsonValue; type "input_audio"constant"input_audio"constant

The type of the input item. Always input_audio.

Accepts one of the following:
String
class ResponseInputText:

A text input to the model.

String text

The text input to the model.

JsonValue; type "input_text"constant"input_text"constant

The type of the input item. Always input_text.

OutputText
String text

The text output from the model.

JsonValue; type "output_text"constant"output_text"constant

The type of the output text. Always output_text.

InputImage
String imageUrl

The URL of the image input.

JsonValue; type "input_image"constant"input_image"constant

The type of the image input. Always input_image.

Optional<String> detail

The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.

class ResponseInputAudio:

An audio input to the model.

InputAudio inputAudio
String data

Base64-encoded audio data.

Format format

The format of the audio data. Currently supported formats are mp3 and wav.

Accepts one of the following:
MP3("mp3")
WAV("wav")
JsonValue; type "input_audio"constant"input_audio"constant

The type of the input item. Always input_audio.

Role role

The role of the message input. One of user, assistant, system, or developer.

Accepts one of the following:
USER("user")
ASSISTANT("assistant")
SYSTEM("system")
DEVELOPER("developer")
Optional<Type> type

The type of the message input. Always message.

List<String> labels

The labels to assign to each item in the evaluation.

String model

The model to use for the evaluation. Must support structured outputs.

String name

The name of the grader.

List<String> passingLabels

The labels that indicate a passing result. Must be a subset of labels.

JsonValue; type "label_model"constant"label_model"constant

The object type, which is always label_model.

class StringCheckGrader:

A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.

String input

The input text. This may include template strings.

String name

The name of the grader.

Operation operation

The string check operation to perform. One of eq, ne, like, or ilike.

Accepts one of the following:
EQ("eq")
NE("ne")
LIKE("like")
ILIKE("ilike")
String reference

The reference text. This may include template strings.

JsonValue; type "string_check"constant"string_check"constant

The object type, which is always string_check.

class EvalGraderTextSimilarity:

A TextSimilarityGrader object which grades text based on similarity metrics.

double passThreshold

The threshold for the score.

class EvalGraderPython:

A PythonGrader object that runs a python script on the input.

Optional<Double> passThreshold

The threshold for the score.

class EvalGraderScoreModel:

A ScoreModelGrader object that uses a model to assign a score to the input.

Optional<Double> passThreshold

The threshold for the score.
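Once the call succeeds, fields of the returned EvalCreateResponse are read through accessor methods named after the fields listed above. A small hedged sketch, assuming the SDK's usual field-to-method naming convention:

EvalCreateResponse eval = client.evals().create(params);
System.out.println("Created eval " + eval.id() + " (" + eval.name() + ")");
System.out.println("created_at: " + eval.createdAt());
System.out.println("testing criteria: " + eval.testingCriteria().size());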

Create eval

package com.openai.example;

import com.openai.client.OpenAIClient;
import com.openai.client.okhttp.OpenAIOkHttpClient;
import com.openai.core.JsonValue;
import com.openai.models.evals.EvalCreateParams;
import com.openai.models.evals.EvalCreateResponse;

public final class Main {
    private Main() {}

    public static void main(String[] args) {
        OpenAIClient client = OpenAIOkHttpClient.fromEnv();

        EvalCreateParams params = EvalCreateParams.builder()
            .customDataSourceConfig(EvalCreateParams.DataSourceConfig.Custom.ItemSchema.builder()
                .putAdditionalProperty("foo", JsonValue.from("bar"))
                .build())
            .addTestingCriterion(EvalCreateParams.TestingCriterion.LabelModel.builder()
                .addInput(EvalCreateParams.TestingCriterion.LabelModel.Input.SimpleInputMessage.builder()
                    .content("content")
                    .role("role")
                    .build())
                .addLabel("string")
                .model("model")
                .name("name")
                .addPassingLabel("string")
                .build())
            .build();
        EvalCreateResponse eval = client.evals().create(params);
    }
}
Returns Examples
{
  "id": "id",
  "created_at": 0,
  "data_source_config": {
    "schema": {
      "foo": "bar"
    },
    "type": "custom"
  },
  "metadata": {
    "foo": "string"
  },
  "name": "Chatbot effectiveness Evaluation",
  "object": "eval",
  "testing_criteria": [
    {
      "input": [
        {
          "content": "string",
          "role": "user",
          "type": "message"
        }
      ],
      "labels": [
        "string"
      ],
      "model": "model",
      "name": "name",
      "passing_labels": [
        "string"
      ],
      "type": "label_model"
    }
  ]
}