All Classes and Interfaces

Class
Description
Configures temperature, topP, penalties and seed.
A stage to choose the desired Images-related action. The current version includes the following possibilities: creating images, editing images, and generating image variations.
A stage to choose a model supported by /v1/chat/completions model endpoint compatibility.
A stage to choose a model supported by /v1/chat/completions model endpoint compatibility.
A stage to choose a model supported by /v1/chat/completions model endpoint compatibility.
A stage to choose a model supported by /v1/audio/speech model endpoint compatibility.
A stage to choose a model supported by /v1/audio/transcriptions model endpoint compatibility.
A stage to choose a model supported by /v1/audio/translations model endpoint compatibility.
Reactive API to add a final instruction and proceed with the HTTP call.
Async context to choose between raw response or just a simple string answer.
Configures the target folder
Promise abstraction.
Promise abstraction.
Async promise to choose between raw response or just a simple string answer.
Registers callbacks
Registers callbacks and sends request
Registers callbacks and sends request
Sends calls to the OpenAI API with the already defined callbacks, and possibly with a prompt.
Registers text input
Registers user input optionally
Registers user input optionally
Promise abstraction.
Configures which audio file to transcribe
Configures which audio file to translate
Entrypoint for the Chat Completions API.
The interaction starts by configuring the HTTP connection,
either by providing a preconfigured ChatHttpExecutor HTTP client,
an HttpExecutorContext or an SdkAuth, or by leaving it to Chat.defaults().
Base for all Chat stages
Configures how the HTTP client should act.
A stage to choose the number of the images generated.
Explicit marker action type stage for flows where only creating is possible.
A stage to choose the dimensions of the image generated supported by dall-e-2
A stage to choose the dimensions of the image generated supported by dall-e-3
An internal executor used to handle file downloads
Utility to download files with random names to a given folder
Explicit marker action type stage for flows where only editing is possible.
A stage to specify the editable areas of the image.
A stage to choose the format of the response images
Configures the format of the output file
Configures the format of the output file
Use this for manually passing the credentials while still conforming to the SdkAuth contract.
When instantiated and FromEnvironment.credentials() is called, a series of environment variables will be scanned to provide the necessary ApiCredentials.
When instantiated and FromJson.credentials() is called, an already configured JSON file is scanned for credentials.
Default usage is to configure the HTTP connection and
serialization / deserialization methods using ObjectMapper.
Either inputs more images and goes back to ImageDetailStage or goes further to select a runtime.
Configures the image either as a file or as a Base64-encoded one
Base for all Image stages
Configures the DetailedAnalyze
Entrypoint for the Images API, supporting Creating, Editing, and Variating.
The interaction starts by configuring the HTTP connection,
either by providing a preconfigured HTTP client (CreateImageHttpExecutor, EditImageHttpExecutor or ImageVariationHttpExecutor, respectively),
an HttpExecutorContext or an SdkAuth, or by leaving it to Images.defaults().
Marker stage for finishing the configurations of the prompt.
Synchronous context to choose between raw response or just a simple string answer.
Marker Interface for all stages that are not TerminalStage
Tells the OpenAI API which language is spoken in the audio file.
Configures tokens, tools and accuracy.
Configures messages such as system and assistant messages.
Usually thrown by SdkAuth if integral parts of the authentication are missing.
Configures the output format
Sets the audio file for transcription, before all other autoconfiguration.
Sets the audio file for translation, before all other autoconfiguration.
Processing type.
Configures how the HTTP client should act.
Configures how the HTTP client should act.
A stage to choose the quality of the images generated.
Reactive API to add a final instruction and proceed with the HTTP call.
Reactive context to choose between raw response or just a simple string answer.
Configures the target folder
Reactive context to choose if the response image should be automatically downloaded or just delivered.
Sends calls to the OpenAI API in a reactive fashion
Registers text input
Registers text input optionally
Registers text input optionally
Marker interface for stages that send requests on behalf of a runtime - synchronously, asynchronously or reactive.
Defines all the ways the underlying runtime should act.
This interface provides an instance of ApiCredentials.
It can have multiple implementations according to the way the API key is provided to the application.
Easy as it is to implement, this library provides the following default implementations for immediate usage: FromDeveloper, FromEnvironment and FromJson. As this is a FunctionalInterface, it can also be implemented as a lambda (supplier).
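The lambda notation can be sketched as follows. Note that SdkAuth and ApiCredentials are reproduced here as minimal illustrative stand-ins, and the single-argument ApiCredentials constructor is an assumption; the library's real types may differ in shape.

```java
// Illustrative stand-ins only; the real SdkAuth and ApiCredentials
// types ship with the library and may carry additional members.
@FunctionalInterface
interface SdkAuth {
    ApiCredentials credentials();
}

// Assumed here to simply wrap the API key.
record ApiCredentials(String apiKey) { }

public class SdkAuthLambdaExample {
    public static void main(String[] args) {
        // Because SdkAuth is a FunctionalInterface, a lambda acts as a
        // supplier of ApiCredentials and satisfies the contract directly:
        SdkAuth auth = () -> new ApiCredentials("sk-your-key-here");
        System.out.println(auth.credentials().apiKey());
    }
}
```

This is the same contract the bundled FromDeveloper, FromEnvironment and FromJson implementations fulfill; a lambda is simply the most compact way to supply credentials inline.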
A simplified stage where some things such as AI Model Type and creativity are already configured.
A simplified stage where some things such as AI Model Type and creativity are already configured.
A simplified stage where some things such as AI Model Type and tokens are already configured.
Preconfigured stage with AI Model, speaker, format and speed
Simplifies configuration with an already configured AI Model, temperature, language and output format.
Simplifies configuration with an already configured AI Model, temperature and output format.
Entrypoint for the Speech API.
The interaction starts by configuring the HTTP connection,
either by providing a preconfigured SpeechHttpExecutor HTTP client,
an HttpExecutorContext or an SdkAuth, or by leaving it to Speech.defaults().
Base for all Speech stages
Configures how the HTTP client should act.
Configures the speed of the output audio file
A stage to choose the style of the images generated.
Synchronous API to add a final instruction and proceed with the HTTP call.
Sends blocking requests to the OpenAI API
Synchronous context to choose between raw response, just a simple string answer, or an image download
Synchronous context to choose between raw response or just a simple string answer.
Adds user input and proceeds to blocking execution
Sends blocking requests to the OpenAI API with a given user-supplied prompt
Configures the temperature, or in other words, the creativity of the model.
Configures the temperature, or in other words, the creativity of the model.
Configures the temperature, or in other words, the creativity of the model.
Stages which provide the ability to bypass further configurations and go to selecting a runtime.
Configures maxTokens, N(choices) and stop.
Configures maxTokens
Configures tools such as functions.
Entrypoint for the Transcription API.
The interaction starts by configuring the HTTP connection,
either by providing a preconfigured TranscriptionHttpExecutor HTTP client,
an HttpExecutorContext or an SdkAuth, or by leaving it to Transcription.defaults().
Base for all Transcription stages
Configures how the HTTP client should act.
Entrypoint for the Translation API.
The interaction starts by configuring the HTTP connection,
either by providing a preconfigured TranslationHttpExecutor HTTP client,
an HttpExecutorContext or an SdkAuth, or by leaving it to Translation.defaults().
Base for all Translation stages
Configures how the HTTP client should act.
Explicit marker action type stage for flows where only creating variations is possible.
Entrypoint for the Vision API.
The interaction starts by configuring the HTTP connection,
either by providing a preconfigured VisionHttpExecutor HTTP client,
an HttpExecutorContext or an SdkAuth, or by leaving it to Vision.defaults().
Configures which person's voice to use