Locate the API Specification: Determine whether the API specification is published at a known URL. For example, OpenAI's specification can be found at https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml.
Validate the Specification: Use https://editor.swagger.io/ to validate the OpenAPI spec.
Add to System: Navigate to packages/omni-server/extensions/omni-core-blocks/server/apis/[NAMESPACE]/[NAMESPACE].yaml. Replace [NAMESPACE] with your desired namespace (e.g., openai). Update the api section as shown below.
```yaml
namespace: openai
api:
  url: https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml
  basePath: https://api.openai.com/v1
  componentType: OAIComponent31
  ...
```
Generate Draft Spec: If the specification doesn't exist, use a tool such as "OpenAPI Spec Generator" or ask ChatGPT 4 to draft an initial spec.
Validate the Specification: Use https://editor.swagger.io/ to validate the OpenAPI spec.
Add to System: Go to packages/omni-server/extensions/omni-core-blocks/server/apis/[NAMESPACE]/api/[NAMESPACE].yaml. Replace [NAMESPACE] with your desired namespace (e.g., getimg). Update the api section as shown below.
```yaml
namespace: getimg
api:
  spec: ./api/getimg.yaml
  basePath: https://api.getimg.ai
title: getimg
```
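Taken together, the files for a locally stored spec would sit roughly like this (a sketch based on the paths above; the exact layout may differ in your checkout):

```
packages/omni-server/extensions/omni-core-blocks/server/apis/getimg/
├── getimg.yaml        # namespace definition containing the api section above
└── api/
    └── getimg.yaml    # the drafted OpenAPI spec referenced by spec: ./api/getimg.yaml
```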
We support the following authentication types: 'http_basic' | 'http_bearer' | 'apiKey' | 'oauth2'.
If auth is not defined globally in the original OpenAPI spec, you can patch it in the API yaml at /omni-core-blocks/server/apis/[NAMESPACE].yaml:
```yaml
namespace: elevenlabs
api:
  url: https://api.elevenlabs.io/openapi.json
  basePath: https://api.elevenlabs.io
  auth:
    type: apiKey
    requireKeys:
      - id: xi-api-key
        displayName: xi-api-key
        type: string
        in: header
title: elevenlabs
```
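The other auth types presumably follow the same pattern; the sketch below is a hypothetical http_bearer patch (the namespace, URLs, and key id are placeholders, and the exact fields the framework expects for bearer auth may differ):

```yaml
# Hypothetical sketch - not taken from a shipped namespace file.
namespace: myservice
api:
  url: https://api.example.com/openapi.json
  basePath: https://api.example.com
  auth:
    type: http_bearer
    requireKeys:
      - id: token            # placeholder key id
        displayName: API Token
        type: string
        in: header
title: myservice
```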
You can restrict which operations are imported as blocks by adding a filter listing the operationIds to keep, as in this OpenAI example:

```yaml
namespace: openai
api:
  url: https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml
  basePath: https://api.openai.com/v1
  componentType: OAIComponent31
  auth:
  filter:
    operationIds:
      - createChatCompletion
      - createCompletion
      - createImage
      - createModeration
      - createTranscription
      - createTranslation
      - createImageVariation
      - createImageEdit
      - createEdit
      - listModels
      - createEmbedding
title: openai
```
Block-level metadata (title, category, description, and source links) is declared alongside the mapping between the API operation and the display block:

```yaml
category: Text-to-Speech
description: >-
  Text to Speech Synthesis using the ElevenLabs API, supporting a variety of
  monolingual (english) and multilingual voices.
meta:
  source:
    title: 'ElevenLabs: Text To Speech'
    links:
      Website: https://beta.elevenlabs.io/speech-synthesis
      Subscription: https://beta.elevenlabs.io/subscription
      API Reference: https://docs.elevenlabs.io/api-reference/quick-start/introduction
      Documentation: https://docs.elevenlabs.io/welcome/introduction
      Voice Lab: https://beta.elevenlabs.io/voice-lab
    summary: >-
      Text to Speech Synthesis using the ElevenLabs API, supporting a variety of
      monolingual (english) and multilingual voices.
title: Text To Speech
apiNamespace: elevenlabs
apiOperationId: Text_to_speech_v1_text_to_speech__voice_id__post
displayNamespace: elevenlabs
displayOperationId: simpletts
```
The hideExcept scripts hide every input or output except those listed:

```yaml
scripts:
  hideExcept:inputs:
    - prompt
    - temperature
    - model
    - top_p
    - seed
    - max_tokens
    - instruction
    - images
  hideExcept:outputs:
    - text
```
Input and output transforms are also attached under scripts:

```yaml
scripts:
  transform:inputs:
  transform:outputs:
```

As is input hoisting:

```yaml
scripts:
  hoist:input
```
Custom controls, such as an image preview, can be added under controls:

```yaml
controls:
  preview:
    type: AlpineImageGalleryComponent
    displays: output:image
    opts:
      readonly: true
```
Individual properties can also be patched with fields such as placeholder, type, and customSocket. For example, to assign a custom socket to an input:

```yaml
image:
  customSocket: image
```
Base64 image socket option:

```yaml
socketOpts:
  format: base64
```

or, with the data-URL header included:

```yaml
socketOpts:
  format: base64_withHeader
```

Array:

```yaml
socketOpts:
  array: true
```

Allow multiple connections:

```yaml
allowMultiple: true
```
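Combining these options, a hypothetical patch for an input that accepts several base64-encoded images could look like this (the property name images is illustrative):

```yaml
images:
  customSocket: image    # use the image socket
  socketOpts:
    format: base64       # pass the data as base64
    array: true          # accept an array of values
  allowMultiple: true    # allow multiple incoming connections
```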
Inputs can be renamed and given dynamic choice lists. Here the n input gets a friendlier title, and the model choices are populated by calling another block:

```yaml
inputs:
  'n':
    title: Number of Images
  model:
    type: string
    customSocket: text
    choices:
      block: getimg.listModels
      cache: global
      args:
        pipeline: face-fix
        family: enhancements
      map:
        title: name
        value: id
```
A JSONata script can build a property from other inputs. Here the messages property is assembled from instruction and prompt, the source inputs are deleted, and the property itself is hidden:

```yaml
messages:
  scripts:
    jsonata: >-
      [{"role":"system", "content": $string(instruction) }, {"role":"user",
      "content": $string(prompt) }]
    delete:
      - prompt
      - instruction
  hidden: true
```
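For illustration, if instruction were "You are a helpful assistant." and prompt were "Summarize this page.", the JSONata above evaluates to:

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "Summarize this page." }
]
```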
securitySchemes
How to patch when the response doesn't have the property:

```yaml
outputs:
  _omni_result:
    hidden: true
```
When properties are mutually exclusive or depend on one another, a JSONata script can drop the ones that don't apply:

```yaml
image_strength:
  scripts:
    jsonata: >
      $exists(init_image_mode) and init_image_mode = "IMAGE_STRENGTH" ? image_strength : undefined
step_schedule_end:
  scripts:
    jsonata: >
      $exists(init_image_mode) and init_image_mode = "STEP_SCHEDULE" ? step_schedule_end : undefined
step_schedule_start:
  scripts:
    jsonata: >
      $exists(init_image_mode) and init_image_mode = "STEP_SCHEDULE" ? step_schedule_start : undefined
```
Property names are case-sensitive:

```yaml
Accept:
  hidden: true
  default: application/json
Organization:
  hidden: true
```
When the request content type is multipart/form-data, use the file socket type so the file carries its mimetype:

```yaml
image:
  customSocket: file
```