# Adding a New Block via OpenAPI Spec

## Importing an Existing OpenAPI Specification

  1. Locate the API Specification: Determine whether the API specification exists at a known URL. For example, OpenAI's specification can be found at https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml.

  2. Validate the Specification: Use https://editor.swagger.io/ to validate the OpenAPI spec.

  3. Add to System:

    • Navigate to packages/omni-server/extensions/omni-core-blocks/server/apis/[NAMESPACE]/[NAMESPACE].yaml.

    • Replace [NAMESPACE] with your desired namespace (e.g., openai).

    • Update the api section as below.

## Example Configuration for OpenAI

## Creating an OpenAPI Spec

  1. Generate a Draft Spec: If the specification doesn't exist, use a tool like "OpenAPI Spec Generator", or ask ChatGPT-4 to draft an initial spec.

  2. Validate the Specification: Use https://editor.swagger.io/ to validate the OpenAPI spec.

  3. Add to System:

    • Go to packages/omni-server/extensions/omni-core-blocks/server/apis/[NAMESPACE]/api/[NAMESPACE].yaml.

    • Replace [NAMESPACE] with your desired namespace (e.g., getimg).

    • Update the api section as below.

## Example Configuration for GetImg

## Configure API in YAML

## Patch Authentication

We support the following authentication types: 'http_basic', 'http_bearer', 'apiKey', and 'oauth2'.

If auth is not defined globally in the original OpenAPI spec, you can patch it into the API YAML at /omni-core-blocks/server/apis/[NAMESPACE].yaml.

## Filter APIs

## Configure Blocks in YAML

## Basic Metadata

## Filter Inputs/Outputs

## Transform

## Hoist?

## Control

## Input

Each input is configured through its `type:` and `customSocket:` fields.

## Socket Format Options

Base64 image socket option:

Array:

Allow multiple connections:

## Rename

## Run Block to Generate Choices

## Use JSONata to Manipulate the Format

## Patch Security Scheme

## Default Result

How do you patch the result when the response doesn't have the expected property?

## Special Cases

When properties are mutually exclusive or have dependencies.

Property names are case-sensitive.

When the request content type is multipart/form-data, the input needs the file type so the mimetype is carried.
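As an illustration of why the mimetype matters, each multipart/form-data part should declare its file's content type. A minimal Python sketch of deriving it, using only the standard library (the filenames are hypothetical):

```python
import mimetypes

def part_content_type(filename: str) -> str:
    # mimetypes.guess_type derives the content type from the extension.
    guessed, _encoding = mimetypes.guess_type(filename)
    # Fall back to a generic binary type when the extension is unknown.
    return guessed or "application/octet-stream"

print(part_content_type("avatar.png"))    # image/png
print(part_content_type("unknown.blob"))  # application/octet-stream
```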


Example configuration for OpenAI:

```yaml
namespace: openai
api:
  url: https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml
  basePath: https://api.openai.com/v1
  componentType: OAIComponent31
```
Example configuration for GetImg:

```yaml
namespace: getimg
api:
  spec: ./api/getimg.yaml
  basePath: https://api.getimg.ai
title: getimg
```
Patch Authentication example (ElevenLabs, apiKey in header):

```yaml
namespace: elevenlabs
api:
  url: https://api.elevenlabs.io/openapi.json
  basePath: https://api.elevenlabs.io
  auth:
    type: apiKey
    requireKeys:
      - id: xi-api-key
        displayName: xi-api-key
        type: string
        in: header
title: elevenlabs
```
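With an apiKey patch like the one above, every request carries the key as an HTTP header. A minimal Python sketch of the resulting request, using only the standard library (the key value and endpoint path are hypothetical, and no request is actually sent):

```python
import urllib.request

# The auth patch declares an apiKey named xi-api-key with "in: header",
# so the platform attaches it to each outgoing request roughly like this:
req = urllib.request.Request(
    "https://api.elevenlabs.io/v1/voices",  # illustrative endpoint path
    headers={"xi-api-key": "YOUR_API_KEY"},  # hypothetical key value
)

# urllib normalizes header names with str.capitalize()
print(req.get_header("Xi-api-key"))
```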
Filter APIs example (OpenAI, restricting the imported operations):

```yaml
namespace: openai
api:
  url: https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml
  basePath: https://api.openai.com/v1
  componentType: OAIComponent31
  auth:
filter:
  operationIds:
    - createChatCompletion
    - createCompletion
    - createImage
    - createModeration
    - createTranscription
    - createTranslation
    - createImageVariation
    - createImageEdit
    - createEdit
    - listModels
    - createEmbedding
title: openai
```
Basic Metadata example (ElevenLabs Text To Speech block):

```yaml
category: Text-to-Speech
description: >-
  Text to Speech Synthesis using the ElevenLabs API, supporting a variety of
  monolingual (english) and multilingual voices.
meta:
  source:
    title: 'ElevenLabs: Text To Speech'
    links:
      Website: https://beta.elevenlabs.io/speech-synthesis
      Subscription: https://beta.elevenlabs.io/subscription
      API Reference: https://docs.elevenlabs.io/api-reference/quick-start/introduction
      Documentation: https://docs.elevenlabs.io/welcome/introduction
      Voice Lab: https://beta.elevenlabs.io/voice-lab
    summary: >-
      Text to Speech Synthesis using the ElevenLabs API, supporting a variety of
      monolingual (english) and multilingual voices.
title: Text To Speech
apiNamespace: elevenlabs
apiOperationId: Text_to_speech_v1_text_to_speech__voice_id__post
displayNamespace: elevenlabs
displayOperationId: simpletts
```
Filter Inputs/Outputs example (hideExcept):

```yaml
scripts:
  hideExcept:inputs:
    - prompt
    - temperature
    - model
    - top_p
    - seed
    - max_tokens
    - instruction
    - images
  hideExcept:outputs:
    - text
```
Transform example:

```yaml
scripts:
  transform:inputs:
  transform:outputs:
```
Hoist example:

```yaml
scripts:
  hoist:input
```
Control example (preview gallery):

```yaml
controls:
  preview:
    type: AlpineImageGalleryComponent
    displays: output:image
    opts:
      readonly: true
```

placeholder?
Input example (custom socket):

```yaml
image:
  customSocket: image
```
Base64 image socket options:

```yaml
socketOpts:
  format: base64
```

```yaml
socketOpts:
  format: base64_withHeader
```
Array socket, allowing multiple connections:

```yaml
socketOpts:
  array: true
allowMultiple: true
```
Rename example:

```yaml
inputs:
  'n':
    title: Number of Images
```
Run Block to Generate Choices example:

```yaml
model:
  type: string
  customSocket: text
  choices:
    block: getimg.listModels
    cache: global
    args:
      pipeline: face-fix
      family: enhancements
    map:
      title: name
      value: id
```
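The `map` entry above turns each item returned by the block into a dropdown choice, picking `name` as the displayed title and `id` as the value. A minimal Python sketch of that mapping (the model list is made up for illustration):

```python
# Hypothetical result of running the getimg.listModels block.
models = [
    {"name": "Real Enhance v1", "id": "real-enhance-v1"},
    {"name": "Face Fix v1", "id": "face-fix-v1"},
]

# map: {title: name, value: id} — build dropdown choices from the result.
choices = [{"title": m["name"], "value": m["id"]} for m in models]

print(choices)
```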
JSONata transform example (build chat messages from prompt and instruction):

```yaml
messages:
  scripts:
    jsonata: >-
      [{"role":"system", "content": $string(instruction) }, {"role":"user",
      "content": $string(prompt) }]
    delete:
      - prompt
      - instruction
  hidden: true
```
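For reference, the JSONata expression above folds the `instruction` and `prompt` inputs into a chat `messages` array, and the `delete` entry then removes the consumed source fields. An equivalent Python sketch of the transform (field names taken from the YAML above):

```python
def build_messages(inputs: dict) -> dict:
    # Mirror of the JSONata: system message from instruction, user message from prompt.
    out = dict(inputs)
    out["messages"] = [
        {"role": "system", "content": str(out["instruction"])},
        {"role": "user", "content": str(out["prompt"])},
    ]
    # delete: the consumed source fields are removed afterwards.
    del out["prompt"]
    del out["instruction"]
    return out

print(build_messages({"instruction": "Be terse.", "prompt": "Hi"}))
```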
Override the spec's `securitySchemes` section.
Default Result example (hide the raw result output):

```yaml
outputs:
  _omni_result:
    hidden: true
```
Mutually exclusive / dependent properties example:

```yaml
image_strength:
  scripts:
    jsonata: >
      $exists(init_image_mode) and init_image_mode = "IMAGE_STRENGTH" ? image_strength : undefined
step_schedule_end:
  scripts:
    jsonata: >
      $exists(init_image_mode) and init_image_mode = "STEP_SCHEDULE" ? step_schedule_end : undefined
step_schedule_start:
  scripts:
    jsonata: >
      $exists(init_image_mode) and init_image_mode = "STEP_SCHEDULE" ? step_schedule_start : undefined
```
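Each expression above only forwards its value when `init_image_mode` selects it, which is how mutually exclusive properties are resolved. A Python sketch of the same guard for `image_strength` (`None` plays the role of JSONata's `undefined`):

```python
def resolve_image_strength(inputs: dict):
    # Mirrors: $exists(init_image_mode) and init_image_mode = "IMAGE_STRENGTH"
    #          ? image_strength : undefined
    if inputs.get("init_image_mode") == "IMAGE_STRENGTH":
        return inputs.get("image_strength")
    return None  # dropped from the request, like JSONata's undefined

print(resolve_image_strength({"init_image_mode": "IMAGE_STRENGTH", "image_strength": 0.35}))
print(resolve_image_strength({"init_image_mode": "STEP_SCHEDULE", "image_strength": 0.35}))
```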
Case-sensitive property names example:

```yaml
Accept:
  hidden: true
  default: application/json
Organization:
  hidden: true
```
Multipart file input example (use the file socket so the mimetype is carried):

```yaml
image:
  customSocket: file
```