Adding a New Block via OpenAPI Spec

Importing an Existing OpenAPI Specification

  1. Locate the API Specification: Determine if the API specification exists at a known URL. For example, OpenAI's specification can be found at https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml.

  2. Validate the Specification: Use https://editor.swagger.io/ to validate the OpenAPI spec.

  3. Add to System:

    • Navigate to packages/omni-server/extensions/omni-core-blocks/server/apis/[NAMESPACE]/[NAMESPACE].yaml.

    • Replace [NAMESPACE] with your desired namespace (e.g., openai).

    • Update the api section as shown below.

Example Configuration for OpenAI

namespace: openai
api:
  url: https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml
  basePath: https://api.openai.com/v1
  componentType: OAIComponent31
...

Creating an OpenAPI Spec

  1. Generate Draft Spec: If the specification doesn't exist, use tools like "OpenAPI Spec Generator" or request ChatGPT 4 to draft an initial spec.

  2. Validate the Specification: Use https://editor.swagger.io/ to validate the OpenAPI spec.

  3. Add to System:

    • Go to packages/omni-server/extensions/omni-core-blocks/server/apis/[NAMESPACE]/api/[NAMESPACE].yaml.

    • Replace [NAMESPACE] with your desired namespace (e.g., getimg).

    • Update the api section as shown below.

Example Configuration for GetImg

namespace: getimg
api:
  spec: ./api/getimg.yaml
  basePath: https://api.getimg.ai
title: getimg

Configure API in YAML

Patch Authentication

We support the following authentication types:

'http_basic' | 'http_bearer' | 'apiKey' | 'oauth2'

If auth is not defined globally in the original OpenAPI spec, you can patch it in the API YAML at /omni-core-blocks/server/apis/[NAMESPACE].yaml:

namespace: elevenlabs
api:
  url: https://api.elevenlabs.io/openapi.json
  basePath: https://api.elevenlabs.io
  auth:
    type: apiKey
    requireKeys:
      - id: xi-api-key
        displayName: xi-api-key
        type: string
        in: header
title: elevenlabs
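
The other supported auth types can be patched the same way. A minimal sketch for an http_bearer API (the namespace, URLs and key id are illustrative, and the requireKeys shape is an assumption carried over from the apiKey example above):

namespace: exampleapi
api:
  url: https://api.example.com/openapi.json
  basePath: https://api.example.com
  auth:
    type: http_bearer
    # illustrative key entry; shape assumed from the apiKey example
    requireKeys:
      - id: token
        displayName: API Token
        type: string
        in: header
title: exampleapi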

Filter APIs

To expose only specific operations from the spec as blocks, list their operationIds under the filter section:

namespace: openai
api:
  url: https://raw.githubusercontent.com/openai/openai-openapi/master/openapi.yaml
  basePath: https://api.openai.com/v1
  componentType: OAIComponent31
  auth:
  
filter:
  operationIds:
    - createChatCompletion
    - createCompletion
    - createImage
    - createModeration
    - createTranscription
    - createTranslation
    - createImageVariation
    - createImageEdit
    - createEdit
    - listModels
    - createEmbedding
title: openai

Configure Blocks in YAML

Basic Metadata

category: Text-to-Speech
description: >-
  Text to Speech Synthesis using the ElevenLabs API, supporting a variety of
  monolingual (english) and multilingual voices.
meta:
  source:
    title: 'ElevenLabs: Text To Speech'
    links:
      Website: https://beta.elevenlabs.io/speech-synthesis
      Subscription: https://beta.elevenlabs.io/subscription
      API Reference: https://docs.elevenlabs.io/api-reference/quick-start/introduction
      Documentation: https://docs.elevenlabs.io/welcome/introduction
      Voice Lab: https://beta.elevenlabs.io/voice-lab
    summary: >-
      Text to Speech Synthesis using the ElevenLabs API, supporting a variety of
      monolingual (english) and multilingual voices.
title: Text To Speech
apiNamespace: elevenlabs
apiOperationId: Text_to_speech_v1_text_to_speech__voice_id__post
displayNamespace: elevenlabs
displayOperationId: simpletts

Filter Inputs/Outputs

scripts:
  hideExcept:inputs:
    - prompt
    - temperature
    - model
    - top_p
    - seed
    - max_tokens
    - instruction
    - images
  hideExcept:outputs:
    - text

Transform

scripts:
  transform:inputs:
  transform:outputs:

Hoist?

scripts:
    hoist:input

Control

controls:
  preview:
    type: AlpineImageGalleryComponent
    displays: output:image
    opts:
      readonly: true
placeholder?

Input

type:

customSocket:

image:
    customSocket: image
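
As a sketch of how these keys fit together (the prompt input name is illustrative), each input is declared under the block's inputs map with a type and a customSocket, following the same shape as the choices example further down:

inputs:
  prompt:
    type: string
    customSocket: text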

Socket Format Options

Base64 image socket option:

socketOpts:
  format: base64

socketOpts:
  format: base64_withHeader

Array:

socketOpts:
  array: true
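
A combined sketch, assuming socketOpts sits alongside customSocket on the input it applies to (the image input name is illustrative):

image:
    customSocket: image
    socketOpts:
      format: base64
      array: true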

Allow Multiple Connections

allowMultiple: true
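
A sketch in context, assuming allowMultiple is set on the input next to its socket definition (the images input name is illustrative):

images:
    customSocket: image
    allowMultiple: true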

Rename

inputs:
  'n':
    title: Number of Images

Run a Block to Generate Choices

The choices for an input can be generated by running another block; map selects which fields of the result become the choice title and value:

  model:
    type: string
    customSocket: text
    choices:
      block: getimg.listModels
      cache: global
      args: 
        pipeline: face-fix
        family: enhancements
      map:
        title: name
        value: id

Use JSONata to Manipulate the Format

Here a hidden messages input is built from the prompt and instruction inputs, which are then deleted:

messages:
    scripts:
      jsonata: >-
        [{"role":"system", "content": $string(instruction) }, {"role":"user",
        "content": $string(prompt) }]
      delete:
        - prompt
        - instruction
    hidden: true

Patch Security Scheme

securitySchemes

Default Result

How to patch a block when the response doesn't have the expected property:

outputs:
  _omni_result:
    hidden: true

Special Cases

When properties are mutually exclusive or depend on each other:

  image_strength:
    scripts:
      jsonata: >
        $exists(init_image_mode) and init_image_mode = "IMAGE_STRENGTH" ? image_strength : undefined
  step_schedule_end:
    scripts:
      jsonata: >
        $exists(init_image_mode) and init_image_mode = "STEP_SCHEDULE" ? step_schedule_end : undefined
  step_schedule_start:
    scripts:
      jsonata: >
        $exists(init_image_mode) and init_image_mode = "STEP_SCHEDULE" ? step_schedule_start : undefined

Property names are case-sensitive

  Accept:
    hidden: true
    default: application/json
  Organization:
    hidden: true

When the request content type is multipart/form-data, use the file socket type so the mimetype is carried with the upload:

image:
    customSocket: file 
