
Overview

60db’s LM API provides intelligent chat completions with advanced text correction, streaming responses, and function calling support. It is backed by a Small Language Model (SLM) optimized for fast, efficient responses.

Features

  • OpenAI Compatible: Drop-in compatible with OpenAI’s chat completion format
  • Text Correction: Smart text correction with dictionary and style options
  • Real-time Streaming: Server-Sent Events for instant response streaming
  • Function Calling: Built-in tool/function calling support

Basic Usage

import { SixtyDBClient } from '60db';

const client = new SixtyDBClient('your-api-key');

const response = await client.chat.completions.create({
  model: '60db-tiny',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'How can I improve my English?' }
  ],
  stream: true
});

for await (const chunk of response) {
  console.log(chunk.choices[0]?.delta?.content);
}

Streaming Responses

Real-time streaming with Server-Sent Events:
const response = await fetch("https://api.60db.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Authentication is assumed to use the Bearer scheme, per OpenAI compatibility
    "Authorization": "Bearer your-api-key",
  },
  body: JSON.stringify({
    model: "60db-tiny",
    messages: [{ role: "user", content: "Tell me a story" }],
    stream: true,
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  // { stream: true } keeps multi-byte characters intact across chunk boundaries
  const chunk = decoder.decode(value, { stream: true });
  const lines = chunk.split("\n");

  for (const line of lines) {
    if (line.startsWith("data: ")) {
      const payload = line.slice(6).trim();
      if (payload === "[DONE]") continue; // end-of-stream sentinel, not JSON
      const data = JSON.parse(payload);
      if (data.choices?.[0]?.delta?.content) {
        console.log(data.choices[0].delta.content);
      }
    }
  }
}
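Note that a raw read() can end mid-line, so splitting each chunk on "\n" may drop or corrupt events that straddle chunk boundaries. A small line buffer (a sketch, not part of the 60db SDK) avoids this:

```javascript
// Minimal SSE line buffer: feed it raw decoded chunks and get back
// fully parsed "data:" payloads. Partial trailing lines are held
// until the next chunk completes them; "[DONE]" is skipped.
function createSSEBuffer() {
  let buffer = "";
  return function feed(chunk) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // last element may be an incomplete line
    const payloads = [];
    for (const line of lines) {
      if (line.startsWith("data: ")) {
        const data = line.slice(6).trim();
        if (data && data !== "[DONE]") payloads.push(JSON.parse(data));
      }
    }
    return payloads;
  };
}
```

In the loop above you would call `feed(decoder.decode(value, { stream: true }))` instead of splitting the chunk directly.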

Function Calling (Tools)

Define tools the model can use:
const response = await client.chat.completions.create({
  model: "60db-tiny",
  messages: [
    { role: "user", content: "What is the weather in San Francisco?" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",
        description: "Get the current weather for a location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: {
              type: "string",
              enum: ["celsius", "fahrenheit"],
            },
          },
          required: ["location"],
        },
      },
    },
  ],
});

Streaming Response (SSE)

data: {"chat_id": "550e8400-e29b-41d4-a716-446655440000", "type": "chat_id"}

data: {"id": "chatcmpl-123", "choices": [{"delta": {"content": "I'm"}}]}

data: {"id": "chatcmpl-123", "choices": [{"delta": {"content": " doing"}}]}

data: {"id": "chatcmpl-123", "choices": [{"delta": {"content": " well!"}}]}

data: {"type": "done", "response_time_ms": 1250}

data: [DONE]
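The stream mixes 60db-specific events (chat_id, done) with OpenAI-style delta chunks. A small helper (a sketch; the event shapes are taken from the example above) can classify each parsed payload before you act on it:

```javascript
// Classify a parsed SSE payload from the stream above.
// Returns { kind, value } so callers can switch on kind.
function classifyEvent(data) {
  if (data.type === "chat_id") return { kind: "chat_id", value: data.chat_id };
  if (data.type === "done") return { kind: "done", value: data.response_time_ms };
  const text = data.choices?.[0]?.delta?.content;
  if (typeof text === "string") return { kind: "delta", value: text };
  return { kind: "other", value: data };
}
```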

Application Contexts

Supported contexts for text correction:
Context     Description
email       Professional email communication
chat        Casual chat/messaging
document    Formal documents/reports
message     Short messages/notifications
social      Social media posts
const response = await client.chat.completions.create({
  text: "hey can u send me the report pls",
  appContext: "email",
  style: { tone: "professional", autoCapitalize: true },
});

// Output: "Hey, can you send me the report, please?"

Best Practices

  • Be specific in your system prompt
  • Provide examples for few-shot learning
  • Use appropriate tone for your use case
  • Test different prompts for optimal results
  • Use streaming for better UX
  • Handle connection errors gracefully
  • Buffer chunks for smooth display
  • Implement timeout handling
  • Cache common responses
  • Use shorter prompts when possible
  • Enable chat history for context
  • Monitor token usage
  • Use dictionary for domain-specific terms
  • Set appropriate app context
  • Test style options for your use case
  • Combine multiple style options
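For "Cache common responses", even a Map keyed on the prompt goes a long way. A minimal sketch, where fetchCompletion stands in for whatever function actually calls the API (a hypothetical name, not a 60db SDK method):

```javascript
// Tiny prompt cache: identical prompts within the TTL reuse the
// stored answer instead of re-calling the API.
function createResponseCache(fetchCompletion, ttlMs = 60_000) {
  const cache = new Map();
  return async function cached(prompt) {
    const hit = cache.get(prompt);
    if (hit && Date.now() - hit.at < ttlMs) return hit.value;
    const value = await fetchCompletion(prompt);
    cache.set(prompt, { at: Date.now(), value });
    return value;
  };
}
```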

Use Cases

Customer Support Chatbot

async function handleCustomerMessage(message, customerId) {
  const response = await client.chat.completions.create({
    model: "60db-tiny",
    messages: [
      {
        role: "system",
        content:
          "You are a helpful customer support assistant. Be friendly and professional.",
      },
      { role: "user", content: message },
    ],
    stream: true,
    save_chat: true,
    chat_id: customerId,
  });

  return response;
}

Text Correction Service

async function correctText(text, context = "email") {
  const response = await client.chat.completions.create({
    text: text,
    appContext: context,
    dictionary: commonTypos, // your map of frequent misspellings, defined elsewhere
    style: {
      tone: "professional",
      autoCapitalize: true,
      autoPunctuate: true,
    },
  });

  return response.choices[0].message.content;
}

AI Assistant with Tools

async function aiAssistant(userQuery) {
  const response = await client.chat.completions.create({
    messages: [{ role: "user", content: userQuery }],
    tools: [getWeatherTool, searchDatabaseTool, sendEmailTool], // tool definitions declared elsewhere
  });

  // Handle tool calls and return results
  return response;
}
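The "Handle tool calls" step above can be sketched as a dispatcher that maps tool names to local functions. This assumes OpenAI-style `tool_calls` on the returned message, consistent with the OpenAI-compatible format; `handlers` is a hypothetical name:

```javascript
// Dispatch OpenAI-style tool calls to local implementations.
// `handlers` maps tool name -> function(args) -> result.
// Returns `role: "tool"` messages ready to send back to the model.
async function runToolCalls(message, handlers) {
  const results = [];
  for (const call of message.tool_calls ?? []) {
    const fn = handlers[call.function.name];
    if (!fn) throw new Error(`No handler for tool: ${call.function.name}`);
    const args = JSON.parse(call.function.arguments);
    results.push({
      role: "tool",
      tool_call_id: call.id,
      content: JSON.stringify(await fn(args)),
    });
  }
  return results;
}
```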

Pricing

Charges are based on token usage:
Metric       Cost
Per Token    ~$0.00002
Token usage is tracked and billed to your workspace. Monitor usage via the Analytics API.
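At roughly $0.00002 per token, cost scales linearly with usage. A quick estimate helper (the rate is copied from the table above; check your plan for exact pricing):

```javascript
// Estimate cost in USD for a given token count at ~$0.00002/token.
function estimateCostUSD(tokens, perToken = 0.00002) {
  return tokens * perToken;
}
```

For example, one million tokens comes to about $20.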

API Reference

For detailed API documentation, see:

  • Chat: Complete API reference with all parameters and examples