
vLLM Backend

Connects to a local or remote vLLM backend through vLLM's OpenAI-compatible API.
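
Since the backend speaks this API, you can sanity-check a running vLLM server directly before wiring up the client. A minimal sketch (not part of @lmscript/client; the URL and model name match the example below):

// Direct request to vLLM's OpenAI-compatible completions endpoint.
const response = await fetch("http://localhost:8000/v1/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "TheBloke/Mistral-7B-Instruct-v0.2-AWQ",
    prompt: "Hello",
    max_tokens: 8,
  }),
});
console.log(await response.json());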

Import

import { LmScript } from "@lmscript/client";
import { VllmBackend } from "@lmscript/client/backends/vllm";

Optionally, set up usage tracking

let promptTokens = 0;
let completionTokens = 0;

// Called by the backend after each generation; accumulates token counts.
// `ReportUsage` is exported by @lmscript/client (its shape is sketched below).
const reportUsage: ReportUsage = (usage) => {
  promptTokens += usage.promptTokens;
  completionTokens += usage.completionTokens;
};
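
The ReportUsage type can be inferred from how the callback is used above. A rough sketch of it (the exact export path within @lmscript/client may differ):

// Inferred shape of the callback type, matching its use above.
type ReportUsage = (usage: {
  promptTokens: number;
  completionTokens: number;
}) => void;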

Instantiate

const backend = new VllmBackend({
  url: "http://localhost:8000",
  template: "mistral",
  reportUsage, // Optional
  model: "TheBloke/Mistral-7B-Instruct-v0.2-AWQ",
  auth: "YOUR_API_KEY", // Can be undefined if running the backend locally
});
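
For a remote deployment, the same options can be read from the environment. A sketch with illustrative variable names (VLLM_URL and VLLM_API_KEY are not defined by the library):

// Same configuration, sourced from environment variables.
// `auth` may stay undefined when the server runs locally.
const remoteBackend = new VllmBackend({
  url: process.env.VLLM_URL ?? "http://localhost:8000",
  template: "mistral",
  reportUsage, // Optional
  model: "TheBloke/Mistral-7B-Instruct-v0.2-AWQ",
  auth: process.env.VLLM_API_KEY,
});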

Use

const model = new LmScript(backend, { temperature: 0 });

const { captured, rawText } = await model
  .user("Tell me a joke.")
  .assistant((m) => m.gen("joke", { maxTokens: 128 }))
  .run();

The captured text is available in the captured object.

console.log(captured.joke);
` Why don't scientists trust atoms?

Because they make up everything!`
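
The keys of captured mirror the names passed to gen, so for this chat its type is roughly:

// One string field per `gen` capture in the chat above.
type Captured = { joke: string };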

The raw text is available in the rawText variable.

console.log(rawText);
`<s>[INST] Tell me a joke. [/INST] Why don't scientists trust atoms?

Because they make up everything!`

The promptTokens and completionTokens counters have been updated by the reportUsage callback.

console.log(promptTokens);
14
console.log(completionTokens);
17
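
Because reportUsage only accumulates, the counters keep growing across runs. To measure a single run, snapshot the totals first; a sketch continuing the example above:

// Snapshot the counters, run another chat, and report the difference.
const before = promptTokens + completionTokens;

const { captured: next } = await model
  .user("Tell me another joke.")
  .assistant((m) => m.gen("joke", { maxTokens: 128 }))
  .run();

console.log(next.joke);
console.log(`this run used ${promptTokens + completionTokens - before} tokens`);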