Service Providers

Service providers are client-side handlers that supply capabilities to code running in the server sandbox. They enable powerful patterns like LLM-powered execution, human-in-the-loop workflows, and semantic operations - all while keeping sensitive operations and credentials on the client side.

What are Service Providers?

Service providers are functions you implement on the client that can be called from sandbox code via the pause/resume mechanism. When sandbox code calls a service provider API (like llm.call() or approval.request()), execution pauses, the client handles the request, and execution resumes with the result.
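
For example, a minimal client could register a single llm provider and execute sandbox code that calls it. The echo-style handler below is a hypothetical sketch of the round trip, not a real LLM integration:

import { AgentToolProtocolClient } from '@mondaydotcomorg/atp-client';

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: {
      // Runs on the client; the sandbox only ever sees the returned value
      call: async (params) => ({ text: `You asked: ${params.prompt}` })
    }
  }
});

await client.init();
await client.connect();

// The sandbox pauses at llm.call(), the client answers, execution resumes
const result = await client.execute(`
  const { text } = await llm.call({ prompt: 'Hello from the sandbox' });
  return { text };
`);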

Available Service Providers

1. LLM Provider

Enables sandbox code to make LLM calls without exposing API keys to the server.

import { AgentToolProtocolClient } from '@mondaydotcomorg/atp-client';
import { ChatOpenAI } from '@langchain/openai';

const llm = new ChatOpenAI({
  model: 'gpt-4',
  apiKey: process.env.OPENAI_API_KEY // Stays on client
});

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: {
      // Basic text completion
      call: async (params) => {
        const response = await llm.invoke(params.prompt);
        return { text: response.content };
      },

      // Structured data extraction
      extract: async (params) => {
        const response = await llm.invoke(
          `Extract data from: ${params.text}\nSchema: ${JSON.stringify(params.schema)}`
        );
        return { data: JSON.parse(response.content) };
      },

      // Classification
      classify: async (params) => {
        const response = await llm.invoke(
          `Classify "${params.text}" into one of: ${params.categories.join(', ')}`
        );
        return { category: response.content.trim() };
      }
    }
  }
});

Usage in sandbox code:

// In executed code
const summary = await llm.call({
  prompt: 'Summarize this data: ' + JSON.stringify(data)
});

const extracted = await llm.extract({
  text: document,
  schema: { name: 'string', age: 'number' }
});

const category = await llm.classify({
  text: 'I love this product!',
  categories: ['positive', 'negative', 'neutral']
});

2. Approval Provider

Enables human-in-the-loop workflows for sensitive operations.

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    approval: {
      request: async (message, context) => {
        // Show confirmation dialog to user
        console.log('Approval requested:', message);
        console.log('Context:', context);

        // In a real app, this might show a UI dialog
        const answer = await promptUser(message);

        return {
          approved: answer === 'yes',
          message: answer === 'yes' ? 'Approved' : 'Denied by user'
        };
      }
    }
  }
});

Usage in sandbox code:

// Check before destructive action
const users = await api.users.listInactiveUsers();

const { approved } = await approval.request(
  `Delete ${users.length} inactive users?`,
  { userIds: users.map(u => u.id) }
);

if (approved) {
  await api.users.bulkDelete({ userIds: users.map(u => u.id) });
  return { deleted: users.length };
}

return { cancelled: true };

3. Embedding Provider

Enables semantic search and similarity operations.

import { OpenAIEmbeddings } from '@langchain/openai';

const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY
});

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    embedding: {
      // Generate embedding vector
      embed: async (text) => {
        const vector = await embeddings.embedQuery(text);
        return vector; // Array of numbers
      },

      // Optional: Calculate similarity
      similarity: async (text1, text2) => {
        const vec1 = await embeddings.embedQuery(text1);
        const vec2 = await embeddings.embedQuery(text2);

        // Cosine similarity
        const dotProduct = vec1.reduce((sum, val, i) => sum + val * vec2[i], 0);
        const mag1 = Math.sqrt(vec1.reduce((sum, val) => sum + val * val, 0));
        const mag2 = Math.sqrt(vec2.reduce((sum, val) => sum + val * val, 0));

        return dotProduct / (mag1 * mag2);
      }
    }
  }
});

Usage in sandbox code:

// Generate embedding for semantic search
const query = 'machine learning algorithms';
const queryEmbedding = await embedding.embed(query);

// Find similar documents
const docs = await api.database.getDocuments();
const scored = await Promise.all(
  docs.map(async (doc) => ({
    doc,
    score: await embedding.similarity(query, doc.content)
  }))
);

const bestMatches = scored
  .sort((a, b) => b.score - a.score)
  .slice(0, 5);

Combined Example

Here's a complete example using all service providers:

import { AgentToolProtocolClient } from '@mondaydotcomorg/atp-client';
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';

const llm = new ChatOpenAI({ model: 'gpt-4' });
const embeddings = new OpenAIEmbeddings();

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: {
      call: async ({ prompt }) => {
        const response = await llm.invoke(prompt);
        return { text: response.content };
      },
      extract: async ({ text, schema }) => {
        const response = await llm.invoke(
          `Extract from: ${text}\nSchema: ${JSON.stringify(schema)}`
        );
        return { data: JSON.parse(response.content) };
      },
      classify: async ({ text, categories }) => {
        const response = await llm.invoke(
          `Classify "${text}" into: ${categories.join(', ')}`
        );
        return { category: response.content.trim() };
      }
    },

    approval: {
      request: async (message, context) => {
        const confirmed = await showDialog({
          title: 'Approval Required',
          message,
          buttons: ['Approve', 'Deny']
        });
        return { approved: confirmed === 'Approve' };
      }
    },

    embedding: {
      embed: async (text) => {
        return await embeddings.embedQuery(text);
      }
    },

    tools: [
      // Client tools (see Client Tools guide)
      localFileTool,
      browserTool
    ]
  }
});

await client.init();
await client.connect();

// Now execute code that uses all providers
const result = await client.execute(`
  // 1. Use LLM to analyze data
  const data = await api.database.getRecords();
  const analysis = await llm.call({
    prompt: \`Analyze: \${JSON.stringify(data)}\`
  });

  // 2. Use embeddings for semantic search
  const query = 'machine learning';
  const queryVec = await embedding.embed(query);
  const similar = await api.database.vectorSearch({ vector: queryVec });

  // 3. Use client tool to save results
  await api.fs.writeFile({
    path: '/tmp/results.json',
    content: JSON.stringify({ analysis, similar })
  });

  // 4. Request approval for action
  const { approved } = await approval.request(
    'Publish these results?',
    { recordCount: similar.length }
  );

  if (approved) {
    await api.publishing.publish({ data: similar });
    return { published: true };
  }

  return { saved: true, published: false };
`);

Advanced Patterns

Caching LLM Responses

const cache = new Map();

const client = new AgentToolProtocolClient({
  serviceProviders: {
    llm: {
      call: async (params) => {
        const cacheKey = JSON.stringify(params);

        if (cache.has(cacheKey)) {
          console.log('Cache hit');
          return cache.get(cacheKey);
        }

        const response = await llm.invoke(params.prompt);
        const result = { text: response.content };

        cache.set(cacheKey, result);
        return result;
      }
    }
  }
});

Streaming LLM Responses

const client = new AgentToolProtocolClient({
  serviceProviders: {
    llm: {
      call: async (params) => {
        let fullText = '';

        const stream = await llm.stream(params.prompt);

        for await (const chunk of stream) {
          fullText += chunk.content;
          // Optionally report progress
          if (params.onProgress) {
            params.onProgress(fullText);
          }
        }

        return { text: fullText };
      }
    }
  }
});

Multi-Approver Approval

const client = new AgentToolProtocolClient({
  serviceProviders: {
    approval: {
      request: async (message, context) => {
        // Require multiple approvals for high-risk operations
        if (context.risk === 'high') {
          const approvals = await Promise.all([
            getUserApproval(message),
            getManagerApproval(message),
            getSecurityApproval(message)
          ]);

          const allApproved = approvals.every(a => a.approved);
          return {
            approved: allApproved,
            approvers: approvals.map(a => a.approver)
          };
        }

        // Single approval for normal operations
        return await getUserApproval(message);
      }
    }
  }
});

Embeddings with Batching

const client = new AgentToolProtocolClient({
  serviceProviders: {
    embedding: {
      embed: async (text) => {
        // Handle both single text and arrays
        if (Array.isArray(text)) {
          // Batch processing
          return await embeddings.embedDocuments(text);
        }

        return await embeddings.embedQuery(text);
      }
    }
  }
});

Error Handling

Service providers should handle errors gracefully:

const client = new AgentToolProtocolClient({
  serviceProviders: {
    llm: {
      call: async (params) => {
        try {
          const response = await llm.invoke(params.prompt);
          return { text: response.content };
        } catch (error) {
          // Return error information instead of throwing
          return {
            error: error.message,
            text: 'An error occurred during LLM processing'
          };
        }
      }
    },

    approval: {
      request: async (message, context) => {
        try {
          const result = await promptUser(message);
          return { approved: result === 'yes' };
        } catch (error) {
          // Default to denial on error
          return {
            approved: false,
            error: 'Approval request failed'
          };
        }
      }
    }
  }
});

Best Practices

✅ Do's

  1. Keep credentials client-side - Never send API keys to the server
  2. Handle errors gracefully - Return error objects, don't throw
  3. Add timeouts - Set reasonable timeouts for external calls (see the sketch after this list)
  4. Cache when appropriate - Cache LLM responses and embeddings
  5. Validate inputs - Check parameters before processing
  6. Log operations - Track service provider calls for debugging
  7. Use async/await - All providers should be async
  8. Provide feedback - Use progress callbacks for long operations
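
As a sketch of points 3 and 5 above, a provider can validate its input and wrap the external call in a timeout; the withTimeout helper and the 30-second limit below are illustrative assumptions, not part of the client library:

import { AgentToolProtocolClient } from '@mondaydotcomorg/atp-client';
import { ChatOpenAI } from '@langchain/openai';

const llm = new ChatOpenAI({ model: 'gpt-4' });

// Illustrative helper (not part of the client library): reject if a promise
// takes longer than `ms` milliseconds
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
    )
  ]);

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: {
      call: async (params) => {
        // Validate input before calling the model
        if (typeof params?.prompt !== 'string' || params.prompt.length === 0) {
          return { error: 'prompt must be a non-empty string' };
        }
        try {
          const response = await withTimeout(llm.invoke(params.prompt), 30_000);
          return { text: response.content };
        } catch (error) {
          // Covers both model errors and timeouts
          return { error: error.message };
        }
      }
    }
  }
});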

❌ Don'ts

  1. Don't expose secrets - Keep API keys and credentials secure
  2. Don't block indefinitely - Always set timeouts
  3. Don't ignore errors - Handle all error cases
  4. Don't make synchronous calls - Use async operations
  5. Don't return huge payloads - Limit response sizes (see the sketch after this list)
  6. Don't skip validation - Always validate inputs
  7. Don't forget cleanup - Close connections when done
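
For point 5, one option is to truncate oversized responses before they reach the sandbox; the size limit below is an arbitrary example:

import { AgentToolProtocolClient } from '@mondaydotcomorg/atp-client';
import { ChatOpenAI } from '@langchain/openai';

const llm = new ChatOpenAI({ model: 'gpt-4' });
const MAX_CHARS = 10_000; // Arbitrary example limit

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: {
      call: async (params) => {
        const response = await llm.invoke(params.prompt);
        const text = String(response.content);

        // Truncate oversized responses instead of shipping them to the sandbox
        return {
          text: text.slice(0, MAX_CHARS),
          truncated: text.length > MAX_CHARS
        };
      }
    }
  }
});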

Integration with LangChain

Service providers integrate seamlessly with LangChain:

import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';

const llm = new ChatOpenAI({ model: 'gpt-4' });
const embeddings = new OpenAIEmbeddings();

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: {
      call: async ({ prompt, ...options }) => {
        const response = await llm.invoke(prompt, options);
        return { text: response.content };
      }
    },
    embedding: {
      embed: async (text) => {
        return await embeddings.embedQuery(text);
      }
    }
  }
});

Summary

Service providers enable:

  • Client-side LLM calls - Keep API keys secure
  • Human-in-the-loop - Request approvals for sensitive operations
  • Semantic operations - Generate embeddings and calculate similarity
  • Flexible integration - Works with any LLM/embedding provider
  • Seamless execution - Pause/resume handles complexity

Service providers are a powerful way to extend sandbox capabilities while maintaining security and control!
