# Client Overview

The Agent Tool Protocol (ATP) client is your application's interface to the ATP server. It handles communication, authentication, and execution management, and provides client tools, service providers, and API discovery.
## What is the ATP Client?

The ATP client is a TypeScript/JavaScript library that:

- Connects to ATP servers - Establishes secure, authenticated connections
- Executes code in sandboxes - Sends code for secure server-side execution
- Discovers APIs - Searches and explores available server APIs
- Provides services - Offers LLM, approval, and embedding capabilities to sandbox code
- Registers client tools - Makes local functions available to sandbox code
- Manages execution state - Handles pause/resume for callbacks
- Handles authentication - Manages tokens and session lifecycle
## Client Architecture

## Core Capabilities
### 1. Code Execution

Execute TypeScript/JavaScript code in secure server-side sandboxes:

```typescript
import { AgentToolProtocolClient } from '@mondaydotcomorg/atp-client';

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333'
});

await client.init();

// Execute code
const result = await client.execute(`
  const data = await api.users.getUser({ userId: '123' });
  return { username: data.username };
`);

console.log(result.result); // { username: 'john' }
```
### 2. API Discovery

Search and explore available APIs:

```typescript
// Semantic search for APIs
const apis = await client.searchAPI('get user information', {
  limit: 5
});

// Explore API structure
const userAPI = await client.exploreAPI('users');
console.log(userAPI.functions); // List of available functions

// Get type definitions
const types = await client.getTypeDefinitions();
```
### 3. Service Providers

Provide capabilities that sandbox code can access:

```typescript
import { ChatOpenAI } from '@langchain/openai';

const llm = new ChatOpenAI({ model: 'gpt-4' });

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    // LLM provider
    llm: {
      call: async (params) => {
        const response = await llm.invoke(params.prompt);
        return { text: response.content };
      }
    },
    // Approval provider
    approval: {
      request: async (message, context) => {
        const answer = await askUser(message);
        return { approved: answer === 'yes' };
      }
    },
    // Embedding provider
    embedding: {
      embed: async (text) => {
        return await embeddings.embedQuery(text);
      }
    }
  }
});
```
### 4. Client Tools

Register tools that execute locally on your machine:

```typescript
import { ToolOperationType } from '@mondaydotcomorg/atp-protocol';
import * as fs from 'fs/promises';

const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    tools: [
      {
        name: 'readFile',
        namespace: 'fs',
        description: 'Read a local file',
        inputSchema: {
          type: 'object',
          properties: {
            path: { type: 'string' }
          },
          required: ['path']
        },
        metadata: {
          operationType: ToolOperationType.READ,
        },
        handler: async (input) => {
          return await fs.readFile(input.path, 'utf-8');
        }
      }
    ]
  }
});
```
## Client Lifecycle

### Initialization Flow

### Session Management
The client automatically manages:
- Token generation - Server issues JWT tokens on init
- Token rotation - Automatic renewal before expiration
- Session tracking - Unique client ID per session
- Connection retry - Automatic reconnection on failure
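
Because rotation and retry are automatic, most applications never handle token expiry themselves. If a long-lived process does surface an `AUTH_ERROR` (the code the client uses for authentication failures), one option is to re-initialize and retry once. A sketch under that assumption; `withAuthRetry` is a hypothetical helper, not part of the client API:

```typescript
// Sketch: retry an operation once after re-initializing on auth failure.
// `withAuthRetry` is illustrative, not part of the ATP client API.
type AuthAware = { init(): Promise<void> };

async function withAuthRetry<T>(
  client: AuthAware,
  op: () => Promise<T>
): Promise<T> {
  try {
    return await op();
  } catch (error: any) {
    // 'AUTH_ERROR' is the code the client reports for auth failures.
    if (error?.code === 'AUTH_ERROR') {
      await client.init(); // re-establish the session, then retry once
      return await op();
    }
    throw error;
  }
}
```

Usage: `const result = await withAuthRetry(client, () => client.execute(code));`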
### Execution States

When executing code, the client tracks the execution through its states: running, paused (while waiting on a client-side callback such as an approval or tool call), and completed or failed.
## Configuration Options

### Basic Configuration

```typescript
const client = new AgentToolProtocolClient({
  // Required: Server URL
  baseUrl: 'http://localhost:3333',

  // Optional: Custom headers
  headers: {
    'X-API-Key': 'your-api-key'
  },

  // Optional: Service providers
  serviceProviders: {
    llm: llmHandler,
    approval: approvalHandler,
    embedding: embeddingHandler,
    tools: clientTools,
  },

  // Optional: Lifecycle hooks
  hooks: {
    onInit: async (clientId) => {
      console.log('Client initialized:', clientId);
    },
    onConnect: async (apis) => {
      console.log('Connected with APIs:', apis.length);
    },
    onExecute: async (executionId) => {
      console.log('Execution started:', executionId);
    },
    onComplete: async (executionId, result) => {
      console.log('Execution completed:', executionId);
    },
    onError: async (error) => {
      console.error('Client error:', error);
    }
  }
});
```
### Execution Options

```typescript
const result = await client.execute(code, {
  // Timeout in milliseconds
  timeout: 30000,

  // Provenance tracking mode
  provenanceMode: 'proxy', // 'none' | 'track' | 'proxy'

  // Security policies
  securityPolicies: [preventDataExfiltration],

  // Cache key for result caching
  cacheKey: 'unique-operation-key',

  // Execution context
  context: {
    userId: '123',
    requestId: 'req-456'
  }
});
```
## Common Patterns

### Pattern 1: Agent with Client Tools

```typescript
const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: llmHandler,
    tools: [localFileTool, browserTool]
  }
});

await client.init();
await client.connect();

// Agent can now use both server APIs and client tools
const result = await client.execute(`
  // Server API
  const userData = await api.users.getUser({ userId: '123' });

  // Client tool
  const localData = await api.fs.readFile({
    path: '/tmp/user-prefs.json'
  });

  return { ...userData, ...JSON.parse(localData) };
`);
```
### Pattern 2: LLM-Powered Execution

```typescript
const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    llm: {
      call: async ({ prompt }) => {
        const response = await openai.chat.completions.create({
          model: 'gpt-4',
          messages: [{ role: 'user', content: prompt }]
        });
        return { text: response.choices[0].message.content };
      }
    }
  }
});

// Code can now use LLM
const result = await client.execute(`
  const data = await api.database.getRecords();

  // Ask LLM to analyze
  const analysis = await llm.call({
    prompt: \`Analyze this data: \${JSON.stringify(data)}\`
  });

  return analysis.text;
`);
```
### Pattern 3: Human-in-the-Loop

```typescript
const client = new AgentToolProtocolClient({
  baseUrl: 'http://localhost:3333',
  serviceProviders: {
    approval: {
      request: async (message, context) => {
        // Show dialog to user
        const confirmed = await showConfirmDialog(message);
        return { approved: confirmed };
      }
    }
  }
});

const result = await client.execute(`
  const users = await api.users.listInactiveUsers();

  // Request approval before destructive action
  const { approved } = await approval.request(
    \`Delete \${users.length} inactive users?\`
  );

  if (approved) {
    await api.users.bulkDelete({ userIds: users.map(u => u.id) });
    return { deleted: users.length };
  }

  return { cancelled: true };
`);
```
## Error Handling

The client provides structured error handling:

```typescript
try {
  const result = await client.execute(code);
  console.log('Success:', result.result);
} catch (error) {
  if (error.code === 'EXECUTION_TIMEOUT') {
    console.error('Execution timed out');
  } else if (error.code === 'VALIDATION_ERROR') {
    console.error('Code validation failed:', error.details);
  } else if (error.code === 'RUNTIME_ERROR') {
    console.error('Runtime error:', error.message);
    console.error('Stack trace:', error.stack);
  } else if (error.code === 'AUTH_ERROR') {
    console.error('Authentication failed');
  } else {
    console.error('Unknown error:', error);
  }
}
```
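
If the same dispatch is needed in several places, the chain of checks can be collapsed into a lookup. A sketch using the error codes above; `summarizeAtpError` and the message strings are illustrative, not part of the client API:

```typescript
// Map ATP error codes to short human-readable summaries (illustrative).
const ERROR_SUMMARIES: Record<string, string> = {
  EXECUTION_TIMEOUT: 'Execution timed out',
  VALIDATION_ERROR: 'Code validation failed',
  RUNTIME_ERROR: 'Runtime error in sandbox code',
  AUTH_ERROR: 'Authentication failed',
};

function summarizeAtpError(error: { code?: string; message?: string }): string {
  return (
    ERROR_SUMMARIES[error.code ?? ''] ??
    `Unknown error: ${error.message ?? String(error)}`
  );
}
```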
## Best Practices

### Do's

- Initialize once - Create one client instance and reuse it
- Handle errors - Always catch and handle execution errors
- Use timeouts - Set appropriate execution timeouts
- Provide services - Implement LLM/approval providers for rich interactions
- Register tools early - Define client tools during initialization
- Monitor state - Use hooks to track execution lifecycle
- Clean up - Close connections when done
### Don'ts

- Don't create multiple clients - Reuse the same instance
- Don't ignore errors - Handle all error cases
- Don't skip initialization - Always call `init()` before operations
- Don't hardcode credentials - Use environment variables
- Don't block the event loop - Use async/await properly
- Don't forget timeouts - Always set execution limits
- Don't expose sensitive data - Be careful with logging
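
For the credentials point in particular, configuration can be read from the environment instead of hardcoded. A minimal sketch; the `ATP_BASE_URL` and `ATP_API_KEY` variable names are illustrative, not a convention the client defines:

```typescript
// Sketch: build client options from environment variables instead of
// hardcoding them. Variable names here are illustrative.
function configFromEnv(env: Record<string, string | undefined>) {
  const baseUrl = env.ATP_BASE_URL ?? 'http://localhost:3333';
  const headers = env.ATP_API_KEY
    ? { 'X-API-Key': env.ATP_API_KEY }
    : undefined;
  return { baseUrl, headers };
}

// const client = new AgentToolProtocolClient(configFromEnv(process.env));
```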
## Next Steps

Explore specific client capabilities:

- Client Tools - Execute local operations from sandbox code
- Service Providers - Provide LLM, approval, and embedding services
- API Discovery - Search and explore available APIs

## Learn More

- Architecture Overview - Understand the full system
- Pause/Resume Mechanism - How callbacks work
- LangChain Integration - Use with LangChain agents