
5 Essential Patterns for Production-Ready AI Workflows

By Tiago

As AI becomes a core component of modern applications, developers face new challenges in building reliable, scalable workflows. Through our work with AI developers, we've identified five essential patterns that can make the difference between a prototype and a production-ready AI application.

1. The Chain of Responsibility Pattern

One of the most powerful patterns in AI workflows is the chain of responsibility, where each step in the process handles a specific task and passes results to the next step.

```typescript
// Instead of this
async function processChatbot(input: string) {
  const context = await fetchContext(input);
  const aiResponse = await generateResponse(context, input);
  const formattedResponse = formatOutput(aiResponse);
  return formattedResponse;
}

// Use this pattern
class WorkflowStep {
  constructor(public nextStep: WorkflowStep | null = null) {}

  async process(input: any): Promise<any> {
    const result = await this.execute(input);
    return this.nextStep ? this.nextStep.process(result) : result;
  }

  // Abstract method to be implemented by subclasses
  async execute(input: any): Promise<any> {
    throw new Error("Execute method not implemented");
  }
}

interface ChatInput {
  text: string;
  context?: any; // Define a more specific type for context if possible
}

class ContextEnricher extends WorkflowStep {
  async execute(input: ChatInput): Promise<ChatInput> {
    const context = await fetchContext(input.text); // Assuming fetchContext takes the text
    return { ...input, context };
  }
}

class AIGenerator extends WorkflowStep {
  async execute(input: ChatInput): Promise<any> {
    // Define a more specific return type
    if (!input.context || !input.text) {
      throw new Error("Missing context or text for AI generation");
    }
    // Assuming generateResponse takes context and text
    return await generateResponse(input.context, input.text);
  }
}

// Placeholder functions for demonstration
declare function fetchContext(text: string): Promise<any>;
declare function generateResponse(context: any, text: string): Promise<any>;
declare function formatOutput(response: any): any;
```

This pattern enables:

  • Easy addition of new processing steps
  • Better error handling at each stage
  • Clear separation of concerns
  • Simpler testing and debugging
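To see the chain in action, here is a self-contained sketch of the same structure with stubbed `fetchContext` and `generateResponse` services (the stub return values are illustrative, not a real API). The chain is wired by passing each step's successor to its constructor:

```typescript
// Minimal, runnable sketch of the chain-of-responsibility workflow.
// The base step forwards each result to its successor, if any.
class WorkflowStep {
  constructor(public nextStep: WorkflowStep | null = null) {}
  async process(input: any): Promise<any> {
    const result = await this.execute(input);
    return this.nextStep ? this.nextStep.process(result) : result;
  }
  async execute(input: any): Promise<any> {
    throw new Error("Execute method not implemented");
  }
}

// Stubbed services standing in for real context retrieval and AI calls
const fetchContext = async (text: string) => `context for "${text}"`;
const generateResponse = async (context: string, text: string) =>
  `answer(${text} | ${context})`;

class ContextEnricher extends WorkflowStep {
  async execute(input: { text: string }) {
    return { ...input, context: await fetchContext(input.text) };
  }
}

class AIGenerator extends WorkflowStep {
  async execute(input: { text: string; context: string }) {
    return generateResponse(input.context, input.text);
  }
}

// Wire the chain front to back: enrich context first, then generate
const workflow = new ContextEnricher(new AIGenerator());
```

Adding a new stage (say, an output formatter) is then just another subclass inserted into the constructor chain, with no changes to existing steps.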

2. The Retry with Exponential Backoff Pattern

AI services can be unpredictable. Implementing proper retry logic is crucial for reliability.

```typescript
async function withRetry<T>(
  operation: () => Promise<T>,
  maxAttempts: number = 3
): Promise<T> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt === maxAttempts) throw error;
      // Exponential backoff with jitter, starting at 2^0 and capped at 10s
      const backoff = Math.min(1000 * Math.pow(2, attempt - 1), 10000);
      const jitter = Math.random() * 1000;
      console.log(
        `Attempt ${attempt} failed. Retrying in ${(backoff + jitter).toFixed(0)}ms...`
      );
      await new Promise((resolve) => setTimeout(resolve, backoff + jitter));
    }
  }
  // Unreachable (the final attempt rethrows), but satisfies the return type
  throw new Error("Max retry attempts reached without success.");
}

// Placeholder AI service call
declare const aiService: { generate: (prompt: string) => Promise<any> };
declare const prompt: string;

// Usage in AI workflows
async function exampleUsage() {
  try {
    const response = await withRetry(() => aiService.generate(prompt));
    console.log("Success:", response);
  } catch (error) {
    console.error("Operation failed after multiple retries:", error);
  }
}
```

Key considerations:

  • Implement proper backoff intervals
  • Add jitter to prevent thundering herd
  • Set appropriate timeout limits
  • Handle different types of errors differently (e.g., don't retry on 4xx errors)
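The last point deserves a sketch: a client error (4xx) usually means the request itself is bad and retrying just wastes quota, while a server error (5xx) or network failure is worth retrying. The `HttpError` shape below is an assumption for illustration; adapt the predicate to whatever errors your HTTP client actually throws:

```typescript
// Sketch: retry only errors classified as transient.
// HttpError is a hypothetical error shape carrying an HTTP status code.
class HttpError extends Error {
  constructor(public status: number) {
    super(`HTTP ${status}`);
  }
}

// Treat 5xx (and unknown errors) as retryable; fail fast on 4xx
const isRetryable = (error: unknown): boolean =>
  error instanceof HttpError ? error.status >= 500 : true;

async function withSelectiveRetry<T>(
  operation: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt === maxAttempts || !isRetryable(error)) throw error;
      // Short exponential backoff (no jitter, for brevity)
      await new Promise((r) => setTimeout(r, 100 * 2 ** (attempt - 1)));
    }
  }
}
```

With this in place, a rate-limit response (429) could also be special-cased to honor a `Retry-After` header rather than the default backoff.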

3. The Results Cache Pattern

AI operations are often expensive and time-consuming. Intelligent caching can significantly improve performance and reduce costs.

```typescript
interface CacheEntry<T> {
  value: T;
  timestamp: number;
}

class AICache<T> {
  private cache = new Map<string, CacheEntry<T>>();
  private ttl: number; // Time-to-live in milliseconds

  constructor(ttl: number = 3600 * 1000) {
    // Default TTL: 1 hour
    this.ttl = ttl;
  }

  async get(key: string, generator: () => Promise<T>): Promise<T> {
    const cached = this.cache.get(key);
    if (cached && Date.now() - cached.timestamp < this.ttl) {
      console.log(`Cache hit for key: ${key}`);
      return cached.value;
    }
    console.log(`Cache miss for key: ${key}. Generating value...`);
    const value = await generator();
    this.cache.set(key, { value, timestamp: Date.now() });
    // Optional: add a cache eviction strategy if memory is a concern
    return value;
  }

  // Optional: manually invalidate a cache entry
  invalidate(key: string): void {
    this.cache.delete(key);
    console.log(`Cache invalidated for key: ${key}`);
  }
}

// Placeholder functions/variables
declare function promptHash(prompt: string): string; // Generates a unique key from the prompt
declare const prompt: string;
declare const aiService: { generate: (prompt: string) => Promise<any> };

// Usage
async function exampleCacheUsage() {
  const cache = new AICache<any>(); // Specify the expected type of the cached value
  const key = promptHash(prompt); // Generate a cache key based on the prompt
  const result = await cache.get(key, () => aiService.generate(prompt));
  console.log("Result:", result);
}
```

Consider:

  • Cache invalidation strategies
  • Storage options (in-memory vs. distributed cache like Redis)
  • Cache key design (ensure uniqueness and relevance)
  • TTL policies (balance freshness vs. performance gains)
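On cache key design: the snippet above leaves `promptHash` as a placeholder. One common approach (an assumption, not the only option) is to hash the normalized prompt together with every parameter that affects the output, so that different model settings never collide on the same key. A sketch using Node's built-in `crypto` module:

```typescript
import { createHash } from "crypto"; // Node.js built-in

// Hypothetical cache-key helper: digest the normalized prompt plus the
// generation parameters (model, temperature) that change the output.
function promptHash(
  prompt: string,
  model = "default",
  temperature = 0
): string {
  const normalized = prompt.trim().toLowerCase();
  return createHash("sha256")
    .update(`${model}|${temperature}|${normalized}`)
    .digest("hex");
}
```

Normalizing before hashing means trivially different prompts ("Hello " vs. "hello") share one cache entry; whether that is safe depends on how sensitive your model's output is to whitespace and casing.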

4. The Fallback Chain Pattern

AI services can fail or become unavailable. Having fallback options ensures your application remains functional.

```typescript
interface AIService {
  generate(input: any): Promise<any>;
  // Optional: an identifier for logging purposes
  serviceId?: string;
}

class FallbackChain {
  constructor(private services: AIService[]) {
    if (!services || services.length === 0) {
      throw new Error("FallbackChain requires at least one service.");
    }
  }

  async execute(input: any): Promise<any> {
    for (let i = 0; i < this.services.length; i++) {
      const service = this.services[i];
      const serviceId = service.serviceId || `Service ${i + 1}`;
      try {
        console.log(`Attempting ${serviceId}...`);
        return await service.generate(input);
      } catch (error: any) {
        // Catch specific error types if possible
        console.error(`${serviceId} failed: ${error.message}`);
        if (i === this.services.length - 1) {
          // Last service failed, rethrow the error
          console.error("All fallback services failed.");
          throw error;
        }
        // Otherwise, continue to the next service in the chain
      }
    }
    // Unreachable if the constructor guarantees at least one service
    throw new Error("No services available in the fallback chain.");
  }
}

// Placeholder services
declare const primaryAIService: AIService;
declare const backupAIService: AIService;
declare const fallbackRuleEngine: AIService; // Can be a simpler rule-based system

// Usage
async function exampleFallbackUsage(input: any) {
  const chain = new FallbackChain([
    { ...primaryAIService, serviceId: "Primary AI" },
    { ...backupAIService, serviceId: "Backup AI" },
    { ...fallbackRuleEngine, serviceId: "Fallback Rules" },
  ]);
  try {
    const result = await chain.execute(input);
    console.log("Fallback chain succeeded:", result);
  } catch (error) {
    console.error("Fallback chain failed:", error);
    // Handle the final failure (e.g., return a default response or error message)
  }
}
```

Benefits:

  • Improved reliability and availability
  • Cost optimization opportunities (use cheaper services as fallbacks)
  • Graceful degradation of service
  • Better user experience during service outages

5. The Result Validator Pattern

AI outputs need validation to ensure they meet your application's requirements.

```typescript
interface ValidationResult {
  success: boolean;
  error?: string; // Optional error message if validation fails
}

type ValidatorFunction = (
  result: any
) => Promise<ValidationResult> | ValidationResult;

class ResultValidator {
  constructor(private validators: ValidatorFunction[]) {}

  async validate(result: any): Promise<{ valid: boolean; issues: string[] }> {
    const issues: string[] = [];
    for (const validator of this.validators) {
      try {
        // Promise.resolve handles both sync and async validators
        const validation = await Promise.resolve(validator(result));
        if (!validation.success) {
          issues.push(
            validation.error || "Validation failed without specific error message."
          );
        }
      } catch (error: any) {
        console.error("Error during validation:", error);
        issues.push(`Validator threw an exception: ${error.message}`);
      }
    }
    return { valid: issues.length === 0, issues };
  }
}

// Placeholder validation logic (async, to match the usage below)
declare function containsSensitiveInfo(data: any): Promise<boolean>;

// Example usage
async function exampleValidationUsage(aiOutput: any) {
  const validator = new ResultValidator([
    // Example: check response length
    (result) => ({
      success: typeof result === "string" && result.length <= 1000,
      error: "Response too long (max 1000 characters).",
    }),
    // Example: check for sensitive information (an async check)
    async (result) => {
      const hasSensitive = await containsSensitiveInfo(result);
      return {
        success: !hasSensitive,
        error: "Response contains sensitive information.",
      };
    },
    // Example: check JSON format (if applicable)
    (result) => {
      try {
        JSON.parse(result); // Assuming result should be a JSON string
        return { success: true };
      } catch (e) {
        return { success: false, error: "Invalid JSON format." };
      }
    },
  ]);

  const validationResult = await validator.validate(aiOutput);
  if (validationResult.valid) {
    console.log("AI output passed validation.");
    // Proceed with using the validated result
  } else {
    console.warn("AI output failed validation:");
    validationResult.issues.forEach((issue) => console.warn(`- ${issue}`));
    // Handle invalid output (e.g., request regeneration, return error, use default)
  }
}
```

Important aspects:

  • Input validation (before sending to AI)
  • Output sanitization (cleaning up the AI response)
  • Content safety checks (toxicity, bias, etc.)
  • Format verification (JSON, specific structure, etc.)
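Input validation is worth the same rigor as output validation: rejecting a bad prompt before the call saves tokens and latency. A minimal pre-flight check (the limits here are illustrative assumptions, not recommended values):

```typescript
// Sketch of a pre-flight input check run before any AI call.
// maxChars is an illustrative limit; tune it to your model's context window.
function validatePromptInput(
  text: string,
  maxChars = 4000
): { ok: boolean; reason?: string } {
  if (text.trim().length === 0) {
    return { ok: false, reason: "Empty prompt" };
  }
  if (text.length > maxChars) {
    return { ok: false, reason: `Prompt exceeds ${maxChars} characters` };
  }
  return { ok: true };
}
```

In practice this is also where prompt-injection screening and PII stripping would slot in, before the request ever leaves your service.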

Implementing These Patterns

While these patterns are powerful, implementing them properly requires careful consideration of:

  • Error handling strategies across different patterns
  • Comprehensive monitoring and logging
  • Performance implications (e.g., cache overhead, retry delays)
  • Robust testing approaches for each component

The key is to find the right balance between reliability, complexity, and performance for your specific use case.
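These patterns also compose naturally: a cache can wrap a retry, which wraps a fallback chain. The sketch below is a deliberately simplified, self-contained illustration (stub services, no backoff or TTL) of how the layers nest, not a production implementation:

```typescript
// Sketch: composing cache + retry + fallback into one call path.
type Generate = (input: string) => Promise<string>;

// Bare-bones retry (no backoff, for brevity)
const withRetry = async <T>(op: () => Promise<T>, attempts = 3): Promise<T> => {
  for (let i = 1; ; i++) {
    try {
      return await op();
    } catch (e) {
      if (i === attempts) throw e;
    }
  }
};

// Try each service in order; rethrow the last error if all fail
const withFallback = (services: Generate[]): Generate => async (input) => {
  let lastError: unknown;
  for (const service of services) {
    try {
      return await service(input);
    } catch (e) {
      lastError = e;
    }
  }
  throw lastError;
};

// Simple in-memory cache keyed on the raw input (no TTL, for brevity)
const cache = new Map<string, string>();
const cached = (inner: Generate): Generate => async (input) => {
  const hit = cache.get(input);
  if (hit !== undefined) return hit;
  const value = await inner(input);
  cache.set(input, value);
  return value;
};

// Stub services: the primary always fails, the backup answers
const flaky: Generate = async () => {
  throw new Error("primary down");
};
const backup: Generate = async (input) => `backup(${input})`;

// Outermost first: check cache, then retry, then fall back
const generate = cached((input) =>
  withRetry(() => withFallback([flaky, backup])(input))
);
```

The ordering matters: putting the cache outermost means a hit skips retries and fallbacks entirely, while putting retry around the whole fallback chain (rather than around each service) retries the chain as a unit.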

Simplifying Implementation

While understanding these patterns is valuable, implementing them properly requires significant engineering resources. Waveloom's SDK provides these patterns out of the box:

  • Chain of Responsibility → Built into our visual workflow builder
  • Retry Management → Automatic handling with configurable policies
  • Caching → Integrated caching layer with smart invalidation
  • Fallbacks → Simple service switching and redundancy
  • Validation → Pre-built validators and custom rules support

This means you can focus on building your AI features instead of implementing infrastructure patterns.

Looking Ahead

As AI applications become more complex, these patterns will evolve, and new ones will emerge. The most successful teams will be those that can efficiently implement and adapt these patterns while focusing on their core business logic and delivering value to users.

Apply These Insights

Turn knowledge into action. Start building powerful AI workflows with Waveloom's intuitive platform.