This example demonstrates advanced AI data processing with Upstash Workflow: the workflow downloads a large dataset, processes it in chunks using OpenAI's GPT-4 model, aggregates the results, and generates a report.
Note that we use context.call for the download: a way to make HTTP requests that can run far longer than your serverless execution limit would normally allow.
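A minimal sketch of that download step is shown below. The `WorkflowContext` interface here is a simplified stand-in for the real Upstash SDK type, and the dataset URL and row shape are assumptions for illustration:

```typescript
// Simplified stand-in for the part of the Upstash Workflow context we use.
type CallResult<TBody> = { status: number; body: TBody };

interface WorkflowContext {
  call<TBody>(
    stepName: string,
    opts: { url: string; method: "GET" | "POST" }
  ): Promise<CallResult<TBody>>;
}

type DatasetRow = Record<string, unknown>;

async function downloadDataset(
  context: WorkflowContext,
  url: string
): Promise<DatasetRow[]> {
  // context.call issues the HTTP request from Upstash's side, so a
  // potentially hours-long download does not count against this
  // endpoint's own execution time.
  const { status, body } = await context.call<DatasetRow[]>(
    "download-dataset",
    { url, method: "GET" }
  );
  if (status !== 200) {
    throw new Error(`Dataset download failed with status ${status}`);
  }
  return body;
}
```

Because the request runs outside the endpoint, the function only resumes once the response is available.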
We split the dataset into chunks and process each one using OpenAI’s GPT-4 model:
```typescript
for (let i = 0; i < chunks.length; i++) {
  const { body: processedChunk } = await context.api.openai.call<OpenAiResponse>(
    `process-chunk-${i}`,
    {
      token: process.env.OPENAI_API_KEY!,
      operation: "chat.completions.create",
      body: {
        model: "gpt-4",
        messages: [
          {
            role: "system",
            content:
              "You are an AI assistant tasked with analyzing data chunks. Provide a brief summary and key insights for the given data.",
          },
          {
            role: "user",
            content: `Analyze this data chunk: ${JSON.stringify(chunks[i])}`,
          },
        ],
        max_completion_tokens: 150,
      },
    }
  );
}
```
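The `chunks` array used in the loop can be produced with a plain slicing helper; the chunk size of 100 rows below is an arbitrary illustration, not a value from the example:

```typescript
// Split an array into fixed-size chunks; the last chunk may be smaller.
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// e.g. const chunks = chunkArray(dataset, 100);
```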
Non-blocking HTTP Calls: We use context.call for API requests so they don’t consume the endpoint’s execution time (great for optimizing serverless cost).
Long-running tasks: The dataset download can take up to 2 hours, though in practice it is limited by function memory.
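The aggregation and report step mentioned at the top can be sketched as a pure function over the collected per-chunk summaries. The `OpenAiResponse` shape and the summary extraction below are assumptions for illustration; the real workflow might instead feed the summaries back to the model for a final synthesis pass:

```typescript
// Simplified stand-in for the relevant part of an OpenAI chat completion.
interface OpenAiResponse {
  choices: { message: { content: string } }[];
}

// Combine per-chunk analyses into a single plain-text report.
function buildReport(chunkResponses: OpenAiResponse[]): string {
  const summaries = chunkResponses.map(
    (res, i) =>
      `Chunk ${i + 1}: ${res.choices[0]?.message.content ?? "(no summary)"}`
  );
  return `Processed ${chunkResponses.length} chunks\n${summaries.join("\n")}`;
}
```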