📝 Module 8 Quiz
Module 08 — Langfuse in Production: SandSync Case Study
Answer all questions. You need 70% to pass.
1. SandSync's trace waterfall showed Ogma and fal.ai running sequentially after Anansi. What type of problem is this, and how would you fix it?
A. A network problem — fix by choosing a closer Fly.io region
B. A code architecture problem — fix by running Ogma review and fal.ai image generation in parallel after Anansi completes
C. An API rate limit problem — fix by adding retry logic to both services
D. A database problem — fix by adding an index to the story_chapters table
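The parallel fan-out named in option B can be sketched with `asyncio.gather`. This is a minimal illustration, not SandSync's actual code: the coroutine names and sleep delays are placeholders standing in for the real Anansi, Ogma, and fal.ai calls.

```python
import asyncio

async def run_anansi() -> str:
    # Placeholder for the Anansi chapter-drafting call (illustrative only)
    await asyncio.sleep(0.01)
    return "draft"

async def run_ogma(draft: str) -> str:
    # Placeholder for the Ogma review call
    await asyncio.sleep(0.01)
    return f"review:{draft}"

async def run_fal(draft: str) -> str:
    # Placeholder for the fal.ai image-generation call
    await asyncio.sleep(0.01)
    return f"image:{draft}"

async def pipeline() -> tuple[str, str]:
    # Anansi must finish first: both downstream steps need its draft.
    draft = await run_anansi()
    # Ogma and fal.ai depend only on Anansi, not on each other,
    # so they can run concurrently instead of sequentially.
    review, image = await asyncio.gather(run_ogma(draft), run_fal(draft))
    return review, image

if __name__ == "__main__":
    print(asyncio.run(pipeline()))
```

With this shape, the trace waterfall shows the two downstream spans overlapping after the Anansi span, rather than stacking end to end.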
2. Ogma's latency was bimodal: sometimes 2s, sometimes 8s. The root cause turned out to be provider fallback. What instrumentation change revealed this?
A. Adding Prometheus metrics for each provider's response time
B. Adding the `ogma.provider` attribute to each Ogma span, allowing filtering by resolved provider in Langfuse
C. Enabling debug logging in the Groq SDK
D. Adding a health check that pings Groq and Anthropic every 30 seconds
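The instrumentation pattern in option B — tagging each span with the *resolved* provider after fallback — can be sketched without any SDK. In real code the attribute would be attached through the Langfuse client rather than a plain dict; the `span` dict, client functions, and prompt below are all hypothetical stand-ins.

```python
import time

def call_with_fallback(prompt, providers, span):
    """Try providers in order; record which one actually served the request."""
    for name, client in providers:
        try:
            start = time.monotonic()
            result = client(prompt)
            # The key change: tag the span with the provider that actually
            # answered, so slow traces can be filtered by it afterwards.
            span["ogma.provider"] = name
            span["latency_s"] = time.monotonic() - start
            return result
        except Exception:
            continue  # primary failed; fall through to the next provider
    raise RuntimeError("all providers failed")

# Simulated clients: the primary times out, forcing fallback.
def groq(prompt):
    raise TimeoutError("primary unavailable")

def anthropic(prompt):
    return f"ok:{prompt}"

span = {}
call_with_fallback("review chapter", [("groq", groq), ("anthropic", anthropic)], span)
print(span["ogma.provider"])
```

Once every span carries this attribute, grouping latency by `ogma.provider` separates the fast primary-provider calls from the slow fallback calls, which is what exposes a bimodal distribution.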
3. Langfuse's token distribution showed Anansi had an input/output ratio of 1.8:1, unusually high for a creative writing task. What did this reveal?
A. Claude Haiku was being inefficient with its context window
B. The story context was growing too large across chapters, exceeding the expected length
C. A 450-token stale system prompt with redundant instructions was bloating every Anansi call by ~36%
D. The tokeniser was double-counting tokens in the Langfuse SDK
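The numbers in option C can be sanity-checked with back-of-envelope arithmetic. Only the 450 tokens and the ~36% figure come from the question; the baseline and output figures below are derived from them for illustration.

```python
stale_prompt_tokens = 450
bloat_fraction = 0.36  # "~36%" from the question

# If a 450-token prompt inflates each call's input by ~36%,
# the non-prompt input per call works out to roughly:
baseline_input = stale_prompt_tokens / bloat_fraction   # 1250 tokens
total_input = baseline_input + stale_prompt_tokens      # 1700 tokens

# At the observed 1.8:1 input/output ratio, output per call is about:
output_tokens = total_input / 1.8

print(round(baseline_input), round(total_input), round(output_tokens))
```

Removing the stale prompt would bring the input side back toward the baseline, pulling the input/output ratio down to the range expected for a creative-writing task.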