Batch jobs kept retailing alive for decades: overnight fare loads, hourly availability syncs, daily revenue reports. Event-driven retailing keeps the same discipline but removes the lag that leaves data stale. Rather than rewriting everything, add streaming step by step.
Map the triggers
List every situation where a human currently hits “refresh.” Those are your event candidates (a sketch of such a trigger catalog follows this list):
- Inventory changes: aircraft swaps, schedule adjustments, overbooking controls.
- Pricing changes: filed fare updates, continuous pricing curve edits, tax table updates.
- Customer actions: order creation, payment capture, voluntary changes, disruption acceptance.
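As a rough sketch, the trigger map can start life as a simple catalog in code so that engineers and commercial teams review the same list. The event names and source systems below are illustrative assumptions, not a prescribed taxonomy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EventCandidate:
    name: str           # proposed event type
    source_system: str  # system that first observes the change
    trigger: str        # the situation where someone hits "refresh" today

# Hypothetical trigger map; names and systems are placeholders for your own.
TRIGGER_MAP = [
    EventCandidate("inventory.aircraft_swapped", "schedule-management", "aircraft swap"),
    EventCandidate("inventory.schedule_adjusted", "schedule-management", "schedule adjustment"),
    EventCandidate("pricing.fare_filed", "fare-management", "filed fare update"),
    EventCandidate("pricing.curve_edited", "continuous-pricing", "pricing curve edit"),
    EventCandidate("customer.order_created", "order-management", "order creation"),
    EventCandidate("customer.payment_captured", "payments", "payment capture"),
]
```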
Design event shapes carefully
Each event should declare what changed, the scope, and how long the data remains valid. Include correlation IDs so downstream services can relate the event to the requests they process. Favor additive schemas: when you extend them, add new fields instead of changing or removing existing ones, so current consumers keep working.
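Here is a minimal sketch of such an envelope, assuming JSON on the wire; the helper name build_event and field names such as valid_until, scope, and schema_version are illustrative choices, not a standard.

```python
import json
import uuid
from datetime import datetime, timedelta, timezone
from typing import Optional

def build_event(event_type: str, scope: dict, payload: dict,
                valid_for_seconds: int, correlation_id: Optional[str] = None) -> str:
    """Wrap a change in an envelope: what changed, its scope, validity, and correlation ID."""
    now = datetime.now(timezone.utc)
    envelope = {
        "event_id": str(uuid.uuid4()),
        "event_type": event_type,                 # what changed
        "schema_version": 1,                      # bump only for additive changes
        "occurred_at": now.isoformat(),
        "valid_until": (now + timedelta(seconds=valid_for_seconds)).isoformat(),
        "scope": scope,                           # e.g. market, flight, or order
        "correlation_id": correlation_id or str(uuid.uuid4()),
        "payload": payload,
    }
    return json.dumps(envelope)

# Hypothetical example: a filed-fare update scoped to one market, valid for an hour.
event = build_event(
    event_type="pricing.fare_filed",
    scope={"origin": "LHR", "destination": "JFK"},
    payload={"fare_basis": "YIF", "amount": 431.00, "currency": "GBP"},
    valid_for_seconds=3600,
)
```

Downstream consumers read only the fields they understand and ignore the rest, which is what keeps additive evolution safe.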
Pick the platform and guardrails
Whether you choose Kafka, Pulsar, or a cloud provider's native messaging service matters less than the guardrails you wrap around it:
- Define ownership per topic or stream.
- Set retention policies aligned with replay needs.
- Instrument lag and failure metrics so operators can see consumer health.
- Document a backfill strategy for new consumers.
- Put dead-letter queues, or an equivalent, in place for poison messages.
- Confirm consumers are idempotent with load tests (see the consumer sketch after this list).
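To make the last two guardrails concrete, here is a minimal sketch of an idempotent consumer with a dead-letter topic, written against the kafka-python client. The broker address, topic names, and the in-memory deduplication set are assumptions for illustration; a real deployment would use a durable idempotency store and richer error handling.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

# Hypothetical broker and topic names.
BROKERS = "localhost:9092"
SOURCE_TOPIC = "pricing.fare_filed"
DLQ_TOPIC = "pricing.fare_filed.dlq"

consumer = KafkaConsumer(
    SOURCE_TOPIC,
    bootstrap_servers=BROKERS,
    group_id="offer-cache-updater",
    enable_auto_commit=False,        # commit only after the event is handled
)
dlq_producer = KafkaProducer(bootstrap_servers=BROKERS)

processed_event_ids = set()          # stand-in for a durable idempotency store

for message in consumer:
    try:
        event = json.loads(message.value)
        event_id = event["event_id"]
        if event_id not in processed_event_ids:
            # ... apply the change to the offer cache here ...
            processed_event_ids.add(event_id)
        consumer.commit()            # duplicates are skipped, offsets still advance
    except Exception:
        # Poison message: park it on the dead-letter topic and move on.
        dlq_producer.send(DLQ_TOPIC, value=message.value)
        consumer.commit()
```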
Coexist with batch during the transition
Do not remove batch jobs immediately. Run both, compare outputs, and let analytics confirm that event-driven data stays consistent. Gradually move downstream processes (dashboards, alerts, settlement prep) to consume the event streams once you trust them.
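The comparison itself can stay simple: diff the batch snapshot against the state rebuilt from the stream and alert on the differences. The sketch below assumes hypothetical per-flight revenue totals keyed by flight and date.

```python
def reconcile(batch_rows: dict, stream_rows: dict, tolerance: float = 0.01) -> list:
    """Return keys where the batch value and the stream-derived value disagree."""
    mismatches = []
    for key, batch_value in batch_rows.items():
        stream_value = stream_rows.get(key)
        if stream_value is None or abs(batch_value - stream_value) > tolerance:
            mismatches.append((key, batch_value, stream_value))
    # Flag keys that only the stream-derived view knows about.
    mismatches.extend(
        (key, None, value) for key, value in stream_rows.items() if key not in batch_rows
    )
    return mismatches

# Example: nightly revenue snapshot vs. totals rebuilt from order and payment events.
nightly = {"BA117/2024-07-01": 182450.00, "BA175/2024-07-01": 96310.50}
streamed = {"BA117/2024-07-01": 182450.00, "BA175/2024-07-01": 96290.50}
print(reconcile(nightly, streamed))  # -> [('BA175/2024-07-01', 96310.5, 96290.5)]
```

Once the mismatch list stays empty for a few cycles, downstream consumers can switch over with confidence.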
Event-driven retailing is simply about removing the waiting time between reality and the offer you show. With a clear trigger map and good observability, the migration feels evolutionary rather than reckless.