
[04]
Data Ingestion Experience - UX Case Study
[UX Design][Enterprise Data Platforms][Financial Services][Data Ingestion][FSI]


emPower is a financial data platform designed to help financial institutions (FSIs) bring together scattered data from multiple sources and turn it into usable business insights. My role was to design the data ingestion experience: the part of the product where organizations connect, import, validate, and prepare their data before it flows into analytics and reporting.

Goal: Create a reliable, low-friction, and transparent data ingestion system that reduces dependency on engineering teams and enables business users to onboard data independently.
Financial institutions struggled to onboard and validate data from multiple fragmented sources, leading to delayed reporting, frequent pipeline failures, and heavy dependence on engineering teams. They deal with:

- Multiple data sources (core banking systems, CRM, payment systems, risk engines)
- Different formats and channels (CSV files, APIs, databases)
- Poor data quality
- Heavy reliance on data engineering teams
Design a guided, self-serve ingestion experience that:

- Reduces onboarding time
- Prevents data errors early
- Makes complex pipelines understandable
- Supports both technical and non-technical users
Persona 1

Goals:
- Quickly upload and prepare datasets
- Ensure accuracy

Pain points:
- Dependence on engineering
- Manual validation

Persona 2

Goals:
- Get timely reports
- Trust data quality

Pain points:
- Late error detection
- Missing fields

Persona 3

Goals:
- Maintain data reliability
- Reduce repetitive tasks

Pain points:
- Constant ingestion requests
- Debugging bad pipelines
Data ingestion is the foundation of all insights. If ingestion fails:

- Reports become unreliable
- Business decisions become risky
- Trust in the platform drops
I approached the design with three principles:

### 1. Reduce Cognitive Load

Make complex data pipelines feel simple and guided.

### 2. Increase Transparency

Users should always know:

- What data is coming in
- What is happening to it
- Where errors exist

### 3. Shift Left Error Detection

Catch issues before data enters downstream systems.
Spoke with:

- Data engineers
- Product managers
- Financial analysts

Key insight: 70% of ingestion failures were due to incorrect schemas or missing fields.
Users said:

- "I do not know if my data actually got ingested."
- "Errors come after reports fail."
- "The system feels like a black box."
Mapped the ingestion journey:

- Connect source
- Upload data
- Map fields
- Validate
- Run pipeline
- Monitor

Pain was highest at:

- Schema mapping
- Error debugging
- Pipeline status visibility
Step-by-step flow:

- Select data source
- Upload/connect
- Auto-detect schema
- Map fields
- Validate
- Confirm ingestion

Impact:

- Reduced setup confusion
- Enabled self-serve onboarding
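The auto-detect and validate steps above can be sketched in a few lines of Python. The column names, required fields, and function names below are illustrative assumptions, not the product's actual implementation:

```python
import csv
import io

# Hypothetical required fields for a reporting pipeline (assumption, not from the case study).
REQUIRED_FIELDS = {"customer_id", "account_id", "txn_date", "amount"}

def detect_schema(raw: str) -> list[str]:
    """Auto-detect the delimiter of an uploaded CSV sample and return its column names."""
    dialect = csv.Sniffer().sniff(raw)
    reader = csv.reader(io.StringIO(raw), dialect)
    return next(reader)  # the first row is treated as the header

def validate_schema(columns: list[str]) -> list[str]:
    """Return the required fields that are missing from the detected schema."""
    return sorted(REQUIRED_FIELDS - {c.strip().lower() for c in columns})
```

Running validation at upload time, before any field mapping, is what makes the flow self-serve: the user sees exactly which columns are missing instead of filing a ticket.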
Impact:

- Reduced manual effort
- Prevented ingestion failures
Users could:

- See sample rows
- Validate format
- Catch anomalies early

Impact:

- Increased confidence
- Reduced downstream errors
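A minimal sketch of the preview-and-check step, assuming CSV input and a caller-supplied set of required fields (the field names are hypothetical, not from the product):

```python
import csv
import io

def preview_rows(raw: str, n: int = 5) -> list[dict]:
    """Return the first n parsed rows so users can eyeball the data before ingesting."""
    reader = csv.DictReader(io.StringIO(raw))
    return [row for _, row in zip(range(n), reader)]

def flag_anomalies(rows: list[dict], required: set[str]) -> list[str]:
    """Report rows where a required field is blank -- caught before ingestion, not after."""
    issues = []
    for i, row in enumerate(rows, start=1):
        for field in sorted(required):
            if not (row.get(field) or "").strip():
                issues.append(f"row {i}: '{field}' is empty")
    return issues
```

Surfacing these per-row issues next to the sample preview is what lets non-technical users catch anomalies early instead of discovering them when a report fails.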
Instead of:

- "Schema mismatch error 409"

We showed:

- "Customer ID column is missing. This field is required for reporting."

Impact:

- Faster issue resolution
- Reduced dependency on engineers
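One common way to implement this kind of translation is a lookup table from internal error codes to actionable messages, with a safe fallback for unknown codes. The codes and wording below are illustrative, not the product's actual error catalogue:

```python
# Hypothetical mapping from raw pipeline error codes to plain-language messages.
ERROR_MESSAGES = {
    "SCHEMA_MISMATCH_409": (
        "Customer ID column is missing. This field is required for reporting."
    ),
    "ENCODING_ERR_502": "This file is not UTF-8 encoded. Re-export it as UTF-8 and try again.",
}

def humanize(error_code: str) -> str:
    """Translate an internal error code into an actionable message for the user."""
    return ERROR_MESSAGES.get(
        error_code,
        f"Ingestion failed ({error_code}). Contact support with this code.",
    )
```

Keeping the raw code in the fallback message preserves the debugging trail for engineers while still telling the user what to do next.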
Showed:

- Ingestion status
- Last run
- Failure reason
- Data freshness

Impact:

- Built trust
- Improved operational awareness
Impact:

- Faster recovery
- Clear accountability
Designed ingestion not as a screen, but as:

- A data lifecycle
- An operational workflow
Advanced options are visible only when needed:

- API configs
- Scheduling
- Transformation rules