
emPower Foundation

Data Ingestion Experience - UX Case Study


[UX Design][Enterprise Data Platforms][Financial Services (FSI)][Data Ingestion]

[Images: emPower Foundation ingestion flow and pipeline dashboard]

Project Snapshot

  • Company: Lumiq.ai
  • Role: UX Designer
  • Domain: Financial Services Industry (FSI) • Enterprise Data Platforms
  • Figma: https://www.figma.com/design/7hXMl5TbIQomj8MRL9v72y/emPower--Foundation?node-id=0-1

Project Overview

emPower is a financial data platform that helps financial services institutions (FSIs) bring together scattered data from multiple sources and turn it into usable business insights. My role was to design the data ingestion experience: the part of the product where organizations connect, import, validate, and prepare their data before it flows into analytics and reporting.

Goal: Create a reliable, low-friction, and transparent data ingestion system that reduces dependency on engineering teams and enables business users to onboard data independently.

Problem Context

Financial institutions struggled to onboard and validate data from multiple fragmented sources, leading to delayed reporting, frequent pipeline failures, and heavy dependence on engineering teams. They typically deal with:

  • Multiple data sources (CBS, CRM, payment systems, risk engines)
  • Different file formats (CSV, APIs, databases)
  • Poor data quality
  • Heavy reliance on data engineering teams

Existing Challenges

  • Data onboarding took days or weeks
  • Errors were discovered too late in the pipeline
  • Business users had no visibility
  • Technical workflows were not usable for non-technical teams
  • No standardized ingestion process

Business Impact

  • Delayed reporting
  • Compliance risks
  • Poor decision-making
  • High operational cost

My Design Objective

Design a guided, self-serve ingestion experience that:

  • Reduces onboarding time
  • Prevents data errors early
  • Makes complex pipelines understandable
  • Supports both technical and non-technical users

Users

Primary Users

1. Data Analysts

Goals:

  • Quickly upload and prepare datasets
  • Ensure accuracy

Pain Points:

  • Dependence on engineering
  • Manual validation

2. Risk & Operations Teams

Goals:

  • Get timely reports
  • Trust data quality

Pain Points:

  • Late error detection
  • Missing fields

3. Data Engineers (Secondary)

Goals:

  • Maintain data reliability
  • Reduce repetitive tasks

Pain Points:

  • Constant ingestion requests
  • Debugging bad pipelines

Where This Is Used

  • Banking analytics
  • Risk monitoring
  • Compliance reporting
  • Financial performance dashboards

When Is It Needed

  • New client onboarding
  • New data source integration
  • Regulatory reporting cycles
  • Daily/weekly batch ingestion

Why It Matters

Data ingestion is the foundation of all insights. If ingestion fails:

  • Reports become unreliable
  • Business decisions become risky
  • Trust in the platform drops

UX Strategy

I approached the design with three principles:

1. Reduce Cognitive Load

Make complex data pipelines feel simple and guided.

2. Increase Transparency

Users should always know:

  • What data is coming in
  • What is happening to it
  • Where errors exist

3. Shift-Left Error Detection

Catch issues before data enters downstream systems.

Discovery Process

Stakeholder Interviews

Spoke with:

  • Data engineers
  • Product managers
  • Financial analysts

Key insight: 70% of ingestion failures were due to incorrect schemas or missing fields.

User Interviews

Users said:

  • "I do not know if my data actually got ingested."
  • "Errors come after reports fail."
  • "The system feels like a black box."

Journey Mapping

Mapped the ingestion journey:

  • Connect source
  • Upload data
  • Map fields
  • Validate
  • Run pipeline
  • Monitor

Pain was highest at:

  • Schema mapping
  • Error debugging
  • Pipeline status visibility

Key UX Problems Identified

  • Technical terminology confused business users
  • No preview of data before ingestion
  • Errors were cryptic
  • Pipeline status unclear
  • No ownership visibility

Design Solutions

1. Guided Ingestion Wizard

Step-by-step flow:

  • Select data source
  • Upload/connect
  • Auto-detect schema
  • Map fields
  • Validate
  • Confirm ingestion

Impact:

  • Reduced setup confusion
  • Enabled self-serve onboarding
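The wizard is a strictly linear flow, so its navigation logic can be sketched as a simple ordered list of steps. This is an illustrative sketch only; the step identifiers and function names are hypothetical, not the actual emPower implementation.

```python
# Hypothetical sketch of the guided wizard's linear step flow.
# Step names mirror the flow described above; nothing here is real API.
WIZARD_STEPS = [
    "select_source",
    "upload_or_connect",
    "auto_detect_schema",
    "map_fields",
    "validate",
    "confirm_ingestion",
]

def next_step(current):
    """Return the step after `current`, or None when the flow is complete."""
    i = WIZARD_STEPS.index(current)
    return WIZARD_STEPS[i + 1] if i + 1 < len(WIZARD_STEPS) else None
```

Keeping the flow as pure data makes it easy to render a progress indicator and to prevent users from skipping validation.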

2. Smart Schema Detection

  • Auto-suggested field mapping
  • Highlighted missing columns
  • Flagged data type mismatches

Impact:

  • Reduced manual effort
  • Prevented ingestion failures
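The detection behavior described above (highlighting missing columns, flagging type mismatches) can be sketched roughly as follows. The required schema, field names, and messages are illustrative assumptions, not the platform's real rules.

```python
# Illustrative sketch of pre-ingestion schema checks: compare an uploaded
# CSV sample against a required schema, reporting missing columns and
# type mismatches before any data enters the pipeline.
import csv
from io import StringIO

REQUIRED_SCHEMA = {      # hypothetical target schema
    "customer_id": str,
    "txn_amount": float,
    "txn_date": str,
}

def detect_issues(csv_text):
    """Return a list of human-readable issues found in a CSV sample."""
    reader = csv.DictReader(StringIO(csv_text))
    issues = []
    cols = reader.fieldnames or []
    for field in REQUIRED_SCHEMA:
        if field not in cols:
            issues.append(f"Missing required column: {field}")
    for row in reader:
        for field, typ in REQUIRED_SCHEMA.items():
            if field in row and typ is float:
                try:
                    float(row[field])
                except ValueError:
                    issues.append(
                        f"Type mismatch in '{field}': "
                        f"expected a number, got '{row[field]}'"
                    )
    return issues
```

Running checks like these against a small sample is what lets errors surface in the wizard instead of downstream in reporting.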

3. Data Preview Layer

Users could:

  • See sample rows
  • Validate format
  • Catch anomalies early

Impact:

  • Increased confidence
  • Reduced downstream errors

4. Human-Readable Error Messaging

Instead of "Schema mismatch error 409", we showed: "Customer ID column is missing. This field is required for reporting."

Impact:

  • Faster issue resolution
  • Reduced dependency on engineers
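One simple pattern behind this kind of messaging is a lookup table from raw error codes to plain-language explanations, with a safe fallback for anything unmapped. The codes and wording below are hypothetical examples, not emPower's actual error catalogue.

```python
# Illustrative mapping from cryptic pipeline error codes to the
# human-readable messages described above. All codes are made up.
FRIENDLY_ERRORS = {
    "SCHEMA_MISMATCH_409": (
        "Customer ID column is missing. "
        "This field is required for reporting."
    ),
    "TYPE_ERROR_422": (
        "One or more amount fields contain text instead of numbers."
    ),
}

def humanize(error_code):
    """Translate a raw error code into a message a business user can act on."""
    return FRIENDLY_ERRORS.get(
        error_code,
        f"Ingestion failed ({error_code}). Contact the pipeline owner.",
    )
```

The fallback matters: even an unmapped failure tells the user what happened and who to contact, rather than surfacing a bare code.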

5. Pipeline Visibility Dashboard

Showed:

  • Ingestion status
  • Last run
  • Failure reason
  • Data freshness

Impact:

  • Built trust
  • Improved operational awareness
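The "data freshness" indicator can be sketched as a classification of a pipeline's last successful run into dashboard labels. The thresholds and label names here are illustrative assumptions; real SLAs would be configured per pipeline.

```python
# Hypothetical sketch of a data-freshness indicator: bucket the age of
# the last successful run into the status labels a dashboard might show.
from datetime import datetime, timedelta, timezone

def freshness_status(last_success, now=None,
                     fresh=timedelta(hours=24),
                     stale=timedelta(hours=72)):
    """Classify pipeline freshness from the last successful run time."""
    now = now or datetime.now(timezone.utc)
    age = now - last_success
    if age <= fresh:
        return "Fresh"
    if age <= stale:
        return "Stale"
    return "Failing SLA"
```

Surfacing this as a single label is what gives business users at-a-glance trust in the data without reading pipeline logs.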

6. Ownership & Alerts

  • Assigned pipeline owners
  • Failure notifications
  • SLA indicators

Impact:

  • Faster recovery
  • Clear accountability

UX Approach

Systems Thinking

Designed ingestion not as a screen, but as:

  • A data lifecycle
  • An operational workflow

Progressive Disclosure

Advanced options only visible when needed:

  • API configs
  • Scheduling
  • Transformation rules

Collaboration With Engineering

  • Defined reusable ingestion components
  • Standardized schemas
  • Reduced custom builds

Metrics & Impact

Before

  • Ingestion setup: 2-5 days
  • Failure rate: High
  • Business visibility: Low

After

  • Setup time reduced by 60-70%
  • Pipeline errors reduced by ~40%
  • Engineering dependency reduced significantly
  • Faster report generation cycles
  • Increased trust in platform data

Business Impact

  • Faster decision-making
  • Reduced operational cost
  • Improved regulatory readiness
  • Scalable onboarding for new clients

Takeaways

  • Data problems are often experience problems
  • Visibility builds trust more than automation alone
  • Error design is a major differentiator in enterprise tools
  • Self-serve systems reduce organizational friction