Data Platform Engineer (SME)
Our enterprise clients are moving from fragmented data foundations to AI-first data platforms capable of supporting large-scale, business-critical AI systems.
AI performance is directly constrained by data quality, availability, governance, and latency.
This role exists to build and operate the data backbone that enables reliable, scalable, and compliant AI at enterprise scale.
__________________________________________________
Mission
You will operate as a Subject Matter Expert in complex enterprise environments, designing and delivering AI-ready data platforms where reliability, scalability, lineage, and governance are non-negotiable.
Whether engaged in consultative client work, outsourced delivery, or product-based models, you will own the end-to-end lifecycle of data pipelines, from ingestion to serving, and serve as the technical authority on data engineering for AI systems.
Key Responsibilities
Data Engineering (Core)
Design, build, and maintain scalable batch and streaming data pipelines
Implement data ingestion from heterogeneous enterprise sources (databases, APIs, events, files)
Structure data for downstream AI and ML consumption
Ensure data quality, consistency, and availability across environments
...
AI-Ready Data Platforms
Build and operate feature stores and analytical data layers for ML (see the sketch after this list)
Design data models optimized for training and inference workloads
Enable efficient data access patterns for real-time and near-real-time AI use cases
Support experimentation while enforcing production-grade standards
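For illustration, one recurring pattern behind feature stores and training-data generation is the point-in-time join: each training event is joined only to feature values observed at or before that event, which prevents label leakage. A minimal sketch in pandas, with hypothetical table and column names, not a prescribed implementation:

```python
import pandas as pd

# Hypothetical label events and a slowly changing feature table.
events = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "event_ts": pd.to_datetime(["2024-01-05", "2024-02-01", "2024-01-20"]),
    "label": [0, 1, 0],
})
features = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "feature_ts": pd.to_datetime(["2024-01-01", "2024-01-15", "2024-01-10"]),
    "avg_spend_30d": [42.0, 55.0, 13.0],
})

# merge_asof picks, per event, the latest feature row at or before event_ts,
# i.e. the point-in-time-correct value; no future information leaks into training.
training_set = pd.merge_asof(
    events.sort_values("event_ts"),
    features.sort_values("feature_ts"),
    by="customer_id",
    left_on="event_ts",
    right_on="feature_ts",
    direction="backward",
)
print(training_set)
```

At scale the same pattern runs in Spark or inside a feature store, but the correctness requirement is identical.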
Enterprise-Grade Data Delivery
Translate business and AI requirements into robust data architectures
Collaborate closely with AI/ML Engineers, MLOps, Infra, Security, and Product teams
Implement monitoring, validation, and alerting on data pipelines (see the sketch after this list)
Design lifecycle strategies for data versioning, backfills, and schema evolution
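For illustration, a minimal data-quality gate of the kind a pipeline would run before publishing a dataset downstream; the column names and thresholds are hypothetical, and in practice this would feed monitoring and alerting rather than a print statement:

```python
import pandas as pd

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return human-readable data-quality violations for a batch."""
    violations = []
    if df.empty:
        violations.append("batch is empty")
        return violations
    # Completeness check on an assumed key column.
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # assumed SLA: under 1% null keys
        violations.append(f"customer_id null rate {null_rate:.2%} exceeds 1%")
    # Ordering check on an assumed event-time column.
    if not df["event_ts"].is_monotonic_increasing:
        violations.append("event_ts is not ordered; upstream may have replayed data")
    return violations

if __name__ == "__main__":
    batch = pd.DataFrame({
        "customer_id": [1, 2, None],
        "event_ts": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),
    })
    problems = validate_batch(batch)
    if problems:
        # In production this would raise, block the publish step, or page on-call.
        print("Data-quality check failed:", problems)
```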
Technical Scope
Data Stack
Batch and streaming processing
Data modeling for analytics and AI workloads
Feature engineering and data serving patterns
Tooling & Engineering
Python, SQL
Data processing frameworks (Spark or equivalent)
Orchestration tools (Airflow, Dagster or equivalent; a minimal sketch follows this list)
Interaction with data lakes, warehouses, and real-time stores
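By way of example, a minimal Airflow DAG of the kind this role would own, using the TaskFlow API; the task names, schedule, and paths are placeholders, and real tasks would typically hand off to Spark jobs, warehouse loads, or feature-store writes rather than run inline:

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_ingest_pipeline():
    @task
    def ingest() -> str:
        # Pull a batch from an assumed upstream source and stage it.
        return "s3://staging/example/"  # hypothetical path

    @task
    def transform(staged_path: str) -> str:
        # Normalize and model the staged data for AI/ML consumption.
        return staged_path.replace("staging", "curated")

    @task
    def validate(curated_path: str) -> None:
        # Run data-quality checks before exposing the data downstream.
        print(f"validated {curated_path}")

    validate(transform(ingest()))

example_ingest_pipeline()
```

The same structure translates to Dagster or another orchestrator; what matters for this role is that ingestion, transformation, and validation are explicit, observable, schedulable units.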
Production Awareness
Data reliability, observability, and SLA-driven pipelines
Cost and performance optimization
Strong integration with MLOps and inference layers
Profile
Experience
Strong background in data engineering for large-scale systems
Proven experience delivering production-grade data pipelines
Familiarity with enterprise data landscapes and constraints
Mindset
Engineering-first approach to data
Strong ownership and accountability for data reliability
Comfortable operating in complex, multi-stakeholder environments
High standards for robustness, scalability, and maintainability
____________________________________________________
This is not a generic ETL role, nor a reporting-focused data position.
It exists to solve enterprise-grade problems with cutting-edge AI technologies, with you operating as a technical force multiplier across projects and client environments.
You will position yourself as a Subject Matter Expert whose data platforms multiply what AI systems can deliver, contributing to the construction of one of the most performant AI engineering teams in Europe.