Senior Data Architect – Cloud Data Architecture & Governance
📍 Hybrid in Warsaw (3 days/week onsite required) | 💼 Full-time | B2B contract up to $8,000/month
About the Role
Our client is a US-based leader in AI-powered enterprise operations, delivering digital solutions and consulting services that transform high-growth businesses and private equity-backed platforms. With over a decade of deep domain expertise in private capital markets, the company operates an integrated ecosystem spanning PaaS, SaaS, and a Solutions & Consulting Suite.
We are seeking a Senior Data Architect to join the company's growing Warsaw engineering centre. In this role you will own enterprise data architecture from strategy through execution — designing cloud-native data platforms, establishing governance standards, and enabling AI/ML-ready data infrastructure that powers business intelligence across the portfolio.
This is a hybrid position — you will be expected to work from the Warsaw office at least 3 days per week.
Key Responsibilities
Design, develop, and maintain enterprise data architecture strategies, standards, and blueprints supporting operational, analytical, and AI/ML workloads
Architect cloud-native data solutions on AWS (Redshift, RDS, Glue, Lake Formation) or equivalent platforms, ensuring scalability, security, and cost efficiency
Define and enforce data modeling standards: dimensional modeling, denormalized schemas, OLTP/OLAP design patterns, and AI-friendly ontologies
Architect and oversee data transformation layers using dbt, delivering modular, tested, and well-documented models across the analytics stack
Lead design of data integration and orchestration patterns with Prefect and Airflow — batch ETL, real-time streaming, event-driven, and API-based data exchange
Define and implement data validation, quality control, and automated testing frameworks across pipelines and warehouses
Establish data quality SLAs, monitoring, and alerting standards; design automated reconciliation processes to catch issues before downstream impact
Build and maintain data governance frameworks: data quality, lineage, cataloging, classification, and access control
Collaborate with Data Engineers, Software Engineers, Product, and Analytics teams to translate business requirements into scalable designs
Evaluate and recommend data technologies and tools; own technical decision-making for data infrastructure within assigned domains
Design data partitioning, indexing, and optimization strategies for high-performance queries and big data workloads
Ensure architectures support AI/ML consumption — feature stores, embedding pipelines, and model training datasets
Perform architecture and code reviews to uphold data standards, optimal execution patterns, and long-term maintainability
Mentor data engineers on best practices in modeling, architecture patterns, and cloud data design
Assist with CI/CD processes and automated release management for data infrastructure deployments
Key Requirements
7+ years of experience in data architecture, data engineering, or related technical roles
5+ years designing and implementing cloud-based data architectures (AWS, GCP, or Azure)
5+ years writing complex SQL queries across relational database systems
5+ years developing and deploying ETL/ELT pipelines using Airflow, Prefect, or similar tools
Strong experience with dbt for data transformation, testing, and documentation
Experience with data warehouse design: OLTP, OLAP, star schemas, snowflake schemas, dimensions, and facts
Experience with data modeling tools and methodologies (conceptual, logical, physical models)
Hands-on experience with cloud-based data warehouses such as Redshift, Snowflake, or BigQuery
Experience implementing data validation frameworks, quality control processes, and automated testing for data pipelines
Familiarity with how data architectures serve AI/ML workloads, including feature stores and vector-based retrieval patterns
Strong understanding of data governance, data quality frameworks, and metadata management
Bachelor's degree in Computer Science or equivalent — preferred
Nice to Have
Python / Pandas / PySpark · Docker · Kubernetes · CI/CD Automation · AWS Lambda / Step Functions · Data Partitioning · Databricks · Vector Databases (Pinecone, Weaviate, pgvector) · Data Mesh / Data Fabric · Graph Databases / Knowledge Graph Design · Cloud Certifications
What's Offered
B2B contract with monthly compensation up to $8,000
Strategic, high-ownership role in a fast-growing global fintech
Direct influence over data infrastructure decisions and team direction
Mentorship opportunities and clear career progression
Collaborative, open, and ambitious team culture
Hybrid model — minimum 3 days/week in the Warsaw office
About OPTIVEUM sp. z o.o.
Optiveum is a recruitment and consulting company built on more than 20 years of experience in HR & IT services.
We work with clients in Poland and abroad, offering local and international candidates project-based or permanent opportunities in remote or office-based models.
COMPANY DATA
Optiveum Sp. z o.o.
ul. Tomasza Zana 43 lok. 2.1 20-601 Lublin, Poland
KRS: 0000834436 | NIP: 7010975729
Contact us at: info (at) optiveum.com