Job Title: Data Engineer

Job Family Summary
As a Data Engineer, you will join Genus and work within the Data Platform Engineering team alongside the Data Platform Product Owner and Senior Data Engineer. You will also collaborate with the Data Operations Team and Data Enablement Team, along with key stakeholders and other members of the team, such as Data Architects and Analysts, to design and deliver advanced data engineering solutions that enable scalable, secure, and high-performing data platforms.

ESSENTIAL DUTIES AND RESPONSIBILITIES include the following. Other duties may be assigned.
Design, develop, test, and deploy advanced data engineering solutions using Azure SQL Server, Python, PySpark, Databricks, and Azure Synapse.
Optimize data models and queries for performance and resource efficiency.
Build scalable data processing pipelines using Python and PySpark for large-scale data transformation and analytics (an illustrative sketch follows this list).
Implement and manage Databricks and Azure Synapse environments for big data processing and analytics.
Develop and maintain data transformation models in dbt and Azure Data Factory.
Ensure proper source control and deployment practices using Git and CI/CD tools (Azure DevOps, GitHub).
Collaborate with stakeholders to understand requirements and deliver technical solutions aligned with business needs.
Guarantee data accuracy and integrity through data quality checks and cleansing processes.
Tune SQL queries and data processing scripts for performance.
Manage and optimize the data platform for cost and performance, including proactive monitoring and self-healing design patterns.
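For illustration only, the sketch below shows the kind of incremental PySpark pipeline described in the duties above: a basic data-quality gate followed by an idempotent Delta merge. All paths, table names, and columns are hypothetical, and this is one reasonable pattern rather than the team's actual implementation.

```python
# Illustrative sketch only – paths, table names, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("orders_incremental_load").getOrCreate()

# Read the latest raw batch from the landing zone (hypothetical ADLS Gen2 path).
raw = spark.read.json("abfss://landing@examplestorage.dfs.core.windows.net/orders/")

# Basic data-quality gate: require the business key, stamp the ingestion time.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("ingested_at", F.current_timestamp())
)

# Merge incrementally into a Delta table so re-runs stay idempotent.
target = DeltaTable.forName(spark, "lakehouse.silver.orders")
(
    target.alias("t")
    .merge(clean.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```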
Genus Core Behaviours / Competencies
Customer Impact: builds strong, profitable, sustainable customer relationships, anticipating and exceeding customer expectations to increase demand for services and products in order to build loyalty.
Managing External Environment: anticipates and responds quickly to environmental changes for the benefit of the business and customers, through strong external networks and a deep understanding of the markets.
Execution Orientation: drives to set ever higher standards and achieve results through determination, resilience and commitment. Develops solutions to enhance the service offering and drive continuous improvement.
Setting Direction: develops simple, deliverable plans based on pragmatic new thinking, ideas or concepts. Accurately assesses commercial risk and return.
Change Management: champions, leads, supports or embeds change to improve things. Communicates well and helps others by overcoming barriers.
Analysis and Decision Making: analyses opportunities and problems thoughtfully and thoroughly to make good and timely decisions.
Team Mobilisation: contributes to the success of a high-performing, diverse team.
Collaboration: ‘One team’ approach – gains commitment to strategic vision and goals. Builds and maintains networks and relationships, sharing knowledge and experience, delivering on commitments.
The following qualities are the foundations on which Genus team members work:
Integrity
Honesty
A desire to make a difference in the communities and countries in which we work
Delivery on commitments – do what you say you are going to do
Alignment with the business goals and values

Essential Functions (% of position) include the following. Other duties may be assigned.
Core Data Engineering & Data Architecture – 35%
Data Pipeline Design & Development – 30%
Data Platform Monitoring & Optimisation – 15%
Data Platform Operations Support & Documentation – 15%
Data Platform Innovation – 5%
Total – 100%

Requirements
Travel: Occasional travel will be required.
Education: Bachelor's degree or equivalent in a computing-related subject.
Licenses/Certifications (desirable): Databricks Certified Professional, dbt Certified, Microsoft Certified.

Experience:
SQL / T-SQL and analytical data warehouse development – 8+ years
Apache Spark experience using Python, PySpark and SparkSQL at scale – 5+ years
Expertise in performance optimisation and production stability of data workloads – 3+ years
Expertise in building metadata-driven, reusable data pipelines using Databricks and ADF – 3+ years
Hands-on experience working with Delta-based datasets, multi-layer Lakehouse architectures, and evolving schemas – 3+ years
Good knowledge of Unity Catalog, including governed data access, catalogs and schemas, and secure data sharing across teams – 2+ years
Design and operation of batch and incremental pipelines integrating Databricks with ADF, dbt, Airflow, and external source systems – 3+ years
Strong experience integrating Databricks with Azure services such as ADLS Gen2, Azure SQL / Synapse, and cloud-native security services – 3+ years
Experience implementing structured logging, pipeline metrics, and operational monitoring for data ingestion workloads, including failure handling and alerting (a sketch of this pattern follows this list) – 3+ years
CI/CD-driven deployment of Databricks assets using Azure DevOps, GitHub, or similar tools – 3+ years
Experience working in Agile delivery environments (SAFe or similar)
Strong communication skills and the ability to work across platform, analytics, and business teams
Comfortable operating in fast-paced environments with production responsibility
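To illustrate the metadata-driven and structured-logging patterns named above, the sketch below drives hypothetical sources from a small metadata list, emits machine-readable log lines, and retries transient failures. The source entries and the ingest function are placeholders, not the platform's real code.

```python
# Illustrative sketch only – sources and ingest() are hypothetical placeholders.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

# Metadata describing each source, so new feeds need configuration, not new code.
SOURCES = [
    {"name": "orders", "path": "/landing/orders", "target": "silver.orders"},
    {"name": "customers", "path": "/landing/customers", "target": "silver.customers"},
]

def ingest(source: dict) -> int:
    """Placeholder for the real Spark/ADF ingestion call; returns a row count."""
    return 0

for source in SOURCES:
    for attempt in range(1, 4):  # retry transient failures up to three times
        try:
            rows = ingest(source)
            # Structured, machine-readable log line for downstream monitoring.
            log.info(json.dumps({"source": source["name"], "status": "ok",
                                 "rows": rows, "attempt": attempt}))
            break
        except Exception as exc:
            log.warning(json.dumps({"source": source["name"], "status": "retry",
                                    "attempt": attempt, "error": str(exc)}))
            time.sleep(2 ** attempt)
    else:
        # All retries exhausted: this is where alerting would be triggered.
        log.error(json.dumps({"source": source["name"], "status": "failed"}))
```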
Other
The above position description is intended to describe the general content of this position and to identify its essential functions and requirements. It is not to be construed as an exhaustive statement of duties, responsibilities or requirements.