about the company:
Our client is a well-established global financial institution with a strong presence in Asia. They are committed to innovation and technology-driven solutions, particularly in supporting core banking and investment operations. The firm fosters a collaborative work environment and provides opportunities for professional growth in a dynamic and international setting.
about the job:
In this role, you will take ownership of the stability and performance of high-scale data pipelines. You will bridge the gap between development and operations to ensure the "single source of truth" for the bank remains robust and reliable. Your key responsibilities include:
- Overseeing 3rd-level support for complex data pipelines, including incident and problem management.
- Troubleshooting daily data ingestion processes from source to consumption, ensuring all data availability thresholds are met.
- Managing JSON-based metadata and schemas to ensure consistency across the global organization.
- Continuously optimizing service availability, system capacity, and platform performance.
- Working closely with agile development teams to support release cycles and implement changes that prevent production disruptions.
skills and experience required:
...
- Significant experience (ideally 10+ years) managing cluster environments involving Kubernetes, Kafka, Spark, and distributed storage solutions such as S3 or HDFS.
- Advanced proficiency in Python and expert-level SQL skills, particularly in mixed environments (data warehouses vs. distributed systems).
- Proven track record of using monitoring tools to track the health of daily ingestion pipelines and consumer-facing applications.
- Solid experience with Linux-based infrastructure and advanced shell scripting.
To apply online, please use the apply function. Alternatively, you may contact Dalpreet Kaur at +65 8450 8938 (EA: 94C3609 / R23111951).