Every year, we help hundreds of thousands of people find rewarding jobs in the ever-changing world of work.
We understand the importance of a job in people's lives, and we want to help people find work that feels good. And we'll help them continue to grow as their needs and ambitions change.
At Randstad, our value comes from our people, and that is why we put them first. We are proud of our learning culture and career architecture framework, which encourages our teams to develop both personally and professionally.
We believe that talent grows when presented with opportunity and this is why we encourage our people to think beyond their role. We have created a culture that enables talent to flourish, encouraging entrepreneurship, fostering team spirit, and continually building mutual trust.
The profile is similar to that of a data engineer, but requires significantly more experience, self-sufficiency, and ingenuity.
Key Responsibilities
BigQuery Performance & Cost Management: Actively monitor BigQuery usage to identify "heavy" queries and inefficient patterns. Work with developers to implement optimization strategies such as partitioning, clustering, and slot management to balance performance with budget.
Platform Observability: Design and maintain comprehensive alerting and monitoring that notifies the team before a failure impacts the business.
Incident Response & Troubleshooting: Lead the root-cause analysis (RCA) for data platform outages or performance degradation. You will define and lead how we respond to issues involving the data platform.
Operational Excellence: Automate routine maintenance tasks using Terraform and Python. You will ensure that our IAM roles, service accounts, and data encryption policies adhere to the principle of least privilege.
Workflow Orchestration: Support the stable execution of data ingestion/publishing and scheduled jobs (pub/sub, cloud functions, DBT, Workflows), ensuring that dependencies are respected and data freshness expectations are met.
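To illustrate the partitioning and clustering strategies mentioned under BigQuery Performance & Cost Management, here is a minimal sketch of the kind of DDL such an optimization produces. The table and column names are hypothetical, not part of this role description; the function simply generates a `CREATE TABLE` statement that partitions by a date column and clusters by a set of columns to cut the bytes scanned per query.

```python
# Hypothetical sketch: generating BigQuery DDL that applies partitioning
# and clustering. Table and column names are illustrative only.

def partitioned_table_ddl(table: str, partition_col: str, cluster_cols: list[str]) -> str:
    """Return a CREATE TABLE statement that partitions by a DATE column
    and clusters by the given columns, so queries filtering on those
    columns scan fewer bytes (and therefore cost less)."""
    cluster_clause = ", ".join(cluster_cols)
    return (
        f"CREATE TABLE `{table}` (\n"
        f"  event_date DATE,\n"
        f"  user_id STRING,\n"
        f"  payload JSON\n"
        f")\n"
        f"PARTITION BY {partition_col}\n"
        f"CLUSTER BY {cluster_clause}"
    )

ddl = partitioned_table_ddl("analytics.events", "event_date", ["user_id"])
print(ddl)
```

In practice, queries that filter on the partition column (e.g. `WHERE event_date >= '2024-01-01'`) prune untouched partitions entirely, which is one of the main levers for balancing performance against budget.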
Qualifications & Skills
GCP Infrastructure: Deep hands-on experience with Google Cloud, specifically BigQuery, Cloud Logging/Monitoring, Pub/Sub, and IAM.
BigQuery Specialist: You should understand the "under the hood" mechanics of BigQuery (slots, shuffle, and storage) to provide actionable optimization advice to the engineering team.
Automation & IaC: Proficiency with Terraform for managing cloud resources and Python or Bash for operational scripting.
Problem Solving: A methodical approach to troubleshooting.
Communication: Ability to communicate technical "post-mortems" to stakeholders and provide clear documentation for operational procedures.
Looker: Experience building and maintaining Looker environments at production scale would be beneficial.
Interviewers
Linda Malby (Global Data Architect) and Benjamin Webb (S-One Data Engineering Solutions Architect)
Is this the job for you? We would love to hear from you! Please apply directly to the role and we will get in touch.
...