Senior Data Engineer | Canada | Remote
Grafana Labs
Job Description
Grafana Labs is a remote-first, open-source powerhouse. There are more than 20M users of Grafana, the open source visualization tool, around the globe, monitoring everything from beehives to climate change in the Alps. The instantly recognizable dashboards have been spotted everywhere from a NASA launch and Minecraft HQ to Wimbledon and the Tour de France. Grafana Labs also helps more than 3,000 companies -- including Bloomberg, JPMorgan Chase, and eBay -- manage their observability strategies with the Grafana LGTM Stack, which can be run fully managed with Grafana Cloud or self-managed with the Grafana Enterprise Stack, both featuring scalable metrics (Grafana Mimir), logs (Grafana Loki), and traces (Grafana Tempo).
We’re scaling fast and staying true to what makes us different: an open-source legacy, a global collaborative culture, and a passion for meaningful work. Our team thrives in an innovation-driven environment where transparency, autonomy, and trust fuel everything we do.
You may not meet every requirement, and that’s okay. If this role excites you, we’d love you to raise your hand for what could be a truly career-defining opportunity.
This is a remote opportunity and we would be interested in applicants from Canadian time zones only at this time.
Senior Data Engineer
The Opportunity:
We are looking for a Senior Data Engineer who can help maintain frameworks and systems that acquire, validate/cleanse, and load data into and out of our analytics systems. The systems that this role builds and maintains will allow our business partners to more accurately and reliably track and forecast sales, revenue, and usage/consumption metrics.
This position will have engagement across many parts of the company, including finance, revenue and CX operations, analytics teams, and analytics engineering. The frameworks and systems that you work on will integrate with and enhance our current stack, which includes GCS, BigQuery, dbt, dlt, Prefect, Python, Fivetran, Rudderstack, Hightouch, and OpenMetadata.
What You’ll Be Doing:
- Build and maintain production quality data pipelines between operational systems and BigQuery (ingress and egress).
- Implement data quality and freshness checks and monitoring processes to ensure data accuracy and consistency.
- Maintain and contribute to our ingestion framework that leverages various purpose-built data load tool (dlt) connectors.
- Create and maintain comprehensive documentation for data engineering processes, systems, and workflows.
- Maintain observability and monitoring of our internal data pipelines.
- Troubleshoot and resolve data pipeline issues to ensure downstream data availability.
- Contribute to our dbt systems by ensuring the source and staging layers align with our standards and are efficient, cost-effective, and highly available.
- Participate in the investigation and implementation of event-driven data movement and transformation processes.
- Participate in the investigation and implementation of analytic data storage/table formats (e.g. Apache Iceberg).
What Makes You a Great Fit:
- Software development skills (some combination of Python, Java, Scala, Go)
- High proficiency in SQL
- Experience building and maintaining data ingestion pipelines using a workflow orchestration system (e.g. Prefect, Dagster, Airflow)
- Working knowledge of dbt or similar data transformation tools
- Highly motivated self-starter who is keen to make an impact and unafraid of tackling large, complicated problems
- Excellent communication skills: able to explain technical topics to non-technical audiences and to maintain the essential cross-team and cross-functional relationships necessary for the team’s success
Bonus Points For:
- Experience working with Prefect, BigQuery, and GCP services