Data Engineer
Job Description
About Transform
Transform is the AI-native platform for managing and orchestrating change across complex enterprise system landscapes. We give consultants and companies superpowers: capturing and organizing knowledge, turning it into actionable artifacts and work plans, and accelerating project delivery to radically simplify how organizations manage change. We're a well-funded startup led by a successful repeat founder and backed by top investors, with an ambitious vision, an experienced North American founding team, and a growing global footprint. Our Bogotá hub will be a cornerstone of our future, and we're building something extraordinary from day one.
The Role
We're hiring a Data Engineer to help build and operate the data infrastructure that powers our AI workloads and application features. You'll work at the intersection of data engineering, streaming systems, and operational excellence, creating reliable, scalable data pipelines from the ground up. The role spans hands-on pipeline development, streaming infrastructure management, and data platform operations. You'll collaborate closely with ML engineers, backend developers, and product teams to ensure data availability and quality across the organization, while maintaining high-performance systems that scale with our growth. This is an in-person, in-office role in Bogotá, starting remotely while our office is being finalized. You'll be one of the founding members of our Colombia engineering team, helping shape the culture and technical foundation of our Latin American hub.
What You'll Do
- Build and operate batch data pipelines using Airflow, Prefect, or Dagster
- Develop real-time streaming applications and manage Kafka infrastructure
- Create data APIs and caching layers for application performance
- Implement data quality checks, monitoring, and alerting systems
- Manage data warehouse schemas, partitioning, and optimization
- Handle pipeline failures and implement recovery mechanisms
- Maintain search infrastructure and real-time data feeds
- Respond to data incidents and ensure operational excellence
- Optimize data storage costs and query performance
- Implement data security, access controls, and audit logging
What We're Looking For
- 3+ years of hands-on data engineering experience building production systems
- Strong proficiency in Python and SQL with proven ability to write efficient code
- Practical experience with Apache Kafka and stream processing
- Expertise with orchestration tools (Airflow, Dagster, Prefect, or Argo)
- Working knowledge of cloud platforms (AWS, GCP, or Azure)
- Experience with both batch and streaming data architectures
- Understanding of data formats, transformation, and quality best practices
- Operational mindset with troubleshooting and monitoring expertise
- Clear communicator who can work effectively with cross-functional teams
- Experience with version control, CI/CD, and infrastructure as code
Bonus Points
- Experience with Spark, Kubernetes, or container orchestration
- Knowledge of ML pipelines and feature engineering
- Familiarity with data governance and compliance requirements
- Track record of cost optimization and performance tuning at scale
- Experience building data platforms in high-growth startups
Why Join Us
- Join a high-caliber team building cutting-edge data infrastructure
- Work on challenging problems at the intersection of AI and data engineering
- Shape our data platform architecture from the early stages
- Be part of a fast-paced, supportive, and technically ambitious environment
- Competitive compensation, equity upside, and career growth opportunities