Mid-Level

Data Analytics Engineer

Veeam Software


San Jose, Costa Rica
Remote
Posted April 3, 2026

Job Description

Veeam is the Data and AI Trust Company, specializing in helping organizations ensure their data and AI are fully understood, secured, and resilient to enable the acceleration of safe AI at scale. As the market leader in both data resilience and data security posture management, Veeam is built for the convergence of identity, data, security, and AI risk. Headquartered in Seattle with offices in more than 30 countries, Veeam protects over 550,000 customers worldwide, who trust Veeam to keep their businesses running. Join us as we go fearlessly forward together, growing, learning, and making a real impact for some of the world’s biggest brands.

About the Role

We are looking for an Analytics Engineer to join our internal Data Management team and help power data-driven decision-making across the organization. In this role, you will design and maintain scalable data models, pipelines, and transformation processes that enable analytics, reporting, and data science initiatives. You will collaborate closely with analysts, data scientists, and engineering teams to ensure reliable, efficient, and well-structured data that supports business insights and strategic initiatives.

What You’ll Do

  • Design scalable and reliable data marts and transformation processes that support analytics and reporting needs
  • Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into efficient data solutions
  • Develop optimized data models and schemas that enable efficient storage, retrieval, and analysis of large datasets
  • Build ETL and ELT pipelines that transform raw data from multiple sources into structured datasets for analytics and reporting
  • Partner with data, architecture, and DevOps teams to ensure the scalability, performance, and reliability of data systems
  • Monitor data pipelines and systems to troubleshoot issues and identify opportunities for improvement
  • Document data architectures, data flows, and processes to support transparency, collaboration, and maintainability

Technologies You’ll Work With

  • SQL and relational databases
  • Cloud platforms such as Azure, AWS, or GCP
  • Programming languages such as Python, Java, or Scala
  • Databricks and Spark
  • Data warehouse technologies such as Postgres or Vertica
  • ETL/ELT frameworks and data transformation tools

What You’ll Bring

  • Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field, or equivalent practical experience
  • Strong proficiency in SQL and experience working with databases and data warehouse systems
  • Experience building data pipelines and data processes in a cloud environment such as Azure, AWS, or GCP
  • Experience with at least one programming language such as Python, Java, or Scala
  • Knowledge of ETL/ELT processes, data modeling, and data warehousing best practices
  • Ability to collaborate with cross-functional teams, including analysts, data scientists, and engineering stakeholders