Overview
Mid-Level

Data Engineer | BEES Personalization


AB InBev


Remote
On-site
Posted March 27, 2026

Job Description

About AB InBev

AB InBev is the leading global brewer and one of the top 5 consumer goods companies in the world. With over 500 beer brands, we are the number one or two in many of the main beer markets worldwide, including North America, Latin America, Europe, Asia, and Africa.

 

About ABI Growth Group

Created in 2022, the Growth Group unifies our business-to-business (B2B), direct-to-consumer (DTC), Sales & Distribution, and Marketing teams. By bringing together global tech and commercial functions, the Growth Group enables us to maximize the use of data and drive AB InBev's digital transformation and organic growth around the world.

 

About BEES

As a tech cell of our organization, we have a simple goal: to grow. Grow as people, as professionals, and as a company. To achieve this, we use technology to create digital solutions that simplify our customers’ lives, make their decisions smarter, and their businesses more profitable. With offices in São Paulo and Campinas, we encourage our team to attend key events and meaningful in-person meetings throughout the year.

 

Responsibilities:

  • Implement ETL/ELT solutions and data integration between multiple systems and data sources.
  • Design, implement, and maintain data pipelines to ingest, store, and process large volumes of data.
  • Collaborate with other teams to ensure data security and compliance.
  • Design, develop, and implement solutions to optimize the performance and scalability of data processing systems.
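To make the responsibilities above concrete, here is a minimal, hypothetical extract-transform-load step in plain Python (standard library only). All record names and fields are illustrative; production pipelines of the kind described would typically use PySpark or similar engines rather than in-memory SQLite:

```python
import sqlite3

# Hypothetical raw records from an upstream system (names are illustrative).
raw_orders = [
    {"order_id": "1", "sku": "beer-330ml", "qty": " 12 "},
    {"order_id": "2", "sku": "beer-330ml", "qty": "8"},
    {"order_id": "3", "sku": "beer-1l", "qty": "bad"},  # malformed row
]

def transform(rows):
    """Clean and aggregate: parse quantities, drop malformed rows, sum per SKU."""
    totals = {}
    for row in rows:
        try:
            qty = int(row["qty"].strip())
        except ValueError:
            continue  # a real pipeline would route bad rows to a dead-letter store
        totals[row["sku"]] = totals.get(row["sku"], 0) + qty
    return totals

def load(totals, conn):
    """Write the aggregate into a target table (in-memory SQLite as a stand-in)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sku_totals (sku TEXT PRIMARY KEY, qty INTEGER)"
    )
    conn.executemany("INSERT OR REPLACE INTO sku_totals VALUES (?, ?)", totals.items())
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_orders), conn)
print(dict(conn.execute("SELECT sku, qty FROM sku_totals")))  # → {'beer-330ml': 20}
```

The same extract/transform/load split scales up directly: swap the list for a source connector, the dict aggregation for distributed transformations, and SQLite for a warehouse table.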

 

Requirements and qualifications:

  • Bachelor's degree in Computer Science, Computer Engineering, Information Systems, Systems Analysis and Development, or a similar field.
  • Intermediate English.
  • Apply knowledge of data pipeline concepts and tools to implement data transformation, cleaning, and aggregation tasks.
  • Use data pipeline frameworks and libraries to automate data processing tasks and optimize workflows.
  • Follow best practices for developing data pipelines, including version control, testing, and thorough documentation.
  • Apply programming skills in Python, PySpark, Scala, and SQL to effectively manipulate and transform data.
  • Understand and utilize cloud computing platforms and services offered by providers such as AWS, Azure, and Google Cloud.
  • Develop data pipelines using orchestration frameworks (e.g., Apache Airflow, Luigi, Mage).
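Orchestration frameworks like those named above run pipeline tasks in dependency order. The core idea can be sketched with only Python's standard library (`graphlib`); the task names are hypothetical, and a real deployment would declare an equivalent DAG in Airflow or a similar tool:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of upstream tasks it needs.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_join": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_join"},
}

def run(dag, tasks):
    """Execute callables in dependency order, as an orchestrator would."""
    order = list(TopologicalSorter(dag).static_order())
    for name in order:
        tasks[name]()
    return order

log = []
tasks = {name: (lambda n=name: log.append(n)) for name in dag}
print(run(dag, tasks))  # both extracts run before transform_join, then load_warehouse
```

An orchestrator adds scheduling, retries, and monitoring on top of this ordering, but the dependency graph is the part the job description's bullet points revolve around.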