
Sr Data & AI Technical Solutions Engineer

Databricks

Sao Paulo, Brazil
On-site
Posted February 19, 2026

Job Description

P-993

Mission 

As a Data & AI Technical Solutions Engineer, you play a critical role in helping customers debug and maintain stable production data pipelines, AI workflows, and more on the Databricks platform. You will develop product expertise in a few focus areas by advising a broad set of customers and use cases across those technologies. You will collaborate with other teams at Databricks, whether product engineering teams or technical account team members, to deliver an excellent customer experience. TSEs have proven production troubleshooting and optimization experience, helping customers' workloads run smoothly and achieve their strategic objectives on the Databricks platform. Reporting to a TSE manager, you will be part of a world-class global support engineering organization at Databricks, known for your technical depth and impeccable customer service.

The impact you will have:

  • Be the first point of contact for customer production issues - provide initial analysis, troubleshooting, and resolution for data engineering and AI workloads. 
  • Dive deep into code-level analysis of customer workloads to address issues across Databricks products, including Spark core internals, Spark SQL, Delta, DLT, and Model Serving.
  • Provide excellent customer support - be knowledgeable and empathetic in customer communications over email and video, act with a sense of urgency to find mitigations and solutions, defuse escalations during incidents, and be proactive in helping prevent future customer problems.
  • Help make Databricks products simpler to use and customer production environments more stable - for example, by coordinating with Engineering and Backline Support teams to identify areas for product improvement.
  • Develop expertise in productionizing systems in Databricks and share your knowledge by contributing to wikis and other technical documentation to be used internally or externally by customers and partners.

What we look for: 

  • Minimum of 6 years of experience designing, building, testing, and maintaining Python/Java/Scala-based applications in typical project delivery and consulting environments.
  • Experience with SQL databases or data warehouses (such as Oracle, Teradata, SQL Server, MySQL) and ETL technologies (such as Informatica, DataStage, Talend, Fivetran). 
  • 3 years of hands-on experience developing with two or more Big Data technologies, such as Spark and Hadoop, Lakehouse architecture (such as Delta), data ingestion, data streaming applications, or ML/AI applications for industry use cases.
  • Hands-on experience with performance tuning and troubleshooting of Data and AI applications at production scale, including query optimization, memory management, garbage collection, and heap/thread dump analysis.
  • Prior support or customer-facing experience is not required for this role, but the ability and desire to develop excellent customer service skills is.
  • Preferable: hands-on experience with a public cloud (AWS, Azure, or GCP).
  • Technical degree or the equivalent experience.

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics, and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake, and MLflow. To learn more, follow Databricks on Twitter.