Senior Data Engineer
EquiLend
Job Description
Team Overview
We're looking for a Senior Data Engineer to join our team working on products that power real-time analytics, business intelligence, and trading solutions across securities finance markets. This role blends core data engineering with hands-on machine learning: you'll build and maintain scalable data pipelines and support the development and deployment of AI/ML capabilities across our platform.
You'll contribute to products like EquiLend Competitive Bid and DataLend, helping shape how we deliver analytics at scale. If you have solid experience with streaming or batch data processing and want to extend your skills into ML-focused work, we'd like to hear from you.
What you’ll do
- Design, build, and maintain scalable ETL data pipelines using Python on Spark to process large datasets
- Develop and manage data workflows to support batch and real-time processing needs
- Deploy, monitor, and maintain data pipelines in production, ensuring robustness and reliability
- Build data integration solutions that connect disparate data sources into unified systems
- Support the development, monitoring, and enhancement of machine learning models, including models such as Predicted Short Interest
- Contribute to initiatives such as the DataLend Trade Analytics project
- Collaborate with analytics, product, and business teams to refine data models and support BI tools
- Monitor, troubleshoot, and optimise performance for ETL jobs and streaming pipelines to keep latency low and throughput high
- Improve data quality, reliability, and observability through monitoring and alerting practices
- Implement data governance practices, including validation checks, versioning, and compliance with data security requirements
- Partner with global engineering teams to standardise data processing frameworks and share best practices
What we need
- 5+ years of hands-on experience in data engineering, with a focus on building and optimising batch or streaming pipelines
- Strong proficiency in Python/PySpark or other object-oriented languages such as Scala or Java
- Advanced SQL skills for creating complex queries, database optimisation, and working with relational and non-relational databases
- Experience with stream processing technologies such as Kafka, Spark Streaming, AWS Kinesis, or Flink
- Hands-on experience with scheduling and orchestration tools like Airflow, Luigi, or similar
- Familiarity with building and deploying cloud-native solutions using AWS, GCP, or Azure
- Knowledge of data lake and data warehouse architectures, including tools like Snowflake or Redshift