Sr. Data Platform Engineer
Adela
Adela’s mission is to deliver innovative and accessible blood tests that harness biology to transform cancer care and improve well-being. The company is developing best-in-class technology to accelerate the diagnosis and improve the management of cancer through blood tests for minimal residual disease (MRD) monitoring and multi-cancer early detection (MCED). Adela’s blood-based, tissue-free product ensures universal accessibility to MRD testing for patients with cancer, eliminating any dependency on tumor tissue availability. Adela’s first product was recently clinically validated for predicting and surveilling for recurrence in patients with head & neck cancer and published in Annals of Oncology.
Adela is seeking a Senior Data Platform Engineer to execute and extend our established R&D data platform strategy. You will work within a modern, cloud-first architecture built on Databricks, Spark, Python, and AWS, supporting large-scale genomics and clinical datasets that serve research, assay development, and early analytical workflows. This remote position is open to candidates authorized to work in the U.S. or Canada. No visa sponsorship is available.
The ideal candidate is a strong implementer who excels at building distributed data processing workflows, improving the reliability and performance of data systems, and collaborating closely with bioinformatics, data science, and assay development teams. While you will not develop the architecture from scratch, you will be expected to interpret existing architectural patterns, make independent engineering decisions within that framework, and deliver robust, scalable data workflows.
You will help ensure that R&D data is organized, reliable, and accessible to teams who depend on it, with awareness of how upstream data decisions influence downstream consumers across the organization. Experience with backend frameworks such as Django or integrating external systems is beneficial but not required.
Experience supporting machine learning workflows is a strongly valued bonus, as the platform is evolving to enable additional analytical capabilities and Adela is expanding its use of modeling and advanced analytics. This position is ideal for an engineer who enjoys deep technical execution, close collaboration with scientific stakeholders, and contributing to the data foundations of a growing biotech organization.
Essential Duties and Responsibilities
R&D Data Platform Execution
- Build, enhance, and maintain distributed data processing workflows in Databricks/Spark to support genomics, scientific analysis, and experimental datasets.
- Implement ETL/ELT pipelines, ingestion logic, and transformation workflows aligned with the established architectural framework.
- Optimize workflows for reliability, performance, cost efficiency, and scalability.
- Apply best practices for code quality, testing, documentation, and operational readiness.
Cross-Functional Data Collaboration
- Work closely with bioinformatics, data science, assay development, and clinical data teams to understand data requirements and translate them into reliable workflows.
- Provide debugging support, troubleshooting assistance, and workflow improvements for R&D users.
- Maintain awareness of how changes within the R&D platform affect downstream consumers and communicate implications as needed.
Platform Reliability and Continuous Improvement
- Contribute to improvements in monitoring, observability, data quality checks, and operational resilience.
- Participate in architectural reviews and planning discussions, offering feedback rooted in practical implementation experience.
- Support onboarding and education of platform users, promoting best practices for data usage and workflow development.
Future-Facing Support for ML and Advanced Analytics
- Assist ML and analytics teams with data preparation workflows as organizational needs expand (future-looking).
- Support experimentation tooling and reproducibility enhancements where relevant to R&D workflows.
Work Experience & Skill Requirements
Required
- 5-8+ years of experience in data engineering, software engineering, and/or distributed systems.
- Strong hands-on experience with Apache Spark and the Databricks ecosystem in production environments.
- Proficiency in Python and SQL, including experience building and maintaining production-grade data workflows.
- Experience working with cloud platforms (AWS strongly preferred) and modern data storage technologies.
- Familiarity with backend engineering and API development.
- Demonstrated ability to work within an existing architecture while making independent engineering decisions.
- Strong communication skills and comfort interacting with scientists, lab operations personnel, and cross-functional stakeholders.
- Experience integrating external systems via APIs, data contracts, or message-based workflows.
Nice to Have
- Experience working with distributed systems.
- Experience with laboratory information systems (LIMS), clinical informatics, or operations-heavy environments.
- Exposure to ML workflow support, feature engineering workflows, or data preparation for modeling (not required).
- Experience with infrastructure-as-code tooling (e.g., Terraform), CI/CD pipelines, or automated deployment workflows.
- Experience in biotech, genomics, or scientific computing, or strong motivation to learn these domains.
- Familiarity with data quality frameworks, data validation, and lineage tracking.
- Experience or willingness to obtain basic proficiency with Django.
The annual base salary range for this position is $140,000 to $170,000 USD. This range reflects only the base salary component of compensation and is provided in compliance with applicable pay transparency requirements. Actual base salary will be determined at the Company’s discretion and may vary based on a number of factors, including but not limited to geographic location, relevant experience, skills, qualifications, internal equity, and business needs.
Adela is committed to fostering diversity and inclusion in our workplace. We embrace and celebrate the unique qualities and perspectives of all individuals, and we provide equal employment opportunities to candidates without regard to race, color, ancestry, national origin, religion, creed, sex, gender, gender identity/expression, age, veteran status, political affiliation, sexual orientation, medical condition, genetic information, marital status, or disability.
At Adela, everyone belongs.