Description
The Role

Tactable is seeking an experienced Senior Data Engineer with strong expertise in Databricks to own and further develop our data infrastructure and ingestion, and to help us build our data products. You'll work independently while collaborating closely with engineering and other data teams to build robust, scalable data systems. With a deep understanding of Java, Python, and Databricks, you'll take the lead in building the data pipelines that ingest the information fueling our models and trading systems. The ideal candidate will have experience working in an evolving startup environment: an in-the-moment problem solver who can think about both the short-term and the long-term plan. You're energized by building, scaling, and being part of a forward-thinking organization. We are building an incredible company and looking for talented, energetic, and motivated people to join our team. You can learn more about our company, culture, and values here: https://www.tactable.io/careers
Responsibilities:
- Build and maintain scalable ETL/ELT pipelines using Databricks
- Leverage PySpark/Spark and SQL to transform and process large datasets
- Integrate data from multiple sources, including Azure Blob Storage, ADLS, and other relational/non-relational systems
- Optimize Databricks workloads for cost efficiency and performance; monitor and troubleshoot data pipelines to ensure reliability and accuracy
- Onboard and integrate new data sources
- Migrate existing data pipelines to new architectures
- Break down large tasks into manageable components and drive them to completion
- Provide technical leadership, supporting a team of data engineers with mentoring and guidance
- Design and maintain workflow and process automation to boost team efficiency and enforce standardization
- Write excellent documentation for yourself, your team, and our clients

Required Core Skills:
- 8-10 years of experience in data engineering
- At least 5 years' experience with mid-to-large Databricks data platform implementations
- Proficiency in the Java ecosystem and strong knowledge of SQL
- Proficiency in Python or similar programming languages (TypeScript, C#, etc.)
- Proficiency with Databricks and data storage, including relational and non-relational databases
- General understanding of continuous integration/continuous deployment (CI/CD) pipelines
- You must be located in Toronto and eligible to work in Canada to be considered for this role

Other Skills:
- Degree in Computer Science, Engineering, or equivalent industry experience
- Experience with data workflow management tools
- Strong communication and teamwork skills
- Strong time management skills and the ability to manage multiple workstreams

What We Offer:
- Hybrid working model
- Comprehensive health benefits
- Generous holidays and flexible PTO
- Laptop/equipment provided
- Potential for professional growth and advancement