Description
We are seeking a highly skilled and motivated Senior Data Engineer to join our growing team. In this role, you will design, build, and maintain robust data pipelines that extract, transform, and load (ETL) data into our data warehouse. You will collaborate with cross-functional teams to ensure data is processed efficiently and optimized for analytics, enabling data-driven decisions across the organization. Your expertise in cloud technologies, data engineering, and pipeline orchestration will be critical to your success in this role.
Requirements
Core Responsibilities:
- Design, implement, and maintain scalable ETL pipelines for extracting data from multiple sources, including APIs, databases, flat files, and streaming data.
- Transform and load data into AWS Redshift, ensuring optimization for analytical workloads.
- Develop and maintain workflows using Apache Airflow for orchestrating ETL jobs and data processing pipelines.
- Write optimized SQL queries for data manipulation, querying, and transformation.
- Utilize Python or Scala to process data, automate tasks, and create reusable data processing scripts.
- Work with AWS services such as S3, Lambda, Glue, and Redshift to create seamless, reliable, and efficient data pipelines.
- Ensure data models are well-structured and optimized for high-performance analytics.
- Collaborate with data scientists, analysts, and other stakeholders to understand data needs and provide solutions.
Required Skills/Abilities:
- 4-6 years of experience in data engineering or a related field.
- Strong expertise in ETL processes and data pipeline development.
- Advanced SQL skills for querying and manipulating large datasets.
- Proficiency in Python or Scala for data processing.
- In-depth experience with AWS services, particularly S3, Lambda, Glue, and Redshift.
- Solid understanding of data modeling, optimization for analytics, and performance tuning.
- Experience with workflow orchestration tools, such as Apache Airflow.
- Strong problem-solving skills and the ability to identify and resolve data-related challenges.
- Ability to work both independently and in a collaborative team environment.
Nice-to-Have Skills:
- Familiarity with Spark and Databricks for distributed data processing.
- Experience with BI tools and analytics platforms.
Education and Experience:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- Relevant experience in data engineering, cloud computing, or a similar field.
Compensation:
$125,400 - $138,000 annually, based on experience. Kindthread encourages applications from candidates at all levels.
Summary
Kindthread is a leading company in the healthcare apparel space, with brands including Scrubs & Beyond, Landau, and White Cross. We are dedicated to providing high-quality apparel and an exceptional customer experience across our brands. Our mission is to elevate the healthcare workforce with products that are functional, stylish, and comfortable.