Description
Responsibilities:
- Lead the design, development, and deployment of data solutions on the AWS cloud platform, including data ingestion, processing, storage, and analytics.
- Architect and implement scalable, reliable data pipelines using AWS services such as Amazon S3, Glue, EMR, Redshift, Athena, and Lambda (a minimal sketch follows this list).
- Design and build data lake architectures to consolidate and organize structured and unstructured data from diverse sources for analytics and reporting purposes.
- Collaborate with cross-functional teams, including data scientists, business analysts, and software developers, to understand data requirements and deliver high-quality solutions.
- Optimize data infrastructure and query performance by applying best practices for data modeling, indexing, and partitioning.
- Implement data security and governance measures to ensure compliance with regulatory requirements and industry standards.
- Monitor and troubleshoot data pipelines and infrastructure, identifying and resolving issues promptly.
- Develop and maintain documentation, standards, and best practices for data engineering processes and methodologies.
- Stay updated on emerging trends and technologies in data engineering, cloud computing, and AWS services to drive continuous improvement and innovation.
- Mentor and coach junior team members, providing guidance and support in developing their skills and expertise in AWS data engineering.
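For context, the sketch below illustrates the kind of pipeline step this role covers: a small PySpark job, as it might run on an EMR cluster or as an AWS Glue Spark job script, that reads raw CSV from S3 and writes date-partitioned Parquet so downstream Athena or Redshift Spectrum queries scan less data. This is an illustrative example only, not part of the posting; the bucket paths and column names (example-data-lake, order_id, order_ts) are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw CSV landed in the data lake's raw zone (hypothetical bucket/path).
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-data-lake/raw/orders/")
)

# Light cleanup: drop rows missing the key, derive a date column
# (order_id and order_ts are assumed column names) to partition by.
cleaned = (
    orders
    .dropna(subset=["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write curated Parquet partitioned by date; partition pruning then lets
# Athena/Redshift Spectrum skip irrelevant data at query time.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-data-lake/curated/orders/")
)
```

Partitioning the curated zone by a date column is a common choice here because most analytical queries filter on time ranges, which keeps scan costs low.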
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is a plus.
- Minimum of 5 years of experience in data engineering, with at least 3 years of hands-on experience in designing and implementing data solutions on the AWS cloud platform.
- Strong proficiency in AWS cloud services, particularly in data-related services such as S3, Glue, EMR, Redshift, Athena, and Lambda.
- Expertise in data modeling, ETL processes, data integration, and data warehousing concepts and methodologies.
- Proficiency in programming languages such as Python, Scala, or Java for data processing and scripting tasks.
- Experience with big data technologies such as Hadoop, Spark, or Kafka is desirable.
- Solid understanding of data security, privacy, and compliance requirements in cloud environments.
- Excellent analytical, problem-solving, and communication skills with the ability to work effectively in a fast-paced, collaborative environment.
- AWS certifications such as AWS Certified Data Analytics - Specialty (formerly AWS Certified Big Data - Specialty) or AWS Certified Solutions Architect - Associate are a plus.
- Strong leadership and mentoring skills with the ability to coach and develop junior team members.
Job Type: Full-time