Description
Responsibilities:
- Design, develop, and implement scalable and efficient data pipelines to ingest, process, and transform large volumes of data from diverse sources.
- Collaborate with data scientists, business analysts, and software engineers to understand data requirements and develop data solutions that meet business objectives.
- Architect and maintain data warehouses, data lakes, and other data storage solutions to store and manage structured and unstructured data.
- Implement data quality and governance processes to ensure data accuracy, completeness, and consistency across data sets.
- Optimize data infrastructure and performance by tuning database queries, optimizing ETL processes, and implementing caching and indexing strategies.
- Monitor data pipelines and infrastructure to identify and resolve performance issues, data anomalies, and other technical challenges.
- Develop and maintain documentation, standards, and best practices for data engineering processes, methodologies, and tools.
- Stay updated on emerging trends and technologies in data engineering, big data, cloud computing, and data management to drive continuous improvement and innovation.
- Collaborate with IT and security teams to implement data security and privacy measures to protect sensitive data and ensure compliance with regulatory requirements.
- Provide technical guidance and support to junior team members, mentoring them in data engineering best practices and methodologies.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is a plus.
- Minimum of 3 years of experience in data engineering or a similar role, with a strong background in designing and implementing data solutions.
- Proficiency in programming languages such as Python, Java, or Scala for data processing, scripting, and automation tasks.
- Experience with database technologies such as SQL, NoSQL, and distributed data stores (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Familiarity with big data technologies such as Hadoop, Spark, Kafka, and Hive for distributed data processing and analytics.
- Strong understanding of data modeling, ETL processes, data integration, and data warehousing concepts and methodologies.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform, and related services for data storage, processing, and analytics.
- Excellent analytical, problem-solving, and communication skills with the ability to work effectively in a collaborative team environment.
- Knowledge of data security, privacy, and compliance requirements, particularly in regulated environments.
- Fluency in Greek and English.
Job Type: Full-time