Data Engineer (m/w)
COHERENT Switzerland AG
We are seeking a skilled and experienced Data Engineer to join our team in the Laser Business Unit at Coherent Corp.
The ideal candidate will have a solid background in data engineering, experience with hybrid data environments
(on-premises and cloud), and a proven track record of managing data pipelines and orchestrating ETL workflows in
production.
Key Responsibilities:
- Develop and maintain scalable data pipelines to support data integration across on-premises and cloud environments.
- Implement and manage ETL processes, ensuring data accuracy and consistency through automated workflows within a
hybrid infrastructure.
- Collaborate with cross-functional teams to design and optimize data architecture for both on-premises and cloud
storage solutions.
- Manage data within a cloud environment (AWS or Azure) and utilize Databricks for data processing and analytics tasks.
- Monitor, troubleshoot, and optimize data pipelines to ensure high performance, reliability, and scalability in a
hybrid setup.
- Enforce data quality standards and implement data governance practices across on-premises and cloud platforms.
- Collaborate with data scientists, analysts, and stakeholders to support data-driven initiatives and provide
technical guidance.
Required Skills and Qualifications:
- Experience: Minimum of 1 year in data engineering, with hands-on experience in production environments, ideally in a
hybrid data infrastructure.
- Technical Skills:
- Proficiency in Python and SQL for data manipulation and automation.
- Experience with relational databases (e.g., Microsoft SQL Server).
- Strong understanding of data modeling, including schema design and normalization.
- Hands-on experience with cloud platforms (AWS or Azure) for data storage, processing, and analytics, alongside
on-premises data systems.
- Proficiency with ETL tools and pipeline orchestration (e.g., Apache Airflow, AWS Glue, Azure Data Factory).
- Experience with big data frameworks like Spark for large-scale data processing.
- Knowledge of version control systems, particularly Git, for collaboration and code management.
- Understanding of data governance and quality standards.
- Data Processing:
- Experience using Databricks or similar platforms for scalable data processing and analytics.
- Soft Skills:
- Strong problem-solving skills with a detail-oriented approach.
- Excellent communication skills for effectively conveying technical details to both technical and non-technical
stakeholders.
- Collaborative mindset with the ability to work effectively within a team environment.
Preferred Skills and Qualifications:
- Familiarity with CI/CD processes and tools (e.g., GitLab CI) for automated testing and deployment.
- Background in the semiconductor or manufacturing industry.
- Experience with data lakehouse architecture and related technologies.
- Experience with NoSQL databases (e.g., MongoDB).
Education and Experience:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 1+ years of experience in data engineering, with a focus on hybrid on-premises and cloud infrastructure, ETL
processes, and data pipeline management in production environments.