Harmonya - Senior Data Engineer

  • Harmonya
  • Tel Aviv, Israel
About The Position

Harmonya is an AI-powered product data classification and enrichment platform that helps retailers and manufacturers overcome the limitations of legacy data structures and unlock the true value of their data. Leveraging proprietary ML and AI models, Harmonya synthesizes trillions of alternative data points to generate a holistic, dynamic view of products sold across the country.

We’re seeking talented data engineers to join our rapidly growing team, which includes senior software and data engineers. Together, we drive our data platform from acquisition and processing to enrichment, delivering valuable business insights. Join us in designing and maintaining robust data pipelines and make an impact in our collaborative, innovative workplace.

Responsibilities

  • Design, implement, and optimize scalable data pipelines for efficient data processing and analysis.
  • Build and maintain robust data acquisition systems to gather, process, and store data from various sources.
  • Work closely with DevOps, Data Science, and Product teams to understand their requirements and deliver data solutions that meet business objectives.
  • Proactively monitor data pipelines and production environments to identify and resolve issues promptly.
  • Implement best practices for data security, integrity, and performance.
  • Mentor and guide junior team members, sharing expertise and fostering their professional development.

Requirements:

  • 6+ years of experience in data or backend engineering, ideally with strong Python proficiency for data tasks.
  • Demonstrated experience in designing, developing, and delivering sophisticated data applications.
  • Ability to thrive under pressure, consistently deliver results, and make strategic prioritization decisions in challenging situations.
  • Hands-on experience with data pipeline orchestration and data processing tools, particularly Apache Airflow and Spark.
  • Deep experience with public cloud platforms, preferably GCP, and expertise in cloud-based data storage and processing.
  • Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
  • Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent experience.

Advantage:

  • Familiarity with data science tools and libraries.
  • Experience with Docker containers and Kubernetes.
