Senior Data Engineer

Tel Aviv, Israel · Full-time

About The Position

Guardicore is a segmentation company disrupting the legacy firewall market. Our software-only approach is decoupled from the physical network, providing a faster alternative to firewalls. Built for the agile enterprise, we offer greater security and visibility in the cloud, data center, and endpoint. Our customers include some of the world's largest and most advanced enterprises.

Guardicore Labs is the research arm of the company: a global team of hackers and researchers delivering cutting-edge cybersecurity research and providing analysis, insights, and response methodologies. We help Guardicore's customers and the security community continually enhance their security posture and protect critical business applications and infrastructure.


As a Senior Data Engineer at Guardicore, you will be responsible for the design and creation of Guardicore’s brand-new data pipeline and platform.

This platform will support:

  • Exploration and research of novel data products.
  • Productionizing and maintaining data applications that extend Guardicore's core product offerings with new and improved capabilities.


Role responsibilities:

  • Lead the design, creation, and delivery of a brand-new data pipeline and platform.
  • Work closely with fellow developers and researchers to deliver brand-new data-driven products.
  • Act as a sounding board and technical architect for team members.
  • Help develop the strategy, technical vision, and roadmap for the data platform as a service.


Requirements:

  • 5+ years of experience building and architecting large-scale, production-quality big data pipelines and ETL processes.
  • Experience developing data orchestration infrastructure on large-scale, distributed, cloud-based systems, ideally GCP.
  • Experience designing and creating data-centric RESTful APIs and services.
  • Strong knowledge of containerization, Kubernetes, and cloud-based orchestration technologies (e.g., GKE, AKS, Cloud Run).
  • Experience with big data technologies and storage systems that support batch and streaming pipeline processing, e.g., Apache Spark, Elasticsearch, Kafka, Apache Beam, and Hive.
  • Deep understanding of data quality, metadata management, and semantic datastores.
  • A background in cybersecurity and computer networks is preferred.
