About Zeliot
At Zeliot, we are redefining the future of real-time data streaming, empowering enterprises and developers to unlock the full potential of their data with speed, simplicity, and scale. We envision a world where data moves seamlessly, insights are delivered instantly, and innovation happens at the speed of thought.
To bring this vision to life, we created Condense, a next-generation all-in-one data streaming platform that radically simplifies how real-time applications are built, deployed, and scaled. Condense eliminates the complexities traditionally associated with infrastructure management through fully managed Kafka, intelligent autoscaling, and a Bring Your Own Cloud (BYOC) deployment model, freeing developers from operational overhead and enabling them to focus entirely on creating new real-time experiences.
With an AI-driven development framework and a Custom Transformation Framework, Condense allows developers to write, test, and deploy stream processing logic in their preferred programming languages, accelerating innovation and shortening development cycles. This approach enables enterprises to bring new applications to market faster and operate at true cloud-native speed, supported by optimized infrastructure utilization and up to a 40–60% reduction in total cost of ownership (TCO), all while maintaining data sovereignty, performance, and scalability across cloud environments.
Driven by deep domain expertise across connected mobility, IoT, and large-scale data ecosystems, Zeliot extends beyond platform innovation to deliver the complete ecosystem enterprises need to realize their real-time data ambitions. By combining advanced streaming technology with contextual intelligence and industry focus, Zeliot enables organizations to build, scale, and manage real-time applications with exceptional efficiency and measurable business impact.
At Zeliot, streaming becomes effortless, development becomes frictionless, and innovation becomes continuous, transforming how enterprises turn data into decisions and vision into value.
Location: Bangalore, India
Role Summary:
You will be responsible for guiding the technical direction of data engineering projects, ensuring efficient data workflows and driving innovations in real-time and batch data processing. You will work closely with cross-functional teams (Data Science, Infrastructure, App Development) to deliver advanced data management and analytical solutions. The ideal candidate will demonstrate strong technical expertise in modern data platforms, open-source streaming, and containerization.
Key Responsibilities:
- Design and implement scalable data pipelines for real-time and batch processing using open-source technologies such as Apache Kafka, Apache Flink, and Apache Iceberg (see the first sketch after this list).
- Architect and manage data lakes, warehouses, and marts, ensuring optimal structures for storage, retrieval, and analytics.
- Leverage Apache Iceberg to manage large-scale data lakes, incorporating schema evolution, time travel, and other advanced features (see the second sketch after this list).
- Develop and maintain microservices for data ingestion and transformation, leveraging Docker and Kubernetes (K8s) for container orchestration.
- Optimize data workflows and orchestration (Airflow, Luigi, etc.) for reliable and efficient scheduling of complex ETL/ELT pipelines.
- Ensure high availability, fault tolerance, and performance of data platforms by applying best practices in CI/CD, monitoring, and alerting.
- Collaborate with Data Science and BI teams to implement machine learning models at scale, enabling intelligent segmentation, personalization, and real-time decision making.
- Establish and enforce engineering standards, including coding, testing, deployment, and documentation, focusing on performance, security, and scalability.
- Mentor and guide a team of data engineers, fostering technical growth and setting best practice guidelines for development.
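To make the day-to-day work concrete, here is a minimal sketch of the kind of real-time pipeline the first responsibility describes: an Apache Flink job consuming a Kafka topic and applying a simple transformation. The broker address, topic name, group id, and filter predicate are illustrative assumptions, not details of any Zeliot system.

```java
// Minimal Flink DataStream job reading from Kafka (KafkaSource API, Flink 1.15+).
// Broker address, topic, group id, and the filter predicate are assumptions for illustration.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class VehicleEventPipeline {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Source: subscribe to a Kafka topic, starting from the earliest offset.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")      // assumed broker address
                .setTopics("vehicle-events")                // assumed topic name
                .setGroupId("event-pipeline")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source");

        // Transformation: keep only records flagged as alerts, then print them.
        events.filter(e -> e.contains("\"alert\":true"))
              .print();

        env.execute("vehicle-event-pipeline");
    }
}
```

In production, the string stream would typically be deserialized into typed events and written to a durable sink rather than printed.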
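Similarly, here is a small sketch of the Iceberg capabilities called out above, schema evolution and snapshot-based time travel, using Iceberg's Java API. The local HadoopCatalog, warehouse path, table name, and columns are assumptions chosen for illustration.

```java
// Sketch of Apache Iceberg schema evolution and snapshot inspection (time travel).
// The catalog, warehouse path, table name, and columns are illustrative assumptions.
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Schema;
import org.apache.iceberg.Snapshot;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;
import org.apache.iceberg.types.Types;

public class IcebergSchemaDemo {
    public static void main(String[] args) {
        HadoopCatalog catalog =
                new HadoopCatalog(new Configuration(), "file:///tmp/warehouse"); // assumed path

        TableIdentifier id = TableIdentifier.of("telemetry", "trips"); // assumed table
        Schema schema = new Schema(
                Types.NestedField.required(1, "trip_id", Types.LongType.get()),
                Types.NestedField.required(2, "vehicle_id", Types.StringType.get()));

        Table table = catalog.tableExists(id)
                ? catalog.loadTable(id)
                : catalog.createTable(id, schema);

        // Schema evolution: add a column without rewriting existing data files.
        table.updateSchema()
             .addColumn("avg_speed_kmh", Types.DoubleType.get())
             .commit();

        // Time travel: every commit produces a snapshot that can be read back later.
        for (Snapshot s : table.snapshots()) {
            System.out.printf("snapshot %d at %d (%s)%n",
                    s.snapshotId(), s.timestampMillis(), s.operation());
        }
    }
}
```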
Qualifications/Skills:
- 6+ years of experience in data engineering and data management, with a focus on large-scale systems.
- Proficient in Java (or other JVM languages), with a strong understanding of SOLID principles and design patterns, and experienced in building real-time and batch data processing solutions with Apache Kafka, Apache Flink, Apache Spark, and Apache Iceberg.
- Hands-on experience designing data architectures (data lakes, data warehouses, data marts) and building end-to-end ETL/ELT pipelines.
- Strong understanding of containerization (Docker) and orchestration with Kubernetes for deploying scalable microservices.
- Experience with relational databases (Oracle, SQL Server, Postgres, MySQL) and knowledge of best practices in data modelling and optimization.
- Familiarity with DAG schedulers (Airflow, Luigi, etc.) for pipeline orchestration and scheduling.
- Excellent analytical, quantitative, problem-solving, and critical thinking skills.
- Collaborative mindset with the ability to thrive in a fast-paced environment and manage multiple priorities effectively.
- Bachelor's or Master's degree in Computer Science, Information Science, or a related STEM field.
- Exposure to large-scale web event data (web/ad/video/email streams, identity graphs/maps) is a plus.
What We Offer
- Competitive compensation and comprehensive benefits.
- Opportunity to work on a cutting-edge real-time data platform.
- High ownership, autonomy, and impact.
- Collaborative, fast-paced, deep-tech environment.
- Strong focus on learning, growth, and long-term career development.
Want to build the future of real-time systems?
Join Zeliot and help shape how enterprises turn streaming data into real-world impact.