About Zeliot
At Zeliot, we are redefining the future of real-time data streaming, empowering enterprises and developers to unlock the full potential of their data with speed, simplicity, and scale. We envision a world where data moves seamlessly, insights are delivered instantly, and innovation happens at the speed of thought.
To bring this vision to life, we created Condense, a next-generation all-in-one data streaming platform that radically simplifies how real-time applications are built, deployed, and scaled. Condense eliminates the complexities traditionally associated with infrastructure management through fully managed Kafka, intelligent autoscaling, and a Bring Your Own Cloud (BYOC) deployment model, freeing developers from operational overhead and enabling them to focus entirely on creating new real-time experiences.
With an AI-driven development framework and a Custom Transformation Framework, Condense allows developers to write, test, and deploy stream processing logic in their preferred programming languages, accelerating innovation and shortening development cycles. This approach enables enterprises to bring new applications to market faster and operate at true cloud-native speed, supported by optimized infrastructure utilization and a 40–60% reduction in total cost of ownership (TCO), all while maintaining data sovereignty, performance, and scalability across cloud environments.
Driven by deep domain expertise across connected mobility, IoT, and large-scale data ecosystems, Zeliot extends beyond platform innovation to deliver the complete ecosystem enterprises need to realize their real-time data ambitions. By combining advanced streaming technology with contextual intelligence and industry focus, Zeliot enables organizations to build, scale, and manage real-time applications with exceptional efficiency and measurable business impact.
At Zeliot, streaming becomes effortless, development becomes frictionless, and innovation becomes continuous, transforming how enterprises turn data into decisions and vision into value.
Location: Bangalore, India
Role Summary:
You will be responsible for designing, building, and optimizing scalable data pipelines and platforms. The ideal candidate has hands-on experience with Databricks, Spark, Kubernetes, Docker, and Kafka, along with strong proficiency in SQL and Python. You will work closely with data scientists, analysts, and business teams to deliver reliable and efficient data solutions.
Key Responsibilities:
- Develop Python services and deploy them on Kubernetes clusters.
- Develop and maintain Docker-based deployments for microservices and data processing workloads.
- Build real-time and batch data ingestion pipelines using Kafka.
- Design, develop, and maintain scalable data pipelines using Databricks, Spark, and PySpark.
- Implement and optimize ETL/ELT workflows for structured and unstructured data.
- Optimize workloads using Spark performance tuning and partitioning strategies.
- Ensure high availability, fault tolerance, and performance of data platforms by applying best practices in CI/CD, monitoring, and alerting.
- Establish and enforce engineering standards, including coding, testing, deployment, and documentation, focusing on performance, security, and scalability.
Data Platform & Architecture
- Contribute to high-performance data lake/data warehouse architecture.
- Implement data quality, validation, and monitoring frameworks.
- Manage data workflows and orchestration using CI/CD pipelines (Git, Azure DevOps, Bitbucket).
Cloud & Infrastructure
- Deploy and manage containerized data applications using Kubernetes & Docker.
- Collaborate with DevOps teams to ensure reliable and automated deployments.
- Ensure security, reliability, and scalability of data infrastructure.
Version Control & Collaboration
- Utilize Git, Azure DevOps (ADO), and Bitbucket for version control, code reviews, and CI/CD.
- Document technical designs, data flow diagrams, and deployment processes.
- Collaborate with cross-functional teams, including data scientists and business stakeholders.
Qualifications/Skills:
- 3+ years of experience in data engineering and data management, with a focus on large-scale systems.
- Bachelor's or Master's degree in Computer Science, Information Science, or a related STEM field.
- Excellent analytical, quantitative, problem-solving, and critical thinking skills.
- Collaborative mindset with the ability to thrive in a fast-paced environment and manage multiple priorities effectively.
What We Offer:
- Competitive compensation and comprehensive benefits.
- Opportunity to work on a cutting-edge real-time data platform.
- High ownership, autonomy, and impact.
- Collaborative, fast-paced, deep-tech environment.
- Strong focus on learning, growth, and long-term career development.
Want to build the future of real-time systems?
Join Zeliot and help shape how enterprises turn streaming data into real-world impact.