About Zeliot
At Zeliot, we are redefining the future of real-time data streaming, empowering enterprises and developers to unlock the full potential of their data with speed, simplicity, and scale. We envision a world where data moves seamlessly, insights are delivered instantly, and innovation happens at the speed of thought.
To bring this vision to life, we created Condense, a next-generation all-in-one data streaming platform that radically simplifies how real-time applications are built, deployed, and scaled. Condense eliminates the complexities traditionally associated with infrastructure management through fully managed Kafka, intelligent autoscaling, and a Bring Your Own Cloud (BYOC) deployment model, freeing developers from operational overhead and enabling them to focus entirely on creating new real-time experiences.
With an AI-driven development framework and a Custom Transformation Framework, Condense allows developers to write, test, and deploy stream processing logic in their preferred programming languages, accelerating innovation and shortening development cycles. This approach enables enterprises to bring new applications to market faster and operate at true cloud-native speed, supported by optimized infrastructure utilization and a 40–60% reduction in total cost of ownership (TCO), all while maintaining data sovereignty, performance, and scalability across cloud environments.
Driven by deep domain expertise across connected mobility, IoT, and large-scale data ecosystems, Zeliot extends beyond platform innovation to deliver the complete ecosystem enterprises need to realize their real-time data ambitions. By combining advanced streaming technology with contextual intelligence and industry focus, Zeliot enables organizations to build, scale, and manage real-time applications with exceptional efficiency and measurable business impact.
At Zeliot, streaming becomes effortless, development becomes frictionless, and innovation becomes continuous, transforming how enterprises turn data into decisions and vision into value.
Location: Bangalore, India
Employment Type: Full-time
Role Summary:
We are seeking an AI Engineer (4+ years' experience) with expertise in LLMs, contextual autocomplete, and AI agent frameworks. You will design and implement AI-powered features such as code autocomplete, intelligent assistants for Kafka and Kubernetes, observability integrations, and extensible agent frameworks that interface with external systems and Condense backend APIs.

Key Responsibilities:
  • Design and implement AI-powered features within Condense, including developer assistance, intelligent automation, and contextual data exploration.
  • Integrate large language models (LLMs) and related technologies into Condense’s platform in a scalable, secure, and cost-efficient way.
  • Build and extend intelligent assistants/agents that interact with both Condense services (Kafka, pipelines, observability, cloud infrastructure) and external systems.
  • Develop frameworks and APIs that enable extensibility, so customers and partners can plug in custom AI-powered capabilities.
  • Collaborate with backend, frontend, and product teams to create intuitive AI-driven user experiences inside Condense’s developer and operator workflows.
  • Evaluate, fine-tune, and optimize LLMs, embeddings, and retrieval systems to provide context-aware completions and insights.
  • Research emerging trends in applied AI for cloud, data streaming, and DevOps, and bring best practices into the product.
  • Ensure reliability, performance, and cost-efficiency of AI workloads in multi-tenant, cloud-native deployments (BYOC model).
Qualifications/Skills:
  • 4+ years of experience in AI/ML engineering, with focus on LLMs and intelligent assistants.
  • Hands-on experience with LLM APIs (OpenAI, Anthropic, etc.) or open-source models (LLaMA, Mistral, Falcon, etc.).
  • Strong programming skills in Python (primary) and familiarity with backend stacks (Java, Go, Node.js).
  • Experience with retrieval-augmented generation (RAG), embeddings, and vector stores (Pinecone, Weaviate, Milvus, FAISS).
  • Understanding of Kafka and other real-time streaming systems.
  • Knowledge of Kubernetes and cloud-native deployment patterns.
  • Exposure to observability stacks (Prometheus, Grafana) and DevOps workflows.
  • Strong problem-solving aptitude and ability to work across product, engineering, and UX teams.
Good-to-Have Skills:
  • Experience building AI-powered developer tools (autocomplete, code assistants, linters).
  • Familiarity with LangChain, Semantic Kernel, or similar agent frameworks.
  • Experience with multi-agent orchestration for complex workflows.
  • Understanding of cost optimization for AI workloads in SaaS or BYOC deployments.
  • Contributions to open-source AI, streaming, or DevOps ecosystems.
What Success Looks Like:
  • You deliver AI-powered features that enhance developer and operator productivity on Condense.
  • Condense customers experience intuitive, contextual, and reliable AI assistance in their workflows.
  • You establish scalable frameworks that allow internal teams and external users to extend AI capabilities.
  • AI integrations are cost-efficient, performant, and production-ready in multi-cloud/BYOC environments.
  • You continuously suggest and bring in best practices from the applied AI ecosystem, keeping Condense ahead of the curve.
  • You collaborate effectively with product, backend, and frontend teams to ensure AI capabilities are seamlessly embedded into the platform experience.
  • You act as a knowledge resource on LLMs, AI frameworks, and contextual intelligence for the broader engineering team.
What We Offer:
  • Competitive compensation and comprehensive benefits.
  • Opportunity to work on a cutting-edge real-time data platform.
  • High ownership, autonomy, and impact.
  • Collaborative, fast-paced, deep-tech environment.
  • Strong focus on learning, growth, and long-term career development.
Want to build the future of real-time systems?
Join Zeliot and help shape how enterprises turn streaming data into real-world impact.