
Staff Software Engineer

Design and ship the core connector framework features end-to-end for scalable data movement.
Bengaluru, Karnataka, India
Senior
Nexla

Provides a unified data operations platform to discover, integrate, monitor, and govern data across diverse sources at scale.

Staff Software Engineer

Nexla is the leading integration platform, built with AI, for AI. Nexla takes a metadata-driven approach to converge diverse integrations across Data, Documents, Agents, Applications, and APIs into a single design pattern. We accelerate the development of solutions for GenAI, Analytics, and inter-company data. Nexla makes data users and developers up to 10x more productive by delivering a true blend of no-code, low-code, and pro-code interfaces.

Leading companies including DoorDash, LinkedIn, Johnson & Johnson, and LiveRamp trust Nexla for mission-critical data. Nexla is a remote-first company headquartered in San Mateo, California.

At Nexla, our culture is built around our core values: Have Empathy, Be Curious, Be Intellectually Honest, Achieve Excellence, and Remember to Relax. We put our customers at the heart of everything we do, foster a data-driven mindset, take ownership of our work, and believe in the power of teamwork to achieve ambitious goals.

Role

We've been around for 9 years and have real customers, real revenue, and real scale: 300+ billion rows processed daily (and growing) across thousands of active pipelines. But we operate with the intensity of a seed-stage company. Our product-market fit is evolving rapidly as AI reshapes the data integration landscape. We're building a new generation of intelligent, context-aware connectors that changes how the industry thinks about data movement. This role is for builders who want ownership. You will write code every day. You will debug production issues when they happen. You will own entire systems end-to-end, from design to deploy to on-call. If you want to build something that matters with a small, focused team, we're a good fit.

Responsibilities

  • Design and implement core connector framework features—schema evolution, intelligent pagination, rate limiting, circuit breaking—and ship them yourself.
  • Own the full lifecycle of the Connector SDK: build it, document it, dogfood it, iterate on it based on feedback from internal teams and partners.
  • Debug the hard stuff. Trace failures across Kafka clusters, distributed runtimes, and flaky third-party APIs. Fix them, then build the guardrails so they don't recur.
  • Handle "dirty" real-world data. Build self-recovery mechanisms, automated retries, and intelligent error handling that keep pipelines running without human intervention (see the sketch after this list).
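
Retry, backoff, and circuit breaking come up throughout this role, so here is a minimal sketch of those patterns in Java. Everything in it (the ResilientCaller class, its parameters, and the usage example) is hypothetical, illustrating the general technique rather than Nexla's actual Connector SDK:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Callable;

// Illustrative only: retry with exponential backoff behind a simple
// circuit breaker. Names and thresholds are hypothetical, and this is
// not thread-safe; a real implementation would use atomics or a library.
public final class ResilientCaller {

    private final int maxRetries;        // retries after the first attempt
    private final Duration baseBackoff;  // delay before the first retry
    private final int failureThreshold;  // consecutive failures before opening
    private final Duration cooldown;     // how long the breaker stays open

    private int consecutiveFailures = 0;
    private Instant openedAt = null;     // non-null while the breaker is open

    public ResilientCaller(int maxRetries, Duration baseBackoff,
                           int failureThreshold, Duration cooldown) {
        this.maxRetries = maxRetries;
        this.baseBackoff = baseBackoff;
        this.failureThreshold = failureThreshold;
        this.cooldown = cooldown;
    }

    public <T> T call(Callable<T> task) throws Exception {
        // While open, fail fast; once the cooldown elapses, let one
        // probe call through (the half-open state).
        if (openedAt != null) {
            if (Duration.between(openedAt, Instant.now()).compareTo(cooldown) < 0) {
                throw new IllegalStateException("circuit open: failing fast");
            }
            openedAt = null;
        }

        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                T result = task.call();
                consecutiveFailures = 0;  // success closes the breaker
                return result;
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) {
                    // Exponential backoff: base * 2^attempt. Production code
                    // would add jitter and cap the delay.
                    Thread.sleep(baseBackoff.toMillis() * (1L << attempt));
                }
            }
        }

        if (++consecutiveFailures >= failureThreshold) {
            openedAt = Instant.now();     // trip the breaker
        }
        throw last;
    }
}

// Usage with a hypothetical paginated API client:
//   ResilientCaller caller =
//       new ResilientCaller(3, Duration.ofMillis(200), 5, Duration.ofSeconds(30));
//   String page = caller.call(() -> apiClient.fetchPage(cursor));
```

In practice you would likely reach for a library such as Resilience4j or Failsafe rather than hand-rolling this, but the moving parts (bounded retries, growing delays, and a breaker that fails fast while a downstream API is unhealthy) are the same.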

Architect & Evolve (25%)

  • Define the technical direction for the connector framework. Solve the N+1 integration problem: make adding the 1,000th connector as easy as the 10th.
  • Design for multi-tenancy, high concurrency, and low latency from day one. Our platform processes billions of rows; your architecture decisions have immediate, measurable impact.
  • Evaluate and drive technology choices: where to use Java/Kafka, where Rust or alternative runtimes make more sense, and how to reduce infrastructure costs without sacrificing reliability.

Lead & Multiply (15%)

  • Raise the bar through code reviews, design sessions, and pairing. We're a small team; your standards become the team's standards.
  • Work directly with Product and the CTO to translate customer needs and market shifts (GenAI data ingestion, real-time CDC, agentic workflows) into technical plans.
  • Document what you build. We don't separate building from knowledge sharing: RFCs, runbooks, and SDK docs are part of the job, not an afterthought.

Qualifications

Must-Haves

  • 8-15+ years of software engineering, with significant time spent building platforms, frameworks, or SDKs—not just end-user applications.
  • Expert-level Java/Kotlin/Scala. You understand JVM memory management, concurrency models, and non-blocking I/O at a level where you can diagnose a thread starvation issue from a heap dump.
  • Deep experience with distributed systems in production: Kafka, Spark, Flink, or similar. You've operated these systems, not just used them. Bonus points for building something of your own.
  • Comfort with the Modern Data Stack: Snowflake, Databricks, and the alphabet soup of protocols (gRPC, GraphQL, Webhooks, CDC).
  • Startup mindset. You thrive in ambiguity. You're comfortable defining scope on the fly and shipping before the full picture is clear.
  • You've owned production systems and learned the hard way what good operational discipline looks like.
  • Global Collaboration Window: Ability to overlap with morning PST (Pacific Standard Time) working hours for syncs, design reviews, and collaboration with our US-based leadership and engineering teams.

Nice-to-Haves

  • Contributions to open-source data projects (Airbyte, Flink, Debezium, Arrow).
  • Experience at an iPaaS or ELT company (MuleSoft, Fivetran, Informatica, Airbyte).
  • Deep Kubernetes / cloud-native experience. We run on K8s and are migrating to Strimzi for Kafka.
  • Security architecture chops: OAuth 2.0, mTLS, encrypted credential management at scale.

Why This Might Be Worth It

  • Impact at scale from day one. Your code processes billions of rows for companies like DoorDash and LinkedIn.
  • The AI wave is real for us. We're not bolting AI onto a legacy product. Intelligent connectors, context-aware data movement, and agentic workflows are the core of what we're building next.
  • Small team, big problems. You'll have direct access to the CTO, real influence over product direction, and the autonomy to make significant technical bets.
  • Recognized platform with startup energy. The credibility of enterprise validation with the speed and ownership of an early-stage company.

Location: Bengaluru
Workplace type: Hybrid
