
Senior Data Engineer

Build and maintain a scalable, high-performance data warehouse for analytics
Pune, Maharashtra, India
Senior
Fictiv

Provides a digital manufacturing platform connecting engineers with vetted global suppliers for rapid prototyping and production of custom mechanical parts.

Senior Data Engineer

Fictiv exists to help product innovators create.

Fictiv is a global manufacturing and supply chain company that enables organizations to scale across its four manufacturing centers in India, Mexico, China, and the U.S. Companies use Fictiv to access high-quality production, optimize supply chain logistics, and mitigate supply chain risk, ensuring they can move from prototype to full-scale manufacturing with speed and confidence. To date, Fictiv has delivered more than 35 million commercial and prototype parts for industries such as aerospace, robotics, automotive, and climate tech, helping customers innovate faster, free up precious resources, and drive profitable growth.

Impact In This Role

The Data Intelligence team creates reports and dashboards to provide meaningful insights from the data warehouse. This team works closely with cross-functional stakeholders and leverages multiple Business Intelligence (BI) technologies to enable data-driven decision-making across the organization.

As a Senior Data Engineer on the Data Intelligence team, you will play a key role in designing, managing, and evolving Fictiv's reporting data warehouse and semantic layer. You will partner closely with the Data Intelligence Manager, Data Engineers, Data Analysts, and Product Managers to ensure replicated data from production systems is transformed into scalable, trusted semantic data models that power consistent, accurate self-service reporting. Acting as a bridge between business and technology, you will translate analytical requirements into well-structured data models and transformations that enable reliable dashboards and insights across multiple domains.

In addition, you will collaborate with Infrastructure, Software, and Systems Engineering teams to strengthen database design and query efficiency across the broader data ecosystem. You will provide hands-on guidance on optimal table structures, indexing and clustering approaches, and SQL query patterns that improve performance and scalability. By proactively identifying bottlenecks and recommending architectural and modeling improvements, you will help ensure that both analytical and operational workloads are efficient, maintainable, and aligned with best practices.

This is a hands-on role responsible for leading the design, transformation, implementation, and operation of data solutions across disparate source systems, driving best practices in data modeling, semantic layer design, performance optimization, documentation, and governance. You will plan, design, and operate data pipelines and models using Agile methodologies to deliver high-quality, maintainable, and secure data assets.
You will report to the Data Analytics Manager.

What You'll Be Doing

  • Design efficient, scalable, and maintainable semantic data models that align with business logic and enable intuitive self-service analytics and reporting across multiple domains.
  • Build, maintain, and optimize data transformations and pipelines that transform replicated production data into trusted, analytics-ready models.
  • Collaborate with stakeholders across data, engineering, product, and business teams to resolve data-related issues, clarify requirements, and align on solution design.
  • Provide technical leadership and guidance to Software Engineers and Data Engineers on query performance, optimization techniques, and efficient data access patterns.
  • Drive ongoing improvement of database and table structures, partnering with engineers to validate and refine table design, joins, partitioning/clustering, and other performance-related considerations.
  • Review and advise on SQL query design for both application and analytics use cases, helping teams design queries that are performant, maintainable, and cost-effective.
  • Analyze performance metrics and execution plans to identify bottlenecks in data models, queries, and transformations, and recommend practical tuning and refactoring opportunities.
  • Follow and contribute to SQL coding standards, data modeling best practices, change control procedures, and data warehouse governance processes.
  • Develop and maintain data flow diagrams, data dictionaries, and schema documentation with a focus on clarity, consistency, and enterprise data management.
  • Understand and document end-to-end data flows and dependencies from production systems through the ELT pipeline into Snowflake and downstream BI tools, supporting delivery, release, and change management.
  • Execute and lead data design sessions and technical reviews to gather requirements, validate modeling approaches, and communicate design decisions and trade-offs to stakeholders.
  • Research emerging technologies, patterns, and tools related to Snowflake, dbt, SQL optimization, and semantic modeling, and recommend enhancements to Fictiv's data platform and practices.
  • Help set broad direction and vision for the reporting data platform architecture, influencing data engineering, analytics, and application data design strategy.
  • Partner with internal and external teams for knowledge transfer and smooth handoff in advance of new feature launches or reporting changes.
  • Work effectively with remote and cross-functional teams across time zones, fostering strong communication and collaboration.
What You'll Bring

  • Proven experience working with large and complex datasets and comfort writing advanced SQL queries or leveraging BI tools for analysis and transformation.
  • Deep understanding of data and analytics architecture, including data warehousing concepts, semantic data modeling, and data integration best practices, with a strong focus on Snowflake and ELT pipelines.
  • Strong knowledge of SQL and BI concepts, including dimensional modeling, data warehouse design, and dashboarding; ability to translate analytical needs into robust semantic models.
  • Demonstrated ability to guide and influence database and schema optimization, including improving query performance, advising on efficient access patterns, and ensuring well-structured data models that support scalable workloads.
  • Experience mentoring and guiding Data Engineers and Software Engineers to develop robust, scalable, and maintainable data models, queries, and database structures.
  • Ability to design and maintain data transformation and workload management processes, including job orchestration, dependency handling, scheduling, and metadata management.
  • Excellent analytical skills and experience working with both structured and semi-structured data, from exploration through to production-ready models.
  • Ability to evaluate new tools, database features, and methodologies relative to the existing data stack and propose detailed, pragmatic recommendations for improvement.
  • Strong ability to perform root cause analysis on data quality, performance, and process issues to identify inefficiencies, ensure data integrity, and deliver actionable insights.
  • Clear and effective communicator—capable of providing business context for engineers and technical context for non-engineers, especially around trade-offs in model and schema design.
  • Confident presenting to audiences with mixed levels of technical understanding and experience, including walkthroughs of models, schemas, and query patterns.
  • Curious, resourceful, and adaptable; comfortable navigating ambiguity, learning new technologies (e.g., new Snowflake/dbt features), and evolving processes to improve outcomes.

Desired Traits

  • B.S. in Data Science, Computer Science, Information Systems, or a related field.
  • 15+ years of experience as a Data Engineer, designing and maintaining large-scale data systems, building scalable data models, and delivering analytical solutions through modern BI and data pipeline tools.
  • 10+ years of experience working with modern cloud and on-premise database platforms—such as Snowflake, AWS Redshift, Azure Synapse, Google BigQuery, or traditional RDBMS systems (e.g., PostgreSQL, MySQL, SQL Server, Oracle)—with a strong focus on schema design, query performance, execution plan analysis, and scalable data access patterns, rather than day-to-day operational administration.
  • 8+ years of experience developing and maintaining data transformation workflows using dbt or a similar tool, with expertise in model design, testing, documentation, environment management, deployment automation, and enforcing version control and development best practices.
  • Demonstrated expertise working with complex datasets and advanced SQL, including multi-level CTEs, window functions, and recursive queries, with an emphasis on performance and readability.
  • Experience with BI platforms such as Heap, Sigma Computing, or similar technologies (e.g., Tableau, Looker, or QuickSight), and an understanding of how they interact with warehouse models.
  • Strong knowledge of BI and analytical concepts, including regressions, recursion, sliding window functions, financial metrics, marketing funnel metrics, and logistics and fulfillment analytics.
  • Experience building and maintaining processes that support data transformation, data structures, metadata management, dependency handling, and workload optimization in a modern ELT/warehouse environment.
  • Proven ability to lead and coordinate cross-functional initiatives, including partnering with Software Engineers to refine schemas and queries, conducting interviews, and performing analyses to develop business cases for projects and improvements.
  • Experience delivering training sessions and technical enablement for internal and external stakeholders, especially around data models, schemas, and query best practices during system and process rollouts.
  • Excellent communication and collaboration skills, with the ability to translate complex technical concepts (e.g., query plans, indexing/clustering trade-offs, modeling decisions) for non-technical audiences.
  • Demonstrated leadership and mentoring abilities, fostering technical excellence and knowledge sharing across data and engineering teams, particularly in areas of semantic modeling, SQL craftsmanship, and performance optimization.
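To illustrate the kind of advanced SQL referenced above (multi-level CTEs feeding a window function), here is a minimal, self-contained sketch. The schema, table, and column names are purely hypothetical, and the query runs against SQLite via Python only so the example is runnable; the same pattern applies to Snowflake or any warehouse with window-function support.

```python
import sqlite3

# Hypothetical schema and data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ordered_at TEXT);
    INSERT INTO orders VALUES
        (1, 'acme',   100.0, '2024-01-01'),
        (2, 'acme',   250.0, '2024-01-05'),
        (3, 'globex',  75.0, '2024-01-03');
""")

# Multi-level CTE: first filter/clean, then apply a window function
# to compute a per-customer running revenue total.
rows = conn.execute("""
    WITH cleaned AS (
        SELECT customer, amount, ordered_at
        FROM orders
        WHERE amount > 0
    ),
    ranked AS (
        SELECT customer, amount,
               SUM(amount) OVER (
                   PARTITION BY customer ORDER BY ordered_at
               ) AS running_total
        FROM cleaned
    )
    SELECT customer, amount, running_total
    FROM ranked
    ORDER BY customer, running_total
""").fetchall()

for row in rows:
    print(row)
```

The layered CTEs keep each transformation step readable and independently testable, which is the readability-plus-performance emphasis the role calls for.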

Perks and Benefits

  • Mediclaim
  • Annual paid holidays