
Tredence Inc. is hiring for an entry-level (fresher) Associate Data Engineer role in India




Position:

  • Associate Data Engineer

Company:

  • Tredence Inc.

Location:

  • Bangalore Urban, Karnataka, India

Job type:

  • Full-time

Job mode:

  • Onsite

Job requisition id:

  • Not provided

Years of experience:

  • 0-3 years

Company description:

  • Tredence Inc. is a global data science and analytics company that focuses on bridging the gap between creating insights and generating business value from them

  • With its headquarters in San Jose, Tredence has built a reputation for turning data into actionable strategies and outcomes for its clients

  • The company follows a vertical-focused approach and works with several industries, including retail, consumer packaged goods, technology, telecommunications, healthcare, travel, and industrials

  • Tredence is known for its strong culture that values curiosity, continuous learning, and accountability

  • It has been certified as a Great Place to Work and recognized as a leader in customer analytics services

  • Tredence has a diverse team spread across major global hubs, including San Jose, Foster City, Chicago, London, Toronto, Delhi, Chennai, and Bangalore

  • It recently expanded its capabilities by acquiring Further Advisory, strengthening its banking and financial services solutions

  • Tredence partners with some of the largest retailers and CPG companies worldwide, leveraging its advanced AI-driven decision intelligence platform

  • The company’s focus is on delivering data-driven decisions that accelerate client growth and streamline their digital transformations

  • As a key player in data science, Tredence continually invests in engineering, analytics, and quality assurance to meet evolving industry demands

Profile overview:

  • The Associate Data Engineer role at Tredence is designed for individuals eager to work with large datasets and cutting-edge cloud-based data platforms

  • The position requires expertise in data warehousing, data modeling, and implementing scalable data pipelines

  • Successful candidates will need to manage ETL and ELT processes and work on data transformation and analysis using PySpark and SQL (a brief illustrative sketch follows this list)

  • The job involves building data solutions on platforms such as Azure Databricks, Snowflake, and Google Cloud Platform

  • Collaboration is a significant part of the role, as the Associate Data Engineer will work closely with cross-functional teams to understand client requirements and deliver high-quality data solutions

  • The job calls for strong technical skills and the ability to communicate effectively with both technical and non-technical stakeholders

  • Candidates should be comfortable working in a consulting environment and adapting to dynamic project requirements

  • The role also demands ensuring data quality, integrity, and consistency throughout the lifecycle of data projects

  • The company values individuals who can think creatively, learn quickly, and contribute to innovative data solutions

  • Working at Tredence means being part of a fast-growing team that prioritizes excellence and data-driven decision-making
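
For a flavor of the day-to-day engineering work described above, here is a minimal PySpark sketch of a read-transform-write job. It assumes a Databricks-style environment with an active SparkSession; the paths, column names, and table layout are illustrative placeholders rather than details from the posting.

  # Minimal read -> transform -> write sketch in PySpark.
  # All paths and column names below are hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

  # Read raw order data (placeholder path).
  orders = spark.read.parquet("/mnt/raw/orders")

  # Aggregate completed orders into a daily revenue summary.
  daily_revenue = (
      orders
      .filter(F.col("status") == "COMPLETED")
      .groupBy("order_date")
      .agg(
          F.sum("amount").alias("total_revenue"),
          F.countDistinct("customer_id").alias("unique_customers"),
      )
  )

  # Publish the summary for downstream reporting (overwrite for simplicity).
  daily_revenue.write.mode("overwrite").parquet("/mnt/curated/daily_revenue")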

Qualifications:

  • A solid understanding of SQL, including the ability to write advanced queries, joins, window functions, and aggregations (see the short sketch after this list)

  • Familiarity with database management concepts, such as normalization, entity-relationship models, indexing, and table partitioning

  • Proficiency in Python or PySpark for data processing tasks and script automation

  • A clear grasp of data warehousing principles, including the differences between OLTP and OLAP systems, and how data modeling is structured in each case

  • Direct experience in designing ETL and ELT processes, managing incremental loads, and maintaining historical data accuracy

  • Practical experience with data platforms like Databricks or Azure Databricks, Snowflake, Redshift, BigQuery, or similar cloud-based environments

  • Ability to work with orchestration tools like Airflow, Azure Data Factory, or dbt to automate data pipeline workflows

  • Comfort in managing data models, including star and snowflake schemas, and in handling facts and dimension tables

  • Experience in dealing with change data capture and slowly changing dimensions for dynamic data environments

  • Clear and concise communication skills, enabling effective collaboration within cross-functional teams and with external stakeholders
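
To make the SQL expectations above concrete, the snippet below runs a query with a join, a window function, and a filter on the windowed result through spark.sql. The table and column names (customers, orders, and so on) are illustrative assumptions, not schemas from the role.

  # Hedged illustration of "advanced SQL" run from PySpark: a join plus a
  # ROW_NUMBER() window function to keep each customer's most recent order.
  # Table and column names are hypothetical.
  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("sql_window_example").getOrCreate()

  latest_order_per_customer = spark.sql("""
      SELECT customer_id, customer_name, order_id, order_total
      FROM (
          SELECT c.customer_id,
                 c.customer_name,
                 o.order_id,
                 o.order_total,
                 ROW_NUMBER() OVER (
                     PARTITION BY c.customer_id
                     ORDER BY o.order_date DESC
                 ) AS rn
          FROM customers c
          JOIN orders o
            ON o.customer_id = c.customer_id
      ) AS ranked
      WHERE rn = 1
  """)

  latest_order_per_customer.show()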

Additional info:

  • Tredence is experiencing significant growth, with notable increases in engineering, administrative, and legal functions, as well as a doubling of its quality assurance team

  • The company’s current workforce is over 3,000 employees, and it is seeing strong momentum in building out its technical capabilities

  • The position will be part of a highly collaborative environment that values teamwork and open communication

  • The role provides the opportunity to work on client-facing projects, where you can directly contribute to delivering tangible outcomes for customers

  • Tredence’s work in retail and CPG sectors is extensive, with its data models powering more than $2 trillion in annual retail and CPG sales

  • The Associate Data Engineer will be joining a team that is committed to solving real-world business challenges through advanced data solutions

  • Tredence also promotes a strong culture of continuous improvement, with opportunities for growth and upskilling through internal learning initiatives

  • The company’s global reach means working alongside a diverse team that brings together perspectives from different regions and industries

  • You will have the chance to work on projects that have a direct impact on client operations and decision-making processes

  • Tredence is looking for candidates who are not only technically proficient but also proactive and willing to take ownership of projects from start to finish

Key Responsibilities:

  • Design and build efficient data pipelines that are scalable and maintainable for large data processing workloads

  • Optimize data transformation processes by leveraging PySpark, especially within Databricks and Azure Databricks environments

  • Write SQL queries that can handle data transformation, data quality checks, and analytical tasks, with a focus on accuracy and performance

  • Develop ETL and ELT workflows that accommodate both full and incremental data loads, ensuring reliable data delivery to downstream systems (a simplified upsert sketch follows this list)

  • Implement data models that support analytics and reporting, using concepts such as star and snowflake schemas and properly structuring fact and dimension tables

  • Work with data concepts like change data capture and slowly changing dimensions to ensure data is current and relevant for business decision-making

  • Engage with other team members and stakeholders to identify data requirements and offer technical solutions that meet or exceed project expectations

  • Maintain and monitor data quality, data consistency, and data integrity across systems to uphold the standards of Tredence’s data services

  • Communicate technical issues and progress updates in an accessible manner to team members who may not have a technical background

  • Adapt quickly to changing project requirements and collaborate seamlessly within a fast-paced, client-focused environment
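
As a simplified picture of the incremental-load and change-data-capture responsibilities above, the sketch below filters newly changed rows with a watermark and upserts them with MERGE. It assumes Delta tables on Databricks; the table names, key, and updated_at column are hypothetical, and a production pipeline would persist the watermark and handle slowly changing dimension history explicitly.

  # Watermark-based incremental pickup plus a MERGE upsert, assuming Delta
  # tables on Databricks. Names and the watermark value are placeholders.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("incremental_customer_load").getOrCreate()

  # Only pick up rows that changed since the last successful run.
  last_loaded_at = "2024-01-01T00:00:00"   # would normally come from run metadata
  changes = (
      spark.table("raw.customers")
           .filter(F.col("updated_at") > F.lit(last_loaded_at))
  )
  changes.createOrReplaceTempView("customer_changes")

  # Upsert into the curated table: update existing keys, insert new ones
  # (a Type 2 slowly changing dimension would add effective-date handling here).
  spark.sql("""
      MERGE INTO curated.dim_customer AS target
      USING customer_changes AS source
        ON target.customer_id = source.customer_id
      WHEN MATCHED THEN UPDATE SET *
      WHEN NOT MATCHED THEN INSERT *
  """)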

Technical Skills:

  • SQL expertise, particularly with complex joins, subqueries, window functions, and aggregate operations for robust data handling

  • A strong grasp of database structures, including primary and foreign key relationships, indexes for performance, and data normalization techniques

  • Programming experience in Python or PySpark to support data manipulation and pipeline creation

  • Familiarity with cloud-based data warehouse tools and technologies, especially Databricks and Azure Databricks, for building data solutions

  • Hands-on experience in data transformation and ETL/ELT pipeline construction, leveraging cloud orchestration and automation tools for efficiency

  • Knowledge of data modeling strategies, from normalized relational models to analytic-focused star and snowflake schemas

  • Ability to manage incremental data loads to ensure systems have the latest data while preserving historical records for analysis

  • Exposure to best practices for managing data change over time, particularly using change data capture and slowly changing dimensions

  • Experience with data orchestration frameworks, which ensure data flows smoothly and reliably between systems (a minimal orchestration sketch follows this list)

  • Understanding of how data quality impacts business outcomes and a commitment to delivering trustworthy data
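
Since the skills above mention orchestration frameworks, here is a minimal Airflow DAG skeleton wiring extract, transform, and load steps into a daily schedule. The DAG id, schedule, and task bodies are illustrative assumptions; equivalent pipelines could be expressed in Azure Data Factory or dbt instead.

  # Minimal Airflow 2.x DAG sketch: three placeholder tasks run in sequence.
  # The dag_id, schedule, and task logic are hypothetical.
  from datetime import datetime

  from airflow import DAG
  from airflow.operators.python import PythonOperator


  def extract():
      print("pull source data")              # placeholder extract step


  def transform():
      print("run PySpark / SQL transforms")  # placeholder transform step


  def load():
      print("publish to the warehouse")      # placeholder load step


  with DAG(
      dag_id="daily_sales_pipeline",
      start_date=datetime(2024, 1, 1),
      schedule_interval="@daily",
      catchup=False,
  ) as dag:
      t_extract = PythonOperator(task_id="extract", python_callable=extract)
      t_transform = PythonOperator(task_id="transform", python_callable=transform)
      t_load = PythonOperator(task_id="load", python_callable=load)

      t_extract >> t_transform >> t_load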



Please click here to apply.

