Position: Associate Data Analyst
Company: Bristlecone
Location: Mumbai, Maharashtra, India
Job type: Full-time
Job mode: Onsite
Job requisition ID: 16342
Years of experience: 0-3 years
Company Description
- Bristlecone is a global leader in AI-driven application transformation and digital innovation, with an extensive focus on the connected supply chain.
- The company is a trusted strategic partner to Global 2000 companies in sectors such as retail, life sciences, manufacturing, and consumer goods.
- With deep roots in digital logistics, smart procurement, and autonomous planning, Bristlecone delivers solutions that modernize supply chains and foster business agility.
- Headquartered in San Jose, California, the company maintains active offices across North America, Europe, and Asia.
- Bristlecone is a subsidiary of the Mahindra Group, a $19.4 billion global enterprise.
- Its client-first approach emphasizes speed, automation, visibility, and resilience, helping clients not only adapt to change but thrive amid it.
- The company is recognized by Gartner as one of the top ten supply chain service providers.
- Its 2,500+ professionals worldwide contribute to a culture built on integrity, collaboration, learning, and digital disruption.
Profile Overview
- This role suits individuals with a foundational understanding of data science, analytics, and data engineering concepts, especially fresh graduates and early-career professionals eager to grow in a dynamic environment.
- As an Associate Data Analyst, you'll support the data engineering and analytics teams in designing, developing, and maintaining efficient data pipelines and systems.
- The role demands collaboration across departments to ensure data quality, integrity, and usability in support of analytical and strategic initiatives.
- You'll be exposed to modern tools and technologies for big data, cloud computing, ETL, and workflow orchestration.
- You'll work on projects that transform raw data into meaningful insights, contributing to business decision-making and operational improvements.
- The position offers significant learning opportunities and the chance to develop technical and business skills in a supportive, team-based setting.
- The job is a gateway into the high-demand field of data engineering and analytics within a leading IT services company.
- Through hands-on experience, mentorship, and structured project involvement, you'll build a robust technical foundation and business acumen.
Qualifications
- Basic proficiency in programming languages commonly used for data processing, such as Python or Java, along with SQL.
- Understanding of database systems, particularly relational databases such as MySQL or PostgreSQL.
- Foundational knowledge of data warehousing principles, including how data is structured, stored, and accessed efficiently.
- Some exposure to cloud platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure.
- Ability to understand and implement data models and schema structures.
- Interest in, or preliminary experience with, big data tools such as Apache Spark, Hadoop, or Kafka is a plus.
- Analytical thinking and the ability to approach problems logically and creatively.
- Good verbal and written communication skills for documentation and collaboration with team members.
- A proactive attitude toward learning and upskilling with new tools and platforms.
- Familiarity with Agile or Scrum-based work environments is beneficial but not mandatory.
Additional Info
- Bristlecone is committed to equal employment opportunity, embracing diversity in all forms and ensuring a discrimination-free workplace.
- Candidates are expected to be mindful of information security standards and to follow the company's protocols for safeguarding internal systems and data.
- The role encourages active participation in the organization's knowledge-sharing and training sessions.
- Teamwork and collaboration are crucial, as most tasks involve working alongside senior engineers, analysts, and project stakeholders.
- Flexibility to adapt to new roles and responsibilities is expected as business needs evolve.
- Employees are encouraged to pursue certification programs and continuous-learning platforms to deepen their expertise.
- You'll work in a vibrant environment that promotes innovation, sustainability, and long-term career development.
- Regular feedback, performance reviews, and mentorship are part of the organizational culture and guide professional growth.
Responsibilities
- Assist in the design and development of scalable data pipelines that extract, transform, and load (ETL) data from various sources into centralized data lakes or data warehouses.
- Collaborate with data architects and senior engineers to map out data processing workflows and identify improvement opportunities.
- Write efficient SQL and Python code to automate recurring data tasks and ensure consistency in reporting.
- Support the integration of structured and unstructured data sources using APIs and data connectors.
- Help develop automated workflows for regular and ad-hoc data processing tasks.
- Contribute to the optimization of database queries and storage mechanisms to improve speed and performance.
- Troubleshoot operational issues in data processing and assist with root-cause analysis to prevent recurrence.
- Create and maintain documentation covering data pipelines, architecture, workflows, and system configurations.
- Participate in knowledge-exchange sessions within the team and contribute ideas for process improvements.
- Stay current with emerging data tools and technologies to enhance project outcomes and increase productivity.
Technical Skills (Nice to Have)
- Exposure to data visualization platforms such as Tableau, Power BI, or Looker to assist in presenting data insights.
- Understanding of APIs and techniques for integrating data from multiple sources.
- Working knowledge of Git or another version control system for managing code repositories.
- Familiarity with scheduling and orchestration tools such as Apache Airflow or Luigi.
- Awareness of industry-standard practices in data governance and privacy.
- Comfort working in cloud-based environments for deploying and scaling data solutions.
- Openness to exploring open-source technologies and contributing to internal R&D efforts.
Soft Skills & Work Ethic
- Strong interpersonal skills and the ability to work well in team settings.
- Adaptability to fast-paced and evolving technical environments.
- Attention to detail, especially in data validation and error handling.
- Clear communication, especially when translating technical information for non-technical stakeholders.
- Willingness to take ownership of tasks and contribute proactively.
- A positive attitude toward feedback and a growth mindset.