Associate Consultant | Senior Analyst
Capgemini, Bangalore, India
August 2021 – July 2023
Overview:
Worked on large-scale cloud data migration and analytics projects for Human Capital Management (HCM) and Corporate Real Estate Analytics (CREA).
For HCM, migrated data from 11 on-premises sources to Snowflake using Azure Data Factory; for CREA, developed the ETL framework used to migrate data from multiple source systems.
Key Contributions:
- Migrated 15+ datasets into a sandbox environment by developing a data migration pipeline with Azure Data Factory and pandas, enabling business users to derive actionable insights for strategic decision-making.
- Validated data accuracy and reduced reporting errors by 90% by designing and executing unit test cases in qTest for 15+ datasets, strengthening client confidence in analytics outcomes.
- Boosted team productivity by 40% by facilitating training sessions for 35+ team members on Azure Databricks, Azure Data Factory, and Python, accelerating cloud adoption and upskilling the team.
- Streamlined SQL object deployments and cut deployment time by 50% by optimizing GitLab CI/CD pipelines, ensuring the timely delivery of client solutions.
- Enabled clients to uncover workforce trends and inform HR strategies by designing complex SQL queries in Snowflake for 10+ Power BI reports supporting the HCM cloud migration.
- Ensured data integrity and reliable reporting by leading data validation for 200+ tables, safeguarding the accuracy of client decision-making.
- Architected an ETL framework in Azure Data Factory for CREA, integrating data from 6+ source systems into Azure SQL Database and delivering real-time insights on facilities, transportation, and employee occupancy.
- Automated complex data flows and reduced manual intervention by 95% by implementing file-based data ingestion for 8+ vendor systems using Azure Data Factory and PySpark, earning the ‘Hercules’ award for automation excellence.
- Enhanced pipeline efficiency and lowered processing time by 40% by advising stakeholders on data dependencies and load patterns, addressing potential bottlenecks.
- Ensured project success and alignment with evolving business needs by engaging with clients through daily scrum meetings, providing timely updates, gathering feedback, and refining deliverables.