Campus Pride Jobs


Job Information

Nielsen Senior Software Developer in Bangalore, India

At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you to succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future.

The Global Outcomes team is looking for a Senior Data Engineer to help us bring data-intensive products to market, maintain existing products for our clients, and work closely with our data science teams in a cloud-native, Python, Java, and Spark-heavy big data stack. A typical day in this role includes attending a standup with data engineers, data scientists, and our product owner; discussing, with upstream Nielsen teams, the data sources we're integrating into our machine learning pipelines; guiding data scientists in how to access the data; and turning their analyses into production-ready Spark code that runs on Airflow.
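The posting itself contains no code, but the workflow it describes — taking an analyst's ad-hoc computation and hardening it into a tested, reusable function before porting it to Spark — can be sketched in plain Python. All names here are illustrative assumptions, not from the posting:

```python
from collections import defaultdict


def daily_totals(events):
    """Aggregate (date, amount) event pairs into per-date totals.

    An analyst might compute this inline in a notebook; wrapping it
    in a pure function makes it unit-testable, and the same logic
    maps naturally onto a Spark groupBy/sum once it moves into an
    Airflow-scheduled pipeline.
    """
    totals = defaultdict(float)
    for date, amount in events:
        totals[date] += amount
    return dict(totals)
```

The point of the refactor is not the arithmetic but the shape: a pure function with no I/O is trivial to test, which is what "production-ready" tends to mean in practice.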

Role Details:

  • Work with other data engineers, data scientists, architects, and product owners on an agile scrum team that delivers products to production.

  • Gather, analyze, and convert business requirements into AWS (Amazon Web Services) cloud-based solutions.

  • Design and build systems that load and transform a large volume of structured and semi-structured data.

  • When we say “big data” we don’t mean a few gigabytes – we work with multi-terabyte datasets on a daily basis.

  • Build and test cloud-based data pipelines and applications (primarily in Python and Apache Spark + SQL) for new and existing backend systems.

  • Write reusable, well-tested code and components (e.g. RESTful APIs, Python packages, etc.) that can be used by multiple project teams.

  • Assist in troubleshooting and debugging of ETL code and resolving data integrity issues alongside our data scientists and client-facing customer success teams.

  • Work in a serverless environment.

  • We don’t maintain VMs nor do we manually deploy infrastructure.

  • Automation and scalability are critical.

  • Write code with performance, maintainability, scalability, and reliability in mind.

  • Our tech stack: Python, Apache Spark, SQL, Apache Airflow, Hive, AWS Glue, AWS Athena, AWS EC2, AWS S3, AWS CodeBuild, AWS CloudFormation, YARN, Git, RESTful Microservices, Kubernetes (k8s).
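One responsibility above — loading and transforming semi-structured data — commonly starts by flattening nested JSON (e.g. logs on S3) into flat columns before loading into Hive or Athena tables. A minimal pure-Python sketch, with hypothetical names not taken from the posting:

```python
import json


def flatten_record(raw, sep="."):
    """Flatten one nested JSON record into a flat dict of columns.

    Semi-structured inputs are often normalized like this before
    being written to columnar storage; in the actual pipeline the
    same transform would typically run per-row under PySpark.
    """
    record = json.loads(raw) if isinstance(raw, str) else raw
    flat = {}

    def walk(obj, prefix=""):
        for key, value in obj.items():
            name = f"{prefix}{sep}{key}" if prefix else key
            if isinstance(value, dict):
                walk(value, name)  # recurse into nested objects
            else:
                flat[name] = value

    walk(record)
    return flat
```

Keeping the transform free of Spark imports lets it be unit-tested locally and reused across jobs, which is in the spirit of the "reusable, well-tested components" bullet.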

Role Qualifications:

  • Minimum Requirements: Master’s degree in computer science, engineering, or a related field with an information technology focus (foreign equivalent degree acceptable) plus 3 years of experience in software design and development.

  • Or a Bachelor’s degree in computer science, engineering, or a related field with an information technology focus (foreign equivalent degree acceptable) plus 5 years of experience in software design and development.

  • This must include: 3 years of experience delivering end-to-end applications and pipelines (including architecting open source-based ETL pipelines and designing, building, and implementing big data solutions).

  • 2 years of experience with AWS, Azure, or Google Cloud Platform, preferably in a serverless tech stack.

  • Designing and developing Apache Spark-based applications using Python (PySpark) or Scala and Spark SQL.

  • Comfort with the Linux command line, Git, Agile Scrum, and at least one data orchestration tool (e.g. Apache Airflow, Luigi, Azkaban, AWS Data Pipeline, Oozie).
