Job Information
Google Staff Software Engineer, AI/ML Telemetry Debugging Tools in Seattle, Washington
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
8 years of experience in software development and with data structures/algorithms.
5 years of experience testing and launching software products, and 3 years of experience with software design and architecture.
5 years of experience with performance, large-scale systems data analysis, visualization tools, or debugging.
5 years of experience in the Machine Learning field.
Experience with distributed systems.
Preferred qualifications:
Experience with IaaS accelerators.
Experience building infrastructure for models, failure diagnosis, and tooling.
Experience in designing and implementing large-scale distributed systems.
Experience with one or more of the following: Kubernetes, Google Kubernetes Engine, GPU programming, TensorFlow, etc.
Google Cloud's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google Cloud's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. You will anticipate our customer needs and be empowered to act like an owner, take action and innovate. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.
The Cloud ML Compute Services team is accountable for defining and driving the overall Cloud ML Compute IaaS product offering and technical strategy. We leverage Google's AI leadership to differentiate GCP and delight our customers with the best ML and high-performance computing (HPC) platform in the world, powered by TPUs, GPUs, and CPUs and supporting all major ML frameworks (TensorFlow, PyTorch, and JAX).
In this role, you will build large-scale distributed systems for ML workload monitoring and diagnostics, applying distributed systems principles and combining them with ML expertise to build systems that provide insight into performance degradation of ML workloads. You will be passionate about solving model convergence problems and building observability capabilities for AI/ML customers.
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
The US base salary range for this full-time position is $189,000-$284,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google (https://careers.google.com/benefits/) .
Drive the technical strategy and roadmap for profiling large-scale ML workloads and debugging workload issues in real time.
Build consensus and alignment across multiple Product Area platforms, Core ML, Google Compute Engine (GCE), and other ML teams to build a system that serves customer ML operations.
Build infrastructure and tooling to diagnose model performance issues, provide remediation steps, and give internal and external customers observability into workloads running on Google Cloud Platform (GCP).
Partner with and empower ML engineers, data scientists, and ML framework teams to optimize model performance on GCP through the tooling and capabilities needed for ideation.
Design and develop systems with incremental milestones, iterating to support newer models launched in the market.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also https://careers.google.com/eeo/ and https://careers.google.com/jobs/dist/legal/OFCCPEEOPost.pdf If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form: https://goo.gl/forms/aBt6Pu71i1kzpLHe2.