AI/ML - Machine Learning Engineer - Multi-modal Systems, Siri Understanding

Seattle, WA 98104
Job Code: 200154868
Summary

Posted: May 10, 2021

Role Number: 200154868

Play a part in the next revolution in human-computer interaction. Contribute to a product that is redefining mobile computing. Create groundbreaking technology for large scale systems, spoken language, computer vision, big data, and artificial intelligence. And work with the people who created the intelligent assistant that helps millions of people get things done - just by asking. Join the Siri multi-modal learning team at Apple.

The Siri team is looking for a bright and talented machine learning engineer to help develop Siri's next-generation multi-modal assistant on Apple's innovative devices and novel features. You should be eager to get involved in hands-on work researching and developing new Siri experiences with multiple input modalities, such as speech, vision, and other sensors.

Key Qualifications

  • Machine learning research and development experience building systems for computer vision, speech recognition, and natural language understanding applications.
  • Fluency in programming languages including, but not limited to, Python and Java.
  • Proficiency in at least one major machine learning framework, such as TensorFlow or PyTorch.
  • Strong understanding of machine learning across modalities such as computer vision, speech processing, and natural language understanding.
  • Consistent track record of researching, inventing, and/or shipping advanced machine learning algorithms.
  • Creativity and curiosity for solving highly complex problems.
  • Outstanding communication and interpersonal skills, with the ability to work well on cross-functional teams.

Description

You will be part of a team responsible for helping research and develop Siri's multi-modal experience across the full range of Apple devices. This position requires a passion for researching and developing multi-modal machine learning algorithms and systems. You will work with the speech, vision, and natural language understanding teams to deliver a great Siri user experience. You must have a "make this happen" attitude and a willingness to work hands-on on building machine learning tools, testing, data collection, and running experiments, as well as with state-of-the-art computer vision, speech, and natural language processing algorithms.

Your key responsibilities in this role are:
  • Researching, designing, and implementing machine learning/deep learning algorithms
  • Benchmarking and fine-tuning machine learning/deep learning algorithms
  • Optimizing algorithms for the real-time and low-power constraints of embedded devices
  • Supporting algorithm integration into Apple products
  • Collaborating with multidisciplinary teams across Apple

Because you'll be working closely with engineers from a number of other teams at Apple, you're a team player who thrives in a fast-paced environment with rapidly changing priorities.

Education & Experience

MS/PhD in Computer Science, Electrical Engineering, or a related field, with a focus on machine learning, computer vision, speech processing, natural language understanding, or similar

