A patent application from Apple, published Thursday and first pointed out by Apple Insider, describes headphones that know when the user is speaking and provide new kinds of noise suppression.
The application, No. 20140093093, specifically describes a “system and method for detecting a user’s voice activity using an accelerometer.” Initially filed in March of last year, it was published today by the U.S. Patent and Trademark Office as part of the standard review process.
Voice is detected both by a microphone array in the earbuds and headset wire, and by an accelerometer, or “inertial sensor,” also housed in the earbud.
The accelerometer, placed in the ear canal, can act as a microphone by detecting vibration of the user’s vocal cords through “vibrations in bones and tissue of the user’s head,” which the patent describes as “unvoiced speech.” The system would know when the user is talking because the accelerometer would pick up those bone-conducted vibrations only when the skull’s owner is speaking.
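The patent doesn’t spell out how the detector would work, but the idea of flagging “wearer is talking” from bone-conducted vibration can be sketched with a simple energy threshold. Everything here is an assumption for illustration, including the function name, frame length, and threshold value:

```python
import numpy as np

def detect_voice_activity(accel_signal, frame_len=160, threshold=0.01):
    """Flag frames where bone-conducted vibration energy crosses a threshold.

    Hypothetical sketch: an in-ear accelerometer barely moves for ambient
    sound, but shakes strongly when the wearer's own vocal cords vibrate,
    so per-frame RMS energy above `threshold` (an assumed tuning constant)
    is treated as "user speaking."
    """
    n_frames = len(accel_signal) // frame_len
    frames = accel_signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return rms > threshold

# Simulated data: near-silence while the wearer is quiet, strong
# vibration while the wearer talks (amplitudes are made up).
rng = np.random.default_rng(0)
quiet = 0.001 * rng.standard_normal(320)   # wearer silent
talking = 0.05 * rng.standard_normal(320)  # wearer speaking
flags = detect_voice_activity(np.concatenate([quiet, talking]))
print(flags)  # → [False False  True  True]
```

A loud bystander would raise the earbud microphone’s level but not the accelerometer’s, which is exactly the distinction the patent exploits.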
By comparing signals from the various sources, the setup is designed to “emphasize the user’s speech signals and deemphasize the environmental noise.” In other words, your headphones will know the difference between your voice and the voice of someone speaking loudly nearby — a problem that has plagued Siri since launch.
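The patent doesn’t detail how the signals are compared, but one simple way to “deemphasize the environmental noise” is to use the accelerometer’s voice-activity decision to gate the microphone: attenuate frames where the wearer isn’t speaking. A minimal sketch, with invented names and an assumed attenuation factor:

```python
import numpy as np

def suppress_when_silent(mic, speaking_flags, frame_len=160, atten=0.1):
    """Gate the microphone signal with accelerometer voice-activity flags.

    Hedged illustration (not the patent's actual method): frames where the
    accelerometer says the wearer is silent are scaled down by `atten`,
    so nearby voices and ambient noise are deemphasized while the user's
    own speech passes through untouched.
    """
    out = mic.astype(float).copy()
    for i, speaking in enumerate(speaking_flags):
        if not speaking:
            out[i * frame_len:(i + 1) * frame_len] *= atten
    return out

# Two frames of unit-level sound: wearer silent in the first, talking in
# the second. The silent frame is attenuated; the speaking frame is kept.
mic = np.ones(320)
cleaned = suppress_when_silent(mic, [False, True])
```

Real systems blend the two channels more gradually (spectral suppression rather than a hard gate), but the principle of letting the accelerometer arbitrate is the same.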
The signal detection can also steer beamforming, a signal-processing technique that combines multiple microphone signals to spatially focus sound capture, such as orienting a virtual microphone toward the user’s voice.
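The simplest form of beamforming, delay-and-sum, shows the idea: delay each microphone’s signal so that sound from the steered direction lines up, then average. On-axis sound adds coherently; off-axis sound partially cancels. A toy two-microphone sketch (the delays would in practice come from mic geometry and the steering direction):

```python
import numpy as np

def delay_and_sum(mic_signals, delays):
    """Minimal delay-and-sum beamformer.

    Shift each channel back by its per-channel delay (in samples) and
    average. Signals arriving from the steered direction align and
    reinforce; sounds from other directions stay misaligned and are
    attenuated by the averaging.
    """
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays)]
    return np.mean(aligned, axis=0)

# A periodic "voice" reaches mic 2 three samples after mic 1; steering
# with delays [0, 3] realigns the channels and recovers the voice.
t = np.arange(64)
voice = np.sin(2 * np.pi * t / 16)
mic1 = voice
mic2 = np.roll(voice, 3)
out = delay_and_sum([mic1, mic2], delays=[0, 3])
```

With the accelerometer confirming when the wearer is talking, such a beam could be steered and updated only during the user’s own speech.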
If Apple does implement this patent — and often it just collects the intellectual property without using it — expect to see new generations of EarPods that are much more tuned to your voice, regardless of the environment. But, whatever happens with this application, it’s clear the company is thinking of new ways to use its headphones, including as health sensors.
This article originally appeared on VentureBeat