US9219964B2 - Hearing assistance system with own voice detection - Google Patents

Hearing assistance system with own voice detection

Info

Publication number
US9219964B2
US9219964B2 (application US14/464,149; US201414464149A)
Authority
US
United States
Prior art keywords
voice
wearer
hearing assistance
microphone
assistance device
Prior art date
Legal status
Active
Application number
US14/464,149
Other versions
US20150043765A1
Inventor
Ivo Merks
Current Assignee
Starkey Laboratories Inc
Original Assignee
Starkey Laboratories Inc
Priority date
Filing date
Publication date
Priority claimed from US12/749,702 (external priority: US8477973B2)
Priority to US14/464,149 (US9219964B2)
Application filed by Starkey Laboratories Inc
Publication of US20150043765A1
Assigned to STARKEY LABORATORIES, INC. (assignment of assignors interest; assignor: MERKS, IVO)
Priority to DK15181620.4T (DK2988531T3)
Priority to EP18195310.0A (EP3461148B1)
Priority to EP15181620.4A (EP2988531B1)
Priority to US14/976,711 (US9712926B2)
Publication of US9219964B2
Application granted
Priority to US15/651,459 (US10225668B2)
Assigned to CITIBANK, N.A., AS ADMINISTRATIVE AGENT (notice of grant of security interest in patents; assignor: STARKEY LABORATORIES, INC.)
Priority to US16/290,131 (US10652672B2)
Priority to US16/871,791 (US11388529B2)
Legal status: Active
Anticipated expiration: not stated

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50: Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/505: Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78: Detection of presence or absence of voice signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40: Arrangements for obtaining a desired directivity characteristic
    • H04R25/405: Arrangements for obtaining a desired directivity characteristic by combining a plurality of transducers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00: Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/40: Arrangements for obtaining a desired directivity characteristic
    • H04R25/407: Circuits for combining signals of a plurality of transducers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2410/00: Microphones
    • H04R2410/01: Noise reduction using microphones having different directional characteristics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00: Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01: Hearing devices using active noise cancellation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00: Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/05: Electronic compensation of the occlusion effect
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00: Circuits for transducers, loudspeakers or microphones
    • H04R3/005: Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones

Definitions

  • This application relates to hearing assistance systems, and more particularly, to hearing assistance systems with own voice detection.
  • Hearing assistance devices are electronic devices that amplify sounds above the audibility threshold to a hearing impaired user. Undesired sounds such as noise, feedback and the user's own voice may also be amplified, which can result in decreased sound quality and benefit for the user. It is undesirable for the user to hear his or her own voice amplified. Further, if the user is using an ear mold with little or no venting, he or she will experience an occlusion effect where his or her own voice sounds hollow (“talking in a barrel”). Finally, if the hearing aid has a noise reduction/environment classification algorithm, the user's own voice can be wrongly detected as desired speech.
  • One proposal to detect voice adds a bone conduction microphone to the device.
  • However, the bone conduction microphone can only be used to detect the user's own voice, has to make good contact with the skull in order to pick up the user's own voice, and has a low signal-to-noise ratio.
  • Another proposal to detect voice adds a directional microphone to the hearing aid, and orients the microphone toward the mouth of the user to detect the user's voice.
  • However, the effectiveness of the directional microphone depends on the directivity of the microphone and the presence of other sound sources, particularly sound sources in the same direction as the mouth.
  • Another proposal to detect voice provides a microphone in the ear canal and only uses that microphone to record an occluded signal.
  • Another proposal attempts to use a filter to distinguish the user's voice from other sound. However, the filter is unable to self-correct to accommodate changes in the user's voice or in the user's environment.
  • the present subject matter provides apparatus and methods to use a hearing assistance device to detect a voice of the wearer of the hearing assistance device.
  • Embodiments use an adaptive filter to provide a self-correcting voice detector, capable of automatically adjusting to accommodate changes in the wearer's voice and environment.
  • Examples are provided, such as an apparatus configured to be worn by a wearer who has an ear and an ear canal.
  • the apparatus includes a first microphone adapted to be worn about the ear of the person, a second microphone adapted to be worn about the ear canal of the person and at a different location than the first microphone, a sound processor adapted to process signals from the first microphone to produce a processed sound signal, and a voice detector to detect the voice of the wearer.
  • the voice detector includes an adaptive filter to receive signals from the first microphone and the second microphone.
  • an apparatus includes a housing configured to be worn behind the ear or over the ear, a first microphone in the housing, and an ear piece configured to be positioned in the ear canal, wherein the ear piece includes a microphone that receives sound from the outside when positioned near the ear canal.
  • Various voice detection systems employ an adaptive filter that receives signals from the first microphone and the second microphone and detects the voice of the wearer using a peak value for coefficients of the adaptive filter and an error signal from the adaptive filter.
  • the present subject matter also provides methods for detecting a voice of a wearer of a hearing assistance device where the hearing assistance device includes a first microphone and a second microphone.
  • An example of the method is provided and includes using a first electrical signal representative of sound detected by the first microphone and a second electrical signal representative of sound detected by the second microphone as inputs to a system including an adaptive filter, and using the adaptive filter to detect the voice of the wearer of the hearing assistance device.
  • the present subject matter further provides apparatus and methods to use a pair of left and right hearing assistance devices to detect a voice of the wearer of the pair of left and right hearing assistance devices.
  • Embodiments use the outcome of detection of the voice of the wearer performed by the left hearing assistance device and the outcome of detection of the voice of the wearer performed by the right hearing assistance device to determine whether to declare a detection of the voice of the wearer.
  • FIGS. 1A and 1B illustrate a hearing assistance device with a voice detector according to one embodiment of the present subject matter.
  • FIG. 2 demonstrates how sound can travel from the user's mouth to the first and second microphones illustrated in FIG. 1A .
  • FIG. 3 illustrates a hearing assistance device according to one embodiment of the present subject matter.
  • FIG. 4 illustrates a voice detector according to one embodiment of the present subject matter.
  • FIGS. 5-7 illustrate various processes for detecting voice that can be used in various embodiments of the present subject matter.
  • FIG. 8 illustrates one embodiment of the present subject matter with an “own voice detector” to control an active noise canceller for occlusion reduction.
  • FIG. 9 illustrates one embodiment of the present subject matter offering a multichannel expansion, compression and output control limiting algorithm (MECO).
  • FIG. 10 illustrates one embodiment of the present subject matter which uses an “own voice detector” in an environment classification scheme.
  • FIG. 11 illustrates a pair of hearing assistance devices according to one embodiment of the present subject matter.
  • FIG. 12 illustrates a process for detecting voice using the pair of hearing assistance devices.
  • Various embodiments disclosed herein provide a self-correcting voice detector, capable of reliably detecting the presence of the user's own voice through automatic adjustments that accommodate changes in the user's voice and environment.
  • the detected voice can be used, among other things, to reduce the amplification of the user's voice, control an anti-occlusion process and control an environment classification process.
  • the present subject matter provides, among other things, an “own voice” detector using two microphones in a standard hearing assistance device.
  • standard hearing aids include behind-the-ear (BTE), over-the-ear (OTE), and receiver-in-canal (RIC) devices.
  • RIC devices have a housing adapted to be worn behind the ear or over the ear.
  • the RIC electronics housing is called a BTE housing or an OTE housing.
  • one microphone is the microphone as usually present in the standard hearing assistance device, and the other microphone is mounted in an ear bud or ear mold near the user's ear canal.
  • the microphone is directed to detection of acoustic signals outside and not inside the ear canal.
  • the two microphones can be used to create a directional signal.
  • FIG. 1A illustrates a hearing assistance device with a voice detector according to one embodiment of the present subject matter.
  • the figure illustrates an ear with a hearing assistance device 100 , such as a hearing aid.
  • the illustrated hearing assistance device includes a standard housing 101 (e.g. behind-the-ear (BTE) or on-the-ear (OTE) housing) with an optional ear hook 102 and an ear piece 103 configured to fit within the ear canal.
  • a first microphone (MIC 1) is positioned in the standard housing 101
  • a second microphone (MIC 2) is positioned near the ear canal 104 on the air side of the ear piece.
  • FIG. 1B schematically illustrates a cross section of the ear piece 103 positioned near the ear canal 104 , with the second microphone on the air side of the ear piece 103 to detect acoustic signals outside of the ear canal.
  • the first microphone (M1) is adapted to be worn about the ear of the person and the second microphone (M2) is adapted to be worn about the ear canal of the person.
  • the first and second microphones are at different locations to provide a time difference for sound from a user's voice to reach the microphones. As illustrated in FIG. 2 , the sound vectors representing travel of the user's voice from the user's mouth to the microphones are different.
  • the first microphone (MIC 1) is further away from the mouth than the second microphone (MIC 2). Sound received by MIC 2 will be relatively high amplitude and will be received slightly sooner than sound detected by MIC 1. And when the wearer is speaking, the sound of the wearer's voice will dominate the sounds received by both MIC 1 and MIC 2. The differences in received sound can be used to distinguish the own voice from other sound sources.
  • FIG. 3 illustrates a hearing assistance device according to one embodiment of the present subject matter.
  • the illustrated device 305 includes the first microphone (MIC 1), the second microphone (MIC 2), and a receiver (speaker) 306 .
  • each microphone is an omnidirectional microphone.
  • each microphone is a directional microphone.
  • the microphones may be both directional and omnidirectional.
  • Various order directional microphones can be employed.
  • Various embodiments incorporate the receiver in a housing of the device (e.g. behind-the-ear or on-the-ear housing).
  • a sound conduit can be used to direct sound from the receiver toward the ear canal.
  • Various embodiments use a receiver configured to fit within the user's ear canal. These embodiments are referred to as receiver-in-canal (RIC) devices.
  • a digital sound processing system 308 processes the acoustic signals received by the first and second microphones, and provides a signal to the receiver 306 to produce an audible signal to the wearer of the device 305 .
  • the illustrated digital sound processing system 308 includes an interface 307 , a sound processor 308 , and a voice detector 309 .
  • the illustrated interface 307 converts the analog signals from the first and second microphones into digital signals for processing by the sound processor 308 and the voice detector 309 .
  • the interface may include analog-to-digital converters, and appropriate registers to hold the digital signals for processing by the sound processor and voice detector.
  • the illustrated sound processor 308 processes a signal representative of a sound received by one or both of the first microphone and/or second microphone into a processed output signal 310 , which is provided to the receiver 306 to produce the audible signal.
  • the sound processor 308 is capable of operating in a directional mode in which signals representative of sound received by the first microphone and sound received by the second microphone are processed to provide the output signal 310 to the receiver 306 with directionality.
  • the voice detector 309 receives signals representative of sound received by the first microphone and sound received by the second microphone.
  • the voice detector 309 detects the user's own voice, and provides an indication 311 to the sound processor 308 regarding whether the user's own voice is detected. Once the user's own voice is detected any number of possible other actions can take place.
  • the sound processor 308 can perform one or more of the following, including but not limited to reduction of the amplification of the user's voice, control of an anti-occlusion process, and/or control of an environment classification process. Those skilled in the art will understand that other processes may take place without departing from the scope of the present subject matter.
  • the voice detector 309 includes an adaptive filter.
  • adaptive filters include Recursive Least Square error (RLS), Least Mean Squared error (LMS), and Normalized Least Mean Square error (NLMS) adaptive filter processes.
  • the desired signal for the adaptive filter is taken from the first microphone (e.g., a standard behind-the-ear or over-the-ear microphone), and the input signal to the adaptive filter is taken from the second microphone. If the hearing aid wearer is talking, the adaptive filter models the relative transfer function between the microphones.
  • Voice detection can be performed by comparing the power of the error signal to the power of the signal from the standard microphone and/or looking at the peak strength in the impulse response of the filter.
  • the amplitude of the impulse response should be in a certain range in order to be valid for the own voice. If the user's own voice is present, the power of the error signal will be much less than the power of the signal from the standard microphone, and the impulse response will have a strong peak with an amplitude above a threshold (e.g. above about 0.5 for normalized coefficients). In the presence of the user's own voice, the largest normalized coefficient of the filter is expected to be within the range of about 0.5 to about 0.9. Sound from other noise sources would result in a much smaller difference between the power of the error signal and the power of the signal from the standard microphone, and a small impulse response of the filter with no distinctive peak.
  • FIG. 4 illustrates a voice detector according to one embodiment of the present subject matter.
  • the illustrated voice detector 409 includes an adaptive filter 412 , a power analyzer 413 and a coefficient analyzer 414 .
  • the output 411 of the voice detector 409 provides an indication to the sound processor indicative of whether the user's own voice is detected.
  • the illustrated adaptive filter includes an adaptive filter process 415 and a summing junction 416 .
  • the desired signal 417 for the filter is taken from a signal representative of sound from the first microphone, and the input signal 418 for the filter is taken from a signal representative of sound from the second microphone.
  • the filter output signal 419 is subtracted from the desired signal 417 at the summing junction 416 to produce an error signal 420 which is fed back to the adaptive filter process 415 .
  • the illustrated power analyzer 413 compares the power of the error signal 420 to the power of the signal representative of sound received from the first microphone. According to various embodiments, a voice will not be detected unless the power of the signal representative of sound received from the first microphone is much greater than the power of the error signal. For example, the power analyzer 413 compares the difference to a threshold, and will not detect voice if the difference is less than the threshold.
  • the illustrated coefficient analyzer 414 analyzes the filter coefficients from the adaptive filter process 415 . According to various embodiments, a voice will not be detected unless a peak value for the coefficients is significantly high. For example, some embodiments will not detect voice unless the largest normalized coefficient is greater than a predetermined value (e.g. 0.5).
  • FIGS. 5-7 illustrate various processes for detecting voice that can be used in various embodiments of the present subject matter.
  • the power of the error signal from the adaptive filter is compared to the power of a signal representative of sound received by the first microphone.
  • the threshold is selected to be sufficiently high to ensure that the power of the first microphone is much greater than the power of the error signal.
  • voice is detected at 523 if the power of the first microphone is greater than the power of the error signal by a predetermined threshold, and voice is not detected at 524 if the power of the first microphone is not greater than the power of the error signal by the predetermined threshold.
  • coefficients of the adaptive filter are analyzed.
  • voice is detected at 623 if the largest normalized coefficient is greater than a predetermined value, and voice is not detected at 624 if the largest normalized coefficient is not greater than a predetermined value.
  • the power of the error signal from the adaptive filter is compared to the power of a signal representative of sound received by the first microphone.
  • voice is not detected at 724 if the power of the first microphone is not greater than the power of the error signal by a predetermined threshold. If the power of the error signal is too large, then the adaptive filter has not converged. In the illustrated method, the coefficients are not analyzed until the adaptive filter converges.
  • coefficients of the adaptive filter are analyzed if the power of the first microphone is greater than the power of the error signal by a predetermined threshold.
  • at 726, it is determined whether the largest normalized coefficient is greater than a predetermined value, such as 0.5.
  • voice is not detected at 724 if the largest normalized coefficient is not greater than a predetermined value.
  • Voice is detected at 723 if the power of the first microphone is greater than the power of the error signal by a predetermined threshold and if the largest normalized coefficient is greater than a predetermined value.
  • FIG. 8 illustrates one embodiment of the present subject matter with an “own voice detector” to control an active noise canceller for occlusion reduction.
  • the active noise canceller filters microphone M2 with filter h and sends the filtered signal to the receiver.
  • the microphone M2 and the error microphone M3 (in the ear canal) are used to calculate the filter update for filter h.
  • the own voice detector, which uses microphones M1 and M2, is used to steer the step size in the filter update.
  • FIG. 9 illustrates one embodiment of the present subject matter offering a multichannel expansion, compression and output control limiting (MECO) algorithm, which uses the signal of microphone M2 to calculate the desired gain, applies that gain to microphone signal M2, and then sends the amplified signal to the receiver. Additionally, the gain calculation can take into account the outcome of the own voice detector (which uses M1 and M2). If the wearer's own voice is detected, the gain in the lower channels (typically below 1 kHz) will be lowered to avoid occlusion. Note: the MECO algorithm can use microphone signal M1 or M2 or a combination of both.
  • FIG. 10 illustrates one embodiment of the present subject matter which uses an “own voice detector” in an environment classification scheme. From the microphone signal M2, several features are calculated. These features together with the result of the own voice detector, which uses M1 and M2, are used in a classifier to determine the acoustic environment. This acoustic environment classification is used to set the gain in the hearing aid.
  • the hearing aid may use M2 or M1 or M1 and M2 for the feature calculation.
  • FIG. 11 illustrates a pair of hearing assistance devices according to one embodiment of the present subject matter.
  • the pair of hearing assistance devices includes a left hearing assistance device 1105 L and a right hearing assistance device 1105 R, such as a left hearing aid and a right hearing aid.
  • the left hearing assistance device 1105 L is configured to be worn in or about the left ear of a wearer for delivering sound to the left ear canal of the wearer.
  • the right hearing assistance device 1105 R is configured to be worn in or about the right ear of the wearer for delivering sound to the right ear canal of the wearer.
  • the left and right hearing assistance devices 1105 L and 1105 R each represent an embodiment of the device 305 as discussed above, are capable of wireless communication with each other, and use the voice detection capability of both devices to determine whether the voice of the wearer is present.
  • the illustrated left hearing assistance device 1105 L includes a first microphone MIC 1L, a second microphone MIC 2L, an interface 1107 L, a sound processor 1108 L, a receiver 1106 L, a voice detector 1109 L, and a communication circuit 1130 L.
  • the first microphone MIC 1L produces a first left microphone signal.
  • the second microphone MIC 2L produces a second left microphone signal.
  • the first microphone MIC 1L is positioned about the left ear of the wearer
  • the second microphone MIC 2L is positioned about the left ear canal of the wearer, at a different location than the first microphone MIC 1L, on an air side of the left ear canal to detect signals outside the left ear canal.
  • Interface 1107 L converts the analog versions of the first and second left microphone signals into digital signals for processing by the sound processor 1108 L and the voice detector 1109 L.
  • the interface 1107 L may include analog-to-digital converters, and appropriate registers to hold the digital signals for processing by the sound processor 1108 L and the voice detector 1109 L.
  • the sound processor 1108 L produces a processed left sound signal 1110 L.
  • the left receiver 1106 L produces a left audible signal based on the processed left sound signal 1110 L and transmits the left audible signal to the left ear canal of the wearer.
  • the sound processor 1108 L produces the processed left sound signal 1110 L based on the first left microphone signal.
  • the sound processor 1108 L produces the processed left sound signal 1110 L based on the first left microphone signal and the second left microphone signal.
  • the left voice detector 1109 L detects a voice of the wearer using the first left microphone signal and the second left microphone signal. In one embodiment, in response to the voice of the wearer being detected based on the first left microphone signal and the second left microphone signal, the left voice detector 1109 L produces a left detection signal indicative of detection of the voice of the wearer. In one embodiment, the left voice detector 1109 L includes a left adaptive filter configured to output left information and identifies the voice of the wearer from the output left information. In various embodiments, the output left information includes coefficients of the left adaptive filter and/or a left error signal. In various embodiments, the left voice detector 1109 L includes the voice detector 309 or the voice detector 409 as discussed above.
  • the left communication circuit 1130 L receives information from, and transmits information to, the right hearing assistance device 1105 R via a wireless communication link 1132 .
  • the information transmitted via wireless communication link 1132 includes information associated with the detection of the voice of the wearer as performed by each of the left and right hearing assistance devices 1105 L and 1105 R.
  • the illustrated right hearing assistance device 1105 R includes a first microphone MIC 1R, a second microphone MIC 2R, an interface 1107 R, a sound processor 1108 R, a receiver 1106 R, a voice detector 1109 R, and a communication circuit 1130 R.
  • the first microphone MIC 1R produces a first right microphone signal.
  • the second microphone MIC 2R produces a second right microphone signal.
  • the first microphone MIC 1R is positioned about the right ear of the wearer
  • the second microphone MIC 2R is positioned about the right ear canal of the wearer, at a different location than the first microphone MIC 1R, on an air side of the right ear canal to detect signals outside the right ear canal.
  • Interface 1107 R converts the analog versions of the first and second right microphone signals into digital signals for processing by the sound processor 1108 R and the voice detector 1109 R.
  • the interface 1107 R may include analog-to-digital converters, and appropriate registers to hold the digital signals for processing by the sound processor 1108 R and the voice detector 1109 R.
  • the sound processor 1108 R produces a processed right sound signal 1110 R.
  • the right receiver 1106 R produces a right audible signal based on the processed right sound signal 1110 R and transmits the right audible signal to the right ear canal of the wearer.
  • the sound processor 1108 R produces the processed right sound signal 1110 R based on the first right microphone signal.
  • the sound processor 1108 R produces the processed right sound signal 1110 R based on the first right microphone signal and the second right microphone signal.
  • the right voice detector 1109 R detects the voice of the wearer using the first right microphone signal and the second right microphone signal. In one embodiment, in response to the voice of the wearer being detected based on the first right microphone signal and the second right microphone signal, the right voice detector 1109 R produces a right detection signal indicative of detection of the voice of the wearer. In one embodiment, the right voice detector 1109 R includes a right adaptive filter configured to output right information and identifies the voice of the wearer from the output right information. In various embodiments, the output right information includes coefficients of the right adaptive filter and/or a right error signal. In various embodiments, the right voice detector 1109 R includes the voice detector 309 or the voice detector 409 as discussed above.
  • the right communication circuit 1130 R receives information from, and transmits information to, the left hearing assistance device 1105 L via the wireless communication link 1132 .
  • At least one of the left voice detector 1109 L and the right voice detector 1109 R is configured to detect the voice of the wearer using the first left microphone signal, the second left microphone signal, the first right microphone signal, and the second right microphone signal.
  • signals produced by all of the microphones MIC 1L, MIC 2L, MIC 1R, and MIC 2R are used for determining whether the voice of the wearer is present.
  • the left voice detector 1109 L and/or the right voice detector 1109 R declares a detection of the voice of the wearer in response to at least one of the left detection signal and the right detection signal being present.
  • the left voice detector 1109 L and/or the right voice detector 1109 R declares a detection of the voice of the wearer in response to the left detection signal and the right detection signal both being present. In one embodiment, the left voice detector 1109 L and/or the right voice detector 1109 R determines whether to declare a detection of the voice of the wearer using the output left information and the output right information.
  • the output left information and output right information are each indicative of one or more detection strength parameters, each being a measure of the likelihood that the voice of the wearer is actually present. Examples of the one or more detection strength parameters include the difference between the power of the error signal and the power of the first microphone signal, as well as the largest normalized coefficient of the adaptive filter.
  • the left voice detector 1109 L and/or the right voice detector 1109 R determines whether to declare a detection of the voice of the wearer using a weighted combination of the output left information and the output right information (a sketch of such a combination is given after this list).
  • the weighted combination of the output left information and the output right information can include a weighted sum of the detection strength parameters.
  • the one or more detection strength parameters produced by each of the left and right voice detectors can be multiplied by one or more corresponding weighting factors before being added to produce the weighted sum.
  • the weighting factors may be determined using a priori information such as estimates of the background noise and/or position(s) of other sound sources in a room.
  • the detection of the voice of the wearer is performed using both the left and the right voice detectors such as detectors 1109 L and 1109 R.
  • whether to declare a detection of the voice of the wearer may be determined by each of the left voice detector 1109 L and the right voice detector 1109 R, determined by the left voice detector 1109 L and communicated to the right voice detector 1109 R via wireless link 1132 , or determined by the right voice detector 1109 R and communicated to the left voice detector 1109 L via wireless link 1132 .
  • the left voice detector 1109 L transmits an indication 1111 L to the sound processor 1108 L
  • the right voice detector 1109 R transmits an indication 1111 R to the sound processor 1108 R.
  • the sound processors 1108 L and 1108 R produce the processed sound signals 1110 L and 1110 R, respectively, using the indication that the voice of the wearer is detected.
  • FIG. 12 illustrates a process for detecting voice using a pair of hearing assistance devices including a left hearing assistance device and a right hearing assistance device, such as the left and right hearing assistance devices 1105 L and 1105 R.
  • voice of a wearer is detected using the left hearing assistance device.
  • voice of a wearer is detected using the right hearing assistance device.
  • steps 1241 and 1242 are performed concurrently or simultaneously. Examples for each of steps 1241 and 1242 include the processes illustrated in each of FIGS. 5-7 .
  • whether to declare a detection of the voice of the wearer is determined using an outcome of both of the detections at 1241 and 1242 .
  • the left and right hearing assistance devices each include first and second microphones. Electrical signals produced by the first and second microphones of the left hearing assistance device are used as inputs to a voice detector of the left hearing assistance device at 1241 .
  • the voice detector of the left hearing assistance device includes a left adaptive filter. Electrical signals produced by the first and second microphones of the right hearing assistance device are used as inputs to a voice detector of the right hearing assistance device at 1242 .
  • the voice detector of the right hearing assistance device includes a right adaptive filter.
  • the voice of the wearer is detected using information output from the left adaptive filter and information output from the right adaptive filter at 1243 . In one embodiment, the voice of the wearer is detected using left coefficients of the left adaptive filter and right coefficients of the right adaptive filter.
  • the voice of the wearer is detected using a left error signal produced by the left adaptive filter and a right error signal produced by the right adaptive filter.
  • the voice of the wearer is detected using a left detection strength parameter of the information output from the left adaptive filter and a right detection strength parameter of the information output from the right adaptive filter.
  • the left and right detection strength parameters are each a measure of the likelihood that the voice of the wearer is actually present. Examples of the left detection strength parameter include the difference between the power of a left error signal produced by the left adaptive filter and the power of the electrical signal produced by the first microphone of the left hearing assistance device, as well as the largest normalized coefficient of the left adaptive filter.
  • Examples of the right detection strength parameter include the difference between the power of a right error signal produced by the right adaptive filter and the power of the electrical signal produced by the first microphone of the right hearing assistance device, as well as the largest normalized coefficient of the right adaptive filter.
  • the voice of the wearer is detected using a weighted combination of the information output from the left adaptive filter and the information output from the right adaptive filter.
  • the voice of the wearer is detected using the left hearing assistance device based on the electrical signals produced by the first and second microphones of the left hearing assistance device, and a left detection signal indicative of whether the voice of the wearer is detected by the left hearing assistance device is produced, at 1241 .
  • the voice of the wearer is detected using the right hearing assistance device based on the electrical signals produced by the first and second microphones of the right hearing assistance device, and a right detection signal indicative of whether the voice of the wearer is detected by the right hearing assistance device is produced, at 1242 .
  • Whether to declare the detection of the voice of the wearer is determined using the left detection signal and the right detection signal at 1243 .
  • the detection of the voice of the wearer is declared in response to both of the left detection signal and the right detection signal being present. In another embodiment, the detection of the voice of the wearer is declared in response to at least one of the left detection signal and the right detection signal being present. In one embodiment, whether to declare the detection of the voice of the wearer is determined using the left detection signal, the right detection signal, and weighting factors applied to the left and right detection signals.
  • the own voice detection discussed in this document can be applied to each device of a pair of hearing assistance devices, with the declaration of the detection of the voice of the wearer being a result of detection using both devices of the pair of hearing assistance devices, as discussed with reference to FIGS. 11 and 12 .
  • Such binaural voice detection will likely improve the acoustic perception of the wearer because both hearing assistance devices worn by the wearer are acting similarly when the wearer speaks.
  • whether to declare a detection of the voice of the wearer may be determined based on the detection performed by either one device of the pair of hearing assistance devices or based on the detection performed by both devices of the pair of hearing assistance devices.
  • An example of the pair of hearing assistance devices includes a pair of hearing aids.
  • the present subject matter includes hearing assistance devices, and was demonstrated with respect to BTE, OTE, and RIC type devices, but it is understood that it may also be employed in cochlear implant type hearing devices. It is understood that other hearing assistance devices not expressly stated herein may fall within the scope of the present subject matter.
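
The weighted binaural combination described above lends itself to a short illustration. The sketch below is not the patent's implementation; the parameter names, weights, and decision threshold are illustrative assumptions, and the only element taken from the text is the idea of weighting detection strength parameters from the left and right devices and summing them before deciding whether to declare detection.

    # Hypothetical sketch of the binaural weighted combination (Python).
    # Weights and threshold are assumptions, not values from the patent.
    def declare_binaural_detection(left_params, right_params,
                                   left_weights, right_weights,
                                   threshold=1.0):
        """Combine left/right detection strength parameters into one decision.

        left_params / right_params: detection strength parameters reported by
            each device's voice detector (e.g. error-power difference, largest
            normalized filter coefficient).
        left_weights / right_weights: weighting factors, e.g. derived from a
            priori estimates of background noise at each ear.
        """
        score = sum(w * p for w, p in zip(left_weights, left_params))
        score += sum(w * p for w, p in zip(right_weights, right_params))
        return score > threshold

With equal weights this reduces to requiring that the summed detection strengths from both ears exceed the threshold; unequal weights let the quieter ear dominate the decision.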

Abstract

A hearing assistance system includes a pair of left and right hearing assistance devices to be worn by a wearer and uses both of the left and right hearing assistance devices to detect the voice of the wearer. The left and right hearing assistance devices each include first and second microphones at different locations. Various embodiments detect the voice of the wearer using signals produced by the first and second microphones of the left hearing assistance device and the first and second microphones of the right hearing assistance device. Various embodiments use the outcome of detection of the voice of the wearer performed by the left hearing assistance device and the outcome of detection of the voice of the wearer performed by the right hearing assistance device to determine whether to declare a detection of the voice of the wearer.

Description

CLAIM OF PRIORITY
The present application is a Continuation-in-Part (CIP) of and claims the benefit of priority under 35 U.S.C. §120 to U.S. patent application Ser. No. 13/933,017, filed Jul. 1, 2013, which application is a continuation of U.S. patent application Ser. No. 12/749,702, filed Mar. 30, 2010, which application claims the benefit of priority under 35 U.S.C. §119(e) to U.S. Provisional Application No. 61/165,512, filed Apr. 1, 2009, all of which are hereby incorporated by reference in their entirety.
TECHNICAL FIELD
This application relates to hearing assistance systems, and more particularly, to hearing assistance systems with own voice detection.
BACKGROUND
Hearing assistance devices are electronic devices that amplify sounds above the audibility threshold to a hearing impaired user. Undesired sounds such as noise, feedback and the user's own voice may also be amplified, which can result in decreased sound quality and benefit for the user. It is undesirable for the user to hear his or her own voice amplified. Further, if the user is using an ear mold with little or no venting, he or she will experience an occlusion effect where his or her own voice sounds hollow (“talking in a barrel”). Finally, if the hearing aid has a noise reduction/environment classification algorithm, the user's own voice can be wrongly detected as desired speech.
One proposal to detect voice adds a bone conduction microphone to the device. The bone conduction microphone can only be used to detect the user's own voice, has to make good contact with the skull in order to pick up the user's own voice, and has a low signal-to-noise ratio. Another proposal to detect voice adds a directional microphone to the hearing aid, and orients the microphone toward the mouth of the user to detect the user's voice. However, the effectiveness of the directional microphone depends on the directivity of the microphone and the presence of other sound sources, particularly sound sources in the same direction as the mouth. Another proposal to detect voice provides a microphone in the ear canal and only uses that microphone to record an occluded signal. Another proposal attempts to use a filter to distinguish the user's voice from other sound. However, the filter is unable to self-correct to accommodate changes in the user's voice or in the user's environment.
SUMMARY
The present subject matter provides apparatus and methods to use a hearing assistance device to detect a voice of the wearer of the hearing assistance device. Embodiments use an adaptive filter to provide a self-correcting voice detector, capable of automatically adjusting to accommodate changes in the wearer's voice and environment.
Examples are provided, such as an apparatus configured to be worn by a wearer who has an ear and an ear canal. The apparatus includes a first microphone adapted to be worn about the ear of the person, a second microphone adapted to be worn about the ear canal of the person and at a different location than the first microphone, a sound processor adapted to process signals from the first microphone to produce a processed sound signal, and a voice detector to detect the voice of the wearer. The voice detector includes an adaptive filter to receive signals from the first microphone and the second microphone.
Another example of an apparatus includes a housing configured to be worn behind the ear or over the ear, a first microphone in the housing, and an ear piece configured to be positioned in the ear canal, wherein the ear piece includes a microphone that receives sound from the outside when positioned near the ear canal. Various voice detection systems employ an adaptive filter that receives signals from the first microphone and the second microphone and detects the voice of the wearer using a peak value for coefficients of the adaptive filter and an error signal from the adaptive filter.
The present subject matter also provides methods for detecting a voice of a wearer of a hearing assistance device where the hearing assistance device includes a first microphone and a second microphone. An example of the method is provided and includes using a first electrical signal representative of sound detected by the first microphone and a second electrical signal representative of sound detected by the second microphone as inputs to a system including an adaptive filter, and using the adaptive filter to detect the voice of the wearer of the hearing assistance device.
The present subject matter further provides apparatus and methods to use a pair of left and right hearing assistance devices to detect a voice of the wearer of the pair of left and right hearing assistance devices. Embodiments use the outcome of detection of the voice of the wearer performed by the left hearing assistance device and the outcome of detection of the voice of the wearer performed by the right hearing assistance device to determine whether to declare a detection of the voice of the wearer.
This Summary is an overview of some of the teachings of the present application and is not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details about the present subject matter are found in the detailed description. The scope of the present invention is defined by the appended claims and their legal equivalents.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A and 1B illustrate a hearing assistance device with a voice detector according to one embodiment of the present subject matter.
FIG. 2 demonstrates how sound can travel from the user's mouth to the first and second microphones illustrated in FIG. 1A.
FIG. 3 illustrates a hearing assistance device according to one embodiment of the present subject matter.
FIG. 4 illustrates a voice detector according to one embodiment of the present subject matter.
FIGS. 5-7 illustrate various processes for detecting voice that can be used in various embodiments of the present subject matter.
FIG. 8 illustrates one embodiment of the present subject matter with an “own voice detector” to control an active noise canceller for occlusion reduction.
FIG. 9 illustrates one embodiment of the present subject matter offering a multichannel expansion, compression and output control limiting algorithm (MECO).
FIG. 10 illustrates one embodiment of the present subject matter which uses an “own voice detector” in an environment classification scheme.
FIG. 11 illustrates a pair of hearing assistance devices according to one embodiment of the present subject matter.
FIG. 12 illustrates a process for detecting voice using the pair of hearing assistance devices.
DETAILED DESCRIPTION
The following detailed description refers to subject matter in the accompanying drawings which show, by way of illustration, specific aspects and embodiments in which the present subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the present subject matter. References to “an”, “one”, or “various” embodiments in this disclosure are not necessarily to the same embodiment, and such references contemplate more than one embodiment. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined only by the appended claims, along with the full scope of legal equivalents to which such claims are entitled.
Various embodiments disclosed herein provide a self-correcting voice detector, capable of reliably detecting the presence of the user's own voice through automatic adjustments that accommodate changes in the user's voice and environment. The detected voice can be used, among other things, to reduce the amplification of the user's voice, control an anti-occlusion process and control an environment classification process.
The present subject matter provides, among other things, an “own voice” detector using two microphones in a standard hearing assistance device. Examples of standard hearing aids include behind-the-ear (BTE), over-the-ear (OTE), and receiver-in-canal (RIC) devices. It is understood that RIC devices have a housing adapted to be worn behind the ear or over the ear. Sometimes the RIC electronics housing is called a BTE housing or an OTE housing. According to various embodiments, one microphone is the microphone as usually present in the standard hearing assistance device, and the other microphone is mounted in an ear bud or ear mold near the user's ear canal. Hence, the microphone is directed to detection of acoustic signals outside and not inside the ear canal. The two microphones can be used to create a directional signal.
FIG. 1A illustrates a hearing assistance device with a voice detector according to one embodiment of the present subject matter. The figure illustrates an ear with a hearing assistance device 100, such as a hearing aid. The illustrated hearing assistance device includes a standard housing 101 (e.g. behind-the-ear (BTE) or on-the-ear (OTE) housing) with an optional ear hook 102 and an ear piece 103 configured to fit within the ear canal. A first microphone (MIC 1) is positioned in the standard housing 101, and a second microphone (MIC 2) is positioned near the ear canal 104 on the air side of the ear piece. FIG. 1B schematically illustrates a cross section of the ear piece 103 positioned near the ear canal 104, with the second microphone on the air side of the ear piece 103 to detect acoustic signals outside of the ear canal.
Other embodiments may be used in which the first microphone (M1) is adapted to be worn about the ear of the person and the second microphone (M2) is adapted to be worn about the ear canal of the person. The first and second microphones are at different locations to provide a time difference for sound from a user's voice to reach the microphones. As illustrated in FIG. 2, the sound vectors representing travel of the user's voice from the user's mouth to the microphones are different. The first microphone (MIC 1) is further away from the mouth than the second microphone (MIC 2). Sound received by MIC 2 will be relatively high amplitude and will be received slightly sooner than sound detected by MIC 1. And when the wearer is speaking, the sound of the wearer's voice will dominate the sounds received by both MIC 1 and MIC 2. The differences in received sound can be used to distinguish the own voice from other sound sources.
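For a sense of scale, the path-length difference between the mouth-to-MIC 2 and mouth-to-MIC 1 paths translates into a sub-millisecond delay. The numbers below are illustrative assumptions, not measurements from the patent; they simply show why a short adaptive filter can capture the relative transfer function between the two microphones.

    # Illustrative back-of-the-envelope delay calculation (Python).
    SPEED_OF_SOUND = 343.0   # m/s at room temperature
    FS = 16_000              # assumed sampling rate in Hz

    path_mic2 = 0.12         # mouth to ear-canal microphone, metres (assumed)
    path_mic1 = 0.17         # mouth to BTE/OTE microphone, metres (assumed)

    delay_s = (path_mic1 - path_mic2) / SPEED_OF_SOUND
    print(f"extra delay at MIC 1: {delay_s * 1e3:.2f} ms "
          f"(about {delay_s * FS:.1f} samples at {FS} Hz)")
    # -> extra delay at MIC 1: 0.15 ms (about 2.3 samples at 16000 Hz)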
FIG. 3 illustrates a hearing assistance device according to one embodiment of the present subject matter. The illustrated device 305 includes the first microphone (MIC 1), the second microphone (MIC 2), and a receiver (speaker) 306. It is understood that different types of microphones can be employed in various embodiments. In one embodiment, each microphone is an omnidirectional microphone. In one embodiment, each microphone is a directional microphone. In various embodiments, the microphones may be both directional and omnidirectional. Various order directional microphones can be employed. Various embodiments incorporate the receiver in a housing of the device (e.g. behind-the-ear or on-the-ear housing). A sound conduit can be used to direct sound from the receiver toward the ear canal. Various embodiments use a receiver configured to fit within the user's ear canal. These embodiments are referred to as receiver-in-canal (RIC) devices.
A digital sound processing system 308 processes the acoustic signals received by the first and second microphones, and provides a signal to the receiver 306 to produce an audible signal to the wearer of the device 305. The illustrated digital sound processing system 308 includes an interface 307, a sound processor 308, and a voice detector 309. The illustrated interface 307 converts the analog signals from the first and second microphones into digital signals for processing by the sound processor 308 and the voice detector 309. For example, the interface may include analog-to-digital converters, and appropriate registers to hold the digital signals for processing by the sound processor and voice detector. The illustrated sound processor 308 processes a signal representative of a sound received by one or both of the first microphone and/or second microphone into a processed output signal 310, which is provided to the receiver 306 to produce the audible signal. According to various embodiments, the sound processor 308 is capable of operating in a directional mode in which signals representative of sound received by the first microphone and sound received by the second microphone are processed to provide the output signal 310 to the receiver 306 with directionality.
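The directional mode is described only at the block level. As one generic illustration (not the method prescribed by the patent), a first-order delay-and-subtract combination of the two microphone signals already yields a directional response; the delay value below is an assumption.

    # Generic first-order directional combination of two microphones (Python/NumPy).
    # This is a textbook delay-and-subtract sketch, not the patent's algorithm.
    import numpy as np

    def delay_and_subtract(front, rear, delay_samples=1):
        """Return a directional signal by delaying one microphone and subtracting."""
        rear_delayed = np.concatenate([np.zeros(delay_samples),
                                       np.asarray(rear, dtype=float)[:-delay_samples]])
        return np.asarray(front, dtype=float) - rear_delayed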
The voice detector 309 receives signals representative of sound received by the first microphone and sound received by the second microphone. The voice detector 309 detects the user's own voice, and provides an indication 311 to the sound processor 308 regarding whether the user's own voice is detected. Once the user's own voice is detected any number of possible other actions can take place. For example, in various embodiments when the user's voice is detected, the sound processor 308 can perform one or more of the following, including but not limited to reduction of the amplification of the user's voice, control of an anti-occlusion process, and/or control of an environment classification process. Those skilled in the art will understand that other processes may take place without departing from the scope of the present subject matter.
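One of the actions mentioned above, reducing amplification of the wearer's own voice, can be pictured as a per-channel gain adjustment. The sketch below is an assumption-laden illustration: the 1 kHz cutoff echoes the MECO example given earlier in this document, while the 6 dB reduction, the channel layout, and the function name are hypothetical.

    # Hypothetical own-voice-triggered gain reduction (Python/NumPy).
    import numpy as np

    def apply_own_voice_gain_reduction(channel_gains_db, channel_freqs_hz,
                                       own_voice_detected,
                                       cutoff_hz=1000.0, reduction_db=6.0):
        """Lower low-frequency channel gains while the wearer is speaking."""
        gains = np.asarray(channel_gains_db, dtype=float).copy()
        if own_voice_detected:
            low = np.asarray(channel_freqs_hz, dtype=float) < cutoff_hz
            gains[low] -= reduction_db   # ease occlusion / own-voice loudness
        return gains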
In various embodiments, the voice detector 309 includes an adaptive filter. Examples of processes implemented by adaptive filters include Recursive Least Square error (RLS), Least Mean Squared error (LMS), and Normalized Least Mean Square error (NLMS) adaptive filter processes. The desired signal for the adaptive filter is taken from the first microphone (e.g., a standard behind-the-ear or over-the-ear microphone), and the input signal to the adaptive filter is taken from the second microphone. If the hearing aid wearer is talking, the adaptive filter models the relative transfer function between the microphones. Voice detection can be performed by comparing the power of the error signal to the power of the signal from the standard microphone and/or looking at the peak strength in the impulse response of the filter. The amplitude of the impulse response should be in a certain range in order to be valid for the own voice. If the user's own voice is present, the power of the error signal will be much less than the power of the signal from the standard microphone, and the impulse response will have a strong peak with an amplitude above a threshold (e.g. above about 0.5 for normalized coefficients). In the presence of the user's own voice, the largest normalized coefficient of the filter is expected to be within the range of about 0.5 to about 0.9. Sound from other noise sources would result in a much smaller difference between the power of the error signal and the power of the signal from the standard microphone, and a small impulse response of the filter with no distinctive peak.
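The paragraph above describes the adaptive filter only functionally. The following Python/NumPy sketch shows one way an NLMS filter with MIC 1 as the desired signal and MIC 2 as the input produces the error signal and coefficient vector that the detector examines; the filter length, step size, and function name are assumptions rather than values from the patent.

    # Sketch of an NLMS own-voice filter (Python/NumPy); parameters are assumptions.
    import numpy as np

    def nlms_own_voice_filter(mic1, mic2, n_taps=16, mu=0.1, eps=1e-8):
        """Model the MIC 2 -> MIC 1 relative transfer function with NLMS.

        mic1: desired signal (standard BTE/OTE microphone).
        mic2: input signal (microphone near the ear canal).
        Returns the error signal and the final coefficient vector.
        """
        mic1 = np.asarray(mic1, dtype=float)
        mic2 = np.asarray(mic2, dtype=float)
        w = np.zeros(n_taps)                       # adaptive filter coefficients
        error = np.zeros(len(mic1))
        for n in range(n_taps - 1, len(mic1)):
            x = mic2[n - n_taps + 1:n + 1][::-1]   # x[k] = mic2[n - k]
            y = np.dot(w, x)                       # filter output, estimate of mic1[n]
            e = mic1[n] - y                        # error fed back to the update
            w += (mu / (np.dot(x, x) + eps)) * e * x   # NLMS coefficient update
            error[n] = e
        return error, w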
FIG. 4 illustrates a voice detector according to one embodiment of the present subject matter. The illustrated voice detector 409 includes an adaptive filter 412, a power analyzer 413 and a coefficient analyzer 414. The output 411 of the voice detector 409 provides an indication to the sound processor indicative of whether the user's own voice is detected. The illustrated adaptive filter includes an adaptive filter process 415 and a summing junction 416. The desired signal 417 for the filter is taken from a signal representative of sound from the first microphone, and the input signal 418 for the filter is taken from a signal representative of sound from the second microphone. The filter output signal 419 is subtracted from the desired signal 417 at the summing junction 416 to produce an error signal 420 which is fed back to the adaptive filter process 415.
The illustrated power analyzer 413 compares the power of the error signal 420 to the power of the signal representative of sound received from the first microphone. According to various embodiments, a voice will not be detected unless the power of the signal representative of sound received from the first microphone is much greater than the power of the error signal. For example, the power analyzer 413 compares the difference to a threshold, and will not detect voice if the difference is less than the threshold.
The illustrated coefficient analyzer 414 analyzes the filter coefficients from the adaptive filter process 415. According to various embodiments, a voice will not be detected unless a peak value for the coefficients is significantly high. For example, some embodiments will not detect voice unless the largest normalized coefficient is greater than a predetermined value (e.g. 0.5).
FIGS. 5-7 illustrate various processes for detecting voice that can be used in various embodiments of the present subject matter. In FIG. 5, as illustrated at 521, the power of the error signal from the adaptive filter is compared to the power of a signal representative of sound received by the first microphone. At 522, it is determined whether the power of the first microphone is greater than the power of the error signal by a predetermined threshold. The threshold is selected to be sufficiently high to ensure that the power of the first microphone is much greater than the power of the error signal. In some embodiments, voice is detected at 523 if the power of the first microphone is greater than the power of the error signal by the predetermined threshold, and voice is not detected at 524 if the power of the first microphone is not greater than the power of the error signal by the predetermined threshold.
In FIG. 6, as illustrated at 625, coefficients of the adaptive filter are analyzed. At 626, it is determined whether the largest normalized coefficient is greater than a predetermined value, such as greater than 0.5. In some embodiments, voice is detected at 623 if the largest normalized coefficient is greater than a predetermined value, and voice is not detected at 624 if the largest normalized coefficient is not greater than a predetermined value.
In FIG. 7, as illustrated at 721, the power of the error signal from the adaptive filter is compared to the power of a signal representative of sound received by the first microphone. At 722, it is determined whether the power of the first microphone is greater than the power of the error signal by a predetermined threshold. In some embodiments, voice is not detected at 724 if the power of the first microphone is not greater than the power of the error signal by a predetermined threshold. If the power of the error signal is too large, then the adaptive filter has not converged. In the illustrated method, the coefficients are not analyzed until the adaptive filter converges. As illustrated at 725, coefficients of the adaptive filter are analyzed if the power of the first microphone is greater than the power of the error signal by a predetermined threshold. At 726, it is determined whether the largest normalized coefficient is greater than a predetermined value, such as greater than 0.5. In some embodiments, voice is not detected at 724 if the largest normalized coefficient is not greater than a predetermined value. Voice is detected at 723 if the power of the first microphone is greater than the power of the error signal by a predetermined threshold and if the largest normalized coefficient is greater than a predetermined value.
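A hypothetical sketch of the two-stage decision described for FIG. 7 follows: the power margin between the first-microphone signal and the error signal is checked first as a convergence test, and only then is the largest normalized coefficient compared against a peak threshold. The 10 dB margin and the reading of "normalized coefficients" as the unit-norm-scaled coefficient vector are assumptions, not values from the patent; the 0.5 peak threshold follows the example given above.

```python
import numpy as np

def detect_own_voice(d_frame, e_frame, w, power_margin_db=10.0, peak_thresh=0.5):
    """Two-stage own-voice check on one analysis frame.
    d_frame: first-microphone samples, e_frame: adaptive-filter error samples,
    w: current adaptive-filter coefficients."""
    p_mic = np.mean(d_frame ** 2) + 1e-12
    p_err = np.mean(e_frame ** 2) + 1e-12
    # Stage 1 (steps 721-722): require the first-microphone power to exceed the
    # error power by a margin; otherwise the filter has not converged.
    if 10.0 * np.log10(p_mic / p_err) < power_margin_db:
        return False
    # Stage 2 (steps 725-726): require a distinct peak in the coefficients.
    # "Normalized coefficient" is read here as the coefficient vector scaled to
    # unit Euclidean norm; the patent does not spell out the normalization.
    peak = np.max(np.abs(w)) / (np.linalg.norm(w) + 1e-12)
    return peak > peak_thresh
```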
FIG. 8 illustrates one embodiment of the present subject matter with an "own voice detector" used to control an active noise canceller for occlusion reduction. The active noise canceller filters microphone signal M2 with filter h and sends the filtered signal to the receiver. The microphone M2 and the error microphone M3 (in the ear canal) are used to calculate the filter update for filter h. The own voice detector, which uses microphones M1 and M2, is used to steer the step size in the filter update.
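As a rough illustration only (not the patent's implementation), the own-voice flag can gate the adaptation rate of the occlusion-reduction filter h. This sketch uses a normalized-LMS style update driven by M2 and the ear-canal error microphone M3; it omits the secondary-path (filtered-x) modelling a practical canceller would need, and the step-size values are assumptions.

```python
import numpy as np

def update_occlusion_filter(h, m2_buf, m3_error, own_voice_detected,
                            mu_voice=0.05, mu_idle=0.0, eps=1e-8):
    """One update of the anti-occlusion filter h (sketch only).
    m2_buf: recent M2 samples aligned with the filter taps,
    m3_error: current sample from the ear-canal error microphone M3.
    The own-voice flag (from the M1/M2 detector) steers the step size."""
    mu = mu_voice if own_voice_detected else mu_idle
    # Adapt quickly while the wearer speaks (when occlusion matters most),
    # and freeze the filter otherwise.
    h = h + mu * m3_error * m2_buf / (np.dot(m2_buf, m2_buf) + eps)
    return h
```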
FIG. 9 illustrates one embodiment of the present subject matter offering a multichannel expansion, compression and output control limiting (MECO) algorithm, which uses the signal of microphone M2 to calculate the desired gain, applies that gain to microphone signal M2, and then sends the amplified signal to the receiver. Additionally, the gain calculation can take into account the outcome of the own voice detector (which uses M1 and M2) to calculate the desired gain. If the wearer's own voice is detected, the gain in the lower channels (typically below 1 kHz) will be lowered to avoid occlusion. Note: the MECO algorithm can use microphone signal M1 or M2 or a combination of both.
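A small sketch of the low-channel gain rule follows; the channel layout, the interpretation of the 1 kHz cutoff, and the 6 dB reduction are assumptions used only for illustration.

```python
import numpy as np

def apply_own_voice_gain_rule(channel_gains_db, channel_centers_hz,
                              own_voice_detected, cutoff_hz=1000.0,
                              reduction_db=6.0):
    """Lower the gain of channels centered below ~1 kHz while the wearer talks."""
    gains = np.asarray(channel_gains_db, dtype=float).copy()
    if own_voice_detected:
        low_channels = np.asarray(channel_centers_hz) < cutoff_hz
        gains[low_channels] -= reduction_db
    return gains

# Example: four channels centered at 500 Hz, 1 kHz, 2 kHz, 4 kHz
# apply_own_voice_gain_rule([20, 20, 25, 25], [500, 1000, 2000, 4000], True)
# -> array([14., 20., 25., 25.])
```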
FIG. 10 illustrates one embodiment of the present subject matter which uses an “own voice detector” in an environment classification scheme. From the microphone signal M2, several features are calculated. These features together with the result of the own voice detector, which uses M1 and M2, are used in a classifier to determine the acoustic environment. This acoustic environment classification is used to set the gain in the hearing aid. In various embodiments, the hearing aid may use M2 or M1 or M1 and M2 for the feature calculation.
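Purely as an illustration of the data flow (the feature set, thresholds, and decision rules are invented here, not taken from the patent), simple frame features from M2 can be combined with the own-voice flag in a classifier:

```python
import numpy as np

def classify_environment(m2_frame, own_voice_detected):
    """Very coarse environment label from one frame of M2 plus the own-voice flag."""
    level_db = 10.0 * np.log10(np.mean(m2_frame ** 2) + 1e-12)   # frame level
    zcr = np.mean(np.abs(np.diff(np.sign(m2_frame)))) / 2.0      # zero-crossing rate
    if own_voice_detected:
        return "own_voice"          # select gain rules for the wearer's own speech
    if level_db < -50.0:
        return "quiet"
    return "speech" if zcr < 0.25 else "noise"
```

The returned label would then drive the gain settings of the hearing aid, as described above.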
FIG. 11 illustrates a pair of hearing assistance devices according to one embodiment of the present subject matter. The pair of hearing assistance devices includes a left hearing assistance device 1105L and a right hearing assistance device 1105R, such as a left hearing aid and a right hearing aid. The left hearing assistance device 1105L is configured to be worn in or about the left ear of a wearer for delivering sound to the left ear canal of the wearer. The right hearing assistance device 1105R is configured to be worn in or about the right ear of the wearer for delivering sound to the right ear canal of the wearer. In one embodiment, the left and right hearing assistance devices 1105L and 1105R each represent an embodiment of the device 305 as discussed above, with the capability of performing wireless communication with each other, and the voice detection capability of both devices is used to determine whether the voice of the wearer is present.
The illustrated left hearing assistance device 1105L includes a first microphone MIC 1L, a second microphone MIC 2L, an interface 1107L, a sound processor 1108L, a receiver 1106L, a voice detector 1109L, and a communication circuit 1130L. The first microphone MIC 1L produces a first left microphone signal. The second microphone MIC 2L produces a second left microphone signal. In one embodiment, when the left and right hearing assistance devices 1105L and 1105R are worn by the wearer, the first microphone MIC 1L is positioned about the left ear of the wearer, and the second microphone MIC 2L is positioned about the left ear canal of the wearer, at a different location than the first microphone MIC 1L, on an air side of the left ear canal to detect signals outside the left ear canal. Interface 1107L converts the analog versions of the first and second left microphone signals into digital signals for processing by the sound processor 1108L and the voice detector 1109L. For example, the interface 1107L may include analog-to-digital converters and appropriate registers to hold the digital signals for processing by the sound processor 1108L and the voice detector 1109L. The sound processor 1108L produces a processed left sound signal 1110L. The left receiver 1106L produces a left audible signal based on the processed left sound signal 1110L and transmits the left audible signal to the left ear canal of the wearer. In one embodiment, the sound processor 1108L produces the processed left sound signal 1110L based on the first left microphone signal. In another embodiment, the sound processor 1108L produces the processed left sound signal 1110L based on the first left microphone signal and the second left microphone signal.
The left voice detector 1109L detects a voice of the wearer using the first left microphone signal and the second left microphone signal. In one embodiment, in response to the voice of the wearer being detected based on the first left microphone signal and the second left microphone signal, the left voice detector 1109L produces a left detection signal indicative of detection of the voice of the wearer. In one embodiment, the left voice detector 1109L includes a left adaptive filter configured to output left information and identifies the voice of the wearer from the output left information. In various embodiments, the output left information includes coefficients of the left adaptive filter and/or a left error signal. In various embodiments, the left voice detector 1109L includes the voice detector 309 or the voice detector 409 as discussed above. The left communication circuit 1130L receives information from, and transmits information to, the right hearing assistance device 1105R via a wireless communication link 1132. In the illustrated embodiment, the information transmitted via wireless communication link 1132 includes information associated with the detection of the voice of the wearer as performed by each of the left and right hearing assistance devices 1105L and 1105R.
The illustrated right hearing assistance device 1105R includes a first microphone MIC 1R, a second microphone MIC 2R, an interface 1107R, a sound processor 1108R, a receiver 1106R, a voice detector 1109R, and a communication circuit 1130R. The first microphone MIC 1R produces a first right microphone signal. The second microphone MIC 2R produces a second right microphone signal. In one embodiment, when the left and right hearing assistance devices 1105L and 1105R are worn by the wearer, the first microphone MIC 1R is positioned about the right ear of the wearer, and the second microphone MIC 2R is positioned about the right ear canal of the wearer, at a different location than the first microphone MIC 1R, on an air side of the right ear canal to detect signals outside the right ear canal. Interface 1107R converts the analog versions of the first and second right microphone signals into digital signals for processing by the sound processor 1108R and the voice detector 1109R. For example, the interface 1107R may include analog-to-digital converters and appropriate registers to hold the digital signals for processing by the sound processor 1108R and the voice detector 1109R. The sound processor 1108R produces a processed right sound signal 1110R. The right receiver 1106R produces a right audible signal based on the processed right sound signal 1110R and transmits the right audible signal to the right ear canal of the wearer. In one embodiment, the sound processor 1108R produces the processed right sound signal 1110R based on the first right microphone signal. In another embodiment, the sound processor 1108R produces the processed right sound signal 1110R based on the first right microphone signal and the second right microphone signal.
The right voice detector 1109R detects the voice of the wearer using the first right microphone signal and the second right microphone signal. In one embodiment, in response to the voice of the wearer being detected based on the first right microphone signal and the second right microphone signal, the right voice detector 1109R produces a right detection signal indicative of detection of the voice of the wearer. In one embodiment, the right voice detector 1109R includes a right adaptive filter configured to output right information and identifies the voice of the wearer from the output right information. In various embodiments, the output right information includes coefficients of the right adaptive filter and/or a right error signal. In various embodiments, the right voice detector 1109R includes the voice detector 309 or the voice detector 409 as discussed above. The right communication circuit 1130R receives information from, and transmits information to, the left hearing assistance device 1105L via the wireless communication link 1132.
In various embodiments, at least one of the left voice detector 1109L and the right voice detector 1109R is configured to detect the voice of the wearer using the first left microphone signal, the second left microphone signal, the first right microphone signal, and the second right microphone signal. In other words, signals produced by all of the microphones MIC 1L, MIC 2L, MIC 1R, and MIC 2R are used for determining whether the voice of the wearer is present. In one embodiment, the left voice detector 1109L and/or the right voice detector 1109R declares a detection of the voice of the wearer in response to at least one of the left detection signal and the right detection signal being present. In another embodiment, the left voice detector 1109L and/or the right voice detector 1109R declares a detection of the voice of the wearer in response to the left detection signal and the right detection signal both being present. In one embodiment, the left voice detector 1109L and/or the right voice detector 1109R determines whether to declare a detection of the voice of the wearer using the output left information and output right information. The output left information and output right information are each indicative of one or more detection strength parameters, each being a measure of likeliness of actual existence of the voice of the wearer. Examples of the one or more detection strength parameters include the difference between the power of the error signal and the power of the first microphone signal, as well as the largest normalized coefficient of the adaptive filter. In one embodiment, the left voice detector 1109L and/or the right voice detector 1109R determines whether to declare a detection of the voice of the wearer using a weighted combination of the output left information and the output right information. For example, the weighted combination of the output left information and the output right information can include a weighted sum of the detection strength parameters. The one or more detection strength parameters produced by each of the left and right voice detectors can be multiplied by one or more corresponding weighting factors before being added to produce the weighted sum. In various embodiments, the weighting factors may be determined using a priori information such as estimates of the background noise and/or position(s) of other sound sources in a room.
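As a minimal sketch of the weighted combination (assuming, beyond the text above, specific weights and a decision threshold), the weighted sum of the left and right detection strength parameters could be formed as follows:

```python
import numpy as np

def binaural_own_voice(left_params, right_params, left_weights, right_weights,
                       threshold=1.0):
    """Weighted sum of per-device detection strength parameters.
    left_params / right_params: e.g. [power margin in dB, largest normalized coefficient].
    The weighting factors might come from a priori information such as
    background-noise estimates; here they are simply supplied by the caller."""
    score = (np.dot(left_weights, left_params) +
             np.dot(right_weights, right_params))
    return score > threshold

# Example with hypothetical numbers: 12 dB margin and peak 0.7 on the left,
# 9 dB margin and peak 0.6 on the right, with small weights on the dB margins.
# binaural_own_voice([12.0, 0.7], [9.0, 0.6], [0.05, 1.0], [0.05, 1.0], threshold=1.5)
# -> True
```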
In various embodiments, when a pair of left and right hearing assistance devices is worn by the wearer, the detection of the voice of the wearer is performed using both the left and the right voice detectors, such as detectors 1109L and 1109R. In various embodiments, whether to declare a detection of the voice of the wearer may be determined by each of the left voice detector 1109L and the right voice detector 1109R, determined by the left voice detector 1109L and communicated to the right voice detector 1109R via wireless link 1132, or determined by the right voice detector 1109R and communicated to the left voice detector 1109L via wireless link 1132. Upon declaration of the detection of the voice of the wearer, the left voice detector 1109L transmits an indication 1111L to the sound processor 1108L, and the right voice detector 1109R transmits an indication 1111R to the sound processor 1108R. The sound processors 1108L and 1108R produce the processed sound signals 1110L and 1110R, respectively, using the indication that the voice of the wearer is detected.
FIG. 12 illustrates a process for detecting voice using a pair of hearing assistance devices including a left hearing assistance device and a right hearing assistance device, such as the left and right hearing assistance devices 1105L and 1105R. At 1241, the voice of a wearer is detected using the left hearing assistance device. At 1242, the voice of the wearer is detected using the right hearing assistance device. In various embodiments, steps 1241 and 1242 are performed concurrently or simultaneously. Examples for each of steps 1241 and 1242 include the processes illustrated in each of FIGS. 5-7. At 1243, whether to declare a detection of the voice of the wearer is determined using an outcome of both of the detections at 1241 and 1242.
In one embodiment, the left and right hearing assistance devices each include first and second microphones. Electrical signals produced by the first and second microphones of the left hearing assistance device are used as inputs to a voice detector of the left hearing assistance device at 1241. The voice detector of the left hearing assistance device includes a left adaptive filter. Electrical signals produced by the first and second microphones of the right hearing assistance device are used as inputs to a voice detector of the right hearing assistance device at 1242. The voice detector of the right hearing assistance device includes a right adaptive filter. The voice of the wearer is detected using information output from the left adaptive filter and information output from the right adaptive filter at 1243. In one embodiment, the voice of the wearer is detected using left coefficients of the left adaptive filter and right coefficients of the right adaptive filter. In one embodiment, the voice of the wearer is detected using a left error signal produced by the left adaptive filter and a right error signal produced by the right adaptive filter. In one embodiment, the voice of the wearer is detected using a left detection strength parameter of the information output from the left adaptive filter and a right detection strength parameter of the information output from the right adaptive filter. The left and right detection strength parameters are each a measure of likeliness of actual existence of the voice of the wearer. Examples of the left detection strength parameter include the difference between the power of a left error signal produced by the left adaptive filter and the power of the electrical signal produced by the first microphone of the left hearing assistance device, as well as the largest normalized coefficient of the left adaptive filter. Examples of the right detection strength parameter include the difference between the power of a right error signal produced by the right adaptive filter and the power of the electrical signal produced by the first microphone of the right hearing assistance device, as well as the largest normalized coefficient of the right adaptive filter. In one embodiment, the voice of the wearer is detected using a weighted combination of the information output from the left adaptive filter and the information output from the right adaptive filter.
In one embodiment, the voice of the wearer is detected using the left hearing assistance device based on the electrical signals produced by the first and second microphones of the left hearing assistance device, and a left detection signal indicative of whether the voice of the wearer is detected by the left hearing assistance device is produced, at 1241. The voice of the wearer is detected using the right hearing assistance device based on the electrical signals produced by the first and second microphones of the right hearing assistance device, and a right detection signal indicative of whether the voice of the wearer is detected by the right hearing assistance device is produced, at 1242. Whether to declare the detection of the voice of the wearer is determined using the left detection signal and the right detection signal at 1243. In one embodiment, the detection of the voice of the wearer is declared in response to both of the left detection signal and the right detection signal being present. In another embodiment, the detection of the voice of the wearer is declared in response to at least one of the left detection signal and the right detection signal being present. In one embodiment, whether to declare the detection of the voice of the wearer is determined using the left detection signal, the right detection signal, and weighting factors applied to the left and right detection signals.
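Where each device contributes only a binary detection signal, the declaration logic described here might look like the following sketch; the mode names, weights, and threshold are illustrative assumptions rather than details from the patent.

```python
def declare_detection(left_detected, right_detected, mode="and",
                      w_left=0.5, w_right=0.5, threshold=0.5):
    """Combine the per-device detection signals into a single declaration."""
    if mode == "and":       # declare only when both devices detect the voice
        return left_detected and right_detected
    if mode == "or":        # declare when either device detects the voice
        return left_detected or right_detected
    # weighted mode: apply weighting factors to the binary detection signals
    return (w_left * float(left_detected) + w_right * float(right_detected)) >= threshold
```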
The various embodiments of the present subject matter discussed with reference to FIGS. 1-10 can be applied to each device of a pair of hearing assistance devices, with the declaration of the detection of the voice of the wearer being a result of detection using both devices of the pair of hearing assistance devices, as discussed with reference to FIGS. 11 and 12. Such binaural voice detection will likely improve the acoustic perception of the wearer because both hearing assistance devices worn by the wearer are acting similarly when the wearer speaks. In various embodiments in which a pair of hearing assistance devices is worn by the wearer, whether to declare a detection of the voice of the wearer may be determined based on the detection performed by either one device of the pair of hearing assistance devices or based on the detection performed by both devices of the pair of hearing assistance devices. An example of the pair of hearing assistance devices includes a pair of hearing aids.
The present subject matter includes hearing assistance devices, and was demonstrated with respect to BTE, OTE, and RIC type devices, but it is understood that it may also be employed in cochlear implant type hearing devices. It is understood that other hearing assistance devices not expressly stated herein may fall within the scope of the present subject matter.
This application is intended to cover adaptations or variations of the present subject matter. It is to be understood that the above description is intended to be illustrative, and not restrictive. The scope of the present subject matter should be determined with reference to the appended claims, along with the full scope of legal equivalents to which such claims are entitled.

Claims (22)

What is claimed is:
1. An apparatus configured to be worn by a wearer having a left ear with a left ear canal and a right ear with a right ear canal, comprising:
a left hearing assistance device and a right hearing assistance device communicatively coupled to each other, the left and right hearing assistance devices each including:
a first microphone configured to produce a first microphone signal;
a second microphone configured to produce a second microphone signal; and
a voice detector,
wherein at least one of the voice detectors of the left and right hearing assistance devices is configured to detect a voice of the wearer using the first and second microphone signals produced by the left hearing assistance device and the first and second microphone signals produced by the right hearing assistance device.
2. The apparatus of claim 1, wherein the voice detector of each of the left and right hearing assistance devices is configured to receive the first microphone signal and the second microphone signal.
3. The apparatus of claim 2, wherein the voice detector of each of the left and right hearing assistance devices comprises an adaptive filter configured to output information and the at least one of the voice detectors of the left and right hearing assistance devices is configured to detect the voice of the wearer using the output information from each of the left and right hearing assistance devices.
4. The apparatus of claim 3, wherein the at least one of the voice detectors of the left and right hearing assistance devices is configured to detect the voice of the wearer using coefficients of the adaptive filter of each of the left and right hearing assistance devices.
5. The apparatus of claim 3, wherein the at least one of the voice detectors of the left and right hearing assistance devices is configured to detect the voice of the wearer using an error signal produced by the adaptive filter of each of the left and right hearing assistance devices.
6. The apparatus of claim 3, wherein the at least one of the voice detectors of the left and right hearing assistance devices is configured to detect the voice of the wearer using a detection strength parameter of the output information from each of the left and right hearing assistance devices, the detection strength parameter being a measure of likeliness of actual existence of the voice of the wearer.
7. The apparatus of claim 3, wherein the at least one of the voice detectors of the left and right hearing assistance devices is configured to detect the voice of the wearer using a weighted combination of the output information from the left hearing assistance device and the output information from the right hearing assistance device.
8. The apparatus of claim 3, wherein the voice detector of each of the left and right hearing assistance devices is configured to produce a detection signal indicative of detection of the voice of the wearer.
9. The apparatus of claim 8, wherein the at least one of the voice detectors of the left and right hearing assistance devices is configured to declare a detection of the voice of the wearer in response to the detection signal being produced by at least one of the left and right hearing assistance devices.
10. The apparatus of claim 8, wherein the at least one of the voice detectors of the left and right hearing assistance devices is configured to declare a detection of the voice of the wearer in response to the detection signal being produced by each of the left and right hearing assistance devices.
11. The apparatus of claim 1, wherein the left and right hearing assistance devices each comprise a hearing aid configured such that when being worn by the wearer, the first microphone is positioned about one of the left and right ears and the second microphone is positioned about one of the left and right ear canals, at a different location than the first microphone, on an air side of the one of the left and right ear canals to detect signals outside the one of the left and right ear canals.
12. The apparatus of claim 11, wherein the hearing aid comprises:
a sound processor configured to produce a processed sound signal based on at least the first microphone signal and whether the voice of the wearer is detected; and
a receiver configured to produce an audible signal based on the processed sound signal and transmit the audible signal to the one of the left and right ear canals.
13. The apparatus of claim 12, wherein the sound processor is configured to control an anti-occlusion process based on whether the voice of the wearer is detected.
14. The apparatus of claim 12, wherein the sound processor is configured to control an environment classification process based on whether the voice of the wearer is detected.
15. A method for detecting a voice of a wearer of a pair of left and right hearing assistance devices each including a first microphone and a second microphone, the wearer having a left ear with a left ear canal and a right ear with a right ear canal, the method comprising:
positioning the first microphone and the second microphone of the left hearing assistance device about the left ear to each detect sound outside the left ear canal;
positioning the first microphone and the second microphone of the right hearing assistance device about the right ear to each detect sound outside the right ear canal; and
detecting a voice of the wearer using electrical signals produced by the first and second microphones of the left hearing assistance device and the electrical signals produced by the first and second microphones of the right hearing assistance device.
16. The method of claim 15, wherein detecting the voice of the wearer comprises:
using electrical signals produced by the first and second microphones of the left hearing assistance device as inputs to a voice detector of the left hearing assistance device including a left adaptive filter;
using electrical signals produced by the first and second microphones of the right hearing assistance device as inputs to a voice detector of the right hearing assistance device including a right adaptive filter; and
detecting the voice of the wearer using information output from the left adaptive filter and information output from the right adaptive filter.
17. The method of claim 16, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using left coefficients of the left adaptive filter and right coefficients of the right adaptive filter.
18. The method of claim 16, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using a left error signal produced by the left adaptive filter and a right error signal produced by the right adaptive filter.
19. The method of claim 16, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using a left detection strength parameter of the information output from the left adaptive filter and a right detection strength parameter of the information output from the right adaptive filter, the left and right detection strength parameters each being a measure of likeliness of actual existence of the voice of the wearer.
20. The method of claim 16, wherein detecting the voice of the wearer comprises detecting the voice of the wearer using a weighted combination of the information output from the left adaptive filter and the information output from the right adaptive filter.
21. The method of claim 15, wherein detecting the voice of the wearer comprises:
detecting the voice of the wearer using the left hearing assistance device based on the electrical signals produced by the first and second microphones of the left hearing assistance device;
producing a left detection signal indicative of whether the voice of the wearer is detected by the left hearing assistance device;
detecting the voice of the wearer using the right hearing assistance device based on the electrical signals produced by the first and second microphones of the right hearing assistance device;
producing a right detection signal indicative of whether the voice of the wearer is detected by the right hearing assistance device; and
determining whether to declare a detection of the voice of the wearer using the left detection signal and the right detection signal.
22. The method of claim 21, comprising determining whether to declare the detection of the voice of the wearer using the left detection signal, the right detection signal, and weighting factors applied to the left and right detection signals.
US14/464,149 2009-04-01 2014-08-20 Hearing assistance system with own voice detection Active US9219964B2 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US14/464,149 US9219964B2 (en) 2009-04-01 2014-08-20 Hearing assistance system with own voice detection
EP15181620.4A EP2988531B1 (en) 2014-08-20 2015-08-19 Hearing assistance system with own voice detection
DK15181620.4T DK2988531T3 (en) 2014-08-20 2015-08-19 HEARING SYSTEM WITH OWN VOICE DETECTION
EP18195310.0A EP3461148B1 (en) 2014-08-20 2015-08-19 Hearing assistance system with own voice detection
US14/976,711 US9712926B2 (en) 2009-04-01 2015-12-21 Hearing assistance system with own voice detection
US15/651,459 US10225668B2 (en) 2009-04-01 2017-07-17 Hearing assistance system with own voice detection
US16/290,131 US10652672B2 (en) 2009-04-01 2019-03-01 Hearing assistance system with own voice detection
US16/871,791 US11388529B2 (en) 2009-04-01 2020-05-11 Hearing assistance system with own voice detection

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US16551209P 2009-04-01 2009-04-01
US12/749,702 US8477973B2 (en) 2009-04-01 2010-03-30 Hearing assistance system with own voice detection
US13/933,017 US9094766B2 (en) 2009-04-01 2013-07-01 Hearing assistance system with own voice detection
US14/464,149 US9219964B2 (en) 2009-04-01 2014-08-20 Hearing assistance system with own voice detection

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/933,017 Continuation-In-Part US9094766B2 (en) 2009-04-01 2013-07-01 Hearing assistance system with own voice detection

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/976,711 Continuation US9712926B2 (en) 2009-04-01 2015-12-21 Hearing assistance system with own voice detection

Publications (2)

Publication Number Publication Date
US20150043765A1 US20150043765A1 (en) 2015-02-12
US9219964B2 true US9219964B2 (en) 2015-12-22

Family

ID=52448699

Family Applications (5)

Application Number Title Priority Date Filing Date
US14/464,149 Active US9219964B2 (en) 2009-04-01 2014-08-20 Hearing assistance system with own voice detection
US14/976,711 Active US9712926B2 (en) 2009-04-01 2015-12-21 Hearing assistance system with own voice detection
US15/651,459 Active US10225668B2 (en) 2009-04-01 2017-07-17 Hearing assistance system with own voice detection
US16/290,131 Active US10652672B2 (en) 2009-04-01 2019-03-01 Hearing assistance system with own voice detection
US16/871,791 Active US11388529B2 (en) 2009-04-01 2020-05-11 Hearing assistance system with own voice detection

Family Applications After (4)

Application Number Title Priority Date Filing Date
US14/976,711 Active US9712926B2 (en) 2009-04-01 2015-12-21 Hearing assistance system with own voice detection
US15/651,459 Active US10225668B2 (en) 2009-04-01 2017-07-17 Hearing assistance system with own voice detection
US16/290,131 Active US10652672B2 (en) 2009-04-01 2019-03-01 Hearing assistance system with own voice detection
US16/871,791 Active US11388529B2 (en) 2009-04-01 2020-05-11 Hearing assistance system with own voice detection

Country Status (1)

Country Link
US (5) US9219964B2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9699573B2 (en) 2009-04-01 2017-07-04 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US9712926B2 (en) 2009-04-01 2017-07-18 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
WO2022026725A1 (en) 2020-07-31 2022-02-03 Starkey Laboratories, Inc. Hypoxic or anoxic neurological injury detection with ear-wearable devices and system
WO2022103954A1 (en) 2020-11-16 2022-05-19 Starkey Laboratories, Inc. Passive safety monitoring with ear-wearable devices
US11361785B2 (en) 2019-02-12 2022-06-14 Samsung Electronics Co., Ltd. Sound outputting device including plurality of microphones and method for processing sound signal using plurality of microphones
WO2022140559A1 (en) 2020-12-23 2022-06-30 Starkey Laboratories, Inc. Ear-wearable system and method for detecting dehydration
WO2022170091A1 (en) 2021-02-05 2022-08-11 Starkey Laboratories, Inc. Multi-sensory ear-worn devices for stress and anxiety detection and alleviation
WO2022198057A2 (en) 2021-03-19 2022-09-22 Starkey Laboratories, Inc. Ear-wearable device and system for monitoring of and/or providing therapy to individuals with hypoxic or anoxic neurological injury
WO2022271660A1 (en) 2021-06-21 2022-12-29 Starkey Laboratories, Inc. Ear-wearable systems for gait analysis and gait training
US11812213B2 (en) 2020-09-30 2023-11-07 Starkey Laboratories, Inc. Ear-wearable devices for control of other devices and related methods
US11825272B2 (en) 2019-02-08 2023-11-21 Starkey Laboratories, Inc. Assistive listening device systems, devices and methods for providing audio streams within sound fields

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10163453B2 (en) 2014-10-24 2018-12-25 Staton Techiya, Llc Robust voice activity detector system for use with an earphone
US9613615B2 (en) * 2015-06-22 2017-04-04 Sony Corporation Noise cancellation system, headset and electronic device
EP3453189B1 (en) 2016-05-06 2021-04-14 Eers Global Technologies Inc. Device and method for improving the quality of in- ear microphone signals in noisy environments
US10244333B2 (en) * 2016-06-06 2019-03-26 Starkey Laboratories, Inc. Method and apparatus for improving speech intelligibility in hearing devices using remote microphone
US10062373B2 (en) 2016-11-03 2018-08-28 Bragi GmbH Selective audio isolation from body generated sound system and method
US10142745B2 (en) * 2016-11-24 2018-11-27 Oticon A/S Hearing device comprising an own voice detector
DK3484173T3 (en) * 2017-11-14 2022-07-11 Falcom As Hearing protection system with own voice estimation and related methods
WO2019142072A1 (en) * 2018-01-16 2019-07-25 Cochlear Limited Individualized own voice detection in a hearing prosthesis
EP3580639A1 (en) 2018-02-09 2019-12-18 Starkey Laboratories, Inc. Use of periauricular muscle signals to estimate a direction of a user's auditory attention locus
KR102151433B1 (en) * 2019-01-02 2020-09-03 올리브유니온(주) Adaptive solid hearing system according to environmental changes and noise changes, and the method thereof
CN110995909B (en) * 2019-11-20 2021-03-30 维沃移动通信有限公司 Sound compensation method and device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5479522A (en) 1993-09-17 1995-12-26 Audiologic, Inc. Binaural hearing aid
EP1586980B1 (en) * 1998-03-18 2007-07-04 Nippon Telegraph and Telephone Corporation Wearable communication device for inputting commands via detection of tapping shocks or vibration of fingertips
US6448801B2 (en) 1998-06-05 2002-09-10 Advanced Micro Devices, Inc. Method and device for supporting flip chip circuitry in analysis
US6639990B1 (en) 1998-12-03 2003-10-28 Arthur W. Astrin Low power full duplex wireless link
NL1021485C2 (en) 2002-09-18 2004-03-22 Stichting Tech Wetenschapp Hearing glasses assembly.
DK1627552T3 (en) * 2003-05-09 2008-03-17 Widex As Hearing aid system, a hearing aid and a method for processing audio signals
US8526646B2 (en) * 2004-05-10 2013-09-03 Peter V. Boesen Communication device
WO2007063139A2 (en) * 2007-01-30 2007-06-07 Phonak Ag Method and system for providing binaural hearing assistance
US9219964B2 (en) 2009-04-01 2015-12-22 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US8331594B2 (en) * 2010-01-08 2012-12-11 Sonic Innovations, Inc. Hearing aid device with interchangeable covers
CN102474697B (en) 2010-06-18 2015-01-14 松下电器产业株式会社 Hearing aid, signal processing method and program
US20140270230A1 (en) * 2013-03-15 2014-09-18 Skullcandy, Inc. In-ear headphones configured to receive and transmit audio signals and related systems and methods

Patent Citations (76)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4791672A (en) 1984-10-05 1988-12-13 Audiotone, Inc. Wearable digital hearing aid and method for improving hearing ability
US5008954A (en) 1989-04-06 1991-04-16 Carl Oppendahl Voice-activated radio transceiver
US5208867A (en) 1990-04-05 1993-05-04 Intelex, Inc. Voice transmission system and method for high ambient noise conditions
US5327506A (en) 1990-04-05 1994-07-05 Stites Iii George M Voice transmission system and method for high ambient noise conditions
US5917921A (en) 1991-12-06 1999-06-29 Sony Corporation Noise reducing microphone apparatus
US5426719A (en) 1992-08-31 1995-06-20 The United States Of America As Represented By The Department Of Health And Human Services Ear based hearing protector/communication system
US5553152A (en) 1994-08-31 1996-09-03 Argosy Electronics, Inc. Apparatus and method for magnetically controlling a hearing aid
US5659621A (en) 1994-08-31 1997-08-19 Argosy Electronics, Inc. Magnetically controllable hearing aid
US5550923A (en) 1994-09-02 1996-08-27 Minnesota Mining And Manufacturing Company Directional ear device with adaptive bandwidth and gain control
US5701348A (en) 1994-12-29 1997-12-23 Decibel Instruments, Inc. Articulated hearing device
US5721783A (en) 1995-06-07 1998-02-24 Anderson; James C. Hearing aid with wireless remote processor
US5761319A (en) 1996-07-16 1998-06-02 Avr Communications Ltd. Hearing instrument
US7072476B2 (en) 1997-02-18 2006-07-04 Matech, Inc. Audio headset
US6175633B1 (en) 1997-04-09 2001-01-16 Cavcom, Inc. Radio communications apparatus with attenuating ear pieces for high noise environments
WO1998045937A1 (en) 1997-04-09 1998-10-15 Morrill Jeffrey C Radio communications apparatus with attenuating ear pieces for high noise environments
US5991419A (en) 1997-04-29 1999-11-23 Beltone Electronics Corporation Bilateral signal processing prosthesis
US7027603B2 (en) 1998-06-30 2006-04-11 Gn Resound North America Corporation Ear level noise rejection voice pickup method and apparatus
US6738485B1 (en) 1999-05-10 2004-05-18 Peter V. Boesen Apparatus, method and system for ultra short range communication
US6718043B1 (en) 1999-05-10 2004-04-06 Peter V. Boesen Voice sound transmitting apparatus and system including expansion port
US6738482B1 (en) 1999-09-27 2004-05-18 Jaber Associates, Llc Noise suppression system with dual microphone echo cancellation
US20020034310A1 (en) 2000-03-14 2002-03-21 Audia Technology, Inc. Adaptive microphone matching in multi-microphone directional system
US20010038699A1 (en) 2000-03-20 2001-11-08 Audia Technology, Inc. Automatic directional processing control for multi-microphone system
WO2002007477A2 (en) 2000-07-13 2002-01-24 Matech, Inc. Audio headset
US6661901B1 (en) 2000-09-01 2003-12-09 Nacre As Ear terminal with microphone for natural voice rendition
US7027607B2 (en) 2000-09-22 2006-04-11 Gn Resound A/S Hearing aid with adaptive microphone matching
US6801629B2 (en) 2000-12-22 2004-10-05 Sonic Innovations, Inc. Protective hearing devices with multi-band automatic amplitude control and active noise attenuation
US7242924B2 (en) 2000-12-22 2007-07-10 Broadcom Corp. Methods of recording voice signals in a mobile set
US20020080979A1 (en) 2000-12-22 2002-06-27 Sonic Innovations, Inc. Protective hearing devices with multi-band automatic amplitude control and active noise attenuation
US20020141602A1 (en) 2001-03-30 2002-10-03 Nemirovski Guerman G. Ear microphone apparatus and method
US6671379B2 (en) 2001-03-30 2003-12-30 Think-A-Move, Ltd. Ear microphone apparatus and method
US20030012391A1 (en) 2001-04-12 2003-01-16 Armstrong Stephen W. Digital hearing aid system
US20040081327A1 (en) 2001-04-18 2004-04-29 Widex A/S Hearing aid, a method of controlling a hearing aid, and a noise reduction system for a hearing aid
US7110562B1 (en) 2001-08-10 2006-09-19 Hear-Wear Technologies, Llc BTE/CIC auditory device and modular connector system therefor
US6728385B2 (en) 2002-02-28 2004-04-27 Nacre As Voice detection and discrimination apparatus and method
WO2003073790A1 (en) 2002-02-28 2003-09-04 Nacre As Voice detection and discrimination apparatus and method
US20030165246A1 (en) 2002-02-28 2003-09-04 Sintef Voice detection and discrimination apparatus and method
US7477754B2 (en) 2002-09-02 2009-01-13 Oticon A/S Method for counteracting the occlusion effects
WO2004021740A1 (en) 2002-09-02 2004-03-11 Oticon A/S Method for counteracting the occlusion effects
WO2004077090A1 (en) 2003-02-25 2004-09-10 Oticon A/S Method for detection of own voice activity in a communication device
WO2005004534A1 (en) 2003-07-04 2005-01-13 Vast Audio Pty Ltd The production of augmented-reality audio
US7929713B2 (en) 2003-09-11 2011-04-19 Starkey Laboratories, Inc. External ear canal voice detection
US20080260191A1 (en) 2003-09-11 2008-10-23 Starkey Laboratories, Inc. External ear canal voice detection
US20110195676A1 (en) 2003-09-11 2011-08-11 Starkey Laboratories, Inc. External ear canal voice detection
US9036833B2 (en) 2003-09-11 2015-05-19 Starkey Laboratories, Inc. External ear canal voice detection
US20050058313A1 (en) 2003-09-11 2005-03-17 Victorian Thomas A. External ear canal voice detection
WO2005125269A1 (en) 2004-06-22 2005-12-29 Gennum Corporation First person acoustic environment system and method
WO2006028587A2 (en) 2004-07-22 2006-03-16 Softmax, Inc. Headset for separation of speech signals in a noisy environment
US7983907B2 (en) 2004-07-22 2011-07-19 Softmax, Inc. Headset for separation of speech signals in a noisy environment
US8116489B2 (en) 2004-10-01 2012-02-14 Hearworks Pty Ltd Accoustically transparent occlusion reduction system and method
US20070009122A1 (en) 2005-07-11 2007-01-11 Volkmar Hamacher Hearing apparatus and a method for own-voice detection
US20070195968A1 (en) 2006-02-07 2007-08-23 Jaber Associates, L.L.C. Noise suppression method and system with single microphone
US20080192971A1 (en) 2006-02-28 2008-08-14 Rion Co., Ltd. Hearing Aid
US8111849B2 (en) 2006-02-28 2012-02-07 Rion Co., Ltd. Hearing aid
US8059847B2 (en) 2006-08-07 2011-11-15 Widex A/S Hearing aid method for in-situ occlusion effect and directly transmitted sound measurement
US20100061564A1 (en) 2007-02-07 2010-03-11 Richard Clemow Ambient noise reduction system
US8130991B2 (en) 2007-04-11 2012-03-06 Oticon A/S Hearing instrument with linearized output stage
US8081780B2 (en) 2007-05-04 2011-12-20 Personics Holdings Inc. Method and device for acoustic management control of multiple microphones
US20090016542A1 (en) 2007-05-04 2009-01-15 Personics Holdings Inc. Method and Device for Acoustic Management Control of Multiple Microphones
US20090147966A1 (en) 2007-05-04 2009-06-11 Personics Holdings Inc Method and Apparatus for In-Ear Canal Sound Suppression
US20090034765A1 (en) 2007-05-04 2009-02-05 Personics Holdings Inc. Method and device for in ear canal echo suppression
WO2009034536A2 (en) 2007-09-14 2009-03-19 Koninklijke Philips Electronics N.V. Audio activity detection
US20090074201A1 (en) 2007-09-18 2009-03-19 Starkey Laboratories, Inc. Method and apparatus for microphone matching for wearable directional hearing device using wearer's own voice
US8031881B2 (en) 2007-09-18 2011-10-04 Starkey Laboratories, Inc. Method and apparatus for microphone matching for wearable directional hearing device using wearer's own voice
US20090097681A1 (en) 2007-10-12 2009-04-16 Earlens Corporation Multifunction System and Method for Integrated Hearing and Communication with Noise Cancellation and Feedback Management
US8391522B2 (en) 2007-10-16 2013-03-05 Phonak Ag Method and system for wireless hearing assistance
US8391523B2 (en) 2007-10-16 2013-03-05 Phonak Ag Method and system for wireless hearing assistance
US20090220096A1 (en) 2007-11-27 2009-09-03 Personics Holdings, Inc Method and Device to Maintain Audio Content Level Reproduction
US20090238387A1 (en) 2008-03-20 2009-09-24 Siemens Medical Instruments Pte. Ltd. Method for actively reducing occlusion comprising plausibility check and corresponding hearing apparatus
US20110299692A1 (en) 2009-01-23 2011-12-08 Widex A/S System, method and hearing aids for in situ occlusion effect measurement
US20100246845A1 (en) 2009-03-30 2010-09-30 Benjamin Douglass Burge Personal Acoustic Device Position Determination
US8477973B2 (en) 2009-04-01 2013-07-02 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US20140010397A1 (en) 2009-04-01 2014-01-09 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US20100260364A1 (en) 2009-04-01 2010-10-14 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US9094766B2 (en) 2009-04-01 2015-07-28 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US20120070024A1 (en) 2010-09-22 2012-03-22 Gn Resound A/S Hearing aid with occlusion suppression
US20130195296A1 (en) 2011-12-30 2013-08-01 Starkey Laboratories, Inc. Hearing aids with adaptive beamformer responsive to off-axis speech

Non-Patent Citations (39)

* Cited by examiner, † Cited by third party
Title
"Canadian Application Serial No. 2,481,397, Non-Final Office Action mailed Dec. 5, 2007", 6 pgs.
"Canadian Application Serial No. 2,481,397, Response filed Jun. 5, 2008 to Office Action mailed Dec. 5, 2007", 15 pgs.
"European Application Serial No. 04255520.1, European Search Report mailed Nov. 6, 2006", 3 pgs.
"European Application Serial No. 04255520.1, Office Action mailed Jun. 25, 2007", 4 pgs.
"European Application Serial No. 04255520.1, Response filed Jan. 7, 2008", 21 pgs.
"European Application Serial No. 10250710.0, Examination Notification Art. 94(3) mailed Jun. 25, 2014", 5 pgs.
"European Application Serial No. 10250710.0, Response filed Oct. 13, 2014 to Examination Notification Art. 94(3) mailed Jun. 25, 2014", 21 pgs.
"European Application Serial No. 10250710.0, Search Report mailed Jul. 20, 2010", 6 Pgs.
"European Application Serial No. 10250710.0, Search Report Response Apr. 18, 2011", 16 pg.
"The New Jawbone: The Best Bluetooth Headset Just Got Better", www.aliph.com, (2008), 3 pages.
"U.S. Appl. No. 10/660,454, Advisory Action mailed May 20, 2008", 4 pgs.
"U.S. Appl. No. 10/660,454, Final Office Action mailed Dec. 27, 2007", 18 pgs.
"U.S. Appl. No. 10/660,454, Non Final Office Action mailed Jul. 27, 2007", 16 pgs.
"U.S. Appl. No. 10/660,454, Response filed Apr. 25, 2008 to Final Office Action mailed Dec. 27, 2007", 15 pgs.
"U.S. Appl. No. 10/660,454, Response filed May 9, 2007 to Restriction Requirement Apr. 9, 2007", 11 pgs.
"U.S. Appl. No. 10/660,454, Response filed Oct. 15, 2007 to Non-Final Office Action mailed Jul. 27, 2007", 17 pgs.
"U.S. Appl. No. 10/660,454, Restriction Requirement mailed Apr. 9, 2007", 5 pgs.
"U.S. Appl. No. 12/163,665, Notice of Allowance mailed Feb. 7, 2011", 4 pgs.
"U.S. Appl. No. 12/163,665, Notice of Allowance mailed Sep. 28, 2010", 9 pgs.
"U.S. Appl. No. 12/749,702, Final Office Action mailed Oct. 12, 2012", 7 pgs.
"U.S. Appl. No. 12/749,702, Non Final Office Action mailed May. 25, 2012", 6 pgs.
"U.S. Appl. No. 12/749,702, Notice of Allowance mailed Mar. 4, 2013", 7 pgs.
"U.S. Appl. No. 12/749,702, Response filed Aug. 27, 2012 to Non Final Office Action mailed May 25, 2012", 13 pgs.
"U.S. Appl. No. 12/749,702, Response filed Feb. 12, 2013 to Final Office Action mailed Oct. 12, 2012", 10 pgs.
"U.S. Appl. No. 13/088,902, Advisory Action mailed Nov. 28, 2014", 3 pgs.
"U.S. Appl. No. 13/088,902, Final Office Action mailed Nov. 29, 2013", 16 pgs.
"U.S. Appl. No. 13/088,902, Final Office Action mailed Sep. 23, 2014", 21 pgs.
"U.S. Appl. No. 13/088,902, Non Final Office Action mailed Mar. 27, 2014", 15 pgs.
"U.S. Appl. No. 13/088,902, Non Final Office Action mailed May 21, 2013", 15 pgs.
"U.S. Appl. No. 13/088,902, Notice of Allowance mailed Jan. 20, 2015", 5 pgs.
"U.S. Appl. No. 13/088,902, Response filed Aug. 21, 2013 to Non Final Office Action mailed May 21, 2013", 10 pgs.
"U.S. Appl. No. 13/088,902, Response filed Feb. 28, 2014 to Final Office Action mailed Nov. 29, 2013", 12 pgs.
"U.S. Appl. No. 13/088,902, Response filed Jun. 27, 2014 to Non Final Office Action mailed Mar. 28, 2014", 13 pgs.
"U.S. Appl. No. 13/088,902, Response filed Nov. 20, 2014 to Final Office Action mailed Sep. 23, 2014", 12 pgs.
"U.S. Appl. No. 13/933,017, Non Final Office Action mailed Sep. 18, 2014", 6 pgs.
"U.S. Appl. No. 13/933,017, Notice of Allowance mailed Mar. 20, 2015", 7 pgs.
"U.S. Appl. No. 13/933.017, Response filed Dec. 18, 2014 to Non Final Office Action mailed Sep. 18, 2014", 6 pgs.
Evjen, Peder M., "Low-Power Transceiver Targets Wireless Headsets", Microwaves & RF, (Oct. 2002), 68, 70, 72-73, 75-76, 78-80.
Luo, Fa-Long, et al., "Recent Developments in Signal Processing for Digital Hearing Aids", IEEE Signal Processing Magazine, (Sep. 2006), 103-106.

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9699573B2 (en) 2009-04-01 2017-07-04 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US9712926B2 (en) 2009-04-01 2017-07-18 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US10171922B2 (en) 2009-04-01 2019-01-01 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US10225668B2 (en) 2009-04-01 2019-03-05 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US10652672B2 (en) 2009-04-01 2020-05-12 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US10715931B2 (en) 2009-04-01 2020-07-14 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US11388529B2 (en) 2009-04-01 2022-07-12 Starkey Laboratories, Inc. Hearing assistance system with own voice detection
US11825272B2 (en) 2019-02-08 2023-11-21 Starkey Laboratories, Inc. Assistive listening device systems, devices and methods for providing audio streams within sound fields
US11361785B2 (en) 2019-02-12 2022-06-14 Samsung Electronics Co., Ltd. Sound outputting device including plurality of microphones and method for processing sound signal using plurality of microphones
WO2022026725A1 (en) 2020-07-31 2022-02-03 Starkey Laboratories, Inc. Hypoxic or anoxic neurological injury detection with ear-wearable devices and system
US11812213B2 (en) 2020-09-30 2023-11-07 Starkey Laboratories, Inc. Ear-wearable devices for control of other devices and related methods
WO2022103954A1 (en) 2020-11-16 2022-05-19 Starkey Laboratories, Inc. Passive safety monitoring with ear-wearable devices
WO2022140559A1 (en) 2020-12-23 2022-06-30 Starkey Laboratories, Inc. Ear-wearable system and method for detecting dehydration
WO2022170091A1 (en) 2021-02-05 2022-08-11 Starkey Laboratories, Inc. Multi-sensory ear-worn devices for stress and anxiety detection and alleviation
WO2022198057A2 (en) 2021-03-19 2022-09-22 Starkey Laboratories, Inc. Ear-wearable device and system for monitoring of and/or providing therapy to individuals with hypoxic or anoxic neurological injury
WO2022271660A1 (en) 2021-06-21 2022-12-29 Starkey Laboratories, Inc. Ear-wearable systems for gait analysis and gait training

Also Published As

Publication number Publication date
US20170318398A1 (en) 2017-11-02
US20160192089A1 (en) 2016-06-30
US11388529B2 (en) 2022-07-12
US20190200142A1 (en) 2019-06-27
US10652672B2 (en) 2020-05-12
US9712926B2 (en) 2017-07-18
US20150043765A1 (en) 2015-02-12
US10225668B2 (en) 2019-03-05
US20200344559A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
US11388529B2 (en) Hearing assistance system with own voice detection
US10715931B2 (en) Hearing assistance system with own voice detection
US9749754B2 (en) Hearing aids with adaptive beamformer responsive to off-axis speech
EP3005731B1 (en) Method for operating a hearing device and a hearing device
US10327071B2 (en) Head-wearable hearing device
CN105898651B (en) Hearing system comprising separate microphone units for picking up the user's own voice
EP2843971B1 (en) Hearing aid device with in-the-ear-canal microphone
US9020171B2 (en) Method for control of adaptation of feedback suppression in a hearing aid, and a hearing aid
EP2988531B1 (en) Hearing assistance system with own voice detection

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: STARKEY LABORATORIES, INC., MINNESOTA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MERKS, IVO;REEL/FRAME:036207/0026

Effective date: 20150319

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: CITIBANK, N.A., AS ADMINISTRATIVE AGENT, TEXAS

Free format text: NOTICE OF GRANT OF SECURITY INTEREST IN PATENTS;ASSIGNOR:STARKEY LABORATORIES, INC.;REEL/FRAME:046944/0689

Effective date: 20180824

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8