WO2024027259A1 - Signal processing method and apparatus, and device control method and apparatus - Google Patents

Signal processing method and apparatus, and device control method and apparatus

Info

Publication number
WO2024027259A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
user
hearing aid
frequency band
band range
Prior art date
Application number
PCT/CN2023/093251
Other languages
English (en)
French (fr)
Inventor
桂振侠
范泛
曹天祥
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2024027259A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/43 Electronic input selection or mixing based on input signal analysis, e.g. mixing or selection between microphone and telecoil or between microphones with different directivity characteristics
    • H04R25/48 Deaf-aid sets using constructional means for obtaining a desired frequency response
    • H04R25/50 Customised settings for obtaining desired overall acoustical characteristics

Definitions

  • Embodiments of the present application relate to the field of multimedia, and in particular, to a signal processing method and device, and an equipment control method and device.
  • Hearing assistive devices include headphones, hearing aids, and similar devices.
  • Through a hearing assistive device, users hear their own speaking voice, that is, self-speech, as well as external environmental sounds.
  • Because the speaker of the hearing assistive device sits in the user's ear, the self-speech the user hears can sound unnatural, for example muffled and overly loud.
  • To mitigate this, the original in-ear signal played into the ear by the speaker of the hearing assistive device is usually collected, its phase and amplitude are adjusted, and the adjusted in-ear signal is played back simultaneously.
  • The adjusted in-ear signal can then offset the original in-ear signal, achieving noise reduction and alleviating the muffled, loud self-speech.
  • However, this method cancels not only the self-speech contained in the original in-ear signal but also the environmental sound it contains, so the user cannot perceive external environmental sounds.
  • The present application provides a signal processing method and device, and an equipment control method and device, so that the hearing assistive device uses the first signal and the second signal (the user's voice signal) to process the user's voice signal within the first signal in a targeted manner. This avoids cancelling the ambient sound signal in the first signal, so the user's own voice sounds more natural while the user can still perceive the ambient sound.
  • In a first aspect, embodiments of the present application provide a signal processing method applied to a hearing assistive device.
  • The method includes: when it is detected that the user is wearing the hearing assistive device and the user makes a sound, collecting a first signal and a second signal, where the first signal includes the user's voice signal and the surrounding environmental sound signal, and the second signal includes the user's voice signal; processing the user's voice signal in the first signal according to the first signal and the second signal to obtain a target signal; and playing the target signal through the ear speaker.
  • The first signal collected by the hearing assistive device includes the user's self-speech and environmental sounds.
  • The second signal includes the user's voice signal.
  • The hearing assistive device can use the first signal and the second signal to process the user's voice signal in the first signal in a targeted manner to obtain the target signal, and play the target signal through the ear speaker of the hearing assistive device. Cancellation of the environmental sound signal in the first signal is thus avoided: the user's own voice sounds more natural and the user can still perceive the environmental sound.
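  • To make the flow concrete, the following is a minimal Python sketch of this method. The device object and all of its method names (is_worn, user_is_speaking, reference_mic, bone_sensor, ear_speaker) are hypothetical, invented for illustration; the application itself does not define an API.

```python
def run_self_speech_optimization(dev, process):
    """Sketch of the first-aspect flow: collect the first and second
    signals, apply targeted processing to the user's voice in the
    first signal, and play the result through the ear speaker."""
    if dev.is_worn() and dev.user_is_speaking():
        first_sig = dev.reference_mic.record()   # user's voice + ambient sound
        second_sig = dev.bone_sensor.record()    # user's voice only
        target = process(first_sig, second_sig)  # attenuation or enhancement
        dev.ear_speaker.play(target)
```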
  • Processing the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal may include: filtering the first signal using the second signal to obtain a filter gain; and attenuating the user's voice signal in the first signal according to the filter gain to obtain the target signal.
  • Filtering the first signal with the second signal yields a filter gain that can be used to attenuate the user's voice signal in the first signal, so the target signal is obtained through attenuation processing.
  • Because the user's voice signal in the target signal is attenuated, the user's voice in the played target signal sounds less muffled and the auditory perception is more natural. Both goals are thus met: the user's own voice sounds more natural, and the user can still perceive the environmental sound.
  • Using the second signal to filter the first signal to obtain the filter gain may include: using the second signal to filter the user's voice signal in the first signal to obtain a desired signal; and calculating the ratio of the desired signal to the first signal to obtain the filter gain.
  • The filter gain is obtained as the ratio of the desired signal to the first signal.
  • Because the desired signal is a signal that meets the attenuation expectations for the second-signal component within the first signal, the accuracy of the filter gain is ensured, and attenuation via the filter gain can be more precise.
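  • As one concrete reading of this implementation, the sketch below computes a per-frequency-bin filter gain as the ratio of a desired (voice-suppressed) spectrum to the first signal's spectrum, then applies it. The spectral-subtraction estimate of the desired signal, the sample rate, and the frame length are assumptions; the application does not specify the filter.

```python
import numpy as np
from scipy.signal import stft, istft

FS = 16000       # assumed sample rate (Hz)
NPERSEG = 256    # assumed STFT frame length

def attenuate_self_speech(first_sig, second_sig, floor=0.1):
    """Attenuate the user's voice in first_sig (voice + ambient)
    using second_sig (the user's voice signal alone)."""
    n = min(len(first_sig), len(second_sig))
    _, _, X = stft(first_sig[:n], fs=FS, nperseg=NPERSEG)   # first-signal spectrum
    _, _, V = stft(second_sig[:n], fs=FS, nperseg=NPERSEG)  # voice spectrum

    # Desired signal: the first signal with its voice component suppressed
    # (here via simple magnitude subtraction, one possible realisation).
    desired_mag = np.maximum(np.abs(X) - np.abs(V), floor * np.abs(X))

    # Filter gain = ratio of the desired signal to the first signal.
    gain = desired_mag / (np.abs(X) + 1e-12)

    # Apply the gain per bin; keep the first signal's phase.
    _, target = istft(gain * X, fs=FS, nperseg=NPERSEG)
    return target
```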
  • Filtering the first signal using the second signal to obtain the filter gain may also include: filtering the first signal using the second signal to obtain an original filter gain; obtaining at least one of a degree correction amount and a frequency band range; adjusting the magnitude of the original filter gain according to the degree correction amount to obtain the filter gain; and/or adjusting the frequency band in which the original filter gain is enabled according to the frequency band range to obtain the filter gain.
  • The degree correction amount adjusts the magnitude of the filter gain, and the adjusted filter gain in turn adjusts how strongly the user's voice signal in the first signal is attenuated.
  • The frequency band range adjusts the band in which the filter gain is enabled, and the adjusted filter gain in turn adjusts which frequency band of the user's voice signal in the first signal is attenuated. These adjustments allow more flexible, personalized signal processing rather than a fixed processing effect, as the sketch below illustrates.
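  • A minimal sketch of both adjustments, assuming the degree correction amount is a scalar in [0, 1] and the frequency band range is a (low, high) pair in Hz; both parameterizations are illustrative, not specified by the application.

```python
import numpy as np

def adjust_filter_gain(orig_gain, freqs, correction=None, band=None):
    """Adjust a per-bin filter gain by a degree correction amount
    and/or restrict it to an enabled frequency band range."""
    gain = np.asarray(orig_gain, dtype=float).copy()
    if correction is not None:
        # Interpolate between "no attenuation" (gain of 1) and the
        # original gain: 0 bypasses the filter, 1 keeps full strength.
        gain = 1.0 + correction * (gain - 1.0)
    if band is not None:
        low, high = band
        outside = (freqs < low) | (freqs > high)
        gain[outside] = 1.0  # disable the gain outside the band range
    return gain
```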
  • Processing the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal may instead include: using the second signal to enhance the first signal to obtain a compensation signal; and enhancing the user's voice signal in the first signal according to the compensation signal to obtain the target signal.
  • Enhancing the first signal with the second signal yields a compensation signal that can be used to enhance the user's voice signal in the first signal, so the target signal is obtained through enhancement processing.
  • Because the user's voice signal in the target signal is enhanced, the user's voice in the played target signal sounds less thin, and the auditory perception is more natural. Both goals are thus met: the user's own voice sounds more natural, and the user can still perceive the environmental sound.
  • Using the second signal to enhance the first signal to obtain the compensation signal may include: determining a weighting coefficient for the second signal; obtaining an enhancement signal from the weighting coefficient and the second signal; and adding the enhancement signal to the first signal to obtain the compensation signal.
  • The enhancement signal is obtained from the weighting coefficient and the second signal, which ensures it is a signal that enhances the second signal, i.e., the user's voice signal. By loading the enhancement signal onto the first signal to form the compensation signal, it is ensured that the compensation signal can enhance the user's voice signal in the first signal, as sketched below.
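  • A minimal sketch of this enhancement path, assuming a fixed scalar weighting coefficient; how the coefficient is actually determined is left open by the application.

```python
import numpy as np

def enhance_self_speech(first_sig, second_sig, weight=0.3):
    """Form the compensation signal by adding a weighted copy of the
    second signal (the user's voice) onto the first signal."""
    n = min(len(first_sig), len(second_sig))
    enhancement = weight * np.asarray(second_sig[:n], dtype=float)
    return np.asarray(first_sig[:n], dtype=float) + enhancement
```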
  • Using the second signal to enhance the first signal to obtain the compensation signal may also include: obtaining at least one of a degree correction amount and a frequency band range; enhancing the first signal using the second signal at the signal compensation strength indicated by the degree correction amount to obtain the compensation signal; and/or enhancing, using the second signal, the part of the first signal belonging to the frequency band range to obtain the compensation signal.
  • The degree correction amount adjusts the compensation strength of the compensation signal, and the adjusted compensation signal in turn adjusts how strongly the user's voice signal in the first signal is enhanced.
  • The frequency band range adjusts the band in which the compensation signal enhances, and the adjusted compensation signal in turn adjusts which frequency band of the user's voice signal in the first signal is enhanced. These adjustments again allow more flexible, personalized signal processing rather than a fixed processing effect.
  • Obtaining at least one of the degree correction amount and the frequency band range may include: establishing a communication connection with a target terminal, where the target terminal displays a parameter adjustment interface containing at least one of the following setting controls: an adjustment degree setting control and a frequency band range setting control; and receiving at least one of the degree correction amount and the frequency band range sent by the target terminal, where the target terminal obtains them by detecting operations on the adjustment degree setting control and the frequency band range setting control respectively.
  • By operating at least one of the adjustment degree setting control and the frequency band range setting control on the parameter adjustment interface displayed by the target terminal, the user can set at least one of the attenuation degree and the frequency band range of the attenuated sound signal for the hearing assistive device, obtaining an attenuation (self-speech suppression) effect that meets the user's needs. This enables personalized signal processing and further improves the user experience.
  • The parameter adjustment interface may include a left ear adjustment interface and a right ear adjustment interface. Receiving at least one of the degree correction amount and the frequency band range sent by the target terminal then includes: receiving at least one of left ear correction data and right ear correction data sent by the target terminal, where the left ear correction data is obtained by the target terminal by detecting operations on the setting controls in the left ear adjustment interface and the right ear correction data by detecting operations on the setting controls in the right ear adjustment interface; the left ear correction data includes at least one of a left ear degree correction amount and a left ear frequency band range, and the right ear correction data includes at least one of a right ear degree correction amount and a right ear frequency band range; and selecting, based on the ear identifier carried by the left ear correction data and/or the right ear correction data, the correction data matching the ear on which the hearing assistive device is worn.
  • In this way, the user can set different parameters for the left and right earphones to match differences between the ears or the needs of different applications, further personalizing the signal processing and improving the user experience, as sketched below.
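  • One way the per-ear correction data could be represented and selected by ear identifier, sketched with hypothetical field names:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CorrectionData:
    ear: str                                    # "left" or "right" identifier
    degree: Optional[float] = None              # degree correction amount
    band: Optional[Tuple[float, float]] = None  # frequency band range (Hz)

def select_correction(local_ear, received):
    """Keep only the correction data whose ear identifier matches the
    ear on which this hearing assistive device is worn."""
    for data in received:
        if data.ear == local_ear:
            return data
    return None

# e.g. a right earphone keeps only the right-ear settings:
chosen = select_correction("right", [
    CorrectionData("left", degree=0.4, band=(100.0, 2000.0)),
    CorrectionData("right", degree=0.7, band=(100.0, 4000.0)),
])
```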
  • the target terminal is also used to display a mode selection interface;
  • The mode selection interface includes a self-speech optimization mode selection control. Before collecting the first signal and the second signal, the method further includes: when receiving a self-speech optimization mode enable signal sent by the target terminal, detecting whether the user is wearing the hearing assistive device, where the self-speech optimization mode enable signal is sent when the target terminal detects an enable operation on the self-speech optimization mode selection control; and, if the device is worn, detecting whether the user makes a sound.
  • In this way, the user can enable the self-speech optimization mode.
  • The hearing assistive device then detects whether the user is wearing it and, if so, whether the user makes a sound. The user can thus independently control whether the signal processing provided by the embodiments of the present application is performed, further improving the user experience.
  • Collecting the first signal and the second signal may include: detecting, via a first sensor, whether the user is wearing the hearing assistive device; if worn, detecting via a third sensor whether the user is in a quiet environment; if so, detecting via a second sensor whether the user makes a sound; and if so, collecting the first signal and the second signal.
  • The first sensor detects whether the user is wearing the hearing assistive device.
  • The third sensor detects whether the user is in a quiet environment while the device is worn.
  • The second sensor detects whether the user makes a sound while in the quiet environment; this gating is sketched below.
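  • A sketch of this three-stage gating; the sensor objects and their method names stand in for the first, second and third sensors and are hypothetical:

```python
def maybe_collect_signals(dev):
    """Collect the first and second signals only after all three
    detections described above succeed."""
    if not dev.first_sensor.user_is_wearing():       # wear detection
        return None
    if not dev.third_sensor.environment_is_quiet():  # quiet-environment check
        return None
    if not dev.second_sensor.user_is_speaking():     # voice-activity check
        return None
    first_sig = dev.reference_mic.record()   # user's voice + ambient sound
    second_sig = dev.bone_sensor.record()    # user's voice only
    return first_sig, second_sig
```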
  • Processing the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal may further include: collecting a third signal at the user's ear canal; playing the first signal and the third signal into the user's ear; collecting a fourth signal and a fifth signal, where the fourth signal includes the first signal after being mapped through the ear canal and the fifth signal includes the third signal after being mapped through the ear canal; determining the frequency response difference between the fourth signal and the fifth signal; and processing the user's voice signal in the first signal based on the first signal, the second signal and the frequency response difference to obtain the target signal, where the frequency response difference indicates the degree of processing.
  • The frequency response difference between the fourth signal and the fifth signal can be determined, and this difference reflects the user's ear canal structure, so that signal processing results suited to the user's ear canal can be obtained from the first signal, the second signal and the frequency response difference. This further improves the personalization accuracy of the signal processing, ensures the results fit the user better, and improves the user experience.
  • Determining the frequency response difference between the fourth signal and the fifth signal may include: obtaining the frequency responses of the fourth signal and the fifth signal respectively; and calculating the difference between the frequency response of the fourth signal and that of the fifth signal as the frequency response difference.
  • the frequency response difference between the two signals can be obtained.
  • Processing the user's voice signal in the first signal according to the first signal, the second signal and the frequency response difference to obtain the target signal may include: determining from the frequency response difference whether the processing type is attenuation or enhancement; when the processing type is attenuation, attenuating the user's voice signal in the first signal according to the frequency response difference to obtain the target signal; and when the processing type is enhancement, enhancing the user's voice signal in the first signal according to the frequency response difference to obtain the target signal.
  • The frequency response difference thus determines the type of processing applied to the user's voice signal in the first signal, so that the processing matches the signal processing requirements and the results are more accurate; one possible computation is sketched below.
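  • A sketch of one way to estimate the frequency response difference and derive the processing type. Welch power spectra and the sign-of-the-mean decision rule are assumptions; the application only states that the difference indicates the degree and type of processing.

```python
import numpy as np
from scipy.signal import welch

FS = 16000  # assumed sample rate (Hz)

def frequency_response_difference(fourth_sig, fifth_sig):
    """Per-frequency difference (in dB) between the fourth and fifth
    signals, i.e. the two signals after mapping through the ear canal."""
    f, p4 = welch(fourth_sig, fs=FS, nperseg=512)
    _, p5 = welch(fifth_sig, fs=FS, nperseg=512)
    diff_db = 10.0 * np.log10(p4 + 1e-12) - 10.0 * np.log10(p5 + 1e-12)
    return f, diff_db

def processing_type(diff_db):
    """Illustrative rule: attenuate when the fourth signal's response
    dominates on average, enhance otherwise."""
    return "attenuation" if np.mean(diff_db) > 0.0 else "enhancement"
```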
  • Detecting via the first sensor whether the user is wearing the hearing assistive device may include: establishing a communication connection with the target terminal, where the target terminal displays the mode selection interface and the mode selection interface includes a personalized mode selection control; and, when receiving a personalized mode enable signal sent by the target terminal, detecting via the first sensor whether the user is wearing the hearing assistive device, where the personalized mode enable signal is sent when the target terminal detects an enable operation on the personalized mode selection control.
  • A communication connection is established between the hearing assistive device and the target terminal, and the user can control whether the personalized mode of the hearing assistive device is enabled through the personalized mode selection control on the target terminal's mode selection interface.
  • When the user enables the personalized mode, the hearing assistive device detects whether the user is wearing it. The user can thus independently control whether to perform the signal processing based on the user's voice signal collected in a quiet environment, further improving the user experience.
  • Detecting via the second sensor whether the user makes a sound may include: if the user is in a quiet environment, sending an information display instruction to the target terminal, where the instruction instructs the target terminal to display prompt information that guides the user to make a sound; and detecting, via the second sensor, whether the user makes a sound.
  • When the hearing assistive device detects that the user is in a quiet environment, it sends an information display instruction to the target terminal.
  • the target terminal can display prompt information when receiving the information display instruction to guide the user to make sounds through the prompt information, so that signal processing can be performed more efficiently.
  • Before collecting the first signal and the second signal, the method may further include: when detecting that the user is wearing the hearing assistive device, sending a first completion instruction to the target terminal, where the first completion instruction instructs the target terminal to output prompt information that wear detection is complete; when detecting that the user is in a quiet environment, sending a second completion instruction to the target terminal, where the second completion instruction instructs the target terminal to output information that quiet-environment detection is complete; and/or, when the target signal is obtained, sending a third completion instruction to the target terminal, where the third completion instruction instructs the target terminal to output at least one of the following: information that detection is complete and information that personalized parameters have been generated.
  • By sending at least one of the first completion instruction, the second completion instruction and the third completion instruction to the target terminal, the hearing assistive device can instruct the target terminal to output at least one of the following: prompt information that wear detection is complete, information that quiet-environment detection is complete, information that detection is complete, and information that personalized parameters have been generated.
  • The method may further include performing again the step of detecting, via the first sensor, whether the user is wearing the hearing assistive device.
  • That is, after the hearing assistive device plays the target signal through the speaker, it repeats the wear-detection step via the first sensor.
  • In this way, while the user is using the hearing assistive device, the device can, based on quiet-environment detection, collect the user's current sound signal in real time and process the first signal in real time. The signal processing effect can thus be adjusted in real time during wear, so the processing matches the user's current voice state and yields a better effect.
  • Detecting via the first sensor whether the user is wearing the hearing assistive device may include: establishing a communication connection with the target terminal, where the target terminal displays the mode selection interface;
  • the mode selection interface includes an adaptive mode selection control; and, when receiving an adaptive mode enable signal sent by the target terminal, performing the step of detecting via the first sensor whether the user is wearing the hearing assistive device, where the adaptive mode enable signal is sent when the target terminal detects an enable operation on the adaptive mode selection control.
  • A communication connection is established between the hearing assistive device and the target terminal, and the user can control whether the adaptive mode of the hearing assistive device is enabled through the adaptive mode selection control on the target terminal's mode selection interface.
  • When the adaptive mode is enabled, the hearing assistive device detects whether the user is wearing it. The user can thus independently control whether to perform the real-time adjustment of the signal processing effect during wear provided by the embodiments of the present application, further improving the user experience.
  • In a second aspect, embodiments of the present application provide a device control method, applied to a terminal.
  • The method includes: establishing a communication connection with a hearing assistive device, where the hearing assistive device is configured to perform the signal processing method of the first aspect or any implementation of the first aspect; displaying a parameter adjustment interface that includes at least one of the following setting controls: an adjustment degree setting control and a frequency band range setting control; detecting operations on the adjustment degree setting control and the frequency band range setting control respectively to obtain at least one of a degree correction amount and a frequency band range; and sending at least one of the degree correction amount and the frequency band range to the hearing assistive device, where the hearing assistive device uses at least one of them to process the user's voice signal in the first signal to obtain the target signal.
  • the adjustment degree setting control includes a plurality of geometric figures with the same shape and different sizes, each of the plurality of geometric figures indicates a correction amount, and the larger the correction amount, the larger the size of the geometric figure;
  • The frequency band range setting control includes a frequency band range icon and a slider located on the icon. Accordingly, detecting operations on the adjustment degree setting control and the frequency band range setting control respectively to obtain at least one of the degree correction amount and the frequency band range includes: detecting click operations on the multiple geometric figures of the adjustment degree setting control and determining the correction amount indicated by the clicked figure as the degree correction amount; and/or detecting a sliding operation on the slider of the frequency band range setting control and determining the frequency band range from the slider's position.
  • the shape of the geometric figure may be a rectangle, a circle, a hexagon, etc.
  • the different sizes of different geometric figures can be different heights, widths, diameters, etc.
  • The larger the correction amount, the larger the geometric figure: for example, a taller rectangle or a circle with a larger diameter. A possible mapping from these controls to parameters is sketched below.
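  • As an illustration of how the control readings could map to parameters; all numeric ranges and the figure-to-amount table are hypothetical:

```python
def band_from_slider(position, min_hz=20.0, max_hz=8000.0):
    """Map a slider position in [0, 1] to the enabled frequency band
    range, here by moving the upper cut-off."""
    upper = min_hz + position * (max_hz - min_hz)
    return (min_hz, upper)

# Geometric figures indexed by size rank: the larger the figure the
# user clicks, the larger the degree correction amount.
FIGURE_CORRECTIONS = [0.25, 0.5, 0.75, 1.0]

def correction_from_figure(index):
    return FIGURE_CORRECTIONS[index]
```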
  • The parameter adjustment interface includes a left ear adjustment interface and a right ear adjustment interface. Accordingly, detecting operations on the adjustment degree setting control and the frequency band range setting control respectively to obtain at least one of the degree correction amount and the frequency band range includes: detecting operations on the setting controls in the left ear adjustment interface to obtain left ear correction data, where the left ear correction data includes at least one of a left ear degree correction amount and a left ear frequency band range; and detecting operations on the setting controls in the right ear adjustment interface to obtain right ear correction data, where the right ear correction data includes at least one of a right ear degree correction amount and a right ear frequency band range.
  • Displaying the parameter adjustment interface may include: displaying a mode selection interface, where the mode selection interface includes a self-speech optimization mode selection control; and, when an enable operation on the self-speech optimization mode selection control is detected, displaying the parameter adjustment interface.
  • Before displaying the parameter adjustment interface, the method may further include: displaying a mode selection interface, where the mode selection interface includes at least one of a personalized mode selection control and an adaptive mode selection control; when an enable operation on the personalized mode selection control is detected, sending a personalized mode enable signal to the hearing assistive device, where the personalized mode enable signal instructs the hearing assistive device to detect, via the first sensor, whether the user is wearing it; and/or, when an enable operation on the adaptive mode selection control is detected, sending an adaptive mode enable signal to the hearing assistive device, where the adaptive mode enable signal instructs the hearing assistive device to detect, via the first sensor, whether the user is wearing it.
  • The method may further include: receiving an information display instruction sent by the hearing assistive device when it detects that the user is in a quiet environment; and displaying prompt information, where the prompt information guides the user to make sounds.
  • Before displaying the prompt information, the method may further include: receiving a first completion instruction sent by the hearing assistive device when it detects that the user is wearing it; and receiving a second completion instruction sent by the hearing assistive device when it detects that the user is in a quiet environment. Accordingly, after displaying the prompt information, the method may further include: receiving a third completion instruction sent by the hearing assistive device when it obtains the target signal; and outputting at least one of the following: information that detection is complete and information that personalized parameters have been generated.
  • the second aspect and any implementation manner of the second aspect respectively correspond to the first aspect and any implementation manner of the first aspect.
  • For the technical effects corresponding to the second aspect and any implementation of the second aspect, refer to the technical effects of the first aspect and its implementations; they are not repeated here.
  • In a third aspect, embodiments of the present application provide a hearing assistive device, which includes: a signal acquisition module configured to collect a first signal and a second signal when it is detected that the user is wearing the hearing assistive device and the user makes a sound, where the first signal includes the user's voice signal and the surrounding environmental sound signal, and the second signal includes the user's voice signal; a signal processing module configured to process the user's voice signal in the first signal according to the first signal and the second signal to obtain a target signal; and a signal output module configured to play the target signal through the ear speaker.
  • the signal processing module is further configured to: filter the first signal using the second signal to obtain a filter gain; and perform attenuation processing on the user's voice signal in the first signal according to the filter gain to obtain a target signal.
  • The signal processing module is further configured to: use the second signal to filter the user's voice signal in the first signal to obtain the desired signal; and calculate the ratio of the desired signal to the first signal to obtain the filter gain.
  • The signal processing module is further configured to: filter the first signal using the second signal to obtain the original filter gain; obtain at least one of the degree correction amount and the frequency band range; adjust the magnitude of the original filter gain according to the degree correction amount to obtain the filter gain; and/or adjust the frequency band in which the original filter gain is enabled according to the frequency band range to obtain the filter gain.
  • The signal processing module is further configured to: use the second signal to enhance the first signal to obtain a compensation signal; and enhance the user's voice signal in the first signal according to the compensation signal to obtain the target signal.
  • The signal processing module is further configured to: determine the weighting coefficient of the second signal; obtain the enhancement signal from the weighting coefficient and the second signal; and load the enhancement signal onto the first signal to obtain the compensation signal.
  • The signal processing module is further configured to: obtain at least one of the degree correction amount and the frequency band range; enhance the first signal using the second signal at the signal compensation strength indicated by the degree correction amount to obtain a compensation signal; and/or use the second signal to enhance the part of the first signal belonging to the frequency band range to obtain a compensation signal.
  • The signal processing module is further configured to: establish a communication connection with the target terminal, where the target terminal displays a parameter adjustment interface that includes at least one of the following setting controls: an adjustment degree setting control and a frequency band range setting control; and receive at least one of the degree correction amount and the frequency band range sent by the target terminal, where the target terminal obtains them by respectively detecting operations on the adjustment degree setting control and the frequency band range setting control.
  • The parameter adjustment interface includes a left ear adjustment interface and a right ear adjustment interface. The signal processing module is further configured to: receive at least one of the left ear correction data and the right ear correction data sent by the target terminal, where the left ear correction data is obtained by the target terminal by detecting operations on the setting controls in the left ear adjustment interface, and the right ear correction data by detecting operations on the setting controls in the right ear adjustment interface.
  • The left ear correction data includes at least one of the left ear degree correction amount and the left ear frequency band range.
  • The right ear correction data includes at least one of the right ear degree correction amount and the right ear frequency band range. Based on the ear identifier carried by the left ear correction data and/or the right ear correction data, the correction data matching the ear on which the hearing assistive device is worn is selected.
  • the target terminal is also used to display a mode selection interface;
  • The mode selection interface includes a self-speech optimization mode selection control. The signal acquisition module is further configured to: when receiving a self-speech optimization mode enable signal sent by the target terminal, detect whether the user is wearing the hearing assistive device, where the self-speech optimization mode enable signal is sent when the target terminal detects an enable operation on the self-speech optimization mode selection control; and, if worn, detect whether the user makes a sound.
  • The signal acquisition module is further configured to: detect via the first sensor whether the user is wearing the hearing assistive device; if worn, detect via the third sensor whether the user is in a quiet environment; if so, detect via the second sensor whether the user makes a sound; and if so, collect the first signal and the second signal.
  • The signal processing module is further configured to: collect a third signal at the user's ear canal; play the first signal and the third signal into the user's ear; collect a fourth signal and a fifth signal, where the fourth signal includes the first signal after being mapped through the ear canal and the fifth signal includes the third signal after being mapped through the ear canal; determine the frequency response difference between the fourth signal and the fifth signal; and process the user's voice signal in the first signal according to the first signal, the second signal and the frequency response difference to obtain the target signal, where the frequency response difference indicates the degree of processing.
  • The signal processing module is further configured to: obtain the frequency responses of the fourth signal and the fifth signal respectively, and calculate the difference between the frequency response of the fourth signal and that of the fifth signal as the frequency response difference.
  • The signal processing module is further configured to: determine from the frequency response difference whether the processing type is attenuation or enhancement; when the processing type is attenuation, attenuate the user's voice signal in the first signal according to the frequency response difference to obtain the target signal; and when the processing type is enhancement, enhance the user's voice signal in the first signal according to the frequency response difference to obtain the target signal.
  • The signal acquisition module is further configured to: establish a communication connection with the target terminal, where the target terminal displays the mode selection interface and the mode selection interface includes a personalized mode selection control; and, when receiving the personalized mode enable signal sent by the target terminal, detect via the first sensor whether the user is wearing the hearing assistive device, where the personalized mode enable signal is sent when the target terminal detects an enable operation on the personalized mode selection control.
  • The signal acquisition module is further configured to: if the user is in a quiet environment, send an information display instruction to the target terminal, where the instruction instructs the target terminal to display prompt information that guides the user to make a sound; and detect via the second sensor whether the user makes a sound.
  • The device further includes an instruction sending module configured to: when detecting that the user is wearing the hearing assistive device, send a first completion instruction to the target terminal, where the first completion instruction instructs the target terminal to output prompt information that wear detection is complete; when detecting that the user is in a quiet environment, send a second completion instruction to the target terminal, where the second completion instruction instructs the target terminal to output information that quiet-environment detection is complete; and/or, when the target signal is obtained, send a third completion instruction to the target terminal, where the third completion instruction instructs the target terminal to output at least one of the following: information that detection is complete and information that personalized parameters have been generated.
  • The signal acquisition module is further configured to: after the signal output module plays the target signal through the speaker, perform again the step of detecting via the first sensor whether the user is wearing the hearing assistive device.
  • The signal acquisition module is further configured to: establish a communication connection with the target terminal, where the target terminal displays a mode selection interface and the mode selection interface includes an adaptive mode selection control; and, when receiving the adaptive mode enable signal sent by the target terminal, detect via the first sensor whether the user is wearing the hearing assistive device, where the adaptive mode enable signal is sent when the target terminal detects an enable operation on the adaptive mode selection control.
  • the third aspect and any implementation manner of the third aspect respectively correspond to the first aspect and any implementation manner of the first aspect.
  • the technical effects corresponding to the third aspect and any implementation manner of the third aspect please refer to the technical effects corresponding to the above-mentioned first aspect and any implementation manner of the first aspect, which will not be described again here.
  • In a fourth aspect, embodiments of the present application provide an equipment control device, applied to a terminal.
  • The device includes: a communication module configured to establish a communication connection with a hearing assistive device, where the hearing assistive device is configured to perform the signal processing method of the first aspect or any implementation of the first aspect; an interaction module configured to display a parameter adjustment interface that includes at least one of the following setting controls: an adjustment degree setting control and a frequency band range setting control; a detection module configured to detect operations on the adjustment degree setting control and the frequency band range setting control respectively to obtain at least one of the degree correction amount and the frequency band range; and a control module configured to send at least one of the degree correction amount and the frequency band range to the hearing assistive device, where the hearing assistive device uses at least one of them to process the user's voice signal in the first signal to obtain the target signal.
  • the adjustment degree setting control includes a plurality of geometric figures with the same shape and different sizes, each of the plurality of geometric figures indicates a correction amount, and the larger the correction amount, the larger the size of the geometric figure;
  • The frequency band range setting control includes a frequency band range icon and a slider located on the icon. The detection module is further configured to: detect click operations on the multiple geometric figures of the adjustment degree setting control and determine the correction amount indicated by the clicked figure as the degree correction amount; and/or detect a sliding operation on the slider of the frequency band range setting control and determine the frequency band range from the slider's position.
  • The parameter adjustment interface includes a left ear adjustment interface and a right ear adjustment interface. The detection module is further configured to: detect operations on the setting controls in the left ear adjustment interface to obtain left ear correction data, where the left ear correction data includes at least one of a left ear degree correction amount and a left ear frequency band range; and detect operations on the setting controls in the right ear adjustment interface to obtain right ear correction data, where the right ear correction data includes at least one of a right ear degree correction amount and a right ear frequency band range.
  • The interaction module is further configured to: display a mode selection interface, where the mode selection interface includes a self-speech optimization mode selection control; and, when an enable operation on the self-speech optimization mode selection control is detected, display the parameter adjustment interface.
  • The interaction module is further configured to: display the mode selection interface before displaying the parameter adjustment interface, where the mode selection interface includes at least one of a personalized mode selection control and an adaptive mode selection control; when an enable operation on the personalized mode selection control is detected, send a personalized mode enable signal to the hearing assistive device, where the personalized mode enable signal instructs the hearing assistive device to detect, via the first sensor, whether the user is wearing it; and/or, when an enable operation on the adaptive mode selection control is detected, send an adaptive mode enable signal to the hearing assistive device, where the adaptive mode enable signal instructs the hearing assistive device to detect, via the first sensor, whether the user is wearing it.
  • The interaction module is further configured to: after the personalized mode enable signal is sent to the hearing assistive device, receive an information display instruction sent by the hearing assistive device when it detects that the user is in a quiet environment; and display prompt information, where the prompt information guides the user to make sounds.
  • The interaction module is further configured to: before displaying the prompt information, receive a first completion instruction sent by the hearing assistive device when it detects that the user is wearing it, and receive a second completion instruction sent by the hearing assistive device when it detects that the user is in a quiet environment; and, after displaying the prompt information, receive a third completion instruction sent by the hearing assistive device when it obtains the target signal, and output at least one of the following: information that detection is complete and information that personalized parameters have been generated.
  • the fourth aspect and any implementation manner of the fourth aspect respectively correspond to the second aspect and any implementation manner of the second aspect.
  • the technical effects corresponding to the fourth aspect and any implementation manner of the fourth aspect please refer to the technical effects corresponding to the above-mentioned second aspect and any implementation manner of the second aspect, which will not be described again here.
  • In a fifth aspect, embodiments of the present application provide an electronic device, including: a processor and a transceiver, and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method of the first to second aspects or any possible implementation of the first to second aspects.
  • the fifth aspect and any implementation manner of the fifth aspect respectively correspond to the first to second aspects or any implementation manner of the first to second aspects.
  • the technical effects corresponding to the fifth aspect and any implementation of the fifth aspect can be found in the technical effects corresponding to the first to second aspects or any implementation of the first to second aspects, and will not be described again here.
  • In a sixth aspect, embodiments of the present application provide a computer-readable storage medium, including a computer program; when the computer program runs on an electronic device, the electronic device is caused to execute the method of the first to second aspects or any possible implementation of the first to second aspects.
  • the sixth aspect and any implementation manner of the sixth aspect respectively correspond to the first to second aspects or any implementation manner of the first to second aspects.
  • the technical effects corresponding to the sixth aspect and any implementation of the sixth aspect can be found in the technical effects corresponding to the first to second aspects or any implementation of the first to second aspects, and will not be described again here.
  • In a seventh aspect, embodiments of the present application provide a chip, including one or more interface circuits and one or more processors.
  • The interface circuit is used to receive signals from the memory of the electronic device and send them to the processor, where the signals include computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device is caused to perform the method of the first to second aspects or any possible implementation of the first to second aspects.
  • the seventh aspect and any implementation manner of the seventh aspect respectively correspond to the first to second aspects or any implementation manner of the first to second aspects.
  • the technical effects corresponding to the seventh aspect and any implementation of the seventh aspect can be found in the technical effects corresponding to the first to second aspects or any implementation of the first to second aspects, and will not be described again here.
  • Figure 1 is an exemplary flow chart of a signal processing method
  • Figure 2 is an exemplary schematic diagram of the signal processing process
  • Figure 3 is an exemplary structural diagram of an earphone provided by an embodiment of the present application.
  • Figure 4 is an exemplary structural diagram of a signal processing system provided by an embodiment of the present application.
  • Figure 5 is an exemplary structural diagram of an electronic device 500 provided by an embodiment of the present application.
  • Figure 6 is an exemplary software structure block diagram of the electronic device 500 provided by the embodiment of the present application.
  • Figure 7 is an exemplary flow chart of a signal processing method provided by an embodiment of the present application.
  • Figure 8 is an exemplary schematic diagram of the parameter adjustment interface provided by the embodiment of the present application.
  • Figure 9 is an exemplary schematic diagram of the headphone algorithm architecture provided by the embodiment of the present application.
  • Figure 10 is another exemplary schematic diagram of the parameter adjustment interface provided by the embodiment of the present application.
  • Figure 11 is another exemplary schematic diagram of the headphone algorithm architecture provided by the embodiment of the present application.
  • Figure 12a is an exemplary schematic diagram of the mode selection interface provided by the embodiment of the present application.
  • Figure 12b is another exemplary schematic diagram of the mode selection interface provided by the embodiment of the present application.
  • Figure 13 is another exemplary schematic diagram of the parameter adjustment interface provided by the embodiment of the present application.
  • Figure 14 is an exemplary schematic diagram of the detection information display interface provided by the embodiment of the present application.
  • Figure 15 is another exemplary structural diagram of an earphone provided by an embodiment of the present application.
  • Figure 16 is another exemplary schematic diagram of the headphone algorithm architecture provided by the embodiment of the present application.
  • Figure 17 is another exemplary flow chart of the signal processing method provided by the embodiment of the present application.
  • Figure 18 is another exemplary schematic diagram of the mode selection interface provided by the embodiment of the present application.
  • Figure 19 is another exemplary schematic diagram of the headphone algorithm architecture provided by the embodiment of the present application.
  • Figure 20 shows a schematic block diagram of a device 2000 according to an embodiment of the present application.
  • Figure 21 shows a schematic block diagram of a hearing aid device 2100 according to an embodiment of the present application.
  • Figure 22 shows a schematic block diagram of an equipment control device 2200 according to an embodiment of the present application.
  • "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • The terms "first" and "second" in the description and claims of the embodiments of this application are used to distinguish different objects rather than to describe a specific order of objects.
  • For example, the first target object, the second target object, etc. are used to distinguish different target objects rather than to describe a specific order of the target objects.
  • Multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • When a user wears a hearing assistive device, the device usually collects the sound signal of the user's speech and plays it back to support interaction with the external environment, for example conversation with others. The voice the user then hears through the device often sounds muffled and loud, making the sound quality unnatural and reducing the user experience.
  • To alleviate the muffledness and loudness, the signals collected by the hearing assistive device can be processed by inverting the phase, adjusting the amplitude, and so on.
  • Figure 1 is an exemplary flow chart of a signal processing method. As shown in Figure 1, the process may include the following steps:
  • The bone conduction sensor conducts sound wave signals; it either contacts the ear canal directly or forms a vibration transmission path to the ear canal through a solid medium.
  • the phase of the bone conduction acoustic wave signal is adjusted through S001 and S002, and then the adjusted signal and the corresponding sound signal are simultaneously played in the human ear through S003.
  • the corresponding sound signal refers to the sound signal of the user's speech collected by the hearing aid device.
  • the adjusted signal played can offset the sound signal played, thereby alleviating the problem of the user's voice being muffled and loud.
  • In some cases, however, the adjusted signal played is no longer the inverse of the played sound signal; it cannot offset the played sound signal, and so cannot solve the problem of the user's voice being muffled and loud.
  • FIG. 2 is an exemplary schematic diagram of the signal processing process.
  • the microphone M1 of the hearing aid device collects external environmental signals
  • the bone conduction sensor M3 collects the sound signal of the user's speech.
  • the external environment signal and the sound signal of the user's speech are processed by the negative feedback path SP and played to the user through the speaker R.
• the in-ear signal A is the signal generated in the user's ear.
  • the signals in the user's ears include: some external environment signals, the signal played by the speaker R and the sound signal of the user's speech.
  • the microphone M2 of the hearing aid device collects the user's ear signals at the user's ear canal EC and sends them to the negative feedback path SP for processing and playback.
  • the negative feedback path SP adjusts the phase and amplitude of the signal in the user's ear, and then plays it simultaneously with the external environment signal collected by the microphone M1.
  • the adjusted signal in the user's ear contains the same signal as the played external environment signal, which can offset the external environment signal.
  • the above external environment signal includes the sound signal of the user speaking and the external environment sound.
• the example in Figure 2 not only suppresses the sound signal of the user's speech but also cancels the external environmental sound, leaving the user unable to perceive the external environment.
  • the embodiment of the present application provides a signal processing method to solve the above problem.
  • the first signal includes the user's self-speaking voice and environmental sounds
  • the second signal includes the user's voice signal.
• the embodiments of the present application can process the user's sound signal without affecting the environmental sound signal, reducing the muffled, loud, and insufficiently full auditory perception when the user wears the hearing aid device, so that the sound the user hears is more natural while the user can still perceive the ambient sound.
  • the hearing aid device may include earphones or hearing aids.
• the headphones or hearing aids have a digital augmented hearing (Digital Augmented Hearing) function for signal processing.
• the earphones may include two sound-producing units worn on the ears; the one that fits the left ear can be called the left earphone, and the one that fits the right ear can be called the right earphone.
  • the earphones in the embodiments of the present application may be head-mounted earphones, ear-hung earphones, neck-hung earphones, or earbud-type earphones.
• earbud earphones may specifically include in-ear earphones (or canal earphones) or semi-in-ear earphones. The following takes in-ear earphones as an example.
  • the left and right earphones use a similar structure. Either the left earphone or the right earphone can adopt the earphone structure described below.
• the earphone structure (left earphone or right earphone) includes a rubber sleeve that can be inserted into the ear canal, an ear pack that fits close to the ear, and an earphone stem hanging from the ear pack.
  • the rubber sleeve guides sound to the ear canal.
• the ear pack contains the battery, speaker, sensors, and other devices; microphones, physical buttons, etc. can be arranged on the earphone stem.
• the earphone stem can be cylindrical, cuboid, ellipsoid, etc. in shape.
  • FIG. 3 is an exemplary structural diagram of an earphone provided by an embodiment of the present application.
  • the earphone 300 may include: a speaker 301, a reference microphone 302, a bone conduction sensor 303 and a processor 304.
  • the reference microphone 302 is arranged outside the earphone and is used to collect sound signals outside the earphone when the user wears the earphone.
  • the sound signal may include the sound signal of the user speaking and environmental sound.
  • Reference microphone 302 may be an analog microphone or a digital microphone.
• the positional relationship between the reference microphone 302 and the speaker 301 is as follows: the speaker 301 is located between the ear canal and the reference microphone 302 and is used to play the processed sound collected by the microphone. In one case, the speaker can also be used to play music.
• the reference microphone 302 is close to the external structure of the ear and may be arranged on the upper part of the earphone stem. There is an opening in the earphone near the reference microphone 302 for passing external environmental sounds through to the reference microphone 302.
  • the bone conduction sensor 303 is arranged inside the earphone at a position close to the ear canal.
  • the bone conduction sensor 303 is attached to the ear canal to collect the sound signal of the user's speech that is conducted through the human body.
  • the processor 304 is used to control the collection and playback of signals by the earphones, and process the signals through processing algorithms.
  • the earphone 300 includes a left earphone and a right earphone, and the left earphone and the right earphone can simultaneously implement the same or different signal processing functions.
  • the left earphone and the right earphone implement the same signal processing function at the same time, the user's left ear wearing the left earphone and the right ear wearing the right earphone can have the same auditory perception.
  • FIG. 4 is an exemplary structural diagram of a signal processing system provided by an embodiment of the present application.
  • a signal processing system which includes a terminal device 100 and a headset 300 .
  • the terminal device 100 is communicatively connected to the headset 300, and the connection may be a wireless connection or a wired connection.
  • the terminal device 100 may be connected to the headset 300 through Bluetooth technology, wireless fidelity (Wi-Fi) technology, infrared radiation (IR) technology, or ultra-wideband technology.
  • the terminal device 100 is a device with a display interface function.
  • the terminal device 100 may be, for example, an electronic device with a display interface such as a mobile phone, a monitor, a tablet, a vehicle-mounted device, or a smart TV, or may be an electronic device such as a smart watch, a smart bracelet, or other smart display wearable products.
  • the embodiment of the present application places no special restrictions on the specific form of the terminal device 100 mentioned above.
  • the terminal device 100 can interact with the headset 300 through manual operation, or can be applied to interact with the headset 300 in a smart scenario.
  • FIG. 5 is an exemplary structural diagram of an electronic device 500 provided by an embodiment of the present application.
  • the electronic device 500 may be any one of the terminal device and earphone included in the signal processing system shown in FIG. 4 .
• the electronic device 500 shown in FIG. 5 is only an example; the electronic device 500 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different arrangement of components.
  • the various components shown in Figure 5 may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
• the electronic device 500 may include: a processor 510, an external memory interface 520, an internal memory 521, a universal serial bus (USB) interface 530, a charging management module 540, a power management module 541, a battery 542, an antenna 1, an antenna 2, a mobile communication module 550, a wireless communication module 560, an audio module 570, a sensor module 580, buttons 590, a motor 591, an indicator 592, a camera 593, a display screen 594, a subscriber identification module (SIM) card interface 595, etc.
• the sensor module 580 may include a pressure sensor 580A, a gyro sensor 580B, an air pressure sensor 580C, a magnetic sensor 580D, an acceleration sensor 580E, a distance sensor 580F, a proximity light sensor 580G, a fingerprint sensor 580H, a temperature sensor 580J, a touch sensor 580K, an ambient light sensor 580L, etc.
  • the processor 510 may include one or more processing units.
• the processor 510 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 500 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 510 may also be provided with a memory for storing instructions and data.
  • the memory in processor 510 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 510 . If processor 510 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 510 is reduced, thus improving the efficiency of the system.
  • processor 510 may include one or more interfaces.
• interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
• the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 510 may include multiple sets of I2C buses.
  • the processor 510 can separately couple the touch sensor 580K, charger, flash, camera 593, etc. through different I2C bus interfaces.
  • the processor 510 can be coupled to the touch sensor 580K through an I2C interface, so that the processor 510 and the touch sensor 580K communicate through the I2C bus interface to implement the touch function of the electronic device 500 .
  • the I2S interface can be used for audio communication.
  • processor 510 may include multiple sets of I2S buses.
  • the processor 510 can be coupled with the audio module 570 through the I2S bus to implement communication between the processor 510 and the audio module 570.
  • the audio module 570 can transmit audio signals to the wireless communication module 560 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the audio module 570 and the wireless communication module 560 may be coupled through a PCM bus interface.
  • the audio module 570 can also transmit audio signals to the wireless communication module 560 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 510 and the wireless communication module 560 .
  • the processor 510 communicates with the Bluetooth module in the wireless communication module 560 through the UART interface to implement the Bluetooth function.
  • the audio module 570 can transmit audio signals to the wireless communication module 560 through the UART interface to implement the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 510 with peripheral devices such as the display screen 594 and the camera 593 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 510 and the camera 593 communicate through the CSI interface to implement the shooting function of the electronic device 500 .
  • the processor 510 and the display screen 594 communicate through the DSI interface to implement the display function of the electronic device 500 .
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 510 with the camera 593, display screen 594, wireless communication module 560, audio module 570, sensor module 580, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 530 is an interface that complies with the USB standard specification. It can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 530 can be used to connect a charger to charge the electronic device 500, and can also be used to transmit data between the electronic device 500 and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is only a schematic illustration and does not constitute a structural limitation of the electronic device 500 .
  • the electronic device 500 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charge management module 540 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 540 may receive charging input from the wired charger through the USB interface 530 .
  • the charging management module 540 may receive wireless charging input through the wireless charging coil of the electronic device 500 .
• while the charging management module 540 charges the battery 542, it can also supply power to the electronic device through the power management module 541.
  • the power management module 541 is used to connect the battery 542, the charging management module 540 and the processor 510.
  • the power management module 541 receives input from the battery 542 and/or the charging management module 540 and supplies power to the processor 510, internal memory 521, external memory, display screen 594, camera 593, and wireless communication module 560.
  • the power management module 541 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 541 may also be provided in the processor 510 .
  • the power management module 541 and the charging management module 540 can also be provided in the same device.
  • the wireless communication function of the electronic device 500 can be implemented through the antenna 1, the antenna 2, the mobile communication module 550, the wireless communication module 560, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 500 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 550 can provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 500 .
  • the mobile communication module 550 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 550 can receive electromagnetic waves from the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 550 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 550 may be disposed in the processor 510 .
  • at least part of the functional modules of the mobile communication module 550 and at least part of the modules of the processor 510 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs sound signals through audio devices (not limited to speaker 570A, receiver 570B, etc.), or displays images or videos through display screen 594.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 510 and may be provided in the same device as the mobile communication module 550 or other functional modules.
• the wireless communication module 560 can provide wireless communication solutions applied to the electronic device 500, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 560 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 560 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 510 .
  • the wireless communication module 560 can also receive the signal to be sent from the processor 510, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the electronic device 500 is coupled to the mobile communication module 550, and the antenna 2 is coupled to the wireless communication module 560, so that the electronic device 500 can communicate with the network and other devices through wireless communication technology.
• the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
• the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 500 implements display functions through a GPU, a display screen 594, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 594 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 510 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 594 is used to display images, videos, etc.
  • Display 594 includes a display panel.
• the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light emitting diode (QLED), etc.
  • the electronic device 500 may include 1 or N display screens 594, where N is a positive integer greater than 1.
  • the electronic device 500 can implement the shooting function through an ISP, a camera 593, a video codec, a GPU, a display screen 594, and an application processor.
  • the ISP is used to process the data fed back by the camera 593. For example, when taking a photo, the shutter is opened, the light is transmitted to the camera sensor through the lens, the optical signal is converted into an electrical signal, and the camera sensor passes the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye. ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 593.
  • Camera 593 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 500 may include 1 or N cameras 593, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 500 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 500 may support one or more video codecs. In this way, the electronic device 500 can play or record videos in multiple encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 500 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the external memory interface 520 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 500.
  • the external memory card communicates with the processor 510 through the external memory interface 520 to implement the data storage function. Such as saving music, videos, etc. files in external memory card.
  • Internal memory 521 may be used to store computer executable program code, which includes instructions.
  • the processor 510 executes instructions stored in the internal memory 521 to execute various functional applications and data processing of the electronic device 500 .
  • the internal memory 521 may include a program storage area and a data storage area. Among them, the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the electronic device 500 (such as audio data, phone book, etc.).
  • the internal memory 521 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the electronic device 500 can implement audio functions through the audio module 570, the speaker 570A, the receiver 570B, the microphone 570C, the headphone interface 570D, and the application processor. Such as music playback, recording, etc.
  • the audio module 570 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 570 may also be used to encode and decode audio signals. In some embodiments, the audio module 570 may be provided in the processor 510 , or some functional modules of the audio module 570 may be provided in the processor 510 .
• Speaker 570A, also known as a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • Electronic device 500 can listen to music through speaker 570A, or listen to hands-free calls.
• Receiver 570B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be heard by bringing the receiver 570B close to the human ear.
• Microphone 570C, also known as a "mic" or "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak close to the microphone 570C to input a sound signal into the microphone 570C.
  • the electronic device 500 may be provided with at least one microphone 570C. In other embodiments, the electronic device 500 may be provided with two microphones 570C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device 500 can also be equipped with three, four or more microphones 570C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 570D is used to connect wired headphones.
  • the headphone interface 570D can be a USB interface 530, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 580A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 580A may be disposed on display screen 594.
  • pressure sensors 580A such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, etc.
  • a capacitive pressure sensor may include at least two parallel plates of conductive material.
  • the electronic device 500 may also calculate the touched position based on the detection signal of the pressure sensor 580A.
• touch operations acting on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with an intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 580B may be used to determine the motion posture of the electronic device 500 .
  • the angular velocity of electronic device 500 about three axes may be determined by gyro sensor 580B.
  • the gyro sensor 580B can be used for image stabilization. For example, when the shutter is pressed, the gyro sensor 580B detects the angle at which the electronic device 500 shakes, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to offset the shake of the electronic device 500 through reverse movement to achieve anti-shake.
  • the gyro sensor 580B can also be used for navigation and somatosensory gaming scenarios.
  • Air pressure sensor 580C is used to measure air pressure. In some embodiments, the electronic device 500 calculates the altitude through the air pressure value measured by the air pressure sensor 580C to assist positioning and navigation.
  • Magnetic sensor 580D includes a Hall sensor.
  • the electronic device 500 may utilize the magnetic sensor 580D to detect the opening and closing of the flip holster.
  • the electronic device 500 may detect the opening and closing of the flip according to the magnetic sensor 580D. Then, based on the detected opening and closing status of the leather case or the opening and closing status of the flip cover, features such as automatic unlocking of the flip cover are set.
  • the acceleration sensor 580E can detect the acceleration of the electronic device 500 in various directions (generally three axes). When the electronic device 500 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of electronic devices and be used in horizontal and vertical screen switching, pedometer and other applications.
  • Distance sensor 580F used to measure distance.
  • Electronic device 500 can measure distance via infrared or laser. In some embodiments, when shooting a scene, the electronic device 500 can utilize the distance sensor 580F to measure distance to achieve fast focusing.
  • Proximity light sensor 580G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 500 emits infrared light through the light emitting diode.
  • Electronic device 500 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 500 . When insufficient reflected light is detected, the electronic device 500 may determine that there is no object near the electronic device 500 .
  • the electronic device 500 can use the proximity light sensor 580G to detect when the user holds the electronic device 500 close to the ear for talking, so as to automatically turn off the screen to save power.
• the proximity light sensor 580G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 580L is used to sense ambient light brightness.
  • the electronic device 500 can adaptively adjust the brightness of the display screen 594 according to the perceived ambient light brightness.
  • the ambient light sensor 580L can also be used to automatically adjust white balance when taking photos.
  • the ambient light sensor 580L can also cooperate with the proximity light sensor 580G to detect whether the electronic device 500 is in the pocket to prevent accidental touching.
  • Fingerprint sensor 580H is used to collect fingerprints.
  • the electronic device 500 can use the collected fingerprint characteristics to achieve fingerprint unlocking, access to application locks, fingerprint photography, fingerprint answering of incoming calls, etc.
  • Temperature sensor 580J is used to detect temperature.
  • the electronic device 500 uses the temperature detected by the temperature sensor 580J to execute the temperature processing strategy. For example, when the temperature reported by the temperature sensor 580J exceeds a threshold, the electronic device 500 reduces the performance of a processor located near the temperature sensor 580J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 500 heats the battery 542 to prevent the low temperature from causing the electronic device 500 to shut down abnormally. In some other embodiments, when the temperature is lower than another threshold, the electronic device 500 performs boosting on the output voltage of the battery 542 to avoid abnormal shutdown caused by low temperature.
• Touch sensor 580K, also known as a "touch panel".
• the touch sensor 580K can be disposed on the display screen 594; the touch sensor 580K and the display screen 594 form a touch screen, also called a "touchscreen".
  • Touch sensor 580K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 594.
  • the touch sensor 580K may also be disposed on the surface of the electronic device 500 at a location different from that of the display screen 594 .
  • the bone conduction sensor 580M can acquire vibration signals. In some embodiments, the bone conduction sensor 580M can acquire the vibration signal of the vibrating bone mass of the human body's vocal part. The bone conduction sensor 580M can also contact the human body's pulse and receive blood pressure beating signals. In some embodiments, the bone conduction sensor 580M can also be provided in the earphone and combined into a bone conduction earphone.
  • the audio module 570 can analyze the voice signal based on the vibration signal of the vocal vibrating bone obtained by the bone conduction sensor 580M to implement the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 580M to implement the heart rate detection function.
  • the buttons 590 include a power button, a volume button, etc.
  • Key 590 may be a mechanical key. It can also be a touch button.
  • the electronic device 500 may receive key input and generate key signal input related to user settings and function control of the electronic device 500 .
  • Motor 591 can produce vibration prompts.
  • Motor 591 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • Acting on touch operations in different areas of the display screen 594, the motor 591 can also correspond to different vibration feedback effects.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 592 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 595 is used to connect the SIM card.
  • the SIM card can be connected to or separated from the electronic device 500 by inserting it into the SIM card interface 595 or pulling it out from the SIM card interface 595 .
  • the electronic device 500 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 595 can support Nano SIM card, Micro SIM card, SIM card, etc.
• the same SIM card interface 595 can accept multiple cards inserted at the same time.
  • the types of the plurality of cards may be the same or different.
  • the SIM card interface 595 is also compatible with different types of SIM cards.
  • the SIM card interface 595 is also compatible with external memory cards.
  • the electronic device 500 interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device 500 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 500 and cannot be separated from the electronic device 500 .
  • the software system of the electronic device 500 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • This embodiment of the present application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 500 .
  • FIG. 6 is an exemplary software structure block diagram of the electronic device 500 provided by the embodiment of the present application.
  • the layered architecture of the electronic device 500 divides the software into several layers, and each layer has clear roles and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: application layer, application framework layer, Android runtime (Android runtime) and system libraries, and kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and other applications.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, phone manager, content provider, view system, resource manager, notification manager, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • the phone manager is used to provide communication functions of the electronic device 500 .
• for example, the management of call status (including connected, hung up, etc.).
  • Content providers are used to store and retrieve data and make this data accessible to applications.
  • Said data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
• the notification manager can also present notifications in the status bar at the top of the system in the form of charts or scroll bar text, such as notifications for applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
• the core library contains two parts: one is the function modules that the Java language needs to call, and the other is the core library of Android.
  • the application layer and application framework layer run in virtual machines.
• the virtual machine converts the Java files of the application layer and application framework layer into binary files and executes them.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), two-dimensional graphics engine (for example: SGL), three-dimensional graphics processing library (for example: OpenGL ES), media library (Media Libraries), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer at least includes display driver, audio driver, Wi-Fi driver, sensor driver, and Bluetooth driver.
  • the components included in the software structure shown in FIG. 6 do not constitute specific limitations on the electronic device 500 .
  • the electronic device 500 may include more or less components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
• this application can collect, through the hearing aid device, a first signal including the user's self-speaking voice and environmental sounds, and a second signal including the user's voice signal, and then use the two signals to process the user's voice signal in the first signal, achieving the effect that the user's own voice heard by the user is more natural while the user can still perceive the environmental sound.
  • FIG. 7 is an exemplary flow chart of a signal processing method provided by an embodiment of the present application. As shown in Figure 7, the signal processing method is applied to the hearing aid device, which may include but is not limited to the following steps:
  • the first signal includes the user's sound signal and the surrounding environmental sound signal
  • the second signal includes the user's sound signal.
• the hearing aid device may collect the first signal through the reference microphone 302 and the second signal through the bone conduction sensor 303.
  • the surrounding environmental sound signals may include sound signals in the physical environment where the user is located, except for the user's own voice.
  • the surrounding environmental sound signal may include at least one of the following signals: the sound signal of a person talking to the user face to face, music signals, conversation sounds, car horns, etc. in the user's physical environment.
  • the bone conduction sensor 303 collects sound signals conducted by human bones, which can ensure that the sound signals collected are the sound signals of the user wearing the hearing aid device speaking, that is, the user's self-speaking signal.
• the hearing aid device can detect whether the user is wearing the hearing aid device through the first sensor; if the hearing aid device is worn, the hearing aid device can detect whether the user makes a sound through the second sensor; if it is detected that the user makes a sound, the first signal and the second signal are collected (a sketch of this gating logic follows below).
  • the first sensor may include a pressure sensor, a temperature sensor, etc.
  • the second sensor may be a bone conduction sensor 303.
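• As a hedged illustration of the wear-and-voice gating just described, here is a minimal Python sketch; all attribute and method names (first_sensor, second_sensor, reference_mic, bone_sensor) are illustrative placeholders, not an actual device API.

```python
def collect_if_speaking(device):
    """Gating sketch: collect the first and second signals only when the
    first sensor detects that the device is worn and the second sensor
    detects that the user makes a sound. All names here are illustrative
    placeholders, not an actual device API."""
    if not device.first_sensor.is_worn():
        return None
    if not device.second_sensor.voice_detected():
        return None
    first_signal = device.reference_mic.read()   # self-speech + environment
    second_signal = device.bone_sensor.read()    # self-speech via bone conduction
    return first_signal, second_signal
```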
  • the hearing aid device can process the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal.
  • the hearing aid device processes the user's voice signal in the first signal, which may include attenuation processing or enhancement processing.
• the attenuation processing is used to solve the problem of the user's auditory perception of the sound signal in the first signal being muffled, and the enhancement processing is used to solve the problem of that auditory perception being not full enough, so that the user hears the user's voice signal more naturally through the hearing assistive device.
• the hearing aid device processes the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal, which may include but is not limited to the following steps:
  • the user's voice signal in the first signal is attenuated according to the filtering gain to obtain a target signal.
• the hearing aid device can use the second signal to filter the first signal to obtain a filter gain; the filter gain reflects the signal-to-noise ratio between the surrounding environmental sound signal and the user's voice signal in the first signal.
• specifically, the ratio of the desired signal to the first signal is calculated to obtain the filter gain.
  • the first signal and the second signal can be input into the adaptive filter to obtain a desired signal output by the adaptive filter.
  • the adaptive filter may be, for example, a Kalman filter or a Wiener filter.
• Kalman filtering is an algorithm that uses a linear system state equation to optimally estimate the system state from the filter's input and output observation data, that is, to perform filtering.
  • the essence of Wiener filtering is to minimize the mean square value of the estimation error (defined as the difference between the expected response and the actual output of the filter).
• the filter gain is obtained from the desired signal, which is a signal that satisfies the attenuation expectation for the second signal contained in the first signal, thereby ensuring the accuracy of the filter gain. Based on this, the filter gain is used to attenuate the user's voice signal in the first signal.
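• A minimal sketch of this adaptive-filtering step, assuming a normalized LMS filter (the source names Kalman and Wiener filters as examples; NLMS, the parameter values, and the function name are illustrative assumptions): the bone conduction signal predicts the self-speech component of the reference signal, the prediction error serves as the desired signal, and the filter gain is the ratio of the desired signal to the first signal.

```python
import numpy as np

def filter_gain_nlms(first, second, mu=0.5, taps=64, eps=1e-8):
    """Adapt an FIR filter so the bone conduction signal (`second`)
    predicts the self-speech component of the reference signal
    (`first`); the prediction error is the desired (ambient-only)
    signal, and the gain is its per-sample ratio to the first signal."""
    w = np.zeros(taps)
    desired = np.zeros(len(first))
    for n in range(taps, len(first)):
        x = second[n - taps:n][::-1]                # latest reference samples
        error = first[n] - float(w @ x)             # desired-signal sample
        w += mu * error * x / (float(x @ x) + eps)  # normalized LMS update
        desired[n] = error
    gain = np.abs(desired) / (np.abs(first) + eps)  # ratio desired / first
    return np.clip(gain, 0.0, 1.0), desired
```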
• the specific manner in which the hearing aid device uses the second signal to filter the first signal to obtain the filter gain may include the following steps: input the first signal and the second signal into a pre-trained signal adjustment model to obtain the filter gain output by the signal adjustment model.
  • the signal adjustment model is obtained by unsupervised training using the sample first signal and the sample second signal.
• the hearing aid device performs attenuation processing on the user's voice signal in the first signal according to the filter gain to obtain the target signal. Specifically, the hearing aid device applies the filter gain to the first signal to attenuate the user's voice signal in the first signal and obtain the target signal. For example, multiplying the first signal A by the gain G yields the target signal A*G, in which the second signal B contained in the first signal A is attenuated.
  • the hearing aid device uses the second signal to filter the first signal to obtain the filter gain, which may include the following steps:
• the hearing aid device uses the second signal to filter the first signal to obtain the original filter gain, either by using an adaptive filter or by using a pre-trained signal adjustment model.
• the hearing aid device can perform the two steps (obtaining at least one of the degree correction amount and the frequency band range, and filtering the first signal with the second signal to obtain the original filter gain) one after the other or at the same time.
  • the embodiment of the present application does not limit the execution order of these two steps.
  • the degree correction amount is used to adjust the degree of attenuation of the second signal in the first signal.
• the frequency band range is used to limit the attenuation processing to the part of the second signal that falls within the frequency band range in the first signal.
• the hearing aid device can perform at least one of the following steps: adjust the size of the original filter gain according to the degree correction amount to obtain the filter gain; adjust the frequency band in which the original filter gain takes effect according to the frequency band range to obtain the filter gain.
  • the way in which the hearing aid device adjusts the size of the original filter gain according to the degree correction amount may include: the hearing aid device calculates the sum or product of the degree correction amount and the original filter gain.
  • the method of calculating the sum value is applicable to the case where the degree correction amount is an increase or decrease.
• filter gain G = original filter gain G0 + degree correction amount Z.
• when Z is an increase, the sign of Z is positive ("+"); when Z is a decrease, the sign of Z is negative ("-").
  • the method of calculating the product is suitable for the case where the degree correction amount is a proportional coefficient.
• filter gain G = original filter gain G0 * degree correction amount Z.
  • Z can be, for example, 0.7, 1, 80%, and so on.
  • the specific degree correction amount can be set according to application requirements, and this application does not limit this.
• the hearing aid device can, when calculating the original filter gain, compute the ratio between the desired signal C and the part of the first signal A belonging to the frequency band range, thereby directly obtaining the filter gain. It can be understood that in this case, the hearing aid device first obtains the frequency band range and then uses the second signal and the frequency band range to filter the first signal to obtain the filter gain.
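• The following sketch combines both adjustments under the formulas above; the parameter names, the choice of unity gain outside the band, and the example values are illustrative assumptions. The degree correction is applied as a sum or a product, and bins outside the frequency band range keep a gain of 1 so that only in-band self-speech is attenuated.

```python
import numpy as np
from typing import Optional, Tuple

def adjust_filter_gain(g0: np.ndarray, freqs: np.ndarray,
                       correction: Optional[float] = None,
                       additive: bool = False,
                       band: Optional[Tuple[float, float]] = None) -> np.ndarray:
    """Apply the degree correction amount Z as a sum or a product
    (G = G0 + Z or G = G0 * Z), then keep the attenuating gain only
    inside the frequency band range, using unity gain (no attenuation)
    elsewhere."""
    g = g0.astype(float).copy()
    if correction is not None:
        g = g + correction if additive else g * correction
    if band is not None:
        low, high = band
        g[(freqs < low) | (freqs > high)] = 1.0  # no attenuation out of band
    return np.clip(g, 0.0, 1.0)

# Example: a flat original gain of 0.4 over the FFT bins of a 1024-point
# frame at 16 kHz, scaled by Z = 0.7 and active only in 100 Hz - 4 kHz
# (all values illustrative).
freqs = np.fft.rfftfreq(1024, d=1 / 16000)
g = adjust_filter_gain(np.full_like(freqs, 0.4), freqs,
                       correction=0.7, band=(100.0, 4000.0))
```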
  • the hearing aid device obtains at least one of the degree correction amount and the frequency band range, which may specifically include the following steps:
  • the target terminal is used to display a parameter adjustment interface
  • the parameter adjustment interface includes at least one of the following setting controls: an adjustment degree setting control and a frequency band range setting control;
  • the target terminal may be the terminal device 100 .
  • the manner in which the hearing aid device establishes a communication connection with the terminal device 100 can be referred to the description of the embodiment in FIG. 3 and will not be described again here.
  • the user can turn on the Bluetooth of the mobile phone and the headset for pairing, thereby establishing a communication connection between the mobile phone and the headset. Based on this, users can control the headset in the device management application of the mobile phone.
  • FIG. 8 is an exemplary schematic diagram of a parameter adjustment interface provided by an embodiment of the present application.
  • the user can click the headset management control in the device management application.
• the phone displays a user interface (UI), such as the parameter adjustment interface.
  • At least one of an adjustment degree setting control 801 and a frequency band range setting control 802 is arranged on the parameter adjustment interface.
  • the mobile phone detects operations on the adjustment degree setting control 801 and the frequency band range setting control 802 to obtain at least one of the degree correction amount and the frequency band range.
• the adjustment degree setting control 801 may include six rectangles of different heights, each of which indicates a correction amount; the larger the correction amount, the taller the rectangle. That is to say, the suppression of the user's voice signal in the first signal is controlled through the six-level strength of the rectangles in the mobile phone UI, and dragging from left to right increases the suppression strength.
  • the frequency band range setting control 802 includes a frequency band range icon (such as an optimization range bar) and a slider located on the frequency band range icon.
• the frequency band range icon is a rectangle labeled with the description "optimization range", and the endpoints of the rectangle are labeled "low" and "high" respectively. That is to say, the optimization range bar can be dragged left and right.
  • the parameter adjustment interface in the embodiment is used to set the attenuation intensity and attenuation frequency band range of the attenuation process.
  • the adjustment degree setting control 801 may be provided with control description information "attenuation information”.
  • the mobile phone detects the operations on the adjustment degree setting control and the frequency band range setting control respectively to obtain at least one of the degree correction amount and the frequency band range, which may include the following steps:
  • rectangles of different heights may indicate different amounts of correction.
  • the correction amount indicated by each rectangle can be pre-stored in the mobile phone, so that the mobile phone detects which rectangle the user clicks, and can determine the correction amount indicated by the rectangle as the degree correction amount.
  • the mobile phone can display the clicked rectangle as a specified color that is different from other rectangles in the plurality of rectangles. For example, referring to FIG. 8, when the mobile phone detects that the user clicks on the rectangle 8011, the rectangle is displayed as black. The black color is different from the color of other rectangles on the adjustment degree setting control 801, such as white.
  • sliders at different positions can correspond to different frequency band ranges.
• the mobile phone can pre-store the frequency band range corresponding to each position of the slider on the frequency band range setting control, so that when the mobile phone detects which position the slider is in, it can determine the frequency band range corresponding to that position as the frequency band range to be sent to the headset.
  • the mobile phone may send at least one of the degree correction amount and the frequency band range to the headset.
  • here, the reference microphone is the reference microphone 302 shown in FIG. 3, and the bone conduction microphone is the bone conduction sensor 303 shown in FIG. 3.
  • FIG. 9 is an exemplary schematic diagram of the headphone algorithm architecture provided by the embodiment of the present application. As shown in Figure 9, attenuation processing can be seen as signal processing by headphones in attenuation mode.
  • as shown in FIG. 9, the processing symbol of the reference signal collected by the reference microphone is "+" and the processing symbol of the bone conduction signal collected by the bone conduction microphone is "-". This means that, through adaptive filtering in the headset DSP (the processor 304 in FIG. 3), the headset can use the reference signal, which is the first signal, and the bone conduction signal, which is the second signal, to filter the user's voice signal in the first signal and obtain the original filter gain.
  • the headset can adjust the original filter gain through the headset DSP according to at least one of the degree correction amount and the frequency band range received from the mobile phone, and then process the user's voice signal in the first signal based on the adjustment result to obtain the target signal, that is, the signal after self-speech attenuation. Based on this, the ear speaker of the headset can play the target signal.
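  • The application does not specify the adaptive filtering algorithm or the exact form of the gain adjustment; the following is a minimal frame-wise sketch under stated assumptions (frequency-domain processing, the bone conduction spectrum standing in for the adapted self-speech estimate, and an exponent used to scale the gain by the degree correction amount):

    import numpy as np

    def attenuate_self_speech(first_sig, second_sig, degree=1.0,
                              band=(0, 8000), fs=48000, n_fft=256):
        X1 = np.fft.rfft(first_sig, n_fft)    # reference (first) signal spectrum
        X2 = np.fft.rfft(second_sig, n_fft)   # bone conduction (second) signal spectrum

        # Desired signal: the first signal with the self-speech component removed.
        # A real device would adapt this estimate; X2 stands in here.
        desired = X1 - X2

        # Original filter gain: ratio of the desired signal to the first signal.
        gain = np.abs(desired) / (np.abs(X1) + 1e-12)

        # Adjust the gain strength by the degree correction amount, and enable
        # it only inside the selected frequency band range.
        freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        adjusted = np.where(in_band, gain ** degree, 1.0)

        return np.fft.irfft(X1 * adjusted, n_fft)  # target signal (time domain)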
  • the user can set at least one of the attenuation degree of the above-mentioned attenuation processing and the frequency band range of the attenuated sound signal through the UI, thereby obtaining an attenuation effect that meets the user's needs, that is, a self-talk suppression effect, which can further improve user experience.
  • the hearing aid device processes the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal, which may include the following steps:
  • the hearing aid device uses the second signal to enhance the first signal to obtain a compensation signal;
  • the user's voice signal in the first signal is enhanced according to the compensation signal to obtain the target signal.
  • the compensation signal can be used to enhance the user's voice signal in the first signal to improve the fullness of the user's voice signal in the first signal, which can solve the problem that the user's voice signal in the target signal heard by the user through the ear speaker is not full enough.
  • the hearing aid device uses the second signal to enhance the first signal to obtain the compensation signal, which may include the following steps: determining a weighting coefficient of the second signal; obtaining an enhanced signal based on the weighting coefficient and the second signal; and adding the enhanced signal to the first signal to obtain the compensation signal.
  • the hearing aid device determines the weighting coefficient of the second signal in a manner that includes: the hearing aid device reads the weighting coefficient of the second signal prestored by itself. Or, in an optional implementation, the hearing aid device determines the weighting coefficient of the second signal through the following steps: the hearing aid device obtains the degree correction amount, and obtains the weighting coefficient of the second signal according to the degree correction amount. For example, the hearing aid device can read its own pre-stored degree correction amount, or receive the degree correction amount sent by a mobile phone connected to the hearing aid device, and then determine the degree correction amount as the weighting coefficient of the second signal, or determine the sum/product of the degree correction amount and the original weighting coefficient as the weighting coefficient.
  • the hearing aid device obtains the enhanced signal based on the weighting coefficient and the second signal. Specifically, the hearing aid device may calculate the product of the weighting coefficient and the second signal to obtain the enhanced signal. For example, if the second signal is B and the weighting coefficient is 50%, then the enhanced signal is B*50%.
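  • A minimal sketch of this weighted enhancement, directly following the B*50% example above (array-based sample processing is an assumption):

    import numpy as np

    def compensation_signal(first_sig, second_sig, weight=0.5):
        enhanced = weight * np.asarray(second_sig)  # e.g. enhanced signal = B * 50%
        return np.asarray(first_sig) + enhanced     # enhanced signal loaded onto the first signal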
  • the hearing aid device uses the second signal to enhance the first signal to obtain a compensation signal, which may include the following steps:
  • the part of the first signal belonging to the frequency band range is enhanced using the second signal to obtain the compensation signal.
  • FIG. 10 is another exemplary schematic diagram of the parameter adjustment interface provided by the embodiment of the present application.
  • the adjustment degree setting control may be six rectangles 1001 indicating the compensation intensity, and each rectangle 1001 indicates a degree correction amount, for example, may indicate a weighting coefficient.
  • the mobile phone detects the operation, thereby determining the compensation intensity indicated by the operated rectangle, and correspondingly obtains the weighting coefficient of the second signal.
  • the higher the rectangle, the greater the degree of enhancement. That is to say, as rectangles are selected from left to right, the weighting coefficient increases, which can increase the degree of enhancement of the user's voice signal in the first signal, that is, the compensation effect on the user's own speech.
  • for the optimization range bar, please refer to the relevant description of the embodiment in FIG. 8, which will not be repeated here.
  • the hearing aid device uses the signal compensation strength indicated by the degree correction amount and the second signal to perform weighted enhancement on the first signal to obtain the compensation signal. This may specifically include: determining the degree correction amount as the weighting coefficient of the second signal; obtaining an enhanced signal according to the weighting coefficient and the second signal; and loading the enhanced signal onto the first signal to obtain the compensation signal.
  • FIG. 11 is another exemplary schematic diagram of the headphone algorithm architecture provided by the embodiment of the present application.
  • enhancement processing can be seen as signal processing by headphones in enhancement mode.
  • the processing symbol of the reference signal collected by the reference microphone in FIG. 11 is "+" and the processing symbol of the bone conduction signal collected by the bone conduction microphone is also "+". This means that the earphone can use the bone conduction signal, which is the second signal, to enhance the reference signal, which is the first signal, through weighted superposition in the earphone DSP (the processor 304 in FIG. 3), so as to obtain the target signal, that is, the signal after self-speech enhancement.
  • the user can set at least one of the enhancement degree of the above-mentioned enhancement processing and the frequency band range of the enhanced sound signal through the UI, thereby obtaining an enhancement effect that meets the user's needs, that is, a self-speech enhancement effect, which can further improve user experience.
  • the target terminal is also used to display a mode selection interface;
  • the mode selection interface includes: a self-speech optimization mode selection control; accordingly, before collecting the first signal and the second signal, the hearing aid device may also perform the following steps:
  • when receiving the self-speech optimization mode enable signal sent by the target terminal, detecting whether the user is wearing the hearing aid device, where the self-speech optimization mode enable signal is sent by the target terminal upon detecting the enable operation on the self-speech optimization mode selection control; and, if the device is worn, detecting whether the user makes a sound.
  • FIG. 12a is an exemplary schematic diagram of the mode selection interface provided by the embodiment of the present application.
  • the self-speech optimization mode may include an attenuation mode and a compensation mode.
  • the user selects the "Your Voice" function in the device management application of the mobile phone to manage the headset.
  • the mobile phone can display at least one of the attenuation mode selection control and the compensation mode selection control.
  • the user clicks the attenuation mode selection control to enable the attenuation mode, and the target terminal sends a corresponding self-speech optimization mode enable signal, such as an attenuation mode enable signal.
  • the hearing aid device can execute the algorithm in the attenuation mode.
  • the activation of the compensation mode is similar to the attenuation mode, except that the enabled mode is different. Accordingly, as shown in Figure 11, the hearing aid device executes the algorithm in the enhancement mode.
  • the target terminal may display the parameter adjustment interface when detecting the user's enabling operation of the self-speech optimization mode selection control.
  • when the user selects the attenuation mode, the mobile phone displays the parameter adjustment interface shown in FIG. 8.
  • the parameter adjustment interface may include mode prompt information of "attenuation mode”.
  • when the user selects the compensation mode, the mobile phone displays the parameter adjustment interface shown in FIG. 10.
  • the parameter adjustment interface may include mode prompt information of "compensation mode”.
  • FIG. 12b is another exemplary schematic diagram of the mode selection interface provided by the embodiment of the present application.
  • the self-speech optimization mode selection control may not be divided into an attenuation mode selection control and a compensation mode selection control.
  • the mobile phone can display the parameter adjustment interface in the attenuation mode and the parameter adjustment interface in the compensation mode in one interface.
  • the user clicks the self-speech optimization selection control and the mobile phone detects this operation to enable the self-speech optimization mode, and then displays the parameter adjustment interface in FIG. 12b.
  • the rectangles in the attenuation control in FIG. 12b are the same as the rectangles in FIG. 8, and the rectangles in the compensation control in FIG. 12b are the same as the rectangles in FIG. 10.
  • the lowest rectangle in the optimization strength control in Figure 12b can represent that the optimization strength is 0, that is, no attenuation and no compensation.
  • the specific shapes of the above-mentioned controls are examples.
  • the shapes of the above-mentioned controls may be disk-shaped, etc., and the embodiments of the present application do not limit this.
  • different modes can be set as buttons, and when the user clicks on the button, the mode is turned on.
  • the above parameter adjustment interface may include a left ear adjustment interface and a right ear adjustment interface
  • the target terminal detects operations on the adjustment degree setting control and the frequency band range setting control respectively to obtain at least one of the degree correction amount and the frequency band range, which may include the following steps:
  • detecting operations on the setting controls in the left ear adjustment interface to obtain left ear correction data, where the left ear correction data includes at least one of a left ear degree correction amount and a left ear frequency band range; and detecting operations on the setting controls in the right ear adjustment interface to obtain right ear correction data, where the right ear correction data includes at least one of a right ear degree correction amount and a right ear frequency band range;
  • the hearing aid device receives at least one of the degree correction amount and the frequency band range sent by the target terminal, which may specifically include the following steps:
  • the hearing aid device can receive at least one of the left ear correction data and the right ear correction data sent by the target terminal (such as a mobile phone), and select, according to the ear identification carried by the left ear correction data and/or the right ear correction data, the correction data matching the ear on which the hearing aid device is worn.
  • in a scenario where the target terminal is, for example, a mobile phone, the left earphone and the right earphone can respectively establish communication connections with the mobile phone.
  • the mobile phone can perform at least one of the following steps: the mobile phone sends the left ear correction data to the left earphone through the communication connection with the left earphone; the mobile phone sends the right ear correction data to the right earphone through the communication connection with the right earphone.
  • the left earphone or the right earphone can directly use the received correction data for signal processing. There is no need to filter the received correction data based on the ear identification, which is more efficient and saves computing costs.
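  • An illustrative sketch of the ear-based routing described above (the packet fields and ear labels are assumptions; the application only requires that correction data carries an ear identification the earphone can match):

    def select_correction(own_ear: str, packets: list) -> dict:
        # Keep only the correction data whose ear ID matches this earphone.
        for packet in packets:
            if packet.get("ear_id") == own_ear:
                return packet
        return {}

    # Example: a left earphone keeps the left-ear packet and ignores the other.
    packets = [
        {"ear_id": "left",  "degree": 0.3, "band": (0, 8000)},
        {"ear_id": "right", "degree": 0.1, "band": (0, 4000)},
    ]
    left_params = select_correction("left", packets)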
  • FIG. 13 is another exemplary schematic diagram of the parameter adjustment interface provided by the embodiment of the present application.
  • the left ear adjustment interface can be the interface of the mobile phone in Figure 13 that displays the ear identification information "Left Ear”
  • the right ear adjustment interface can be the interface of the mobile phone in Figure 13 that displays the ear identification information "Right Ear".
  • It can be understood that both the left ear adjustment interface and the right ear adjustment interface are similar to the parameter adjustment interface shown in FIG. 12b. The difference is that different ear identification information is displayed in the two interfaces to guide the user to set signal processing parameters for the left ear and the right ear respectively.
  • the embodiment of Figure 13 controls the left and right earphones respectively through two UI interfaces, that is, one interface controls the earphones of one ear.
  • the control method is the same as when one interface controls two earphones. See Figures 8 and 10 and the control method described in Figures 12a to 12b. In this way, users can set different parameters for the left and right earphones to match ear differences or meet the needs of different applications, further improving the personalization of signal processing and thereby improving user experience.
  • the hearing aid device uses the second signal to enhance the first signal to obtain a compensation signal, which may include the following steps: the hearing aid device inputs the first signal and the second signal into a pre-trained signal enhancement model to obtain the compensation signal output by the signal enhancement model; wherein the signal enhancement model is obtained by unsupervised training using sample first signals and sample second signals.
  • the hearing aid device performs enhancement processing on the user's voice signal in the first signal according to the compensation signal to obtain the target signal. Specifically, this may include: using the available compensation signal belonging to the frequency band range in the compensation signal to update the signal to be enhanced in the first signal, where the signal to be enhanced belongs to the frequency band range. For example, the frequency band range is 0 to 8 kHz.
  • the signals in the frequency band above 8 kHz are replaced with unenhanced signals, and the available compensation signal from 0 to 8 kHz in the frequency-domain compensation signal (signal C) is retained in the weighted compensation processing, to obtain the frequency-domain target signal.
  • the frequency domain target signal is transformed into the time domain through the inverse Fourier transform, that is, the target signal is obtained.
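  • A minimal sketch of this band-limited compensation (assuming the first signal and the compensation signal are time-aligned frames of equal length, and a sampling rate high enough to contain content above 8 kHz):

    import numpy as np

    def band_limited_target(first_sig, comp_sig, fs=48000, band_hi=8000):
        n = len(first_sig)
        X1 = np.fft.rfft(first_sig)    # unenhanced first signal spectrum
        C = np.fft.rfft(comp_sig)      # frequency-domain compensation signal
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        # Retain the compensation signal from 0 to 8 kHz; above 8 kHz use the
        # unenhanced first signal, then transform back to the time domain.
        target_spec = np.where(freqs <= band_hi, C, X1)
        return np.fft.irfft(target_spec, n)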
  • After the hearing aid device obtains the target signal through the above embodiments, it can play the target signal through the ear speaker. In this way, the user's voice signal in the first signal heard by the user has been enhanced or attenuated and can sound more natural.
  • the ear speaker may be, for example, the speaker 301 shown in FIG. 3.
  • the hearing aid device collects the first signal and the second signal when detecting that the user is wearing the hearing aid device and the user makes a sound, which may include the following steps:
  • the third sensor is used to detect whether the user is in a quiet environment
  • the detection of whether the user is in a quiet environment can be implemented through a third sensor such as a reference microphone.
  • FIG. 14 is an exemplary schematic diagram of the detection information display interface provided by the embodiment of the present application. As shown in Figure 14, when the mobile phone detects the enabling operation of the personalized mode selection control, it can display the detection information display interface in the personalized optimization mode.
  • the detection information display interface may display at least one of progress information of the wearing detection, progress information of the quiet scene detection, and prompt information to guide the user to make a sound.
  • the progress information of the wearing detection is, for example: "1. Wearing detection in progress".
  • when the headset detects that the user is wearing the hearing aid device, it sends a first completion instruction to the mobile phone.
  • when the mobile phone receives the first completion instruction, the progress information displayed indicates that the detection has been completed, for example, "1. Wearing detection...100%" in Figure 14.
  • in addition, when the mobile phone receives the first completion instruction, it displays the progress information of the quiet scene detection, for example, "2. Quiet scene detection is in progress".
  • when the headset detects that the user is in a quiet environment, it sends a second completion instruction to the mobile phone.
  • when the mobile phone receives the second completion instruction, the progress information displayed indicates that the detection has been completed, for example, "2. Quiet scene detection is in progress... 100%" in Figure 14.
  • the second completion instruction can be regarded as an information display instruction.
  • the mobile phone can display prompt information to guide the user to make a sound, such as "3. Please read the following content "XXXX"" in Figure 14 .
  • the above-mentioned first and second completion instructions can both be regarded as the third completion instruction, so that when receiving the third completion instruction, the mobile phone can display the detection completion information, such as "2. Quiet scene detection is in progress... 100%".
  • the mobile phone can display at least one of the information shown in Figure 14, which can be specifically set according to application requirements, and the embodiments of the present application do not limit this.
  • through the embodiment of Figure 14, the user can intuitively understand the progress of the personalized setting of the headset. By displaying prompts that guide the user to make a sound, the efficiency of collecting the user's voice signal can be improved, thereby improving the efficiency of signal processing.
  • FIG. 15 is another exemplary structural diagram of an earphone provided by an embodiment of the present application.
  • the earphone 300 in the embodiment of FIG. 3 and FIG. 4 of the present application may also include an error microphone 304 .
  • the error microphone 304 is arranged inside the earphone and close to the ear canal.
  • the hearing aid device processes the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal, which may include the following steps: collecting a third signal at the user's ear canal; playing the first signal and the third signal in the user's ear; and collecting a fourth signal and a fifth signal;
  • the fourth signal includes: the signal after the first signal has been mapped through the ear canal;
  • the fifth signal includes: the signal after the third signal has been mapped through the ear canal;
  • the frequency response difference between the fourth signal and the fifth signal is determined, and the user's voice signal in the first signal is processed to obtain the target signal according to the first signal, the second signal and the frequency response difference, where the frequency response difference is used to indicate the degree of processing.
  • the earphone can collect the third signal at the user's ear canal through the error microphone 304, and the third signal is the signal at the user's ear canal.
  • the fourth signal may be, for example, the external signal collected by the reference microphone after being mapped through the ear canal, resulting in a sound signal D that approximates what the user hears without wearing headphones.
  • the fifth signal may be, for example, the sound signal E at the eardrum obtained after the signal collected by the error microphone is mapped through the ear canal.
  • the hearing aid device determines the frequency response difference between the fourth signal and the fifth signal, which may include the following steps: obtaining the frequency responses of the fourth signal and the fifth signal respectively; and calculating the difference between the frequency response of the fourth signal and the frequency response of the fifth signal to obtain the frequency response difference.
  • the hearing aid device processes the user's voice signal in the first signal according to the first signal, the second signal and the frequency response difference to obtain the target signal, which may include the following steps: determining, according to the frequency response difference, whether the type of processing is attenuation or enhancement;
  • when the type of processing is attenuation, the user's voice signal in the first signal is attenuated according to the frequency response difference to obtain the target signal; when the type of processing is enhancement, the user's voice signal in the first signal is enhanced according to the frequency response difference to obtain the target signal.
  • the headset performs the following algorithm steps through the headset DSP: by comparing the frequency response difference between signal B and signal A, the compensation amount or attenuation amount for the user's voice signal in the first signal, that is, the self-speech signal, can be obtained.
  • the earphone performs Fourier transform on the above-mentioned sound signal D and sound signal E respectively, so that the frequency response at each frequency point can be obtained; the frequency response of the sound signal D and the frequency response of the sound signal E are subtracted to obtain the above-mentioned frequency response difference.
  • the frequency response difference is, for example, a compensation amount (such as a weighting coefficient) or an attenuation amount (such as a filter gain), which can indicate the degree of processing.
  • the headset can send a third completion instruction to the mobile phone, so that the mobile phone can display information that the personalized coefficient has been generated, such as "Detection completed, personalized coefficient has been generated" in Figure 14.
  • taking the frequency response difference as sound signal E minus sound signal D: if the frequency response difference is positive, the headset can determine the type of processing to be enhancement; if the frequency response difference is negative, the headset can determine the type of processing to be attenuation.
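  • A sketch of this comparison under stated assumptions (log-magnitude spectra as the "frequency response" and the mean over frequency bins as the decision statistic; the application does not fix either choice):

    import numpy as np

    def processing_type(sig_d, sig_e, eps=1e-12):
        D = np.abs(np.fft.rfft(sig_d)) + eps
        E = np.abs(np.fft.rfft(sig_e)) + eps
        diff = 20 * np.log10(E) - 20 * np.log10(D)  # frequency response difference E - D (dB)
        kind = "enhancement" if float(np.mean(diff)) > 0 else "attenuation"
        return kind, diff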
  • FIG. 16 is another exemplary schematic diagram of the headphone algorithm architecture provided by an embodiment of the present application.
  • when the headset performs signal processing in the personalized mode, on the basis of FIG. 9 or FIG. 11, it also acquires in-ear signals, such as the above-mentioned sound signal D and sound signal E.
  • the headset can perform offline calculation using the in-ear signals to obtain the optimization coefficient, that is, the frequency response difference.
  • offline calculation means that the headset performs the processing shown in Figure 16 only once each time the personalized mode is turned on.
  • the frequency response difference is then used to achieve hearing enhancement: the signal processing provided by the embodiments of the present application achieves a more natural effect for the user's voice signal in the first signal heard by the user.
  • FIG. 17 is another exemplary flow chart of the signal processing method provided by the embodiment of the present application. As shown in Figure 17, the method may include the following steps:
  • S1701 to S1704 are similar to those in the embodiment of FIG. 14 and have the same functions. For details of the same parts, please refer to the description of the embodiment of FIG. 14 and will not be described again here. The difference is that the above-mentioned S1701 to S1704 are the steps performed by the headset when the user selects the personalized mode.
  • S1703 may specifically include: if the energy of the signal collected by the reference microphone of the earphone is less than a first preset value, the user is in a quiet environment.
  • S1704 may specifically include: if the energy of the signal collected by the bone conduction microphone is greater than a second preset value, the user is speaking; when the user speaks, the user's voice signal is detected.
  • here, the bone conduction microphone is the bone conduction sensor.
  • the energy of any of the above signals may be obtained by calculating the square integral of the amplitude of the signal, or the sum of the squared amplitudes of the signal in the frequency domain.
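  • A sketch of the detections in S1703 and S1704 (the preset threshold values are placeholders, not values from this application):

    import numpy as np

    def signal_energy(sig):
        spectrum = np.fft.rfft(sig)
        return float(np.sum(np.abs(spectrum) ** 2))  # sum of squared amplitudes

    def in_quiet_environment(ref_sig, first_preset=1e-3):
        return signal_energy(ref_sig) < first_preset    # S1703: reference mic energy below threshold

    def user_is_speaking(bone_sig, second_preset=1e-2):
        return signal_energy(bone_sig) > second_preset  # S1704: bone conduction energy above threshold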
  • After the earphone obtains the frequency response difference, it can compare the frequency response difference with a threshold. If the frequency response difference is less than the threshold, it means that after the first signal is mapped through the user's ear canal, the user's voice signal in the first signal heard by the user is similar to the user's auditory perception when not wearing headphones, and optimization does not need to be performed.
  • in this case, the headset can be regarded as having completed the optimization, that is, the hearing enhancement shown in Figure 16 has been completed, and the target signal can be played.
  • otherwise, the compensation amount or attenuation amount is obtained based on the frequency response difference.
  • if the headset determines that the frequency response difference is greater than the threshold, it means that after the first signal is mapped through the user's ear canal, the user's voice signal in the first signal heard by the user differs from the user's auditory perception when not wearing headphones, causing the auditory perception to be unnatural.
  • the difference can be optimized through step S1709. For details about obtaining the compensation amount or the attenuation amount based on the frequency response difference, please refer to the description of obtaining the compensation amount or the attenuation amount in the optional embodiment of FIG. 14 , which will not be described again here.
  • the above-mentioned S1709 is specifically equivalent to the hearing aid device processing the user's voice signal in the first signal according to the first signal, the second signal and the frequency response difference to obtain the target signal. Please refer to the above-mentioned related descriptions, which will not be repeated here.
  • S1705 may be executed after each target signal is played, so as to continuously optimize the user's sound signal while the user is wearing the earphones, that is, using the earphones. It can be understood that during the continuous optimization process, S1706 may be executed, or the embodiments of FIG. 9 and FIG. 11 may be executed, depending on the user's mode selection operation on the mobile phone.
  • in this way, signal processing results suitable for the ear canal structure of the user can be obtained, further improving the personalization of signal processing for different users and ensuring that the signal processing results are more suitable for the user.
  • FIG. 18 is another exemplary schematic diagram of the mode selection interface provided by an embodiment of the present application. As shown in Figure 18, the user can slide the enable button to the "ON" state to turn on the adaptive optimization mode. At this time, the mobile phone detects the enabling operation of the adaptive mode selection control. The following description combines FIG. 18 with FIG. 19.
  • FIG. 19 is another exemplary schematic diagram of the headphone algorithm architecture provided by an embodiment of the present application.
  • when receiving the adaptive mode enable signal sent by the mobile phone, the headset performs signal processing in the adaptive mode.
  • the signal processing in the adaptive mode is similar to the signal processing in the personalized mode in Figure 16.
  • the difference is that the optimization coefficient is calculated in real time.
  • real-time calculation means that the headset uses environment monitoring plus self-speech monitoring: when it detects that the user is in a quiet environment and makes a sound, it uses the in-ear signal, the reference signal and the bone conduction signal to calculate the optimization coefficient.
  • the optimization coefficient is the compensation amount or attenuation amount in the above embodiment. The same parts will not be described again here. Please refer to the description of the embodiment in Figure 16 for details.
  • the implementation of the embodiment in Figure 19 may be that the hearing aid device performs the step of detecting whether the user is wearing the hearing aid device through the first sensor after playing the target signal through the speaker, and then performs the step of calculating the optimization coefficient in real time. In this way, it can be ensured that the user is wearing the headset during optimization, avoiding ineffective signal processing.
  • in this way, every time the user wears the headset, the headset can dynamically adjust the optimization intensity of the user's voice signal in the first signal through the adaptive mode, which can avoid inconsistent optimization effects due to differences in wearing and does not require the user to adjust manually; through online correction, that is, real-time calculation of the compensation amount or attenuation amount, a sound signal optimization effect suitable for the current user is provided in real time.
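  • Tying the above together, one possible shape of the adaptive-mode loop, reusing the illustrative helpers sketched earlier (the worn-state handling and the coefficient update rule are assumptions):

    def adaptive_mode_step(ref_sig, bone_sig, inear_sig, state):
        if not state.get("worn", False):
            return state  # skip processing while the device is not worn
        if in_quiet_environment(ref_sig) and user_is_speaking(bone_sig):
            # Recompute the optimization coefficient (compensation or attenuation
            # amount) in real time from the in-ear, reference and bone signals.
            kind, diff = processing_type(ref_sig, inear_sig)
            state["optimization_coefficient"] = (kind, diff)
        return state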
  • the electronic device includes corresponding hardware and/or software modules that perform each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software driving the hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art can use different methods to implement the described functions in conjunction with the embodiments for each specific application, but such implementation should not be considered to be beyond the scope of the embodiments of the present application.
  • This embodiment can divide the electronic device into functional modules according to the above method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a logical function division. In actual implementation, there may be other division methods.
  • Figure 20 shows a schematic block diagram of a device 2000 according to an embodiment of the present application.
  • the device 2000 may include: a processor 2001 and a transceiver/transceiver pin 2002.
  • Optionally, the device 2000 may further include a memory 2003.
  • the components of the device 2000 may be coupled together through a bus 2004, which includes, in addition to a data bus, a power bus, a control bus and a status signal bus. However, for clarity of description, the various buses are all referred to as bus 2004 in the figure.
  • the memory 2003 may be used to store instructions in the foregoing method embodiments.
  • the processor 2001 can be used to execute instructions in the memory 2003, and control the receiving pin to receive signals, and control the transmitting pin to send signals.
  • the device 2000 may be the electronic device or a chip of the electronic device in the above method embodiment.
  • FIG. 21 shows a schematic block diagram of a hearing aid device 2100 according to an embodiment of the present application.
  • hearing aid device 2100 may include:
  • Signal acquisition module 2101 configured to collect a first signal and a second signal when it is detected that the user is wearing a hearing aid device and the user makes a sound, where the first signal includes the user's voice signal and the surrounding environmental sound signal, and the second signal Includes the user’s voice signal;
  • the signal processing module 2102 is used to process the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal;
  • the signal output module 2103 is used to play the target signal through the ear speaker.
  • FIG. 22 shows a schematic block diagram of a device control device 2200 according to the embodiment of the present application.
  • the device control device 2200 may include:
  • the communication module 2201 is used to establish a communication connection with the hearing aid device; wherein the hearing aid device is used to perform the signal processing method in any of the above implementations;
  • the interactive module 2202 is used to display the parameter adjustment interface.
  • the parameter adjustment interface includes at least one of the following setting controls: adjustment degree setting control and frequency band range setting control;
  • the detection module 2203 is used to respectively detect the operations on the adjustment degree setting control and the frequency band range setting control to obtain at least one of the degree correction amount and the frequency band range;
  • the control module 2204 is configured to send at least one of the degree correction amount and the frequency band range to the hearing aid device; wherein the degree correction amount and the frequency band range are used by the hearing aid device to process the user's voice signal in the first signal according to at least one of them to obtain the target signal.
  • This embodiment also provides a computer storage medium.
  • Computer instructions are stored in the computer storage medium.
  • when the computer instructions are run on an electronic device, the electronic device is caused to execute the above related method steps to implement the signal processing method and the device control method in the above embodiments.
  • This embodiment also provides a computer program product.
  • when the computer program product is run on a computer, it causes the computer to perform the above related steps to implement the signal processing method and the device control method in the above embodiments.
  • the embodiments of the present application also provide a device.
  • This device may be a chip, a component or a module.
  • the device may include a connected processor and a memory; where the memory is used to store computer execution instructions.
  • the processor can execute the computer execution instructions stored in the memory, so that the chip executes the signal processing method and the device control method in the above method embodiments.
  • the electronic equipment, computer storage media, computer program products or chips provided in this embodiment are all used to execute the corresponding methods provided above. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods provided above, which will not be repeated here.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division.
  • in actual implementation, there may be other division methods; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separate.
  • a component shown as a unit may be one physical unit or multiple physical units, that is, it may be located in one place, or it may be distributed to multiple different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • Integrated units may be stored in a readable storage medium if they are implemented in the form of software functional units and sold or used as independent products.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the existing technology, or all or part of the technical solutions, can be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions to cause a device (which may be a microcontroller, a chip, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of this application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Embodiments of the present application provide a signal processing method and device, and a device control method and device. The signal processing method is applied to a hearing aid device and includes: when it is detected that a user is wearing the hearing aid device and the user makes a sound, collecting a first signal and a second signal, where the first signal includes the user's self-speech and environmental sound, and the second signal includes the user's voice signal. In this way, using the first signal and the second signal, the user's voice signal in the first signal can be processed in a targeted manner to obtain a target signal, and the target signal is played through an ear speaker. This solution can avoid cancelling the environmental sound signal in the first signal, so that the user's own voice heard by the user sounds more natural while the user can still perceive environmental sounds.

Description

Signal processing method and device, and device control method and device
This application claims priority to the Chinese patent application No. 202210911626.2, entitled "信号处理方法及装置、设备控制方法及装置" ("Signal processing method and device, and device control method and device"), filed with the Chinese Patent Office on July 30, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of multimedia, and in particular to a signal processing method and device, and a device control method and device.
Background
With the development of technology, hearing aid devices such as headphones and hearing aids can satisfy users' needs for interacting with the real world: through the hearing aid device, a user can hear his or her own speaking voice, that is, self-speech, as well as external environmental sounds. In practical applications, the speaker of the hearing aid device is located at the user's ear, which makes the self-speech heard by the user insufficiently natural, for example muffled and loud.
In the related art, to make the self-speech heard by the user more natural, the original in-ear signal played in the ear by the speaker of the hearing aid device is usually collected, its phase and amplitude are adjusted, and the adjusted in-ear signal and the original in-ear signal are played simultaneously. In this way, the played adjusted in-ear signal can cancel the played original in-ear signal, achieving noise reduction and alleviating the muffled, loud self-speech.
However, this approach cancels not only the self-speech contained in the original in-ear signal but also the environmental sound contained in it, so the user cannot perceive external environmental sounds.
Summary
The present application provides a signal processing method and device, and a device control method and device, so that a hearing aid device uses a first signal and the user's voice signal to process the user's voice signal in the first signal in a targeted manner, avoiding cancellation of the environmental sound signal in the first signal and achieving both a more natural rendering of the user's own voice and the user's ability to perceive environmental sounds.
In a first aspect, an embodiment of the present application provides a signal processing method applied to a hearing aid device, the method including: when it is detected that a user is wearing the hearing aid device and the user makes a sound, collecting a first signal and a second signal, where the first signal includes the user's voice signal and the surrounding environmental sound signal, and the second signal includes the user's voice signal; processing the user's voice signal in the first signal according to the first signal and the second signal to obtain a target signal; and playing the target signal through an ear speaker.
In this embodiment of the application, the first signal collected by the hearing aid device includes the user's self-speech and environmental sound, and the second signal includes the user's voice signal. Using the first signal and the second signal, the hearing aid device can process the user's voice signal in the first signal in a targeted manner to obtain the target signal and play it through the ear speaker, thereby avoiding cancellation of the environmental sound signal in the first signal and achieving both a more natural rendering of the user's own voice and the user's ability to perceive environmental sounds.
According to the first aspect, processing the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal includes: filtering the first signal using the second signal to obtain a filter gain; and attenuating the user's voice signal in the first signal according to the filter gain to obtain the target signal.
In this embodiment, filtering the first signal with the second signal to obtain the filter gain ensures that the filter gain can be used to attenuate the user's voice signal in the first signal, so that the target signal is obtained through the attenuation processing. With the user's voice signal in the target signal attenuated, the muffled perception of the user's own voice in the played target signal is reduced and the auditory perception becomes more natural, while the user can still perceive environmental sounds.
According to the first aspect or any implementation thereof, filtering the first signal using the second signal to obtain the filter gain includes: filtering out the user's voice signal in the first signal using the second signal to obtain a desired signal; and calculating a ratio of the desired signal to the first signal to obtain the filter gain.
In this embodiment, the filter gain is obtained from the ratio of the desired signal to the first signal, where the desired signal is the signal that satisfies the expectation for attenuating the second signal within the first signal, which ensures the accuracy of the filter gain. On this basis, the attenuation processing performed with the filter gain can be more accurate.
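As an illustrative formalization (the notation below is assumed for exposition and does not appear in the original text), the filter gain described in the two preceding paragraphs can be written per frequency bin as

    G(f) = \frac{|\hat{S}(f)|}{|X_1(f)|}

where X_1(f) is the spectrum of the first signal and \hat{S}(f) is the desired signal obtained after the user's voice component, estimated from the second signal, has been filtered out of the first signal.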
According to the first aspect or any implementation thereof, filtering the first signal using the second signal to obtain the filter gain includes: filtering the first signal using the second signal to obtain an original filter gain; obtaining at least one of a degree correction amount and a frequency band range; adjusting the magnitude of the original filter gain according to the degree correction amount to obtain the filter gain; and/or adjusting the frequency band in which the original filter gain is enabled according to the frequency band range to obtain the filter gain.
In this embodiment, the magnitude of the filter gain is adjusted by the degree correction amount, and the adjusted gain in turn adjusts the degree to which the user's voice signal in the first signal is attenuated. Adjusting the enabled frequency band of the filter gain by the frequency band range likewise adjusts which band of the user's voice signal in the first signal is attenuated. The adjustments of this embodiment thus allow a more flexible, personalized signal processing effect rather than a fixed one.
According to the first aspect or any implementation thereof, processing the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal includes: enhancing the first signal using the second signal to obtain a compensation signal; and enhancing the user's voice signal in the first signal according to the compensation signal to obtain the target signal.
In this embodiment, enhancing the first signal with the second signal to obtain the compensation signal ensures that the compensation signal can be used to enhance the user's voice signal in the first signal, so that the target signal is obtained through the enhancement processing. With the user's voice signal in the target signal enhanced, the problem that the user's own voice in the played target signal is perceived as not full enough is reduced, and the auditory perception becomes more natural while environmental sounds remain perceivable.
According to the first aspect or any implementation thereof, enhancing the first signal using the second signal to obtain the compensation signal includes: determining a weighting coefficient of the second signal; obtaining an enhanced signal according to the weighting coefficient and the second signal; and loading the enhanced signal onto the first signal to obtain the compensation signal.
In this embodiment, obtaining the enhanced signal from the weighting coefficient and the second signal ensures that the enhanced signal is an enhanced version of the second signal, that is, of the user's voice signal; loading it onto the first signal then yields a compensation signal that can be used to enhance the user's voice signal in the first signal.
According to the first aspect or any implementation thereof, enhancing the first signal using the second signal to obtain the compensation signal includes: obtaining at least one of a degree correction amount and a frequency band range; enhancing the first signal using the signal compensation strength indicated by the degree correction amount and the second signal to obtain the compensation signal; and/or enhancing the part of the first signal belonging to the frequency band range using the second signal to obtain the compensation signal.
In this embodiment, the compensation strength of the compensation signal is adjusted by the degree correction amount, which in turn adjusts the degree of enhancement of the user's voice signal in the first signal. Adjusting the frequency band of the enhanced compensation signal by the frequency band range likewise adjusts which band of the user's voice signal in the first signal is enhanced. The adjustments of this embodiment thus guarantee a more flexible, personalized processing effect rather than a fixed one.
According to the first aspect or any implementation thereof, obtaining at least one of the degree correction amount and the frequency band range includes: establishing a communication connection with a target terminal, where the target terminal is used to display a parameter adjustment interface including at least one of the following setting controls: an adjustment degree setting control and a frequency band range setting control; and receiving at least one of the degree correction amount and the frequency band range sent by the target terminal, where the degree correction amount and the frequency band range are obtained by the target terminal by respectively detecting operations on the adjustment degree setting control and the frequency band range setting control.
In this embodiment, with the hearing aid device connected to the target terminal, the user can operate at least one of the adjustment degree setting control and the frequency band range setting control on the parameter adjustment interface displayed by the target terminal to set at least one of the attenuation degree of the attenuation processing and the frequency band range of the attenuated sound signal, thereby obtaining an attenuation effect, that is, a self-speech suppression effect, that meets the user's needs, enabling personalized signal processing and further improving user experience.
According to the first aspect or any implementation thereof, the parameter adjustment interface includes a left ear adjustment interface and a right ear adjustment interface; receiving at least one of the degree correction amount and the frequency band range sent by the target terminal includes: receiving at least one of left ear correction data and right ear correction data sent by the target terminal, where the left ear correction data is obtained by the target terminal by detecting operations on the setting controls in the left ear adjustment interface, and the right ear correction data is obtained by detecting operations on the setting controls in the right ear adjustment interface; the left ear correction data includes at least one of a left ear degree correction amount and a left ear frequency band range; the right ear correction data includes at least one of a right ear degree correction amount and a right ear frequency band range; and selecting, according to the ear identification carried by the left ear correction data and/or the right ear correction data, the correction data matching the ear on which the hearing aid device is worn.
In this embodiment, through the left ear and right ear adjustment interfaces of the target terminal, the user can set different parameters for the two earphones to match differences between the ears or the needs of different applications, further improving the accuracy of the personalized signal processing effect and thus the user experience.
According to the first aspect or any implementation thereof, the target terminal is further used to display a mode selection interface including a self-speech optimization mode selection control; before collecting the first signal and the second signal, the method further includes: when receiving a self-speech optimization mode enable signal sent by the target terminal, detecting whether the user is wearing the hearing aid device, where the self-speech optimization mode enable signal is sent by the target terminal upon detecting an enable operation on the self-speech optimization mode selection control; and, if the device is worn, detecting whether the user makes a sound.
In this embodiment, through the self-speech optimization mode selection control of the target terminal, the user can enable the self-speech optimization mode. When the user enables it, the hearing aid device detects whether it is being worn and, if so, whether the user makes a sound. The user can thus autonomously control whether the signal processing provided by the embodiments of the present application is performed, further improving user experience.
According to the first aspect or any implementation thereof, collecting the first signal and the second signal when it is detected that the user is wearing the hearing aid device and makes a sound includes: detecting, through a first sensor, whether the user is wearing the hearing aid device; if worn, detecting through a third sensor whether the user is in a quiet environment; if so, detecting through a second sensor whether the user makes a sound; and if so, collecting the first signal and the second signal.
In this embodiment, the first sensor detects whether the user is wearing the hearing aid device; when worn, the third sensor detects whether the user is in a quiet environment; and when the user is in a quiet environment, the second sensor detects whether the user makes a sound. This ensures that the steps of the embodiments are executed only while the device is worn, avoiding ineffective processing when it is not; collecting the user's voice signal in a quiet environment also reduces the environmental sound in that signal so that it better matches the user's own voice.
According to the first aspect or any implementation thereof, processing the user's voice signal in the first signal according to the first signal and the second signal to obtain the target signal includes: collecting a third signal at the user's ear canal; playing the first signal and the third signal in the user's ear; collecting a fourth signal and a fifth signal, where the fourth signal includes the signal after the first signal has been mapped through the ear canal and the fifth signal includes the signal after the third signal has been mapped through the ear canal; determining a frequency response difference between the fourth signal and the fifth signal; and processing the user's voice signal in the first signal according to the first signal, the second signal and the frequency response difference to obtain the target signal, where the frequency response difference is used to indicate the degree of processing.
In this embodiment, by playing the first signal and the third signal in the user's ear, the fourth signal (the first signal after ear canal mapping) and the fifth signal (the third signal after ear canal mapping) can be obtained. The frequency response difference between them can then be determined; this difference reflects the user's ear canal structure, so a signal processing result adapted to that structure can be obtained from the first signal, the second signal and the frequency response difference, further improving the personalization accuracy of the signal processing, ensuring that the result suits the user and improving user experience.
According to the first aspect or any implementation thereof, determining the frequency response difference between the fourth signal and the fifth signal includes: obtaining the frequency responses of the fourth signal and the fifth signal respectively; and calculating the difference between the frequency response of the fourth signal and that of the fifth signal to obtain the frequency response difference.
In this embodiment, the frequency response difference between the two signals is obtained by calculating the difference between their frequency responses.
According to the first aspect or any implementation thereof, processing the user's voice signal in the first signal according to the first signal, the second signal and the frequency response difference to obtain the target signal includes: determining, according to the frequency response difference, whether the type of processing is attenuation or enhancement; when the type is attenuation, attenuating the user's voice signal in the first signal according to the frequency response difference to obtain the target signal; and when the type is enhancement, enhancing the user's voice signal in the first signal according to the frequency response difference to obtain the target signal.
In this embodiment, the frequency response difference determines the type of processing applied to the user's voice signal in the first signal, so that processing suited to the signal processing need is performed and a more accurate result is achieved.
According to the first aspect or any implementation thereof, detecting through the first sensor whether the user is wearing the hearing aid device includes: establishing a communication connection with the target terminal, where the target terminal is used to display a mode selection interface including a personalized mode selection control; and when receiving a personalized mode enable signal sent by the target terminal, detecting through the first sensor whether the user is wearing the hearing aid device, where the personalized mode enable signal is sent by the target terminal upon detecting an enable operation on the personalized mode selection control.
In this embodiment, with the hearing aid device connected to the target terminal, the user can control whether the personalized mode is enabled through the personalized mode selection control of the terminal's mode selection interface. When the user enables the personalized mode, the hearing aid device detects whether it is being worn. The user can thus autonomously control whether the signal processing based on the user's voice signal collected in a quiet environment is performed, further improving user experience.
According to the first aspect or any implementation thereof, detecting through the second sensor whether the user makes a sound when in a quiet environment includes: if in a quiet environment, sending an information display instruction to the target terminal, the instruction instructing the target terminal to display prompt information that guides the user to make a sound; and detecting through the second sensor whether the user makes a sound.
In this embodiment, the hearing aid device sends the information display instruction to the target terminal when it detects that the user is in a quiet environment, so that the terminal can display prompt information guiding the user to make a sound, and the signal processing can proceed more efficiently.
According to the first aspect or any implementation thereof, before collecting the first signal and the second signal, the method further includes: when it is detected that the user is wearing the hearing aid device, sending a first completion instruction to the target terminal, instructing the terminal to output prompt information that the wearing detection is completed; when it is detected that the user is in a quiet environment, sending a second completion instruction to the target terminal, instructing the terminal to output information that the quiet environment detection is completed; and/or, when the target signal is obtained, sending a third completion instruction to the target terminal, instructing the terminal to output at least one of the following: information that the detection has been completed and information that the personalized parameters have been generated.
In this embodiment, by sending at least one of the first, second and third completion instructions to the target terminal, the hearing aid device can instruct the terminal to output at least one of the following: prompt information that the wearing detection is completed, information that the quiet environment detection is completed, information that the detection has been completed, and information that the personalized parameters have been generated. This helps the user intuitively follow the progress of the processing from the terminal's output, improving user experience.
According to the first aspect or any implementation thereof, after playing the target signal through the speaker, the method further includes: performing the step of detecting, through the first sensor, whether the user is wearing the hearing aid device.
In this embodiment, after playing the target signal, the hearing aid device again detects whether it is being worn; thus, while the user uses the device, the user's current voice signal can be collected in real time based on the quiet environment detection, and the first signal can be processed in real time. The signal processing effect can therefore be adjusted in real time during wearing, ensuring that it better matches the user's current voice state and yields a better result.
According to the first aspect or any implementation thereof, performing the step of detecting through the first sensor whether the user is wearing the hearing aid device includes: establishing a communication connection with the target terminal, where the target terminal is used to display a mode selection interface including an adaptive mode selection control; and when receiving an adaptive mode enable signal sent by the target terminal, performing the step of detecting through the first sensor whether the user is wearing the hearing aid device, where the adaptive mode enable signal is sent by the target terminal upon detecting an enable operation on the adaptive mode selection control.
In this embodiment, with the hearing aid device connected to the target terminal, the user can control whether the adaptive mode is enabled through the adaptive mode selection control of the terminal's mode selection interface. When the user enables the adaptive mode, the hearing aid device detects whether it is being worn. The user can thus autonomously control whether the real-time adjustment of the signal processing effect during wearing is performed, further improving user experience.
In a second aspect, an embodiment of the present application provides a device control method applied to a terminal, the method including: establishing a communication connection with a hearing aid device, where the hearing aid device is used to perform the signal processing method of the first aspect or any implementation thereof; displaying a parameter adjustment interface including at least one of the following setting controls: an adjustment degree setting control and a frequency band range setting control; respectively detecting operations on the adjustment degree setting control and the frequency band range setting control to obtain at least one of a degree correction amount and a frequency band range; and sending at least one of the degree correction amount and the frequency band range to the hearing aid device, where the hearing aid device processes the user's voice signal in the first signal according to at least one of them to obtain the target signal.
According to the second aspect, the adjustment degree setting control includes multiple geometric figures of the same shape but different sizes, each figure indicating a correction amount, with larger correction amounts corresponding to larger figures; the frequency band range setting control includes a frequency band range icon and a slider on it. Correspondingly, respectively detecting operations on the two controls to obtain at least one of the degree correction amount and the frequency band range includes: detecting a click operation on the multiple geometric figures of the adjustment degree setting control and determining the correction amount indicated by the clicked figure as the degree correction amount; and/or detecting a sliding operation on the slider of the frequency band range setting control and determining the frequency band range according to the slider's position.
For example, the geometric figures may be rectangles, circles, hexagons, and so on. Different sizes may mean different heights, widths or diameters; for example, the larger the correction amount, the taller the rectangle or the larger the circle's diameter.
According to the second aspect or any implementation thereof, the parameter adjustment interface includes a left ear adjustment interface and a right ear adjustment interface. Correspondingly, respectively detecting operations on the adjustment degree setting control and the frequency band range setting control to obtain at least one of the degree correction amount and the frequency band range includes: detecting operations on the setting controls in the left ear adjustment interface to obtain left ear correction data, which includes at least one of a left ear degree correction amount and a left ear frequency band range; and detecting operations on the setting controls in the right ear adjustment interface to obtain right ear correction data, which includes at least one of a right ear degree correction amount and a right ear frequency band range.
According to the second aspect or any implementation thereof, displaying the parameter adjustment interface includes: displaying a mode selection interface including a self-speech optimization mode selection control; and displaying the parameter adjustment interface when an enable operation on the self-speech optimization mode selection control is detected.
According to the second aspect or any implementation thereof, before displaying the parameter adjustment interface, the method further includes: displaying a mode selection interface including at least one of a personalized mode selection control and an adaptive mode selection control; when an enable operation on the personalized mode selection control is detected, sending a personalized mode enable signal to the hearing aid device, instructing it to detect through the first sensor whether the user is wearing the hearing aid device;
and/or, when an enable operation on the adaptive mode selection control is detected, sending an adaptive mode enable signal to the hearing aid device, instructing it to detect through the first sensor whether the user is wearing the hearing aid device.
According to the second aspect or any implementation thereof, after sending the personalized mode enable signal to the hearing aid device, the method further includes: receiving an information display instruction sent by the hearing aid device, which the hearing aid device sends upon detecting that the user is in a quiet environment; and displaying prompt information that guides the user to make a sound.
According to the second aspect or any implementation thereof, before displaying the prompt information, the method further includes: receiving a first completion instruction sent by the hearing aid device upon detecting that the user is wearing it; and receiving a second completion instruction sent by the hearing aid device upon detecting that the user is in a quiet environment. Correspondingly, after displaying the prompt information, the method further includes: receiving a third completion instruction sent by the hearing aid device when the target signal is obtained; and outputting at least one of the following: information that the detection has been completed and information that the personalized parameters have been generated.
The second aspect and each of its implementations correspond to the first aspect and its implementations respectively; for the technical effects of the second aspect and its implementations, reference may be made to those of the first aspect and its implementations, which are not repeated here.
In a third aspect, an embodiment of the present application provides a hearing aid device, including: a signal acquisition module, configured to collect a first signal and a second signal when it is detected that the user is wearing the hearing aid device and makes a sound, where the first signal includes the user's voice signal and the surrounding environmental sound signal, and the second signal includes the user's voice signal; a signal processing module, configured to process the user's voice signal in the first signal according to the first signal and the second signal to obtain a target signal; and a signal output module, configured to play the target signal through an ear speaker.
According to the third aspect, the signal processing module is further configured to: filter the first signal using the second signal to obtain a filter gain; and attenuate the user's voice signal in the first signal according to the filter gain to obtain the target signal.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: filter out the user's voice signal in the first signal using the second signal to obtain a desired signal; and calculate the ratio of the desired signal to the first signal to obtain the filter gain.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: filter the first signal using the second signal to obtain an original filter gain; obtain at least one of a degree correction amount and a frequency band range; adjust the magnitude of the original filter gain according to the degree correction amount to obtain the filter gain; and/or adjust the frequency band in which the original filter gain is enabled according to the frequency band range to obtain the filter gain.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: enhance the first signal using the second signal to obtain a compensation signal; and enhance the user's voice signal in the first signal according to the compensation signal to obtain the target signal.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: determine a weighting coefficient of the second signal; obtain an enhanced signal according to the weighting coefficient and the second signal; and load the enhanced signal onto the first signal to obtain the compensation signal.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: obtain at least one of a degree correction amount and a frequency band range; enhance the first signal using the signal compensation strength indicated by the degree correction amount and the second signal to obtain the compensation signal; and/or enhance the part of the first signal belonging to the frequency band range using the second signal to obtain the compensation signal.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: establish a communication connection with a target terminal, where the target terminal is used to display a parameter adjustment interface including at least one of an adjustment degree setting control and a frequency band range setting control; and receive at least one of the degree correction amount and the frequency band range sent by the target terminal, which the terminal obtains by respectively detecting operations on the two controls.
According to the third aspect or any implementation thereof, the parameter adjustment interface includes a left ear adjustment interface and a right ear adjustment interface; the signal processing module is further configured to: receive at least one of left ear correction data and right ear correction data sent by the target terminal, where the left ear correction data is obtained by the terminal by detecting operations on the setting controls in the left ear adjustment interface and the right ear correction data by detecting operations in the right ear adjustment interface; the left ear correction data includes at least one of a left ear degree correction amount and a left ear frequency band range, and the right ear correction data includes at least one of a right ear degree correction amount and a right ear frequency band range; and select, according to the ear identification carried by the left ear correction data and/or the right ear correction data, the correction data matching the ear on which the hearing aid device is worn.
According to the third aspect or any implementation thereof, the target terminal is further used to display a mode selection interface including a self-speech optimization mode selection control; the signal acquisition module is further configured to: when receiving a self-speech optimization mode enable signal sent by the target terminal, detect whether the user is wearing the hearing aid device, where the enable signal is sent by the terminal upon detecting an enable operation on the self-speech optimization mode selection control; and, if worn, detect whether the user makes a sound.
According to the third aspect or any implementation thereof, the signal acquisition module is further configured to: detect, through a first sensor, whether the user is wearing the hearing aid device; if worn, detect through a third sensor whether the user is in a quiet environment; if so, detect through a second sensor whether the user makes a sound; and if so, collect the first signal and the second signal.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: collect a third signal at the user's ear canal; play the first signal and the third signal in the user's ear; collect a fourth signal and a fifth signal, where the fourth signal includes the signal after the first signal has been mapped through the ear canal and the fifth signal includes the signal after the third signal has been mapped through the ear canal; determine the frequency response difference between the fourth and fifth signals; and process the user's voice signal in the first signal according to the first signal, the second signal and the frequency response difference to obtain the target signal, where the frequency response difference indicates the degree of processing.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: obtain the frequency responses of the fourth signal and the fifth signal respectively; and calculate the difference between the frequency response of the fourth signal and that of the fifth signal to obtain the frequency response difference.
According to the third aspect or any implementation thereof, the signal processing module is further configured to: determine, according to the frequency response difference, whether the type of processing is attenuation or enhancement; when the type is attenuation, attenuate the user's voice signal in the first signal according to the frequency response difference to obtain the target signal; and when the type is enhancement, enhance the user's voice signal in the first signal according to the frequency response difference to obtain the target signal.
According to the third aspect or any implementation thereof, the signal acquisition module is further configured to: establish a communication connection with the target terminal, where the target terminal is used to display a mode selection interface including a personalized mode selection control; and when receiving a personalized mode enable signal sent by the terminal, detect through the first sensor whether the user is wearing the hearing aid device, where the enable signal is sent upon detecting an enable operation on the personalized mode selection control.
According to the third aspect or any implementation thereof, the signal acquisition module is further configured to: if the user is in a quiet environment, send an information display instruction to the target terminal, instructing it to display prompt information that guides the user to make a sound; and detect through the second sensor whether the user makes a sound.
According to the third aspect or any implementation thereof, the device further includes an instruction sending module configured to: when it is detected that the user is wearing the hearing aid device, send a first completion instruction to the target terminal, instructing it to output prompt information that the wearing detection is completed; when it is detected that the user is in a quiet environment, send a second completion instruction, instructing it to output information that the quiet environment detection is completed; and/or, when the target signal is obtained, send a third completion instruction, instructing it to output at least one of the following: information that the detection has been completed and information that the personalized parameters have been generated.
According to the third aspect or any implementation thereof, the signal acquisition module is further configured to: after the signal output module plays the target signal through the speaker, perform the step of detecting through the first sensor whether the user is wearing the hearing aid device.
According to the third aspect or any implementation thereof, the signal acquisition module is further configured to: establish a communication connection with the target terminal, where the target terminal is used to display a mode selection interface including an adaptive mode selection control; and when receiving an adaptive mode enable signal sent by the terminal, perform the step of detecting through the first sensor whether the user is wearing the hearing aid device, where the enable signal is sent upon detecting an enable operation on the adaptive mode selection control.
The third aspect and each of its implementations correspond to the first aspect and its implementations respectively; for their technical effects, reference may be made to those of the first aspect and its implementations, which are not repeated here.
In a fourth aspect, an embodiment of the present application provides a device control apparatus applied to a terminal, including: a communication module, configured to establish a communication connection with a hearing aid device, where the hearing aid device is used to perform the signal processing method of the first aspect or any implementation thereof; an interaction module, configured to display a parameter adjustment interface including at least one of an adjustment degree setting control and a frequency band range setting control; a detection module, configured to respectively detect operations on the two controls to obtain at least one of a degree correction amount and a frequency band range; and a control module, configured to send at least one of the degree correction amount and the frequency band range to the hearing aid device, where the hearing aid device processes the user's voice signal in the first signal according to at least one of them to obtain the target signal.
According to the fourth aspect, the adjustment degree setting control includes multiple geometric figures of the same shape but different sizes, each figure indicating a correction amount, with larger correction amounts corresponding to larger figures; the frequency band range setting control includes a frequency band range icon and a slider on it. The detection module is further configured to: detect a click operation on the geometric figures and determine the correction amount indicated by the clicked figure as the degree correction amount; and/or detect a sliding operation on the slider and determine the frequency band range according to the slider's position.
According to the fourth aspect or any implementation thereof, the parameter adjustment interface includes a left ear adjustment interface and a right ear adjustment interface; the detection module is further configured to: detect operations on the setting controls in the left ear adjustment interface to obtain left ear correction data including at least one of a left ear degree correction amount and a left ear frequency band range; and detect operations on the setting controls in the right ear adjustment interface to obtain right ear correction data including at least one of a right ear degree correction amount and a right ear frequency band range.
According to the fourth aspect or any implementation thereof, the interaction module is further configured to: display a mode selection interface including a self-speech optimization mode selection control, and display the parameter adjustment interface when an enable operation on that control is detected.
According to the fourth aspect or any implementation thereof, the interaction module is further configured to: before displaying the parameter adjustment interface, display a mode selection interface including at least one of a personalized mode selection control and an adaptive mode selection control; when an enable operation on the personalized mode selection control is detected, send a personalized mode enable signal to the hearing aid device, instructing it to detect through the first sensor whether the user is wearing the device; and/or, when an enable operation on the adaptive mode selection control is detected, send an adaptive mode enable signal to the hearing aid device with the same instruction.
According to the fourth aspect or any implementation thereof, the interaction module is further configured to: after sending the personalized mode enable signal, receive an information display instruction sent by the hearing aid device upon detecting that the user is in a quiet environment, and display prompt information guiding the user to make a sound.
According to the fourth aspect or any implementation thereof, the interaction module is further configured to: before displaying the prompt information, receive a first completion instruction sent by the hearing aid device upon detecting that the user is wearing it, and a second completion instruction sent upon detecting that the user is in a quiet environment; and, after displaying the prompt information, receive a third completion instruction sent when the target signal is obtained, and output at least one of the following: information that the detection has been completed and information that the personalized parameters have been generated.
The fourth aspect and each of its implementations correspond to the second aspect and its implementations respectively; for their technical effects, reference may be made to those of the second aspect and its implementations, which are not repeated here.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a processor and a transceiver; and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of the first or second aspect or any possible implementation thereof.
The fifth aspect and each of its implementations correspond to the first and second aspects and their implementations; for their technical effects, reference may be made to those of the first and second aspects and their implementations, which are not repeated here.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium including a computer program which, when run on an electronic device, causes the electronic device to perform the method of the first or second aspect or any possible implementation thereof.
The sixth aspect and each of its implementations correspond to the first and second aspects and their implementations; for their technical effects, reference may be made to those of the first and second aspects and their implementations, which are not repeated here.
In a seventh aspect, an embodiment of the present application provides a chip including one or more interface circuits and one or more processors; the interface circuits are configured to receive signals from a memory of an electronic device and send the signals to the processors, the signals including computer instructions stored in the memory; when the processors execute the computer instructions, the electronic device performs the method of the first or second aspect or any possible implementation thereof.
The seventh aspect and each of its implementations correspond to the first and second aspects and their implementations; for their technical effects, reference may be made to those of the first and second aspects and their implementations, which are not repeated here.
Brief Description of the Drawings
FIG. 1 is an exemplary flow chart of a signal processing method;
FIG. 2 is an exemplary schematic diagram of a signal processing process;
FIG. 3 is an exemplary structural diagram of an earphone provided by an embodiment of the present application;
FIG. 4 is an exemplary structural diagram of a signal processing system provided by an embodiment of the present application;
FIG. 5 is an exemplary structural diagram of an electronic device 500 provided by an embodiment of the present application;
FIG. 6 is an exemplary software structure block diagram of the electronic device 500 provided by an embodiment of the present application;
FIG. 7 is an exemplary flow chart of a signal processing method provided by an embodiment of the present application;
FIG. 8 is an exemplary schematic diagram of a parameter adjustment interface provided by an embodiment of the present application;
FIG. 9 is an exemplary schematic diagram of a headphone algorithm architecture provided by an embodiment of the present application;
FIG. 10 is another exemplary schematic diagram of the parameter adjustment interface provided by an embodiment of the present application;
FIG. 11 is another exemplary schematic diagram of the headphone algorithm architecture provided by an embodiment of the present application;
FIG. 12a is an exemplary schematic diagram of a mode selection interface provided by an embodiment of the present application;
FIG. 12b is another exemplary schematic diagram of the mode selection interface provided by an embodiment of the present application;
FIG. 13 is another exemplary schematic diagram of the parameter adjustment interface provided by an embodiment of the present application;
FIG. 14 is an exemplary schematic diagram of a detection information display interface provided by an embodiment of the present application;
FIG. 15 is another exemplary structural diagram of the earphone provided by an embodiment of the present application;
FIG. 16 is another exemplary schematic diagram of the headphone algorithm architecture provided by an embodiment of the present application;
FIG. 17 is another exemplary flow chart of the signal processing method provided by an embodiment of the present application;
FIG. 18 is another exemplary schematic diagram of the mode selection interface provided by an embodiment of the present application;
FIG. 19 is another exemplary schematic diagram of the headphone algorithm architecture provided by an embodiment of the present application;
FIG. 20 is a schematic block diagram of a device 2000 according to an embodiment of the present application;
FIG. 21 is a schematic block diagram of a hearing aid device 2100 according to an embodiment of the present application;
FIG. 22 is a schematic block diagram of a device control apparatus 2200 according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone.
The terms "first" and "second" in the specification and claims of the embodiments of the present application are used to distinguish different objects rather than to describe a specific order of objects. For example, a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as more preferred or advantageous than other embodiments or designs; rather, such words are intended to present related concepts in a concrete manner.
In the description of the embodiments of the present application, unless otherwise stated, "multiple" means two or more; for example, multiple processing units means two or more processing units, and multiple systems means two or more systems.
When a user wears a hearing aid device, the device usually collects and plays the user's speaking voice to support interaction with the external environment, for example conversations with others. The user's own voice heard through the device then often sounds muffled and loud, making the sound quality unnatural and degrading user experience. In the related art, the signal collected by the hearing aid device can be phase-inverted, amplitude-adjusted and so on to alleviate the muffled, loud perception.
For example, FIG. 1 is an exemplary flow chart of a signal processing method; as shown in FIG. 1, the flow may include the following steps:
S001: a bone conduction sensor conducts a sound wave signal, the sensor being in contact with the ear canal or forming a vibration conduction path with the ear canal through a solid medium;
S002: the bone-conducted sound wave signal is processed, the processing including phase inversion;
S003: the processed bone-conducted sound wave signal and the corresponding sound signal are transmitted to the human ear.
In the embodiment of FIG. 1, the phase of the bone-conducted sound wave signal is adjusted through S001 and S002, and the adjusted signal and the corresponding sound signal are then played in the ear simultaneously through S003, where the corresponding sound signal is the user's speaking voice collected by the hearing aid device. The played adjusted signal can thus cancel the played sound signal, alleviating the muffled, loud perception of the user's own voice.
However, when the sound signal collected by the hearing aid device contains the environmental sound of the user's surroundings, the played adjusted signal is no longer the inverse of the played sound signal and cannot cancel it, so the muffled, loud perception of the user's own voice is not resolved.
For example, FIG. 2 is an exemplary schematic diagram of a signal processing process. As shown in FIG. 2, microphone M1 of the hearing aid device collects the external environment signal, and bone conduction sensor M3 collects the user's speaking voice; after processing through the negative feedback path SP, these are played into the user's ear A through speaker R, producing the user's in-ear signal. The in-ear signal includes part of the external environment signal, the signal played by speaker R and the user's speaking voice. Microphone M2 collects this in-ear signal at the user's ear canal EC and sends it to the negative feedback path SP for processing and playback. After the phase and amplitude of the in-ear signal are adjusted by SP, it is played simultaneously with the external environment signal collected by M1; since the adjusted in-ear signal and the played external environment signal contain the same components, the external environment signal can be cancelled.
However, the external environment signal contains both the user's speaking voice and the external environmental sound; the example of FIG. 2 not only suppresses the user's speaking voice but also cancels the external environmental sound, so the user cannot perceive external environmental sounds.
Embodiments of the present application provide a signal processing method to solve the above problems. In the embodiments, the first signal includes the user's self-speech and environmental sound, and the second signal includes the user's voice signal. Using the first signal and the second signal, the user's voice signal in the first signal can be processed in a targeted manner, avoiding the cancellation of the environmental sound signal that results when phase and amplitude cancellation is applied indiscriminately to both the user's voice signal and the environmental sound signal in the first signal. The embodiments can therefore process the user's voice signal without affecting the environmental sound signal, reducing the muffled, loud, insufficiently full perception when wearing the hearing aid device, and achieving both a more natural rendering of the user's own voice and the user's ability to perceive environmental sounds.
Before describing the technical solutions of the embodiments of the present application, their application scenarios are first described with reference to the accompanying drawings.
In the embodiments of the present application, the hearing aid device may include an earphone or a hearing aid, where the earphone or hearing aid has a digital augmented hearing function for performing the signal processing. Taking earphones as an example, a pair may include two sound-producing units worn at the ears: the one fitted to the left ear may be called the left earphone and the one fitted to the right ear the right earphone. In terms of wearing style, the earphones in the embodiments may be over-ear, ear-hook, neckband or earbud-type earphones, where earbud-type earphones may specifically include in-ear (ear-canal) or half-in-ear earphones. As an example, consider in-ear earphones: the left and right earphones adopt a similar structure, and either may adopt the structure described below, which includes an ear tip that fits into the ear canal, an ear body that sits against the ear, and a stem suspended from the body. The tip guides sound into the ear canal; the body contains the battery, speaker, sensors and other components; and the stem may carry microphones, physical buttons, etc. The stem may be cylindrical, cuboid, ellipsoidal and so on.
For example, FIG. 3 is an exemplary structural diagram of the earphone provided by an embodiment of the present application. As shown in FIG. 3, an earphone 300 is worn on the user's ear. The earphone 300 may include: a speaker 301, a reference microphone 302, a bone conduction sensor 303 and a processor 304. The reference microphone 302 is arranged on the outside of the earphone and, while the user wears it, collects the sound signal outside the earphone, which may include the user's speaking voice and environmental sound. The reference microphone 302 may be an analog or digital microphone. When the earphone is worn, the speaker 301 is located between the ear canal and the reference microphone 302 and plays the processed microphone-collected sound; in one case, the speaker may also play music. The reference microphone 302, close to the outer structure of the ear, may be arranged at the upper part of the stem, with an earphone opening near it that lets external environmental sound pass through to the microphone. The bone conduction sensor 303 is arranged inside the earphone at a position against the ear canal; that is, it is attached to the ear canal to collect the user's speaking voice conducted through the human body. The processor 304 controls the collection and playback of signals by the earphone and processes the signals through processing algorithms.
It should be understood that the earphone 300 includes a left earphone and a right earphone, which may simultaneously implement the same or different signal processing functions. When they implement the same signal processing function simultaneously, the auditory perception of the user's left ear wearing the left earphone and right ear wearing the right earphone may be the same.
FIG. 4 is an exemplary structural diagram of a signal processing system provided by an embodiment of the present application. As shown in FIG. 4, in some examples an embodiment of the present application provides a signal processing system including a terminal device 100 and an earphone 300. The terminal device 100 is communicatively connected to the earphone 300; the connection may be wireless or wired. A wireless connection may, for example, be established between the terminal device 100 and the earphone 300 through Bluetooth, wireless fidelity (Wi-Fi), infrared (IR) or ultra-wideband technology.
In the embodiments of the present application, the terminal device 100 is a device with a display interface function. The terminal device 100 may, for example, be a mobile phone, a monitor, a tablet computer, a vehicle-mounted device, a smart TV or another electronic device with a display interface, or a smart display wearable product such as a smart watch or smart band. The embodiments of the present application place no particular limitation on the specific form of the terminal device 100.
It should be understood that in the embodiments of the present application, the terminal device 100 may interact with the earphone 300 under manual operation, or may interact with the earphone 300 in smart scenarios.
图5为本申请实施例提供的电子设备500的一个示例性的结构图,如图5所示,电子设备500可以为图4所示信号处理系统包括的终端设备和耳机中的任意一个。
应该理解的是,图5所示电子设备500仅是一个范例,并且电子设备500可以具有比图中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图5中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备500可以包括:处理器510,外部存储器接口520,内部存储器521,通用串行总线(universal serial bus,USB)接口530,充电管理模块540,电源管理模块541,电池542,天线1,天线2,移动通信模块550,无线通信模块560,音频模块570,扬声器570A,受话器570B,麦克风570C,耳机接口570D,传感器模块580,按键590,马达591,指示器592,摄像头593,显示屏594,以及用户标识模块(subscriber identification module,SIM)卡接口595等。其中传感器模块580可以包括压力传感器580A,陀螺仪传感器580B,气压传感器580C,磁传感器580D,加速度传感器580E,距离传感器580F,接近光传感器580G,指纹传感器580H,温度传感器580J,触摸传感器580K,环境光传感器580L,骨传导传感器580M等。
处理器510可以包括一个或多个处理单元,例如:处理器510可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备500的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器510中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器510中的存储器为高速缓冲存储器。该存储器可以保存处理器510刚用过或循环使用的指令或数据。如果处理器510需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器510的等待时间,因而提高了系统的效率。
在一些实施例中,处理器510可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial  bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器510可以包含多组I2C总线。处理器510可以通过不同的I2C总线接口分别耦合触摸传感器580K,充电器,闪光灯,摄像头593等。例如:处理器510可以通过I2C接口耦合触摸传感器580K,使处理器510与触摸传感器580K通过I2C总线接口通信,实现电子设备500的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器510可以包含多组I2S总线。处理器510可以通过I2S总线与音频模块570耦合,实现处理器510与音频模块570之间的通信。在一些实施例中,音频模块570可以通过I2S接口向无线通信模块560传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块570与无线通信模块560可以通过PCM总线接口耦合。在一些实施例中,音频模块570也可以通过PCM接口向无线通信模块560传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器510与无线通信模块560。例如:处理器510通过UART接口与无线通信模块560中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块570可以通过UART接口向无线通信模块560传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器510与显示屏594,摄像头593等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器510和摄像头593通过CSI接口通信,实现电子设备500的拍摄功能。处理器510和显示屏594通过DSI接口通信,实现电子设备500的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器510与摄像头593,显示屏594,无线通信模块560,音频模块570,传感器模块580等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口530是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口530可以用于连接充电器为电子设备500充电,也可以用于电子设备500与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
应该理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备500的结构限定。在本申请另一些实施例中,电子设备500也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块540用于从充电器接收充电输入。其中，充电器可以是无线充电器，也可以是有线充电器。在一些有线充电的实施例中，充电管理模块540可以通过USB接口530接收有线充电器的充电输入。在一些无线充电的实施例中，充电管理模块540可以通过电子设备500的无线充电线圈接收无线充电输入。充电管理模块540为电池542充电的同时，还可以通过电源管理模块541为电子设备供电。
电源管理模块541用于连接电池542,充电管理模块540与处理器510。电源管理模块541接收电池542和/或充电管理模块540的输入,为处理器510,内部存储器521,外部存储器,显示屏594,摄像头593,和无线通信模块560等供电。电源管理模块541还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块541也可以设置于处理器510中。在另一些实施例中,电源管理模块541和充电管理模块540也可以设置于同一个器件中。
电子设备500的无线通信功能可以通过天线1,天线2,移动通信模块550,无线通信模块560,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备500中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块550可以提供应用在电子设备500上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块550可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块550可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块550还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块550的至少部分功能模块可以被设置于处理器510中。在一些实施例中,移动通信模块550的至少部分功能模块可以与处理器510的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器570A,受话器570B等)输出声音信号,或通过显示屏594显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器510,与移动通信模块550或其他功能模块设置在同一个器件中。
无线通信模块560可以提供应用在电子设备500上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块560可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块560经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器510。无线通信模块560还可以从处理器510接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中，电子设备500的天线1和移动通信模块550耦合，天线2和无线通信模块560耦合，使得电子设备500可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM)，通用分组无线服务(general packet radio service,GPRS)，码分多址接入(code division multiple access,CDMA)，宽带码分多址(wideband code division multiple access,WCDMA)，时分码分多址(time-division code division multiple access,TD-SCDMA)，长期演进(long term evolution,LTE)，BT，GNSS，WLAN，NFC，FM，和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS)，全球导航卫星系统(global navigation satellite system,GLONASS)，北斗卫星导航系统(beidou navigation satellite system,BDS)，准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备500通过GPU,显示屏594,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏594和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器510可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏594用于显示图像，视频等。显示屏594包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD)，有机发光二极管(organic light-emitting diode,OLED)，有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)，柔性发光二极管(flex light-emitting diode,FLED)，Miniled，MicroLed，Micro-oLed，量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中，电子设备500可以包括1个或N个显示屏594，N为大于1的正整数。
电子设备500可以通过ISP,摄像头593,视频编解码器,GPU,显示屏594以及应用处理器等实现拍摄功能。
ISP用于处理摄像头593反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头593中。
摄像头593用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备500可以包括1个或N个摄像头593,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备500在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备500可以支持一种或多种视频编解码器。这样,电子设备500可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备500的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口520可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备500的存储能力。外部存储卡通过外部存储器接口520与处理器510通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器521可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器510通过运行存储在内部存储器521的指令,从而执行电子设备500的各种功能应用以及数据处理。内部存储器521可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备500使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器521可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备500可以通过音频模块570,扬声器570A,受话器570B,麦克风570C,耳机接口570D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块570用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块570还可以用于对音频信号编码和解码。在一些实施例中,音频模块570可以设置于处理器510中,或将音频模块570的部分功能模块设置于处理器510中。
扬声器570A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备500可以通过扬声器570A收听音乐,或收听免提通话。
受话器570B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备500接听电话或语音信息时,可以通过将受话器570B靠近人耳接听语音。
麦克风570C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风570C发声,将声音信号输入到麦克风570C。电子设备500可以设置至少一个麦克风570C。在另一些实施例中,电子设备500可以设置两个麦克风570C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备500还可以设置三个,四个或更多麦克风570C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口570D用于连接有线耳机。耳机接口570D可以是USB接口530,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器580A用于感受压力信号，可以将压力信号转换成电信号。在一些实施例中，压力传感器580A可以设置于显示屏594。压力传感器580A的种类很多，如电阻式压力传感器，电感式压力传感器，电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器580A，电极之间的电容改变。电子设备500根据电容的变化确定压力的强度。当有触摸操作作用于显示屏594，电子设备500根据压力传感器580A检测所述触摸操作强度。电子设备500也可以根据压力传感器580A的检测信号计算触摸的位置。在一些实施例中，作用于相同触摸位置，但不同触摸操作强度的触摸操作，可以对应不同的操作指令。例如：当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时，执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时，执行新建短消息的指令。
陀螺仪传感器580B可以用于确定电子设备500的运动姿态。在一些实施例中,可以通过陀螺仪传感器580B确定电子设备500围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器580B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器580B检测电子设备500抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备500的抖动,实现防抖。陀螺仪传感器580B还可以用于导航,体感游戏场景。
气压传感器580C用于测量气压。在一些实施例中,电子设备500通过气压传感器580C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器580D包括霍尔传感器。电子设备500可以利用磁传感器580D检测翻盖皮套的开合。在一些实施例中,当电子设备500是翻盖机时,电子设备500可以根据磁传感器580D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器580E可检测电子设备500在各个方向上(一般为三轴)加速度的大小。当电子设备500静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器580F，用于测量距离。电子设备500可以通过红外或激光测量距离。在一些实施例中，在拍摄场景中，电子设备500可以利用距离传感器580F测距以实现快速对焦。
接近光传感器580G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备500通过发光二极管向外发射红外光。电子设备500使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备500附近有物体。当检测到不充分的反射光时,电子设备500可以确定电子设备500附近没有物体。电子设备500可以利用接近光传感器580G检测用户手持电子设备500贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器580G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器580L用于感知环境光亮度。电子设备500可以根据感知的环境光亮度自适应调节显示屏594亮度。环境光传感器580L也可用于拍照时自动调节白平衡。环境光传感器580L还可以与接近光传感器580G配合,检测电子设备500是否在口袋里,以防误触。
指纹传感器580H用于采集指纹。电子设备500可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器580J用于检测温度。在一些实施例中,电子设备500利用温度传感器580J检测的温度,执行温度处理策略。例如,当温度传感器580J上报的温度超过阈值,电子设备500执行降低位于温度传感器580J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备500对电池542加热,以避免低温导致电子设备500异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备500对电池542的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器580K，也称“触控面板”。触摸传感器580K可以设置于显示屏594，由触摸传感器580K与显示屏594组成触摸屏，也称“触控屏”。触摸传感器580K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器，以确定触摸事件类型。可以通过显示屏594提供与触摸操作相关的视觉输出。在另一些实施例中，触摸传感器580K也可以设置于电子设备500的表面，与显示屏594所处的位置不同。
骨传导传感器580M可以获取振动信号。在一些实施例中,骨传导传感器580M可以获取人体声部振动骨块的振动信号。骨传导传感器580M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器580M也可以设置于耳机中,结合成骨传导耳机。音频模块570可以基于所述骨传导传感器580M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器580M获取的血压跳动信号解析心率信息,实现心率检测功能。
按键590包括开机键,音量键等。按键590可以是机械按键。也可以是触摸式按键。电子设备500可以接收按键输入,产生与电子设备500的用户设置以及功能控制有关的键信号输入。
马达591可以产生振动提示。马达591可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏594不同区域的触摸操作,马达591也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器592可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口595用于连接SIM卡。SIM卡可以通过插入SIM卡接口595,或从SIM卡接口595拔出,实现和电子设备500的接触和分离。电子设备500可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口595可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口595可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口595也可以兼容不同类型的SIM卡。SIM卡接口595也可以兼容外部存储卡。电子设备500通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备500采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备500中,不能和电子设备500分离。
电子设备500的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备500的软件结构。
图6为本申请实施例提供的电子设备500的一个示例性的软件结构框图。
电子设备500的分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图6所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图6所示,应用程序框架层可以包括窗口管理器,电话管理器,内容提供器,视图系统,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
电话管理器用于提供电子设备500的通信功能。例如通话状态的管理(包括接通,挂断等)。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件，例如显示文字的控件，显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成。例如，包括短信通知图标的显示界面，可以包括显示文字的视图以及显示图片的视图。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),二维图形引擎(例如:SGL),三维图形处理库(例如:OpenGL ES),媒体库(Media Libraries)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
2D图形引擎是2D绘图的绘图引擎。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,音频驱动,Wi-Fi驱动,传感器驱动,蓝牙驱动。
应该理解的是,图6示出的软件结构包含的部件,并不构成对电子设备500的具体限定。在本申请另一些实施例中,电子设备500可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。
本申请可以通过听力辅助装置采集包括用户的自说话语音和环境音的第一信号，以及包括用户的声音信号的第二信号，从而利用第一信号和第二信号有针对性地对第一信号中用户的声音信号进行处理，实现兼顾用户听到的该用户自身声音更加自然和用户能够感知环境音的效果。
图7为本申请实施例提供的一种信号处理方法的一个示例性的流程图。如图7所示,该信号处理方法应用于听力辅助装置,具体可以包括但不限于以下步骤:
S101,当检测到用户佩戴听力辅助装置且用户发出声音时,采集第一信号和第二信号,其中,第一信号包括用户的声音信号和周围的环境音信号,第二信号包括用户的声音信号。
当听力辅助装置检测到用户佩戴该听力辅助装置且用户发出声音时，可以采集第一信号和第二信号，以保证对第一信号和第二信号的成功采集以及信号处理的合理进行。示例性的，参见图3，听力辅助装置可以通过参考麦克风302采集第一信号，通过骨传导传感器303采集第二信号。其中，周围的环境音信号可以包括用户所处物理环境中，除用户自己说话的声音以外的声音信号。举例而言，周围的环境音信号可以包括如下信号至少之一：与用户当面交谈的人员的声音信号，用户所处物理环境中的音乐信号、谈话声和汽车鸣笛声等等。骨传导传感器303采集人体骨骼传导的声音信号，可以保证所采集的声音信号为佩戴听力辅助装置的用户自己说话的声音信号，也就是用户的自说话信号。
在一种可选的实施方式中，听力辅助装置可以通过第一传感器，检测用户是否佩戴听力辅助装置；若佩戴听力辅助装置，则通过第二传感器检测用户是否发出声音；若检测到用户发出声音，则采集第一信号和第二信号。其中，第一传感器可以包括压力传感器、温度传感器等等。第二传感器可以为骨传导传感器303。
S102,根据第一信号和第二信号,对第一信号中的用户的声音信号进行处理以得到目标信号。
听力辅助装置采集第一信号和第二信号后，可以根据第一信号和第二信号，对第一信号中的用户的声音信号进行处理以得到目标信号。其中，听力辅助装置对第一信号中的用户的声音信号进行处理的方式，可以包括衰减处理或者增强处理。衰减处理用于解决对第一信号中的用户的声音信号听觉感知闷的问题，增强处理用于解决对第一信号中的用户的声音信号听觉感知不饱满的问题，从而可以实现用户通过听力辅助装置听到的该用户的声音信号更加自然的效果。
对于衰减处理,在一种可选的实施方式中,听力辅助装置根据第一信号和第二信号,对第一信号中的用户的声音信号进行处理以得到目标信号,具体可以包括但不限于如下步骤:
利用第二信号对第一信号进行过滤以得到滤波增益;
根据滤波增益对第一信号中的用户的声音信号进行衰减处理以得到目标信号。
在对第一信号中的用户的声音信号进行处理时，可以将第一信号中用户的声音信号看作噪音信号。相应地，听力辅助装置可以利用第二信号对第一信号进行过滤以得到滤波增益，该滤波增益也即是第一信号中周围的环境音信号和用户的声音信号之间的信噪比。在一种可选的实施方式中，听力辅助装置利用第二信号对第一信号进行过滤以得到滤波增益的具体方式可以包括如下步骤：
利用第二信号过滤第一信号中的用户的声音信号以得到期望信号;
计算期望信号与第一信号的比值以得到滤波增益。
示例性的，可以将第一信号和第二信号输入自适应滤波器，得到自适应滤波器输出的期望信号。以第一信号为A和第二信号为B为例，自适应滤波器可以施加滤波器系数h至信号B得到h*B，并将h*B从信号A中减去；基于此，自适应滤波器自适应地预测和更新滤波器系数h，直至得到期望信号C，例如得到不含第二信号B的期望信号C=A-h*B。这样，计算期望信号C与第一信号A的比值即可得到滤波增益G：G=C/A。自适应滤波器例如可以是卡尔曼滤波器或者维纳滤波器等滤波器。卡尔曼滤波(Kalman filtering)是一种利用线性系统状态方程，通过滤波器的输入和输出观测数据，对滤波器的系统状态进行最优估计也就是滤波的算法。维纳滤波器(Wiener filter)的本质是使估计误差(定义为期望响应与滤波器实际输出之差)的均方值最小化。
本实施例通过期望信号获得滤波增益，期望信号为满足对第一信号中第二信号的衰减处理期望的信号，可以保证滤波增益的准确性。基于此，通过滤波增益进行衰减处理，可以使衰减结果符合预期。
在一种可选的实施方式中,听力辅助装置利用第二信号对第一信号进行过滤以得到滤波增益的具体方式可以包括如下步骤:将第一信号和第二信号输入预先训练得到的信号调整模型,得到该信号调整模型输出的滤波增益。其中,信号调整模型为利用样本第一信号和样本第二信号进行无监督训练得到。
在一种示例中,听力辅助装置根据滤波增益对第一信号中的用户的声音信号进行衰减处理以得到目标信号,具体可以包括:听力辅助装置将滤波增益施加在第一信号上,实现对第一信号中的用户的声音信号的衰减处理,得到目标信号。例如,将增益G与第一信号A相乘,即可得到对第一信号A中的第二信号B进行了衰减的目标信号A*G。
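为便于理解上述过滤与衰减流程，下面给出一段示意性的Python代码草图(仅为基于若干假设的示意，并非本申请实施例限定的实现：滤波器以归一化最小均方(NLMS)算法为例，阶数、步长等参数和函数名均为假设取值)：

import numpy as np

def nlms_attenuate(a, b, order=32, mu=0.5, eps=1e-8):
    # 示意：利用第二信号b(骨导信号)自适应过滤第一信号a(参考信号)中的自说话分量
    h = np.zeros(order)                      # 滤波器系数h，自适应预测和更新
    c = np.zeros_like(a)                     # 期望信号C，近似不含第二信号B的分量
    for n in range(order, len(a)):
        x = b[n - order:n][::-1]             # 最近order个第二信号采样
        y = h @ x                            # h*B：对第二信号分量的估计
        c[n] = a[n] - y                      # C = A - h*B
        h += mu * c[n] * x / (x @ x + eps)   # NLMS系数更新
    g = np.abs(c) / (np.abs(a) + eps)        # 滤波增益G=C/A(逐点幅度比，示意)
    return c, g, g * a                       # g*a即施加增益后的目标信号A*G

实际实现中，滤波增益通常按频带在频域计算并做平滑，上述逐采样点的时域计算仅用于说明“过滤得到期望信号C、按C/A得到滤波增益G、再将G施加于第一信号”的流程。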
在一种可选的实施方式中,听力辅助装置利用第二信号对第一信号进行过滤以得到滤波增益,具体可以包括如下步骤:
利用第二信号对第一信号进行过滤以得到原始滤波增益;
获取程度修正量和频带范围中的至少一种;
按照程度修正量调整原始滤波增益的大小,得到滤波增益;
和/或,按照频带范围调整原始滤波增益使能的频带,得到滤波增益。
可以理解的是,听力辅助装置利用第二信号对第一信号进行过滤以得到原始滤波增益的方式,可以是利用自适应滤波器或者利用预先训练得到的信号调整模型,具体可以参见上述相关描述,此处不再赘述。
需要说明的是,对于获取程度修正量和频带范围中的至少一种,和利用第二信号对第一信号进行过滤以得到原始滤波增益这两个步骤,听力辅助装置可以先后执行或者同时执行,本申请实施例对这两个步骤的执行顺序不作限制。
示例性的，程度修正量用于调整对第一信号中第二信号的衰减程度。频带范围用于限制对第一信号中属于该频带范围的第二信号进行衰减处理。听力辅助装置在获取程度修正量和频带范围中的至少一种后，可以执行如下步骤中的至少之一：听力辅助装置按照程度修正量调整原始滤波增益的大小，得到滤波增益；按照频带范围调整原始滤波增益使能的频带，得到滤波增益。举例而言，听力辅助装置按照程度修正量调整原始滤波增益的大小的方式，可以包括：听力辅助装置计算程度修正量和原始滤波增益的和值或者乘积。
可以理解的是,计算和值的方式适用于程度修正量为增加量或者减少量的情况。例如,滤波增益G=原始滤波增益G0+程度修正量Z,Z为增加量时Z的符号为正“+”,Z为减少量时Z的符号为负“-”。计算乘积的方式适用于程度修正量为比例系数的情况。例如,滤波增益G=原始滤波增益G0*程度修正量Z,Z例如可以是0.7、1、80%等等。具体的程度修正量可以按照应用需求设置,本申请对此不作限制。
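上述和值与乘积两种调整方式可以用如下最小示意代码表示(mode取值与函数名均为假设)：

def adjust_gain(g0, z, mode="add"):
    # 示意：按照程度修正量Z调整原始滤波增益G0
    if mode == "add":
        return g0 + z   # 和值方式：Z为增加量时符号为正，为减少量时符号为负
    return g0 * z       # 乘积方式：Z为比例系数，例如0.7、1、80%等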
举例而言,听力辅助装置按照频带范围调整原始滤波增益使能的频带,得到滤波增益的方式,具体可以包括:听力辅助装置从分别对应不同频带的多个原始滤波增益中,选取对应的频带属于频带范围的原始滤波增益,得到滤波增益。以原始滤波增益G0=期望信号C/第一信号A为例,期望信号C和第一信号A均包括多个不同频带的信号,从而得到分别对应不同频带的多个原始滤波增益G0。这样,听力辅助装置按照频带范围调整原始滤波增益使能的频带时,选取对应的频带属于频带范围的原始滤波增益即可。在一种可选的情况中,听力辅助装置可以在计算原始滤波增益时,计算属于频带范围的期望信号C和第一信号A间的比值,从而得到滤波增益。可以理解的是,在本情况中,听力辅助装置先获取频带范围,再利用第二信号和频带范围对第一信号进行过滤,以得到滤波增益。
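作为一个示意性草图(频带范围0~8kHz及其余参数均为假设取值)，按频带范围选取使能的原始滤波增益的过程可以表示为：

import numpy as np

def select_band_gains(g0, band_freqs, band_range=(0.0, 8000.0)):
    # 示意：从分别对应不同频带的多个原始滤波增益g0中，
    # 仅保留(使能)属于频带范围的增益，频带范围外的增益置1，即不作衰减
    lo, hi = band_range
    mask = (np.asarray(band_freqs) >= lo) & (np.asarray(band_freqs) <= hi)
    return np.where(mask, np.asarray(g0, dtype=float), 1.0)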
在一种可选的实施方式中,听力辅助装置获取程度修正量和频带范围中的至少一种,具体可以包括如下步骤:
建立与目标终端的通信连接;其中,目标终端用于展示参数调整界面,参数调整界面包括以下至少一种设置控件:调节程度设置控件和频带范围设置控件;
接收目标终端发送的程度修正量和频带范围中的至少一种;其中,程度修正量和频带范围为目标终端通过分别检测调节程度设置控件和频带范围设置控件上的操作获得。
参见图4，目标终端可以为终端设备100。听力辅助装置与终端设备100建立通信连接的方式可以参见图4实施例的描述，此处不再赘述。在一种示例中，用户可以开启手机和耳机的蓝牙进行配对，从而建立手机和耳机之间的通信连接。基于此，用户可以在手机的设备管理应用程序中，进行对耳机的控制。
以手机和耳机为例，图8为本申请实施例提供的参数调整界面的一个示例性的示意图。如图8所示，在手机与耳机建立了通信连接后，用户可以点击设备管理应用程序中的耳机管理控件，手机在检测到该点击操作时，展示UI(User Interface)例如参数调整界面。该参数调整界面上布置有调节程度设置控件801和频带范围设置控件802中的至少之一。此时，手机分别检测调节程度设置控件801和频带范围设置控件802上的操作以得到程度修正量和频带范围中的至少一种。在一种可选的实施方式中，仍参见图8，调节程度设置控件801可以包括高低不同的6个矩形，6个矩形中每个矩形指示一个修正量，修正量越大矩形越高。也就是说，第一信号中用户的声音信号的抑制由手机UI的矩形6档力度控制，从左往右拖动矩形为抑制力度增强。频带范围设置控件802包括频带范围图标(如优化范围栏)和位于频带范围图标上的滑块。例如，频带范围图标为一个矩形，设置有控件描述信息“优化范围”，矩形的端点分别设置有“低”和“高”的提示信息。也就是说，优化范围栏可左右拖动，从左到右拖动时，所抑制的第一信号中用户的声音信号的带宽范围变大。这样，用户可以根据该提示信息，对滑块进行符合用户对频带范围调节需求的滑动操作。本实施例中的参数调整界面用于对衰减处理的衰减力度和衰减频带范围进行设置，相应地，调节程度设置控件801上可以设置有控件描述信息“衰减信息”。
基于上述图8实施例的参数调整界面,手机分别检测调节程度设置控件和频带范围设置控件上的操作以得到程度修正量和频带范围中的至少一种,具体可以包括如下步骤:
检测对调节程度设置控件上多个矩形的点击操作;
将检测到点击操作的矩形指示的修正量,确定为程度修正量;
和/或,检测对频带范围设置控件上滑块的滑动操作;
根据滑块的滑动位置,确定频带范围。
参见图8,不同高度的矩形可以指示不同的修正量。相应地,手机中可以预先存储每个矩形所指示的修正量,从而手机检测到用户点击哪个矩形,即可将该矩形指示的修正量确定为程度修正量。在一种情况中,手机可以在检测到用户对矩形的点击操作时,将所点击矩形显示为区别于多个矩形中其他矩形的指定颜色。例如,参见图8,手机检测到用户点击矩形8011,则将该矩形显示为黑色,黑色与调节程度设置控件801上的其他矩形的颜色例如白色不同。
仍参见图8,不同位置的滑块可以对应有不同的频带范围。相应地,手机中可以预先存储滑块处于频带范围设置控件上不同位置时对应的频带范围,从而手机检测到滑块处于哪个位置,即可将该位置对应的频带范围确定为发送至耳机的频带范围。
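作为一个示意性草图(映射表中的位置与频带数值均为假设，并非本申请实施例限定的取值)，滑块位置到频带范围的映射可以实现为一个预存的查找表：

# 示意：手机侧预存的滑块位置到频带范围(单位Hz)的映射表，数值均为假设
SLIDER_TO_BAND = {
    0: (0, 2000),   # 滑块最左：所处理的带宽范围最小
    1: (0, 4000),
    2: (0, 6000),
    3: (0, 8000),   # 滑块最右：所处理的带宽范围最大
}

def band_range_from_slider(position):
    # 根据滑块的滑动位置，确定发送至耳机的频带范围
    return SLIDER_TO_BAND[position]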
手机在获得了程度修正量和频带范围中的至少一种后,可以发送程度修正量和频带范围中的至少一种至耳机。参见图8,参考麦即参考麦克风,骨导麦即骨传导传感器。示例性的,图9为本申请实施例提供的耳机算法架构的一个示例性的示意图。如图9所示,衰减处理可以看作耳机在衰减模式下进行信号处理,结合图9,图8中参考麦采集的参考信号的处理符号为“+”和骨导麦采集的骨导信号的处理符号为“-”,是指耳机可以通过耳机DSP(如图3中的处理器304)的自适应滤波,利用参考信号也就是第一信号和骨导信号也就是第二信号,对第一信号中的用户的声音信号进行过滤,以得到原始滤波增益。这样,耳机可以通过耳机DSP,按照从手机接收的程度修正量和频带范围中的至少一种,对原始滤波增益进行调整,进而基于调整结果对第一信号中的用户的声音信号进行处理以得到目标信号也就是自说话衰减后信号。基于此,耳机的耳部扬声器可以播放目标信号。
本申请实施例中,用户可以通过UI设置上述衰减处理的衰减程度和所衰减的声音信号的频带范围中的至少之一,从而得到符合用户需求的衰减效果也就是自说话抑制效果,可以进一步提高用户体验。
对于增强处理,在一种可选的实施方式中,听力辅助装置根据第一信号和第二信号对第一信号中的用户的声音信号进行处理以得到目标信号,具体可以包括如下步骤:
利用第二信号对第一信号进行增强以得到补偿信号;
根据补偿信号对第一信号中的用户的声音信号进行增强处理以得到目标信号。
听力辅助装置利用第二信号对第一信号进行增强，可以得到补偿信号，这样，补偿信号可以用于对第一信号中的用户的声音信号进行增强处理，以提高第一信号中用户的声音信号的饱满度，从而可以解决用户通过耳部扬声器听到的目标信号中用户的声音信号不够饱满的问题。在一种可选的实施方式中，听力辅助装置利用第二信号对第一信号进行增强以得到补偿信号，可以包括如下步骤：
确定第二信号的加权系数;
根据加权系数和第二信号获取增强信号;
将增强信号加载于第一信号以得到补偿信号。
示例性的,听力辅助装置确定第二信号的加权系数的方式,可以包括:听力辅助装置读取自身预存的第二信号的加权系数。或者,在一种可选的实施方式中,听力辅助装置确定第二信号的加权系数,可以包括如下步骤:听力辅助装置获取程度修正量;根据程度修正量,获取第二信号的加权系数。举例而言,听力辅助装置可以读取自身预存的程度修正量,或者接收与听力辅助装置通信连接的手机发送的程度修正量,进而将程度修正量确定为第二信号的加权系数,或者计算程度修正量与原始加权系数的和值/乘积。和值与乘积的具体应用情况与上述衰减处理中和值与乘积的应用情况类似,区别在于是对原始加权系数进行计算,对于相同部分此处不再赘述,可以参见上述衰减处理中对和值与乘积的应用情况的描述。
听力辅助装置根据加权系数和第二信号获取增强信号，具体可以是计算加权系数和第二信号的乘积，得到增强信号。例如，第二信号为B，加权系数为50%，则增强信号为B*50%。听力辅助装置将增强信号加载于第一信号以得到补偿信号，具体可以包括：听力辅助装置计算增强信号与第一信号之和，得到补偿信号。例如，第一信号为A，则补偿信号C=(A+B*50%)。需要说明的是，上述加权系数的具体取值仅为示例，可以按照应用需求设置，本申请对此不作限制。
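上述加权叠加可以用如下最小示意代码表示(加权系数50%仅为示例取值)：

def compensate(a, b, weight=0.5):
    # 示意：按加权系数对第二信号B加权得到增强信号B*weight，
    # 并加载于第一信号A，得到补偿信号C = A + B*weight
    return a + weight * b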
在一种可选的实施方式中,听力辅助装置利用第二信号对第一信号进行增强以得到补偿信号,可以包括如下步骤:
获取程度修正量和频带范围中的至少一种;
利用程度修正量指示的信号补偿强度和第二信号,对第一信号进行增强,得到补偿信号;
和/或,利用第二信号对属于频带范围的第一信号进行增强,得到补偿信号。
听力辅助装置获取程度修正量和频带范围中的至少一种的具体方式,与上述衰减处理中获取程度修正量和频带范围中的至少一种的方式类似,区别在于本实施例针对增强处理获取。基于此,在通过手机获取的场景中,可以对手机的参数调整界面中的调节程度设置控件进行适应性调整。对于相同内容此处不再赘述,详见上述图8实施例的描述。示例性的,图10为本申请实施例提供的参数调整界面的另一个示例性的示意图。如图10所示,在增强处理的场景中,调节程度设置控件可以为指示补偿力度的6个矩形1001,每个矩形1001指示一个程度修正量,例如,可以指示一个加权系数。当用户在参数调整界面上进行拖动或者点击矩形的操作时,手机检测到该操作,从而可以确定所操作的矩形指示的补偿力度,相应得到第二信号的加权系数。矩形越高增强程度越大,也就是说,矩形从左往右拖动,加权系数增大,从而可以提高对第一信号中用户的声音信号的增强程度,也即是增强对用户自说话的补偿效果。关于优化范围栏,可以参见图8实施例的相关描述,此处不再赘述。
可以理解的是，当图10实施例中的程度修正量指示的信号补偿强度为第二信号的加权系数时，听力辅助装置利用程度修正量指示的信号补偿强度和第二信号，对第一信号进行增强，得到补偿信号具体可以包括：将程度修正量确定为第二信号的加权系数，根据该加权系数和第二信号获取增强信号；将增强信号加载于第一信号以得到补偿信号。
示例性的，图11为本申请实施例提供的耳机算法架构的另一个示例性的示意图。如图11所示，增强处理可以看作耳机在增强模式下进行信号处理，结合图11，图10中参考麦采集的参考信号的处理符号为“+”和骨导麦采集的骨导信号的处理符号为“+”，是指耳机可以通过耳机DSP(如图3中的处理器304)的加权叠加，利用骨导信号也就是第二信号对参考信号也就是第一信号进行增强处理，以得到目标信号也就是自说话增强后信号。在一种示例中，该增强处理可以包括：对第一信号和第二信号分别进行傅里叶变换，得到第一信号和第二信号中每个频点的频率响应，从而根据频响也就是频率响应对第一信号和第二信号进行加权。例如，加权系数均为1时，C1=A+B。需要说明的是，做傅里叶变换，可得到每个频点的频响，也就是频率响应。其中，频点(Frequency)，指具体的绝对频率值，一般为调制信号的中心频率。
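下面给出频域逐频点加权叠加的一个示意性草图(采用实数FFT，weights为各频点的加权系数，均为假设取值；weights全为1时即对应文中C1=A+B的例子)：

import numpy as np

def enhance_freq(a, b, weights=1.0):
    # 示意：对第一信号A和第二信号B分别做傅里叶变换，得到每个频点的频率响应，
    # 在频域按频点加权叠加后再逆变换回时域
    fa = np.fft.rfft(a)      # 第一信号各频点频响
    fb = np.fft.rfft(b)      # 第二信号各频点频响
    c1 = fa + weights * fb   # 逐频点加权叠加
    return np.fft.irfft(c1, n=len(a))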
本申请实施例中,用户可以通过UI设置上述增强处理的增强程度和所增强的声音信号的频带范围中的至少之一,从而得到符合用户需求的增强效果也就是自说话增强效果,可以进一步提高用户体验。
在一种可选的实施方式中,目标终端还用于展示模式选择界面;模式选择界面包括:自说话优化模式选择控件;相应地,听力辅助装置在采集第一信号和第二信号之前,还可以执行如下步骤:
当接收到目标终端发送的自说话优化模式启用信号时,检测用户是否佩戴听力辅助装置;其中,自说话优化模式启用信号为目标终端通过检测自说话优化模式选择控件上的启用操作时发送;
若佩戴,则检测用户是否发出声音。
示例性的，图12a为本申请实施例提供的模式选择界面的一个示例性的示意图。如图12a所示，自说话优化模式可以包括衰减模式和补偿模式。用户在手机的设备管理应用程序中选择“你的声音”功能以对耳机进行管理。此时，手机可以展示衰减模式选择控件和补偿模式选择控件中的至少一个。举例而言，用户点击衰减模式选择控件即可实现对衰减模式的启用操作，目标终端相应发送自说话优化模式(例如衰减模式)的启用信号。此时，参见图9，听力辅助装置即可执行衰减模式下的算法。补偿模式的启用与衰减模式类似，区别在于启用的模式不同，相应地，如图11所示，听力辅助装置执行增强模式下的算法。
在一种可选的实施方式中,目标终端在展示模式选择界面后,可以在检测到用户对自说话优化模式选择控件的启用操作时,展示参数调整界面。
示例性的,参见图12a,用户选择衰减模式时,手机展示图8所示的参数调整界面,该参数调整界面中可以包含“衰减模式”的模式提示信息。类似的,用户选择补偿模式时,手机展示图10所示的参数调整界面,该参数调整界面中可以包含“补偿模式”的模式提示信息。
示例性的，图12b为本申请实施例提供的模式选择界面的另一个示例性的示意图。如图12b所示，自说话优化模式选择控件可以不区分为衰减模式选择控件和补偿模式选择控件。相应地，手机可以将衰减模式下的参数调整界面和补偿模式下的参数调整界面集中在一个界面中展示。这样，用户点击自说话优化选择控件，手机检测到该操作即可启用自说话优化模式，进而展示图12b中的参数调整界面。可以理解的是，图12b中的衰减控件中的矩形与图8中的矩形相同，图12b中的补偿控件中的矩形与图10中的矩形相同，具体可以参见相关实施例中的描述，此处不再赘述。可以理解的是，图12b的优化力度控件中最低的矩形可以代表优化力度为0，即不衰减也不补偿。
需要说明的是,上述各控件的具体形状为示例,上述各控件的形状可以是圆盘形等,本申请实施例对此不作限制。在一种情况中,可以将不同模式设置为按钮形式,用户点击该按钮,即表示开启该模式。
在一种可选的实施方式中,上述参数调整界面可以包括左耳调整界面和右耳调整界面;
相应地,耳机分别检测调节程度设置控件和频带范围设置控件上的操作以得到程度修正量和频带范围中的至少一种,具体可以包括如下步骤:
检测左耳调整界面中设置控件上的操作以得到左耳修正数据,其中,左耳修正数据包括左耳程度修正量和左耳频带范围中的至少一种;
检测右耳调整界面中设置控件上的操作以得到右耳修正数据,其中,右耳修正数据包括右耳程度修正量和右耳频带范围中的至少一种;
相应地,听力辅助装置接收目标终端发送的程度修正量和频带范围中的至少一种,具体可以包括如下步骤:
听力辅助装置可以接收目标终端(如手机)发送的左耳修正数据和右耳修正数据中的至少一种;根据左耳修正数据和/或右耳修正数据携带的耳部标识,选择与听力辅助装置所处耳部相同的修正数据。
在一种可选的实施方式中，左耳机和右耳机可以分别与手机建立通信连接，相应地，手机可以执行以下步骤中至少之一：手机通过与左耳机间的通信连接发送左耳修正数据至左耳机；手机通过与右耳机间的通信连接发送右耳修正数据至右耳机。此时，左耳机和右耳机中任一个可以直接利用接收到的修正数据进行信号处理，无需根据耳部标识对所接收的修正数据进行筛选，更加高效且节约计算成本。
示例性的,图13为本申请实施例提供的参数调整界面的另一个示例性的示意图。如图13所示,左耳调整界面可以为图13手机的界面中展示有耳部标识信息“左耳”的界面,右耳调整界面可以为图13手机的界面展示有耳部标识信息“右耳”的界面。可以理解的是,左耳调整界面和右耳调整界面均与图12b所示的参数调整界面类似,区别在于展示有不同的耳部标识信息以引导用户在不同的界面中,分别针对左耳和右耳进行信号处理参数的设置。也就是说,图13实施例通过两个UI界面分别控制左右耳耳机,即一个界面控制一个耳朵的耳机,控制方式与一个界面控制两个耳机时的控制方式相同,可以参见图8、图10和图12a至图12b描述的控制方式。这样,用户可以为左右耳的两个耳机设置不同的参数,匹配耳朵差异性或配合不同应用的需要,进一步提高信号处理的个性化,从而提高用户体验。
在一种可选的实施方式中,听力辅助装置利用第二信号对第一信号进行增强以得到补偿信号,可以包括如下步骤:听力辅助装置将第一信号和第二信号输入预先训练得到的信号增强模型,得到该信号增强模型输出的补偿信号;其中,信号增强模型为利用样本第一信号和样本第二信号进行无监督训练得到。
在一些示例中，听力辅助装置根据补偿信号对第一信号中的用户的声音信号进行增强处理以得到目标信号，具体可以包括：利用补偿信号中属于频带范围的可用补偿信号，更新第一信号中的待增强信号，其中，待增强信号属于该频带范围。举例而言，频带范围为0至8KHz。通过傅里叶变换将补偿信号C和第一信号A分别变换到频域，得到频域C信号和频域A信号；确定频域A信号中频带大于8KHz的无增强信号，将频域C信号中频带大于8KHz的信号替换为无增强信号，将频域C信号中0~8KHz的可用补偿信号保留，也就是维持加权补偿处理，从而得到频域目标信号。在此基础上，通过傅里叶逆变换将频域目标信号变换到时域，即得到目标信号。
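上述按频带范围保留可用补偿信号的处理可以用如下示意代码表示(采样率与8kHz边界均为示例取值)：

import numpy as np

def band_limited_target(a, c, fs, band_hi=8000.0):
    # 示意：将第一信号A与补偿信号C变换到频域，
    # 频带高于band_hi的频点替换为A的无增强信号，
    # 0~band_hi内保留C的可用补偿信号，再逆变换得到目标信号
    fa = np.fft.rfft(a)
    fc = np.fft.rfft(c)
    freqs = np.fft.rfftfreq(len(a), d=1.0 / fs)
    fc[freqs > band_hi] = fa[freqs > band_hi]   # 频带范围外替换为无增强信号
    return np.fft.irfft(fc, n=len(a))           # 傅里叶逆变换得到时域目标信号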
S103,通过耳部扬声器播放目标信号。
听力辅助装置在通过上述各实施例的方式得到目标信号后，可以通过耳部扬声器播放目标信号。这样，用户听到的第一信号中用户的声音信号经过了增强或者衰减处理，听感可以更加自然。其中，耳部扬声器例如可以是图3所示的扬声器301。
在一种可选的实施方式中,听力辅助装置在检测到用户佩戴听力辅助装置且用户发出声音时,采集第一信号和第二信号,具体可以包括如下步骤:
通过第一传感器,检测用户是否佩戴听力辅助装置;
若佩戴,则通过第三传感器检测用户是否处于安静环境;
若处于,则通过第二传感器检测用户是否发出声音;
若是,则采集第一信号和第二信号。
对于佩戴和用户发出声音的检测可以参见图7实施例的相关描述,此处不再赘述。对于用户是否处于安静环境的检测,可以通过第三传感器例如参考麦克风实现。
在一种可选的实施方式中，如图12a或者图12b所示，手机展示模式选择界面后，在检测到对个性化模式选择控件“个性化优化模式”的启用操作时，可以发送个性化模式启用信号至耳机，进而耳机在接收到目标终端发送的个性化模式启用信号时，通过第一传感器，检测用户是否佩戴听力辅助装置。示例性的，图14为本申请实施例提供的检测信息展示界面的一个示例性的示意图。如图14所示，手机可以在检测到对个性化模式选择控件的启用操作时，展示个性化优化模式下的检测信息展示界面。该检测信息展示界面中可以显示佩戴检测的进度信息，安静场景检测的进度信息和引导用户发出声音的提示信息中的至少一种。例如，佩戴检测的进度信息为：“1、佩戴检测中。。。”。耳机在检测到用户佩戴听力辅助装置时，发送第一完成指令至手机。手机在接收到第一完成指令时，显示的进度信息为检测已完成的信息，例如是图14中的“1、佩戴检测中。。。100%”。
仍参见图14，手机在接收到第一完成指令时，显示安静场景检测的进度信息，例如，“2、安静场景检测中。。。”。耳机在检测到用户处于安静环境时，发送第二完成指令至手机。手机在接收到第二完成指令时，显示的进度信息为检测已完成的信息，例如是图14中的“2、安静场景检测中。。。100%”。该第二完成指令可以看作是信息展示指令，手机在接收到第二完成指令时可以展示用于引导用户发出声音的提示信息，例如图14中的“3、请朗读以下内容“XXXX””。可以理解的是，上述第一完成指令和第二完成指令，均可以看作第三完成指令，从而手机可以在接收到第三完成指令时，显示检测已完成的信息，例如是“2、安静场景检测中。。。100%”。
需要说明的是，手机可以展示图14所示各信息中的至少一种，具体可以根据应用需求设置，本申请实施例对此不作限制。通过图14实施例，用户可以直观了解对耳机的个性化设置进度。通过引导用户发出声音的提示信息，可以提高采集到用户的声音信号的效率，从而提高信号处理的效率。
结合上述图14实施例，图15为本申请实施例提供的耳机的另一个示例性的结构图。如图15所示，本申请图3及图4实施例中的耳机300，还可以包括误差麦克风305。误差麦克风305布置在耳机内部且贴近耳道。这样，在一种可选的实施方式中，听力辅助装置根据第一信号和第二信号，对第一信号中的用户的声音信号进行处理以得到目标信号，具体可以包括如下步骤：
在用户的耳道处采集第三信号;
在用户的耳内播放第一信号和第三信号;
采集第四信号和第五信号;其中,第四信号包括:第一信号经过耳道映射后的信号;第五信号包括:第三信号经过耳道映射后的信号;
确定第四信号和第五信号间的频响差异;
根据第一信号,第二信号和频响差异,对第一信号中的用户的声音信号进行处理以得到目标信号,其中,频响差异用于指示进行处理的程度。
示例性的，如图15所示，耳机可以通过误差麦即误差麦克风305，在用户的耳道处采集第三信号，第三信号即用户耳道处的信号。第四信号例如可以是参考麦采集的外界信号经过耳道映射后，得到的近似不戴耳机时用户听到的该用户的声音信号D。第五信号例如可以是误差麦采集的信号经过耳道映射后，得到的耳朵鼓膜处的声音信号E。在一种可选的实施方式中，听力辅助装置确定第四信号和第五信号间的频响差异，具体可以包括如下步骤：
分别获取第四信号和第五信号的频率响应;
计算第四信号的频率响应与第五信号的频率响应间的差异值,得到频响差异。
在一种可选的实施方式中,听力辅助装置根据第一信号,第二信号和频响差异,对第一信号中的用户的声音信号进行处理以得到目标信号,具体可以包括如下步骤:
根据频响差异,确定处理的类型为衰减或者增强;
当处理的类型为衰减时,根据频响差异对第一信号中的用户的声音信号进行衰减处理以得到目标信号;
当处理的类型为增强时,根据频响差异对第一信号中的用户的声音信号进行增强处理以得到目标信号。
示例性的，结合图14实施例，耳机通过耳机DSP执行如下算法步骤：比较声音信号D和声音信号E的频响差异，可以得到对第一信号中用户的声音信号也就是自说话信号的补偿量或者衰减量。例如，耳机对上述声音信号D和声音信号E分别做傅里叶变换，可得到每个频点的频率响应；将声音信号D的频率响应和声音信号E的频率响应相减即为上述频响差异。该频响差异例如为补偿量(如加权系数)或者衰减量(如滤波增益)，可以指示进行处理的程度。耳机可以在获得上述补偿量或者衰减量后，发送第三完成指令至手机，从而手机可以显示个性化系数已生成的信息，例如是图14中的“检测完毕，个性化系数已生成”。
可以理解的是，耳机可以根据频响差异的正负去确定是补偿还是衰减。例如，当声音信号D-声音信号E=频响差异时：频响差异为正，则耳机可以确定处理的类型为衰减；频响差异为负，则耳机可以确定处理的类型为增强。当声音信号E-声音信号D=频响差异时：频响差异为正，则耳机可以确定处理的类型为增强；频响差异为负，则耳机可以确定处理的类型为衰减。
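计算频响差异并按其正负确定处理类型的过程，可以用如下示意代码表示(采用“声音信号D-声音信号E”的约定，判决方式与函数名均为示意)：

import numpy as np

def freq_diff_decision(d, e):
    # 示意：对声音信号D和E分别做傅里叶变换得到各频点频响，相减得到频响差异
    diff = np.abs(np.fft.rfft(d)) - np.abs(np.fft.rfft(e))
    # 按差异的正负确定处理类型：D-E为正时衰减，为负时增强
    kind = "attenuate" if float(np.mean(diff)) > 0 else "enhance"
    return diff, kind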
针对上述结合误差麦进行信号处理的实施例，示例性的，图16为本申请实施例提供的耳机算法架构的另一个示例性的示意图。如图16所示，结合图14，耳机在个性化模式下进行信号处理时，在图11或者图9的基础上还获取耳内信号，例如是上述声音信号D和声音信号E。耳机利用耳内信号可以进行离线计算，从而得到优化系数，也就是频响差异。其中，离线计算是指耳机只在每次开启个性化模式时，执行图16所示的处理，也就是在获取频响差异后，在用户结束使用耳机前，利用该频响差异实现听觉增强：通过本申请实施例提供的信号处理，实现用户听到的第一信号中用户的声音信号更加自然的效果。
结合上述图14至图16,示例性的,图17为本申请实施例提供的信号处理方法的另一个示例性的流程图。如图17所示,该方法可以包括如下步骤:
S1701,个性化自说话优化模式开启;
S1702,检测到用户佩戴耳机;
S1703,检测到用户处于安静环境;
S1704,检测到用户的声音信号。
上述S1701至S1704与图14实施例中作用相同的内容类似，相同部分具体可以参见图14实施例的描述，此处不再赘述。区别在于，上述S1701至S1704为用户选择个性化模式时，耳机执行的步骤。其中，S1703具体可以包括：若耳机的参考麦克风采集到的信号的能量小于第一预设值，则判定用户处于安静环境。S1704具体可以包括：若骨导麦采集信号的能量大于第二预设值，则判定用户在说话。用户说话也就是检测到用户的声音信号。骨导麦即骨传导传感器。示例性的，任一信号的能量可以包括：该信号在频率域的幅度平方积分，或该信号在频率域的幅度平方求和。
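上述基于信号能量的安静环境检测与自说话检测可以用如下示意代码表示(第一预设值、第二预设值均为假设取值)：

import numpy as np

def signal_energy(x):
    # 示意：按文中描述，取信号在频率域的幅度平方求和作为能量
    return float(np.sum(np.abs(np.fft.rfft(x)) ** 2))

def detect_state(ref, bone, quiet_th=1e-3, speech_th=1e-2):
    # 示意：参考麦信号能量小于第一预设值则判定为安静环境；
    # 骨导麦信号能量大于第二预设值则判定为用户说话
    return signal_energy(ref) < quiet_th, signal_energy(bone) > speech_th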
S1705,第一信号,第二信号和第三信号的采集;
S1706,根据耳道映射后的信号,获取频响差异。
上述S1705至S1706具体可以参见图14可选实施例中关于第三信号采集以及频响差异获取的相关描述,此处不再赘述。
S1707,频响差异小于阈值时,完成优化。
耳机在获得频响差异后,可以比较频响差异与阈值的大小。若频响差异小于阈值,表明第一信号经过用户的耳道映射后,用户听到的第一信号中用户的声音信号与用户不戴耳机时的听觉感知近似,可以不进行优化。相应地,耳机可以确定完成优化,也就是完成图16所示的听觉增强,可以播放目标信号。
S1708,频响差异大于阈值时,基于频响差异,获取补偿量或者衰减量。
若耳机确定频响差异大于阈值,表明第一信号经过用户的耳道映射后,用户听到的第一信号中用户的声音信号与用户不戴耳机时的听觉感知存在导致该听觉感知不自然的差异,可以通过步骤S1709进行优化。其中,基于频响差异,获取补偿量或者衰减量具体可以参见图14可选实施例中获取补偿量或者衰减量的描述,此处不再赘述。
S1709,骨导麦信号自适应滤波或者加权叠加。
上述S1709具体相当于听力辅助装置根据第一信号,第二信号和频响差异,对第一信号中的用户的声音信号进行处理以得到目标信号,可以参见上述已有相关描述,此处不再赘述。
在一种情况中,在用户结束耳机的使用前,可以在每一次播放目标信号后执行S1705,以在用户佩戴耳机也就是使用耳机的过程中持续进行用户的声音信号的优化。可以理解的是,该持续优化的过程中,可以执行S1706,或者,可以执行图9和图11的实施例,具体取决于用户在手机上对模式的选择操作。
本申请实施例中,通过参考麦和误差麦的路径映射结果的频响差异,可以获得适用于该用户的耳道结构的信号处理结果,进一步提高针对不同用户的信号处理的个性化,保证信号处理结果更加适用于该用户。
如图12a或者图12b所示，手机展示模式选择界面后，在检测到对自适应模式选择控件的启用操作时，可以发送自适应模式启用信号至耳机，进而耳机在接收到目标终端发送的自适应模式启用信号时，通过第一传感器，检测用户是否佩戴听力辅助装置。示例性的，图18为本申请实施例提供的信号处理方法的另一个示例性的流程图。如图18所示，用户可以滑动开启按钮至“ON”的状态，从而开启自适应优化模式，此时，手机检测到对自适应模式选择控件的启用操作。结合图18，图19为本申请实施例提供的耳机算法架构的另一个示例性的示意图。如图19所示，在接收到手机发送的自适应模式启用信号时，耳机进行自适应模式下的信号处理。自适应模式下的信号处理与图16中个性化模式下的信号处理类似，区别在于优化系数为实时计算得到的。其中，实时计算是指耳机通过环境监测+自说话监测，在检测到用户处于安静环境以及发出声音信号时，即利用耳内信号，参考信号和骨导信号计算优化系数。该优化系数即上述实施例中的补偿量或者衰减量。对于相同部分此处不再赘述，详见图16实施例的描述。
可以理解的是,在一种可选的实施方式中,图19实施例的执行可以是听力辅助装置在通过扬声器播放目标信号之后,执行通过第一传感器,检测用户是否佩戴听力辅助装置的步骤,进而执行实时计算优化系数的步骤。这样,可以保证优化时用户佩戴耳机,避免无效的信号处理。
本申请实施例中，耳机可以在用户每次佩戴耳机后，通过自适应模式动态调整对第一信号中用户的声音信号的优化力度，可以避免佩戴差异导致的优化效果不一致的问题，且不需要用户手动调整，通过在线修正也就是实时计算补偿量或者衰减量，实时提供适用于当前用户的声音信号优化效果。
应当理解的是,电子设备为了实现上述功能,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
本实施例可以根据上述方法示例对电子设备进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块可以采用硬件的形式实现。需要说明的是,本实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
一个示例中,图20示出了本申请实施例的一种装置2000的示意性框图,如图20所示,装置2000可包括:处理器2001和收发器/收发管脚2002,可选地,还包括存储器2003。
装置2000的各个组件通过总线2004耦合在一起,其中总线2004除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图中将各种总线都称为总线2004。
可选地，存储器2003可以用于存储前述方法实施例中的指令。该处理器2001可用于执行存储器2003中的指令，并控制接收管脚接收信号，以及控制发送管脚发送信号。
装置2000可以是上述方法实施例中的电子设备或电子设备的芯片。
示例性的,图21示出了本申请实施例的一种听力辅助装置2100的示意性框图。如图21所示,听力辅助装置2100可包括:
信号采集模块2101,用于当检测到用户佩戴听力辅助装置且用户发出声音时,采集第一信号和第二信号,其中,第一信号包括用户的声音信号和周围的环境音信号,第二信号包括用户的声音信号;
信号处理模块2102,用于根据第一信号和第二信号,对第一信号中的用户的声音信号进行处理以得到目标信号;
信号输出模块2103,用于通过耳部扬声器播放目标信号。
示例性的，图22示出了本申请实施例的一种设备控制装置2200的示意性框图。如图22所示，设备控制装置2200应用于终端，可包括：
通信模块2201,用于建立与听力辅助装置的通信连接;其中,听力辅助装置用于执行如上述的任意一种实现方式的信号处理方法;
交互模块2202,用于展示参数调整界面,参数调整界面包括以下至少一种设置控件:调节程度设置控件和频带范围设置控件;
检测模块2203,用于分别检测调节程度设置控件和频带范围设置控件上的操作以得到程度修正量和频带范围中的至少一种;
控制模块2204,用于发送程度修正量和频带范围中的至少一种至听力辅助装置;其中,程度修正量和频带范围用于听力辅助装置根据其中至少之一,对第一信号中的用户的声音信号进行处理以得到目标信号。
其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
本实施例还提供一种计算机存储介质，该计算机存储介质中存储有计算机指令，当该计算机指令在电子设备上运行时，使得电子设备执行上述相关方法步骤，以实现上述实施例中的信号处理方法或者设备控制方法。
本实施例还提供了一种计算机程序产品，当该计算机程序产品在计算机上运行时，使得计算机执行上述相关步骤，以实现上述实施例中的信号处理方法或者设备控制方法。
另外，本申请实施例还提供一种装置，这个装置具体可以是芯片，组件或模块，该装置可包括相连的处理器和存储器；其中，存储器用于存储计算机执行指令，当装置运行时，处理器可执行存储器存储的计算机执行指令，以使芯片执行上述各方法实施例中的信号处理方法或者设备控制方法。
其中,本实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请实施例所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外，在本申请各个实施例中的各功能单元可以集成在一个处理单元中，也可以是各个单元单独物理存在，也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现，也可以采用软件功能单元的形式实现。
本申请各个实施例的任意内容，以及同一实施例的任意内容，均可以自由组合。对上述内容的任意组合均在本申请实施例的范围之内。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请实施例各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
上面结合附图对本申请的实施例进行了描述，但是本申请并不局限于上述的具体实施方式，上述的具体实施方式仅仅是示意性的，而不是限制性的，本领域的普通技术人员在本申请的启示下，在不脱离本申请宗旨和权利要求所保护的范围情况下，还可做出很多形式，均属于本申请的保护之内。

Claims (56)

  1. 一种信号处理方法,其特征在于,应用于听力辅助装置,所述方法包括:
    当检测到用户佩戴所述听力辅助装置且所述用户发出声音时,采集第一信号和第二信号,其中,所述第一信号包括所述用户的声音信号和周围的环境音信号,所述第二信号包括所述用户的声音信号;
    根据所述第一信号和所述第二信号,对所述第一信号中的所述用户的声音信号进行处理以得到目标信号;
    通过耳部扬声器播放所述目标信号。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述第一信号和所述第二信号,对所述第一信号中的所述用户的声音信号进行处理以得到目标信号,包括:
    利用所述第二信号对所述第一信号进行过滤以得到滤波增益;
    根据所述滤波增益对所述第一信号中的所述用户的声音信号进行衰减处理以得到所述目标信号。
  3. 根据权利要求2所述的方法,其特征在于,所述利用所述第二信号对所述第一信号进行过滤以得到滤波增益,包括:
    利用所述第二信号过滤所述第一信号中的所述用户的声音信号以得到期望信号;
    计算所述期望信号与所述第一信号的比值以得到所述滤波增益。
  4. 根据权利要求2或3所述的方法,其特征在于,所述利用所述第二信号对所述第一信号进行过滤以得到滤波增益,包括:
    利用所述第二信号对所述第一信号进行过滤以得到原始滤波增益;
    获取程度修正量和频带范围中的至少一种;
    按照所述程度修正量调整所述原始滤波增益的大小,得到滤波增益;
    和/或,按照所述频带范围调整所述原始滤波增益使能的频带,得到所述滤波增益。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述根据所述第一信号和所述第二信号对所述第一信号中的所述用户的声音信号进行处理以得到目标信号,包括:
    利用所述第二信号对所述第一信号进行增强以得到补偿信号;
    根据所述补偿信号对所述第一信号中的所述用户的声音信号进行增强处理以得到所述目标信号。
  6. 根据权利要求5所述的方法,其特征在于,所述利用所述第二信号对所述第一信号进行增强以得到补偿信号,包括:
    确定所述第二信号的加权系数;
    根据所述加权系数和所述第二信号获取增强信号;
    将所述增强信号加载于所述第一信号以得到所述补偿信号。
  7. 根据权利要求5或6所述的方法,其特征在于,所述利用所述第二信号对所述第一信号进行增强以得到补偿信号,包括:
    获取程度修正量和频带范围中的至少一种;
    利用所述程度修正量指示的信号补偿强度和所述第二信号,对所述第一信号进行增强,得到补偿信号;
    和/或,利用所述第二信号对属于所述频带范围的所述第一信号进行增强,得到补偿信号。
  8. 根据权利要求4或7所述的方法,其特征在于,所述获取程度修正量和频带范围中的至少一种,包括:
    建立与目标终端的通信连接;其中,所述目标终端用于展示参数调整界面,所述参数调整界面包括以下至少一种设置控件:调节程度设置控件和频带范围设置控件;
    接收所述目标终端发送的程度修正量和频带范围中的至少一种;其中,所述程度修正量和所述频带范围为所述目标终端通过分别检测所述调节程度设置控件和所述频带范围设置控件上的操作获得。
  9. 根据权利要求8所述的方法,其特征在于,所述参数调整界面包括左耳调整界面和右耳调整界面;
    所述接收所述目标终端发送的程度修正量和频带范围中的至少一种,包括:
    接收所述目标终端发送的左耳修正数据和右耳修正数据中的至少一种;其中,所述左耳修正数据为所述目标终端通过检测所述左耳调整界面中设置控件上的操作获得,所述右耳修正数据为所述目标终端通过检测所述右耳调整界面中设置控件上的操作获得;所述左耳修正数据包括左耳程度修正量和左耳频带范围中的至少一种;所述右耳修正数据包括右耳程度修正量和右耳频带范围中的至少一种;
    根据所述左耳修正数据和/或所述右耳修正数据携带的耳部标识,选择与所述听力辅助装置所处耳部相同的修正数据。
  10. 根据权利要求8或9所述的方法,其特征在于,所述目标终端还用于展示模式选择界面;所述模式选择界面包括:自说话优化模式选择控件;
    在所述采集第一信号和第二信号之前,所述方法还包括:
    当接收到所述目标终端发送的自说话优化模式启用信号时,检测用户是否佩戴所述听力辅助装置;其中,所述自说话优化模式启用信号为所述目标终端通过检测所述自说话优化模式选择控件上的启用操作时发送;
    若佩戴,则检测所述用户是否发出声音。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,所述当检测到用户佩戴所述听力辅助装置且所述用户发出声音时,采集第一信号和第二信号,包括:
    通过第一传感器,检测用户是否佩戴所述听力辅助装置;
    若佩戴,则通过第三传感器检测所述用户是否处于安静环境;
    若处于，则通过第二传感器检测所述用户是否发出声音；
    若是,则采集第一信号和第二信号。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述第一信号和所述第二信号,对所述第一信号中的所述用户的声音信号进行处理以得到目标信号,包括:
    在所述用户的耳道处采集第三信号;
    在所述用户的耳内播放所述第一信号和所述第三信号;
    采集第四信号和第五信号;其中,所述第四信号包括:所述第一信号经过所述耳道映射后的信号;所述第五信号包括:所述第三信号经过所述耳道映射后的信号;
    确定所述第四信号和所述第五信号间的频响差异;
    根据所述第一信号,所述第二信号和所述频响差异,对所述第一信号中的所述用户的声音信号进行处理以得到目标信号,其中,所述频响差异用于指示进行所述处理的程度。
  13. 根据权利要求12所述的方法,其特征在于,所述确定所述第四信号和所述第五信号间的频响差异,包括:
    分别获取所述第四信号和所述第五信号的频率响应;
    计算所述第四信号的频率响应与所述第五信号的频率响应间的差异值,得到所述频响差异。
  14. 根据权利要求12或13所述的方法,其特征在于,所述根据所述第一信号,所述第二信号和所述频响差异,对所述第一信号中的所述用户的声音信号进行处理以得到目标信号,包括:
    根据所述频响差异,确定处理的类型为衰减或者增强;
    当所述处理的类型为衰减时,根据频响差异对所述第一信号中的所述用户的声音信号进行衰减处理以得到所述目标信号;
    当所述处理的类型为增强时,根据频响差异对所述第一信号中的所述用户的声音信号进行增强处理以得到所述目标信号。
  15. 根据权利要求11-14中任一项所述的方法,其特征在于,所述通过第一传感器,检测用户是否佩戴所述听力辅助装置,包括:
    建立与目标终端的通信连接;所述目标终端用于展示模式选择界面;所述模式选择界面包括个性化模式选择控件;
    接收到所述目标终端发送的个性化模式启用信号时,通过第一传感器,检测用户是否佩戴所述听力辅助装置;其中,所述个性化模式启用信号为所述目标终端在检测到所述个性化模式选择控件上的启用操作时发送。
  16. 根据权利要求15所述的方法,其特征在于,所述若处于,则通过所述第二传感器检测所述用户是否发出声音,包括:
    若处于,则发送信息展示指令至所述目标终端,所述信息展示指令用于指示所述目标终端展示提示信息;其中,所述提示信息用于引导所述用户发出声音;
    通过所述第二传感器检测所述用户是否发出声音。
  17. 根据权利要求15或16所述的方法,其特征在于,在所述采集第一信号和第二信号之前,所述方法还包括:
    当检测到用户佩戴所述听力辅助装置时,发送第一完成指令至所述目标终端;其中,所述第一完成指令用于指示所述目标终端输出佩戴检测完成的提示信息;
    当检测到用户处于安静环境时,发送第二完成指令至所述目标终端;其中,所述第二完成指令用于指示所述目标终端输出安静环境检测完成的信息;
    和/或,当得到目标信号时,发送第三完成指令至所述目标终端;其中,所述第三完成指令用于指示所述目标终端输出以下信息至少之一:检测已完成的信息和个性化参数已生成的信息。
  18. 根据权利要求12-16中任一项所述的方法,其特征在于,在所述通过所述扬声器播放所述目标信号之后,所述方法还包括:
    执行所述通过第一传感器,检测用户是否佩戴所述听力辅助装置的步骤。
  19. 根据权利要求18所述的方法,其特征在于,所述执行所述通过第一传感器,检测用户是否佩戴所述听力辅助装置,包括:
    建立与目标终端的通信连接;所述目标终端用于展示模式选择界面;所述模式选择界面包括自适应模式选择控件;
    接收到所述目标终端发送的自适应模式启用信号时,执行所述通过第一传感器,检测用户是否佩戴所述听力辅助装置;其中,所述自适应模式启用信号为所述目标终端在检测到所述自适应模式选择控件上的启用操作时发送。
  20. 一种设备控制方法,其特征在于,应用于终端,所述方法包括:
    建立与听力辅助装置的通信连接;其中,所述听力辅助装置用于执行如权利要求1-19任一项所述的信号处理方法;
    展示参数调整界面,所述参数调整界面包括以下至少一种设置控件:调节程度设置控件和频带范围设置控件;
    分别检测所述调节程度设置控件和所述频带范围设置控件上的操作以得到程度修正量和频带范围中的至少一种;
    发送所述程度修正量和所述频带范围中的至少一种至所述听力辅助装置;其中,所述程度修正量和所述频带范围用于所述听力辅助装置根据其中至少之一,对第一信号中的用户的声音信号进行处理以得到目标信号。
  21. 根据权利要求20所述的方法,其特征在于,所述调节程度设置控件包括形状相同且尺寸不同的多个几何图形,所述多个几何图形中每个几何图形指示一个修正量,所述修正量越大所述几何图形的尺寸越大;所述频带范围设置控件包括频带范围图标和位于所述频带范围图标上的滑块;
    所述分别检测所述调节程度设置控件和所述频带范围设置控件上的操作以得到程度修正量和频带范围中的至少一种,包括:
    检测对所述调节程度设置控件上所述多个几何图形的点击操作;
    将检测到所述点击操作的几何图形指示的修正量,确定为所述程度修正量;
    和/或,检测对所述频带范围设置控件上滑块的滑动操作;
    根据所述滑块的滑动位置,确定所述频带范围。
  22. 根据权利要求20或21所述的方法,其特征在于,所述参数调整界面包括左耳调整界面和右耳调整界面;
    所述分别检测所述调节程度设置控件和所述频带范围设置控件上的操作以得到程度修正量和频带范围中的至少一种,包括:
    检测所述左耳调整界面中设置控件上的操作以得到左耳修正数据,其中,所述左耳修正数据包括左耳程度修正量和左耳频带范围中的至少一种;
    检测所述右耳调整界面中设置控件上的操作以得到右耳修正数据,其中,所述右耳修正数据包括右耳程度修正量和右耳频带范围中的至少一种。
  23. 根据权利要求20-22中任一项所述的方法,其特征在于,所述展示参数调整界面,包括:
    展示模式选择界面;其中,所述模式选择界面包括自说话优化模式选择控件;
    当检测到对所述自说话优化模式选择控件的启用操作时,展示参数调整界面。
  24. 根据权利要求20-23中任一项所述的方法,其特征在于,在所述展示参数调整界面之前,所述方法还包括:
    展示模式选择界面;其中,所述模式选择界面包括个性化模式选择控件和自适应模式选择控件中的至少一种;
    当检测到对所述个性化模式选择控件的启用操作时,发送个性化模式启用信号至所述听力辅助装置;其中,所述个性化模式启用信号用于指示所述听力辅助装置通过第一传感器,检测用户是否佩戴所述听力辅助装置;
    和/或,当检测到所述自适应模式选择控件上的启用操作时,发送自适应模式启用信号至所述听力辅助装置;其中,所述自适应模式启用信号用于指示所述听力辅助装置通过第一传感器,检测用户是否佩戴所述听力辅助装置。
  25. 根据权利要求24所述的方法,其特征在于,在所述发送个性化模式启用信号至所述听力辅助装置之后,所述方法还包括:
    接收所述听力辅助装置发送的信息展示指令;其中,所述信息展示指令为所述听力辅助装置在检测到用户处于安静环境时发送;
    展示提示信息;其中,所述提示信息用于引导所述用户发出声音。
  26. 根据权利要求24或25所述的方法,其特征在于,在所述展示提示信息之前,所述方法还包括:
    接收所述听力辅助装置发送的第一完成指令;其中,所述第一完成指令为所述听力辅助装置在检测到用户佩戴所述听力辅助装置时发送;
    接收所述听力辅助装置发送的第二完成指令;其中,所述第二完成指令为所述听力辅助装置在检测到用户处于安静环境时发送;
    在所述展示提示信息之后,所述方法还包括:
    接收所述听力辅助装置发送的第三完成指令;其中,所述第三完成指令为所述听力辅助装置在得到目标信号时发送;
    输出以下信息至少之一:检测已完成的信息和个性化参数已生成的信息。
  27. 一种听力辅助装置,其特征在于,所述装置包括:
    信号采集模块,用于当检测到用户佩戴所述听力辅助装置且所述用户发出声音时,采集第一信号和第二信号,其中,所述第一信号包括所述用户的声音信号和周围的环境音信号,所述第二信号包括所述用户的声音信号;
    信号处理模块,用于根据所述第一信号和所述第二信号,对所述第一信号中的所述用户的声音信号进行处理以得到目标信号;
    信号输出模块，用于通过耳部扬声器播放所述目标信号。
  28. 根据权利要求27所述的装置,其特征在于,所述信号处理模块,进一步用于:
    利用所述第二信号对所述第一信号进行过滤以得到滤波增益;
    根据所述滤波增益对所述第一信号中的所述用户的声音信号进行衰减处理以得到所述目标信号。
  29. 根据权利要求28所述的装置,其特征在于,所述信号处理模块,进一步用于:
    利用所述第二信号过滤所述第一信号中的所述用户的声音信号以得到期望信号;
    计算所述期望信号与所述第一信号的比值以得到所述滤波增益。
  30. 根据权利要求28或29所述的装置,其特征在于,所述信号处理模块,进一步用于:
    利用所述第二信号对所述第一信号进行过滤以得到原始滤波增益;
    获取程度修正量和频带范围中的至少一种;
    按照所述程度修正量调整所述原始滤波增益的大小,得到滤波增益;
    和/或,按照所述频带范围调整所述原始滤波增益使能的频带,得到所述滤波增益。
  31. 根据权利要求27-30中任一项所述的装置,其特征在于,所述信号处理模块,进一步用于:
    利用所述第二信号对所述第一信号进行增强以得到补偿信号;
    根据所述补偿信号对所述第一信号中的所述用户的声音信号进行增强处理以得到所述目标信号。
  32. 根据权利要求31所述的装置,其特征在于,所述信号处理模块,进一步用于:
    确定所述第二信号的加权系数;
    根据所述加权系数和所述第二信号获取增强信号;
    将所述增强信号加载于所述第一信号以得到所述补偿信号。
  33. 根据权利要求31或32所述的装置,其特征在于,所述信号处理模块,进一步用于:
    获取程度修正量和频带范围中的至少一种;
    利用所述程度修正量指示的信号补偿强度和所述第二信号,对所述第一信号进行增强,得到补偿信号;
    和/或,利用所述第二信号对属于所述频带范围的所述第一信号进行增强,得到补偿信号。
  34. 根据权利要求30或33所述的装置,其特征在于,所述信号处理模块,进一步用于:
    建立与目标终端的通信连接;其中,所述目标终端用于展示参数调整界面,所述参数调整界面包括以下至少一种设置控件:调节程度设置控件和频带范围设置控件;
    接收所述目标终端发送的程度修正量和频带范围中的至少一种;其中,所述程度修正量和所述频带范围为所述目标终端通过分别检测所述调节程度设置控件和所述频带范围设置控件上的操作获得。
  35. 根据权利要求34所述的装置,其特征在于,所述参数调整界面包括左耳调整界面和右耳调整界面;
    所述信号处理模块,进一步用于:
    接收所述目标终端发送的左耳修正数据和右耳修正数据中的至少一种;其中,所述左耳修正数据为所述目标终端通过检测所述左耳调整界面中设置控件上的操作获得,所述右耳修正数据为所述目标终端通过检测所述右耳调整界面中设置控件上的操作获得;所述左耳修正数据包括左耳程度修正量和左耳频带范围中的至少一种;所述右耳修正数据包括右耳程度修正量和右耳频带范围中的至少一种;
    根据所述左耳修正数据和/或所述右耳修正数据携带的耳部标识,选择与所述听力辅助装置所处耳部相同的修正数据。
  36. 根据权利要求34或35所述的装置,其特征在于,所述目标终端还用于展示模式选择界面;所述模式选择界面包括:自说话优化模式选择控件;
    所述信号采集模块,还用于:
    当接收到所述目标终端发送的自说话优化模式启用信号时,检测用户是否佩戴所述听力辅助装置;其中,所述自说话优化模式启用信号为所述目标终端通过检测所述自说话优化模式选择控件上的启用操作时发送;
    若佩戴,则检测所述用户是否发出声音。
  37. 根据权利要求27-36中任一项所述的装置,其特征在于,所述信号采集模块,进一步用于:
    通过第一传感器,检测用户是否佩戴所述听力辅助装置;
    若佩戴,则通过第三传感器检测所述用户是否处于安静环境;
    若处于，则通过第二传感器检测所述用户是否发出声音；
    若是,则采集第一信号和第二信号。
  38. 根据权利要求37所述的装置,其特征在于,所述信号处理模块,进一步用于:
    在所述用户的耳道处采集第三信号;
    在所述用户的耳内播放所述第一信号和所述第三信号;
    采集第四信号和第五信号;其中,所述第四信号包括:所述第一信号经过所述耳道映射后的信号;所述第五信号包括:所述第三信号经过所述耳道映射后的信号;
    确定所述第四信号和所述第五信号间的频响差异;
    根据所述第一信号,所述第二信号和所述频响差异,对所述第一信号中的所述用户的声音信号进行处理以得到目标信号,其中,所述频响差异用于指示进行所述处理的程度。
  39. 根据权利要求38所述的装置,其特征在于,所述信号处理模块,进一步用于:
    分别获取所述第四信号和所述第五信号的频率响应;
    计算所述第四信号的频率响应与所述第五信号的频率响应间的差异值,得到所述频响差异。
  40. 根据权利要求38或39所述的装置,其特征在于,所述信号处理模块,进一步用于:
    根据所述频响差异,确定处理的类型为衰减或者增强;
    当所述处理的类型为衰减时,根据频响差异对所述第一信号中的所述用户的声音信号进行衰减处理以得到所述目标信号;
    当所述处理的类型为增强时,根据频响差异对所述第一信号中的所述用户的声音信号进行增强处理以得到所述目标信号。
  41. 根据权利要求37-40中任一项所述的装置,其特征在于,所述信号采集模块,进一步用于:
    建立与目标终端的通信连接;所述目标终端用于展示模式选择界面;所述模式选择界面包括个性化模式选择控件;
    接收到所述目标终端发送的个性化模式启用信号时，通过第一传感器，检测用户是否佩戴所述听力辅助装置；其中，所述个性化模式启用信号为所述目标终端在检测到所述个性化模式选择控件上的启用操作时发送。
  42. 根据权利要求41所述的装置,其特征在于,所述信号采集模块,进一步用于:
    若处于,则发送信息展示指令至所述目标终端,所述信息展示指令用于指示所述目标终端展示提示信息;其中,所述提示信息用于引导所述用户发出声音;
    通过所述第二传感器检测所述用户是否发出声音。
  43. 根据权利要求41或42所述的装置,其特征在于,所述装置还包括指令发送模块,用于:
    当检测到用户佩戴所述听力辅助装置时,发送第一完成指令至所述目标终端;其中,所述第一完成指令用于指示所述目标终端输出佩戴检测完成的提示信息;
    当检测到用户处于安静环境时,发送第二完成指令至所述目标终端;其中,所述第二完成指令用于指示所述目标终端输出安静环境检测完成的信息;
    和/或,当得到目标信号时,发送第三完成指令至所述目标终端;其中,所述第三完成指令用于指示所述目标终端输出以下信息至少之一:检测已完成的信息和个性化参数已生成的信息。
  44. 根据权利要求38-42中任一项所述的装置,其特征在于,所述信号采集模块,还用于:
    在所述信号输出模块通过所述扬声器播放所述目标信号之后,执行所述通过第一传感器,检测用户是否佩戴所述听力辅助装置的步骤。
  45. 根据权利要求44所述的装置,其特征在于,所述信号采集模块,进一步用于:
    建立与目标终端的通信连接;所述目标终端用于展示模式选择界面;所述模式选择界面包括自适应模式选择控件;
    接收到所述目标终端发送的自适应模式启用信号时,执行所述通过第一传感器,检测用户是否佩戴所述听力辅助装置;其中,所述自适应模式启用信号为所述目标终端在检测到所述自适应模式选择控件上的启用操作时发送。
  46. 一种设备控制装置,其特征在于,应用于终端,所述装置包括:
    通信模块,用于建立与听力辅助装置的通信连接;其中,所述听力辅助装置用于执行如权利要求1-19任一项所述的信号处理方法;
    交互模块,用于展示参数调整界面,所述参数调整界面包括以下至少一种设置控件:调节程度设置控件和频带范围设置控件;
    检测模块,用于分别检测所述调节程度设置控件和所述频带范围设置控件上的操作以得到程度修正量和频带范围中的至少一种;
    控制模块,用于发送所述程度修正量和所述频带范围中的至少一种至所述听力辅助装置;其中,所述程度修正量和所述频带范围用于所述听力辅助装置根据其中至少之一,对第一信号中的用户的声音信号进行处理以得到目标信号。
  47. 根据权利要求46所述的装置，其特征在于，所述调节程度设置控件包括形状相同且尺寸不同的多个几何图形，所述多个几何图形中每个几何图形指示一个修正量，所述修正量越大所述几何图形的尺寸越大；所述频带范围设置控件包括频带范围图标和位于所述频带范围图标上的滑块；
    所述检测模块,进一步用于:
    检测对所述调节程度设置控件上所述多个几何图形的点击操作;
    将检测到所述点击操作的几何图形指示的修正量,确定为所述程度修正量;
    和/或,检测对所述频带范围设置控件上滑块的滑动操作;
    根据所述滑块的滑动位置,确定所述频带范围。
  48. 根据权利要求46或47所述的装置,其特征在于,所述参数调整界面包括左耳调整界面和右耳调整界面;
    所述检测模块,进一步用于:
    检测所述左耳调整界面中设置控件上的操作以得到左耳修正数据,其中,所述左耳修正数据包括左耳程度修正量和左耳频带范围中的至少一种;
    检测所述右耳调整界面中设置控件上的操作以得到右耳修正数据,其中,所述右耳修正数据包括右耳程度修正量和右耳频带范围中的至少一种。
  49. 根据权利要求46-48中任一项所述的装置,其特征在于,所述交互模块,进一步用于:
    展示模式选择界面;其中,所述模式选择界面包括自说话优化模式选择控件;
    当检测到对所述自说话优化模式选择控件的启用操作时,展示参数调整界面。
  50. 根据权利要求46-49中任一项所述的装置,其特征在于,所述交互模块,还用于:
    在所述展示参数调整界面之前,展示模式选择界面;其中,所述模式选择界面包括个性化模式选择控件和自适应模式选择控件中的至少一种;
    当检测到对所述个性化模式选择控件的启用操作时,发送个性化模式启用信号至所述听力辅助装置;其中,所述个性化模式启用信号用于指示所述听力辅助装置通过第一传感器,检测用户是否佩戴所述听力辅助装置;
    和/或,当检测到所述自适应模式选择控件上的启用操作时,发送自适应模式启用信号至所述听力辅助装置;其中,所述自适应模式启用信号用于指示所述听力辅助装置通过第一传感器,检测用户是否佩戴所述听力辅助装置。
  51. 根据权利要求50所述的装置,其特征在于,所述交互模块,还用于:
    在所述发送个性化模式启用信号至所述听力辅助装置之后,接收所述听力辅助装置发送的信息展示指令;其中,所述信息展示指令为所述听力辅助装置在检测到用户处于安静环境时发送;
    展示提示信息;其中,所述提示信息用于引导所述用户发出声音。
  52. 根据权利要求50或51所述的装置,其特征在于,所述交互模块,还用于:
    在所述展示提示信息之前,接收所述听力辅助装置发送的第一完成指令;其中,所述第一完成指令为所述听力辅助装置在检测到用户佩戴所述听力辅助装置时发送;
    接收所述听力辅助装置发送的第二完成指令;其中,所述第二完成指令为所述听力辅助装置在检测到用户处于安静环境时发送;
    所述交互模块,还用于:
    在所述展示提示信息之后,接收所述听力辅助装置发送的第三完成指令;其中,所述第三完成指令为所述听力辅助装置在得到目标信号时发送;
    输出以下信息至少之一:检测已完成的信息和个性化参数已生成的信息。
  53. 一种电子设备,其特征在于,包括:
    处理器和收发器;
    存储器,用于存储一个或多个程序;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-26中任一项所述的方法。
  54. 一种计算机可读存储介质,其特征在于,包括计算机程序,其特征在于,当所述计算机程序在电子设备上运行时,使得所述电子设备执行如权利要求1-26中任意一项所述的方法。
  55. 一种芯片,其特征在于,包括一个或多个接口电路和一个或多个处理器;所述接口电路用于从电子设备的存储器接收信号,并向所述处理器发送所述信号,所述信号包括存储器中存储的计算机指令;当所述处理器执行所述计算机指令时,使得所述电子设备执行权利要求1-26中任意一项所述的方法。
  56. 一种计算机程序产品,其特征在于,包括计算机程序,当所述计算机程序被电子设备执行时,使得所述电子设备执行权利要求1-26中任一项所述的方法。
PCT/CN2023/093251 2022-07-30 2023-05-10 信号处理方法及装置、设备控制方法及装置 WO2024027259A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210911626.2 2022-07-30
CN202210911626.2A CN117528370A (zh) 2022-07-30 2022-07-30 信号处理方法及装置、设备控制方法及装置

Publications (1)

Publication Number Publication Date
WO2024027259A1 true WO2024027259A1 (zh) 2024-02-08

Family

ID=89753740

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/093251 WO2024027259A1 (zh) 2022-07-30 2023-05-10 信号处理方法及装置、设备控制方法及装置

Country Status (2)

Country Link
CN (1) CN117528370A (zh)
WO (1) WO2024027259A1 (zh)

Also Published As

Publication number Publication date
CN117528370A (zh) 2024-02-06


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23848970

Country of ref document: EP

Kind code of ref document: A1