WO2021120247A1 - Hearing compensation method, device, and computer-readable storage medium - Google Patents

Hearing compensation method, device, and computer-readable storage medium

Info

Publication number
WO2021120247A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
compensation
voice
hearing
output
Prior art date
Application number
PCT/CN2019/128044
Other languages
English (en)
French (fr)
Inventor
朱永胜
盖伟东
詹马尔姆安德斯
Original Assignee
深圳市易优斯科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市易优斯科技有限公司
Publication of WO2021120247A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00 Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • H04R25/50 Customised settings for obtaining desired overall acoustical characteristics
    • H04R25/55 Deaf-aid sets using an external connection, either wireless or wired
    • H04R25/554 Deaf-aid sets using a wireless connection, e.g. between microphone and amplifier or using Tcoils
    • H04R2225/00 Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43 Signal processing in hearing aids to enhance the speech intelligibility
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/60 Substation equipment including speech amplifiers
    • H04M1/6033 Substation equipment including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041 Portable telephones adapted for handsfree use
    • H04M1/6058 Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone

Definitions

  • This application relates to the field of communication technology, and in particular to hearing compensation methods, devices, and computer-readable storage media.
  • At present, the proportion of people with varying degrees of hearing loss in China is very high. Even among people with normal hearing, each person's hearing characteristics differ, and the frequency bands in which hearing is impaired differ from person to person. Ordinary earphones combined with mobile phones or other playback devices can make calls and play music, but they cannot process sound according to a user's individual hearing characteristics and cannot boost hearing in the frequency bands where the user's hearing is impaired.
  • Because users cannot adjust the playback volume separately for their hearing-impaired frequency bands, they cannot hear the sound in those bands clearly, which affects their reception of the sound information played by the playback device.
  • The main purpose of this application is to propose a hearing compensation method, device, and computer-readable storage medium, aiming to solve the technical problem that hearing-impaired users cannot hear the content of a voice call clearly.
  • To achieve the above objective, the present application provides a hearing compensation method, which includes the following steps: obtaining the user's hearing loss information; determining acoustic compensation information according to the hearing loss information; when voice output information is received, compensating the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and playing the voice compensation information.
  • In addition, this application also provides a hearing compensation device, which includes: a memory, a processor, and a hearing compensation program stored in the memory and executable on the processor; when the hearing compensation program is executed by the processor, the steps of the hearing compensation method described above are implemented.
  • In addition, the present application also provides a computer-readable storage medium on which a hearing compensation program is stored; when the hearing compensation program is executed by a processor, the steps of the hearing compensation method described above are implemented.
  • This application provides a hearing compensation method, device, and computer-readable storage medium. The method obtains the user's hearing loss information; determines acoustic compensation information according to the hearing loss information; when voice output information is received, compensates the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and plays the voice compensation information.
  • In this way, the present application enables a hearing-impaired user to hear the content of a voice call clearly during the call, thereby effectively enhancing the user's hearing.
  • FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution of an embodiment of the present application
  • FIG. 2 is a schematic flowchart of the first embodiment of the hearing compensation method according to the application.
  • FIG. 3 is a schematic flowchart of a second embodiment of the hearing compensation method of this application.
  • FIG. 4 is a schematic flowchart of a third embodiment of the hearing compensation method of this application.
  • FIG. 5 is a schematic flowchart of a fourth embodiment of a hearing compensation method according to this application.
  • FIG. 6 is a schematic flowchart of a fifth embodiment of the hearing compensation method of this application.
  • The main solution of the embodiments of the present application is: obtain the user's hearing loss information; determine acoustic compensation information according to the hearing loss information; when voice output information is received, compensate the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and play the voice compensation information.
  • At present, the proportion of people with varying degrees of hearing loss in China is very high. Even among people with normal hearing, each person's hearing characteristics differ, and the frequency bands in which hearing is impaired differ from person to person. Ordinary earphones combined with mobile phones or other playback devices can make calls and play music, but they cannot process sound according to a user's individual hearing characteristics and cannot boost hearing in the frequency bands where the user's hearing is impaired. Because users cannot adjust the playback volume separately for their hearing-impaired frequency bands, they cannot hear the sound in those bands clearly, which affects their reception of the sound information played by the playback device.
  • The present application solves the technical problem that hearing-impaired users cannot hear the content of a voice call clearly during a voice call.
  • FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution of an embodiment of the present application.
  • the terminal in the embodiment of the present application may be a PC, or a mobile terminal device with a display function, such as a smart phone or a tablet computer.
  • the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM memory, or a stable memory (non-volatile memory), such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • Preferably, the terminal may also include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and so on.
  • The sensors include, for example, light sensors, motion sensors, and other sensors.
  • Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display screen according to the ambient light, and the proximity sensor can turn off the display screen and/or backlight when the mobile terminal is moved to the ear.
  • As a kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in various directions (usually three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile terminal (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Of course, the mobile terminal may also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described in detail here.
  • Those skilled in the art will understand that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and a hearing compensation program.
  • the network interface 1004 is mainly used to connect to the back-end server and communicate with the back-end server;
  • the user interface 1003 is mainly used to connect to the client (user side) and communicate with the client;
  • the processor 1001 can be used to call the hearing compensation program stored in the memory 1005 and perform the following operations: obtain the user's hearing loss information; determine acoustic compensation information according to the hearing loss information; when voice output information is received, compensate the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and play the voice compensation information.
  • Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations: read the impaired frequency information in the hearing loss information and the actual impairment information corresponding to the impaired frequency information; compare the actual impairment information corresponding to the impaired frequency information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and compensation multiple information corresponding to the compensation frequency information.
  • Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations: when voice output information is received, determine output frequency information according to the voice output information; detect whether the output frequency information is the same as the compensation frequency information; if the output frequency information is the same as the compensation frequency information, compensate the voice output information according to the hearing compensation algorithm and the compensation multiple information; output the compensated voice output information to generate voice compensation information.
  • Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operation: if the output frequency information is different from the compensation frequency information, output the voice output information to generate voice compensation information.
  • Further, when the voice output information is call voice output information, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations: when call voice output information is received, compensate the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information; the step of playing the voice compensation information then includes: playing the call voice compensation information.
  • Further, when the voice output information is music voice output information, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations: when music voice output information is received, compensate the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information; the step of playing the voice compensation information then includes: playing the music voice compensation information.
  • Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations: acquire environmental voice information; the step of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information further includes: compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information; the step of playing the voice compensation information then includes: playing the environmental voice compensation information.
  • Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations: determine music output frequency information according to the environmental voice information; detect whether the music output frequency information is the same as the compensation frequency information; if the music output frequency information is the same as the compensation frequency information, compensate the environmental voice information according to the hearing compensation algorithm and the compensation multiple information; output the compensated environmental voice information to generate music voice compensation information.
  • FIG. 2 is a schematic flowchart of the first embodiment of the hearing compensation method of this application.
  • In this embodiment of the application, the hearing compensation method is applied to a hearing compensation device, and the method includes:
  • Step S10: obtain the user's hearing loss information;
  • In this embodiment, before the hearing compensation device compensates the user's hearing, it needs to first obtain the state of the user's hearing loss, so the hearing compensation device obtains the user's hearing loss information. The hearing loss information may be the difference between the user's hearing at a certain frequency and a normal person's hearing at that frequency. The hearing loss information may be obtained by the hearing compensation device measuring the user's hearing; it may also be obtained by another device measuring the user's hearing and then sent to the hearing compensation device; or the hearing compensation device may send a hearing-loss-information acquisition instruction to another detection device, which then detects or looks up the user's hearing loss information and sends it back to the hearing compensation device.
  • The hearing compensation device may be a speaker device, a Bluetooth headset, a mobile terminal, a fixed terminal, a tablet computer, or the like.
  • Step S20: determine acoustic compensation information according to the hearing loss information;
  • In this embodiment, after obtaining the user's hearing loss information, the hearing compensation device analyzes the hearing loss information to determine the acoustic compensation information, and saves the acoustic compensation information in the hearing compensation device.
  • The acoustic compensation information is a parameter or function used to compensate for the user's hearing loss.
  • Step S30: when voice output information is received, compensate the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information;
  • In this embodiment, after obtaining the acoustic compensation information, when the hearing compensation device receives voice output information it compensates the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information.
  • The voice output information may be call voice information sent by a mobile terminal to the hearing compensation device in a 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenario.
  • The hearing compensation algorithm is a calculation method used to compensate for the user's hearing loss. It can be applied to EVS (Enhanced Voice Services) and to 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios. The algorithm mainly addresses high-frequency (4KHz-20KHz) hearing loss in wideband, ultra-wideband, or full-band voice call scenarios, that is, cases where the user cannot hear high-frequency (4KHz-20KHz) voice information clearly. The voice compensation information is voice information that, after processing by the hearing compensation device and playback, the user can hear clearly despite the impairment.
  • The voice compensation information may be the voice information output in 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios.
  • The voice compensation information mainly compensates high-frequency (4KHz-20KHz) sound information and compensates low-frequency (20Hz-4KHz) sound information to a lesser extent.
  • Step S40: play the voice compensation information.
  • In this embodiment, after generating the voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
  • Through the above solution, this embodiment obtains the user's hearing loss information; determines acoustic compensation information according to the hearing loss information; when voice output information is received, compensates the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and plays the voice compensation information. A minimal sketch of this overall flow is given below.
  • In this way, the hearing compensation algorithm performs hearing compensation (voice enhancement) on call voice in 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios, providing a voice-enhancement capability that cannot be achieved in 2G and 3G narrowband calls, while in 2G and 3G narrowband call scenarios the user does not need hearing compensation (voice enhancement), because most people cannot hear high-frequency sounds clearly while low-frequency sounds can be heard clearly, and most people's hearing loss mainly concerns high-frequency sound information. A hearing-impaired user can thus clearly hear the content of a voice call, and the user's hearing is effectively enhanced.
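To make the S10-S40 flow above concrete, here is a minimal, illustrative Python sketch. It is not the patented implementation: the function names, the representation of hearing loss as per-frequency values relative to a normal value of 1.0, and the stubbed playback are assumptions introduced only for clarity.

```python
# Illustrative sketch of the S10-S40 flow; names and data shapes are assumed.

def obtain_hearing_loss_info():
    """S10: hearing loss info as {frequency_hz: actual_hearing_value}.
    In practice this could come from an on-device test or another detection device."""
    return {6000: 0.25, 8000: 0.20}  # example values, normal hearing taken as 1.0

def determine_acoustic_compensation(hearing_loss_info, normal_value=1.0):
    """S20: compensation multiple per frequency = normal value / actual value."""
    return {f: normal_value / actual for f, actual in hearing_loss_info.items()}

def compensate(voice_frame, output_frequency_hz, compensation_info):
    """S30: multiply the frame by the compensation multiple when the output
    frequency matches a compensation frequency; otherwise pass it through."""
    multiple = compensation_info.get(output_frequency_hz, 1.0)
    return [sample * multiple for sample in voice_frame]

def play(samples):
    """S40: hand the compensated samples to the speaker module (stubbed here)."""
    print(f"playing {len(samples)} samples")

compensation = determine_acoustic_compensation(obtain_hearing_loss_info())
play(compensate([0.1, 0.2, 0.1], 6000, compensation))  # frame analysed as 6 kHz content
```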
  • FIG. 3 is a schematic flowchart of the second embodiment of the hearing compensation method of this application. Based on the embodiment shown in FIG. 2 above, step S20 of determining acoustic compensation information according to the hearing loss information may include:
  • Step S21: read the impaired frequency information in the hearing loss information and the actual impairment information corresponding to the impaired frequency information;
  • In this embodiment, after obtaining the user's hearing loss information, the hearing compensation device reads, from the hearing loss information, the impaired frequency information of the user's hearing and the actual impairment information corresponding to that impaired frequency information.
  • The impaired frequency information is the set of frequency points at which the user's hearing is impaired; the actual impairment information is the set of actual hearing values corresponding to each impaired frequency point.
  • Step S22: compare the actual impairment information corresponding to the impaired frequency information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and compensation multiple information corresponding to the compensation frequency information.
  • In this embodiment, after obtaining the impaired frequency information and the corresponding actual impairment information, the hearing compensation device compares the actual impairment information with the normal hearing values, outputs the impaired frequency information as the compensation frequency information, takes the ratio of the normal hearing value to the actual impairment value as the compensation multiple information, and combines the compensation frequency information with the corresponding compensation multiple information to generate the acoustic compensation information.
  • The compensation frequency information is the set of frequency points at which the user needs hearing compensation; the compensation multiple information is the set of amplification multiples corresponding to each frequency point that needs compensation.
  • Through the above solution, this embodiment obtains the user's hearing loss information; reads the impaired frequency information in the hearing loss information and the corresponding actual impairment information; compares the actual impairment information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and corresponding compensation multiple information; when voice output information is received, compensates the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and plays the voice compensation information.
  • In this way, a hearing-impaired user can clearly hear the content of a voice call, and the user's hearing is effectively enhanced. An illustrative sketch of deriving the compensation information is given below.
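A minimal sketch of steps S21-S22 follows, assuming the hearing loss information is already available as (frequency, actual hearing value) pairs and that a single normal hearing value applies to every frequency point; the function name and data shapes are illustrative assumptions, not details taken from the patent.

```python
def build_acoustic_compensation(impaired_points, normal_hearing_value=1.0):
    """S21-S22: output the impaired frequencies as compensation frequencies and use
    the ratio (normal hearing value / actual hearing value) as the compensation multiple."""
    compensation_frequency_info = []
    compensation_multiple_info = {}
    for frequency_hz, actual_value in impaired_points:
        compensation_frequency_info.append(frequency_hz)
        compensation_multiple_info[frequency_hz] = normal_hearing_value / actual_value
    return compensation_frequency_info, compensation_multiple_info

# Example: hearing at 4 kHz and 8 kHz measured at 50% and 25% of the normal value.
freqs, multiples = build_acoustic_compensation([(4000, 0.5), (8000, 0.25)])
# freqs == [4000, 8000]; multiples == {4000: 2.0, 8000: 4.0}
```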
  • FIG. 4 is a schematic flowchart of a third embodiment of a hearing compensation method according to this application.
  • Based on the embodiment shown in FIG. 3 above, step S30 of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information may include:
  • Step S31: when voice output information is received, determine output frequency information according to the voice output information;
  • In this embodiment, when the hearing compensation device receives voice output information, it determines the current output frequency information of the voice output information according to the voice output information.
  • The output frequency information is the current frequency value of the voice output information.
  • Step S32: detect whether the output frequency information is the same as the compensation frequency information;
  • In this embodiment, after determining the output frequency information, the hearing compensation device detects whether the output frequency information has a corresponding frequency point in the set of compensation frequency information; here, the compensation frequency information is the set of frequency points at which the user requires hearing compensation.
  • After step S32 of detecting whether the output frequency information is the same as the compensation frequency information, the method may include:
  • Step a: if the output frequency information is different from the compensation frequency information, output the voice output information to generate voice compensation information.
  • In this embodiment, if the hearing compensation device detects that the output frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the voice output information and outputs it as-is to generate the voice compensation information.
  • Step S33: if the output frequency information is the same as the compensation frequency information, compensate the voice output information according to the hearing compensation algorithm and the compensation multiple information;
  • In this embodiment, if the hearing compensation device detects that the output frequency information has a corresponding frequency point in the set of compensation frequency information, it multiplies the voice output information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation.
  • Step S34: output the compensated voice output information to generate voice compensation information.
  • In this embodiment, after multiplying the voice output information by the compensation multiple information according to the hearing compensation algorithm, the hearing compensation device outputs the result, generating the voice compensation information.
  • Through the above solution, this embodiment obtains the user's hearing loss information; reads the impaired frequency information in the hearing loss information and the corresponding actual impairment information; compares the actual impairment information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and corresponding compensation multiple information; when voice output information is received, determines output frequency information according to the voice output information; detects whether the output frequency information is the same as the compensation frequency information; if so, compensates the voice output information according to the hearing compensation algorithm and the compensation multiple information; outputs the compensated voice output information to generate voice compensation information; and plays the voice compensation information.
  • In this way, a hearing-impaired user can clearly hear the content of a voice call, and the user's hearing is effectively enhanced. An illustrative sketch of this per-frequency matching and multiplication is given below.
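The patent does not specify how the output frequency information is derived from the received audio; the sketch below assumes a frame-by-frame FFT analysis so that each frequency component can be matched against the compensation frequencies (steps S31-S34). NumPy, the frame-based processing, and the tolerance_hz matching window are assumptions made only for illustration.

```python
import numpy as np

def compensate_frame(frame, sample_rate, compensation_multiples, tolerance_hz=100.0):
    """S31-S34: boost frequency components that match a compensation frequency
    by the corresponding multiple and return the compensated frame."""
    spectrum = np.fft.rfft(frame)                                    # S31: frequency content
    bin_freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    for comp_freq, multiple in compensation_multiples.items():
        matching_bins = np.abs(bin_freqs - comp_freq) <= tolerance_hz   # S32: same frequency?
        spectrum[matching_bins] *= multiple                          # S33: multiply by multiple
    return np.fft.irfft(spectrum, n=len(frame))                      # S34: compensated output

# Bins that match no compensation frequency pass through unchanged (step a).
```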
  • FIG. 5 is a schematic flowchart of a fourth embodiment of a hearing compensation method according to this application.
  • When the voice output information is call voice output information, step S30 of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information may further include:
  • Step S35: when call voice output information is received, compensate the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information.
  • In this embodiment, when the voice output information is call voice output information, after the hearing compensation device has obtained the acoustic compensation information and receives the call voice output information, it compensates the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information.
  • The call voice output information may be call voice information sent by a mobile terminal to the hearing compensation device in a 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenario.
  • Step S35 when the call voice output information is received, compensate the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information, which may include:
  • Step b1 when the call voice output information is received, determine the call output frequency information according to the call voice output information;
  • Step b2 detecting whether the call output frequency information is the same as the compensation frequency information
  • Step b3 if the call output frequency information is the same as the compensation frequency information, compensate the call voice output information according to the hearing compensation algorithm and the compensation multiple information;
  • Step b4 Output the compensated call voice output information to generate call voice compensation information.
  • In this embodiment, when the voice output information is call voice output information and the hearing compensation device receives it, the device determines the current call output frequency information according to the call voice output information; after determining the call output frequency information, the device detects whether the call output frequency information has a corresponding frequency point in the set of compensation frequency information; if it does, the device multiplies the call voice output information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation, then outputs the result, generating the call voice compensation information.
  • Step b2 after detecting whether the call output frequency information is the same as the compensation frequency information, may include:
  • Step b5 If the call output frequency information is different from the compensation frequency information, output the call voice output information to generate call voice compensation information.
  • In this embodiment, if the hearing compensation device detects that the call output frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the call voice output information and outputs it as-is to generate the call voice compensation information.
  • Step S40 playing the voice compensation information may include:
  • Step S41 Play the call voice compensation information.
  • In this embodiment, after generating the call voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
  • When the voice output information is music voice output information, step S30 of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information may further include:
  • Step c: when music voice output information is received, compensate the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information.
  • In this embodiment, when the voice output information is music voice output information, after the hearing compensation device has obtained the acoustic compensation information and receives the music voice output information, it compensates the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information.
  • The music voice output information is music the user wants to listen to; it may be high-frequency (4KHz-20KHz) music information or low-frequency (20Hz-4KHz) music information.
  • Step c When the music voice output information is received, compensating the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information may include:
  • Step d1 when the music voice output information is received, determine the music output frequency information according to the music voice output information;
  • Step d2 detecting whether the music output frequency information is the same as the compensation frequency information
  • Step d3 if the music output frequency information is the same as the compensation frequency information, compensate the music voice output information according to the hearing compensation algorithm and the compensation multiple information;
  • Step d4 output the compensated music voice output information to generate music voice compensation information.
  • In this embodiment, when the voice output information is music voice output information and the hearing compensation device receives it, the device determines the current music output frequency information according to the music voice output information; after determining the music output frequency information, the device detects whether the music output frequency information has a corresponding frequency point in the set of compensation frequency information; if it does, the device multiplies the music voice output information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation, then outputs the result, generating the music voice compensation information.
  • Step d2 after detecting whether the music output frequency information is the same as the compensation frequency information, may include:
  • Step d5 If the music output frequency information is different from the compensation frequency information, output the music voice output information to generate music voice compensation information.
  • In this embodiment, if the hearing compensation device detects that the music output frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the music voice output information and outputs it as-is to generate the music voice compensation information.
  • Step S40 playing the voice compensation information may also include:
  • Step e Play the music and voice compensation information.
  • In this embodiment, after generating the music voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
  • Through the above solution, this embodiment obtains the user's hearing loss information; determines acoustic compensation information according to the hearing loss information; when call voice output information is received, compensates the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information; and plays the call voice compensation information. A short routing sketch covering both call voice and music voice is given below.
  • In this way, the hearing compensation algorithm performs hearing compensation (voice enhancement) on call voice in 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios, providing a voice-enhancement capability that cannot be achieved in 2G and 3G narrowband calls, while in 2G and 3G narrowband call scenarios the user does not need hearing compensation (voice enhancement), because most people cannot hear high-frequency sounds clearly while low-frequency sounds can be heard clearly, and most people's hearing loss mainly concerns high-frequency sound information. A hearing-impaired user can thus clearly hear the content of a voice call, and the user's hearing is effectively enhanced.
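Steps S35 and step c apply the same per-frequency compensation to call voice and to music voice; the small dispatcher below illustrates that reuse. The stream labels and the reuse of compensate_frame from the earlier sketch are assumptions made for illustration.

```python
def compensate_stream(stream_kind, frame, sample_rate, compensation_multiples):
    """S35 / step c: call voice and music voice share the same compensation path;
    only the label of the generated compensation information differs."""
    labels = {"call": "call voice compensation information",
              "music": "music voice compensation information"}
    return labels[stream_kind], compensate_frame(frame, sample_rate, compensation_multiples)
```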
  • FIG. 6 is a schematic flowchart of the fifth embodiment of the hearing compensation method of this application. Based on the embodiment shown in FIG. 2 or FIG. 3, in order to make it easier for the user to hear the voices of other speakers in the environment, after step S20 of determining acoustic compensation information according to the hearing loss information, the method may include:
  • Step S50: acquire environmental voice information;
  • In this embodiment, after obtaining the acoustic compensation information, the hearing compensation device picks up sounds made by other people or by other objects in the environment, thereby acquiring environmental voice information.
  • The environmental voice information may be the sound of the user's surroundings; it may be high-frequency (4KHz-20KHz) sound information or low-frequency (20Hz-4KHz) sound information.
  • Step S30, of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information when voice output information is received to generate voice compensation information, may further include:
  • Step S36: compensate the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information.
  • In this embodiment, after obtaining the acoustic compensation information and the environmental voice information, the hearing compensation device compensates the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information.
  • Step S36 Compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information may include:
  • Step f1 Determine the environmental frequency information according to the environmental voice information
  • Step f2 detecting whether the environmental frequency information is the same as the compensation frequency information
  • Step f3 if the environmental frequency information is the same as the compensation frequency information, compensate the environmental voice information according to the hearing compensation algorithm and the compensation multiple information;
  • Step f4 output the compensated environmental voice information to generate environmental voice compensation information.
  • In this embodiment, after obtaining the acoustic compensation information and the environmental voice information, the hearing compensation device determines the current environmental frequency information according to the environmental voice information; after determining the environmental frequency information, the device detects whether it has a corresponding frequency point in the set of compensation frequency information; if it does, the device multiplies the environmental voice information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation, then outputs the result, generating the environmental voice compensation information.
  • Step f2 after detecting whether the environmental frequency information is the same as the compensation frequency information, may include:
  • Step f5 If the environmental frequency information is different from the compensation frequency information, output the environmental voice information to generate environmental voice compensation information.
  • In this embodiment, if the hearing compensation device detects that the environmental frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the environmental voice information and outputs it as-is to generate the environmental voice compensation information; here, the environmental frequency information is the current frequency value of the environmental voice information.
  • Step S40 playing the voice compensation information may also include:
  • Step S42 Play the environmental voice compensation information.
  • In this embodiment, after generating the environmental voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
  • Through the above solution, this embodiment obtains the user's hearing loss information; determines acoustic compensation information according to the hearing loss information; acquires environmental voice information; compensates the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information; and plays the environmental voice compensation information.
  • In this way, a hearing-impaired user can clearly hear the content of a voice call, and the user's hearing is effectively enhanced. A minimal sketch of this ambient-sound path is given below.
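The sketch below illustrates the ambient path (S50 capture, S36 compensation, S42 playback), assuming a microphone callback delivers audio frames and reusing compensate_frame and play from the earlier sketches; the queue, callback, and function names are illustrative assumptions rather than details from the patent.

```python
import queue

mic_frames = queue.Queue()        # S50: frames captured from the environment microphone

def on_mic_frame(frame):
    """Called by the (assumed) audio-capture driver for every captured ambient frame."""
    mic_frames.put(frame)

def run_ambient_compensation(sample_rate, compensation_multiples, play):
    """S36 + S42: compensate each ambient frame and hand it to playback; runs until stopped."""
    while True:
        frame = mic_frames.get()
        play(compensate_frame(frame, sample_rate, compensation_multiples))
```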
  • The present application also provides a hearing compensation device.
  • The hearing compensation device of the present application includes: a memory, a processor, and a hearing compensation program stored in the memory and executable on the processor; when the hearing compensation program is executed by the processor, the steps of the hearing compensation method described above are implemented.
  • The present application also provides a computer-readable storage medium.
  • The computer-readable storage medium of the present application stores a hearing compensation program, and when the hearing compensation program is executed by a processor, the steps of the hearing compensation method described above are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)

Abstract

A hearing compensation method, a hearing compensation device, and a computer-readable storage medium, which enable a hearing-impaired user to hear the content of a voice call clearly during the call and thereby enhance the user's hearing. The hearing compensation method includes: obtaining the user's hearing loss information (S10); determining acoustic compensation information according to the hearing loss information (S20); when voice output information is received, compensating the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information (S30); and playing the voice compensation information (S40).

Description

Hearing compensation method, device, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 20, 2019, with application number 201911332886.9 and entitled "Hearing compensation method, device and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of communication technology, and in particular to hearing compensation methods, devices, and computer-readable storage media.
Background
At present, the proportion of people with varying degrees of hearing loss in China is very high. Even among people with normal hearing, each person's hearing characteristics differ, and the frequency bands in which hearing is impaired differ from person to person. Ordinary earphones combined with mobile phones or other playback devices can make calls and play music, but they cannot process sound according to a user's individual hearing characteristics and cannot boost hearing in the frequency bands where the user's hearing is impaired. Users cannot adjust the playback volume separately for their hearing-impaired frequency bands, so they cannot hear the sound in those bands clearly, which affects their reception of the sound information played by the playback device.
Technical Solution
The main purpose of this application is to propose a hearing compensation method, device, and computer-readable storage medium, aiming to solve the technical problem that hearing-impaired users cannot hear the content of a voice call clearly.
To achieve the above objective, this application provides a hearing compensation method, which includes the following steps:
obtaining the user's hearing loss information;
determining acoustic compensation information according to the hearing loss information;
when voice output information is received, compensating the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information;
playing the voice compensation information.
In addition, to achieve the above objective, this application also provides a hearing compensation device, which includes: a memory, a processor, and a hearing compensation program stored in the memory and executable on the processor; when the hearing compensation program is executed by the processor, the steps of the hearing compensation method described above are implemented.
In addition, to achieve the above objective, this application also provides a computer-readable storage medium on which a hearing compensation program is stored; when the hearing compensation program is executed by a processor, the steps of the hearing compensation method described above are implemented.
This application provides a hearing compensation method, device, and computer-readable storage medium, which obtain the user's hearing loss information; determine acoustic compensation information according to the hearing loss information; when voice output information is received, compensate the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and play the voice compensation information. In this way, this application enables a hearing-impaired user to hear the content of a voice call clearly during the call and thereby enhances the user's hearing.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the terminal structure of the hardware operating environment involved in the embodiments of this application;
FIG. 2 is a schematic flowchart of the first embodiment of the hearing compensation method of this application;
FIG. 3 is a schematic flowchart of the second embodiment of the hearing compensation method of this application;
FIG. 4 is a schematic flowchart of the third embodiment of the hearing compensation method of this application;
FIG. 5 is a schematic flowchart of the fourth embodiment of the hearing compensation method of this application;
FIG. 6 is a schematic flowchart of the fifth embodiment of the hearing compensation method of this application.
The realization of the objectives, functional features, and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the Invention
It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
The main solution of the embodiments of this application is: obtain the user's hearing loss information; determine acoustic compensation information according to the hearing loss information; when voice output information is received, compensate the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and play the voice compensation information.
At present, the proportion of people with varying degrees of hearing loss in China is very high. Even among people with normal hearing, each person's hearing characteristics differ, and the frequency bands in which hearing is impaired differ from person to person. Ordinary earphones combined with mobile phones or other playback devices can make calls and play music, but they cannot process sound according to a user's individual hearing characteristics and cannot boost hearing in the frequency bands where the user's hearing is impaired. Users cannot adjust the playback volume separately for their hearing-impaired frequency bands, so they cannot hear the sound in those bands clearly, which affects their reception of the sound information played by the playback device.
This application solves the technical problem that hearing-impaired users cannot hear the content of a voice call clearly during a voice call.
As shown in FIG. 1, FIG. 1 is a schematic diagram of the terminal structure of the hardware operating environment involved in the embodiments of this application.
The terminal in the embodiments of this application may be a PC, or a mobile terminal device with a display function such as a smart phone or a tablet computer.
As shown in FIG. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory) such as a magnetic disk memory. The memory 1005 may optionally also be a storage device independent of the aforementioned processor 1001.
Preferably, the terminal may also include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and so on. The sensors include, for example, light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display screen according to the ambient light, and the proximity sensor can turn off the display screen and/or backlight when the mobile terminal is moved to the ear. As a kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in various directions (usually three axes) and can detect the magnitude and direction of gravity when stationary; it can be used for applications that recognize the posture of the mobile terminal (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition functions (such as a pedometer or tap detection). Of course, the mobile terminal may also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described in detail here.
Those skilled in the art will understand that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.
As shown in FIG. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a hearing compensation program.
In the terminal shown in FIG. 1, the network interface 1004 is mainly used to connect to a back-end server and exchange data with it; the user interface 1003 is mainly used to connect to a client (user side) and exchange data with it; and the processor 1001 can be used to call the hearing compensation program stored in the memory 1005 and perform the following operations:
obtain the user's hearing loss information;
determine acoustic compensation information according to the hearing loss information;
when voice output information is received, compensate the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information;
play the voice compensation information.
Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations:
read the impaired frequency information in the hearing loss information and the actual impairment information corresponding to the impaired frequency information;
compare the actual impairment information corresponding to the impaired frequency information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and compensation multiple information corresponding to the compensation frequency information.
Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations:
when voice output information is received, determine output frequency information according to the voice output information;
detect whether the output frequency information is the same as the compensation frequency information;
if the output frequency information is the same as the compensation frequency information, compensate the voice output information according to the hearing compensation algorithm and the compensation multiple information;
output the compensated voice output information to generate voice compensation information.
Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operation:
if the output frequency information is different from the compensation frequency information, output the voice output information to generate voice compensation information.
Further, when the voice output information is call voice output information, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations:
when call voice output information is received, compensate the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information;
the step of playing the voice compensation information includes:
playing the call voice compensation information.
Further, when the voice output information is music voice output information, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations:
when music voice output information is received, compensate the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information;
the step of playing the voice compensation information includes:
playing the music voice compensation information.
Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations:
acquire environmental voice information;
the step of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information when voice output information is received, to generate voice compensation information, further includes:
compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information;
the step of playing the voice compensation information includes:
playing the environmental voice compensation information.
Further, the processor 1001 may call the hearing compensation program stored in the memory 1005 and also perform the following operations:
determine music output frequency information according to the environmental voice information;
detect whether the music output frequency information is the same as the compensation frequency information;
if the music output frequency information is the same as the compensation frequency information, compensate the environmental voice information according to the hearing compensation algorithm and the compensation multiple information;
output the compensated environmental voice information to generate music voice compensation information.
Based on the above hardware structure, embodiments of the hearing compensation method of this application are proposed.
The hearing compensation method of this application.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of the first embodiment of the hearing compensation method of this application.
In this embodiment of the application, the hearing compensation method is applied to a hearing compensation device, and the method includes:
Step S10: obtain the user's hearing loss information;
In this embodiment, before the hearing compensation device compensates the user's hearing, it needs to first obtain the state of the user's hearing loss, so the hearing compensation device obtains the user's hearing loss information. The hearing loss information may be the difference between the user's hearing at a certain frequency and a normal person's hearing at that frequency. The hearing loss information may be obtained by the hearing compensation device measuring the user's hearing; it may also be obtained by another device measuring the user's hearing and then sent to the hearing compensation device; or the hearing compensation device may send a hearing-loss-information acquisition instruction to another detection device, which, after receiving the instruction, detects or looks up the user's hearing loss information and sends it to the hearing compensation device, and the hearing compensation device receives it. The hearing compensation device may be a speaker device, a Bluetooth headset, a mobile terminal, a fixed terminal, a tablet computer, or the like.
Step S20: determine acoustic compensation information according to the hearing loss information;
In this embodiment, after obtaining the user's hearing loss information, the hearing compensation device analyzes the hearing loss information to determine the acoustic compensation information, and saves the acoustic compensation information in the hearing compensation device. The acoustic compensation information is a parameter or function used to compensate for the user's hearing loss.
Step S30: when voice output information is received, compensate the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information;
In this embodiment, after the hearing compensation device obtains the acoustic compensation information, when it receives voice output information it compensates the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information. The voice output information may be call voice information sent by a mobile terminal to the hearing compensation device in a 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenario. The hearing compensation algorithm is a calculation method used to compensate for the user's hearing loss; it can be applied to EVS (Enhanced Voice Services) and to 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios. The algorithm mainly addresses high-frequency (4KHz-20KHz) hearing loss in wideband, ultra-wideband, or full-band voice call scenarios, that is, cases where the user cannot hear high-frequency (4KHz-20KHz) sound information clearly. The voice compensation information is voice information that, after processing by the hearing compensation device and playback, the user can hear clearly despite the impairment; it may be the voice information output in 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios. The voice compensation information mainly compensates high-frequency (4KHz-20KHz) sound information and compensates low-frequency (20Hz-4KHz) sound information to a lesser extent.
Step S40: play the voice compensation information.
In this embodiment, after generating the voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
Through the above solution, this embodiment obtains the user's hearing loss information; determines acoustic compensation information according to the hearing loss information; when voice output information is received, compensates the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and plays the voice compensation information. In this way, the hearing compensation algorithm performs hearing compensation (voice enhancement) on call voice in 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios, providing voice enhancement that cannot be achieved in 2G and 3G narrowband calls (because 2G and 3G narrowband calls do not carry high-frequency sound information, so as to reduce the amount of transmitted data), while in 2G and 3G narrowband call scenarios the user does not need hearing compensation (voice enhancement) (because most people cannot hear high-frequency sounds clearly while low-frequency sounds can be heard clearly, and most people's hearing loss mainly concerns high-frequency sound information). A hearing-impaired user can thus clearly hear the content of a voice call, and the user's hearing is enhanced.
Further, referring to FIG. 3, FIG. 3 is a schematic flowchart of the second embodiment of the hearing compensation method of this application. Based on the embodiment shown in FIG. 2 above, step S20 of determining acoustic compensation information according to the hearing loss information may include:
Step S21: read the impaired frequency information in the hearing loss information and the actual impairment information corresponding to the impaired frequency information;
In this embodiment, after obtaining the user's hearing loss information, the hearing compensation device reads, from the hearing loss information, the impaired frequency information of the user's hearing and the actual impairment information corresponding to that impaired frequency information. The impaired frequency information is the set of frequency points at which the user's hearing is impaired; the actual impairment information is the set of actual hearing values corresponding to each impaired frequency point.
Step S22: compare the actual impairment information corresponding to the impaired frequency information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and compensation multiple information corresponding to the compensation frequency information.
In this embodiment, after obtaining the impaired frequency information and the corresponding actual impairment information, the hearing compensation device compares the actual impairment information with the normal hearing values, outputs the impaired frequency information as the compensation frequency information, takes the ratio of the normal hearing value to the actual impairment value as the compensation multiple information, and combines the compensation frequency information with the corresponding compensation multiple information to generate the acoustic compensation information. The compensation frequency information is the set of frequency points at which the user needs hearing compensation; the compensation multiple information is the set of amplification multiples corresponding to each frequency point needing compensation.
Through the above solution, this embodiment obtains the user's hearing loss information; reads the impaired frequency information in the hearing loss information and the corresponding actual impairment information; compares the actual impairment information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and corresponding compensation multiple information; when voice output information is received, compensates the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and plays the voice compensation information. A hearing-impaired user can thus clearly hear the content of a voice call, and the user's hearing is enhanced.
Further, referring to FIG. 4, FIG. 4 is a schematic flowchart of the third embodiment of the hearing compensation method of this application. Based on the embodiment shown in FIG. 3 above, step S30 of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information when voice output information is received, to generate voice compensation information, may include:
Step S31: when voice output information is received, determine output frequency information according to the voice output information;
In this embodiment, when the hearing compensation device receives voice output information, it determines the current output frequency information of the voice output information according to the voice output information. The output frequency information is the current frequency value of the voice output information.
Step S32: detect whether the output frequency information is the same as the compensation frequency information;
In this embodiment, after determining the output frequency information, the hearing compensation device detects whether the output frequency information has a corresponding frequency point in the set of compensation frequency information; here, the compensation frequency information is the set of frequency points at which the user needs hearing compensation.
After step S32 of detecting whether the output frequency information is the same as the compensation frequency information, the method may include:
Step a: if the output frequency information is different from the compensation frequency information, output the voice output information to generate voice compensation information.
In this embodiment, if the hearing compensation device detects that the output frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the voice output information and outputs the voice output information as-is to generate the voice compensation information.
Step S33: if the output frequency information is the same as the compensation frequency information, compensate the voice output information according to the hearing compensation algorithm and the compensation multiple information;
In this embodiment, if the hearing compensation device detects that the output frequency information has a corresponding frequency point in the set of compensation frequency information, it multiplies the voice output information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation.
Step S34: output the compensated voice output information to generate voice compensation information.
In this embodiment, after multiplying the voice output information by the compensation multiple information according to the hearing compensation algorithm, the hearing compensation device outputs the result, generating the voice compensation information.
Through the above solution, this embodiment obtains the user's hearing loss information; reads the impaired frequency information in the hearing loss information and the corresponding actual impairment information; compares the actual impairment information with normal hearing values to obtain acoustic compensation information, where the acoustic compensation information includes compensation frequency information and corresponding compensation multiple information; when voice output information is received, determines output frequency information according to the voice output information; detects whether the output frequency information is the same as the compensation frequency information; if so, compensates the voice output information according to the hearing compensation algorithm and the compensation multiple information; outputs the compensated voice output information to generate voice compensation information; and plays the voice compensation information. A hearing-impaired user can thus clearly hear the content of a voice call, and the user's hearing is enhanced.
Further, referring to FIG. 5, FIG. 5 is a schematic flowchart of the fourth embodiment of the hearing compensation method of this application. Based on the embodiments shown in FIG. 2, FIG. 3, or FIG. 4 above, when the voice output information is call voice output information, step S30 of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information when voice output information is received, to generate voice compensation information, may further include:
Step S35: when call voice output information is received, compensate the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information.
In this embodiment, when the voice output information is call voice output information, after the hearing compensation device has obtained the acoustic compensation information and receives the call voice output information, it compensates the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information. The call voice output information may be the call voice information sent by the mobile terminal to the hearing compensation device in a 4G wideband call, 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenario.
Step S35, of compensating the call voice output information according to the hearing compensation algorithm and the acoustic compensation information when call voice output information is received, to generate call voice compensation information, may include:
Step b1: when call voice output information is received, determine call output frequency information according to the call voice output information;
Step b2: detect whether the call output frequency information is the same as the compensation frequency information;
Step b3: if the call output frequency information is the same as the compensation frequency information, compensate the call voice output information according to the hearing compensation algorithm and the compensation multiple information;
Step b4: output the compensated call voice output information to generate call voice compensation information.
In this embodiment, when the voice output information is call voice output information and the hearing compensation device receives the voice output information, the device determines the current call output frequency information of the call voice output information according to the call voice output information; after determining the call output frequency information, the device detects whether the call output frequency information has a corresponding frequency point in the set of compensation frequency information; if it does, the device multiplies the call voice output information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation, then outputs the result, generating the call voice compensation information.
After step b2 of detecting whether the call output frequency information is the same as the compensation frequency information, the method may include:
Step b5: if the call output frequency information is different from the compensation frequency information, output the call voice output information to generate call voice compensation information.
In this embodiment, if the hearing compensation device detects that the call output frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the call voice output information and outputs the call voice output information as-is to generate the call voice compensation information.
Step S40, of playing the voice compensation information, may include:
Step S41: play the call voice compensation information.
In this embodiment, after generating the call voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
When the voice output information is music voice output information, step S30 of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information when voice output information is received, to generate voice compensation information, may further include:
Step c: when music voice output information is received, compensate the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information.
In this embodiment, when the voice output information is music voice output information, after the hearing compensation device has obtained the acoustic compensation information and receives the music voice output information, it compensates the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information. The music voice output information is music the user wants to listen to; it may be high-frequency (4KHz-20KHz) music information or low-frequency (20Hz-4KHz) music information.
Step c, of compensating the music voice output information according to the hearing compensation algorithm and the acoustic compensation information when music voice output information is received, to generate music voice compensation information, may include:
Step d1: when music voice output information is received, determine music output frequency information according to the music voice output information;
Step d2: detect whether the music output frequency information is the same as the compensation frequency information;
Step d3: if the music output frequency information is the same as the compensation frequency information, compensate the music voice output information according to the hearing compensation algorithm and the compensation multiple information;
Step d4: output the compensated music voice output information to generate music voice compensation information.
In this embodiment, when the voice output information is music voice output information and the hearing compensation device receives the voice output information, the device determines the current music output frequency information of the music voice output information according to the music voice output information; after determining the music output frequency information, the device detects whether the music output frequency information has a corresponding frequency point in the set of compensation frequency information; if it does, the device multiplies the music voice output information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation, then outputs the result, generating the music voice compensation information.
After step d2 of detecting whether the music output frequency information is the same as the compensation frequency information, the method may include:
Step d5: if the music output frequency information is different from the compensation frequency information, output the music voice output information to generate music voice compensation information.
In this embodiment, if the hearing compensation device detects that the music output frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the music voice output information and outputs the music voice output information as-is to generate the music voice compensation information.
Step S40, of playing the voice compensation information, may further include:
Step e: play the music voice compensation information.
In this embodiment, after generating the music voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
Through the above solution, this embodiment obtains the user's hearing loss information; determines acoustic compensation information according to the hearing loss information; when call voice output information is received, compensates the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information; and plays the call voice compensation information. In this way, the hearing compensation algorithm performs hearing compensation (voice enhancement) on call voice in 5G ultra-wideband call, IP-network high-definition telephony, or network video telephony scenarios, providing voice enhancement that cannot be achieved in 2G and 3G narrowband calls (because 2G and 3G narrowband calls do not carry high-frequency sound information, so as to reduce the amount of transmitted data), while in 2G and 3G narrowband call scenarios the user does not need hearing compensation (voice enhancement) (because most people cannot hear high-frequency sounds clearly while low-frequency sounds can be heard clearly, and most people's hearing loss mainly concerns high-frequency sound information). A hearing-impaired user can thus clearly hear the content of a voice call, and the user's hearing is enhanced.
Further, referring to FIG. 6, FIG. 6 is a schematic flowchart of the fifth embodiment of the hearing compensation method of this application. Based on the embodiment shown in FIG. 2 or FIG. 3, in order to make it easier for the user to hear the voices of other speakers in the environment, after step S20 of determining acoustic compensation information according to the hearing loss information, the method may include:
Step S50: acquire environmental voice information;
In this embodiment, after obtaining the acoustic compensation information, the hearing compensation device picks up sounds made by other people or by other objects in the environment, thereby acquiring environmental voice information. The environmental voice information may be the sound of the user's surroundings; it may be high-frequency (4KHz-20KHz) sound information or low-frequency (20Hz-4KHz) sound information.
Step S30, of compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information when voice output information is received, to generate voice compensation information, may further include:
Step S36: compensate the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information.
In this embodiment, after obtaining the acoustic compensation information and the environmental voice information, the hearing compensation device compensates the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information.
Step S36, of compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information, may include:
Step f1: determine environmental frequency information according to the environmental voice information;
Step f2: detect whether the environmental frequency information is the same as the compensation frequency information;
Step f3: if the environmental frequency information is the same as the compensation frequency information, compensate the environmental voice information according to the hearing compensation algorithm and the compensation multiple information;
Step f4: output the compensated environmental voice information to generate environmental voice compensation information.
In this embodiment, after obtaining the acoustic compensation information and the environmental voice information, the hearing compensation device determines the current environmental frequency information of the environmental voice information according to the environmental voice information; after determining the environmental frequency information, the device detects whether the environmental frequency information has a corresponding frequency point in the set of compensation frequency information; if it does, the device multiplies the environmental voice information by the compensation multiple information according to the hearing compensation algorithm to perform the compensation, then outputs the result, generating the environmental voice compensation information.
After step f2 of detecting whether the environmental frequency information is the same as the compensation frequency information, the method may include:
Step f5: if the environmental frequency information is different from the compensation frequency information, output the environmental voice information to generate environmental voice compensation information.
In this embodiment, if the hearing compensation device detects that the environmental frequency information has no corresponding frequency point in the set of compensation frequency information, it does not compensate the environmental voice information and outputs the environmental voice information as-is to generate the environmental voice compensation information; here, the environmental frequency information is the current frequency value of the environmental voice information.
Step S40, of playing the voice compensation information, may further include:
Step S42: play the environmental voice compensation information.
In this embodiment, after generating the environmental voice compensation information, the hearing compensation device plays it through the speaker module, and the user hears the compensated voice information.
Through the above solution, this embodiment obtains the user's hearing loss information; determines acoustic compensation information according to the hearing loss information; acquires environmental voice information; compensates the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information; and plays the environmental voice compensation information. A hearing-impaired user can thus clearly hear the content of a voice call, and the user's hearing is enhanced.
This application further provides a hearing compensation apparatus.
The hearing compensation apparatus of this application includes a memory, a processor, and a hearing compensation program stored in the memory and executable on the processor, wherein the hearing compensation program, when executed by the processor, implements the steps of the hearing compensation method described above.
For the method implemented when the hearing compensation program running on the processor is executed, reference may be made to the embodiments of the hearing compensation method of this application, which are not repeated here.
This application further provides a computer-readable storage medium.
The computer-readable storage medium of this application stores a hearing compensation program, and the hearing compensation program, when executed by a processor, implements the steps of the hearing compensation method described above.
For the method implemented when the hearing compensation program running on the processor is executed, reference may be made to the embodiments of the hearing compensation method of this application, which are not repeated here.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of this application are for description only and do not imply any ranking of the embodiments.
From the description of the above implementations, a person skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software together with a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the technical solution of this application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of this application.
The above are only preferred embodiments of this application and do not thereby limit the patent scope of this application. Any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application thereof in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A hearing compensation method, wherein the method comprises the following steps:
    obtaining hearing loss information of a user;
    determining acoustic compensation information according to the hearing loss information;
    upon receiving voice output information, compensating the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and
    playing the voice compensation information.
  2. The hearing compensation method according to claim 1, wherein the step of determining acoustic compensation information according to the hearing loss information comprises:
    reading damaged frequency information in the hearing loss information and actual loss information corresponding to the damaged frequency information; and
    comparing the actual loss information corresponding to the damaged frequency information with a normal hearing value to obtain the acoustic compensation information, wherein the acoustic compensation information comprises compensation frequency information and compensation multiplier information corresponding to the compensation frequency information.
  3. The hearing compensation method according to claim 2, wherein the step of, upon receiving voice output information, compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information comprises:
    upon receiving voice output information, determining output frequency information according to the voice output information;
    detecting whether the output frequency information is the same as the compensation frequency information;
    if the output frequency information is the same as the compensation frequency information, compensating the voice output information according to the hearing compensation algorithm and the compensation multiplier information; and
    outputting the compensated voice output information to generate the voice compensation information.
  4. The hearing compensation method according to claim 3, wherein after the step of detecting whether the output frequency information is the same as the compensation frequency information, the method comprises:
    if the output frequency information is not the same as the compensation frequency information, outputting the voice output information to generate the voice compensation information.
  5. The hearing compensation method according to claim 1, wherein when the voice output information is call voice output information, the step of, upon receiving voice output information, compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information further comprises:
    upon receiving call voice output information, compensating the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information; and
    the step of playing the voice compensation information comprises:
    playing the call voice compensation information.
  6. The hearing compensation method according to claim 1, wherein when the voice output information is music voice output information, the step of, upon receiving voice output information, compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information further comprises:
    upon receiving music voice output information, compensating the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information; and
    the step of playing the voice compensation information comprises:
    playing the music voice compensation information.
  7. The hearing compensation method according to claim 1, wherein after the step of determining acoustic compensation information according to the hearing loss information, the method comprises:
    obtaining environmental voice information;
    the step of, upon receiving voice output information, compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information further comprises:
    compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information; and
    the step of playing the voice compensation information comprises:
    playing the environmental voice compensation information.
  8. The hearing compensation method according to claim 7, wherein the step of compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information comprises:
    determining environmental frequency information according to the environmental voice information;
    detecting whether the environmental frequency information is the same as the compensation frequency information;
    if the environmental frequency information is the same as the compensation frequency information, compensating the environmental voice information according to the hearing compensation algorithm and the compensation multiplier information; and
    outputting the compensated environmental voice information to generate the environmental voice compensation information.
  9. A hearing compensation apparatus, wherein the hearing compensation apparatus comprises: a memory, a processor, and a hearing compensation program stored in the memory and executable on the processor, wherein the hearing compensation program, when executed by the processor, implements the following steps:
    obtaining hearing loss information of a user;
    determining acoustic compensation information according to the hearing loss information;
    upon receiving voice output information, compensating the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and
    playing the voice compensation information.
  10. The hearing compensation apparatus according to claim 9, wherein the hearing compensation program, when executed by the processor, implements the following steps:
    reading damaged frequency information in the hearing loss information and actual loss information corresponding to the damaged frequency information; and
    comparing the actual loss information corresponding to the damaged frequency information with a normal hearing value to obtain the acoustic compensation information, wherein the acoustic compensation information comprises compensation frequency information and compensation multiplier information corresponding to the compensation frequency information.
  11. The hearing compensation apparatus according to claim 10, wherein the hearing compensation program, when executed by the processor, implements the following steps:
    upon receiving voice output information, determining output frequency information according to the voice output information;
    detecting whether the output frequency information is the same as the compensation frequency information;
    if the output frequency information is the same as the compensation frequency information, compensating the voice output information according to the hearing compensation algorithm and the compensation multiplier information; and
    outputting the compensated voice output information to generate the voice compensation information.
  12. The hearing compensation apparatus according to claim 11, wherein the hearing compensation program, when executed by the processor, implements the following step:
    if the output frequency information is not the same as the compensation frequency information, outputting the voice output information to generate the voice compensation information.
  13. The hearing compensation apparatus according to claim 9, wherein when the voice output information is call voice output information, the hearing compensation program, when executed by the processor, implements the following steps:
    upon receiving call voice output information, compensating the call voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate call voice compensation information; and
    the step of playing the voice compensation information comprises:
    playing the call voice compensation information.
  14. The hearing compensation apparatus according to claim 9, wherein when the voice output information is music voice output information, the hearing compensation program, when executed by the processor, implements the following steps:
    upon receiving music voice output information, compensating the music voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate music voice compensation information; and
    the step of playing the voice compensation information comprises:
    playing the music voice compensation information.
  15. The hearing compensation apparatus according to claim 9, wherein the hearing compensation program, when executed by the processor, implements the following steps:
    obtaining environmental voice information;
    the step of, upon receiving voice output information, compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information further comprises:
    compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information; and
    the step of playing the voice compensation information comprises:
    playing the environmental voice compensation information.
  16. The hearing compensation apparatus according to claim 15, wherein the hearing compensation program, when executed by the processor, implements the following steps:
    determining environmental frequency information according to the environmental voice information;
    detecting whether the environmental frequency information is the same as the compensation frequency information;
    if the environmental frequency information is the same as the compensation frequency information, compensating the environmental voice information according to the hearing compensation algorithm and the compensation multiplier information; and
    outputting the compensated environmental voice information to generate the environmental voice compensation information.
  17. A computer-readable storage medium, wherein the computer-readable storage medium stores a hearing compensation program, and the hearing compensation program, when executed by a processor, implements the following steps:
    obtaining hearing loss information of a user;
    determining acoustic compensation information according to the hearing loss information;
    upon receiving voice output information, compensating the voice output information according to a hearing compensation algorithm and the acoustic compensation information to generate voice compensation information; and
    playing the voice compensation information.
  18. The computer-readable storage medium according to claim 17, wherein the hearing compensation program, when executed by a processor, implements the following steps:
    reading damaged frequency information in the hearing loss information and actual loss information corresponding to the damaged frequency information; and
    comparing the actual loss information corresponding to the damaged frequency information with a normal hearing value to obtain the acoustic compensation information, wherein the acoustic compensation information comprises compensation frequency information and compensation multiplier information corresponding to the compensation frequency information.
  19. The computer-readable storage medium according to claim 17, wherein the hearing compensation program, when executed by a processor, implements the following steps:
    obtaining environmental voice information;
    the step of, upon receiving voice output information, compensating the voice output information according to the hearing compensation algorithm and the acoustic compensation information to generate voice compensation information further comprises:
    compensating the environmental voice information according to the hearing compensation algorithm and the acoustic compensation information to generate environmental voice compensation information; and
    the step of playing the voice compensation information comprises:
    playing the environmental voice compensation information.
  20. The computer-readable storage medium according to claim 19, wherein the hearing compensation program, when executed by a processor, implements the following steps:
    determining environmental frequency information according to the environmental voice information;
    detecting whether the environmental frequency information is the same as the compensation frequency information;
    if the environmental frequency information is the same as the compensation frequency information, compensating the environmental voice information according to the hearing compensation algorithm and the compensation multiplier information; and
    outputting the compensated environmental voice information to generate the environmental voice compensation information.
PCT/CN2019/128044 2019-12-20 2019-12-24 听力补偿方法、装置及计算机可读存储介质 WO2021120247A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911332886.9 2019-12-20
CN201911332886.9A CN111050261A (zh) 2019-12-20 2019-12-20 听力补偿方法、装置及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021120247A1 true WO2021120247A1 (zh) 2021-06-24

Family

ID=70238429

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/128044 WO2021120247A1 (zh) 2019-12-20 2019-12-24 听力补偿方法、装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111050261A (zh)
WO (1) WO2021120247A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114501281B (zh) * 2022-01-24 2024-03-12 深圳市昂思科技有限公司 声音调整方法、装置、电子设备和计算机可读介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100020988A1 (en) * 2008-07-24 2010-01-28 Mcleod Malcolm N Individual audio receiver programmer
CN102823276A (zh) * 2010-02-24 2012-12-12 奥迪伦特控股有限公司 助听仪器
CN105531764A (zh) * 2013-05-31 2016-04-27 A·Y·布莱帝希恩 用于在电话系统和移动电话装置中补偿听力损失的方法
CN105933838A (zh) * 2015-02-27 2016-09-07 奥迪康有限公司 使听力装置适应用户耳朵的方法及听力装置
CN208806943U (zh) * 2018-09-12 2019-04-30 深圳市华胜德塑胶电线有限公司 一种降噪的头戴式耳机
CN110213707A (zh) * 2019-04-23 2019-09-06 广东思派康电子科技有限公司 耳机及其助听方法、计算机可读存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7564979B2 (en) * 2005-01-08 2009-07-21 Robert Swartz Listener specific audio reproduction system
US20080008328A1 (en) * 2006-07-06 2008-01-10 Sony Ericsson Mobile Communications Ab Audio processing in communication terminals
KR20100060550A (ko) * 2008-11-27 2010-06-07 삼성전자주식회사 청각 보정 단말기
US8369549B2 (en) * 2010-03-23 2013-02-05 Audiotoniq, Inc. Hearing aid system adapted to selectively amplify audio signals
US8891777B2 (en) * 2011-12-30 2014-11-18 Gn Resound A/S Hearing aid with signal enhancement
CN102625220B (zh) * 2012-03-22 2014-05-07 清华大学 一种确定助听设备听力补偿增益的方法
WO2014108080A1 (en) * 2013-01-09 2014-07-17 Ace Communications Limited Method and system for self-managed sound enhancement
CN104144374B (zh) * 2013-05-06 2018-03-06 展讯通信(上海)有限公司 基于移动设备的辅助听力方法及系统
US9832562B2 (en) * 2013-11-07 2017-11-28 Gn Hearing A/S Hearing aid with probabilistic hearing loss compensation
CN105050014A (zh) * 2015-06-01 2015-11-11 邹采荣 一种基于智能手机的助听装置及实现方法
CN105681994A (zh) * 2016-03-07 2016-06-15 佛山博智医疗科技有限公司 听力矫正装置的分频调控方法
CN107911528A (zh) * 2017-12-15 2018-04-13 刘方辉 一种基于智能手机的听力补偿系统及其自助验配方法
CN110493695A (zh) * 2018-05-15 2019-11-22 群腾整合科技股份有限公司 一种音频补偿系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100020988A1 (en) * 2008-07-24 2010-01-28 Mcleod Malcolm N Individual audio receiver programmer
CN102823276A (zh) * 2010-02-24 2012-12-12 奥迪伦特控股有限公司 助听仪器
CN105531764A (zh) * 2013-05-31 2016-04-27 A·Y·布莱帝希恩 用于在电话系统和移动电话装置中补偿听力损失的方法
CN105933838A (zh) * 2015-02-27 2016-09-07 奥迪康有限公司 使听力装置适应用户耳朵的方法及听力装置
CN208806943U (zh) * 2018-09-12 2019-04-30 深圳市华胜德塑胶电线有限公司 一种降噪的头戴式耳机
CN110213707A (zh) * 2019-04-23 2019-09-06 广东思派康电子科技有限公司 耳机及其助听方法、计算机可读存储介质

Also Published As

Publication number Publication date
CN111050261A (zh) 2020-04-21

Similar Documents

Publication Publication Date Title
CN107231473B (zh) 一种音频输出调控方法、设备及计算机可读存储介质
US10834503B2 (en) Recording method, recording play method, apparatuses, and terminals
CN107256139A (zh) 音频音量的调整方法、终端及计算机可读存储介质
AU2013211541B2 (en) Mobile apparatus and control method thereof
WO2021042761A1 (zh) 音频播放控制方法、智能手机、装置及可读存储介质
KR102226817B1 (ko) 콘텐츠 재생 방법 및 그 방법을 처리하는 전자 장치
JP2020109968A (ja) ユーザ固有音声情報及びハードウェア固有音声情報に基づくカスタマイズされた音声処理
US9053710B1 (en) Audio content presentation using a presentation profile in a content header
WO2021203906A1 (zh) 自动音量调整方法、装置、介质和设备
WO2019237667A1 (zh) 播放音频数据的方法和装置
WO2023070792A1 (zh) 通话式门铃的音量均衡方法、设备和可读存储介质
WO2022262410A1 (zh) 录音方法和装置
WO2021098698A1 (zh) 音频播放方法及终端设备
CN116347320B (zh) 音频播放方法及电子设备
US11863950B2 (en) Dynamic rendering device metadata-informed audio enhancement system
US20240244371A1 (en) Smart device and control method therefor, computer readable storage medium
KR101977329B1 (ko) 음성 신호 출력 제어 방법 및 장치
WO2021120247A1 (zh) 听力补偿方法、装置及计算机可读存储介质
WO2021127842A1 (zh) 均衡器设置方法、装置、设备及计算机可读存储介质
CN113689890B (zh) 多声道信号的转换方法、装置及存储介质
CN106293607B (zh) 自动切换音频输出模式的方法及系统
US11330371B2 (en) Audio control based on room correction and head related transfer function
JP2018084843A (ja) 入出力装置
JP2014202808A (ja) 入出力装置
JP2019140503A (ja) 情報処理装置、情報処理方法、及び情報処理プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19956237

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19956237

Country of ref document: EP

Kind code of ref document: A1