CN114501281B - Sound adjusting method, device, electronic equipment and computer readable medium - Google Patents

Sound adjusting method, device, electronic equipment and computer readable medium

Info

Publication number
CN114501281B
CN114501281B (application CN202210079854.8A)
Authority
CN
China
Prior art keywords
target sound
noise reduction
sound
sound intensity
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210079854.8A
Other languages
Chinese (zh)
Other versions
CN114501281A (en)
Inventor
方韶劻
郭峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Aangsi Science & Technology Co ltd
Original Assignee
Shenzhen Aangsi Science & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Aangsi Science & Technology Co ltd filed Critical Shenzhen Aangsi Science & Technology Co ltd
Priority to CN202210079854.8A priority Critical patent/CN114501281B/en
Publication of CN114501281A publication Critical patent/CN114501281A/en
Application granted granted Critical
Publication of CN114501281B publication Critical patent/CN114501281B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R25/00Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/43Signal processing in hearing aids to enhance the speech intelligibility
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2225/00Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
    • H04R2225/61Aspects relating to mechanical or electronic switches or control elements, e.g. functioning

Landscapes

  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurosurgery (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Embodiments of the present disclosure disclose sound adjustment methods, apparatuses, electronic devices, and computer-readable media. One embodiment of the method comprises the following steps: in response to detecting that the sound intensity of the ambient noise is greater than or equal to a first preset intensity, reducing the sound intensity received by the hearing aid to a second preset intensity; in response to detecting a target sound, performing noise reduction processing on the target sound to generate a noise reduction target sound; obtaining the hearing level of the target user from a target database; and adjusting the target sound intensity of the noise reduction target sound according to the hearing level, and controlling the hearing aid to play the adjusted noise reduction target sound. This embodiment can adaptively adjust the received sound intensity, avoiding the delay (time difference) before the user hears the sound at an appropriate volume.

Description

Sound adjusting method, device, electronic equipment and computer readable medium
Technical Field
Embodiments of the present disclosure relate to the field of audio, and in particular, to a sound adjustment method, apparatus, electronic device, and computer readable medium.
Background
A hearing aid is a small sound-amplifying device that helps a hearing-impaired user hear external sounds. Currently, hearing aids typically either adjust the received sound to a preset sound intensity, or rely on the user operating a volume key on the hearing aid to adjust the received sound.
However, when adjusting the received sound, the following technical problems generally exist:
First, the received ambient noise is not denoised, so the played sound is unclear; in addition, the sound intensity is not adaptively adjusted according to the hearing level of the user, so the user needs to operate a volume key on the hearing aid to adjust the received sound, which causes a delay (time difference) before the user hears the sound at an appropriate volume.
Second, noise reduction processing is not performed on the audio itself, resulting in a poor noise suppression effect.
Disclosure of Invention
This summary is provided to introduce, in simplified form, concepts that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose sound adjustment methods, apparatuses, electronic devices, and computer-readable media to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a sound adjustment method applied to a hearing aid, the method comprising: in response to detecting that the sound intensity of the ambient noise is greater than or equal to a first preset intensity, reducing the sound intensity received by the hearing aid to a second preset intensity; in response to detecting a target sound, performing noise reduction processing on the target sound to generate a noise reduction target sound; obtaining the hearing level of the target user from a target database; and adjusting the target sound intensity of the noise reduction target sound according to the hearing level, and controlling the hearing aid to play the adjusted noise reduction target sound.
In a second aspect, some embodiments of the present disclosure provide a sound adjustment device for use in a hearing aid, the device comprising: a reducing unit configured to reduce the intensity of sound received by the hearing aid to a second preset intensity in response to detecting that the intensity of sound of the ambient noise is equal to or greater than the first preset intensity; a noise reduction unit configured to perform noise reduction processing on a target sound in response to detection of the target sound to generate a noise reduction target sound; an acquisition unit configured to acquire a hearing level of a target user from a target database; and an adjusting unit configured to adjust a target sound intensity of the noise reduction target sound according to the hearing level, and to control the hearing aid to play the adjusted noise reduction target sound.
In a third aspect, some embodiments of the present disclosure provide an electronic device comprising: one or more processors; and a storage device having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect above.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect above.
The above embodiments of the present disclosure have the following advantageous effects: by the sound adjustment method of some embodiments of the present disclosure, the received environmental noise can be reduced and the clarity of the played sound improved. Moreover, the received sound intensity can be adaptively adjusted, so that the delay (time difference) before the user hears the sound at an appropriate volume is avoided. Specifically, the reasons why the played sound is unclear and the sound heard by the user is delayed are as follows: the received environmental noise is not reduced, so the played sound is unclear; and the sound intensity is not adaptively adjusted according to the hearing level of the user, so the user needs to operate a volume key on the hearing aid to adjust the received sound, which causes the delay. Based on this, the sound adjustment method of some embodiments of the present disclosure first reduces the sound intensity received by the hearing aid to a second preset intensity in response to detecting that the sound intensity of the environmental noise is equal to or greater than a first preset intensity. Thus, the sound intensity of the environmental noise can be reduced, and the clarity of other sounds can be improved. Then, in response to detection of the target sound, noise reduction processing is performed on the target sound to generate a noise reduction target sound. Thus, the target sound can be denoised to improve the clarity of the sound played by the hearing aid. Next, the hearing level of the target user is obtained from the target database. In this way, the intensity of the played sound can be adjusted adaptively according to the hearing level of the user, improving the user's listening experience. Finally, the target sound intensity of the noise reduction target sound is adjusted according to the hearing level, and the hearing aid is controlled to play the adjusted noise reduction target sound. Thus, the received sound intensity can be adaptively adjusted, avoiding any delay before the user hears the sound at an appropriate volume.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of one application scenario of a sound adjustment method of some embodiments of the present disclosure;
FIG. 2 is a flow chart of some embodiments of a sound adjustment method according to the present disclosure;
FIG. 3 is a flow chart of other embodiments of a sound adjustment method according to the present disclosure;
FIG. 4 is a schematic structural view of some embodiments of a sound adjustment device according to the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifiers "a", "an" and "a plurality of" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of a sound adjustment method according to some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may reduce the intensity of sound received by the hearing aid to a second preset intensity in response to detecting that the intensity of sound of the ambient noise is equal to or greater than the first preset intensity. Next, the computing device 101 may perform noise reduction processing on the target sound 102 in response to detecting the target sound 102 to generate a noise reduction target sound 103. The computing device 101 may then obtain the hearing level 104 of the target user from the target database. Finally, the computing device 101 may adjust the target sound intensity of the noise reduction target sound 103 according to the hearing level 104, and control the hearing aid 106 to play the adjusted noise reduction target sound 105.
The computing device 101 may be hardware or software. When the computing device is hardware, the computing device may be implemented as a distributed cluster formed by a plurality of servers or terminal devices, or may be implemented as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices listed above. It may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
It should be understood that the number of computing devices in fig. 1 is merely illustrative. There may be any number of computing devices, as desired for an implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a sound adjustment method according to the present disclosure is shown. The sound adjustment method is applied to a hearing aid and comprises the following steps:
In step 201, in response to detecting that the sound intensity of the ambient noise is equal to or higher than a first preset intensity, the sound intensity received by the hearing aid is reduced to a second preset intensity.
In some embodiments, the execution body of the sound adjustment method (e.g., the computing device 101 shown in fig. 1) may reduce the sound intensity received by the hearing aid to a second preset intensity in response to detecting that the sound intensity of the ambient noise is greater than or equal to a first preset intensity. Here, the execution body of the sound adjustment method may be a computing device built into the hearing aid, or may be a computing device or server that controls the hearing aid. Here, the first preset intensity and the second preset intensity are both preset sound intensities, and their specific values are not limited; the first preset intensity is greater than the second preset intensity. Here, the ambient noise refers to noise from the surrounding environment (for example, sounds generated by industrial production, construction, transportation, and the like). In practice, the execution body may first detect the sound intensity of the ambient noise through a noise sensor built into the hearing aid, then determine whether that sound intensity is equal to or greater than the first preset intensity, and finally, in response to detecting that the sound intensity of the ambient noise is greater than or equal to the first preset intensity, reduce the sound intensity received by the hearing aid to the second preset intensity.
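As a non-limiting illustration, a minimal sketch of this detection-and-reduction step is given below in Python. The threshold values and the functions read_noise_level_db and set_received_intensity_db are assumptions introduced only for illustration, not interfaces defined by the present disclosure.

# Sketch of step 201: threshold-based reduction of the received sound intensity.
# read_noise_level_db and set_received_intensity_db are hypothetical device interfaces.
FIRST_PRESET_DB = 70.0   # first preset intensity (assumed example value)
SECOND_PRESET_DB = 50.0  # second preset intensity (assumed example value, lower than the first)

def limit_received_intensity(read_noise_level_db, set_received_intensity_db):
    # Detect the sound intensity of the ambient noise via the hearing aid's built-in noise sensor.
    noise_db = read_noise_level_db()
    # If the ambient noise is at or above the first preset intensity, lower the received intensity.
    if noise_db >= FIRST_PRESET_DB:
        set_received_intensity_db(SECOND_PRESET_DB)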
In step 202, in response to detecting the target sound, noise reduction processing is performed on the target sound to generate a noise reduction target sound.
In some embodiments, the execution body may perform noise reduction processing on the target sound in response to detecting the target sound, to generate a noise reduction target sound. Here, the target sound may refer to sounds emitted by living beings as well as various kinds of alert sounds. For example, the target sound may include, but is not limited to: speech, sounds made by animals, alarm sounds, and the like. In practice, the execution body may, in response to detecting the target sound, perform noise reduction processing on the target sound to generate the noise reduction target sound. Here, the target sound may be subjected to noise reduction processing by ENC (Environmental Noise Cancellation) to generate the noise reduction target sound.
Step 203, obtaining the hearing level of the target user from the target database.
In some embodiments, the execution body may obtain the hearing level of the target user from the target database through a wired connection or a wireless connection. Here, the target database may refer to a database of the above-described execution body. Here, the target user may refer to the user who uses the above hearing aid. Here, the hearing level may refer to the current hearing level of the target user.
Step 204, adjusting the target sound intensity of the noise reduction target sound according to the hearing level, and controlling the hearing aid to play the adjusted noise reduction target sound.
In some embodiments, the executing body may adjust the target sound intensity of the noise reduction target sound according to the hearing level, and control the hearing aid to play the adjusted noise reduction target sound. Here, the target sound intensity may refer to the sound intensity of the noise reduction target sound.
In practice, according to the hearing level, the execution subject may adjust the target sound intensity of the noise reduction target sound by:
first, it is determined whether the hearing level is greater than a historical hearing level. Here, the historical hearing level may refer to the last target user's hearing level stored in the target database. For example, the hearing level of the target user stored in the target database may be updated once every month, and the historical hearing level may refer to the hearing level stored in the previous month.
And a second step of determining a preset sound intensity range corresponding to the hearing level in response to determining that the hearing level is greater than the historical hearing level. Here, the preset sound intensity range may refer to a preset range of sound intensities suitable for a user at the above hearing level. For example, for hearing level 1 the corresponding sound intensity range may be 46-60 dB, and for hearing level 2 the corresponding sound intensity range may be 61-80 dB.
And thirdly, determining whether the target sound intensity is within the preset sound intensity range.
And a fourth step of raising the target sound intensity to a minimum value of the preset sound intensity range in response to determining that the target sound intensity is not within the preset sound intensity range and that the target sound intensity is less than the minimum value of the preset sound intensity range. For example, if the preset sound intensity range is 61-80 dB and the target sound intensity is 50 dB, the target sound intensity may be raised to the minimum value of the preset sound intensity range, 61 dB.
And fifth, in response to the target sound intensity being greater than the maximum value of the preset sound intensity range, reducing the target sound intensity to the maximum value of the preset sound intensity range.
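As a non-limiting illustration, a minimal sketch of the five adjustment steps above is given below; the level-to-range table reuses the example values above, and all names are assumptions introduced only for illustration.

# Sketch of the first to fifth steps above; the table values and names are illustrative only.
PRESET_RANGES_DB = {1: (46.0, 60.0), 2: (61.0, 80.0)}  # hearing level -> preset sound intensity range

def adjust_target_intensity(target_db, hearing_level, historical_level):
    # First step: adjust only when the hearing level exceeds the stored historical hearing level.
    if hearing_level <= historical_level:
        return target_db
    # Second step: determine the preset sound intensity range corresponding to the hearing level.
    low, high = PRESET_RANGES_DB[hearing_level]
    # Third and fourth steps: if below the range, raise the intensity to the range minimum.
    if target_db < low:
        return low
    # Fifth step: if above the range, reduce the intensity to the range maximum.
    if target_db > high:
        return high
    return target_db  # already within the preset sound intensity range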
The above embodiments of the present disclosure have the following advantageous effects: by the sound adjustment method of some embodiments of the present disclosure, the received environmental noise can be reduced and the clarity of the played sound improved. Moreover, the received sound intensity can be adaptively adjusted, so that the delay (time difference) before the user hears the sound at an appropriate volume is avoided. Specifically, the reasons why the played sound is unclear and the sound heard by the user is delayed are as follows: the received environmental noise is not reduced, so the played sound is unclear; and the sound intensity is not adaptively adjusted according to the hearing level of the user, so the user needs to operate a volume key on the hearing aid to adjust the received sound, which causes the delay. Based on this, the sound adjustment method of some embodiments of the present disclosure first reduces the sound intensity received by the hearing aid to a second preset intensity in response to detecting that the sound intensity of the environmental noise is equal to or greater than a first preset intensity. Thus, the sound intensity of the environmental noise can be reduced, and the clarity of other sounds can be improved. Then, in response to detection of the target sound, noise reduction processing is performed on the target sound to generate a noise reduction target sound. Thus, the target sound can be denoised to improve the clarity of the sound played by the hearing aid. Next, the hearing level of the target user is obtained from the target database. In this way, the intensity of the played sound can be adjusted adaptively according to the hearing level of the user, improving the user's listening experience. Finally, the target sound intensity of the noise reduction target sound is adjusted according to the hearing level, and the hearing aid is controlled to play the adjusted noise reduction target sound. Thus, the received sound intensity can be adaptively adjusted, avoiding any delay before the user hears the sound at an appropriate volume.
With further reference to fig. 3, further embodiments of the sound adjustment method according to the present disclosure are shown. The sound adjustment method is applied to a hearing aid and comprises the following steps:
In step 301, in response to detecting that the sound intensity of the ambient noise is equal to or higher than a first preset intensity, the sound intensity received by the hearing aid is reduced to a second preset intensity.
In some embodiments, the specific implementation of step 301 and the technical effects thereof may refer to step 201 in those embodiments corresponding to fig. 2, which are not described herein.
In step 302, in response to detecting the target sound, noise reduction processing is performed on the target sound to generate a noise reduction target sound.
In some embodiments, an execution subject of the sound adjustment method (e.g., the computing device 101 shown in fig. 1) may perform a noise reduction process on the target sound in response to detecting the target sound to generate a noise reduction target sound.
In practice, the execution subject may perform noise reduction processing on the target sound to generate a noise reduction target sound by:
first, performing time domain conversion processing on the audio signal of the target sound to generate a time domain waveform. In practice, the executing body may perform a time domain conversion process on the audio signal using an inverse fourier transform to generate a time domain waveform. Here, the time domain waveform may refer to a time domain waveform map for characterizing a change in the audio signal with time.
And secondly, performing spectrum transformation processing on the time domain waveform to generate an audio frequency spectrum. Here, the execution body may perform the spectrum transformation processing on the time domain waveform to generate the audio frequency spectrum. Here, the spectrum transformation may refer to a short-time Fourier transform (STFT).
And thirdly, extracting the spectral feature information of the audio frequency spectrum. Here, the above audio frequency spectrum may be subjected to spectral feature extraction processing by a spectral feature extraction network to generate the spectral feature information. Here, the spectral feature extraction network may be a neural network model of various structures, for example, a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), and the like; of course, a model constructed according to actual needs may also be used. Here, the spectral feature information may include, but is not limited to, at least one of: amplitude, frequency, phase.
And fourthly, inputting the spectral feature information into a pre-trained audio extraction network to obtain an audio extraction result. Here, the audio extraction network may be a neural network model of various structures, for example, a CNN (Convolutional Neural Network), an RNN (Recurrent Neural Network), and the like; of course, a model constructed according to actual needs may also be used. On the one hand, a CNN can capture the spectral structure of the audio signal well; on the other hand, an RNN can use the preceding and following temporal information to perform the related spectral prediction.
And fifthly, performing noise reduction processing on the audio extraction result to generate noise reduction audio as the noise reduction target sound. Here, the noise reduction processing may refer to an inverse short-time Fourier transform.
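As a non-limiting illustration, a minimal sketch of this five-step pipeline is given below using SciPy's short-time Fourier transform; the audio_extraction_network callable stands in for the pre-trained network described above, and the frame length and other parameters are assumed values.

# Sketch of the noise reduction pipeline (first to fifth steps above) based on STFT/ISTFT.
import numpy as np
from scipy.signal import stft, istft

def denoise_target_sound(audio_signal, sample_rate, audio_extraction_network):
    # First and second steps: transform the time domain waveform into an audio frequency spectrum.
    _, _, spectrum = stft(audio_signal, fs=sample_rate, nperseg=512)
    # Third step: extract spectral feature information (here amplitude and phase).
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Fourth step: input the spectral features into the pre-trained audio extraction network.
    clean_amplitude, clean_phase = audio_extraction_network(amplitude, phase)
    # Fifth step: reconstruct noise reduction audio via the inverse short-time Fourier transform.
    clean_spectrum = clean_amplitude * np.exp(1j * clean_phase)
    _, noise_reduction_audio = istft(clean_spectrum, fs=sample_rate, nperseg=512)
    return noise_reduction_audio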
Optionally, the audio extraction network is obtained through the following training: an initial neural network is trained according to a training sample set, and the trained initial neural network is used as the audio extraction network. Wherein the training samples in the training sample set include: a sample mask amplitude feature vector and a sample mask phase feature vector corresponding to the sample mask amplitude feature vector, an amplitude label corresponding to the mask amplitude feature vector, and a phase label corresponding to the mask phase feature vector. Here, the amplitude label may refer to the unmasked amplitude feature vector. Here, the phase label may refer to the unmasked phase feature vector.
In practice, according to the training sample set, the executing body may train the initial neural network by the following steps, to obtain the trained initial neural network as the audio extraction network:
first, determining the network structure of the initial neural network and initializing the network parameters of the initial neural network.
And secondly, taking the sample mask amplitude characteristic vector and the sample mask phase characteristic vector corresponding to the sample mask amplitude characteristic vector which are included in the training sample set as the input of the initial neural network, taking the amplitude label corresponding to the mask amplitude characteristic vector and the phase label corresponding to the mask phase characteristic vector which are included in the training sample set as the expected output of the initial neural network, and training the initial neural network by using a deep learning method.
And thirdly, determining the trained initial neural network as the audio extraction network.
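As a non-limiting illustration, a minimal sketch of this training procedure is given below in PyTorch; the network structure, loss function, and optimizer are assumptions introduced only for illustration, not a configuration specified by the present disclosure.

# Sketch of the three training steps above; the architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn

def train_audio_extraction_network(train_loader, feature_dim, epochs=10):
    # First step: determine the network structure and initialize the network parameters (a small MLP here).
    network = nn.Sequential(nn.Linear(2 * feature_dim, 256), nn.ReLU(), nn.Linear(256, 2 * feature_dim))
    optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    # Second step: masked amplitude/phase vectors are the input; unmasked amplitude/phase labels are the expected output.
    for _ in range(epochs):
        for masked_amp, masked_phase, amp_label, phase_label in train_loader:
            inputs = torch.cat([masked_amp, masked_phase], dim=-1)
            targets = torch.cat([amp_label, phase_label], dim=-1)
            optimizer.zero_grad()
            loss = loss_fn(network(inputs), targets)
            loss.backward()
            optimizer.step()
    # Third step: the trained network is used as the audio extraction network.
    return network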
Optionally, the training samples in the training sample set are obtained by the following steps:
first, spectrum characteristics are acquired. Wherein the sample spectrum feature comprises an amplitude feature vector and a phase feature vector. Here, the spectral features may be spectral features of any type of sound (e.g., speaking sound, animal sounds, machine alarm sounds, etc.).
And a second step of dynamically masking the amplitude feature vector and the phase feature vector to generate a mask amplitude feature vector and a mask phase feature vector, respectively. In practice, the amplitude feature vector and the phase feature vector may be dynamically masked in the manner of an MLM (Masked Language Model) to generate the mask amplitude feature vector and the mask phase feature vector, respectively.
Thus, the trained audio extraction network can be enabled to extract amplitude features and phase features more accurately.
And thirdly, the mask amplitude feature vector is taken as a sample mask amplitude feature vector, the mask phase feature vector is taken as a sample mask phase feature vector, and the sample mask amplitude feature vector, the sample mask phase feature vector, the amplitude label corresponding to the mask amplitude feature vector, and the phase label corresponding to the mask phase feature vector are combined to generate a training sample. Here, the combining processing may refer to concatenation (splicing).
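As a non-limiting illustration, a minimal sketch of this sample-construction procedure is given below; the mask ratio and the use of zeroing as the masking operation are assumptions introduced only for illustration.

# Sketch of the sample-construction steps above (dynamic masking of acquired amplitude and phase vectors).
import numpy as np

def make_training_sample(amplitude_vec, phase_vec, mask_ratio=0.15):
    # Second step: dynamically mask the amplitude and phase feature vectors (a fresh random mask per call).
    rng = np.random.default_rng()
    mask = rng.random(amplitude_vec.shape) < mask_ratio
    mask_amplitude = np.where(mask, 0.0, amplitude_vec)   # sample mask amplitude feature vector
    mask_phase = np.where(mask, 0.0, phase_vec)           # sample mask phase feature vector
    # Third step: combine (concatenate) the masked vectors with the unmasked amplitude and phase labels.
    return np.concatenate([mask_amplitude, mask_phase, amplitude_vec, phase_vec])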
The related content in step 302 serves as an inventive point of the present disclosure, thereby solving the second technical problem mentioned in the background, namely that "noise reduction processing is not performed on the audio itself, resulting in a poor noise suppression effect." The factor causing the poor noise suppression effect is often the following: noise reduction processing is not performed on the audio itself, so the noise suppression effect is poor. If this factor is addressed, an improved noise suppression effect can be achieved. To achieve this effect, first, the audio signal of the target sound is subjected to time domain conversion processing to generate a time domain waveform. Next, the time domain waveform is subjected to spectrum transformation processing to generate an audio frequency spectrum. Then, the spectral feature information of the audio frequency spectrum is extracted. This facilitates the subsequent noise reduction processing of the audio. Next, the spectral feature information is input into the pre-trained audio extraction network to obtain an audio extraction result. In this way, spectral features consistent with the original audio can be extracted, avoiding obvious audible distortion after noise reduction. Finally, the audio extraction result is subjected to noise reduction processing to generate noise reduction audio as the noise reduction target sound. Thus, noise reduction processing of the audio itself is realized, and the noise suppression effect is improved.
Step 303, obtaining the hearing level of the target user from the target database.
Step 304, adjusting the target sound intensity of the noise reduction target sound according to the hearing level, and controlling the hearing aid to play the adjusted noise reduction target sound.
In some embodiments, the specific implementation of steps 303-304 and the technical effects thereof may refer to steps 203-204 in those embodiments corresponding to fig. 2, which are not described herein.
And step 305, in response to detecting that the electric quantity of the hearing aid is smaller than or equal to the preset electric quantity, controlling the hearing aid to play a charging prompt voice.
In some embodiments, the execution body may control the hearing aid to play a charging prompt voice in response to detecting that the electric quantity of the hearing aid is less than or equal to a preset electric quantity. Here, the charging prompt voice may be a voice prompting the user to charge the hearing aid. For example, the charging prompt voice may indicate "The hearing aid is low on power, please charge it in time."
Step 306, in response to not detecting that the hearing aid is in a charging mode within a first target period of time, detecting a stored power corresponding to a charging bin of the hearing aid.
In some embodiments, the executing body may detect a stored power corresponding to a charging bin of the hearing aid in response to not detecting that the hearing aid is in the charging mode for a first target period of time. Here, the execution body is communicatively connected to the controller of the charging bin. Here, the first target duration may be a duration for detecting whether the hearing aid is in the charging mode after controlling the hearing aid to play the charging prompt voice. For example, the first target duration may be 6 minutes. Wherein, the stored electric quantity is the stored electric quantity of the charging bin.
Step 307, in response to detecting that the stored power is less than or equal to a preset stored power, adjusting the working mode of the hearing aid to a sleep mode.
In some embodiments, the executing body may adjust the operation mode of the hearing aid to the sleep mode in response to detecting that the stored power is equal to or less than a preset stored power. Here, the setting of the preset stored power amount is not limited. Here, the sleep mode may indicate that the hearing aid is in a standby state.
Optionally, in response to detecting that the stored power is greater than the preset stored power, determining whether the charging bin is in a charging mode.
In some embodiments, the executing entity may determine whether the charging bin is in the charging mode in response to detecting that the stored power is greater than the preset stored power. Here, it can be judged whether the charging bin is being charged.
Optionally, in response to determining that the charging bin is not in the charging mode, controlling the charging bin to vibrate.
In some embodiments, the executing body may control the charging bin to vibrate in response to determining that the charging bin is not in the charging mode.
Optionally, in response to not detecting a click operation of a mute button acting on the charging bin within a second target duration, generating a voice indicative of a loss of the charging bin, and controlling the hearing aid to play the voice indicative of the loss of the charging bin. Here, the mute button of the charging bin may refer to a physical button or a touch button for stopping vibration of the charging bin.
Optionally, in response to detecting that the stored power is less than or equal to the preset stored power, generating a charging bin charging prompt voice, and controlling the hearing aid to play the charging bin charging prompt voice.
In some embodiments, the executing body may generate a charging bin charging prompt voice in response to detecting that the stored power is less than or equal to the preset stored power, and control the hearing aid to play the charging bin charging prompt voice. Here, the charging bin charging prompt voice may be a voice prompting the user to charge the charging bin. For example, the charging bin charging prompt voice may indicate that "the charging bin is low in electric quantity, please charge in time".
Optionally, before controlling the hearing aid to play the charging bin charging prompt voice, the method further includes: performing noise reduction processing on the charging bin charging prompt voice through a microphone array in the charging bin to generate a noise reduction charging bin charging prompt voice, and controlling the hearing aid to play the noise reduction charging bin charging prompt voice.
In some embodiments, the executing body may perform noise reduction processing on the charging bin charging prompt voice through a microphone array in the charging bin, generate a noise reduction charging bin charging prompt voice, and control the hearing aid to play the noise reduction charging bin charging prompt voice.
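As a non-limiting illustration, a minimal sketch of the power-management flow of steps 305-307 and the optional charging-bin checks is given below; all device interfaces (battery readings, play_voice, vibrate, enter_sleep_mode, mute_pressed) and threshold values are hypothetical placeholders, not interfaces defined by the present disclosure.

# Sketch of steps 305-307 plus the optional charging-bin handling; thresholds and interfaces are assumed.
PRESET_BATTERY = 0.2        # preset electric quantity of the hearing aid (assumed 20%)
PRESET_STORED_POWER = 0.1   # preset stored electric quantity of the charging bin (assumed 10%)

def manage_power(aid, charge_bin, first_target_elapsed, second_target_elapsed):
    # Step 305: remind the user to charge when the hearing aid's electric quantity is low.
    if aid.battery_level() <= PRESET_BATTERY:
        aid.play_voice("The hearing aid is low on power, please charge it in time.")
    # Step 306: if charging has not started within the first target duration, check the charging bin.
    if first_target_elapsed and not aid.is_charging():
        stored = charge_bin.stored_power()
        if stored <= PRESET_STORED_POWER:
            # Step 307: switch the hearing aid to the sleep (standby) mode and prompt the user.
            aid.enter_sleep_mode()
            aid.play_voice("The charging bin is low on power, please charge it in time.")
        elif not charge_bin.is_charging():
            # Optional steps: vibrate the charging bin; if the mute key is not pressed in time, report it as lost.
            charge_bin.vibrate()
            if second_target_elapsed and not charge_bin.mute_pressed():
                aid.play_voice("The charging bin may be lost.")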
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, in the flow 300 of some embodiments corresponding to fig. 3, first, the operation mode of the hearing aid can be adaptively adjusted according to the power of the hearing aid, so that the battery life of the hearing aid can be extended. Then, the user can be reminded to charge the hearing aid in time. Finally, whether the charging bin is lost can be determined according to the charging states of the charging bin and the hearing aid.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a sound adjustment device, which correspond to the method embodiments shown in fig. 2, and the device is particularly applicable to various electronic apparatuses.
As shown in fig. 4, the sound adjustment apparatus 400 of some embodiments includes: a lowering unit 401, a noise reduction unit 402, an acquisition unit 403, and an adjustment unit 404. Wherein the reducing unit 401 is configured to reduce the intensity of sound received by the hearing aid to a second preset intensity in response to detecting that the intensity of sound of the ambient noise is equal to or higher than the first preset intensity; a noise reduction unit 402 configured to perform noise reduction processing on a target sound in response to detection of the target sound to generate a noise reduction target sound; an obtaining unit 403 configured to obtain a hearing level of the target user from the target database; an adjusting unit 404 configured to adjust the target sound intensity of the noise reduction target sound according to the hearing level, and to control the hearing aid to play the adjusted noise reduction target sound.
It will be appreciated that the elements described in the apparatus 400 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting benefits described above with respect to the method are equally applicable to the apparatus 400 and the units contained therein, and are not described in detail herein.
Referring now to FIG. 5, a schematic diagram of an electronic device (e.g., computing device 101 of FIG. 1) 500 suitable for use in implementing some embodiments of the disclosure is shown. The electronic device in some embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored. The processing means 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 508 including, for example, magnetic tape, hard disk, etc.; and communication means 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 5 may represent one device or a plurality of devices as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communications device 509, or from the storage device 508, or from the ROM 502. The above-described functions defined in the methods of some embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that, the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to detecting that the sound intensity of the ambient noise is greater than or equal to a first preset intensity, reducing the sound intensity received by the hearing aid to a second preset intensity; in response to detecting a target sound, performing noise reduction processing on the target sound to generate a noise reduction target sound; obtaining the hearing level of the target user from a target database; and adjusting the target sound intensity of the noise reduction target sound according to the hearing level, and controlling the hearing aid to play the adjusted noise reduction target sound.
Computer program code for carrying out operations for some embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a reduction unit, a noise reduction unit, an acquisition unit, and an adjustment unit. The names of these units do not constitute a limitation of the unit itself in some cases, and for example, the reducing unit may also be described as "a unit that reduces the sound intensity received by the hearing aid to a second preset intensity in response to detecting that the sound intensity of the ambient noise is equal to or higher than the first preset intensity".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (6)

1. A sound adjustment method for a hearing aid, comprising:
in response to detecting that the sound intensity of the ambient noise is greater than or equal to a first preset intensity, reducing the sound intensity received by the hearing aid to a second preset intensity;
in response to detecting a target sound, performing noise reduction processing on the target sound to generate a noise reduction target sound;
obtaining the hearing level of the target user from a target database;
according to the hearing level, adjusting the target sound intensity of the noise reduction target sound, and controlling the hearing aid to play the adjusted noise reduction target sound;
wherein the adjusting the target sound intensity of the noise reduction target sound according to the hearing level includes:
determining whether the hearing level is greater than a historical hearing level;
responsive to determining that the hearing level is greater than the historical hearing level, determining a preset sound intensity range corresponding to the hearing level;
determining whether the target sound intensity is within the preset sound intensity range;
in response to determining that the target sound intensity is not within the preset sound intensity range and that the target sound intensity is less than a minimum value of the preset sound intensity range, raising the target sound intensity to the minimum value of the preset sound intensity range;
in response to the target sound intensity being greater than a maximum of the preset sound intensity range, reducing the target sound intensity to the maximum of the preset sound intensity range;
the noise reduction processing is performed on the target sound to generate a noise reduction target sound, including:
performing time domain conversion processing on the audio signal of the target sound to generate a time domain waveform;
performing spectrum transformation processing on the time domain waveform to generate an audio frequency spectrum;
extracting spectral feature information of the audio frequency spectrum;
inputting the frequency spectrum characteristic information into a pre-trained audio extraction network to obtain an audio extraction result;
performing noise reduction processing on the audio extraction result to generate noise reduction audio as noise reduction target sound;
wherein the audio extraction network is obtained through the following training:
training the initial neural network according to a training sample set to obtain a trained initial neural network as an audio extraction network, wherein training samples in the training sample set comprise: sample mask amplitude feature vectors and sample mask phase feature vectors corresponding to the sample mask amplitude feature vectors, amplitude labels corresponding to the mask amplitude feature vectors, phase labels corresponding to the mask phase feature vectors;
the training samples in the training sample set are obtained through the following steps:
acquiring spectrum features, wherein the sample spectrum features comprise amplitude feature vectors and phase feature vectors;
dynamically masking the amplitude feature vector and the phase feature vector, respectively, to generate a masked amplitude feature vector and a masked phase feature vector;
and combining the sample mask amplitude feature vector, the sample mask phase feature vector, an amplitude tag corresponding to the mask amplitude feature vector, and a phase tag corresponding to the mask phase feature vector to generate training samples.
2. The method of claim 1, wherein the method further comprises:
in response to detecting that the electric quantity of the hearing aid is smaller than or equal to a preset electric quantity, controlling the hearing aid to play a charging prompt voice;
detecting a stored power corresponding to a charging bin of the hearing aid in response to not detecting that the hearing aid is in a charging mode within a first target time period;
and in response to detecting that the stored electric quantity is less than or equal to a preset stored electric quantity, adjusting the working mode of the hearing aid to a sleep mode.
3. The method of claim 2, wherein the method further comprises:
determining whether the charging bin is in a charging mode in response to detecting that the stored power is greater than the preset stored power;
responsive to determining that the charging bin is not in a charging mode, controlling the charging bin to vibrate;
and generating a voice representing the loss of the charging bin in response to the fact that the clicking operation of the mute key acting on the charging bin is not detected within a second target duration, and controlling the hearing aid to play the voice representing the loss of the charging bin.
4. A sound adjustment device for use in a hearing aid, comprising:
a reducing unit configured to reduce the intensity of sound received by the hearing aid to a second preset intensity in response to detecting that the intensity of sound of the ambient noise is equal to or greater than the first preset intensity;
a noise reduction unit configured to perform noise reduction processing on a target sound in response to detection of the target sound to generate a noise reduction target sound; a noise reduction unit further configured to:
performing time domain conversion processing on the audio signal of the target sound to generate a time domain waveform;
performing spectrum transformation processing on the time domain waveform to generate an audio frequency spectrum;
extracting spectral feature information of the audio frequency spectrum;
inputting the frequency spectrum characteristic information into a pre-trained audio extraction network to obtain an audio extraction result;
performing noise reduction processing on the audio extraction result to generate noise reduction audio as noise reduction target sound;
wherein the audio extraction network is obtained through the following training:
training the initial neural network according to a training sample set to obtain a trained initial neural network as an audio extraction network, wherein training samples in the training sample set comprise: sample mask amplitude feature vectors and sample mask phase feature vectors corresponding to the sample mask amplitude feature vectors, amplitude labels corresponding to the mask amplitude feature vectors, phase labels corresponding to the mask phase feature vectors;
the training samples in the training sample set are obtained through the following steps:
acquiring spectrum features, wherein the sample spectrum features comprise amplitude feature vectors and phase feature vectors;
dynamically masking the amplitude feature vector and the phase feature vector, respectively, to generate a masked amplitude feature vector and a masked phase feature vector;
taking the mask amplitude feature vector as a sample mask amplitude feature vector, taking the mask phase feature vector as a sample mask phase feature vector, and combining the sample mask amplitude feature vector, the sample mask phase feature vector, an amplitude tag corresponding to the mask amplitude feature vector, and a phase tag corresponding to the mask phase feature vector to generate training samples;
an acquisition unit configured to acquire a hearing level of a target user from a target database;
an adjusting unit configured to adjust a target sound intensity of the noise reduction target sound according to the hearing level, and to control the hearing aid to play the adjusted noise reduction target sound; the adjusting unit being further configured to:
determining whether the hearing level is greater than a historical hearing level;
in response to determining that the hearing level is greater than the historical hearing level, determining a preset sound intensity range corresponding to the hearing level;
determining whether the target sound intensity is within the preset sound intensity range;
in response to determining that the target sound intensity is not within the preset sound intensity range and that the target sound intensity is less than a minimum value of the preset sound intensity range, raising the target sound intensity to the minimum value of the preset sound intensity range;
and in response to determining that the target sound intensity is greater than the maximum value of the preset sound intensity range, reducing the target sound intensity to the maximum value of the preset sound intensity range.
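The noise reduction unit of claim 4 above chains time domain conversion, spectrum transformation, spectral feature extraction, a pre-trained audio extraction network, and noise reduction, and its network is trained on dynamically masked amplitude and phase feature vectors. A minimal NumPy sketch of what such a pipeline and sample-masking step could look like is given below; the STFT parameters, the mask ratio, and the `audio_extraction_net` callable are assumptions for illustration only and are not disclosed by the patent.

```python
import numpy as np

WIN, HOP = 512, 128  # assumed STFT window and hop sizes


def spectrum_transform(signal: np.ndarray) -> np.ndarray:
    """Spectrum transformation: short-time Fourier transform of the time-domain waveform."""
    window = np.hanning(WIN)
    frames = [signal[i:i + WIN] * window
              for i in range(0, len(signal) - WIN + 1, HOP)]
    return np.fft.rfft(np.asarray(frames), axis=1)


def spectral_features(spectrum: np.ndarray):
    """Spectral feature information: per-frame amplitude and phase feature vectors."""
    return np.abs(spectrum), np.angle(spectrum)


def denoise(signal: np.ndarray, audio_extraction_net) -> np.ndarray:
    """Inference pipeline of the noise reduction unit; the network is a placeholder
    callable mapping (amplitude, phase) features to a cleaned amplitude spectrum."""
    spectrum = spectrum_transform(signal)                     # audio frequency spectrum
    amplitude, phase = spectral_features(spectrum)            # spectral feature information
    clean_amplitude = audio_extraction_net(amplitude, phase)  # audio extraction result
    clean_spectrum = clean_amplitude * np.exp(1j * phase)     # noise-reduced spectrum
    frames = np.fft.irfft(clean_spectrum, axis=1)
    # Overlap-add the frames back into a waveform (the noise reduction target sound).
    out = np.zeros((len(frames) - 1) * HOP + WIN)
    for k, frame in enumerate(frames):
        out[k * HOP:k * HOP + WIN] += frame
    return out


def make_training_sample(amplitude: np.ndarray, phase: np.ndarray,
                         mask_ratio: float = 0.15, rng=None) -> dict:
    """Dynamic masking: zero a random subset of frames to build one training sample;
    the unmasked features serve as the amplitude and phase labels."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(amplitude.shape[0]) < mask_ratio
    masked_amplitude, masked_phase = amplitude.copy(), phase.copy()
    masked_amplitude[mask] = 0.0
    masked_phase[mask] = 0.0
    return {"sample_mask_amplitude": masked_amplitude,
            "sample_mask_phase": masked_phase,
            "amplitude_label": amplitude,
            "phase_label": phase}
```

Whether the claimed network operates on amplitude features alone or on amplitude and phase jointly is not specified, so the callable's signature and the per-frame masking strategy here are purely illustrative.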
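The adjusting unit's intensity logic amounts to a range clamp that is applied only when the hearing level has worsened relative to the historical hearing level. A short Python sketch follows; the mapping from hearing level to a preset sound intensity range and the dB figures are invented for illustration, since the patent publishes no concrete values.

```python
def preset_intensity_range(hearing_level_db: float) -> tuple:
    """Hypothetical mapping from a hearing level (dB HL) to a preset sound intensity range (dB SPL)."""
    if hearing_level_db < 40:
        return 45.0, 70.0
    if hearing_level_db < 70:
        return 55.0, 80.0
    return 65.0, 90.0


def adjust_target_intensity(target_intensity_db: float,
                            hearing_level_db: float,
                            historical_hearing_level_db: float) -> float:
    """Clamp the noise reduction target sound intensity as described by the adjusting unit."""
    # Only re-adjust when the current hearing level exceeds the historical hearing level.
    if hearing_level_db <= historical_hearing_level_db:
        return target_intensity_db

    low, high = preset_intensity_range(hearing_level_db)
    if target_intensity_db < low:
        return low              # raise to the minimum of the preset range
    if target_intensity_db > high:
        return high             # lower to the maximum of the preset range
    return target_intensity_db  # already within the preset range


# Example: hearing worsened from 45 dB HL to 55 dB HL, target sound at 42 dB SPL
# -> raised to the range minimum of 55 dB SPL.
assert adjust_target_intensity(42.0, 55.0, 45.0) == 55.0
```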
5. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-3.
6. A computer readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of claims 1-3.
CN202210079854.8A 2022-01-24 2022-01-24 Sound adjusting method, device, electronic equipment and computer readable medium Active CN114501281B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210079854.8A CN114501281B (en) 2022-01-24 2022-01-24 Sound adjusting method, device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN114501281A (en) 2022-05-13
CN114501281B (en) 2024-03-12

Family

ID=81473758

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210079854.8A Active CN114501281B (en) 2022-01-24 2022-01-24 Sound adjusting method, device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN114501281B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024066443A1 (en) * 2022-09-27 2024-04-04 海信视像科技股份有限公司 Display device and volume adjustment method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108804123A (en) * 2018-06-15 2018-11-13 歌尔科技有限公司 TWS earphone and upgrade method, device and storage medium therefor
CN108810696A (en) * 2018-06-12 2018-11-13 歌尔科技有限公司 Battery level reminding method, TWS earphone and earphone charging device
CN109218951A (en) * 2018-09-25 2019-01-15 深圳市博音科技有限公司 Hearing-aid method and system with autonomous inspection and correction function
CN110267150A (en) * 2019-07-31 2019-09-20 潍坊歌尔电子有限公司 Wireless headset, earphone charging box and communication method, system, computer medium
CN111050261A (en) * 2019-12-20 2020-04-21 深圳市易优斯科技有限公司 Hearing compensation method, device and computer readable storage medium
CN111491233A (en) * 2020-04-08 2020-08-04 江苏紫米电子技术有限公司 Method, device and equipment for reminding of charging box battery level, and storage medium
CN112599147A (en) * 2021-03-04 2021-04-02 北京嘉诚至盛科技有限公司 Audio noise reduction transmission method and device, electronic equipment and computer readable medium
CN112954563A (en) * 2019-11-26 2021-06-11 音科有限公司 Signal processing method, electronic device, apparatus and storage medium
CN113747330A (en) * 2018-10-15 2021-12-03 奥康科技有限公司 Hearing aid system and method

Also Published As

Publication number Publication date
CN114501281A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
US10679612B2 (en) Speech recognizing method and apparatus
CN111179961B (en) Audio signal processing method and device, electronic equipment and storage medium
CN111309883B (en) Man-machine dialogue method based on artificial intelligence, model training method and device
US11270690B2 (en) Method and apparatus for waking up device
CN111986691B (en) Audio processing method, device, computer equipment and storage medium
CN111883117B (en) Voice wake-up method and device
EP4266308A1 (en) Voice extraction method and apparatus, and electronic device
CN111343410A (en) Mute prompt method and device, electronic equipment and storage medium
CN112364144B (en) Interaction method, device, equipment and computer readable medium
US11822854B2 (en) Automatic volume adjustment method and apparatus, medium, and device
CN114501281B (en) Sound adjusting method, device, electronic equipment and computer readable medium
CN113963716A (en) Volume balancing method, device and equipment for talking doorbell and readable storage medium
KR20200072196A (en) Electronic device audio enhancement and method thereof
CN115775564A (en) Audio processing method and device, storage medium and intelligent glasses
CN112669878B (en) Sound gain value calculation method and device and electronic equipment
CN112863545B (en) Performance test method, device, electronic equipment and computer readable storage medium
CN111312243B (en) Equipment interaction method and device
CN113823313A (en) Voice processing method, device, equipment and storage medium
CN111276127A (en) Voice awakening method and device, storage medium and electronic equipment
CN110660399A (en) Training method and device for voiceprint recognition, terminal and computer storage medium
CN113593527B (en) Method and device for generating acoustic features, training voice model and recognizing voice
CN112017685B (en) Speech generation method, device, equipment and computer readable medium
CN112307161B (en) Method and apparatus for playing audio
CN114615609B (en) Hearing aid control method, hearing aid device, apparatus, device and computer medium
CN111650560B (en) Sound source positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant