CN116132875A - Multi-mode intelligent control method, system and storage medium for hearing-aid earphone

Multi-mode intelligent control method, system and storage medium for hearing-aid earphone

Info

Publication number
CN116132875A
Authority
CN
China
Prior art keywords
sound
hearing
gain
sound signal
mode
Prior art date
Legal status
Granted
Application number
CN202310404993.8A
Other languages
Chinese (zh)
Other versions
CN116132875B (en)
Inventor
孙宇峰
余应恩
Current Assignee
Jiuyin Technology Nanjing Co ltd
Original Assignee
Shenzhen Jiuyin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Jiuyin Technology Co ltd
Priority to CN202310404993.8A
Publication of CN116132875A
Application granted
Publication of CN116132875B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 1/00: Details of transducers, loudspeakers or microphones
    • H04R 1/10: Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R 1/1041: Mechanical or electronic switches, or control elements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 2201/00: Details of transducers, loudspeakers or microphones covered by H04R 1/00 but not provided for in any of its subgroups
    • H04R 2201/10: Details of earpieces, attachments therefor, earphones or monophonic headphones covered by H04R 1/10 but not provided for in any of its subgroups
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention discloses a multi-mode intelligent control method, system and storage medium for a hearing-aid earphone, comprising the following steps: acquiring verification feedback information from a target user according to a preset acoustic signal, setting a compensation reference for a preset working model, and initializing mode flows and mode parameters; acquiring the current sound signal through a microphone array, determining the corresponding working mode of the hearing-aid earphone, and invoking the corresponding mode flow and model parameters to apply gain to the sound signal; filtering and denoising the current sound signal to remove redundant noise, extracting the target sound signal from the current sound signal, and outputting the gained sound signal after intelligent audio compensation processing; and dynamically adjusting a hearing threshold curve according to the target user's usage data to optimize the hearing loss compensation scheme of the hearing-aid earphone. Through multi-mode intelligent control and assisted gain of the hearing-aid earphone, the invention improves the user experience, and by dynamically adjusting the hearing threshold curve it achieves accurate compensation of the user's hearing loss.

Description

Multi-mode intelligent control method, system and storage medium for hearing-aid earphone
Technical Field
The invention relates to the technical field of hearing-aid headphones, in particular to a multimode intelligent control method, a multimode intelligent control system and a storage medium of the hearing-aid headphones.
Background
The hearing-aid earphone is a consumer electronic product that helps users, including the hearing-impaired, obtain a better listening experience in specific scenes, and with population aging the number of people with age-related hearing loss has increased greatly. Hearing loss brings the elderly a series of problems such as reduced speech communication ability, reduced quality of life, lack of social activity and impaired mental health, and aging together with the growing number of hearing-impaired people has increased market demand for hearing aids that do not require professional fitting. After years of development, wearable Bluetooth and related audio products have matured, and earphones with a hearing-assistance function have appeared one after another; the hearing-assistance function amplifies external sound so that the wearer can perceive external sound information more clearly. Cost-effective hearing-assistance products that do not require fitting by the hearing-impaired solve problems such as the high unit price and complicated purchase of conventional hearing aids, and their unobtrusive appearance does not put pressure on the wearer.
Nowadays, although audio products have matured after years of development, the hearing-assistance function places high demands on earphone hardware and software, and the hearing-assistance effect of mainstream products on the market is still unsatisfactory. For example, sound may be amplified so strongly that the user cannot accept it: the user can hear the human voices in the environment but is still disturbed. Likewise, when clothing brushes against the microphone during use, or outdoor wind noise is amplified, the mode control of traditional hearing-assistance earphones cannot provide a good user experience.
Disclosure of Invention
In order to solve the technical problem, the invention provides a multi-mode intelligent control method, a multi-mode intelligent control system and a storage medium of an auxiliary hearing earphone.
The first aspect of the present invention provides a multimode intelligent control method for an auxiliary hearing earphone, comprising:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, judging a working mode corresponding to the auxiliary hearing earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
In the scheme, the verification feedback information of a target user is obtained according to a preset acoustic signal, a compensation reference of a preset working model is set based on the verification feedback information, and a mode flow and mode parameters are initialized, specifically:
When a target user wears an auxiliary hearing earphone for the first time, simulating different working modes through preset acoustic signals of the auxiliary hearing earphone, acquiring basic identity information and hearing condition description information of the target user in a preset mode, and extracting and acquiring basic characteristics of the target user by utilizing keywords;
acquiring user data with similarity meeting preset similarity standards through data retrieval according to the basic characteristics, performing aggregate analysis of hearing-aid earphone gain compensation according to the screened user data, and setting standard gain compensation of preset acoustic signals in different working modes;
initializing working mode parameters through the standard gain compensation, and acquiring feedback information of a target user on the standard gain compensation in different working modes;
judging the speech intelligibility and the sound intensity acceptance of a preset acoustic signal to a target user under standard gain compensation according to the feedback information, and acquiring the speech intelligibility and the sound intensity acceptance as verification feedback information;
and iteratively adjusting the standard gain compensation according to the verification feedback information, outputting gain compensation references in each working mode when the speech intelligibility and the sound intensity acceptance meet preset requirements, and initializing mode flow and mode parameters by using the gain compensation references.
In this scheme, the current sound signal is acquired through the microphone array, the working mode corresponding to the hearing-aid earphone is determined according to the sound signal, and the corresponding mode flow and model parameters are invoked to apply gain to the sound signal, specifically:
sensing current sound signals through a microphone array, generating a sound sensing sequence, and selecting a corresponding working mode according to a signal type label of the sound sensing sequence;
reading a gain compensation reference of a working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and performing real-time gain evaluation on a gain effect in the gain process of the sound signal;
reading the hearing preference of the target user through the historical usage data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and gain compensation benchmarks of different working modes;
constructing a matching path of the sound sensing sequence after the real-time gain and the reference signal sequence in the gain process, calculating the similarity by utilizing the dynamic time regularity, and judging whether the distance is larger than a preset distance threshold according to the dynamic time regularity;
if the distance is larger than the preset distance threshold, setting the adaptive change level of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the adjustment back to the target user in a preset manner, and applying the setting according to the level feedback selected by the user.
In this scheme, the current sound signal is filtered and denoised to remove redundant noise, the target sound signal is extracted from the current sound signal, and the gained sound signal is output after intelligent audio compensation processing, specifically:
filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the sound sensing sequence after filtering and denoising into sound segments with preset time steps, extracting sound features in each sound segment, and acquiring a preset number of sound features with high accumulated contribution degree through principal component analysis to serve as principal component directions;
acquiring the characteristic scattered point distributions of different sounds by projecting the different sound features onto the principal component directions, and distinguishing the target sound signal according to the characteristic scattered point distributions;
constructing a target sound separation model based on the U-NET network, improving the U-NET network through a convolution self-encoder, extracting sound characteristics through the encoder of the target sound separation model, and generating a mask after extracting target sound signals through a decoder;
the optimal layer number of an encoder and a decoder is obtained through iterative training, the encoder consists of two convolution modules and a pooling module, and the decoder consists of two convolution modules and an up-sampling module;
And acquiring the proportion of the target sound signal to the sound signal through the target sound separation model, extracting and separating the target sound signal, acquiring a gain compensation reference of the current working mode and the self-adaptive change level of the gain compensation reference, and performing intelligent audio compensation processing.
In this scheme, sudden sounds are suppressed during the intelligent audio compensation processing, specifically:
the method comprises the steps of monitoring the direction of a main component in a sound sensing sequence in real time, generating and obtaining characteristic scattered point distribution of different sounds according to the change of the direction of the main component, judging the sound change in the sound sequence according to dense information of the characteristic scattered point distribution, and marking a current time stamp;
under the condition of marking a time stamp, acquiring short-time energy of a target sound signal and each environment sound signal in a sound sensing sequence in a preset time step;
obtaining target sound signals and distribution of all environment sound signals in a frequency axis in a preset time step according to the short-time energy to generate high-frequency energy and low-frequency energy, and calculating the ratio of the high-frequency energy norm to the low-frequency energy norm;
judging whether the ratio is larger than a preset threshold value, if so, judging that sudden sounds exist in a preset time step, and carrying out reverse gain on the environmental sound signals according to the self-adaptive change level of the current gain compensation reference to reduce the sound intensity of the environmental sound signals.
In the scheme, a hearing threshold curve is dynamically adjusted according to the use data of a target user, and a hearing loss compensation scheme of the hearing aid earphone is optimized through the hearing threshold curve, and specifically comprises the following steps:
acquiring an acoustic signal of minimum sound intensity corresponding to each frequency which can be heard by the target user through the verification feedback information of the target user, and generating a hearing threshold curve of the target user;
storing historical use data of a target user, acquiring feedback operations of the target user on different frequencies and different sound intensities through the historical use data in preset time, screening the feedback operations of each point on a threshold curve, marking the point on the threshold curve, and extracting parameter characteristics of the screened feedback operations;
generating a new hearing threshold curve according to the parameter characteristics, adjusting hearing loss compensation according to the deviation of the parameter characteristics on the hearing threshold curve, and dynamically optimizing a hearing loss compensation scheme of a target user;
and calculating the average Manhattan distance between the new hearing threshold curve and each point on the original hearing threshold curve, and representing the change degree of the hearing threshold curve in the preset time according to the average Manhattan distance, and generating hearing early warning when the change degree is larger than a preset change degree threshold.
The second aspect of the present invention also provides a multi-mode intelligent control system for a hearing-aid earphone, the system comprising a memory and a processor, wherein the memory stores a multi-mode intelligent control method program for the hearing-aid earphone which, when executed by the processor, implements the following steps:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, judging a working mode corresponding to the auxiliary hearing earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
The third aspect of the present invention also provides a computer readable storage medium, where the computer readable storage medium includes a multi-mode intelligent control method program for a hearing aid earphone, where the multi-mode intelligent control method program for a hearing aid earphone, when executed by a processor, implements the steps of the multi-mode intelligent control method for a hearing aid earphone as set forth in any one of the above.
The invention discloses a multi-mode intelligent control method, system and storage medium for a hearing-aid earphone, comprising the following steps: acquiring verification feedback information from a target user according to a preset acoustic signal, setting a compensation reference for a preset working model, and initializing mode flows and mode parameters; acquiring the current sound signal through a microphone array, determining the corresponding working mode of the hearing-aid earphone, and invoking the corresponding mode flow and model parameters to apply gain to the sound signal; filtering and denoising the current sound signal to remove redundant noise, extracting the target sound signal from the current sound signal, and outputting the gained sound signal after intelligent audio compensation processing; and dynamically adjusting a hearing threshold curve according to the target user's usage data to optimize the hearing loss compensation scheme of the hearing-aid earphone. Through multi-mode intelligent control and assisted gain of the hearing-aid earphone, the invention improves the user experience, and by dynamically adjusting the hearing threshold curve it achieves accurate compensation of the user's hearing loss.
Drawings
FIG. 1 shows a flow chart of a multi-mode intelligent control method of a hearing aid earphone of the present invention;
FIG. 2 is a flow chart of a method for judging the corresponding working mode of the hearing aid earphone according to the sound signal;
FIG. 3 is a flow chart of a method of the present invention for intelligent audio compensation by extracting a target sound signal of a current sound signal;
fig. 4 shows a block diagram of a multi-mode intelligent control system of a hearing aid headset of the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, in the case of no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Fig. 1 shows a flowchart of a multi-mode intelligent control method of the hearing aid earphone of the present invention.
As shown in fig. 1, a first aspect of the present invention provides a multi-mode intelligent control method for an auxiliary hearing earphone, including:
s102, acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
S104, acquiring a current sound signal through a microphone array, judging a corresponding working mode of the hearing aid earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
s106, filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation;
s108, dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
When a target user wears the auxiliary hearing earphone for the first time, simulating different working modes through preset acoustic signals of the auxiliary hearing earphone, wherein the working modes comprise a conversation mode, a music mode, an auxiliary hearing mode and the like; basic identity information and hearing condition description information of a target user are obtained in a preset mode, and characteristics such as age, hearing injury and the like are extracted by utilizing keywords, so that basic characteristics of the target user are obtained; acquiring user data with similarity meeting preset similarity standards through data retrieval according to the basic characteristics, performing aggregate analysis of hearing-aid earphone gain compensation according to the screened user data, and setting standard gain compensation of preset acoustic signals in different working modes; initializing working mode parameters through the standard gain compensation, and acquiring feedback information of a target user on the standard gain compensation in different working modes; judging the speech intelligibility and the sound intensity acceptance of a preset acoustic signal to a target user under standard gain compensation according to the feedback information, and acquiring the speech intelligibility and the sound intensity acceptance as verification feedback information; and iteratively adjusting the standard gain compensation according to the verification feedback information, outputting gain compensation references under each working mode when the speech intelligibility and the sound intensity acceptance meet preset requirements, and initializing a mode flow and mode parameters by using the gain compensation references, wherein the mode flow comprises filtering denoising, gain, frequency shifting, sound source positioning and the like.
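For illustration only, the following Python sketch shows one way the first-wear calibration loop described above could be organized: start from the standard gain compensation aggregated from similar users, play the preset acoustic signals, collect the target user's speech-intelligibility and sound-intensity-acceptance feedback, and iterate until both meet the preset requirements. The function names, the feedback callback and the step sizes are illustrative assumptions and not part of the patent.
```python
import numpy as np

# Minimal sketch of the first-wear calibration loop, run once per working mode.
# `collect_feedback` is a hypothetical callback returning the pair
# (speech_intelligibility, intensity_acceptance), both scored in [0, 1];
# a real device would derive these scores from the user's responses.

def calibrate_gain_compensation(standard_gain_db, collect_feedback,
                                intelligibility_min=0.8, acceptance_min=0.8,
                                step_db=2.0, max_iters=10):
    """Iteratively adjust the per-band standard gain compensation of one mode."""
    gain_db = np.asarray(standard_gain_db, dtype=float).copy()
    for _ in range(max_iters):
        intelligibility, acceptance = collect_feedback(gain_db)
        if intelligibility >= intelligibility_min and acceptance >= acceptance_min:
            break                      # both preset requirements are met
        if intelligibility < intelligibility_min:
            gain_db += step_db         # raise gain until speech becomes intelligible
        if acceptance < acceptance_min:
            gain_db -= step_db / 2     # back off if the loudness is not accepted
    return gain_db                     # gain compensation reference for this mode
```
The references produced for the individual working modes would then initialize the mode flows and mode parameters.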
Fig. 2 shows a flowchart of a method for judging the corresponding working mode of the hearing aid earphone according to the sound signal.
According to the embodiment of the invention, the current sound signal is obtained through the microphone array, the working mode corresponding to the auxiliary hearing earphone is judged according to the sound signal, and the corresponding mode flow and model parameters are called to gain the sound signal, specifically:
s202, sensing current sound signals through a microphone array, generating a sound sensing sequence, and selecting a corresponding working mode according to a signal type label of the sound sensing sequence;
s204, reading a gain compensation reference of a working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and performing real-time gain evaluation on the gain effect in the gain process of the sound signal;
s206, reading the hearing preference of the target user through the historical use data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and gain compensation references of different working modes;
s208, constructing a matching path of the sound sensing sequence after the real-time gain and the reference signal sequence in the gain process, calculating the similarity by utilizing the dynamic time regularity, and judging whether the distance is larger than a preset distance threshold according to the dynamic time regularity;
And S210, if the distance is larger than the preset distance threshold, setting the adaptive change level of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the adjustment back to the target user in a preset manner, and applying the setting according to the level feedback selected by the user.
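As a rough illustration of steps S202 to S204, the sketch below selects a working mode from the signal-type label of the sound sensing sequence and applies that mode's gain compensation reference band by band in the frequency domain. The label-to-mode mapping, the uniform band split and all identifiers are assumptions made for the example.
```python
import numpy as np

# Illustrative sketch of S202-S204: choose the working mode from the
# signal-type label and apply the stored per-band gain compensation reference.
MODE_BY_LABEL = {"speech": "conversation", "music": "music", "ambient": "assisted_hearing"}

def apply_mode_gain(frames, signal_type_label, gain_reference_db):
    """frames: (n_frames, frame_len) array; gain_reference_db: per-band gains in dB."""
    mode = MODE_BY_LABEL.get(signal_type_label, "assisted_hearing")
    spectrum = np.fft.rfft(frames, axis=1)
    # split the spectrum into as many uniform bands as there are gain values
    bands = np.array_split(np.arange(spectrum.shape[1]), len(gain_reference_db))
    for band, gain_db in zip(bands, gain_reference_db):
        spectrum[:, band] *= 10.0 ** (gain_db / 20.0)    # apply the per-band gain
    gained = np.fft.irfft(spectrum, n=frames.shape[1], axis=1)
    return mode, gained
```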
It should be noted that the hearing preference of the target user includes characteristics such as the user's preferred volume and preferred timbre. The similarity calculation is performed by dynamic time warping: a matching path $W$ is constructed between the sound sensing sequence after the real-time gain and the reference signal sequence,

$W = (w_1, w_2, \ldots, w_K), \qquad w_k = (i, j),$

where the reference signal sequence $Q = (q_1, q_2, \ldots, q_n)$ contains $n$ signal points, the sound sensing sequence after the real-time gain $C = (c_1, c_2, \ldots, c_m)$ contains $m$ signal points, and each path element $w_k = (i, j)$ matches the $i$-th signal point of the reference signal sequence with the $j$-th signal point of the sound sensing sequence after the real-time gain; the value range of $i$ is $[1, n]$ and the value range of $j$ is $[1, m]$.

The calculation formula of the dynamic time warping distance DTW is:

$\mathrm{DTW}(Q, C) = \min_{W} \sum_{k=1}^{K} d(q_i, c_j),$

where $q_i$ and $c_j$ respectively represent signal points in the reference signal sequence and in the sound sensing sequence after the real-time gain, and $d(q_i, c_j)$ represents the Euclidean distance between these signal points.

A distance threshold is preset, and the adaptive change level of the gain compensation reference is set according to the deviation between the dynamic time warping distance and the preset distance threshold; the adaptive change level can be set by the user and may comprise one or several levels.
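A minimal Python sketch of the dynamic time warping distance defined above follows, together with one possible way of deriving the adaptive change level of the gain compensation reference from the deviation between that distance and the preset distance threshold; the level step and the maximum level are assumptions.
```python
import numpy as np

# Dynamic time warping distance between the reference signal sequence and the
# real-time-gained sound sensing sequence, accumulated over the matching path.
def dtw_distance(reference, gained):
    n, m = len(reference), len(gained)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(reference[i - 1] - gained[j - 1])    # Euclidean distance of signal points
            acc[i, j] = d + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

# Map the deviation from the preset distance threshold to an adaptive change
# level; the step of 0.5 and the maximum of 4 levels are illustrative choices.
def adaptive_change_level(dtw_dist, distance_threshold, level_step=0.5, max_level=4):
    deviation = dtw_dist - distance_threshold
    if deviation <= 0:
        return 0                                         # within the threshold, no change
    return int(min(max_level, 1 + deviation // level_step))
```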
Fig. 3 shows a flow chart of a method of the present invention for intelligent audio compensation by extracting a target sound signal of a current sound signal.
According to the embodiment of the invention, the current sound signal is subjected to filtering and denoising treatment to remove redundant noise, the target sound signal of the current sound signal is extracted, and the gain sound signal is output after intelligent audio compensation treatment, specifically:
s302, filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the sound sensing sequence after filtering and denoising into sound segments with preset time steps, extracting sound features in each sound segment, and acquiring a preset number of sound features with high accumulated contribution degree through principal component analysis to serve as principal component directions;
s304, acquiring the characteristic scattered point distributions of different sounds by projecting the different sound features onto the principal component directions, and distinguishing the target sound signal according to the characteristic scattered point distributions (a sketch of S302 to S304 follows this step list);
s306, constructing a target sound separation model based on the U-NET network, improving the U-NET network through a convolution self-encoder, extracting sound characteristics through the encoder of the target sound separation model, and generating a mask after extracting target sound signals through a decoder;
S308, obtaining the optimal layer number of an encoder and a decoder through iterative training, wherein the encoder consists of two convolution modules and a pooling module, and the decoder consists of two convolution modules and an up-sampling module;
s310, obtaining the proportion of the target sound signal to the sound signal through the target sound separation model, extracting and separating the target sound signal, obtaining the gain compensation reference of the current working mode and the self-adaptive change level of the gain compensation reference, and performing intelligent audio compensation processing.
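Steps S302 to S304 amount to selecting principal component directions by cumulative contribution and projecting the per-segment sound features onto them; a minimal sketch follows, assuming scikit-learn's PCA and a 95% cumulative-contribution target.
```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of S302-S304: keep the principal component directions whose cumulative
# contribution reaches the target and project the segment features onto them to
# obtain the characteristic scattered points used to distinguish the target sound.
def principal_directions(segment_features, cumulative_target=0.95):
    """segment_features: (n_segments, n_features) matrix of per-segment sound features."""
    pca = PCA().fit(segment_features)
    cum = np.cumsum(pca.explained_variance_ratio_)
    k = int(np.searchsorted(cum, cumulative_target)) + 1   # smallest k reaching the target
    directions = pca.components_[:k]                       # principal component directions
    scatter = pca.transform(segment_features)[:, :k]       # characteristic scattered points
    return directions, scatter
```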
It should be noted that the target sound separation model is trained with a loss function set as the mean square error until the loss function converges, so as to obtain a time-frequency mask; the proportion of the target sound signal in the sound signal is then calculated based on an ideal ratio (floating-value) mask, under the assumption that the target sound signal and the environmental sound signal do not intersect, i.e. their pointwise product is zero.
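The sketch below outlines, under stated assumptions, a separation model of the kind described in S306 to S308: an encoder of two convolution modules plus a pooling module, a decoder of two convolution modules plus an up-sampling module, and a sigmoid head that outputs a time-frequency mask, trained against the target spectrogram with a mean-square-error loss. The channel count, kernel size and single encoder/decoder level are assumptions; the patent determines the optimal number of layers by iterative training.
```python
import torch
import torch.nn as nn

# Assumed PyTorch sketch of the target sound separation model. Input is a
# magnitude spectrogram of shape (batch, 1, freq, time) with even freq/time so
# that pooling and up-sampling sizes match for the skip connection.

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU())

class TargetSoundSeparator(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.enc1 = conv_block(1, channels)          # encoder: two convolution modules
        self.enc2 = conv_block(channels, channels)
        self.pool = nn.MaxPool2d(2)                  # ... plus one pooling module
        self.dec1 = conv_block(channels, channels)   # decoder: two convolution modules
        self.up = nn.Upsample(scale_factor=2, mode="nearest")  # ... plus up-sampling
        self.dec2 = conv_block(2 * channels, channels)          # after skip concatenation
        self.mask_head = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, spectrogram):
        e = self.enc2(self.enc1(spectrogram))
        d = self.up(self.dec1(self.pool(e)))
        d = self.dec2(torch.cat([d, e], dim=1))      # U-NET style skip connection
        return self.mask_head(d)                     # time-frequency mask in [0, 1]

# Training sketch with mean square error, as stated in the note above:
# loss = nn.MSELoss()(model(mix_spec) * mix_spec, target_spec)
```
Applying the predicted mask to the mixture spectrogram gives the separated target sound signal and its proportion of the overall sound signal.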
The principal component directions in the sound sensing sequence are monitored in real time, the characteristic scattered point distributions of different sounds are generated according to changes in the principal component directions, sound changes in the sound sequence are judged from the density information of the characteristic scattered point distributions, and the current time stamp is marked. Under a marked time stamp, the short-time energy of the target sound signal and of each environmental sound signal in the sound sensing sequence within a preset time step is obtained. The short-time energy $E_i$ of the $i$-th frame signal with frame length $L$ is calculated as

$E_i = \sum_{n=1}^{L} x_i(n)^2,$

where $L$ denotes the frame length and $x_i(n)$ denotes the amplitude of the $n$-th point of the $i$-th frame signal obtained after framing. The distribution of the target sound signal and of each environmental sound signal along the frequency axis within the preset time step is obtained from the short-time energy, high-frequency energy and low-frequency energy are generated, and the ratio of the high-frequency energy norm to the low-frequency energy norm is calculated. If this ratio is larger than a preset threshold, a sudden sound is judged to exist within the preset time step, and reverse gain is applied to the environmental sound signals according to the adaptive change level of the current gain compensation reference to reduce their sound intensity.
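A small sketch of the sudden-sound test just described: per-frame short-time energy, a split of the spectrum into low-frequency and high-frequency energy, and a comparison of the ratio of their norms against a preset threshold. The 2 kHz cutoff and the ratio threshold are illustrative assumptions.
```python
import numpy as np

def short_time_energy(frames):
    """frames: (n_frames, L) array; returns E_i = sum over n of x_i(n)^2 per frame."""
    return np.sum(frames.astype(float) ** 2, axis=1)

def detect_sudden_sound(frames, sample_rate, cutoff_hz=2000.0, ratio_threshold=1.5):
    """Return (sudden_sound_present, high/low frequency energy norm ratio)."""
    spectrum = np.abs(np.fft.rfft(frames, axis=1)) ** 2           # per-frame power spectrum
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sample_rate)
    low = spectrum[:, freqs < cutoff_hz].sum(axis=1)              # low-frequency energy
    high = spectrum[:, freqs >= cutoff_hz].sum(axis=1)            # high-frequency energy
    ratio = np.linalg.norm(high) / (np.linalg.norm(low) + 1e-12)  # norm ratio over the step
    return ratio > ratio_threshold, ratio
```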
The method comprises the steps of acquiring an acoustic signal of minimum sound intensity corresponding to each frequency which can be heard by a target user through experimental feedback information of the target user, and generating a threshold curve of the target user; storing historical use data of a target user, acquiring feedback operations of the target user on different frequencies and different sound intensities through the historical use data in preset time, screening the feedback operations of each point on a threshold curve, marking the point on the threshold curve, and extracting parameter characteristics of the screened feedback operations; generating a new hearing threshold curve according to the parameter characteristics, adjusting hearing loss compensation according to the deviation of the parameter characteristics on the hearing threshold curve, and dynamically optimizing a hearing loss compensation scheme of a target user; and calculating the average Manhattan distance between the new hearing threshold curve and each point on the original hearing threshold curve, and representing the change degree of the hearing threshold curve in the preset time according to the average Manhattan distance, and generating hearing early warning when the change degree is larger than a preset change degree threshold.
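The change degree of the hearing threshold curve can be illustrated as the average Manhattan distance between the new and the original curve sampled at the same test frequencies, as in the sketch below; the warning threshold of 10 dB is an assumption.
```python
import numpy as np

def threshold_curve_change(original_db, new_db, change_threshold_db=10.0):
    """original_db, new_db: hearing thresholds (dB) at the same test frequencies."""
    original_db = np.asarray(original_db, dtype=float)
    new_db = np.asarray(new_db, dtype=float)
    avg_manhattan = np.mean(np.abs(new_db - original_db))     # average Manhattan distance
    return avg_manhattan, avg_manhattan > change_threshold_db  # (change degree, warn?)
```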
According to the embodiment of the invention, the noise scene is judged according to the sound perception sequence, specifically:
the method comprises the steps of obtaining a sound sensing sequence for preprocessing, performing FFT processing on the preprocessed sound sensing sequence, dividing the sound sensing sequence into a plurality of spectrum sub-bands, and extracting sub-band characteristics;
selecting voice data with scene labels to generate a data set, screening sub-band features, generating training data by using the data set, training a noise scene recognition model, and selecting periodic features, energy features and adjacent sub-band correlation features of the spectrum sub-bands to train the noise scene recognition model;
the data of the preset noise scene in the data set is selected to verify the output of the trained noise scene recognition model, and when the accuracy rate meets the preset standard, the noise scene recognition model is output;
inputting the sub-band characteristics of the sound perception sequence into a noise scene recognition model to recognize the current noise scene, and generating a noise environment recognition result;
constructing a database, matching the noise environment with a preset gain compensation reference, storing the noise environment and the preset gain compensation reference in the database, searching in the database according to the current noise environment identification result, and searching the gain compensation reference with the similarity meeting the preset standard as the gain compensation reference of the current noise environment.
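For the noise-scene recognition described above, the following sketch extracts sub-band energy, a crude periodicity measure and adjacent-sub-band correlation features from the FFT of the sound sensing sequence, then trains a classifier on labelled scene data. The eight-band split, the exact feature definitions and the random forest classifier are assumptions; the patent does not name a specific classifier.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def subband_features(frames, n_bands=8):
    """frames: (n_frames, L) array from a preprocessed sound sensing sequence."""
    spec = np.abs(np.fft.rfft(frames, axis=1))
    bands = np.array_split(spec, n_bands, axis=1)
    energy = np.array([b.sum(axis=1) for b in bands])           # (n_bands, n_frames)
    # crude periodicity: dominant magnitude in the spectrum of each energy envelope
    periodicity = np.array([np.abs(np.fft.rfft(e - e.mean())).max() for e in energy])
    adj_corr = np.array([np.corrcoef(energy[k], energy[k + 1])[0, 1]
                         for k in range(n_bands - 1)])          # adjacent sub-band correlation
    return np.concatenate([energy.mean(axis=1), periodicity, adj_corr])

def train_scene_model(feature_matrix, scene_labels):
    """feature_matrix: (n_clips, n_features); scene_labels: one scene name per clip."""
    model = RandomForestClassifier(n_estimators=100)
    return model.fit(feature_matrix, scene_labels)
```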
Fig. 4 shows a block diagram of a multi-mode intelligent control system of a hearing aid headset of the present invention.
The second aspect of the present invention also provides a multimode intelligent control system 4 for a hearing aid earphone, the system comprising: the memory 41 and the processor 42, wherein the memory comprises a multi-mode intelligent control method program of the hearing aid earphone, and the multi-mode intelligent control method program of the hearing aid earphone realizes the following steps when being executed by the processor:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, judging a working mode corresponding to the auxiliary hearing earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
When a target user wears the auxiliary hearing earphone for the first time, simulating different working modes through preset acoustic signals of the auxiliary hearing earphone, wherein the working modes comprise a conversation mode, a music mode, an auxiliary hearing mode and the like; basic identity information and hearing condition description information of a target user are obtained in a preset mode, and characteristics such as age, hearing injury and the like are extracted by utilizing keywords, so that basic characteristics of the target user are obtained; acquiring user data with similarity meeting preset similarity standards through data retrieval according to the basic characteristics, performing aggregate analysis of hearing-aid earphone gain compensation according to the screened user data, and setting standard gain compensation of preset acoustic signals in different working modes; initializing working mode parameters through the standard gain compensation, and acquiring feedback information of a target user on the standard gain compensation in different working modes; judging the speech intelligibility and the sound intensity acceptance of a preset acoustic signal to a target user under standard gain compensation according to the feedback information, and acquiring the speech intelligibility and the sound intensity acceptance as verification feedback information; and iteratively adjusting the standard gain compensation according to the verification feedback information, outputting gain compensation references in each working mode when the speech intelligibility and the sound intensity acceptance meet preset requirements, and initializing mode flow and mode parameters by using the gain compensation references.
According to the embodiment of the invention, the current sound signal is obtained through the microphone array, the working mode corresponding to the auxiliary hearing earphone is judged according to the sound signal, and the corresponding mode flow and model parameters are called to gain the sound signal, specifically:
sensing current sound signals through a microphone array, generating a sound sensing sequence, and selecting a corresponding working mode according to a signal type label of the sound sensing sequence;
reading a gain compensation reference of a working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and performing real-time gain evaluation on a gain effect in the gain process of the sound signal;
reading the hearing preference of the target user through the historical usage data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and gain compensation benchmarks of different working modes;
constructing a matching path of the sound sensing sequence after the real-time gain and the reference signal sequence in the gain process, calculating the similarity by utilizing the dynamic time regularity, and judging whether the distance is larger than a preset distance threshold according to the dynamic time regularity;
if the distance is larger than the preset distance threshold, setting the adaptive change level of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the adjustment back to the target user in a preset manner, and applying the setting according to the level feedback selected by the user.
It should be noted that the hearing preference of the target user includes characteristics such as the user's preferred volume and preferred timbre. The similarity calculation is performed by dynamic time warping: a matching path $W$ is constructed between the sound sensing sequence after the real-time gain and the reference signal sequence,

$W = (w_1, w_2, \ldots, w_K), \qquad w_k = (i, j),$

where the reference signal sequence $Q = (q_1, q_2, \ldots, q_n)$ contains $n$ signal points, the sound sensing sequence after the real-time gain $C = (c_1, c_2, \ldots, c_m)$ contains $m$ signal points, and each path element $w_k = (i, j)$ matches the $i$-th signal point of the reference signal sequence with the $j$-th signal point of the sound sensing sequence after the real-time gain; the value range of $i$ is $[1, n]$ and the value range of $j$ is $[1, m]$.

The calculation formula of the dynamic time warping distance DTW is:

$\mathrm{DTW}(Q, C) = \min_{W} \sum_{k=1}^{K} d(q_i, c_j),$

where $q_i$ and $c_j$ respectively represent signal points in the reference signal sequence and in the sound sensing sequence after the real-time gain, and $d(q_i, c_j)$ represents the Euclidean distance between these signal points.

A distance threshold is preset, and the adaptive change level of the gain compensation reference is set according to the deviation between the dynamic time warping distance and the preset distance threshold; the adaptive change level can be set by the user and may comprise one or several levels.
According to the embodiment of the invention, the current sound signal is subjected to filtering and denoising treatment to remove redundant noise, the target sound signal of the current sound signal is extracted, and the gain sound signal is output after intelligent audio compensation treatment, specifically:
Filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the sound sensing sequence after filtering and denoising into sound segments with preset time steps, extracting sound features in each sound segment, and acquiring a preset number of sound features with high accumulated contribution degree through principal component analysis to serve as principal component directions;
acquiring the characteristic scattered point distributions of different sounds by projecting the different sound features onto the principal component directions, and distinguishing the target sound signal according to the characteristic scattered point distributions;
constructing a target sound separation model based on the U-NET network, improving the U-NET network through a convolution self-encoder, extracting sound characteristics through the encoder of the target sound separation model, and generating a mask after extracting target sound signals through a decoder;
the optimal layer number of an encoder and a decoder is obtained through iterative training, the encoder consists of two convolution modules and a pooling module, and the decoder consists of two convolution modules and an up-sampling module;
and acquiring the proportion of the target sound signal to the sound signal through the target sound separation model, extracting and separating the target sound signal, acquiring a gain compensation reference of the current working mode and the self-adaptive change level of the gain compensation reference, and performing intelligent audio compensation processing.
It should be noted that the target sound separation model is trained with a loss function set as the mean square error until the loss function converges, so as to obtain a time-frequency mask; the proportion of the target sound signal in the sound signal is then calculated based on an ideal ratio (floating-value) mask, under the assumption that the target sound signal and the environmental sound signal do not intersect, i.e. their pointwise product is zero.
The principal component directions in the sound sensing sequence are monitored in real time, the characteristic scattered point distributions of different sounds are generated according to changes in the principal component directions, sound changes in the sound sequence are judged from the density information of the characteristic scattered point distributions, and the current time stamp is marked. Under a marked time stamp, the short-time energy of the target sound signal and of each environmental sound signal in the sound sensing sequence within a preset time step is obtained. The short-time energy $E_i$ of the $i$-th frame signal with frame length $L$ is calculated as

$E_i = \sum_{n=1}^{L} x_i(n)^2,$

where $L$ denotes the frame length and $x_i(n)$ denotes the amplitude of the $n$-th point of the $i$-th frame signal obtained after framing. The distribution of the target sound signal and of each environmental sound signal along the frequency axis within the preset time step is obtained from the short-time energy, high-frequency energy and low-frequency energy are generated, and the ratio of the high-frequency energy norm to the low-frequency energy norm is calculated. If this ratio is larger than a preset threshold, a sudden sound is judged to exist within the preset time step, and reverse gain is applied to the environmental sound signals according to the adaptive change level of the current gain compensation reference to reduce their sound intensity.
The method comprises the steps of acquiring an acoustic signal of minimum sound intensity corresponding to each frequency which can be heard by a target user through experimental feedback information of the target user, and generating a threshold curve of the target user; storing historical use data of a target user, acquiring feedback operations of the target user on different frequencies and different sound intensities through the historical use data in preset time, screening the feedback operations of each point on a threshold curve, marking the point on the threshold curve, and extracting parameter characteristics of the screened feedback operations; generating a new hearing threshold curve according to the parameter characteristics, adjusting hearing loss compensation according to the deviation of the parameter characteristics on the hearing threshold curve, and dynamically optimizing a hearing loss compensation scheme of a target user; and calculating the average Manhattan distance between the new hearing threshold curve and each point on the original hearing threshold curve, and representing the change degree of the hearing threshold curve in the preset time according to the average Manhattan distance, and generating hearing early warning when the change degree is larger than a preset change degree threshold.
The third aspect of the present invention also provides a computer readable storage medium, where the computer readable storage medium includes a multi-mode intelligent control method program for a hearing aid earphone, where the multi-mode intelligent control method program for a hearing aid earphone, when executed by a processor, implements the steps of the multi-mode intelligent control method for a hearing aid earphone as set forth in any one of the above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above described device embodiments are only illustrative, e.g. the division of the units is only one logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the various components shown or discussed may be coupled or directly coupled or communicatively coupled to each other via some interface, whether indirectly coupled or communicatively coupled to devices or units, whether electrically, mechanically, or otherwise.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the above-described integrated units of the present invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A multi-mode intelligent control method for a hearing-aid earphone, characterized by comprising the following steps:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, judging a working mode corresponding to the auxiliary hearing earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
2. The multi-mode intelligent control method of an auxiliary hearing earphone according to claim 1, wherein the method is characterized in that the verification feedback information of a target user is obtained according to a preset acoustic signal, a compensation reference of a preset working model is set based on the verification feedback information, and a mode flow and mode parameters are initialized, specifically:
when a target user wears an auxiliary hearing earphone for the first time, simulating different working modes through preset acoustic signals of the auxiliary hearing earphone, acquiring basic identity information and hearing condition description information of the target user in a preset mode, and extracting and acquiring basic characteristics of the target user by utilizing keywords;
acquiring user data with similarity meeting preset similarity standards through data retrieval according to the basic characteristics, performing aggregate analysis of hearing-aid earphone gain compensation according to the screened user data, and setting standard gain compensation of preset acoustic signals in different working modes;
initializing working mode parameters through the standard gain compensation, and acquiring feedback information of a target user on the standard gain compensation in different working modes;
judging the speech intelligibility and the sound intensity acceptance of a preset acoustic signal to a target user under standard gain compensation according to the feedback information, and acquiring the speech intelligibility and the sound intensity acceptance as verification feedback information;
And iteratively adjusting the standard gain compensation according to the verification feedback information, outputting gain compensation references in each working mode when the speech intelligibility and the sound intensity acceptance meet preset requirements, and initializing mode flow and mode parameters by using the gain compensation references.
3. The multi-mode intelligent control method of an auxiliary hearing earphone according to claim 1, wherein the method is characterized in that a current sound signal is obtained through a microphone array, a working mode corresponding to the auxiliary hearing earphone is judged according to the sound signal, and a corresponding mode flow and model parameters are called to gain the sound signal, specifically:
sensing current sound signals through a microphone array, generating a sound sensing sequence, and selecting a corresponding working mode according to a signal type label of the sound sensing sequence;
reading a gain compensation reference of a working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and performing real-time gain evaluation on a gain effect in the gain process of the sound signal;
reading the hearing preference of the target user through the historical usage data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and gain compensation benchmarks of different working modes;
Constructing a matching path of the sound sensing sequence after the real-time gain and the reference signal sequence in the gain process, calculating the similarity by utilizing the dynamic time regularity, and judging whether the distance is larger than a preset distance threshold according to the dynamic time regularity;
if the distance is larger than the preset distance threshold, setting the adaptive change level of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the adjustment back to the target user in a preset manner, and applying the setting according to the level feedback selected by the user.
4. The multi-mode intelligent control method of the hearing-aid earphone according to claim 1, characterized in that the current sound signal is filtered and denoised to remove redundant noise, the target sound signal of the current sound signal is extracted, and the gained sound signal is output after intelligent audio compensation processing, specifically:
filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the filtered and denoised sound sensing sequence into sound segments of a preset time step, extracting the sound features in each sound segment, and selecting, through principal component analysis, a preset number of sound features with high cumulative contribution as the principal component directions;
projecting the different sound features onto the principal component directions to obtain the feature scatter distributions of different sounds, and distinguishing the target sound signal according to the feature scatter distributions;
constructing a target sound separation model based on a U-NET network, improving the U-NET network with a convolutional autoencoder, extracting the sound features with the encoder of the target sound separation model, and generating a mask for extracting the target sound signal with the decoder;
obtaining the optimal number of encoder and decoder layers through iterative training, the encoder consisting of two convolution modules and a pooling module, and the decoder consisting of two convolution modules and an up-sampling module;
and obtaining, through the target sound separation model, the proportion of the target sound signal in the sound signal, extracting and separating the target sound signal, acquiring the gain compensation reference of the current working mode and the adaptive change level of the gain compensation reference, and performing the intelligent audio compensation processing.
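Claim 4 specifies an encoder of two convolution modules plus a pooling module and a decoder of two convolution modules plus an up-sampling module, with the optimal number of levels found by iterative training. The PyTorch sketch below shows a single-level version of such a masking separator operating on magnitude spectrograms; the bin count, channel width and the omission of skip connections and of the convolutional-autoencoder refinement are simplifications for illustration, not the patented model.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 1-D convolution modules, mirroring the claim's encoder/decoder building block."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

class TinyUNetSeparator(nn.Module):
    """Single-level sketch: encoder = 2 convs + pooling, decoder = 2 convs + up-sampling,
    producing a sigmoid mask over the magnitude spectrogram (n_bins frequency bins)."""
    def __init__(self, n_bins: int = 257, hidden: int = 64):
        super().__init__()
        self.enc = ConvBlock(n_bins, hidden)
        self.pool = nn.MaxPool1d(2)
        self.dec = ConvBlock(hidden, hidden)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.mask_head = nn.Conv1d(hidden, n_bins, kernel_size=1)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, n_bins, frames); frames assumed even so the pool/up-sample pair cancels
        h = self.pool(self.enc(spec))
        h = self.up(self.dec(h))
        mask = torch.sigmoid(self.mask_head(h))   # soft mask selecting the target sound
        return mask * spec                        # masked spectrogram of the target sound
```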
5. The multi-mode intelligent control method of the hearing-aid earphone according to claim 4, characterized in that sudden sound suppression is performed during the intelligent audio compensation processing, specifically:
monitoring the principal component directions of the sound sensing sequence in real time, obtaining the feature scatter distributions of different sounds from changes in the principal component directions, judging sound changes in the sound sequence from the density information of the feature scatter distributions, and marking the current timestamp;
when a timestamp is marked, acquiring the short-time energy of the target sound signal and of each environmental sound signal in the sound sensing sequence within a preset time step;
obtaining, from the short-time energy, the distribution of the target sound signal and of each environmental sound signal along the frequency axis within the preset time step, generating high-frequency energy and low-frequency energy, and calculating the ratio of the high-frequency energy norm to the low-frequency energy norm;
and judging whether the ratio is larger than a preset threshold; if so, judging that a sudden sound exists within the preset time step, and applying reverse gain to the environmental sound signals according to the adaptive change level of the current gain compensation reference to reduce their sound intensity.
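A small sketch of the sudden-sound test in claim 5: the spectrum of one analysis frame is split into a low band and a high band, the ratio of the band-energy norms is compared to a threshold, and the environmental signal is attenuated by a reverse gain tied to the adaptive change level. The 2 kHz split point, the 1.5 ratio threshold and the 3 dB-per-level attenuation step are assumed values for illustration only.

```python
import numpy as np

def sudden_sound_flag(frame: np.ndarray, sample_rate: int,
                      split_hz: float = 2000.0, ratio_threshold: float = 1.5) -> bool:
    """Flag a sudden sound in one analysis frame by comparing the norms of the
    high-band and low-band short-time energy."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2                    # short-time energy spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)      # frequency axis in Hz
    high = np.linalg.norm(spectrum[freqs >= split_hz])
    low = np.linalg.norm(spectrum[freqs < split_hz]) + 1e-12      # avoid division by zero
    return (high / low) > ratio_threshold

def suppress_ambient(ambient: np.ndarray, change_level: int, step_db: float = 3.0) -> np.ndarray:
    """Apply a reverse gain (attenuation) to the environmental signal, scaled by the
    adaptive change level of the current gain compensation reference."""
    attenuation_db = change_level * step_db
    return ambient * (10.0 ** (-attenuation_db / 20.0))
```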
6. The multi-mode intelligent control method of the hearing-aid earphone according to claim 1, characterized in that the hearing threshold curve is dynamically adjusted according to the usage data of the target user, and the hearing loss compensation scheme of the hearing-aid earphone is optimized through the hearing threshold curve, specifically:
acquiring, from the verification feedback information of the target user, the acoustic signal of minimum sound intensity that the target user can hear at each frequency, and generating the hearing threshold curve of the target user;
storing the historical usage data of the target user, acquiring from the historical usage data within a preset time the feedback operations of the target user at different frequencies and sound intensities, screening the feedback operations for each point on the threshold curve, marking those points on the threshold curve, and extracting the parameter features of the screened feedback operations;
generating a new hearing threshold curve according to the parameter features, adjusting the hearing loss compensation according to the deviation of the parameter features on the hearing threshold curve, and dynamically optimizing the hearing loss compensation scheme of the target user;
and calculating the average Manhattan distance between corresponding points of the new hearing threshold curve and the original hearing threshold curve, characterizing the degree of change of the hearing threshold curve within the preset time by the average Manhattan distance, and generating a hearing early warning when the degree of change is larger than a preset change-degree threshold.
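The change measure in claim 6 is the average Manhattan (L1) distance between matching points of the old and new hearing threshold curves, sampled at the same audiometric frequencies. A brief sketch follows; the 10 dB warning threshold is an assumed value, not one specified by the patent.

```python
import numpy as np

def average_manhattan_distance(old_curve: np.ndarray, new_curve: np.ndarray) -> float:
    """Average Manhattan (L1) distance between corresponding points of two
    hearing threshold curves sampled at the same frequencies."""
    return float(np.mean(np.abs(new_curve - old_curve)))

def hearing_warning(old_curve: np.ndarray, new_curve: np.ndarray,
                    change_threshold_db: float = 10.0) -> bool:
    """Raise a hearing early warning when the curve has drifted by more than the
    preset change-degree threshold within the observation window."""
    return average_manhattan_distance(old_curve, new_curve) > change_threshold_db
```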
7. A multi-mode intelligent control system for a hearing-aid earphone, characterized in that the system comprises a multi-mode intelligent control method program for the hearing-aid earphone which, when executed by a processor, implements the following steps:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working mode based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, determining the working mode corresponding to the hearing-aid earphone according to the sound signal, and invoking the corresponding mode flow and model parameters to apply gain to the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting the target sound signal of the current sound signal, and outputting the gained sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the usage data of the target user, and optimizing the hearing loss compensation scheme of the hearing-aid earphone through the hearing threshold curve.
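For orientation only, a control-flow skeleton of the per-frame processing implied by claim 7. Every callable here is a hypothetical placeholder for the corresponding stage (mode classification, denoising, target sound separation, audio compensation); the interfaces are assumptions introduced for this sketch, not defined by the patent.

```python
from typing import Callable
import numpy as np

class HearingAidController:
    """Chains the processing stages of the system claim for one microphone frame."""
    def __init__(self,
                 classify_mode: Callable[[np.ndarray], str],
                 denoise: Callable[[np.ndarray], np.ndarray],
                 separate_target: Callable[[np.ndarray], np.ndarray],
                 compensate: Callable[[np.ndarray, str], np.ndarray]):
        self.classify_mode = classify_mode
        self.denoise = denoise
        self.separate_target = separate_target
        self.compensate = compensate

    def process_frame(self, mic_frame: np.ndarray) -> np.ndarray:
        mode = self.classify_mode(mic_frame)       # select the working mode from the signal
        clean = self.denoise(mic_frame)            # filter and denoise the current sound signal
        target = self.separate_target(clean)       # extract the target sound signal
        return self.compensate(target, mode)       # output the gained, compensated signal
```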
8. The multi-mode intelligent control system according to claim 7, characterized in that the current sound signal is acquired through the microphone array, the working mode corresponding to the hearing-aid earphone is determined according to the sound signal, and the corresponding mode flow and model parameters are invoked to apply gain to the sound signal, specifically:
sensing the current sound signal through the microphone array, generating a sound sensing sequence, and selecting the corresponding working mode according to the signal type label of the sound sensing sequence;
reading the gain compensation reference of the working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and evaluating the gain effect in real time during the gain process of the sound signal;
reading the hearing preference of the target user from the historical usage data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and the gain compensation references of the different working modes;
constructing a matching path between the real-time gained sound sensing sequence and the reference signal sequence during the gain process, calculating their similarity by dynamic time warping, and judging whether the dynamic time warping distance is larger than a preset distance threshold;
and if it is larger, setting the adaptive change level of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the result back to the target user in a preset manner, and applying the setting according to the level feedback selected by the user.
9. The multi-mode intelligent control system according to claim 7, characterized in that the current sound signal is filtered and denoised to remove redundant noise, the target sound signal of the current sound signal is extracted, and the gained sound signal is output after intelligent audio compensation processing, specifically:
filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the filtered and denoised sound sensing sequence into sound segments of a preset time step, extracting the sound features in each sound segment, and selecting, through principal component analysis, a preset number of sound features with high cumulative contribution as the principal component directions;
projecting the different sound features onto the principal component directions to obtain the feature scatter distributions of different sounds, and distinguishing the target sound signal according to the feature scatter distributions;
constructing a target sound separation model based on a U-NET network, improving the U-NET network with a convolutional autoencoder, extracting the sound features with the encoder of the target sound separation model, and generating a mask for extracting the target sound signal with the decoder;
obtaining the optimal number of encoder and decoder layers through iterative training, the encoder consisting of two convolution modules and a pooling module, and the decoder consisting of two convolution modules and an up-sampling module;
and obtaining, through the target sound separation model, the proportion of the target sound signal in the sound signal, extracting and separating the target sound signal, acquiring the gain compensation reference of the current working mode and the adaptive change level of the gain compensation reference, and performing the intelligent audio compensation processing.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a multi-mode intelligent control method program for a hearing-aid earphone, and when the program is executed by a processor, the steps of the multi-mode intelligent control method of the hearing-aid earphone according to any one of claims 1 to 6 are implemented.
CN202310404993.8A 2023-04-17 2023-04-17 Multi-mode intelligent control method, system and storage medium for hearing-aid earphone Active CN116132875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310404993.8A CN116132875B (en) 2023-04-17 2023-04-17 Multi-mode intelligent control method, system and storage medium for hearing-aid earphone

Publications (2)

Publication Number Publication Date
CN116132875A true CN116132875A (en) 2023-05-16
CN116132875B CN116132875B (en) 2023-07-04

Family

ID=86312170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310404993.8A Active CN116132875B (en) 2023-04-17 2023-04-17 Multi-mode intelligent control method, system and storage medium for hearing-aid earphone

Country Status (1)

Country Link
CN (1) CN116132875B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050069162A1 (en) * 2003-09-23 2005-03-31 Simon Haykin Binaural adaptive hearing aid
CN101755468A (en) * 2007-07-18 2010-06-23 迪特马·鲁伊沙克 User-adaptable hearing aid comprising an initialization module
US20110038496A1 (en) * 2009-08-17 2011-02-17 Spear Labs, Llc Hearing enhancement system and components thereof
CN107113516A (en) * 2014-12-22 2017-08-29 Gn瑞声达A/S Diffusion noise is listened to
US20180021176A1 (en) * 2015-01-22 2018-01-25 Eers Global Technologies Inc. Active hearing protection device and method therefore
US20190320946A1 (en) * 2018-04-18 2019-10-24 Matthew Bromwich Computer-implemented dynamically-adjustable audiometer
CN112334057A (en) * 2018-04-13 2021-02-05 康查耳公司 Hearing assessment and configuration of hearing assistance devices
CN113411707A (en) * 2021-06-17 2021-09-17 歌尔智能科技有限公司 Auxiliary listening earphone, control method, device and system thereof, and readable medium
US11627421B1 (en) * 2022-05-18 2023-04-11 Shenzhen Tingduoduo Technology Co., Ltd. Method for realizing hearing aid function based on bluetooth headset chip and a bluetooth headset


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116634344A (en) * 2023-07-24 2023-08-22 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment
CN116634344B (en) * 2023-07-24 2023-10-27 云天智能信息(深圳)有限公司 Intelligent remote monitoring method, system and storage medium based on hearing aid equipment

Also Published As

Publication number Publication date
CN116132875B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
JP4150798B2 (en) Digital filtering method, digital filter device, digital filter program, and computer-readable recording medium
US10966033B2 (en) Systems and methods for modifying an audio signal using custom psychoacoustic models
EP3598442B1 (en) Systems and methods for modifying an audio signal using custom psychoacoustic models
US8504360B2 (en) Automatic sound recognition based on binary time frequency units
US10909995B2 (en) Systems and methods for encoding an audio signal using custom psychoacoustic models
KR20050115857A (en) System and method for speech processing using independent component analysis under stability constraints
JP4150795B2 (en) Hearing assistance device, audio signal processing method, audio processing program, computer-readable recording medium, and recorded apparatus
CN112185410B (en) Audio processing method and device
CN116132875B (en) Multi-mode intelligent control method, system and storage medium for hearing-aid earphone
CN115884032B (en) Smart call noise reduction method and system for feedback earphone
CN114338623A (en) Audio processing method, device, equipment, medium and computer program product
CN113012710A (en) Audio noise reduction method and storage medium
CN113949955A (en) Noise reduction processing method and device, electronic equipment, earphone and storage medium
CN115314823A (en) Hearing aid method, system and equipment based on digital sounding chip
KR102062454B1 (en) Music genre classification apparatus and method
CN113823301A (en) Training method and device of voice enhancement model and voice enhancement method and device
US11224360B2 (en) Systems and methods for evaluating hearing health
CN115223584B (en) Audio data processing method, device, equipment and storage medium
EP4207812A1 (en) Method for audio signal processing on a hearing system, hearing system and neural network for audio signal processing
CN106790963B (en) Audio signal control method and device
CN110767238B (en) Blacklist identification method, device, equipment and storage medium based on address information
Dai et al. An improved model of masking effects for robust speech recognition system
Pourmand et al. Computational auditory models in predicting noise reduction performance for wideband telephony applications
CN115376501B (en) Voice enhancement method and device, storage medium and electronic equipment
CN117238311B (en) Speech separation enhancement method and system in multi-sound source and noise environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 801, Building B, Tengfei Building, No. 88 Jiangmiao Road, Nanjing Area, China (Jiangsu) Pilot Free Trade Zone, Nanjing City, Jiangsu Province, 210000

Patentee after: Jiuyin Technology (Nanjing) Co.,Ltd.

Address before: 518000 Room 402, Building 6, Zhongkegu Industrial Park, Zhonghuan Avenue, Shanxia Community, Pinghu Street, Longgang District, Shenzhen City, Guangdong Province

Patentee before: SHENZHEN JIUYIN TECHNOLOGY CO.,LTD.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A multi-mode intelligent control method, system, and storage medium for auxiliary hearing earphones

Granted publication date: 20230704

Pledgee: Bank of China Limited Nanjing Jiangbei New Area Branch

Pledgor: Jiuyin Technology (Nanjing) Co.,Ltd.

Registration number: Y2024980013107