Disclosure of Invention
In order to solve the above technical problem, the invention provides a multi-mode intelligent control method, a multi-mode intelligent control system, and a storage medium for an auxiliary hearing earphone.
The first aspect of the present invention provides a multi-mode intelligent control method for an auxiliary hearing earphone, comprising:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, judging a working mode corresponding to the auxiliary hearing earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
In the scheme, the verification feedback information of a target user is obtained according to a preset acoustic signal, a compensation reference of a preset working model is set based on the verification feedback information, and a mode flow and mode parameters are initialized, specifically:
When a target user wears an auxiliary hearing earphone for the first time, simulating different working modes through preset acoustic signals of the auxiliary hearing earphone, acquiring basic identity information and hearing condition description information of the target user in a preset mode, and extracting and acquiring basic characteristics of the target user by utilizing keywords;
acquiring user data with similarity meeting preset similarity standards through data retrieval according to the basic characteristics, performing aggregate analysis of hearing-aid earphone gain compensation according to the screened user data, and setting standard gain compensation of preset acoustic signals in different working modes;
initializing working mode parameters through the standard gain compensation, and acquiring feedback information of a target user on the standard gain compensation in different working modes;
judging the speech intelligibility and the sound intensity acceptance of a preset acoustic signal to a target user under standard gain compensation according to the feedback information, and acquiring the speech intelligibility and the sound intensity acceptance as verification feedback information;
and iteratively adjusting the standard gain compensation according to the verification feedback information, outputting gain compensation references in each working mode when the speech intelligibility and the sound intensity acceptance meet preset requirements, and initializing mode flow and mode parameters by using the gain compensation references.
In this scheme, the current sound signal is acquired through the microphone array, the working mode corresponding to the auxiliary hearing earphone is judged according to the sound signal, and the corresponding mode flow and model parameters are called to gain the sound signal, specifically:
sensing current sound signals through a microphone array, generating a sound sensing sequence, and selecting a corresponding working mode according to a signal type label of the sound sensing sequence;
reading a gain compensation reference of a working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and performing real-time gain evaluation on a gain effect in the gain process of the sound signal;
reading the hearing preference of the target user from the historical use data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and the gain compensation references of different working modes;
constructing a matching path between the real-time-gained sound sensing sequence and the reference signal sequence in the gain process, calculating the similarity by utilizing dynamic time warping, and judging whether the dynamic time warping distance is larger than a preset distance threshold;
if the distance is larger than the preset distance threshold, setting the adaptive change grade of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the adaptive change grade back to the target user in a preset mode, and applying the setting according to the grade feedback selected by the user.
In this scheme, the current sound signal is filtered and denoised to remove redundant noise, the target sound signal of the current sound signal is extracted, and after intelligent audio compensation processing, the gained sound signal is output, specifically:
filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the sound sensing sequence after filtering and denoising into sound segments with preset time steps, extracting sound features in each sound segment, and acquiring a preset number of sound features with high accumulated contribution degree through principal component analysis to serve as principal component directions;
projecting different sound features onto the principal component directions to acquire feature scatter distributions of different sounds, and distinguishing the target sound signal according to the feature scatter distributions;
constructing a target sound separation model based on the U-NET network, improving the U-NET network through a convolution self-encoder, extracting sound characteristics through the encoder of the target sound separation model, and generating a mask after extracting target sound signals through a decoder;
the optimal layer number of an encoder and a decoder is obtained through iterative training, the encoder consists of two convolution modules and a pooling module, and the decoder consists of two convolution modules and an up-sampling module;
And acquiring the proportion of the target sound signal to the sound signal through the target sound separation model, extracting and separating the target sound signal, acquiring a gain compensation reference of the current working mode and the self-adaptive change level of the gain compensation reference, and performing intelligent audio compensation processing.
In this scheme, sudden sounds are suppressed during the intelligent audio compensation processing, specifically:
the method comprises the steps of monitoring the principal component direction in the sound sensing sequence in real time, generating feature scatter distributions of different sounds according to changes of the principal component direction, judging the sound change in the sound sequence according to the density information of the feature scatter distributions, and marking the current timestamp;
under the condition of marking a time stamp, acquiring short-time energy of a target sound signal and each environment sound signal in a sound sensing sequence in a preset time step;
obtaining target sound signals and distribution of all environment sound signals in a frequency axis in a preset time step according to the short-time energy to generate high-frequency energy and low-frequency energy, and calculating the ratio of the high-frequency energy norm to the low-frequency energy norm;
judging whether the ratio is larger than a preset threshold value, if so, judging that sudden sounds exist in a preset time step, and carrying out reverse gain on the environmental sound signals according to the self-adaptive change level of the current gain compensation reference to reduce the sound intensity of the environmental sound signals.
In the scheme, a hearing threshold curve is dynamically adjusted according to the use data of a target user, and a hearing loss compensation scheme of the hearing aid earphone is optimized through the hearing threshold curve, and specifically comprises the following steps:
acquiring an acoustic signal of minimum sound intensity corresponding to each frequency which can be heard by the target user through the verification feedback information of the target user, and generating a hearing threshold curve of the target user;
storing historical use data of a target user, acquiring feedback operations of the target user on different frequencies and different sound intensities through the historical use data in preset time, screening the feedback operations of each point on a threshold curve, marking the point on the threshold curve, and extracting parameter characteristics of the screened feedback operations;
generating a new hearing threshold curve according to the parameter characteristics, adjusting hearing loss compensation according to the deviation of the parameter characteristics on the hearing threshold curve, and dynamically optimizing a hearing loss compensation scheme of a target user;
and calculating the average Manhattan distance between the new hearing threshold curve and each point on the original hearing threshold curve, and representing the change degree of the hearing threshold curve in the preset time according to the average Manhattan distance, and generating hearing early warning when the change degree is larger than a preset change degree threshold.
The second aspect of the present invention also provides a multi-mode intelligent control system for an auxiliary hearing earphone, the system comprising a memory and a processor, wherein the memory stores a multi-mode intelligent control method program for the auxiliary hearing earphone, and the program, when executed by the processor, implements the following steps:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, judging a working mode corresponding to the auxiliary hearing earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
The third aspect of the present invention also provides a computer readable storage medium, where the computer readable storage medium includes a multi-mode intelligent control method program for a hearing aid earphone, where the multi-mode intelligent control method program for a hearing aid earphone, when executed by a processor, implements the steps of the multi-mode intelligent control method for a hearing aid earphone as set forth in any one of the above.
The invention discloses a multi-mode intelligent control method, system, and storage medium for an auxiliary hearing earphone. The method comprises: acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model, and initializing a mode flow and mode parameters; acquiring a current sound signal through a microphone array, judging the working mode corresponding to the hearing aid earphone, and calling the corresponding mode flow and model parameters to gain the sound signal; filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gained sound signal after intelligent audio compensation processing; and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the hearing aid earphone. Through multi-mode intelligent control and auxiliary gain of the auxiliary hearing earphone, the invention improves the user experience, and by dynamically adjusting the hearing threshold curve, realizes accurate compensation of the user's hearing loss.
Detailed Description
In order that the above objects, features, and advantages of the present invention may be more clearly understood, a more particular description of the invention is rendered below with reference to the accompanying drawings and the detailed description. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
Fig. 1 shows a flowchart of a multi-mode intelligent control method of the hearing aid earphone of the present invention.
As shown in fig. 1, a first aspect of the present invention provides a multi-mode intelligent control method for an auxiliary hearing earphone, including:
S102, acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
S104, acquiring a current sound signal through a microphone array, judging the working mode corresponding to the hearing aid earphone according to the sound signal, and calling the corresponding mode flow and model parameters to gain the sound signal;
S106, filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gained sound signal after intelligent audio compensation;
S108, dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
When a target user wears the auxiliary hearing earphone for the first time, different working modes are simulated through preset acoustic signals of the auxiliary hearing earphone, where the working modes include a conversation mode, a music mode, an auxiliary hearing mode, and the like; basic identity information and hearing condition description information of the target user are obtained in a preset mode, and characteristics such as age and hearing impairment are extracted by utilizing keywords, so that the basic characteristics of the target user are obtained; user data whose similarity meets a preset similarity standard is acquired through data retrieval according to the basic characteristics, aggregate analysis of hearing-aid earphone gain compensation is performed on the screened user data, and standard gain compensation of the preset acoustic signals in different working modes is set; working mode parameters are initialized through the standard gain compensation, and feedback information of the target user on the standard gain compensation in different working modes is acquired; the speech intelligibility and the sound intensity acceptance of the preset acoustic signal for the target user under standard gain compensation are judged according to the feedback information and acquired as verification feedback information; the standard gain compensation is iteratively adjusted according to the verification feedback information, gain compensation references under each working mode are output when the speech intelligibility and the sound intensity acceptance meet preset requirements, and the mode flow and mode parameters are initialized with the gain compensation references, where the mode flow includes filtering and denoising, gain, frequency shifting, sound source localization, and the like.
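As an illustration of the iterative adjustment described above, the loop below sketches how a per-mode gain compensation reference could be nudged until the user's feedback scores meet preset requirements. This is a minimal sketch under stated assumptions: the feedback scoring function, target values, and step size are hypothetical and not taken from the disclosure.

```python
def adjust_gain_compensation(gain_db, get_feedback,
                             intelligibility_target=0.9,
                             acceptance_target=0.8,
                             step_db=2.0, max_iters=20):
    """Iteratively nudge a per-mode gain compensation reference until the
    user's speech intelligibility and sound-intensity acceptance scores
    (both in [0, 1]) meet preset requirements."""
    for _ in range(max_iters):
        intelligibility, acceptance = get_feedback(gain_db)
        if intelligibility >= intelligibility_target and acceptance >= acceptance_target:
            break
        if intelligibility < intelligibility_target:
            gain_db += step_db          # too quiet or unclear: raise gain
        elif acceptance < acceptance_target:
            gain_db -= step_db          # too loud or uncomfortable: lower gain
    return gain_db

# Hypothetical feedback model: intelligibility rises with gain,
# acceptance falls once gain exceeds a comfort point.
def simulated_feedback(gain_db):
    intelligibility = min(1.0, max(0.0, gain_db / 20.0))
    acceptance = min(1.0, max(0.0, (40.0 - gain_db) / 20.0))
    return intelligibility, acceptance

reference = adjust_gain_compensation(10.0, simulated_feedback)
```

With the simulated feedback above, the gain climbs from 10 dB until both scores satisfy their targets; a real system would replace `simulated_feedback` with the user's actual responses to the preset acoustic signals.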
Fig. 2 shows a flowchart of a method for judging the corresponding working mode of the hearing aid earphone according to the sound signal.
According to the embodiment of the invention, the current sound signal is obtained through the microphone array, the working mode corresponding to the auxiliary hearing earphone is judged according to the sound signal, and the corresponding mode flow and model parameters are called to gain the sound signal, specifically:
S202, sensing the current sound signal through the microphone array, generating a sound sensing sequence, and selecting a corresponding working mode according to the signal type label of the sound sensing sequence;
S204, reading the gain compensation reference of the working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and performing real-time evaluation of the gain effect during the gain process;
S206, reading the hearing preference of the target user from the historical use data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and the gain compensation references of different working modes;
S208, constructing a matching path between the real-time-gained sound sensing sequence and the reference signal sequence in the gain process, calculating the similarity by utilizing dynamic time warping, and judging whether the dynamic time warping distance is larger than a preset distance threshold;
S210, if the distance is larger than the preset distance threshold, setting the adaptive change grade of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the adaptive change grade back to the target user in a preset mode, and applying the setting according to the grade feedback selected by the user.
It should be noted that the hearing preference of the target user includes characteristics such as the user's preferred volume and preferred timbre. The similarity calculation is performed by dynamic time warping, and the matching path between the real-time-gained sound sensing sequence and the reference signal sequence is constructed as:

$W = \{w_1, w_2, \ldots, w_K\}, \quad w_k = (i, j)$

where $X = \{x_1, x_2, \ldots, x_n\}$ represents the reference signal sequence, $Y = \{y_1, y_2, \ldots, y_m\}$ represents the real-time-gained sound sensing sequence, and $w_k = (i, j)$ represents the matching of the $i$-th signal point of the reference signal sequence with the $j$-th signal point of the real-time-gained sound sensing sequence; the value range of $i$ is $[1, n]$ and the value range of $j$ is $[1, m]$.

The dynamic time warping distance DTW is calculated as:

$\mathrm{DTW}(X, Y) = \min_{W} \sum_{k=1}^{K} d(x_{i_k}, y_{j_k}), \quad d(x_{i_k}, y_{j_k}) = \lVert x_{i_k} - y_{j_k} \rVert$

where $x_{i_k}$ and $y_{j_k}$ respectively represent signal points in the reference signal sequence and in the real-time-gained sound sensing sequence, and $d(x_{i_k}, y_{j_k})$ represents the Euclidean distance between them.
A distance threshold is preset, and the adaptive change level of the gain compensation reference is set according to the deviation between the dynamic time warping distance and the preset distance threshold, where the adaptive change level can be set by the user and can be one level or multiple levels.
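The dynamic time warping distance used above can be computed with the standard dynamic-programming recurrence. The sketch below is a minimal pure-Python illustration for one-dimensional sequences; the sample sequences and the distance threshold are hypothetical.

```python
def dtw_distance(ref, gained):
    """Dynamic time warping distance between a reference signal sequence
    and the real-time-gained sound sensing sequence, using per-point
    absolute (1-D Euclidean) distance and the standard DP recurrence."""
    n, m = len(ref), len(gained)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - gained[j - 1])   # point-wise distance
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch the reference
                                  dp[i][j - 1],      # stretch the gained signal
                                  dp[i - 1][j - 1])  # match both points
    return dp[n][m]

reference_seq = [0.0, 1.0, 2.0, 1.0, 0.0]
gained_seq = [0.0, 1.1, 1.9, 1.0, 0.1]
distance = dtw_distance(reference_seq, gained_seq)
DISTANCE_THRESHOLD = 0.5   # hypothetical preset distance threshold
needs_adaptation = distance > DISTANCE_THRESHOLD
```

Only when the distance exceeds the preset threshold would the adaptive change level of the gain compensation reference be set from the deviation and fed back to the user.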
Fig. 3 shows a flow chart of a method of the present invention for intelligent audio compensation by extracting a target sound signal of a current sound signal.
According to the embodiment of the invention, the current sound signal is subjected to filtering and denoising treatment to remove redundant noise, the target sound signal of the current sound signal is extracted, and the gain sound signal is output after intelligent audio compensation treatment, specifically:
S302, filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the filtered and denoised sound sensing sequence into sound segments of a preset time step, extracting sound features in each sound segment, and acquiring a preset number of sound features with high accumulated contribution through principal component analysis to serve as principal component directions;
S304, projecting different sound features onto the principal component directions to obtain feature scatter distributions of different sounds, and distinguishing the target sound signal according to the feature scatter distributions;
S306, constructing a target sound separation model based on the U-NET network, improving the U-NET network through a convolutional autoencoder, extracting sound features through the encoder of the target sound separation model, and generating a mask after extracting the target sound signal through the decoder;
S308, obtaining the optimal number of layers of the encoder and decoder through iterative training, wherein the encoder consists of two convolution modules and a pooling module, and the decoder consists of two convolution modules and an up-sampling module;
S310, obtaining the proportion of the target sound signal in the sound signal through the target sound separation model, extracting and separating the target sound signal, obtaining the gain compensation reference of the current working mode and the adaptive change level of the gain compensation reference, and performing intelligent audio compensation processing.
It should be noted that, when training the target sound separation model, the loss function is set as the mean square error and training continues until the loss function converges, yielding a time-frequency mask; assuming the target sound signal and the environmental sound signal do not overlap, i.e., the element-wise product of the target sound signal and the environmental sound signal is 0, the proportion of the target sound signal in the sound signal is calculated based on an ideal floating-value (ratio) mask.
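Under the stated assumption that the target and environmental signals do not overlap in the time-frequency plane, the ideal floating-value (ratio) mask and the target proportion can be sketched as follows. The toy magnitude grids are illustrative only; a real model would predict the mask from learned features.

```python
import numpy as np

def ideal_ratio_mask(target_mag, noise_mag, eps=1e-8):
    """Ideal ratio (floating-value) mask over a time-frequency magnitude grid:
    each entry is the fraction of mixture energy belonging to the target."""
    return target_mag**2 / (target_mag**2 + noise_mag**2 + eps)

# Toy 2x3 time-frequency magnitude grids; where the signals do not
# overlap, the mask is ~1 on target bins and ~0 on noise-only bins.
target = np.array([[3.0, 0.0, 2.0],
                   [0.0, 4.0, 0.0]])
noise = np.array([[0.0, 1.0, 0.0],
                  [2.0, 0.0, 1.0]])
mask = ideal_ratio_mask(target, noise)
separated = mask * (target + noise)   # apply the mask to the mixture magnitude
target_proportion = target.sum() / (target.sum() + noise.sum())
```

Multiplying the mixture by the mask recovers the target magnitudes almost exactly under the no-overlap assumption; with overlapping signals the mask instead yields a soft weighting.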
The principal component direction in the sound sensing sequence is monitored in real time, feature scatter distributions of different sounds are generated according to changes of the principal component direction, the sound change in the sound sequence is judged according to the density information of the feature scatter distributions, and the current timestamp is marked. Under the condition that a timestamp is marked, the short-time energy of the target sound signal and of each environmental sound signal in the sound sensing sequence within a preset time step is obtained; the short-time energy $E_i$ of the $i$-th frame signal with frame length $L$ is calculated as:

$E_i = \sum_{n=1}^{L} x_i(n)^2$

where $L$ represents the frame length and $x_i(n)$ represents the amplitude of the $n$-th point in the $i$-th frame signal obtained after framing. The distributions of the target sound signal and each environmental sound signal along the frequency axis within the preset time step are obtained according to the short-time energy to generate high-frequency energy and low-frequency energy, and the ratio of the high-frequency energy norm to the low-frequency energy norm is calculated. Whether the ratio is larger than a preset threshold is judged; if so, it is judged that a sudden sound exists in the preset time step, and reverse gain is applied to the environmental sound signal according to the adaptive change level of the current gain compensation reference to reduce its sound intensity.
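The short-time energy and the high/low-frequency energy-norm ratio test above can be sketched as follows. The frame length, sampling rate, band boundary, and ratio threshold are assumed values for illustration only.

```python
import numpy as np

FRAME_LEN = 256          # frame length L (assumed)
SAMPLE_RATE = 16000      # assumed sampling rate, Hz
SPLIT_HZ = 2000.0        # assumed high/low frequency boundary
RATIO_THRESHOLD = 1.5    # assumed preset ratio threshold

def short_time_energy(frame):
    """E_i = sum over n of x_i(n)^2 for one frame."""
    return float(np.sum(np.square(frame)))

def is_sudden_sound(frame, rate=SAMPLE_RATE):
    """Flag a burst when the high-frequency energy norm exceeds the
    low-frequency energy norm by more than the preset ratio threshold."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / rate)
    high = np.linalg.norm(spectrum[freqs >= SPLIT_HZ])
    low = np.linalg.norm(spectrum[freqs < SPLIT_HZ]) + 1e-12
    return high / low > RATIO_THRESHOLD

t = np.arange(FRAME_LEN) / SAMPLE_RATE
calm = np.sin(2 * np.pi * 440 * t)            # low-frequency tone
burst = np.sin(2 * np.pi * 6000 * t) * 3.0    # loud high-frequency burst
```

A frame dominated by low-frequency content passes the test, while a high-frequency burst trips it and would trigger the reverse gain on the environmental signal.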
The method comprises the steps of acquiring, through the verification feedback information of the target user, the acoustic signal of minimum sound intensity at each frequency that the target user can hear, and generating a hearing threshold curve of the target user; storing historical use data of the target user, acquiring feedback operations of the target user on different frequencies and different sound intensities from the historical use data within a preset time, screening the feedback operations for each point on the hearing threshold curve, marking the points on the hearing threshold curve, and extracting parameter characteristics of the screened feedback operations; generating a new hearing threshold curve according to the parameter characteristics, adjusting hearing loss compensation according to the deviation of the parameter characteristics on the hearing threshold curve, and dynamically optimizing the hearing loss compensation scheme of the target user; and calculating the average Manhattan distance between the new hearing threshold curve and each point on the original hearing threshold curve, representing the change degree of the hearing threshold curve within the preset time by the average Manhattan distance, and generating a hearing early warning when the change degree is larger than a preset change degree threshold.
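The average Manhattan distance between the old and new hearing threshold curves, and the resulting early-warning decision, can be sketched as follows; the sample curves, frequencies, and the change-degree threshold are hypothetical.

```python
def average_manhattan_distance(curve_a, curve_b):
    """Average point-wise Manhattan (L1) distance between two hearing
    threshold curves sampled at the same frequencies (dB HL values)."""
    assert len(curve_a) == len(curve_b)
    return sum(abs(a - b) for a, b in zip(curve_a, curve_b)) / len(curve_a)

# Hypothetical threshold curves (dB HL) at standard audiometric frequencies.
FREQS_HZ = [250, 500, 1000, 2000, 4000, 8000]
old_curve = [20, 25, 30, 35, 45, 55]
new_curve = [22, 26, 33, 40, 52, 60]
CHANGE_THRESHOLD_DB = 4.0   # assumed preset change-degree threshold

change_degree = average_manhattan_distance(old_curve, new_curve)
hearing_warning = change_degree > CHANGE_THRESHOLD_DB
```

The change degree summarizes how far the user's thresholds have drifted over the preset time; exceeding the threshold would generate the hearing early warning.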
According to the embodiment of the invention, the noise scene is judged according to the sound perception sequence, specifically:
acquiring the sound sensing sequence and preprocessing it, performing FFT processing on the preprocessed sound sensing sequence, dividing it into a plurality of spectrum sub-bands, and extracting sub-band features;
selecting voice data with scene labels to generate a data set, screening sub-band features, generating training data by using the data set, training a noise scene recognition model, and selecting periodic features, energy features and adjacent sub-band correlation features of the spectrum sub-bands to train the noise scene recognition model;
the data of the preset noise scene in the data set is selected to verify the output of the trained noise scene recognition model, and when the accuracy rate meets the preset standard, the noise scene recognition model is output;
inputting the sub-band characteristics of the sound perception sequence into a noise scene recognition model to recognize the current noise scene, and generating a noise environment recognition result;
constructing a database, matching the noise environment with a preset gain compensation reference, storing the noise environment and the preset gain compensation reference in the database, searching in the database according to the current noise environment identification result, and searching the gain compensation reference with the similarity meeting the preset standard as the gain compensation reference of the current noise environment.
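The sub-band feature extraction step above (FFT, division into spectrum sub-bands, per-band features) can be sketched as follows; the frame, number of bands, and the cosine-similarity form of the adjacent-band correlation are assumptions for illustration.

```python
import numpy as np

def subband_features(frame, n_bands=8):
    """Split a frame's magnitude spectrum into equal-width sub-bands and
    return per-band energy plus correlation with the adjacent band."""
    spectrum = np.abs(np.fft.rfft(frame))[1:]   # drop the DC bin
    bands = np.array_split(spectrum, n_bands)
    energies = [float(np.sum(b**2)) for b in bands]
    # Adjacent-band correlation: cosine similarity of neighbouring band shapes.
    correlations = []
    for a, b in zip(bands, bands[1:]):
        k = min(len(a), len(b))
        num = float(np.dot(a[:k], b[:k]))
        den = float(np.linalg.norm(a[:k]) * np.linalg.norm(b[:k])) + 1e-12
        correlations.append(num / den)
    return energies, correlations

rng = np.random.default_rng(0)
frame = rng.standard_normal(512)   # stand-in for a preprocessed frame
energies, correlations = subband_features(frame)
```

Feature vectors of this shape, together with periodicity features, would form the training input of the noise scene recognition model described above.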
Fig. 4 shows a block diagram of a multi-mode intelligent control system of a hearing aid headset of the present invention.
The second aspect of the present invention also provides a multimode intelligent control system 4 for a hearing aid earphone, the system comprising: the memory 41 and the processor 42, wherein the memory comprises a multi-mode intelligent control method program of the hearing aid earphone, and the multi-mode intelligent control method program of the hearing aid earphone realizes the following steps when being executed by the processor:
acquiring verification feedback information of a target user according to a preset acoustic signal, setting a compensation reference of a preset working model based on the verification feedback information, and initializing a mode flow and mode parameters;
acquiring a current sound signal through a microphone array, judging a working mode corresponding to the auxiliary hearing earphone according to the sound signal, and calling a corresponding mode flow and model parameters to gain the sound signal;
filtering and denoising the current sound signal to remove redundant noise, extracting a target sound signal of the current sound signal, and outputting a gain sound signal after intelligent audio compensation processing;
and dynamically adjusting a hearing threshold curve according to the use data of the target user, and optimizing a hearing loss compensation scheme of the auxiliary hearing earphone through the hearing threshold curve.
When a target user wears the auxiliary hearing earphone for the first time, different working modes are simulated through preset acoustic signals of the auxiliary hearing earphone, where the working modes include a conversation mode, a music mode, an auxiliary hearing mode, and the like; basic identity information and hearing condition description information of the target user are obtained in a preset mode, and characteristics such as age and hearing impairment are extracted by utilizing keywords, so that the basic characteristics of the target user are obtained; user data whose similarity meets a preset similarity standard is acquired through data retrieval according to the basic characteristics, aggregate analysis of hearing-aid earphone gain compensation is performed on the screened user data, and standard gain compensation of the preset acoustic signals in different working modes is set; working mode parameters are initialized through the standard gain compensation, and feedback information of the target user on the standard gain compensation in different working modes is acquired; the speech intelligibility and the sound intensity acceptance of the preset acoustic signal for the target user under standard gain compensation are judged according to the feedback information and acquired as verification feedback information; and the standard gain compensation is iteratively adjusted according to the verification feedback information, gain compensation references in each working mode are output when the speech intelligibility and the sound intensity acceptance meet preset requirements, and the mode flow and mode parameters are initialized by using the gain compensation references.
According to the embodiment of the invention, the current sound signal is obtained through the microphone array, the working mode corresponding to the auxiliary hearing earphone is judged according to the sound signal, and the corresponding mode flow and model parameters are called to gain the sound signal, specifically:
sensing current sound signals through a microphone array, generating a sound sensing sequence, and selecting a corresponding working mode according to a signal type label of the sound sensing sequence;
reading a gain compensation reference of a working mode corresponding to the sound sensing sequence, performing gain processing on the sound signal according to the gain compensation reference, and performing real-time gain evaluation on a gain effect in the gain process of the sound signal;
reading the hearing preference of the target user from the historical use data of the target user, and generating a reference signal sequence of the current signal type according to the hearing preference and the gain compensation references of different working modes;
constructing a matching path between the real-time-gained sound sensing sequence and the reference signal sequence in the gain process, calculating the similarity by utilizing dynamic time warping, and judging whether the dynamic time warping distance is larger than a preset distance threshold;
if the distance is larger than the preset distance threshold, setting the adaptive change grade of the gain compensation reference according to the deviation of the dynamic time warping distance, feeding the adaptive change grade back to the target user in a preset mode, and applying the setting according to the grade feedback selected by the user.
It should be noted that the hearing preference of the target user includes characteristics such as the user's preferred volume and preferred timbre. The similarity calculation is performed by dynamic time warping: a matching path between the sound perception sequence after the real-time gain and the reference signal sequence is constructed as

W = {w_1, w_2, ..., w_K}

where X = {x_1, x_2, ..., x_n} represents the reference signal sequence, Y = {y_1, y_2, ..., y_m} represents the sound perception sequence after the real-time gain, and w_k = (i, j) represents the matching of the i-th signal point of the reference signal sequence with the j-th signal point of the sound perception sequence after the real-time gain; the value range of i is [1, n], and the value range of j is [1, m].

The calculation formula of the dynamic time warping distance DTW is:

DTW(X, Y) = min_W Σ_{k=1}^{K} d(x_i, y_j)

where x_i and y_j respectively represent signal points in the reference signal sequence and the sound perception sequence after the real-time gain, and d(x_i, y_j) represents the Euclidean distance between these signal points.

The adaptive change level of the gain compensation reference is set according to the deviation between the dynamic time warping distance and the preset distance threshold; the adaptive change level can be set by the user and can be one stage or multiple stages.
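The dynamic time warping distance above can be computed with the standard dynamic-programming recurrence. This is an illustrative Python sketch: the function names and the use of the absolute difference as the point-wise Euclidean distance for 1-D signals are assumptions.

```python
import numpy as np

def dtw_distance(reference, perceived):
    """Dynamic time warping distance between the reference signal sequence
    and the real-time-gained sound perception sequence (1-D signals;
    point-wise Euclidean distance reduces to the absolute difference)."""
    n, m = len(reference), len(perceived)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(reference[i - 1] - perceived[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # step in reference only
                                 cost[i, j - 1],      # step in perceived only
                                 cost[i - 1, j - 1])  # matched step
    return cost[n, m]

def needs_adaptation(reference, perceived, threshold):
    """True when the DTW distance exceeds the preset distance threshold,
    i.e. an adaptive change of the gain compensation reference is needed."""
    return bool(dtw_distance(reference, perceived) > threshold)
```

Identical sequences yield a distance of 0; the larger the deviation between the gained signal and the reference sequence, the larger the distance and hence the adaptive change level.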
According to the embodiment of the invention, the current sound signal is subjected to filtering and denoising treatment to remove redundant noise, the target sound signal of the current sound signal is extracted, and the gain sound signal is output after intelligent audio compensation treatment, specifically:
Filtering and denoising the sound sensing sequence acquired by the microphone array, dividing the sound sensing sequence after filtering and denoising into sound segments with preset time steps, extracting sound features in each sound segment, and acquiring a preset number of sound features with high accumulated contribution degree through principal component analysis to serve as principal component directions;
acquiring characteristic scattered point distribution of different sounds from different sound characteristic projection principal component directions, and distinguishing target sound signals according to the characteristic scattered point distribution;
constructing a target sound separation model based on the U-NET network, improving the U-NET network through a convolution self-encoder, extracting sound characteristics through the encoder of the target sound separation model, and generating a mask after extracting target sound signals through a decoder;
acquiring the optimal number of layers of the encoder and the decoder through iterative training, wherein the encoder consists of two convolution modules and a pooling module, and the decoder consists of two convolution modules and an up-sampling module;
and acquiring the proportion of the target sound signal to the sound signal through the target sound separation model, extracting and separating the target sound signal, acquiring a gain compensation reference of the current working mode and the self-adaptive change level of the gain compensation reference, and performing intelligent audio compensation processing.
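The principal component analysis used in the steps above, which keeps the sound features with high cumulative contribution and projects segments onto the principal-component directions to obtain the feature scatter-point distribution, can be sketched as follows. This is illustrative Python via SVD; the 90% cumulative contribution default and the function names are assumptions.

```python
import numpy as np

def principal_components(features, contribution=0.9):
    """features: (n_segments, n_features) matrix of per-segment sound features.
    Returns the principal-component directions whose cumulative explained
    variance first reaches `contribution`, and the projected scatter points."""
    centered = features - features.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal directions,
    # singular values are sorted in descending order.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(np.cumsum(explained), contribution)) + 1
    directions = vt[:k]                # principal-component directions
    scatter = centered @ directions.T  # feature scatter-point distribution
    return directions, scatter
```

Different sounds then appear as distinct dense regions in the scatter-point distribution, which is what the target-sound discrimination step relies on.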
It should be noted that the target sound separation model is trained with a mean square error loss function until the loss converges, yielding a time-frequency mask; the proportion of the target sound signal in the sound signal is then calculated based on an ideal floating-value mask, under the assumption that the target sound signal and the environment sound signal do not intersect, i.e. their point-wise product in the time-frequency plane is 0.
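The ideal floating-value mask and the resulting target-sound proportion can be sketched as follows. This is an illustrative Python sketch over magnitude spectrograms; the function names and the energy-based definition of the proportion are assumptions. Under the stated no-overlap assumption, every mask value is (up to numerical tolerance) exactly 0 or 1.

```python
import numpy as np

def ideal_ratio_mask(target_spec, env_spec, eps=1e-12):
    """Ideal floating-value (ratio) mask over magnitude spectrograms.
    When target and environment do not intersect (point-wise product 0),
    each mask value degenerates to 0 or 1."""
    return np.abs(target_spec) / (np.abs(target_spec) + np.abs(env_spec) + eps)

def target_proportion(target_spec, env_spec):
    """Proportion of the mixture's energy attributed to the target sound."""
    mask = ideal_ratio_mask(target_spec, env_spec)
    mixture = np.abs(target_spec) + np.abs(env_spec)
    total = np.sum(mixture ** 2)
    return float(np.sum((mask * mixture) ** 2) / total) if total > 0 else 0.0
```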
It should be noted that the direction of the principal component in the sound sensing sequence is monitored in real time; the characteristic scatter-point distributions of different sounds are generated according to changes of the principal component direction, sound changes in the sound sequence are judged according to the density information of the characteristic scatter-point distribution, and the current time stamp is marked. Under the condition that a time stamp is marked, the short-time energy of the target sound signal and of each environment sound signal in the sound sensing sequence within a preset time step is obtained; the short-time energy E_i of the i-th frame signal with frame length L is calculated as

E_i = Σ_{n=1}^{L} x_i(n)^2

where L represents the frame length and x_i(n) represents the amplitude value of the n-th point in the i-th frame signal obtained after framing. The distribution of the target sound signal and each environment sound signal along the frequency axis within the preset time step is obtained from the short-time energy to generate high-frequency energy and low-frequency energy, and the ratio of the high-frequency energy norm to the low-frequency energy norm is calculated. Whether the ratio is larger than a preset threshold is judged; if so, it is judged that a sudden sound exists within the preset time step, and reverse gain is applied to the environment sound signal according to the adaptive change level of the current gain compensation reference to reduce its sound intensity.
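The short-time energy computation and the sudden-sound judgment described above can be sketched as follows. This is illustrative Python; the choice of frequency split point and the function names are assumptions.

```python
import numpy as np

def short_time_energy(signal, frame_len):
    """E_i = sum over n = 1..L of x_i(n)^2 for each frame of length L
    (trailing samples that do not fill a frame are dropped)."""
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    return np.sum(frames ** 2, axis=1)

def sudden_sound_detected(spectrum_mags, split_bin, ratio_threshold):
    """spectrum_mags: magnitude spectrum over the preset time step.
    Compares the norm of the high-frequency energy (bins >= split_bin)
    to the norm of the low-frequency energy; a ratio above the preset
    threshold flags a sudden sound."""
    low = np.linalg.norm(spectrum_mags[:split_bin])
    high = np.linalg.norm(spectrum_mags[split_bin:])
    return bool(high / (low + 1e-12) > ratio_threshold)
```

When `sudden_sound_detected` returns true, reverse gain would be applied to the environment sound signal at the current adaptive change level.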
The method comprises the steps of: acquiring, through experimental feedback information of the target user, the acoustic signal of the minimum sound intensity that the target user can hear at each frequency, and generating a hearing threshold curve of the target user; storing historical use data of the target user, acquiring feedback operations of the target user on different frequencies and different sound intensities from the historical use data within a preset time, screening the feedback operations for each point on the hearing threshold curve, marking the points on the hearing threshold curve, and extracting parameter characteristics of the screened feedback operations; generating a new hearing threshold curve according to the parameter characteristics, adjusting hearing loss compensation according to the deviation of the parameter characteristics on the hearing threshold curve, and dynamically optimizing the hearing loss compensation scheme of the target user; and calculating the average Manhattan distance between corresponding points on the new hearing threshold curve and the original hearing threshold curve, characterizing the change degree of the hearing threshold curve within the preset time by the average Manhattan distance, and generating a hearing early warning when the change degree is larger than a preset change degree threshold.
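The average Manhattan distance between hearing threshold curves and the resulting hearing early warning can be sketched as follows. This is illustrative Python; it assumes both curves are sampled at the same audiometric frequencies, and the names are placeholders.

```python
import numpy as np

def average_manhattan_distance(old_curve, new_curve):
    """Average Manhattan (L1) distance between corresponding points of the
    original and updated hearing threshold curves (thresholds in dB,
    sampled at the same frequencies)."""
    old_curve, new_curve = np.asarray(old_curve), np.asarray(new_curve)
    return float(np.mean(np.abs(new_curve - old_curve)))

def hearing_warning(old_curve, new_curve, change_threshold):
    """Generate a hearing early warning when the curve's degree of change
    within the preset period exceeds the preset change-degree threshold."""
    return average_manhattan_distance(old_curve, new_curve) > change_threshold
```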
The third aspect of the present invention also provides a computer readable storage medium, where the computer readable storage medium includes a multi-mode intelligent control method program for a hearing aid earphone, where the multi-mode intelligent control method program for a hearing aid earphone, when executed by a processor, implements the steps of the multi-mode intelligent control method for a hearing aid earphone as set forth in any one of the above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be implemented by hardware executing program instructions; the foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes: a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, the above-described integrated units of the present invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.