CN117334224A - Heart sound identification method and device, electronic equipment and storage medium

Heart sound identification method and device, electronic equipment and storage medium

Info

Publication number
CN117334224A
CN117334224A (application number CN202311274076.9A)
Authority
CN
China
Prior art keywords
heart sound
original
determining
sound signal
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311274076.9A
Other languages
Chinese (zh)
Inventor
王延凯
王秋明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yuanjian Information Technology Co Ltd
Original Assignee
Beijing Yuanjian Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yuanjian Information Technology Co Ltd filed Critical Beijing Yuanjian Information Technology Co Ltd
Priority to CN202311274076.9A priority Critical patent/CN117334224A/en
Publication of CN117334224A publication Critical patent/CN117334224A/en
Pending legal-status Critical Current

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00: Instruments for auscultation
    • A61B7/02: Stethoscopes
    • A61B7/04: Electric stethoscopes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03: characterised by the type of extracted parameters
    • G10L25/24: the extracted parameters being the cepstrum
    • G10L25/27: characterised by the analysis technique
    • G10L25/30: using neural networks
    • G10L25/48: specially adapted for particular use
    • G10L25/51: for comparison or discrimination
    • G10L25/66: for extracting parameters related to health condition

Abstract

The disclosure provides a heart sound identification method, a heart sound identification device, electronic equipment and a storage medium, wherein a target fusion feature is generated by extracting MFCC features and GFCC features corresponding to an original heart sound signal and fusing them; the target fusion feature is divided into a plurality of feature fragments, and for each feature fragment, the proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and the probability score of the feature fragment corresponding to each heart sound class are respectively determined from the heart sound classification results output by a CRNN model and a ResNet18 model for the feature fragments; a fusion decision score is determined according to the probability score and the proportion coefficient, and the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model are weighted and summed to determine a target decision score; and the column corresponding to the maximum value of the target decision score is taken as the target heart sound identification result. The robustness of the model and the accuracy of heart sound classification are improved by adopting a multi-feature, multi-model and multi-fusion approach.

Description

Heart sound identification method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of heart sound identification, and in particular relates to a heart sound identification method, a heart sound identification device, electronic equipment and a storage medium.
Background
Heart sound signals are among the most important physiological signals of the human body, and their use in auscultation-assisted diagnosis and treatment has a long history. Heart sound signals contain a great deal of physiological information about the functional state of the heart and have the characteristics of universality, uniqueness and ease of acquisition.
Most existing heart sound identification methods directly input a single acoustic feature, such as the MFCC or Fbank, into a single classification network for identification. Such methods achieve high accuracy on heart sound data sets collected under ideal conditions, but in practical scenarios the collected heart sound signals often contain various types of noise, such as environmental sound, equipment sound, breathing and speech, so the classification accuracy of existing heart sound classification algorithms drops severely in practice. Moreover, relying on a single acoustic feature and a single heart sound recognition model results in poor robustness of existing heart sound identification algorithms.
Disclosure of Invention
The embodiment of the disclosure at least provides a heart sound identification method, a heart sound identification device, an electronic device and a storage medium, and adopts a multi-feature, multi-model and multi-fusion mode to improve the robustness of the model and the accuracy of heart sound classification.
The embodiment of the disclosure provides a heart sound identification method, which comprises the following steps:
acquiring an original heart sound signal;
extracting MFCC features and GFCC features corresponding to the original heart sound signals, and fusing the MFCC features and the GFCC features to generate target fusion features;
dividing the target fusion characteristics into a plurality of characteristic fragments, and respectively inputting the characteristic fragments into a pre-trained CRNN model and a pre-trained ResNet18 model aiming at each characteristic fragment;
respectively determining the proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and the probability score of the feature fragment corresponding to each heart sound class in heart sound classification results output by the CRNN model and the ResNet18 model aiming at the feature fragment;
determining a fusion decision score according to the probability score and the proportionality coefficient, and carrying out weighted summation on the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score;
and taking the column corresponding to the maximum value of the target judgment score as a target heart sound identification result.
In an alternative embodiment, before the extracting the MFCC feature and the GFCC feature corresponding to the original heart sound signal, the method further includes:
Determining an original decibel value corresponding to the original heart sound signal;
determining an audio gain adjustment factor according to a preset expected decibel value and the original decibel value;
and performing gain adjustment on the original heart sound signal according to the audio gain adjustment factor, and determining the original heart sound signal after gain adjustment.
In an alternative embodiment, the audio gain adjustment factor is determined according to a preset expected decibel value and the original decibel value, and specifically includes:
determining a decibel adjustment factor that causes the original decibel value to reach the preset desired decibel value;
taking the preset expected decibel value as an actual decibel value output after gain adjustment, and determining the corresponding decibel gain change of the heart sound signals before and after gain adjustment according to the original decibel value and the decibel adjustment factor;
and determining the audio gain adjusting factor according to the decibel gain change.
In an alternative embodiment, after the acquiring the original heart sound signal, the method further comprises:
performing noise reduction processing on the original heart sound signal to determine a noise-reduced heart sound signal;
after framing and windowing are carried out on the noise reduction heart sound signals, short-time average energy and short-time zero-crossing rate corresponding to the noise reduction heart sound signals of each frame are determined;
Setting a high energy threshold and a preset low energy threshold according to the short-time average energy and the short-time zero-crossing rate, and dividing the noise reduction heart sound signal into a mute section, a transition section and a voice section;
and filtering the mute sections positioned at two ends of the noise reduction heart sound signal.
In an optional implementation manner, the determining the original decibel value corresponding to the original heart sound signal specifically includes:
determining the sequence length and the sequence root mean square corresponding to the original heart sound signal;
determining the root mean square sequence length corresponding to the root mean square of the sequence;
determining a maximum amplitude value corresponding to the original heart sound signal;
and determining the original decibel value according to the sequence length, the sequence root mean square, the root mean square sequence length and the maximum amplitude value.
In an alternative embodiment, after said gain-adjusting the original heart sound signal according to the audio gain-adjusting factor, determining the gain-adjusted original heart sound signal, the method further comprises:
according to a preset sampling rate threshold, downsampling is carried out on the original heart sound signal after gain adjustment;
and inputting the original heart sound signals after the downsampling to a preset band-pass filter, and filtering the contained background noise signals.
The embodiment of the disclosure also provides a heart sound recognition device, which comprises:
the acquisition module is used for acquiring an original heart sound signal;
the feature fusion module is used for extracting the MFCC features and GFCC features corresponding to the original heart sound signals and fusing the MFCC features and the GFCC features to generate target fusion features;
the feature dividing module is used for dividing the target fusion feature into a plurality of feature fragments, and inputting the feature fragments to a pre-trained CRNN model and a pre-trained ResNet18 model respectively aiming at each feature fragment;
the score judging module is used for respectively determining the proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and the probability score of each heart sound class corresponding to the characteristic fragment in the heart sound classification results output by the CRNN model and the ResNet18 model aiming at the characteristic fragments;
the joint judgment module is used for determining a fusion judgment score according to the probability score and the proportionality coefficient, and carrying out weighted summation on the fusion judgment score corresponding to the CRNN model and the fusion judgment score corresponding to the ResNet18 model to determine a target judgment score;
and the recognition result determining module is used for taking the column corresponding to the maximum value of the target decision score as a target heart sound recognition result.
In an alternative embodiment, the apparatus further comprises an audio gain control module for:
determining an original decibel value corresponding to the original heart sound signal;
determining an audio gain adjustment factor according to a preset expected decibel value and the original decibel value;
and performing gain adjustment on the original heart sound signal according to the audio gain adjustment factor, and determining the original heart sound signal after gain adjustment.
The embodiment of the disclosure also provides an electronic device, including: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the method of heart sound identification as described above, or steps in any of the possible embodiments of the method of heart sound identification as described above.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the above-described heart sound identification method, or steps in any one of the possible implementation manners of the above-described heart sound identification method.
The disclosed embodiments also provide a computer program product comprising a computer program/instructions which, when executed by a processor, implement the above-described heart sound identification method, or steps in any one of the possible implementation manners of the above-described heart sound identification method.
The embodiment of the disclosure provides a heart sound identification method, a heart sound identification device, electronic equipment and a storage medium, wherein an original heart sound signal is obtained; extracting MFCC features and GFCC features corresponding to the original heart sound signals, and fusing the MFCC features and the GFCC features to generate target fusion features; dividing target fusion characteristics into a plurality of characteristic fragments, and respectively inputting the characteristic fragments into a pre-trained CRNN model and a pre-trained ResNet18 model aiming at each characteristic fragment; respectively determining the proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and the probability score of the feature fragment corresponding to each heart sound class in heart sound classification results output by the CRNN model and the ResNet18 model aiming at the feature fragments; determining a fusion decision score according to the probability score and the proportionality coefficient, and carrying out weighted summation on the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score; and taking the column corresponding to the maximum value of the target decision score as a target heart sound identification result. The robustness of the model is improved by adopting a multi-feature, multi-model and multi-fusion mode, and the accuracy of heart sound classification is improved.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below; these drawings are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the description serve to illustrate the technical solutions of the present disclosure. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from these drawings without inventive effort.
Fig. 1 shows a flowchart of a method for identifying heart sounds provided by an embodiment of the present disclosure;
fig. 2 shows a flowchart of a method for adaptive gain control of heart sound signals according to an embodiment of the present disclosure;
fig. 3 shows a schematic diagram of a heart sound recognition device provided by an embodiment of the disclosure;
fig. 4 shows a schematic diagram of an electronic device provided by an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" is used herein to describe only one relationship, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
According to research, most existing heart sound identification methods input acoustic features such as the MFCC or Fbank into a single classification network for identification. Such methods achieve high accuracy on heart sound data sets collected under ideal conditions, but in practical scenarios the collected heart sound signals often contain various types of noise, such as environmental sounds, equipment sounds, breathing and speech, so the classification accuracy of existing heart sound classification algorithms drops severely in practice. Moreover, relying on a single acoustic feature and a single heart sound recognition model results in poor robustness of existing heart sound identification algorithms.
Based on the above study, the present disclosure provides a method, an apparatus, an electronic device, and a storage medium for identifying heart sounds by acquiring an original heart sound signal; extracting MFCC features and GFCC features corresponding to the original heart sound signals, and fusing the MFCC features and the GFCC features to generate target fusion features; dividing target fusion characteristics into a plurality of characteristic fragments, and respectively inputting the characteristic fragments into a pre-trained CRNN model and a pre-trained ResNet18 model aiming at each characteristic fragment; respectively determining the proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and the probability score of the feature fragment corresponding to each heart sound class in heart sound classification results output by the CRNN model and the ResNet18 model aiming at the feature fragments; determining a fusion decision score according to the probability score and the proportionality coefficient, and carrying out weighted summation on the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score; and taking the column corresponding to the maximum value of the target decision score as a target heart sound identification result. The robustness of the model is improved by adopting a multi-feature, multi-model and multi-fusion mode, and the accuracy of heart sound classification is improved.
For the sake of understanding the present embodiment, a heart sound identification method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the heart sound identification method provided in the embodiment of the present disclosure is generally a computer device having a certain computing capability, and the computer device includes, for example, a terminal device, a server or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular telephone, a cordless telephone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the heart sound identification method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of a method for identifying heart sounds according to an embodiment of the disclosure is shown, where the method includes steps S101 to S106, where:
s101, acquiring an original heart sound signal.
In a specific implementation, an original heart sound signal of a patient is acquired, and preprocessing such as noise reduction, endpoint detection and the like is performed on the original heart sound signal.
Specifically, noise reduction processing is carried out on the original heart sound signal, and a noise-reduced heart sound signal is determined; after framing and windowing are carried out on the noise reduction heart sound signals, short-time average energy and short-time zero-crossing rate corresponding to each frame of noise reduction heart sound signals are determined; according to the short-time average energy and the short-time zero-crossing rate, setting a high energy threshold and a preset low energy threshold, and dividing the noise reduction heart sound signal into a mute section, a transition section and a voice section; and filtering the mute sections positioned at two ends of the noise reduction heart sound signal.
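As an illustration of the dual-threshold endpoint detection described above, the following Python sketch frames a denoised signal, computes the short-time average energy and short-time zero-crossing rate, and trims the silent sections at both ends. The frame length, frame shift and threshold heuristics are assumptions for illustration only and are not specified by this disclosure.

```python
import numpy as np

def trim_silence(x, frame_len=256, frame_shift=128):
    """Dual-threshold endpoint detection sketch: drop leading/trailing silence."""
    n_frames = max(1, (len(x) - frame_len) // frame_shift + 1)
    energy = np.empty(n_frames)
    zcr = np.empty(n_frames)
    for i in range(n_frames):
        frame = x[i * frame_shift: i * frame_shift + frame_len]
        energy[i] = np.mean(frame ** 2)                        # short-time average energy
        zcr[i] = np.mean(np.abs(np.diff(np.sign(frame))) > 0)  # short-time zero-crossing rate
    high = 0.25 * energy.max()   # assumed high energy threshold
    low = 0.05 * energy.max()    # assumed low energy threshold
    voiced = np.where(energy > high)[0]
    if voiced.size == 0:
        return x
    start, end = int(voiced[0]), int(voiced[-1])
    # extend into the transition section while energy or zero-crossing rate stays high
    while start > 0 and (energy[start - 1] > low or zcr[start - 1] > zcr.mean()):
        start -= 1
    while end < n_frames - 1 and (energy[end + 1] > low or zcr[end + 1] > zcr.mean()):
        end += 1
    return x[start * frame_shift: end * frame_shift + frame_len]
```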
Here, spectral subtraction is selected for the noise reduction, which may be implemented by the following steps:
step 1, performing discrete Fourier transform on each frame of heart sound signal subjected to windowing and framing processing on an original heart sound signal.
And step 2, determining corresponding amplitude and phase angle for the heart sound signal frame after discrete Fourier transform.
And step 3, determining the average energy value of the noise segment according to the duration of the leading non-speech segment, namely the noise segment, of the original heart sound signal and the corresponding frame number.
Step 4, determining the amplitude of the heart sound signal frame after spectral subtraction by adopting a spectral subtraction algorithm; and obtaining a heart sound signal sequence after spectrum reduction through inverse fast Fourier transform according to the amplitude after spectrum reduction and the phase angle before spectrum reduction.
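A minimal Python sketch of the spectral subtraction in steps 1-4 above is given below. It assumes the number of leading noise-only frames is known and uses the average magnitude of that noise segment; it is an illustration rather than the exact implementation of this disclosure.

```python
import numpy as np

def spectral_subtract(frames, noise_frames, n_fft=256):
    """frames: windowed heart sound frames, shape (num_frames, frame_len).
    noise_frames: number of leading frames treated as the noise segment."""
    spec = np.fft.rfft(frames, n=n_fft, axis=1)      # step 1: DFT of each frame
    mag, phase = np.abs(spec), np.angle(spec)        # step 2: magnitude and phase angle
    noise_mag = mag[:noise_frames].mean(axis=0)      # step 3: average magnitude of the leading noise segment
    clean_mag = np.maximum(mag - noise_mag, 0.0)     # step 4: spectral subtraction, floored at zero
    # inverse FFT with the pre-subtraction phase angle
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=n_fft, axis=1)
```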
As a possible implementation manner, since heart sound signals are collected at several different positions, and differences in the physical conditions of different patients cause the decibel ranges of different heart sound signals to differ, the embodiment of the present application adopts an adaptive gain control manner to solve the above problem. As shown in fig. 2, the method for adaptive gain control of heart sound signals according to the embodiment of the present disclosure includes steps S201-S203, where:
S201, determining an original decibel value corresponding to the original heart sound signal.
In specific implementation, determining a sequence length and a sequence root mean square corresponding to an original heart sound signal; determining the root mean square sequence length corresponding to the root mean square of the sequence; determining a maximum amplitude value corresponding to an original heart sound signal; and determining an original decibel value according to the sequence length, the sequence root mean square, the root mean square sequence length and the maximum amplitude value.
Specifically, the original decibel value corresponding to the original heart sound signal may be calculated by the following formula:
here, B in Representing an original decibel value corresponding to the original heart sound signal; n represents the corresponding sequence length of the original heart sound signal; p (P) ref Representing the maximum amplitude value corresponding to the original heart sound signal; p (P) rms Representing the root mean square of the sequence corresponding to the current original heart sound signal sequence; τ represents the root mean square sequence length corresponding to the root mean square of the sequence.
Further, the sequence root mean square can be calculated as P_rms = sqrt((1/τ)·Σ_{n=1..τ} x(n)²), where P_rms represents the sequence root mean square corresponding to the current original heart sound signal sequence, τ represents the root mean square sequence length corresponding to the sequence root mean square, and x(n) represents the n-th sampling point.
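Since the original formula images are not reproduced in this text, the following Python sketch shows one standard way to obtain the quantities described above: the sequence root mean square and a decibel value taken relative to the maximum amplitude P_ref. The exact expression used by the disclosure (which also involves N and τ) may differ.

```python
import numpy as np

def original_decibel_value(x):
    """Sketch: sequence RMS expressed in dB relative to the maximum amplitude P_ref.
    This is an assumed form, not the verbatim formula of the disclosure."""
    x = np.asarray(x, dtype=float)
    p_rms = np.sqrt(np.mean(x ** 2))        # sequence root mean square
    p_ref = np.max(np.abs(x))               # maximum amplitude value
    return 20.0 * np.log10((p_rms + 1e-12) / (p_ref + 1e-12))   # original decibel value B_in
```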
S202, determining an audio gain adjustment factor according to a preset expected decibel value and the original decibel value.
In a specific implementation, experiments show that the decibel range acceptable to the human ear is -25 dB to -15 dB: when the decibel value is below -25 dB, it is difficult for the human ear to hear the audio content, and when the decibel value is above -15 dB, plosive artifacts appear; both situations interfere with the human ear's acquisition of the audio information. Therefore, in view of the auditory characteristics of the human ear, the embodiment of the present application introduces an audio adaptive gain control method in which the decibel value of the original heart sound signal is adjusted through an audio gain adjustment factor.
Specifically, the audio gain adjustment factor may be determined by the following steps 1 to 3:
step 1, determining a decibel adjustment factor enabling the original decibel value to reach the preset expected decibel value.
And 2, taking the preset expected decibel value as an actual decibel value output after gain adjustment, and determining the decibel gain change corresponding to the heart sound signals before and after gain adjustment according to the original decibel value and the decibel adjustment factor.
And step 3, determining the audio gain adjustment factor according to the decibel gain change.
In a specific implementation, the correspondence between the preset expected decibel value, the decibel adjustment factors and the original decibel value may be expressed by the following formula:
wherein B_in represents the original decibel value, and the parameters α, ε, ρ, β and γ represent the decibel adjustment factors.
Preferably, the decibel adjustment factors α, ε, ρ, β, γ are set to 10, 0.3, 0.5, 8, 24, respectively.
Here, the db adjustment factor is used to implement suppression of high db level speech, and boost of low db level speech, so as to ensure that db levels of different voices are within the same range.
Further, the actual decibel value of the original heart sound signal after the self-adaptive gain control conversion can be expressed as:
wherein B_out represents the actual decibel value of the original heart sound signal after gain adjustment; N represents the sequence length corresponding to the original heart sound signal; P_ref represents the maximum amplitude value corresponding to the original heart sound signal; τ represents the root mean square sequence length corresponding to the sequence root mean square; μ represents the audio gain adjustment factor; and s(n) represents the sampled value of the n-th sampling point in the original heart sound signal sequence.
Here, the relation of the audio gain adjustment factor to the original decibel value and to the actual decibel value after gain adjustment can be obtained by taking the difference between the actual decibel value of the gain-adjusted original heart sound signal and the original decibel value:
wherein B_out represents the actual decibel value of the original heart sound signal after gain adjustment; N represents the sequence length corresponding to the original heart sound signal; B_in represents the original decibel value corresponding to the original heart sound signal; τ represents the root mean square sequence length corresponding to the sequence root mean square; and μ represents the audio gain adjustment factor.
Then, after rearrangement, the calculation formula of the audio gain adjustment factor is obtained:
wherein μ represents the audio gain adjustment factor; B_out represents the actual decibel value of the original heart sound signal after gain adjustment; and B_in represents the original decibel value corresponding to the original heart sound signal.
S203, performing gain adjustment on the original heart sound signal according to the audio gain adjustment factor, and determining the original heart sound signal after gain adjustment.
In a specific implementation, the relationship between the heart sound signal after the audio gain adjustment and the original heart sound signal may be expressed as:
y_τ(n) = μ · s_τ(n)
wherein y_τ(n) represents the sampled value of the n-th sampling point in the heart sound sequence of length τ after adaptive gain adjustment; s_τ(n) represents the sampled value of the n-th sampling point of the original heart sound signal in the heart sound sequence of length τ; and μ represents the audio gain adjustment factor.
Here, B_out - B_in represents the decibel gain change of the heart sound signal before and after gain adjustment. Because the actual decibel value B_out of the gain-adjusted original heart sound signal cannot yet be calculated at this point, the actual decibel value of the original heart sound signal after adaptive gain control is set equal to the preset expected decibel value, and the gain-adjusted original heart sound signal can then be expressed as:
wherein y_τ(n) represents the sampled value of the n-th sampling point in the heart sound sequence of length τ after adaptive gain adjustment; s_τ(n) represents the sampled value of the n-th sampling point of the original heart sound signal in the heart sound sequence of length τ; B_in represents the original decibel value corresponding to the original heart sound signal; and the preset expected decibel value takes the place of B_out in the expression.
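The adaptive gain control described above can be sketched in Python as follows. The mapping μ = 10^((B_expected − B_in)/20) is an assumption consistent with the decibel-difference relation described in this section, and the preset expected decibel value of -20 dB is an illustrative choice within the -25 dB to -15 dB range mentioned above.

```python
import numpy as np

def adaptive_gain_control(s, expected_db=-20.0):
    """Sketch: scale the signal so its decibel level (RMS relative to the peak)
    reaches the preset expected value. mu = 10 ** ((B_expected - B_in) / 20)
    is an assumed form of the audio gain adjustment factor."""
    s = np.asarray(s, dtype=float)
    p_rms = np.sqrt(np.mean(s ** 2))                             # sequence root mean square
    p_ref = np.max(np.abs(s))                                    # maximum amplitude value
    b_in = 20.0 * np.log10((p_rms + 1e-12) / (p_ref + 1e-12))    # original decibel value B_in
    mu = 10.0 ** ((expected_db - b_in) / 20.0)                   # audio gain adjustment factor
    return mu * s                                                # y_tau(n) = mu * s_tau(n)
```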
As another possible implementation manner, after the gain adjustment is performed on the original decibel value of the original heart sound signal, the down-sampling may be performed on the original heart sound signal after the gain adjustment according to a preset sampling rate threshold; and inputting the original heart sound signals after the down sampling into a preset band-pass filter, and filtering the contained background noise signals.
In a specific implementation, since the sampling rate settings of different heart sound devices differ, and experiments show that heart sound signals are mainly distributed below 1000 Hz, the downsampling rate is set to 2000 Hz according to the Nyquist sampling theorem.
Here, the band-pass filter may be a fifth-order Butterworth band-pass filter with a pass band of [25 Hz, 800 Hz]. Because the patient's breathing sounds are also recorded while the heart sound signal is collected, and experiments show that the heart sound signal is mainly distributed below 1000 Hz, filtering out components below 25 Hz removes the background noise introduced by the equipment, thereby reducing the interference of other signals with the heart sound signal.
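A possible Python sketch of this downsampling and band-pass filtering stage, using SciPy, is shown below; the zero-phase filtering and the use of resample_poly are implementation choices made here for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def downsample_and_bandpass(x, fs_in, fs_target=2000, band=(25.0, 800.0)):
    """Downsample to 2000 Hz, then apply a 5th-order Butterworth band-pass
    filter over [25 Hz, 800 Hz] to suppress device background noise and
    out-of-band components."""
    x_ds = resample_poly(np.asarray(x, dtype=float), up=int(fs_target), down=int(fs_in))
    sos = butter(5, band, btype="bandpass", fs=fs_target, output="sos")
    return sosfiltfilt(sos, x_ds)
```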
S102, extracting MFCC features and GFCC features corresponding to the original heart sound signals, and fusing the MFCC features and the GFCC features to generate target fusion features.
In a specific implementation, Mel Frequency Cepstrum Coefficient (MFCC) features are obtained by filtering with a Mel filter bank that reflects the auditory characteristics of the human ear, and they represent the low-frequency part of the audio well. The MFCC features can be extracted by the following steps 1-4:
step 1, preprocessing including pre-emphasis, framing and windowing is performed on heart sound signals.
And step 2, carrying out Fourier transform on the windowed heart sound signal and taking a module to obtain a Fourier transform module value.
And step 3, processing the Fourier transform module value obtained after the module taking through a Mel filter to obtain Mel spectrum energy.
And 4, performing discrete cosine transform on the logarithmic Mel frequency spectrum energy to obtain MFCC parameters.
Here, the MFCC characteristics may be subjected to differential operation to obtain dynamic cepstrum characteristics of the heart sound signal.
In the embodiment of the application, the dimension of the MFCC coefficient takes 13 dimensions, and the obtained 13-dimensional MFCC coefficient and the first-order difference and second-order difference dynamic coefficient thereof form the MFCC characteristic.
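A compact sketch of this 39-dimensional MFCC extraction (13 coefficients plus their first- and second-order differences) using librosa is given below; library defaults such as the exact pre-emphasis and liftering behaviour may differ slightly from the steps described above.

```python
import numpy as np
import librosa

def extract_mfcc(x, fs=2000, n_mfcc=13, frame_len=256, frame_shift=128, n_mels=26):
    """13 MFCCs plus first- and second-order differences -> (39, num_frames)."""
    mfcc = librosa.feature.mfcc(y=np.asarray(x, dtype=float), sr=fs, n_mfcc=n_mfcc,
                                n_fft=frame_len, hop_length=frame_shift,
                                n_mels=n_mels, window="hamming")
    d1 = librosa.feature.delta(mfcc, order=1)   # first-order difference
    d2 = librosa.feature.delta(mfcc, order=2)   # second-order difference
    return np.vstack([mfcc, d1, d2])
```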
Furthermore, the Gammatone Frequency Cepstrum Coefficient (GFCC) feature is obtained, on the basis of the MFCC, by replacing the Mel filter with a Gammatone filter and adopting the equivalent rectangular bandwidth (ERB) scale in the frequency domain, so as to simulate the processing of sound signals by the human auditory system. Similarly to the MFCC features, the GFCC features can be extracted by the following steps 1-4:
step 1, preprocessing including pre-emphasis, framing and windowing is performed on heart sound signals.
And 2, carrying out Fourier transform on the windowed heart sound signals.
And 3, filtering the spectral energy information through a Gammatone filter.
And 4, performing discrete cosine transform on the filtered Gammatone spectral energy to eliminate correlation among different parameters.
Here, the feature dimension of the GFCC feature in the embodiment of the present application takes 32 dimensions.
Note that the frame lengths, frame shifts, FFT points, and windowing types of the MFCC features and GFCC features are consistent.
For example, the parameters may be set as follows: frame length 256, frame shift 128, number of FFT points N = 256, Hamming window as the window function, MFCC feature dimension J_c = 13, number of Mel filters M_c = 26 (the first-order and second-order differences of the MFCC parameters are also calculated), number of Gammatone filters M_g = 32, GFCC feature dimension J_g = 32, and 71 dimensions for the fused feature.
Further, in the feature fusion stage, the MFCC features and GFCC features are first normalized according to the following formula to eliminate the differences between the distributions of the different features:
wherein x_ji represents the acoustic feature of the j-th dimension in the i-th frame and is mapped to its normalized feature value; F represents the number of frames of the heart sound signal; and J represents the feature dimension.
Here, feature fusion is performed by the following formula:
wherein X represents the fused target fusion feature; M represents the 39-dimensional MFCC feature column vector; and G represents the 32-dimensional GFCC feature column vector. The fused target fusion feature is 71-dimensional.
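The normalization and fusion can be sketched in Python as below. Per-dimension z-score normalization over the frames and simple stacking of the two feature matrices are assumptions made here for illustration; the disclosure's exact normalization formula is not reproduced in this text.

```python
import numpy as np

def fuse_features(mfcc, gfcc):
    """mfcc: (39, F) array, gfcc: (32, F) array, F = number of frames.
    Normalize each dimension over the frames (z-score assumed), then stack
    into a 71-dimensional fused feature of shape (71, F)."""
    def normalize(feat):
        mean = feat.mean(axis=1, keepdims=True)
        std = feat.std(axis=1, keepdims=True) + 1e-8
        return (feat - mean) / std
    return np.vstack([normalize(mfcc), normalize(gfcc)])
```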
After obtaining the target fusion feature, the target fusion feature is input to the CRNN model and the ResNet18 model, and the CRNN model and the ResNet18 model are trained as heart sound classification models based on the fusion feature and stored.
S103, dividing the target fusion feature into a plurality of feature fragments, and respectively inputting the feature fragments into a pre-trained CRNN model and a pre-trained ResNet18 model aiming at each feature fragment.
In a specific implementation, the fusion feature is divided into a plurality of feature segments with a length of 30 frames, and each feature segment is input into the trained CRNN model and ResNet18 model.
Here, it is determined whether the target fusion feature is shorter than 30 frames, and if so, the target fusion feature needs to be spliced up to 30 frames.
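A small Python sketch of this segmentation step is shown below; tiling the feature to reach 30 frames when it is too short is an assumed splicing strategy.

```python
import numpy as np

def split_segments(fused, seg_len=30):
    """fused: (71, F) fused feature matrix. Split along the frame axis into
    non-overlapping 30-frame segments, tiling the feature first if F < 30."""
    n_frames = fused.shape[1]
    if n_frames < seg_len:
        reps = int(np.ceil(seg_len / n_frames))
        fused = np.tile(fused, (1, reps))[:, :seg_len]
        n_frames = seg_len
    n_segs = n_frames // seg_len
    return [fused[:, i * seg_len:(i + 1) * seg_len] for i in range(n_segs)]
```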
Wherein, in the CRNN model and the ResNet18 model, after each feature segment is processed by the softmax layer, the probabilities of the feature segment belonging to the heart sound categories are output as P_i = {p_i1, p_i2, ..., p_ij, ..., p_iC}, satisfying Σ_j p_ij = 1, where C is the number of heart sound categories.
S104, determining a proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and a probability score of the feature fragment corresponding to each heart sound class in heart sound classification results output by the CRNN model and the ResNet18 model aiming at the feature fragments respectively.
In a specific implementation, for the CRNN model and the ResNet18 model, the heart sound category of the current heart sound segment is determined by taking the maximum of the probabilities that the feature segment belongs to each heart sound category; the recognition results of all segments are then counted, and the proportion of the number of segments assigned to each category relative to the total number of heart sound segments is calculated and used as the proportion coefficient of that category, R_i = {r_i1, r_i2, ..., r_ij, ..., r_iC}, satisfying Σ_j r_ij = 1.
Further, in the CRNN model and the ResNet18 model, the probabilities that each feature segment belongs to each heart sound category, output after the processing of the softmax layer, are summed over all segments to obtain the probability score of each heart sound category.
It should be noted that, the CRNN model and the ResNet18 model respectively process the feature segments in the processing process, and each model outputs a proportionality coefficient of the number of segments belonging to each heart sound category to the total number of segments, and a probability score of the feature segment corresponding to each heart sound category.
S105, determining a fusion judgment score according to the probability score and the proportionality coefficient, and carrying out weighted summation on the fusion judgment score corresponding to the CRNN model and the fusion judgment score corresponding to the ResNet18 model to determine a target judgment score.
In specific implementation, for a CRNN model, multiplying the proportion coefficient of the number of fragments which are judged by the CRNN model and belong to each heart sound category to the total number of fragments and the probability score of the characteristic fragment corresponding to each heart sound category to obtain a fusion judgment score corresponding to the CRNN model; and multiplying the proportional coefficient of the number of the fragments which are judged by the ResNet18 model and belong to each heart sound class to the total number of the fragments by the probability score of the characteristic fragments corresponding to each heart sound class aiming at the ResNet18 model to obtain a fusion judgment score corresponding to the ResNet18 model.
Further, corresponding weight coefficients are configured for the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model, and the two fusion decision scores are weighted and summed to obtain the final target decision score.
Here, the target decision score may be determined by the following formula:
S_final = a · S_ResNet + b · S_CRNN
wherein S_final represents the target decision score; S_ResNet represents the fusion decision score corresponding to the ResNet18 model; S_CRNN represents the fusion decision score corresponding to the CRNN model; a represents the weight coefficient corresponding to the ResNet18 model; and b represents the weight coefficient corresponding to the CRNN model.
Preferably, the weight coefficient corresponding to the ResNet18 model can be 0.75, and the weight coefficient corresponding to the CRNN model can be 0.25.
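The decision-level fusion of the two models can be sketched in Python as follows, with the weights a = 0.75 and b = 0.25 given above; the array shapes and helper names are assumptions for illustration.

```python
import numpy as np

def fuse_decisions(probs_resnet, probs_crnn, a=0.75, b=0.25):
    """probs_*: (num_segments, C) softmax outputs of each model over C classes.
    For each model, the proportion coefficients R (share of segments assigned to
    each class) are multiplied element-wise by the summed probability scores P;
    the two fusion decision scores are then combined by a weighted sum."""
    def fusion_score(probs):
        c = probs.shape[1]
        preds = probs.argmax(axis=1)                        # per-segment class decisions
        r = np.bincount(preds, minlength=c) / len(preds)    # proportion coefficients
        p = probs.sum(axis=0)                               # probability scores per class
        return r * p                                        # fusion decision score
    s_final = a * fusion_score(probs_resnet) + b * fusion_score(probs_crnn)
    return int(np.argmax(s_final))                          # column (class index) of the maximum score
```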
And S106, taking a column corresponding to the maximum value of the target decision score as a target heart sound recognition result.
In specific implementation, a column corresponding to the maximum value of the target decision score is taken as a final target heart sound recognition result.
For example, in the target heart sound recognition result, 0 represents normal heart sound, 1 represents mitral regurgitation, 2 represents mitral valve double-phase murmur, 3 represents mitral valve stenosis (bell-shaped), 4 represents aortic valve regurgitation, 5 represents aortic valve double-phase murmur, and 6 represents aortic valve stenosis.
According to the heart sound identification method provided by the embodiment of the disclosure, an original heart sound signal is obtained; extracting MFCC features and GFCC features corresponding to the original heart sound signals, and fusing the MFCC features and the GFCC features to generate target fusion features; dividing target fusion characteristics into a plurality of characteristic fragments, and respectively inputting the characteristic fragments into a pre-trained CRNN model and a pre-trained ResNet18 model aiming at each characteristic fragment; respectively determining the proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and the probability score of the feature fragment corresponding to each heart sound class in heart sound classification results output by the CRNN model and the ResNet18 model aiming at the feature fragments; determining a fusion decision score according to the probability score and the proportionality coefficient, and carrying out weighted summation on the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score; and taking the column corresponding to the maximum value of the target decision score as a target heart sound identification result. The robustness of the model is improved by adopting a multi-feature, multi-model and multi-fusion mode, and the accuracy of heart sound classification is improved.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiment of the disclosure further provides a heart sound recognition device corresponding to the heart sound recognition method, and since the principle of solving the problem by the device in the embodiment of the disclosure is similar to that of the heart sound recognition method in the embodiment of the disclosure, the implementation of the device can refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 3, fig. 3 is a schematic diagram of a heart sound recognition device according to an embodiment of the disclosure. As shown in fig. 3, a heart sound recognition apparatus 300 provided by an embodiment of the present disclosure includes:
an acquisition module 310 is configured to acquire an original heart sound signal.
And a feature fusion module 320, configured to extract MFCC features and GFCC features corresponding to the original heart sound signals, and fuse the MFCC features and the GFCC features to generate target fusion features.
The feature dividing module 330 is configured to divide the target fusion feature into a plurality of feature segments, and for each feature segment, input the feature segment to a pre-trained CRNN model and a pre-trained ResNet18 model respectively.
The score discriminating module 340 is configured to determine a proportionality coefficient of the number of segments belonging to each heart sound class to the total number of segments and a probability score of the feature segment corresponding to each heart sound class in the heart sound classification results output by the CRNN model and the ResNet18 model for the feature segments respectively.
And the joint decision module 350 is configured to determine a fusion decision score according to the probability score and the scaling factor, and weight and sum the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score.
The recognition result determining module 360 is configured to take a column corresponding to the maximum value of the target decision score as a target heart sound recognition result.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
The embodiment of the disclosure provides a heart sound identification device, which is used for acquiring an original heart sound signal; extracting MFCC features and GFCC features corresponding to the original heart sound signals, and fusing the MFCC features and the GFCC features to generate target fusion features; dividing target fusion characteristics into a plurality of characteristic fragments, and respectively inputting the characteristic fragments into a pre-trained CRNN model and a pre-trained ResNet18 model aiming at each characteristic fragment; respectively determining the proportion coefficient of the number of fragments belonging to each heart sound class to the total number of fragments and the probability score of the feature fragment corresponding to each heart sound class in heart sound classification results output by the CRNN model and the ResNet18 model aiming at the feature fragments; determining a fusion decision score according to the probability score and the proportionality coefficient, and carrying out weighted summation on the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score; and taking the column corresponding to the maximum value of the target decision score as a target heart sound identification result. The robustness of the model is improved by adopting a multi-feature, multi-model and multi-fusion mode, and the accuracy of heart sound classification is improved.
Corresponding to the heart sound recognition method in fig. 1, the embodiment of the disclosure further provides an electronic device 400, as shown in fig. 4, which is a schematic structural diagram of the electronic device 400 provided in the embodiment of the disclosure, including:
a processor 41, a memory 42, and a bus 43; memory 42 is used to store execution instructions, including memory 421 and external memory 422; the memory 421 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 41 and data exchanged with the external memory 422 such as a hard disk, and the processor 41 exchanges data with the external memory 422 through the memory 421, and when the electronic device 400 is operated, the processor 41 and the memory 42 communicate with each other through the bus 43, so that the processor 41 performs the steps of the heart sound recognition method in fig. 1.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the heart sound identification method described in the method embodiments above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product includes computer instructions, and when the computer instructions are executed by a processor, the steps of the heart sound identification method described in the foregoing method embodiments may be executed, and specifically reference may be made to the foregoing method embodiments, which are not described herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the apparatus described above, which is not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that: the foregoing examples are merely specific embodiments of the present disclosure, and are not intended to limit the scope of the disclosure, but the present disclosure is not limited thereto, and those skilled in the art will appreciate that while the foregoing examples are described in detail, it is not limited to the disclosure: any person skilled in the art, within the technical scope of the disclosure of the present disclosure, may modify or easily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features thereof; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of heart sound identification, comprising:
acquiring an original heart sound signal;
extracting MFCC features and GFCC features corresponding to the original heart sound signal, and fusing the MFCC features and the GFCC features to generate a target fusion feature;
dividing the target fusion feature into a plurality of feature segments, and inputting each feature segment into a pre-trained CRNN model and a pre-trained ResNet18 model respectively;
for the heart sound classification results output by the CRNN model and the ResNet18 model for each feature segment, respectively determining a proportion coefficient of the number of segments belonging to each heart sound class relative to the total number of segments, and a probability score of the feature segment for each heart sound class;
determining a fusion decision score according to the probability score and the proportion coefficient, and performing a weighted summation of the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score;
and taking the heart sound class corresponding to the maximum value of the target decision score as the target heart sound identification result.
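For illustration, the following Python sketch shows one way to realize the segment-level decision fusion described in claim 1, assuming each model has already produced a per-segment probability matrix of shape (number of segments, number of heart sound classes). The equal fusion weights, the choice of combining the proportion coefficient with the mean probability score by elementwise multiplication, and all names are assumptions, since the claim does not fix the exact formula.

import numpy as np

def fusion_decision_score(seg_probs):
    """Per-class fusion decision score for one model.

    seg_probs: array of shape (num_segments, num_classes) holding the
    softmax output of the model for every feature segment.
    """
    num_segments, num_classes = seg_probs.shape
    hard_labels = seg_probs.argmax(axis=1)
    # proportion coefficient: share of segments assigned to each class
    proportion = np.bincount(hard_labels, minlength=num_classes) / num_segments
    # mean probability score per class over all segments
    mean_prob = seg_probs.mean(axis=0)
    # one plausible combination of the two statistics
    return proportion * mean_prob

def fuse_models(crnn_probs, resnet_probs, w_crnn=0.5, w_resnet=0.5):
    """Weighted sum of both models' fusion decision scores -> target class index."""
    target_score = (w_crnn * fusion_decision_score(crnn_probs)
                    + w_resnet * fusion_decision_score(resnet_probs))
    return int(target_score.argmax()), target_score

# toy usage: 4 feature segments, 3 heart sound classes
rng = np.random.default_rng(0)
crnn_probs = rng.dirichlet(np.ones(3), size=4)
resnet_probs = rng.dirichlet(np.ones(3), size=4)
print(fuse_models(crnn_probs, resnet_probs))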
2. The method of claim 1, wherein, before the extracting of the MFCC features and GFCC features corresponding to the original heart sound signal, the method further comprises:
determining an original decibel value corresponding to the original heart sound signal;
determining an audio gain adjustment factor according to a preset expected decibel value and the original decibel value;
and performing gain adjustment on the original heart sound signal according to the audio gain adjustment factor to determine the gain-adjusted original heart sound signal.
3. The method according to claim 2, wherein the determining of the audio gain adjustment factor according to the preset expected decibel value and the original decibel value comprises:
determining a decibel adjustment factor that causes the original decibel value to reach the preset expected decibel value;
taking the preset expected decibel value as the actual decibel value output after gain adjustment, and determining the decibel gain change of the heart sound signal before and after gain adjustment according to the original decibel value and the decibel adjustment factor;
and determining the audio gain adjustment factor according to the decibel gain change.
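One way to read claims 2 and 3 is sketched below: the decibel gain change needed to bring the original decibel value to the preset expected decibel value is the difference of the two values, and the corresponding linear audio gain adjustment factor follows the standard 20·log10 amplitude relation. The function names, the clipping step, and the example values are assumptions, not figures taken from the disclosure.

import numpy as np

def audio_gain_factor(original_db, expected_db):
    """Linear gain factor that moves a signal from original_db to expected_db."""
    db_gain_change = expected_db - original_db      # decibel gain change
    return 10.0 ** (db_gain_change / 20.0)          # amplitude ratio <-> decibels

def apply_gain(signal, original_db, expected_db):
    """Scale the heart sound signal and clip to full scale to avoid overflow."""
    factor = audio_gain_factor(original_db, expected_db)
    return np.clip(signal * factor, -1.0, 1.0)

# example: raising a recording measured at -32 dB toward a -20 dB target
print(audio_gain_factor(-32.0, -20.0))              # about 3.98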
4. The method of claim 1, wherein after the acquiring the original heart sound signal, the method further comprises:
performing noise reduction processing on the original heart sound signal to determine a noise-reduced heart sound signal;
after framing and windowing the noise-reduced heart sound signal, determining the short-time average energy and the short-time zero-crossing rate corresponding to each frame of the noise-reduced heart sound signal;
setting a high energy threshold and a preset low energy threshold according to the short-time average energy and the short-time zero-crossing rate, and dividing the noise-reduced heart sound signal into silent sections, transition sections, and sound sections;
and filtering out the silent sections located at both ends of the noise-reduced heart sound signal.
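A minimal sketch of the endpoint trimming in claim 4, using common definitions of short-time average energy and zero-crossing rate; the frame length, hop size, Hamming window, and the two illustrative thresholds are assumptions, since the claim only requires a high energy threshold and a low energy threshold.

import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split the signal into overlapping windowed frames."""
    if len(x) < frame_len:
        raise ValueError("signal shorter than one frame")
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx] * np.hamming(frame_len)

def trim_silent_ends(x, frame_len=1024, hop=512):
    """Drop the silent sections at both ends using energy and zero-crossing rate."""
    frames = frame_signal(x, frame_len, hop)
    energy = np.mean(frames ** 2, axis=1)                                 # short-time average energy
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)   # short-time zero-crossing rate
    high_thr = 0.5 * energy.mean()                                        # illustrative high threshold
    low_thr = 0.1 * energy.mean()                                         # illustrative low threshold
    active = (energy > high_thr) | ((energy > low_thr) & (zcr > zcr.mean()))
    if not active.any():
        return x
    first = int(np.argmax(active))
    last = len(active) - 1 - int(np.argmax(active[::-1]))
    return x[first * hop : last * hop + frame_len]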
5. The method according to claim 2, wherein the determining of the original decibel value corresponding to the original heart sound signal specifically comprises:
determining the sequence length and the sequence root mean square corresponding to the original heart sound signal;
determining the root mean square sequence length corresponding to the root mean square of the sequence;
determining a maximum amplitude value corresponding to the original heart sound signal;
and determining the original decibel value according to the sequence length, the sequence root mean square, the root mean square sequence length and the maximum amplitude value.
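Claim 5 names the quantities but not the formula, so the sketch below is only one plausible reading: compute a windowed RMS sequence over the recording and express its overall level in decibels relative to the maximum amplitude. The window length, the reference choice, and the function name are assumptions.

import numpy as np

def original_decibel_value(x, win=2048):
    """One plausible dB measure built from the quantities named in claim 5."""
    seq_len = len(x)                                  # sequence length
    n_windows = max(1, seq_len // win)                # length of the RMS sequence
    rms_seq = np.array([np.sqrt(np.mean(x[i * win:(i + 1) * win] ** 2))
                        for i in range(n_windows)])   # per-window sequence root mean square
    max_amp = max(float(np.max(np.abs(x))), 1e-12)    # maximum amplitude (guarded against zero)
    overall_rms = float(np.sqrt(np.mean(rms_seq ** 2)))
    return 20.0 * np.log10(overall_rms / max_amp + 1e-12)  # level relative to the peak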
6. The method of claim 2, wherein, after the gain adjustment is performed on the original heart sound signal according to the audio gain adjustment factor to determine the gain-adjusted original heart sound signal, the method further comprises:
downsampling the gain-adjusted original heart sound signal according to a preset sampling rate threshold;
and inputting the downsampled original heart sound signal into a preset band-pass filter to filter out the background noise signal contained therein.
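A short scipy sketch of claim 6's downsampling plus band-pass filtering. The 2 kHz target rate and the 25–400 Hz passband are illustrative values commonly used for heart sound recordings; they are assumptions, not figures given in the disclosure.

from scipy.signal import butter, resample_poly, sosfiltfilt

def downsample_and_bandpass(x, sr, target_sr=2000, low_hz=25.0, high_hz=400.0):
    """Downsample above the preset sampling-rate threshold, then band-pass filter."""
    if sr > target_sr:
        x = resample_poly(x, up=target_sr, down=sr)   # polyphase, anti-aliased resampling
        sr = target_sr
    # 4th-order Butterworth band-pass to suppress out-of-band background noise
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, x), sr                    # zero-phase filtering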
7. A heart sound identification device, comprising:
the acquisition module is used for acquiring an original heart sound signal;
the feature fusion module is used for extracting the MFCC features and GFCC features corresponding to the original heart sound signal, and fusing the MFCC features and the GFCC features to generate a target fusion feature;
the feature dividing module is used for dividing the target fusion feature into a plurality of feature segments, and inputting each feature segment into a pre-trained CRNN model and a pre-trained ResNet18 model respectively;
the score judging module is used for respectively determining, in the heart sound classification results output by the CRNN model and the ResNet18 model for each feature segment, a proportion coefficient of the number of segments belonging to each heart sound class relative to the total number of segments, and a probability score of the feature segment for each heart sound class;
the joint judgment module is used for determining a fusion decision score according to the probability score and the proportion coefficient, and performing a weighted summation of the fusion decision score corresponding to the CRNN model and the fusion decision score corresponding to the ResNet18 model to determine a target decision score;
and the recognition result determining module is used for taking the heart sound class corresponding to the maximum value of the target decision score as the target heart sound recognition result.
8. The apparatus of claim 7, further comprising an audio gain control module configured to:
determining an original decibel value corresponding to the original heart sound signal;
determining an audio gain adjustment factor according to a preset expected decibel value and the original decibel value;
and performing gain adjustment on the original heart sound signal according to the audio gain adjustment factor to determine the gain-adjusted original heart sound signal.
9. An electronic device, comprising: a processor, a memory, and a bus, the memory storing machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate with each other via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the heart sound identification method of any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the heart sound identification method as claimed in any one of claims 1 to 6.
CN202311274076.9A 2023-09-28 2023-09-28 Heart sound identification method and device, electronic equipment and storage medium Pending CN117334224A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311274076.9A CN117334224A (en) 2023-09-28 2023-09-28 Heart sound identification method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311274076.9A CN117334224A (en) 2023-09-28 2023-09-28 Heart sound identification method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117334224A true CN117334224A (en) 2024-01-02

Family

ID=89289732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311274076.9A Pending CN117334224A (en) 2023-09-28 2023-09-28 Heart sound identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117334224A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination