CN117898684A - Method, device and equipment for monitoring heart failure illness state and readable storage medium

Method, device and equipment for monitoring heart failure illness state and readable storage medium

Info

Publication number
CN117898684A
CN117898684A
Authority
CN
China
Prior art keywords
heart failure
voice
congestion
classification model
failure patient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410319642.1A
Other languages
Chinese (zh)
Other versions
CN117898684B (en)
Inventor
万巧琴
杨轶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202410319642.1A priority Critical patent/CN117898684B/en
Publication of CN117898684A publication Critical patent/CN117898684A/en
Application granted granted Critical
Publication of CN117898684B publication Critical patent/CN117898684B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • A61B 5/4803: Speech analysis specially adapted for diagnostic purposes
    • A61B 5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; evaluating a cardiovascular condition not otherwise provided for
    • G08B 21/24: Reminder alarms, e.g. anti-loss alarms
    • G08B 7/06: Signalling systems according to more than one of groups G08B3/00 - G08B6/00 using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources
    • G10L 25/21: Speech or voice analysis techniques in which the extracted parameters are power information
    • G10L 25/24: Speech or voice analysis techniques in which the extracted parameters are the cepstrum
    • G10L 25/30: Speech or voice analysis techniques using neural networks
    • G10L 25/45: Speech or voice analysis techniques characterised by the type of analysis window
    • G10L 25/66: Speech or voice analysis techniques specially adapted for extracting parameters related to health condition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Physiology (AREA)
  • Cardiology (AREA)
  • Evolutionary Computation (AREA)
  • Emergency Management (AREA)
  • Artificial Intelligence (AREA)
  • Epidemiology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application discloses a method, a device and equipment for monitoring the condition of heart failure, and a readable storage medium, and relates to the technical field of condition monitoring. The heart failure condition monitoring method comprises the following steps: acquiring a voice signal of a heart failure patient; extracting features of the voice signal to obtain voice features; inputting the voice features into a target classification model to obtain a congestion symptom result for the heart failure patient; and providing a condition prompt to the heart failure patient based on the congestion symptom result. The application solves the technical problem that existing methods for monitoring the condition of heart failure are inconvenient.

Description

Method, device and equipment for monitoring heart failure illness state and readable storage medium
Technical Field
The present application relates to the field of medical condition monitoring technology, and in particular, to a method, apparatus, and device for monitoring heart failure medical condition, and a readable storage medium.
Background
Heart failure is a condition in which abnormal changes in the structure and/or function of the heart impair the ability of the ventricles to contract and/or fill, causing dyspnea, fatigue, and fluid retention (pulmonary/systemic congestion).
The clinical course of heart failure is characterized by repeated worsening of symptoms and signs, which leads to frequent readmission; frequent readmission and prolonged illness form a vicious circle that severely reduces patients' quality of life and increases their risk of death.
The key to reducing the readmission rate of heart failure patients is early detection of, and intervention in, congestion symptoms. At present, however, patients who need to monitor early congestion symptoms must frequently undergo special examinations in hospitals, and the examination process is overly cumbersome. In other words, existing methods for monitoring the condition of heart failure are inconvenient.
Disclosure of Invention
The main purpose of the application is to provide a heart failure condition monitoring method, aiming to solve the technical problem that existing methods for heart failure condition monitoring are inconvenient.
To achieve the above object, in a first aspect, the present application provides a method for monitoring a heart failure condition, which is applied to a heart failure condition monitoring apparatus, the method for monitoring a heart failure condition comprising the steps of:
Acquiring a voice signal of a heart failure patient;
Extracting the characteristics of the voice signal to obtain voice characteristics;
Inputting the voice characteristics into a target classification model to obtain a congestion symptom result of the heart failure patient;
Providing a condition prompt to the heart failure patient based on the congestion symptom result.
According to a first aspect, the step of obtaining a speech signal of a heart failure patient comprises:
Displaying a preset pronunciation task to the heart failure patient;
And collecting pronunciation voice output by the heart failure patient aiming at the preset pronunciation task as the voice signal.
According to the first aspect, or any implementation manner of the first aspect, the preset pronunciation tasks include at least one of a continuous vowel task, a longest pronunciation task, a continuous reading task, and a paragraph reading task.
According to the first aspect, or any implementation manner of the first aspect, the step of extracting features of the voice signal to obtain voice features includes:
Preprocessing the voice signal to obtain a preprocessed signal;
And extracting the audio characteristics of the preprocessing signals to obtain voice characteristics.
According to a first aspect, or any implementation manner of the first aspect, the step of preprocessing the voice signal to obtain a preprocessed signal includes:
Identifying a mute segment in the voice signal, and deleting the mute segment to obtain an effective voice signal;
And carrying out pre-emphasis, framing and windowing on the effective voice signal to obtain a pre-processing signal.
According to a first aspect, or any implementation manner of the first aspect, before the step of inputting the speech feature into a target classification model, the method includes:
acquiring sample voice data, wherein the sample voice data comprises a plurality of groups of voice samples and corresponding sample labels thereof, and the sample labels comprise the presence of congestion symptoms and the absence of congestion symptoms;
Dividing the sample speech data into a training data set and a test data set;
Training the initial classification model based on the training data set to obtain a trained classification model;
and testing the trained classification model based on the test data set, and taking the trained classification model passing the test as a target classification model.
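The training and testing steps above can be sketched as follows. This is a minimal illustration only: the application does not specify the model type, split ratio, or acceptance criterion, so a simple nearest-centroid classifier stands in for the initial classification model, and the 20% test split and 0.8 accuracy threshold are assumptions.

```python
import numpy as np

def train_and_test_classifier(features, labels, test_fraction=0.2,
                              min_accuracy=0.8, seed=0):
    """Split sample voice data into training and test sets, fit an initial
    classifier, and accept it as the target model only if it passes the test.

    features: (n_samples, n_features) array of extracted voice features.
    labels:   1 = congestion symptoms present, 0 = absent.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    n_test = max(1, int(len(labels) * test_fraction))
    test_idx, train_idx = order[:n_test], order[n_test:]
    X_tr, y_tr = features[train_idx], labels[train_idx]
    X_te, y_te = features[test_idx], labels[test_idx]

    # "Training": one centroid per class (stand-in for the unspecified model).
    centroids = {c: X_tr[y_tr == c].mean(axis=0) for c in (0, 1)}

    def predict(X):
        d0 = np.linalg.norm(X - centroids[0], axis=1)
        d1 = np.linalg.norm(X - centroids[1], axis=1)
        return (d1 < d0).astype(int)

    # "Testing": accept the trained model only if it passes on held-out data.
    accuracy = float((predict(X_te) == y_te).mean())
    if accuracy < min_accuracy:
        raise RuntimeError(f"model failed the test (accuracy {accuracy:.2f})")
    return predict, accuracy
```

A real deployment would substitute the classifier actually trained on patient voice samples; the control flow (split, train, test, accept-or-reject) is what the claim describes.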
According to a first aspect, or any implementation manner of the first aspect, the step of performing condition prompting on the heart failure patient based on the congestion symptom result includes:
Outputting the congestion symptom result to the heart failure patient;
And when the congestion symptom result indicates the presence of congestion symptoms, outputting preset prompt information to the heart failure patient to prompt that the heart failure patient is at risk of disease deterioration.
In a second aspect, the present application provides a heart failure condition monitoring device for use in heart failure condition monitoring apparatus, the heart failure condition monitoring device comprising:
the voice acquisition module is used for acquiring voice signals of heart failure patients;
The feature extraction module is used for extracting features of the voice signals to obtain voice features;
the congestion detection module is used for inputting the voice features into a target classification model to obtain a congestion symptom result for the heart failure patient;
and the condition prompting module is used for prompting the heart failure patient about the condition based on the congestion symptom result.
According to a second aspect, the speech acquisition module is further configured to: displaying a preset pronunciation task to the heart failure patient;
And collecting pronunciation voice output by the heart failure patient aiming at the preset pronunciation task as the voice signal.
According to a second aspect, or any implementation manner of the second aspect, the preset pronunciation tasks include at least one of a continuous vowel task, a longest pronunciation task, a continuous reading task, and a paragraph reading task.
According to a second aspect, or any implementation manner of the second aspect, the feature extraction module is further configured to:
Preprocessing the voice signal to obtain a preprocessed signal;
And extracting the audio characteristics of the preprocessing signals to obtain voice characteristics.
According to a second aspect, or any implementation manner of the second aspect, the feature extraction module is further configured to:
Identifying a mute segment in the voice signal, and deleting the mute segment to obtain an effective voice signal;
And carrying out pre-emphasis, framing and windowing on the effective voice signal to obtain a pre-processing signal.
According to a second aspect, or any implementation manner of the second aspect, the heart failure condition monitoring device further comprises a model training module for:
acquiring sample voice data, wherein the sample voice data comprises a plurality of groups of voice samples and corresponding sample labels thereof, and the sample labels comprise the presence of congestion symptoms and the absence of congestion symptoms;
Dividing the sample speech data into a training data set and a test data set;
Training the initial classification model based on the training data set to obtain a trained classification model;
and testing the trained classification model based on the test data set, and taking the trained classification model passing the test as a target classification model.
According to a second aspect, or any implementation manner of the second aspect, the condition prompting module is further configured to:
Outputting the congestion symptom result to the heart failure patient;
And when the congestion symptom result indicates the presence of congestion symptoms, outputting preset prompt information to the heart failure patient to prompt that the heart failure patient is at risk of disease deterioration.
In a third aspect, the present application provides a heart failure condition monitoring device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, which, when executed, implements the steps of the heart failure condition monitoring method described above.
In a fourth aspect, the present application provides a computer readable storage medium having a computer program stored therein, which when executed by a processor, causes the processor to perform a heart failure condition monitoring method as described above.
In a fifth aspect, embodiments of the present application provide a computer program comprising instructions for performing a heart failure condition monitoring method as described above.
The application provides a heart failure condition monitoring method, device, equipment and computer-readable storage medium, which acquire a voice signal of a heart failure patient; extract features of the voice signal to obtain voice features; input the voice features into a target classification model to obtain a congestion symptom result for the heart failure patient; and then prompt the heart failure patient about the condition based on the congestion symptom result. Because pronunciation depends on complex interactions of cognitive and physiological processes, including the linguistic and cognitive systems, neuromotor pathways, and the physical process of airflow through the airways and vocal tract, an abnormality in any of these links can lead to speech changes. Congestion refers to the accumulation of extracellular fluid caused by an increase in ventricular filling pressure, which affects the nerves, muscles and organs involved in pronunciation. Therefore, the voice features of the heart failure patient are assessed with the pre-trained target classification model to obtain the congestion symptom result, realizing early warning of congestion symptoms and timely detection of early congestion in heart failure patients. Compared with the existing approach, in which early congestion monitoring can only be achieved through complex in-hospital examination procedures such as routine examination, electrocardiography and imaging, the method achieves early congestion monitoring from the voice signal of the heart failure patient, effectively improving the convenience of heart failure condition monitoring.
Drawings
FIG. 1 is a flow chart of a first embodiment of a method for monitoring heart failure condition according to the present application;
FIG. 2 is a flow chart of a second embodiment of a method for monitoring heart failure condition according to the present application;
FIG. 3 is a flow chart of a third embodiment of a method for monitoring heart failure condition according to the present application;
FIG. 4 is a schematic diagram of a heart failure condition monitoring device according to the present application;
FIG. 5 is a schematic view of a scenario according to an embodiment of the present application;
fig. 6 is a schematic device structure diagram of a hardware running environment according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone.
The terms first and second and the like in the description and in the claims of embodiments of the application, are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In embodiments of the application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "such as" should not be construed as preferred over, or more advantageous than, other embodiments or designs; rather, such words are intended to present related concepts in a concrete fashion.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
To more clearly describe the technical solution of the present application, the following first introduces some of the prior art relating to heart failure condition monitoring methods:
Heart failure is a condition in which abnormal changes in the structure and/or function of the heart impair the ability of the ventricles to contract and/or fill, causing dyspnea, fatigue, and fluid retention (pulmonary/systemic congestion). A high readmission rate is a major characteristic of heart failure: the readmission rate reaches 25% within 30 days after discharge, the highest among all diseases, and more than 50% of patients are readmitted within 6 months after discharge. The clinical course of heart failure is characterized by repeated worsening of symptoms and signs; frequent readmission and prolonged illness form a vicious circle that severely reduces patients' quality of life and increases their risk of death.
Congestion refers to the accumulation of extracellular fluid caused by an increase in ventricular filling pressure. Because of the abnormal changes in heart structure and/or function in heart failure, congestion generally begins with elevated ventricular filling pressure and hemodynamic abnormalities; as the hemodynamic abnormalities develop, interstitial-lymphatic homeostasis is affected, fluid accumulates in the interstitial spaces, and tissue congestion begins. Patients at this stage can develop obvious clinical signs such as dyspnea and edema, leading to clinical events such as hospitalization. The key to reducing the readmission rate of heart failure patients is therefore early detection of, and intervention in, congestion symptoms. For a long time, post-discharge self-monitoring of heart failure patients has relied mainly on objective measurements such as weight and heart rate and on subjective perceptions such as the degree of edema and dyspnea; however, significant changes in these indices usually occur in the late stage of congestion and cannot predict the occurrence of congestion early. At present, patients who need to monitor early congestion symptoms must frequently go to a hospital for special examinations, and the examination process is overly cumbersome. In other words, existing methods for monitoring the condition of heart failure are inconvenient.
The application considers that pronunciation depends on complex interactions of cognitive and physiological processes, including the linguistic and cognitive systems, neuromotor pathways, and the physical process of airflow through the airways and vocal tract, so an abnormality in any of these links can cause speech changes. Pronunciation comprises three stages: initiation, phonation and articulation. The power source of the airflow in the initiation stage comes mainly from the lungs; phonation refers to the regulation of the airflow exhaled from the lungs as it passes through the glottis, the main phonatory organ being the vocal cords; articulation refers to the process of modulating the airflow, exhaled from the lungs and given a certain sound quality by the larynx, within the vocal tract to produce different sounds, the articulatory organs involved being the vocal tract (nasal cavity, oral cavity and pharyngeal cavity). Congestion may cause fluid accumulation in the organs involved in voicing, thereby significantly affecting the sound production process.
Therefore, the voice features of the heart failure patient are assessed with the pre-trained target classification model to obtain the congestion symptom result, realizing early warning of congestion symptoms and timely detection of early congestion in heart failure patients. Compared with the current approach, in which early congestion monitoring can only be achieved through complex in-hospital examination procedures such as routine examination, electrocardiography and imaging, the method achieves early congestion monitoring from the voice signal of the heart failure patient, effectively improving the convenience of heart failure condition monitoring.
Referring to fig. 1, fig. 1 is a flowchart illustrating a first embodiment of a method for monitoring a heart failure condition according to the present application. It should be noted that although a logical order is depicted in the flowchart, in some cases the steps depicted or described may be performed in a different order than presented herein.
A first embodiment of the present application provides a method for heart failure condition monitoring, comprising the steps of:
Step S100, acquiring a voice signal of a heart failure patient;
In this embodiment, the heart failure patient may be a patient after heart failure treatment, and the voice signal is a signal obtained by collecting the speech sound waves produced by the heart failure patient.
As an example, the present embodiment may obtain the voice signal of the heart failure patient by periodically or aperiodically performing audio acquisition on the speaking content of the heart failure patient.
As another example, since randomly collected speech from the heart failure patient may make it difficult to guarantee detection accuracy, this embodiment may display a preset pronunciation task to guide the heart failure patient to produce specified speech that exercises the physiological structures related to pronunciation, which can be affected by the manifest symptoms (including at least congestion symptoms) caused by heart failure. The pronunciation voice output by the heart failure patient for the preset pronunciation task is therefore collected as the voice signal and can be used to evaluate whether those pronunciation-related physiological structures are affected by such symptoms (e.g., congestion symptoms). By guiding the heart failure patient to produce specified pronunciations through the preset pronunciation task, this embodiment can further improve the accuracy of detecting the patient's congestion symptoms compared with randomly collected speech.
Step S200, extracting the characteristics of the voice signal to obtain voice characteristics;
In this embodiment, it should be noted that the voice features may include one or more audio features such as a fundamental frequency feature, an amplitude feature, a glottal feature, an energy feature, a spectrum feature, and a formant feature.
As an example, the present embodiment may obtain the speech feature by inputting the speech signal into a preset feature extraction model. The feature extraction model may be a machine learning model, a neural network model, or the like that may be used for speech feature extraction.
As another example, the present embodiment may also obtain, as the voice feature, one or more of audio features such as a fundamental frequency feature, an amplitude feature, a glottal feature, an energy feature, a frequency spectrum feature, and a formant feature of the voice signal by performing spectrum analysis on the voice signal.
The step S200 of extracting features of the voice signal to obtain voice features includes:
step S210, preprocessing the voice signal to obtain a preprocessed signal;
step S220, extracting the audio features of the preprocessed signal to obtain voice features.
In this embodiment, preprocessing refers to operations that eliminate interference factors, such as aliasing, higher-harmonic distortion and high-frequency noise introduced by the human vocal organs and the acquisition equipment, that degrade the quality of the voice signal.
In order to improve the accuracy of the subsequently extracted voice features, this embodiment may preprocess the voice signal before feature extraction to obtain a preprocessed signal, where the preprocessing may include operations such as pre-emphasis, framing, windowing, and the fast Fourier transform. Audio features are then extracted from the preprocessed signal, and one or more of the fundamental frequency, amplitude, glottal, energy, spectrum, and formant features of the voice signal are taken as the voice features.
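The pre-emphasis, framing and windowing operations described above can be sketched as follows. The frame length, hop size and pre-emphasis coefficient are not specified by the application; the values below correspond to common speech-processing defaults (25 ms frames with a 10 ms hop at an assumed 16 kHz sampling rate).

```python
import numpy as np

def preprocess(signal, frame_len=400, hop=160, alpha=0.97):
    """Pre-emphasis, framing and windowing of a 1-D voice signal.

    Returns an array of shape (n_frames, frame_len) of windowed frames,
    ready for per-frame spectral analysis (e.g. via np.fft.rfft).
    """
    # Pre-emphasis: y[n] = x[n] - alpha * x[n-1], boosting high frequencies.
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Framing: split the signal into overlapping frames.
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    frames = np.stack([emphasized[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    # Windowing: a Hamming window per frame reduces spectral leakage.
    return frames * np.hamming(frame_len)
```

The fast Fourier transform mentioned in the text would then be applied frame by frame to the returned matrix.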
For example, for the extraction of the fundamental frequency among the fundamental-frequency-class features, the preprocessed signal may be analyzed to obtain the pitch candidates extracted for each frame and their corresponding amplitudes. Within each frame, pitch (frequency) candidates are selected and the amplitude of each candidate is calculated. For each frame, the candidate with the largest amplitude among all pitch candidates is selected as the fundamental frequency of that frame, yielding the fundamental-frequency-class features. If the largest-amplitude pitch candidate is near 0 Hz, the frame is generally considered an unvoiced or non-speech frame.
Illustratively, for the extraction of the harmonics-to-noise ratio in the energy class features: this ratio of the harmonic component to the noise component in speech is an objective index for detecting pathological voice and evaluating voice quality, and can effectively reflect glottal closure. First, the frequency positions of all harmonics in the frame are calculated, and the harmonic energy $E_h$ and the total energy $E_t$ of the frame are estimated. The non-harmonic energy $E_n$ is the total energy minus the harmonic energy:

$E_n = E_t - E_h$
The harmonic to noise ratio (HNR, harmonics-to-noise ratio) is the ratio of harmonic energy to non-harmonic energy, typically expressed in decibels (dB).
The calculation formula of the harmonics-to-noise ratio is:

$HNR = 10 \log_{10}\left(\dfrac{E_h}{E_n}\right)$
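The HNR computation follows directly from the energy definitions above; in this sketch the function name and the guard against non-positive noise energy are illustrative:

```python
import math

def hnr_db(harmonic_energy, total_energy):
    """Harmonics-to-noise ratio in dB: HNR = 10*log10(Eh/En),
    with non-harmonic energy En = Et - Eh."""
    noise_energy = total_energy - harmonic_energy
    if noise_energy <= 0 or harmonic_energy <= 0:
        raise ValueError("energies must satisfy 0 < Eh < Et")
    return 10.0 * math.log10(harmonic_energy / noise_energy)
```

For example, a frame whose harmonic energy is 90% of the total gives 10·log10(9) ≈ 9.5 dB, while a 50/50 split gives 0 dB.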
For example, for the extraction of mel-frequency cepstral coefficients in the spectrum class features: the pitch perceived by the human ear is not linearly proportional to the physical frequency of the sound, and the mel frequency scale is more consistent with human auditory characteristics. The relationship between the mel frequency $f_{mel}$ and the actual frequency $f$ can be expressed as:

$f_{mel} = 2595 \log_{10}\left(1 + \dfrac{f}{700}\right)$
On the basis of the mel frequency scale, a set of triangular filters can be used to simulate the selectivity of the human ear to sounds of different frequencies. A filter bank containing $L$ triangular filters is configured ($L$ is generally 20–80); the lower, center and upper frequencies of the $l$-th filter are $f(l-1)$, $f(l)$ and $f(l+1)$ respectively, the response at the center frequency is 1 and decreases linearly towards 0 on both sides, and the center frequencies of the filters are equally spaced on the mel scale axis. The frequency response of the $l$-th filter can be defined as:

$H_l(k) = \begin{cases} 0, & k < f(l-1) \\ \dfrac{k - f(l-1)}{f(l) - f(l-1)}, & f(l-1) \le k \le f(l) \\ \dfrac{f(l+1) - k}{f(l+1) - f(l)}, & f(l) \le k \le f(l+1) \\ 0, & k > f(l+1) \end{cases}$
The filter output of each triangular filter is calculated from the power spectrum $P(k)$ of the preprocessed signal:

$Y(l) = \sum_{k} H_l(k)\, P(k)$

where $l = 1, 2, 3, \ldots, L$.
The filter outputs corresponding to all frames are combined to obtain a matrix of dimension (number of frames × $L$); this matrix constitutes the FBank (Filter Bank) features. Since the obtained FBank features are highly correlated across dimensions, a logarithmic operation is performed on all filter outputs and the discrete cosine transform (Discrete Cosine Transform, DCT) is applied to the filter-bank coefficients, yielding a compressed representation of the filter-bank coefficients, namely the MFCC (Mel-Frequency Cepstral Coefficients) features. The calculation formula of the mel-frequency cepstral coefficients is:

$MFCC(i) = \sum_{l=1}^{L} \log\big(Y(l)\big) \cos\left(\dfrac{\pi i\,(l - 0.5)}{L}\right)$

where $i = 1, 2, 3, \ldots, L$.
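The mel filter bank and MFCC steps above can be sketched with numpy as follows; the filter count, FFT size, sampling rate, and number of kept coefficients are illustrative assumptions rather than values from the disclosure:

```python
import numpy as np

def mel(f):
    """Hz -> mel, per f_mel = 2595*log10(1 + f/700)."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    """mel -> Hz (inverse of the mapping above)."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr, f_lo=0.0, f_hi=None):
    """Triangular filters with center frequencies equally spaced on the mel axis."""
    f_hi = f_hi if f_hi is not None else sr / 2.0
    hz_pts = mel_inv(np.linspace(mel(f_lo), mel(f_hi), n_filters + 2))
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    H = np.zeros((n_filters, n_fft // 2 + 1))
    for l in range(1, n_filters + 1):
        lo, ctr, hi = bins[l - 1], bins[l], bins[l + 1]
        for k in range(lo, ctr):          # rising edge: 0 -> 1
            H[l - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):          # falling edge: 1 -> 0
            H[l - 1, k] = (hi - k) / max(hi - ctr, 1)
    return H

def mfcc_from_power(power_frames, H, n_ceps=13):
    """power_frames: (n_frames, n_fft//2+1) power spectra -> (n_frames, n_ceps) MFCCs."""
    fbank = power_frames @ H.T                    # filter outputs -> FBank matrix
    log_fbank = np.log(np.maximum(fbank, 1e-10))  # log compresses the dynamic range
    L = H.shape[0]
    l = np.arange(1, L + 1)
    # DCT of the log filter-bank energies decorrelates the dimensions
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), l - 0.5) / L)
    return log_fbank @ dct.T
```

A typical configuration might use 26 filters over a 512-point FFT at 16 kHz and keep the first 13 coefficients per frame.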
In some embodiments, the step of preprocessing the voice signal in step S210 to obtain a preprocessed signal includes:
Step S211, recognizing a mute segment in the voice signal, and deleting the mute segment to obtain an effective voice signal;
Step S212, pre-emphasis, framing and windowing are carried out on the effective voice signal, and a pre-processing signal is obtained.
In this embodiment, it should be noted that the silence segment is a segment in which no voice content exists in the voice signal.
As an example, the present embodiment may identify silence segments in the voice signal based on short-time energy. The voice signal is first divided into a plurality of voice segments of preset length (such as 40 ms, 50 ms, 60 ms, etc.), and the short-time voice energy of each segment is calculated; segments whose short-time voice energy is below a preset energy threshold are taken as silence segments. The calculation formula of the short-time voice energy is:

$E = \sum_{m=0}^{N-1} x(m)^2$

where $x(m)$ is the $m$-th sample of the voice segment and $N$ is the length of the voice segment;
As another example, the present embodiment may also identify silence segments in the speech signal through a trained speech classifier.
In this embodiment, after identifying a mute segment in the voice signal, the mute segment is deleted from the voice signal, so as to obtain an effective voice signal. According to the application, the silence segments in the voice signal are deleted before pre-emphasis, framing, windowing and other treatments are carried out, so that the accuracy of voice features obtained by subsequent extraction can be effectively improved, and the accuracy of detecting the congestion symptom is further improved.
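The energy-based identification and deletion of silence segments can be sketched as follows; the segment length and energy threshold are illustrative defaults, not values fixed by the disclosure:

```python
import numpy as np

def remove_silence(signal, sr, seg_ms=50, energy_threshold=1e-4):
    """Drop segments whose short-time energy E = sum(x[m]^2) falls below the
    threshold; the remaining segments are concatenated into the effective
    voice signal."""
    seg_len = int(sr * seg_ms / 1000)
    kept = []
    for start in range(0, len(signal), seg_len):
        seg = signal[start:start + seg_len]
        if np.sum(seg ** 2) >= energy_threshold:
            kept.append(seg)
    return np.concatenate(kept) if kept else np.empty(0)
```

For example, a signal made of 50 ms of silence followed by 50 ms of speech keeps only the second segment.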
The embodiment further performs pre-emphasis, framing and windowing on the effective voice signal to obtain a pre-processed signal.
Illustratively, the pre-emphasis is performed by filtering the effective voice signal through a high-pass filter, so as to enhance its high-frequency components, reduce its low-frequency components and improve the signal-to-noise ratio. The calculation formula of the sample value $y(n)$ of the effective voice signal after the pre-emphasis operation is:

$y(n) = x(n) - \alpha\, x(n-1)$

where $x(n)$ represents the current sample value of the input effective voice signal, $x(n-1)$ represents the previous sample value, and the filter coefficient $\alpha$ typically takes a value of 0.9–1.0;
the frame division processing mode is to divide the effective voice signal into a plurality of frames, wherein the length of each frame is N sampling points, and the overlapping part between adjacent frames is M sampling points;
For example, regarding the windowing: after the effective voice signal is divided into frames, the abrupt truncation of the signal at the frame boundaries may cause discontinuities and hence spectrum leakage. To alleviate this, the windowing multiplies each framed effective voice signal by a window function that smoothly attenuates the beginning and end of the frame towards zero. The windowed effective voice signal may be expressed as:

$y(n) = x(n)\, w(n)$

where $x(n)$ represents a sample value of the framed signal and $w(n)$ is the window function; for a Hamming window, the function expression is:

$w(n) = 0.54 - 0.46 \cos\left(\dfrac{2\pi n}{N-1}\right), \quad 0 \le n \le N-1$
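The pre-emphasis, framing and Hamming-windowing steps can be sketched together as follows; the frame length, hop and α value are illustrative defaults:

```python
import numpy as np

def preemphasis(x, alpha=0.97):
    """y(n) = x(n) - alpha * x(n-1); alpha typically 0.9-1.0."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def frame_signal(x, frame_len, hop):
    """Split into overlapping frames of frame_len samples; adjacent frames
    overlap by frame_len - hop samples."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def window_frames(frames):
    """Multiply each frame by a Hamming window:
    w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1))."""
    N = frames.shape[1]
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / (N - 1))
    return frames * w
```

Chained as `window_frames(frame_signal(preemphasis(x), 400, 160))`, this yields windowed frames ready for the Fourier transform step that follows.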
further, in this embodiment, in order to facilitate subsequent spectrum analysis of the voice signal, fourier transform may be performed on the effective voice signal after the windowing process, so as to obtain a preprocessed signal.
Illustratively, since the voice signal is a discrete signal, a DFT (Discrete Fourier Transform) for discrete signals should be used, which can be expressed as:

$X_i(k) = \sum_{n=0}^{N-1} x_i(n)\, e^{-j 2\pi nk / K}, \quad k = 0, 1, \ldots, K-1$

where $x_i(n)$ represents the $i$-th frame signal, $N$ is the number of sampling points of the frame signal, and $K$ is the number of DFT points. In practical use, to reduce the computational complexity of the DFT, a fast Fourier transform (Fast Fourier Transform, FFT) is typically used instead of the discrete Fourier transform. The FFT result of each frame signal is a discrete complex sequence; owing to the conjugate symmetry of the FFT, the final dimension is $m$-dimensional, representing the frame signal as a combination of $m$ sinusoids of different frequencies. For each time-frequency point, the modulus of the complex value gives the amplitude of the sinusoid at that frequency, and its argument gives the phase. To obtain the energy distribution of the voice signal in the frequency domain, the power spectrum of the voice signal is calculated using the following formula:

$P_i(k) = \dfrac{|X_i(k)|^2}{N}$
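The FFT and power-spectrum step can be sketched as follows; the FFT size is an illustrative default, and `rfft` keeps only the non-redundant half of the conjugate-symmetric spectrum:

```python
import numpy as np

def power_spectrum(frames, n_fft=512):
    """FFT of each windowed frame, then P(k) = |X(k)|^2 / N.
    Returns an array of shape (n_frames, n_fft//2 + 1)."""
    X = np.fft.rfft(frames, n=n_fft)
    return (np.abs(X) ** 2) / n_fft
```

For a constant (DC) frame, all the energy lands in bin 0, which is a quick sanity check on the normalization.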
Step S300, inputting the voice characteristics into a target classification model to obtain a congestion symptom result of the heart failure patient;
In this embodiment, it should be noted that the target classification model is a classification model trained in advance for distinguishing congestion symptoms based on voice features, and the classification model may be at least one of the following analysis models: adaptive boosting (AdaBoost), artificial neural network, decision tree, gradient boosting machine, K-nearest neighbor algorithm, light gradient boosting machine (LightGBM), logistic regression, random forest, support vector machine, extreme gradient boosting (XGBoost), and convolutional neural network.
The present embodiment may obtain the congestion symptom result of the heart failure patient by inputting the voice features into the target classification model. Illustratively, the congestion symptom results include at least the presence of congestion symptoms and the absence of congestion symptoms; further, the presence of congestion symptoms may also be graded by congestion symptom level, such as mild, moderate, and severe congestion symptoms.
Step S400, performing condition prompting on the heart failure patient based on the congestion symptom result.
In this embodiment, it should be noted that the disease prompt may be displayed in the form of an image, text, voice, etc.
For example, the present embodiment may send the congestion symptom result to a smart device (such as a smart phone, a smart watch, a tablet computer, etc.) of the heart failure patient, and output the congestion symptom result in at least one of an image, text, or voice, so as to prompt the heart failure patient of the illness.
In some embodiments, the step of prompting the heart failure patient for a condition based on the congestion symptom result in step S400 comprises:
step S410 of outputting the congestion symptom result to the heart failure patient;
Step S420, when the congestion symptom result indicates the presence of congestion symptoms, outputting preset prompt information to the heart failure patient so as to prompt the heart failure patient of a risk of disease deterioration.
In this embodiment, it should be noted that the preset prompt information is preset information carrying an eye-catching mark for alerting the heart failure patient. The eye-catching mark may be highlighted, brightly colored or flashing text or images, a bright indicator light, or a high-frequency, high-volume alert sound.
The present embodiment may output the congestion symptom result to the heart failure patient in the form of at least one of an image, text, or voice. When the congestion symptom result indicates the presence of congestion symptoms, preset prompt information is output to the heart failure patient to warn of the risk of disease deterioration, thereby effectively improving the effect of alerting the heart failure patient to a worsening condition.
In a first embodiment of the application, the voice signal of the heart failure patient is obtained; feature extraction is performed on the voice signal to obtain voice features; the voice features are input into a target classification model to obtain the congestion symptom result of the heart failure patient, and the heart failure patient is then prompted about the illness state based on the congestion symptom result. Because pronunciation depends on the complex interaction of cognitive and physiological processes, including the linguistic and cognitive systems, neuromotor pathways, and the physical process of airflow through the airways and vocal tract, an abnormality in any of these links can lead to speech changes, and the congestion symptoms arising during the worsening of heart failure can have a significant impact on the pronunciation process. Therefore, by detecting the voice features of the heart failure patient with the pre-trained target classification model to obtain the congestion symptom result, early warning of congestion symptoms in heart failure patients is realized, and early congestion symptoms are detected in time. Compared with the existing approach, in which early congestion symptom monitoring can only be achieved through complex hospital examination procedures such as routine examination, electrocardiographic examination and imaging examination, the application can achieve early congestion symptom monitoring from the voice signal of the heart failure patient alone, thereby effectively improving the convenience of heart failure condition monitoring.
Referring to fig. 2, fig. 2 is a flowchart illustrating a second embodiment of a method for monitoring a heart failure condition according to the present application.
In another embodiment of the present application, the same or similar contents as those of the above embodiment may be referred to the above description, and will not be repeated. A second embodiment of the present application provides a method for monitoring a heart failure condition, wherein the step of acquiring a voice signal of a heart failure patient in step S100 includes:
Step S110, a preset pronunciation task is displayed to the heart failure patient;
step S120, collecting pronunciation voice output by the heart failure patient for the preset pronunciation task as the voice signal.
In this embodiment, it should be noted that the preset pronunciation task is a task for mobilizing at least one pronunciation physiological structure to perform pronunciation. The pronunciation physiological structure is a physiological structure for pronunciation affected by the manifestation symptoms of heart failure (including at least congestion symptoms), such as lungs, superior laryngeal channels (nasal cavity, oral cavity, pharyngeal cavity), pronunciation related muscles (pharyngeal muscles, laryngeal muscles, soft palate muscles), larynx (vocal cords), and the like.
According to the embodiment, the preset pronunciation tasks can be displayed to the heart failure patient in the forms of characters, images or voices, so that the heart failure patient is guided to send corresponding voices according to the preset pronunciation tasks. Then, the embodiment may collect, through a sound pickup device (such as a microphone of a smart phone or a smart watch), the pronunciation voice output by the heart failure patient for the preset pronunciation task as the voice signal.
In some embodiments, the preset pronunciation tasks include at least one of a continuous vowel task, a longest pronunciation task, a continuous reading task, and a paragraph reading task.
In this embodiment, the continuous vowel task is a pronunciation task for guiding the heart failure patient to issue a designated single vowel, and the continuous vowel task at least includes a pronunciation task of any single vowel. The present embodiment selects the sustained vowel task mainly in view of: the continuous vowel task can eliminate the influence of pronunciation confounding factors such as speech speed, intonation, dialect, accent and even language, so that based on vowel pronunciation voice of a heart failure patient under the guidance of the continuous vowel task, the abnormal voice state of the heart failure patient caused by congestion symptoms under different pronunciation confounding factors can be effectively identified.
In this embodiment, the longest sound task is a sound task that guides the heart failure patient to perform monosyllabic sound for the longest time, and the sound content of the monosyllabic sound is not limited in this embodiment, and the longest sound task may include at least one sound task of any monosyllabic sound. The longest sounding task is selected in this embodiment mainly in consideration of: good respiratory movement capability is required during long-term pronunciation. Respiratory dysfunction of heart failure patients caused by congestion symptoms can be identified based on the longest sounding speech uttered by heart failure patients under guidance of the longest sounding task.
In this embodiment, the continuous reading task is a pronunciation task that guides the heart failure patient to read continuously, for example reading the same number repeatedly or reading different numbers in succession. The continuous reading task is selected in this embodiment mainly because continuous reading requires coordination between the larynx-related sounding muscles and respiratory motion. The continuous reading voice of the heart failure patient under the guidance of this task can therefore reflect the stability of, and coordination between, the respiratory and muscle functions, from which instability of respiratory motion and of pronunciation-related muscle control, and incoordination between them, caused by congestion symptoms can be identified.
In this embodiment, the paragraph reading task is a pronunciation task that guides the heart failure patient to read the text of a specified paragraph. For example, the guiding text "please read the next paragraph" can be displayed on the screen, the specified paragraph text presented, and a corresponding voice prompt played. The paragraph reading task is selected in this embodiment mainly because long-text reading involves multiple pronunciations and transitions between different pronunciations, so the paragraph reading voice uttered by the heart failure patient under the guidance of this task yields richer acoustic voice features. From the viewpoint of these multiple acoustic features, the present embodiment can identify abnormal pronunciation and abnormal pronunciation transitions of the heart failure patient caused by congestion symptoms.
Therefore, in this embodiment, at least one of vowel pronunciation voice, longest pronunciation voice, continuous reading voice and paragraph reading voice, which are issued by the heart failure patient under the guidance of the preset pronunciation task, may be collected as the voice signal of the heart failure patient.
In a second embodiment of the application, the heart failure patient is presented with a preset pronunciation task; and collecting pronunciation voice output by the heart failure patient aiming at the preset pronunciation task as the voice signal. In this embodiment, the preset pronunciation task guides the heart failure patient to make a designated pronunciation voice, and compared with the randomly collected speaking content, the voice signal collected in this embodiment has a higher correlation degree with the congestion symptoms of the heart failure patient, so that the accuracy of detecting the congestion symptoms of the heart failure patient can be further improved.
Referring to fig. 3, fig. 3 is a flow chart of a third embodiment of the disease monitoring method of the present application.
In another embodiment of the present application, the same or similar contents as those of the above embodiment may be referred to the above description, and will not be repeated. A third embodiment of the present application provides a method for monitoring a disease condition, before the step of inputting the speech feature into a target classification model in step S300, including:
step A10, obtaining sample voice data, wherein the sample voice data comprises a plurality of groups of voice samples and corresponding sample labels thereof, and the sample labels comprise the presence of congestion symptoms and the absence of congestion symptoms;
Step A20, dividing the sample voice data into a training data set and a test data set;
Step A30, training the initial classification model based on the training data set to obtain a trained classification model;
And step A40, testing the trained classification model based on the test data set, and taking the trained classification model passing the test as a target classification model.
In this embodiment, it should be noted that the initial classification model is an initialized classification model, and the classification model may be at least one of the following analysis models: adaptive boosting (AdaBoost), artificial neural network, decision tree, gradient boosting machine, K-nearest neighbor algorithm, light gradient boosting machine (LightGBM), logistic regression, random forest, support vector machine, extreme gradient boosting (XGBoost), and convolutional neural network.
In this embodiment, it should be further noted that the sample voice data includes a plurality of sets of voice samples and corresponding sample tags thereof. The voice sample is a voice characteristic of a sample person, and the sample tag characterizes the sample person as having a symptom of congestion or not having a symptom of congestion. Illustratively, the sample person has a symptom of congestion, then the sample label may be noted as 1; the sample person does not have a congestion symptom, the sample label may be marked as 0. The voice sample may be a voice feature obtained by collecting a voice signal sent by the sample person under the guidance of the preset voice task and extracting features of the voice signal of the sample person.
In this embodiment, sample voice data may be acquired, and then the sample voice data may be divided into a training data set and a test data set according to a preset ratio (e.g., 7:3, 8:2, etc.). And training the initial classification model based on the training data set until the initial classification model converges or reaches the preset iteration number to obtain a trained classification model. And then inputting the test voice sample in the test data set into the trained classification model to obtain a classification result. And then comparing the classification result with the test sample label to obtain the classification accuracy of the trained classification model. If the classification accuracy is higher than a preset accuracy threshold (such as 95%, 96%, etc.), the trained classification model is determined to pass the test, and the trained classification model passing the test can be used as the target classification model. And if the classification accuracy is not higher than a preset accuracy threshold (such as 95%, 96%, etc.), judging that the trained classification model fails the test, and further executing the step A10 to retrain the initial classification model. Or extracting erroneous judgment data from the test data set, adding the erroneous judgment data to a training data set to obtain a new training data set, and executing the step A20.
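The step A10–A40 flow can be sketched as follows. The nearest-centroid model here is only a toy stand-in for the classification models named above, and the split ratio and accuracy threshold follow the examples in the text; all names are illustrative:

```python
import numpy as np

def split_data(X, y, train_ratio=0.8, seed=0):
    """Divide sample voice data into a training set and a test set (e.g. 8:2)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_ratio * len(X))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

class NearestCentroid:
    """Toy stand-in for the classification model
    (label 1 = congestion symptoms present, 0 = absent)."""
    def fit(self, X, y):
        self.c0 = X[y == 0].mean(axis=0)
        self.c1 = X[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return (d1 < d0).astype(int)

def train_and_test(X, y, accuracy_threshold=0.95):
    """A10-A40: split, train, test; keep the model only if it passes."""
    X_tr, y_tr, X_te, y_te = split_data(X, y)
    model = NearestCentroid().fit(X_tr, y_tr)
    acc = (model.predict(X_te) == y_te).mean()
    return model if acc > accuracy_threshold else None
```

On well-separated feature clusters the trained model clears the threshold and is returned as the target classification model; otherwise `None` signals that retraining (step A10) is needed.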
Taking extreme gradient boosting (XGBoost, eXtreme Gradient Boosting) as an example of the classification model: in the training stage, the objective function of XGBoost consists of two parts, the training loss and a regularization term. The objective function is:

$Obj = \sum_{i=1}^{n} l(y_i, \hat{y}_i) + \sum_{k} \Omega(f_k)$

where $l(y_i, \hat{y}_i)$ is the loss function, measuring the difference between the true value $y_i$ of the $i$-th sample and the predicted value $\hat{y}_i$; $\Omega(f_k)$ is the regularization term used to control the complexity of the classification model, and $f_k$ is the $k$-th learner. In XGBoost, the regularization term $\Omega(f)$ penalizes the complexity of the classification model and is defined as:

$\Omega(f) = \gamma T + \dfrac{1}{2} \lambda \sum_{j=1}^{T} w_j^2$

where $T$ is the number of leaf nodes of the decision tree, $w_j$ is the weight of the $j$-th leaf node, and $\gamma$ and $\lambda$ are the hyperparameters of the L2 regularization that control the number of leaf nodes and the leaf weights, respectively.

To optimize the objective function, XGBoost employs a second-order Taylor expansion to approximate the loss function, so that for the tree $f_t$ added in round $t$, the approximate objective function is:

$Obj^{(t)} \approx \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \dfrac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t)$

where $\hat{y}_i^{(t-1)}$ is the predicted value after round $t-1$, and $g_i$ and $h_i$ are respectively the first and second derivatives of the loss function at $\hat{y}_i^{(t-1)}$, i.e.:

$g_i = \partial_{\hat{y}^{(t-1)}} l\big(y_i, \hat{y}^{(t-1)}\big), \qquad h_i = \partial^2_{\hat{y}^{(t-1)}} l\big(y_i, \hat{y}^{(t-1)}\big)$

When constructing the decision tree, XGBoost selects the split point that maximizes the gain, which is calculated from the change in the objective function. Given a split that divides the data at a node $I$ into a left subset $I_L$ and a right subset $I_R$, the gain is defined as:

$Gain = \dfrac{1}{2} \left[ \dfrac{\big(\sum_{i \in I_L} g_i\big)^2}{\sum_{i \in I_L} h_i + \lambda} + \dfrac{\big(\sum_{i \in I_R} g_i\big)^2}{\sum_{i \in I_R} h_i + \lambda} - \dfrac{\big(\sum_{i \in I} g_i\big)^2}{\sum_{i \in I} h_i + \lambda} \right] - \gamma$

Given the tree structure, the optimal weight of each leaf node can be obtained by minimizing the objective function:

$w_j^{*} = -\dfrac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}$

where $I_j$ denotes the index set of all samples on the $j$-th leaf node. XGBoost builds each decision tree layer by layer with a greedy algorithm: at every split, all features and all their candidate division points are traversed, the gain of each division point is calculated, and the division point with the maximum gain is selected to split the node. This process is repeated until a stop condition is met, such as reaching the maximum depth or the gain falling below a threshold. After a decision tree is built, XGBoost prunes it, supporting two forms of pruning: pre-pruning (stopping growth as the tree grows, e.g. by maximum depth or minimum leaf weight) and post-pruning (checking bottom-up, after the tree is fully grown, whether each branch should be preserved). In each round of training, the output of the new decision tree is multiplied by a learning rate (shrinkage factor $\eta$), and the outputs of the multiple decision trees are summed to obtain the final prediction:

$\hat{y}_i = \sum_{t} \eta\, f_t(x_i)$
In this embodiment, the set of all the constructed decision trees is used as the trained classification model, then the trained classification model is tested based on the test data set, and the trained classification model passing the test is used as the target classification model.
In a third embodiment of the present application, sample voice data is acquired and divided into a training data set and a test data set; the initial classification model is trained based on the training data set to obtain a trained classification model; the trained classification model is then tested based on the test data set, and the trained classification model passing the test is taken as the target classification model. A classification model is thus obtained that can distinguish congestion symptoms based on the voice features of heart failure patients.
Referring to fig. 4, fig. 4 is a schematic structural view of a heart failure condition monitoring device according to the present application.
As shown in fig. 4, the present application provides a heart failure condition monitoring apparatus, which is applied to heart failure condition monitoring equipment, the heart failure condition monitoring apparatus includes:
a voice acquisition module 10 for acquiring a voice signal of a heart failure patient;
The feature extraction module 20 is configured to perform feature extraction on the voice signal to obtain a voice feature;
A congestion detection module 30 for inputting the speech features into a target classification model to obtain a congestion symptom result for the heart failure patient;
A condition prompting module 40, configured to prompt the heart failure patient for a condition based on the congestion symptom result.
Optionally, the voice acquisition module 10 is further configured to: displaying a preset pronunciation task to the heart failure patient;
And collecting pronunciation voice output by the heart failure patient aiming at the preset pronunciation task as the voice signal.
Optionally, the preset pronunciation tasks include at least one of a continuous vowel task, a longest pronunciation task, a continuous reading task, and a paragraph reading task.
Optionally, the feature extraction module 20 is further configured to:
Preprocessing the voice signal to obtain a preprocessed signal;
And extracting the audio characteristics of the preprocessing signals to obtain voice characteristics.
Optionally, the feature extraction module 20 is further configured to:
Identifying a mute segment in the voice signal, and deleting the mute segment to obtain an effective voice signal;
And carrying out pre-emphasis, framing and windowing on the effective voice signal to obtain a pre-processing signal.
Optionally, the heart failure condition monitoring device further comprises a model training module for:
acquiring sample voice data, wherein the sample voice data comprises a plurality of groups of voice samples and corresponding sample labels thereof, and the sample labels comprise the presence of congestion symptoms and the absence of congestion symptoms;
Dividing the sample speech data into a training data set and a test data set;
Training the initial classification model based on the training data set to obtain a trained classification model;
and testing the trained classification model based on the test data set, and taking the trained classification model passing the test as a target classification model.
Optionally, the condition prompting module 40 is further configured to:
Outputting the congestion symptom result to the heart failure patient;
And outputting preset prompt information to the heart failure patient after the congestion symptom results in the congestion symptom, so as to prompt the heart failure patient to have a disease deterioration risk.
The heart failure condition monitoring device provided by the application adopts the heart failure condition monitoring method in each embodiment, and solves the technical problem that the existing heart failure condition monitoring method is poor in convenience. Compared with the prior art, the heart failure condition monitoring device provided by the embodiment of the application has the same beneficial effects as the heart failure condition monitoring method provided by the embodiment, and other technical features in the heart failure condition monitoring device are the same as the features disclosed by the method of the embodiment, and are not repeated herein.
In addition, referring to fig. 5, fig. 5 is a schematic view of a scenario according to an embodiment of the present application. In order to further reduce the requirements of heart failure condition monitoring, the voice acquisition module 10 and the condition prompting module 40 may be disposed in an intelligent terminal (such as a smart phone, a tablet computer, an intelligent wearable device, etc.), so that the voice acquisition module 10 can acquire the voice signal of the heart failure patient, and the condition prompting module 40 can prompt the heart failure patient based on the congestion symptom result. In this embodiment, the feature extraction module 20 and the congestion detection module 30 may be disposed at a server side, and the intelligent terminal and the server may communicate through a wireless or wired connection to implement data transmission. Thus, the server side can perform feature extraction on the voice signal through the feature extraction module 20 to obtain the voice features, and input the voice features into the target classification model through the congestion detection module 30 to obtain the congestion symptom result of the heart failure patient, which on the one hand ensures the detection efficiency of the congestion symptom result and on the other hand effectively reduces the computing-capability requirements on the intelligent terminal.
As shown in fig. 6, fig. 6 is a schematic device structure diagram of a hardware running environment according to an embodiment of the present application.
Specifically, the heart failure condition monitoring device may be a PC (Personal Computer), a tablet computer, a smart wearable device, a portable computer, or a server.
As shown in fig. 6, the heart failure condition monitoring device may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a heart failure patient interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connected communication between these components. The heart failure patient interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may also include standard wired and wireless interfaces. Optionally, the network interface 1004 may include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a high-speed random access memory (Random Access Memory, RAM) or a stable non-volatile memory (Non-Volatile Memory, NVM), such as disk storage. The memory 1005 may also optionally be a storage device separate from the processor 1001.
It will be appreciated by those skilled in the art that the device configuration shown in fig. 6 is not limiting of the heart failure condition monitoring device and may include more or fewer components than shown, or certain components in combination, or a different arrangement of components.
As shown in fig. 6, an operating system, a network communication module, a heart failure patient interface module, and a computer program may be included in memory 1005, which is a type of computer storage medium.
In the device shown in fig. 6, the network interface 1004 is mainly used for connecting to a background server and performing data communication with it; the heart failure patient interface 1003 is mainly used for connecting to a client and performing data communication with it; and the processor 1001 may be configured to invoke the computer program stored in the memory 1005 to implement the operations of the heart failure condition monitoring method provided in the above embodiment.
In addition, an embodiment of the present application further provides a computer storage medium on which a computer program is stored; when the computer program is executed by a processor, the operations of the heart failure condition monitoring method provided in the foregoing embodiment are implemented. The specific steps are not described here in detail.
It should be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity, operation, or object from another, and do not necessarily require or imply any actual relationship or order between them. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively brief; for relevant details, reference may be made to the description of the method embodiments. The above-described device embodiments are merely illustrative, and the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present application. Those of ordinary skill in the art can understand and implement the present application without undue burden.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or alternatively by hardware, although in many cases the former is preferred. Based on this understanding, the technical solution of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a television set, a network device, etc.) to perform the method according to the embodiments of the present application.
The foregoing description covers only the preferred embodiments of the present application and is not intended to limit the scope of the application; any equivalent structural or process transformation derived from the disclosure herein, whether employed directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present application.

Claims (10)

1. A method of heart failure condition monitoring, the method comprising the steps of:
Acquiring a voice signal of a heart failure patient;
Extracting the characteristics of the voice signal to obtain voice characteristics;
Inputting the voice characteristics into a target classification model to obtain a congestion symptom result of the heart failure patient;
And prompting the heart failure patient of the illness state based on the congestion symptom result.
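The four claimed steps can be read as a simple acquire → extract → classify → prompt pipeline. The sketch below is illustrative only: the synthetic tone, the toy features, the energy threshold, and every function name are assumptions of this example, not part of the patent; the patent's target classification model is a trained classifier, not a fixed threshold.

```python
import numpy as np

def acquire_voice_signal():
    # Stand-in for microphone capture: one second of a 220 Hz tone at 16 kHz.
    sr = 16000
    t = np.arange(sr) / sr
    return np.sin(2 * np.pi * 220 * t), sr

def extract_features(signal, sr):
    # Toy voice features: mean energy and zero-crossing rate of the signal.
    energy = float(np.mean(signal ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal)))) / 2)
    return np.array([energy, zcr])

def classify_congestion(features, threshold=0.1):
    # Placeholder for the trained target classification model.
    return "congestion present" if features[0] > threshold else "no congestion"

def prompt_patient(result):
    # Condition prompt derived from the congestion symptom result.
    return f"Monitoring result: {result}"

signal, sr = acquire_voice_signal()
features = extract_features(signal, sr)
result = classify_congestion(features)
print(prompt_patient(result))
```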
2. The heart failure condition monitoring method according to claim 1, wherein the step of acquiring a voice signal of the heart failure patient comprises:
Displaying a preset pronunciation task to the heart failure patient;
And collecting, as the voice signal, the pronunciation voice output by the heart failure patient in response to the preset pronunciation task.
3. The heart failure condition monitoring method of claim 2, wherein the preset pronunciation tasks include at least one of a sustained vowel task, a longest pronunciation task, a continuous reading task, and a paragraph reading task.
4. The heart failure condition monitoring method according to claim 1, wherein the step of extracting the features of the voice signal to obtain voice features comprises:
Preprocessing the voice signal to obtain a preprocessed signal;
And extracting the audio characteristics of the preprocessing signals to obtain voice characteristics.
5. The heart failure condition monitoring method of claim 4, wherein the step of preprocessing the speech signal to obtain a preprocessed signal comprises:
Identifying a mute segment in the voice signal, and deleting the mute segment to obtain an effective voice signal;
And carrying out pre-emphasis, framing and windowing on the effective voice signal to obtain a pre-processing signal.
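The preprocessing chain of claims 4 and 5 (silence removal, pre-emphasis, framing, windowing) can be sketched in a few lines of numpy. All numeric choices below (25 ms frames, 10 ms hop, pre-emphasis coefficient 0.97, a −40 dB gate) are illustrative assumptions, and for brevity the silence removal here is applied per frame after framing, rather than by deleting mute segments from the raw signal first as the claim describes.

```python
import numpy as np

def preprocess(signal, sr, frame_ms=25, hop_ms=10, alpha=0.97, silence_db=-40.0):
    """Pre-emphasis, framing, windowing and a frame-level silence gate (illustrative)."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    # Pre-emphasis: y[n] = x[n] - alpha * x[n-1], boosting high frequencies.
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Framing: overlapping frames of `frame` samples, one every `hop` samples.
    n_frames = 1 + max(0, (len(emphasized) - frame) // hop)
    idx = np.arange(frame)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emphasized[idx]
    # Windowing: a Hamming window reduces spectral leakage at frame edges.
    frames = frames * np.hamming(frame)
    # Silence gate: keep frames whose energy is within `silence_db` of the loudest frame.
    energy = np.sum(frames ** 2, axis=1)
    keep = 10 * np.log10(energy / (energy.max() + 1e-12) + 1e-12) > silence_db
    return frames[keep]

sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 200 * t)
# Half a second of silence on each side stands in for the mute segments.
padded = np.concatenate([np.zeros(sr // 2), speech, np.zeros(sr // 2)])
frames = preprocess(padded, sr)
print(frames.shape)
```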
6. The heart failure condition monitoring method according to claim 1, wherein, before the step of inputting the voice features into a target classification model, the method comprises:
Acquiring sample voice data, wherein the sample voice data comprises a plurality of groups of voice samples and their corresponding sample labels, and the sample labels comprise the presence of congestion symptoms and the absence of congestion symptoms;
Dividing the sample voice data into a training data set and a test data set;
Training an initial classification model based on the training data set to obtain a trained classification model;
And testing the trained classification model based on the test data set, and taking the trained classification model that passes the test as the target classification model.
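The training workflow of claim 6 — split the labelled samples, train an initial classification model, test it, and accept it as the target model only if it passes — can be sketched with synthetic features. The nearest-centroid classifier, the 80/20 split, the 0.9 acceptance threshold, and the synthetic data are all assumptions of this example; the patent does not disclose the model family or the acceptance criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labelled voice features; label 1 = congestion present.
n, dim = 200, 8
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, dim)),
               rng.normal(3.0, 1.0, (n // 2, dim))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Divide the sample data into a training set and a test set (80/20 assumed).
perm = rng.permutation(n)
cut = int(0.8 * n)
train, test = perm[:cut], perm[cut:]

# "Initial classification model": a nearest-centroid classifier, chosen only
# for compactness; one centroid per class, fitted on the training set.
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

def predict(samples):
    # Assign each sample to the class of its nearest centroid.
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Test the trained model; accept it as the target model only if it passes.
accuracy = float(np.mean(predict(X[test]) == y[test]))
target_model_accepted = accuracy >= 0.9
print(accuracy, target_model_accepted)
```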
7. The heart failure condition monitoring method according to any one of claims 1 to 6, wherein the step of prompting the heart failure patient of the illness state based on the congestion symptom result comprises:
Outputting the congestion symptom result to the heart failure patient;
And outputting preset prompt information to the heart failure patient when the congestion symptom result indicates the presence of congestion symptoms, so as to prompt the heart failure patient of a risk of disease deterioration.
8. A heart failure condition monitoring device, the heart failure condition monitoring device comprising:
the voice acquisition module is used for acquiring voice signals of heart failure patients;
The feature extraction module is used for extracting features of the voice signals to obtain voice features;
the congestion detection module is used for inputting the voice features into a target classification model to obtain a congestion symptom result of the heart failure patient;
and the condition prompting module is used for prompting the heart failure patient of the illness state based on the congestion symptom result.
9. A heart failure condition monitoring device, the heart failure condition monitoring device comprising: a memory, a processor, the memory having stored thereon a computer program executable on the processor, the computer program when executed by the processor implementing the steps of the heart failure condition monitoring method of any one of claims 1 to 7.
10. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, which when executed by a processor, implements the steps of the heart failure condition monitoring method of any one of claims 1 to 7.
CN202410319642.1A 2024-03-20 2024-03-20 Method, device and equipment for monitoring heart failure illness state and readable storage medium Active CN117898684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410319642.1A CN117898684B (en) 2024-03-20 2024-03-20 Method, device and equipment for monitoring heart failure illness state and readable storage medium


Publications (2)

Publication Number Publication Date
CN117898684A true CN117898684A (en) 2024-04-19
CN117898684B CN117898684B (en) 2024-06-18

Family

ID=90686317


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100052284A (en) * 2008-11-10 2010-05-19 한국 한의학 연구원 Stroke differentiation of syndromes discriminant apparatus and method for the same
US20200327882A1 (en) * 2019-04-15 2020-10-15 Janssen Pharmaceutica Nv System and method for detecting cognitive decline using speech analysis
CN112863667A (en) * 2021-01-22 2021-05-28 杭州电子科技大学 Lung sound diagnosis device based on deep learning
CN114299925A (en) * 2021-12-31 2022-04-08 江苏省省级机关医院 Method and system for obtaining importance measurement index of dysphagia symptom of Parkinson disease patient based on voice
CN116013371A (en) * 2022-12-13 2023-04-25 华南理工大学 Neurodegenerative disease monitoring method, system, device and storage medium
CN116269223A (en) * 2023-02-10 2023-06-23 平安科技(深圳)有限公司 Alzheimer's disease prediction method, device, equipment and storage medium
CN116434739A (en) * 2023-03-06 2023-07-14 深圳大学 Device for constructing classification model for identifying different stages of heart failure and related assembly



Similar Documents

Publication Publication Date Title
US20200294509A1 (en) Method and apparatus for establishing voiceprint model, computer device, and storage medium
Panek et al. Acoustic analysis assessment in speech pathology detection
KR102216160B1 (en) Apparatus and method for diagnosing disease that causes voice and swallowing disorders
US11672472B2 (en) Methods and systems for estimation of obstructive sleep apnea severity in wake subjects by multiple speech analyses
Kapoor et al. Parkinson’s disease diagnosis using Mel-frequency cepstral coefficients and vector quantization
US20150154980A1 (en) Cepstral separation difference
CN110970036A (en) Voiceprint recognition method and device, computer storage medium and electronic equipment
KR102442426B1 (en) Method and device for improving dysarthria
Almaghrabi et al. Bio-acoustic features of depression: A review
CN113496696A (en) Speech function automatic evaluation system and method based on voice recognition
Wang et al. Automatic hypernasality detection in cleft palate speech using cnn
Usman et al. Heart rate detection and classification from speech spectral features using machine learning
Khan et al. Assessing Parkinson's disease severity using speech analysis in non-native speakers
WO2023139559A1 (en) Multi-modal systems and methods for voice-based mental health assessment with emotion stimulation
KR20220128976A (en) Device, method and program for speech impairment evaluation
CN113571088A (en) Difficult airway assessment method and device based on deep learning voiceprint recognition
JP2024504097A (en) Automated physiological and pathological assessment based on speech analysis
CN113974607A (en) Sleep snore detecting system based on impulse neural network
CN117898684B (en) Method, device and equipment for monitoring heart failure illness state and readable storage medium
Hidaka et al. Automatic GRBAS scoring of pathological voices using deep learning and a small set of labeled voice data
Sahoo et al. Analyzing the vocal tract characteristics for out-of-breath speech
Likhachov et al. A mobile application for detection of amyotrophic lateral sclerosis via voice analysis
CN115116475A (en) Voice depression automatic detection method and device based on time delay neural network
Aggarwal et al. Parameterization techniques for automatic speech recognition system
KR102564412B1 (en) System for providing self dysphonia treatment service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant