CN115644848A - Lung function parameter measuring method and system based on voice signals - Google Patents


Info

Publication number
CN115644848A
CN115644848A (application CN202211328300.3A)
Authority
CN
China
Prior art keywords
signal
lung function
formants
time sequence
pronunciation signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211328300.3A
Other languages
Chinese (zh)
Inventor
伍楷舜
王泰华
李聪
陈霞
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority: CN202211328300.3A
Publication: CN115644848A
Legal status: Pending


Abstract

The invention discloses a method and system for measuring lung function parameters from voice signals. The method comprises the following steps: collecting a user pronunciation signal with the microphone of a smart device used as the signal receiving end; segmenting and preprocessing the pronunciation signal to determine the start point and end point of a single pronunciation signal; calculating the formant frequency sequence of the single pronunciation signal to obtain a plurality of time-series formants at different frequencies, and further extracting relevant statistical features of the time-series formants; inputting the relevant statistical features of the time-series formants into a trained machine learning model to obtain a lung function parameter prediction result, wherein the lung function parameters include FEV1/FVC; and displaying the lung function prediction result and related suggestions in an app on the smart device. Because the invention measures lung function parameters from voice signals collected by smart devices already used in daily life, it reduces detection cost and enables continuous monitoring and mobile tracking of self-tested lung function outside the hospital.

Description

Lung function parameter measuring method and system based on voice signals
Technical Field
The invention relates to the technical field of machine learning, and in particular to a method and system for measuring lung function parameters from voice signals.
Background
Spirometry is typically performed in clinical laboratories using medical-grade equipment. As the gold standard, spirometry directs the patient to inhale maximally and then expel the air as forcefully and for as long as possible. The test is usually administered by a respiratory or general practitioner, who coaches the subject to give maximum effort so that accurate results are obtained. A successful test yields a flow-volume curve. Knowing the typical appearance of a healthy person's flow-volume curve, a respiratory physician can read information about underlying disease from the shape of the curve of a patient suspected of having a respiratory condition. Patients with active respiratory disease, however, may be unable to undergo spirometry, which is a limiting factor for laboratory-based testing. Portable spirometers now offer an alternative to expensive medical-grade devices outside the clinical setting, with accuracy comparable to laboratory-based systems; they allow the evolution of a patient's respiratory problems to be monitored remotely and tracked more frequently. Peak flow meters are another inexpensive and portable option for monitoring maximum expiratory airflow. However, most of these devices measure airflow only in the main airways, depend on the patient's effort level, and are unreliable as predictors of asthma exacerbations.
In the prior art, audio-based breath assessment research has focused on expiratory sounds collected by a smartphone microphone. This approach has several drawbacks. It raises concerns about unsupervised use, since it requires exertion by the patient and may push the heart rate above baseline, and patients with severely compromised lung function may not be able to perform it at all. In addition, the usual method requires a laborious forced-expiration task, and the lack of clinical supervision combined with the complexity of that task can reduce subject compliance and produce incorrect data. Further errors arise from environmental variables during the measurement, such as body-motion artifacts in image-based spirometry, and background noise and variation in microphone-to-mouth distance in audio-based spirometry.
Disclosure of Invention
The present invention is directed to overcoming the above-mentioned drawbacks of the prior art and providing a method and system for measuring lung function parameters based on speech signals.
According to a first aspect of the present invention, a method for measuring lung function parameters based on voice signals is provided. The method comprises the following steps:
collecting a user pronunciation signal with a microphone of a smart device used as the signal receiving end;
segmenting and preprocessing the pronunciation signal to determine a start point and an end point of a single pronunciation signal;
calculating a formant frequency sequence of the single pronunciation signal to obtain a plurality of time-series formants at different frequencies, and further extracting relevant statistical features of the time-series formants;
inputting the relevant statistical features of the time-series formants into a trained machine learning model to obtain a lung function parameter prediction result, wherein the lung function parameters include FEV1/FVC;
and displaying the lung function prediction result and related suggestions in an app on the smart device.
According to a second aspect of the present invention, a system for measuring lung function parameters based on voice signals is provided. The system comprises:
a signal acquisition module, for collecting a user pronunciation signal with a microphone of a smart device used as the signal receiving end;
a signal processing module, for segmenting and preprocessing the pronunciation signal to determine a start point and an end point of a single pronunciation signal;
a feature extraction module, for calculating a formant frequency sequence of the single pronunciation signal, obtaining a plurality of time-series formants at different frequencies, and further extracting relevant statistical features of the time-series formants;
a model prediction module, for inputting the relevant statistical features of the time-series formants into a trained machine learning model to obtain a lung function parameter prediction result, wherein the lung function parameters include FEV1/FVC;
a result feedback module, for displaying the lung function prediction result and related suggestions in an app on the smart device.
Compared with the prior art, the invention meets the need for lung function evaluation outside the clinical environment and enables continuous monitoring and mobile tracking of self-tested lung function outside the hospital, so that users do not need to prepare expensive detection equipment and equipment costs are reduced. In addition, the invention can run on commercial electronic devices (such as a smartphone) without additional hardware, further reducing the application cost.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a lung function parameter measurement method based on a voice signal according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a framework structure of a lung function parameter measurement system based on a voice signal according to an embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Referring to fig. 1, the provided lung function parameter measuring method based on a voice signal includes the following steps.
In step S10, a microphone of the smart device is used as the signal receiving end to collect a user pronunciation signal.
The smart device may be any of various types of electronic devices, such as a smartphone, a tablet, or a wearable device. The following description takes a smartphone as an example.
For example, the user points the microphone of a smartphone, acting as the signal receiving end, at his or her mouth, and the pronunciation signal is then collected. The pronunciation signal is the sound produced directly when the user reads aloud. In practice, the handset need only be placed in front of the mouth (within 15 cm) to ensure a good audio recording.
In one embodiment, step S10 comprises the sub-steps of:
S11, collecting the pronunciation signal with the smartphone microphone;
Preferably, the user reads a single vowel (e.g., a, o, e, i, or u in Mandarin) for as long as possible. When the user phonates, the airflow travels up from the lungs through the trachea and larynx in order, and finally out of the mouth. If the vowel is an open sound, the vocal tract is fully open and the airflow can flow out smoothly, which benefits the measurement of lung function parameters. Reading one vowel at a time for as long as possible is called a vowel session. The following description mainly takes a vowel session as an example.
S12, processing the collected pronunciation signal to remove interference from high-frequency environmental noise and electromagnetic noise.
For example, denoising is performed with a Butterworth low-pass filter, with the cutoff frequency set to 5500 Hz and the microphone sampling rate set to 44100 Hz.
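As a sketch, this denoising step can be implemented with SciPy. The 5500 Hz cutoff and 44100 Hz sampling rate follow the text; the filter order (here 4) and the function name are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def denoise_lowpass(x, fs=44100, cutoff=5500, order=4):
    """Zero-phase Butterworth low-pass filtering to suppress
    high-frequency environmental and electromagnetic noise."""
    b, a = butter(order, cutoff / (0.5 * fs), btype="low")
    return filtfilt(b, a, np.asarray(x, dtype=float))
```

Zero-phase filtering (`filtfilt`) avoids shifting the onset of the utterance, which matters for the later endpoint detection.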
In step S20, the collected pronunciation signal is segmented and preprocessed to determine the start point and end point of a single pronunciation signal.
In one embodiment, step S20 comprises the following sub-steps:
and step S21, normalizing the received pronunciation signal, so as to eliminate interference caused by the distance difference between the smart phone and the mouth of the user.
For example, the normalization is expressed as:

y = (x − x_min) / (x_max − x_min)

where x and y are the speech signal before and after normalization, respectively, x_min is the minimum peak before normalization, and x_max is the maximum peak before normalization.
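A minimal sketch of this min-max normalization in Python (the function name is illustrative):

```python
import numpy as np

def minmax_normalize(x):
    """Min-max normalization y = (x - x_min) / (x_max - x_min),
    mapping the signal into [0, 1] so that amplitude differences
    caused by the phone-to-mouth distance are removed."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```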
In step S22, noise in the sound signal is removed with a least-mean-squares (LMS) filter.
For example, the filter length of the LMS filter is set to 10 ms.
In step S23, the start point and end point of the user's single vowel session are detected with a frame-energy-threshold method, so that the single vowel session can be segmented.
Suppose the speech time-domain signal collected by the microphone is x(n), and the n-th frame of the speech signal is x_n(m) = x((n−1)·l + m), where l is the frame hop, m ∈ [0, N−1], and N is the frame length. The frame energy of the n-th frame x_n(m) is:

E_n = Σ_{m=0}^{N−1} x_n(m)²
when the signal continuously exceeds the threshold value and is maintained for a period of time t (t is more than 3 s) in signal detection, the signal is not considered to be a noise signal but a vowel conversation needing to be extracted, and the signal is further cut to extract the starting point and the ending point of the single vowel conversation. Specifically, the start point of a vowel conversation is first determined, and a sample point before the first frame (i.e., the last of the previous frame) exceeding a threshold, for example, is selected as the start point; then, the end point of the pronunciation signal is determined, for example, the first sampling point after the energy of M continuous frames of the signal is lower than the threshold.
In step S30, a formant frequency sequence of the single pronunciation signal is calculated to obtain a plurality of time-series formants at different frequencies, and the relevant statistical features of the time-series formants are then extracted.
In one embodiment, step S30 includes the following sub-steps:
step S31, obtaining a spectrogram by using short-time Fourier transform on the vowel conversation, and obtaining a time sequence formant of the vowel conversation by using a linear predictive coding technology;
specifically, the sound signal is resampled at a sampling frequency twice the upper limit value (e.g., 5500 Hz) of the formant, and then subjected to a pre-emphasis operation. For each frame of a vowel conversation (e.g., frame length 0.025s, frame shift 0.005 s), a gaussian window function is applied and LPC coefficients are calculated using the Burg algorithm. The algorithm finally obtains 5 time sequence resonance peaks F = { F1 (n), F2 (n), F3 (n), F4 (n), F5 (n) } with different frequencies, wherein n is the frame number of the session. It should be understood that the number of frequencies can be set as desired.
In step S32, statistical analysis is performed on the time-series formants to extract the relevant statistical features (four statistics: mean, standard deviation, skewness, and kurtosis). For example, 20 features are finally extracted: mean(Fi(n)), std(Fi(n)), skewness(Fi(n)), kurtosis(Fi(n)), i = 1, 2, 3, 4, 5.
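Collecting the 20 statistics (mean, standard deviation, skewness, and kurtosis of each of the five formant tracks) can be sketched as:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def formant_features(F):
    """Flatten the five formant tracks F1(n)..F5(n) into the 20
    statistics used as model input: mean, std, skewness, kurtosis
    of each track, in that order."""
    feats = []
    for track in F:                      # F: iterable of 5 arrays
        track = np.asarray(track, dtype=float)
        feats += [track.mean(), track.std(), skew(track), kurtosis(track)]
    return np.array(feats)               # shape (20,)
```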
In step S40, the relevant statistical features of the time-series formants are input into the trained machine learning model to obtain a lung function parameter prediction result.
The machine learning model may be, for example, a convolutional neural network or a Gaussian process regression model.
In one embodiment, step S40 includes the following sub-steps:
step S41, a Gaussian Process Regression (GPR) model is constructed for training. In the training phase, model training was performed using the lung function parameter (gold standard) FEV1/FVC ratio evaluated by the FDA-approved lung function instrument as a data label. Further, in the testing stage, the testing result of the trained Gaussian process regression model is output to verify the accuracy of the model.
And S42, inputting the processed time sequence formant frequency characteristics into a trained Gaussian process regression model to obtain a lung function parameter prediction result.
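A sketch of the training and prediction pipeline with scikit-learn. The RBF-plus-white-noise kernel and the synthetic stand-in data are assumptions: the text specifies neither a kernel nor ships the labeled spirometry data, so random 20-dimensional feature vectors and fabricated FEV1/FVC labels are used purely to show the shapes involved:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))            # one 20-dim feature vector per recording
y = 0.7 + 0.05 * X[:, 0] + 0.01 * rng.normal(size=40)  # stand-in FEV1/FVC labels

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:30], y[:30])                  # training stage
pred, std = gpr.predict(X[30:], return_std=True)  # testing stage, with uncertainty
```

A practical advantage of GPR here is that `std` gives a per-prediction uncertainty, which could be surfaced to the user alongside the FEV1/FVC estimate.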
In step S50, the lung function prediction result and related suggestions are displayed in an app on the smart device.
For example, the predicted FEV1/FVC value and related recommendations are fed back to the user and displayed in the smartphone app. FEV1 is the forced expiratory volume in the first second, and FVC is the forced vital capacity. Their ratio, FEV1/FVC, is known as the one-second rate and is a standard index for diagnosing chronic obstructive pulmonary disease.
Accordingly, the present invention also provides a system for measuring lung function parameters based on voice signals, which implements one or more aspects of the method above. For example, the system includes: a signal acquisition module, for collecting a user pronunciation signal with a microphone of a smart device used as the signal receiving end; a signal processing module, for segmenting and preprocessing the pronunciation signal to determine the start point and end point of a single pronunciation signal; a feature extraction module, for calculating a formant frequency sequence of the single pronunciation signal, obtaining a plurality of time-series formants at different frequencies, and further extracting relevant statistical features of the time-series formants; a model prediction module, for inputting the relevant statistical features of the time-series formants into a trained machine learning model to obtain a lung function parameter prediction result, wherein the lung function parameters include FEV1/FVC; and a result feedback module, for displaying the lung function prediction result and related suggestions in an app on the smart device.
Further, the signal acquisition module further comprises:
a voice acquisition unit, for collecting the pronunciation signal from the smartphone microphone;
an interference elimination unit, for processing the collected pronunciation signal and removing interference from high-frequency environmental noise and electromagnetic noise.
Further, the signal processing module further includes:
and the normalization unit is used for performing normalization processing on the received pronunciation signals so as to eliminate interference caused by the distance difference between the smart phone and the mouth of the user.
A filtering processing unit for removing noise in the sound signal by using a least mean square filter;
and the voice segmentation unit detects the starting point and the ending point of the single vowel conversation of the user by using a frame energy threshold-based method, so as to segment the single vowel conversation.
Further, the feature extraction module further comprises:
a formant calculation unit, for obtaining a spectrogram of the vowel session with the short-time Fourier transform and obtaining the time-series formants of the vowel session with linear predictive coding;
a statistical analysis unit, for performing statistical analysis on the time-series formants and extracting the relevant statistical features (mean, standard deviation, skewness, and kurtosis), finally yielding 20 features: mean(Fi(n)), std(Fi(n)), skewness(Fi(n)), kurtosis(Fi(n)), i = 1, 2, 3, 4, 5.
Further, the model prediction module further comprises:
the testing unit is trained, and the Gaussian process regression model has a training stage and a testing stage. In the training stage, the lung function parameter (gold standard) FEV1/FVC ratio evaluated by an FDA approved lung function instrument is used as a data label for model training; and in the testing stage, outputting the prediction result of the trained Gaussian process regression model.
And the prediction unit inputs the processed time sequence formant frequency characteristics into the Gaussian Process Regression model based on a prediction method of the Gaussian Process Regression (GPR) model.
In summary, on the one hand, voice signals are collected with the smartphone the user already carries in daily life, so lung function parameters can be measured without the user preparing expensive detection equipment, reducing equipment cost. On the other hand, the invention meets the need for lung function evaluation outside the clinical environment, enabling continuous monitoring and mobile tracking of self-tested lung function outside the hospital.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk, C++, or Python, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing an electronic circuit, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), with state information of computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (10)

1. A method for measuring lung function parameters based on voice signals, comprising the following steps:
collecting a user pronunciation signal with a microphone of a smart device used as the signal receiving end;
segmenting and preprocessing the pronunciation signal to determine a start point and an end point of a single pronunciation signal;
calculating a formant frequency sequence of the single pronunciation signal to obtain a plurality of time-series formants at different frequencies, and further extracting relevant statistical features of the time-series formants;
inputting the relevant statistical features of the time-series formants into a trained machine learning model to obtain a lung function parameter prediction result, wherein the lung function parameters include FEV1/FVC;
and displaying the lung function prediction result and related suggestions in an app on the smart device.
2. The method of claim 1, wherein collecting the user pronunciation signal with a microphone of the smart device used as the signal receiving end comprises:
placing the smart device within a set distance range in front of the user's mouth;
collecting the sound signal produced when the user reads aloud;
and removing interference within a set frequency range from the sound signal to obtain the pronunciation signal.
3. The method of claim 1, wherein segmenting and preprocessing the pronunciation signal to determine the start point and end point of a single pronunciation signal comprises:
normalizing the pronunciation signal;
removing noise in the pronunciation signal with a least-mean-squares filter;
and detecting the start point and end point of the user's single pronunciation with a frame-energy-threshold method, thereby segmenting the single pronunciation signal.
4. The method of claim 1, wherein calculating the formant frequency sequence of the single pronunciation signal to obtain a plurality of time-series formants at different frequencies, and further extracting the relevant statistical features of the time-series formants, comprises:
obtaining a spectrogram of the segmented single pronunciation signal with the short-time Fourier transform, and obtaining the time-series formants of the single pronunciation signal with linear predictive coding, wherein the time-series formants correspond to a plurality of different frequencies;
and performing statistical analysis on the time-series formants to extract the relevant statistical features.
5. The method of claim 4, wherein the number of the related statistical features is set to 20, corresponding to the mean, standard deviation, skewness and kurtosis features at 5 different frequencies.
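The formant tracking of claim 4 and the 20-feature statistics of claim 5 can be sketched with autocorrelation-method LPC. The LPC order, window choice, and zero-padding of frames that yield fewer than 5 formants are assumptions, and the STFT spectrogram stage is omitted:

```python
import numpy as np
from scipy.stats import skew, kurtosis

def lpc_formants(frame, fs, order=10, n_formants=5):
    """Estimate formant frequencies of one frame by LPC (autocorrelation method)."""
    frame = frame * np.hamming(len(frame))
    # autocorrelation at lags 0..order
    r = np.correlate(frame, frame, "full")[len(frame) - 1:len(frame) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])          # predictor coefficients
    roots = np.roots(np.concatenate(([1.0], -a)))   # poles of 1/A(z)
    roots = roots[np.imag(roots) > 1e-6]            # one root per conjugate pair
    freqs = np.sort(np.angle(roots) * fs / (2 * np.pi))
    # pad with zeros if fewer than n_formants poles were found (assumption)
    return np.pad(freqs, (0, max(0, n_formants - len(freqs))))[:n_formants]

def formant_features(frames, fs):
    """20 features: mean, std, skewness, kurtosis of each of 5 formant tracks."""
    tracks = np.array([lpc_formants(f, fs) for f in frames])  # (n_frames, 5)
    return np.concatenate([tracks.mean(axis=0), tracks.std(axis=0),
                           skew(tracks, axis=0), kurtosis(tracks, axis=0)])

rng = np.random.default_rng(0)
feats = formant_features(rng.standard_normal((30, 320)), fs=8000)  # 30 frames of 40 ms
```

The resulting 20-dimensional vector is what would feed the regression model of claim 6.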
6. The method of claim 1, wherein the machine learning model is a Gaussian process regression model, and the gold-standard lung function parameter, the FEV1/FVC ratio, is used as the data label during model training.
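A sketch of the Gaussian process regression of claim 6 using scikit-learn. The feature matrix and FEV1/FVC labels below are synthetic placeholders, and the kernel choice is an assumption; the patent does not specify a kernel:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training set: 100 utterances x 20 formant statistics,
# labelled with FEV1/FVC ratios (fabricated here for illustration).
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 20))
y = np.clip(0.75 + 0.05 * X[:, 0] + 0.01 * rng.standard_normal(100), 0.3, 1.0)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(),
                               normalize_y=True).fit(X, y)
# GPR returns both a point estimate and a predictive standard deviation,
# which could back the "related suggestions" shown in the app
pred, std = gpr.predict(X[:5], return_std=True)
```

The per-prediction uncertainty is one practical reason to prefer Gaussian process regression over a plain regressor for a screening application.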
7. The method of claim 1, wherein the pronunciation signal is a vowel utterance, a vowel utterance referring to the user reading a vowel for a set duration.
8. The method of claim 1, wherein the smart device is a smartphone, tablet, or wearable device.
9. A system for measuring lung function parameters based on voice signals, comprising:
a signal acquisition module, configured to collect a user pronunciation signal using a microphone of the smart device as the signal receiving end;
a signal processing module, configured to segment and pre-process the pronunciation signal to determine the start and end points of a single pronunciation signal;
a feature extraction module, configured to calculate a formant frequency sequence of the single pronunciation signal, obtain a plurality of time sequence formants at different frequencies, and extract the related statistical features of the time sequence formants;
a model prediction module, configured to input the related statistical features of the time sequence formants into a trained machine learning model to obtain a lung function parameter prediction result, the lung function parameters including the FEV1/FVC ratio;
a result feedback module, configured to display the lung function prediction result and related suggestions in an app on the smart device.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN202211328300.3A 2022-10-26 2022-10-26 Lung function parameter measuring method and system based on voice signals Pending CN115644848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211328300.3A CN115644848A (en) 2022-10-26 2022-10-26 Lung function parameter measuring method and system based on voice signals


Publications (1)

Publication Number Publication Date
CN115644848A true CN115644848A (en) 2023-01-31

Family

ID=84993362



Similar Documents

Publication Publication Date Title
CN107622797B (en) Body condition determining system and method based on sound
JP5708155B2 (en) Speaker state detecting device, speaker state detecting method, and computer program for detecting speaker state
AU2019356224B2 (en) Estimating lung volume by speech analysis
US20220007964A1 (en) Apparatus and method for detection of breathing abnormalities
Kapoor et al. Parkinson’s disease diagnosis using Mel-frequency cepstral coefficients and vector quantization
Usman et al. Heart rate detection and classification from speech spectral features using machine learning
Selvakumari et al. A voice activity detector using SVM and Naïve Bayes classification algorithm
Touahria et al. Discrete Wavelet based Features for PCG Signal Classification using Hidden Markov Models.
CN106782616A (en) A kind of method that respiratory tract is detected by voice analysis
Schultz et al. A tutorial review on clinical acoustic markers in speech science
US20220409063A1 (en) Diagnosis of medical conditions using voice recordings and auscultation
CN115644848A (en) Lung function parameter measuring method and system based on voice signals
Sengupta et al. Optimization of cepstral features for robust lung sound classification
JP2012024527A (en) Device for determining proficiency level of abdominal breathing
WO2021132289A1 (en) Pathological condition analysis system, pathological condition analysis device, pathological condition analysis method, and pathological condition analysis program
Singh et al. IIIT-S CSSD: A cough speech sounds database
US11918346B2 (en) Methods and systems for pulmonary condition assessment
CN116723793A (en) Automatic physiological and pathological assessment based on speech analysis
CN115670434A (en) Voice signal-based chronic obstructive pulmonary disease diagnosis method and system
JP6782940B2 (en) Tongue position / tongue habit judgment device, tongue position / tongue habit judgment method and program
WO2023233667A1 (en) Information processing device, information processing method, information processing system, and information processing program
EP4360548A1 (en) Diagnosis of some diseases with sound frequencies
CN114863951B (en) Rapid dysarthria detection method based on modal decomposition
Jičínský et al. Speech Processing in Diagnosis of Vocal Chords Diseases
Krishna et al. Continuous Speech Recognition using EEG and Video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination