CN107622797B - Body condition determining system and method based on sound - Google Patents

Body condition determining system and method based on sound

Info

Publication number
CN107622797B
CN107622797B (application CN201710877854.1A)
Authority
CN
China
Prior art keywords
sound
disease
database
data
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710877854.1A
Other languages
Chinese (zh)
Other versions
CN107622797A (en)
Inventor
李涵之
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201710877854.1A priority Critical patent/CN107622797B/en
Publication of CN107622797A publication Critical patent/CN107622797A/en
Application granted granted Critical
Publication of CN107622797B publication Critical patent/CN107622797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a sound-based body condition determination system and method. The body condition determination system includes: a sound acquisition module for acquiring sound information of a person to be tested; a noise reduction module for performing noise reduction on the sound information to obtain noise-reduced sound; an analysis module for analyzing the noise-reduced sound information to obtain sound characteristics; an integration module for integrating the sound characteristics, environmental data and physical sign data into a sound data group; a database establishing module for establishing a database of the person to be tested and a disease database; a first judging module for judging the similarity between the sound data group and the sound data groups of the database of the person to be tested; and a second judging module for judging the similarity between the sound data group and the sound data groups of the disease database, so as to determine the disease type of the person to be tested. The method and system provided by the invention realize remote, non-contact intelligent diagnosis and assist existing diagnostic means in reducing the probability of misdiagnosis.

Description

Body condition determining system and method based on sound
Technical Field
The present invention relates to the field of physical condition recognition, and more particularly, to a system and method for determining a physical condition based on sound.
Background
Observation and inquiry are the traditional methods of diagnosing disease and the basic means of grasping the overall state of a person to be tested. A body condition determination system based on sound characteristics comprehensively uses speech recognition, pattern matching, information technology, wearable devices and related technologies, and its starting point is likewise a simulation of traditional inquiry. Modern medical detection instruments that assist observation and inquiry are now abundant; among them, speech recognition technology is widely applied in the medical field, and speech recognition technology in the prior art mainly falls into three categories:
(1) the first is widely used organ auscultation devices, such as the stethoscope, which only involve the acquisition of body sounds for the doctor;
(2) the second is speech recording systems that help a doctor record or select text, which only involve speech-to-text conversion;
(3) the third is medical consultation systems based on question-and-answer software, such as medical robots, which only involve knowledge-based human-computer question answering.
All three types of speech recognition technology still rely on the doctor's experience to recognize the physical condition of the person being examined, which on one hand wastes human resources and on the other hand, because of the doctor's subjectivity, increases the diagnosis time and the probability of misdiagnosis.
Disclosure of Invention
The invention aims to provide a body condition determining system and method based on sound so as to solve the problems of waste of human resources, long diagnosis time and high misdiagnosis probability.
In order to achieve the purpose, the invention provides the following scheme:
a sound-based physical condition determination system, comprising:
the voice acquisition module is used for acquiring voice information of a person to be detected;
the noise reduction module is used for carrying out noise reduction processing on the sound information to obtain noise reduction sound;
the analysis module is used for analyzing the noise reduction sound information to obtain sound characteristics; the sound characteristics include sound speed, timbre, tone, and loudness;
the integrated module is used for integrating a plurality of sound characteristics, environment data and sign data into a sound data group; the environmental data comprises the geographical position, the climate, the temperature and the humidity of the person to be detected; the physical sign data comprises the body temperature, brain waves, heartbeats and blood pressure of the person to be measured;
the database establishing module is used for establishing a database of a person to be tested and a disease database; the database of the testee is established according to the sound data sets of the testee under different diseases; the disease database is established according to sound data sets of a plurality of disease patients under different diseases;
the first judgment module is used for judging whether the similarity between the sound data set and the sound data set of the database of the person to be detected is within a first preset similarity range or not to obtain a first judgment result;
a first determining module, configured to determine a disease type of the subject if the first determination result indicates that the similarity between the sound data set and the sound data set of the subject database is within the first preset similarity range;
a second judging module, configured to, if the first judgment result indicates that the similarity between the sound data set and the sound data set of the database of the person to be tested is not within the first preset similarity range, judge whether the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range, and obtain a second judgment result;
and the second determining module is used for determining the disease type of the person to be detected if the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range.
Optionally, the parsing module specifically includes a parsing unit;
and the analysis unit is used for analyzing the noise reduction sound information by adopting a voice spectrum analysis function to obtain sound characteristics.
Optionally, the database establishing module specifically includes:
the sound characteristic integration unit is used for integrating the sound characteristics of the patient under the disease;
an average sound feature calculation unit configured to calculate an average sound feature from the sound features; the average sound features comprise an average value sequence of sound velocity, an average value sequence of tone color, an average value sequence of tone and an average value sequence of loudness;
the environment data and sign data acquisition unit is used for acquiring the environment data and sign data of the person to be measured;
the voice data set building unit is used for building a disease voice data set corresponding to each disease voice characteristic according to the average voice characteristic, the environment data and the physical sign data;
the testee database establishing unit is used for establishing the database of the testee; the database of the testee is established according to a plurality of sound data sets of the testee under different diseases.
Optionally, the database establishing module specifically includes:
the disease sound characteristic integration module is used for integrating a plurality of disease sound characteristics;
the average disease sound characteristic calculation module is used for calculating average disease sound characteristics according to the plurality of disease sound characteristics; the average disease sound features comprise an average value sequence of sound velocity, an average value sequence of tone color, an average value sequence of tone and an average value sequence of loudness;
the disease data acquisition module is used for acquiring environmental data and disease sign data of a plurality of disease personnel;
the to-be-detected disease sound data group building module is used for building a disease sound data group corresponding to each disease sound characteristic according to the average disease sound characteristic, the environmental data of the disease personnel and the disease sign data;
and the disease database establishing module is used for establishing a disease database according to the disease sound data set.
Optionally, the physical status determination system further comprises:
a subject database updating subunit, configured to store the sound data group for determining the disease type of the subject into the subject database to update the subject database;
alternatively,
and the disease database updating subunit is used for storing the sound data group for determining the disease type of the testee into the disease database so as to update the disease database.
A sound-based physical condition determination method, comprising:
collecting sound information of a person to be detected;
carrying out noise reduction processing on the sound information to obtain noise reduction sound;
analyzing the noise reduction sound information to obtain sound characteristics; the sound characteristics include sound speed, timbre, tone, and loudness;
integrating a plurality of sound characteristics, environment data and physical sign data into a sound data group; the environmental data comprises the geographical position, the climate, the temperature and the humidity of the person to be detected; the physical sign data comprises the body temperature, brain waves, heartbeats and blood pressure of the person to be measured;
establishing a database of the testee and a disease database; the database of the testee is established according to the sound data sets of the testee under different diseases; the disease database is established according to sound data sets of a plurality of disease patients under different diseases;
judging whether the similarity between the sound data set and the sound data set of the database of the person to be tested is within a first preset similarity range or not to obtain a first judgment result;
if the first judgment result shows that the similarity between the sound data set and the sound data set of the database of the person to be tested is within the first preset similarity range, determining the disease type of the person to be tested;
if the first judgment result shows that the similarity between the sound data set and the sound data set of the database of the person to be tested is not within the first preset similarity range, judging whether the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range or not, and obtaining a second judgment result;
and if the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range, determining the disease type of the person to be detected.
Optionally, the analyzing the noise reduction sound information to obtain sound characteristics specifically includes:
and analyzing the noise reduction sound information by using an audio spectrum analysis function to obtain sound characteristics.
Optionally, the establishing a subject database and a disease database specifically includes:
integrating the sound characteristics of the disease of the person to be tested;
calculating an average sound characteristic according to the sound characteristic; the average sound features comprise an average value sequence of sound velocity, an average value sequence of tone color, an average value sequence of tone and an average value sequence of loudness;
acquiring environmental data and physical sign data of the person to be measured;
according to the average sound characteristics, the environment data and the physical sign data, constructing a disease sound data group corresponding to each disease sound characteristic;
establishing a database of the testee; the database of the testee is established according to a plurality of sound data sets of the testee under different diseases.
Optionally, the establishing a subject database and a disease database specifically includes:
integrating a plurality of disease sound characteristics;
calculating an average disease sound characteristic from a plurality of the disease sound characteristics; the average disease sound features comprise an average value sequence of sound velocity, an average value sequence of tone color, an average value sequence of tone and an average value sequence of loudness;
acquiring environmental data and disease sign data of a plurality of disease personnel;
according to the average disease sound characteristics, the environmental data of the disease personnel and the disease sign data, constructing a disease sound data group corresponding to each disease sound characteristic;
and establishing a disease database according to the disease sound data set.
Optionally, after determining the disease type of the subject, the method further includes:
storing the sound data group for determining the disease type of the testee into the testee database so as to update the testee database;
alternatively,
storing the sound data set for determining the disease type of the subject to the disease database to update the disease database.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects: the invention provides a body condition determining system and method based on sound, which determine the current body condition of a person to be tested by analyzing sound information sent by the person to be tested and performing similarity calculation with sound data groups in an existing database and a disease database of the person to be tested, and do not need to manually perform operations such as inquiry or pulse feeling to determine the body condition of the person to be tested, thereby reducing the waste of human resources, greatly shortening the diagnosis time and greatly reducing the possibility of misdiagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a block diagram of a determination system provided by the present invention;
FIG. 2 is a schematic diagram of an interface of a mobile software system provided by the present invention;
fig. 3 is a flowchart of a physical condition determination method provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a body condition determining system and method based on sound, which can reduce the waste of human resources, shorten the diagnosis time and reduce the misdiagnosis probability.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a block diagram of a determination system provided in the present invention, and as shown in fig. 1, a sound-based physical condition determination system includes:
the voice acquisition module 101 is used for acquiring voice information of a person to be detected;
the noise reduction module 102 is configured to perform noise reduction processing on the sound information to obtain noise reduction sound;
The analysis module 103 is configured to analyze the noise-reduced sound information to obtain sound characteristics; the sound characteristics include sound speed, timbre, tone, and loudness. The analysis module specifically comprises an analysis unit, which is used for analyzing the noise-reduced sound information by means of a speech spectrum analysis function to obtain the sound characteristics. The sound characteristics are obtained from the voiceprint of the sound; the voiceprint is a spectrogram, from which the sound speed, tone and loudness can be extracted as object variables. The sound speed, timbre, tone and loudness are each a sequence of values over the n frames of the acquired sound segment; for example, if the segment contains 100 frames, the sound speed, timbre, tone and loudness parameters are each a sequence of 100 values.
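The patent does not give an implementation of this feature extraction. Purely as an illustration of the kind of per-frame feature sequences the analysis module produces, the following sketch frames a mono 16 kHz PCM signal and computes frame-wise loudness (RMS), a pitch estimate and a spectral-centroid timbre proxy with NumPy; all function names, frame sizes and thresholds are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=200):
    """Split a mono signal into overlapping frames (25 ms frames with a
    12.5 ms hop at 16 kHz); assumes the clip is at least one frame long."""
    n = (len(x) - frame_len) // hop + 1
    return np.stack([x[i * hop: i * hop + frame_len] for i in range(n)])

def frame_features(x, sr=16000, frame_len=400, hop=200):
    """Return per-frame loudness, pitch and timbre-proxy sequences, one value
    per frame, mirroring the 'sequence of n values' description above."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    # Loudness proxy: root-mean-square energy of each frame.
    loudness = np.sqrt(np.mean(frames ** 2, axis=1))
    # Pitch proxy: autocorrelation peak restricted to a 50-400 Hz lag range.
    lo, hi = sr // 400, sr // 50
    pitch = []
    for f in frames:
        ac = np.correlate(f, f, mode="full")[frame_len - 1:]
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitch.append(sr / lag if ac[lag] > 0 else 0.0)
    # Timbre proxy: spectral centroid of each frame's magnitude spectrum.
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
    centroid = (spec * freqs).sum(axis=1) / np.maximum(spec.sum(axis=1), 1e-12)
    return loudness, np.array(pitch), centroid
```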
An integration module 104, configured to integrate the plurality of sound characteristics, the environmental data, and the physical sign data into a sound data set; the environmental data comprises the geographical position, the climate, the temperature and the humidity of the person to be detected; the physical sign data comprises the body temperature, brain waves, heartbeats and blood pressure of the person to be measured;
a database establishing module 105, configured to establish a database of a subject to be tested and a disease database; the database of the testee is established according to the sound data sets of the testee under different diseases; the disease database is established according to sound data sets of a plurality of disease patients under different diseases; the database establishing module specifically comprises: the sound characteristic integration unit is used for integrating the sound characteristics of the patient under the disease; an average sound feature calculation unit configured to calculate an average sound feature from the sound features; the average sound features comprise an average value sequence of sound velocity, an average value sequence of tone color, an average value sequence of tone and an average value sequence of loudness; the environment data and sign data acquisition unit is used for acquiring the environment data and sign data of the person to be measured; the voice data set building unit is used for building a disease voice data set corresponding to each disease voice characteristic according to the average voice characteristic, the environment data and the physical sign data; the device comprises a data base building unit for the device to be tested, a data base building unit for the device to be tested and a data base building unit for the device to be tested; the database of the testee is established according to a plurality of sound data sets of the testee under different diseases. The database establishing module specifically comprises: the disease sound characteristic integration module is used for integrating a plurality of disease sound characteristics; the average disease sound characteristic calculation module is used for calculating average disease sound characteristics according to the plurality of disease sound characteristics; the average disease sound features comprise an average value sequence of sound velocity, an average value sequence of tone color, an average value sequence of tone and an average value sequence of loudness; the disease data acquisition module is used for acquiring environmental data and disease sign data of a plurality of disease personnel; the to-be-detected disease sound data group building module is used for building a disease sound data group corresponding to each disease sound characteristic according to the average disease sound characteristic, the environmental data of the disease personnel and the disease sign data; and the disease database establishing module is used for establishing a disease database according to the disease sound data set.
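The database establishing module described above averages the sound features of several recordings under the same disease into mean-value sequences. A minimal sketch of that averaging step is given below, assuming every recording has already been reduced to equal-length per-frame sequences; the field names 'speed', 'timbre', 'tone' and 'loudness' are illustrative, not taken from the patent.

```python
import numpy as np

def average_sound_feature(feature_sequences):
    """Average several equal-length per-frame feature sequences into one
    mean-value sequence (one entry per frame), as the database-building
    units describe for speed, timbre, tone and loudness."""
    stacked = np.stack(feature_sequences)   # shape: (recordings, frames)
    return stacked.mean(axis=0)             # shape: (frames,)

def build_disease_profile(recordings):
    """recordings: list of dicts holding 'speed', 'timbre', 'tone' and
    'loudness' sequences for one disease; returns the averaged profile."""
    return {key: average_sound_feature([r[key] for r in recordings])
            for key in ("speed", "timbre", "tone", "loudness")}
```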
The first judging module 106 is configured to judge whether the similarity between the sound data set and the sound data set of the database of the person to be measured is within a first preset similarity range, so as to obtain a first judgment result;
a first determining module 107, configured to determine a disease type of the subject if the first determination result indicates that the similarity between the sound data set and the sound data set of the subject database is within the first preset similarity range;
a second determining module 108, configured to determine whether the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range if the first determining result indicates that the similarity between the sound data set and the sound data set of the database of the subject is not within the first preset similarity range, so as to obtain a second determining result;
a second determining module 109, configured to determine a disease type of the subject if the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range.
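The first/second judging and determining modules amount to a two-stage threshold test: the database of the person to be tested is consulted first, and the general disease database is used only as a fallback. A sketch of that control flow follows; the similarity function, database layout and threshold ranges are placeholders, not values specified in the patent.

```python
def diagnose(sound_group, subject_db, disease_db, similarity,
             first_range=(0.9, 1.0), second_range=(0.8, 1.0)):
    """Two-stage matching: the database of the person to be tested is checked
    first; only if no entry falls inside the first similarity range is the
    disease database used. Returns the matched disease label, or None
    (interpreted as 'healthy')."""
    def best_match(db, lo, hi):
        scored = [(similarity(sound_group, entry["group"]), entry["disease"])
                  for entry in db]
        scored = [(s, d) for s, d in scored if lo <= s <= hi]
        return max(scored)[1] if scored else None

    hit = best_match(subject_db, *first_range)   # first judging/determining module
    if hit is not None:
        return hit
    return best_match(disease_db, *second_range)  # second judging/determining module
```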
The invention arranges the voiceprint of each frame of voice and the values of sound velocity, tone and loudness according to a time sequence, and uses the values as a combination to express the collected voice characteristics; the voiceprint can express a fundamental tone frequency spectrum and an envelope, the energy of a fundamental tone frame, the occurrence frequency of a fundamental tone formant and a track thereof; sound speed, timbre, tone and loudness are the fundamental elements of sound; the sound characteristics are related to the vocal organs of human (or animal), and physiological, pathological, psychological or environmental factors can cause the vocal organs such as tongue, teeth, larynx, lung and nasal cavity to change, so that the sound characteristics are different; under health or disease, even if the voice is not directly related to the vocal organs, the corresponding voice characteristics can be different; the voice characteristics of the person to be tested are identified through the voice of the person to be tested, the voice characteristics are matched with the corresponding voice characteristics in the database of the person to be tested and the disease database, the health or the disease of the person to be tested is intelligently judged, and the environment data and the physical sign data are combined for confirmation, elimination or modification.
In practical applications, the basic flow of the disclosed sound-based physical condition determination system is as follows:
(1) first, sound is collected through a sound-collection hardware terminal and sound characteristics are generated; the collection device may be built into the body condition determination system or externally connected, and the collected content may be a chief complaint about the physical condition, pre-designed content corresponding to a certain disease, randomly recorded or monitored speech, or speech already stored by the person to be tested in an electronic file;
(2) the health or disease data of a plurality of persons to be tested are integrated: values such as voiceprint, timbre, tone and loudness are averaged, the sign data are averaged, and a similarity variable is added to express how similar different persons to be tested are in the same health state; the sound arrays corresponding to the different health or disease states of different persons are thus expressed, forming a database of the person to be tested or a disease database;
(3) intelligent diagnosis based on the database of the person to be tested has two basic routes: one is to use the database of the person to be tested as a reference pattern library, take the sound characteristics corresponding to a specific health state or disease as the recognition target and reference pattern, perform pattern recognition of voiceprint, tone and loudness on the collected sound characteristics of the person to be tested, and use the recognition result as the basis of diagnosis; the other is to first obtain sound characteristics such as voiceprint, timbre, tone and loudness from the collected sound of the person to be tested, then perform similarity matching between these characteristics and the sound arrays in the database of the person to be tested, and integrate the matching results to judge whether the person is healthy or ill;
(4) to improve diagnostic accuracy and precision, the environmental data and the physical sign data of the person to be tested at the time of speaking, such as temperature, brain waves, blood pressure and heartbeat, are integrated for further correlation analysis, and the judgment result is excluded, confirmed or corrected accordingly;
(5) to improve diagnostic accuracy, efficiency and precision, diagnosis-oriented pattern-recognition association rules are constructed to form a diagnostic standard for health or disease;
(6) the sound-based body condition determination system provided by the invention may, according to actual requirements, construct the database of the person to be tested using only one or more of the sound characteristics (voiceprint, tone or loudness), the choice depending on how the sound characteristics relate to the physical condition to be determined;
(7) the environmental data and physical sign data involved in the sound-based body condition determination system provided by the invention are not essential and may be configured according to actual requirements;
(8) the sound-based body condition determination system may acquire the sound data of the person to be tested while healthy and compare it with the databases to obtain the determination result; that is, the system disclosed by the invention may be based on the database of the person to be tested alone, on the disease database alone, or on both databases;
(9) when analyzing the sound characteristics to obtain the voiceprint, the sound-based body condition determination system provided by the invention typically uses template matching, nearest-neighbor, neural-network and VQ clustering methods; user classification algorithms such as decision-tree classification, Bayesian-network classification, nearest-neighbor classification, rule induction and neural networks; and user cluster-analysis methods such as partition-based, hierarchical, density-based, grid-based and model-based methods (a minimal template-matching sketch based on dynamic time warping is given after this list).
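As referenced in item (9), one assumed realization of the template-matching route is a dynamic-time-warping (DTW) distance between a collected per-frame feature sequence and a reference pattern from the database of the person to be tested; the sketch below is a standard DTW implementation offered only as an illustration, not code from the patent.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature sequences,
    e.g. the per-frame tone curves of a collected sample and a reference
    pattern; smaller values indicate a closer match."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```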
The invention judges the health or disease of the person to be tested by collecting and analyzing the sound of the person to be tested and assisting the physical sign data and the environment data of the person to be tested acquired in other modes, thereby greatly reducing the misdiagnosis probability.
The invention can judge the disease types such as health, cold, pharyngitis, pneumonia and the like through the disease database; the degree of the disease of the person to be tested, such as mild degree, severe degree, initial degree, the degree of healing and the like, can be judged, and the person to be tested can take the determined result and directly give the result to a doctor for further processing, so that a large amount of human resources are saved, and the waiting time and the diagnosis time are shortened.
In practical applications, the physical status determination system further comprises:
a subject database updating subunit, configured to store the sound data group for determining the disease type of the subject into the subject database to update the subject database;
alternatively,
and the disease database updating subunit is used for storing the sound data group for determining the disease type of the testee into the disease database so as to update the disease database.
For the preprocessed voice sample, the invention adopts a voice spectrum analysis function, comprehensively uses a hidden Markov model or a Gaussian mixture model and algorithms of dynamic time warping, vector quantization and a support vector machine to analyze the voiceprint, tone, timbre and loudness, and stores the voiceprint, the tone, the timbre and the loudness in the following ten-element array form, namely:
EPn=(Hn,Sn,Dn,Vn,An,Fn,Tn,Vn',Cn,Pn)
wherein EPn represents the health sound characteristic of the person to be tested with serial number n; the ten array elements are expressed as entity objects, vector objects or data classes. Hn is the health type, of character data type; Sn is a health state variable, of character data type; Dn is a sound characterization of character data type; An represents a voiceprint object, i.e. a spectrogram of the voice of the person to be tested expressed as an imaged speech signal, whose horizontal axis represents time, whose vertical axis represents frequency, and in which the amplitude of the voice at each frequency point is distinguished by color; Vn is a sound speed variable, Fn is a tone variable, Tn is a timbre variable, Vn' is a loudness variable, Cn is an environmental data vector, and Pn is a sign data vector; the sign data describe the personalized data of the person to be tested that forms the voiceprint combination, including physiological data from a physical examination, medical record or wearable device.
And constructing a database of the testee. Integrating a database of a person to be tested or a disease database of a plurality of persons to be tested, averaging values such as voiceprints, tone, loudness and the like, averaging sign data, increasing similarity variables of the persons to be tested to express the similarity of different persons to be tested in the same health state n to form an eleven-element array, expressing sound characteristic arrays corresponding to different health and disease types of different persons to be tested, and forming the database of the person to be tested or the disease database;
GPn=(Hn,Sn,Dn,An,Vn,Fn,Tn,Vn',Cn,Pn,Bn)
wherein GPn represents a generic health sound characteristic; the eleven array elements are expressed as entity objects, vector objects or data classes. Hn is the health type, of character data type; Sn is a health state variable, of character data type; Dn is a voice characterization of character or attribute data type; An represents a voiceprint object, i.e. a spectrogram of the voice of the person to be tested expressed as an imaged speech signal, whose horizontal axis represents time, whose vertical axis represents frequency, and in which the amplitude of the voice at each frequency point is distinguished by color; Vn is the sound speed variable, Fn is the tone variable, Tn is the timbre variable, Vn' is the loudness variable, Cn is the environmental data vector, and Pn is the sign data vector, including physiological data from a physical examination, medical record or wearable device. Bn is the similarity, expressing how similar different persons to be tested are in the same health state. The similarity is calculated as the cosine similarity of the two feature-value sets, with a confidence coefficient introduced to improve the reliability of the similarity; that is, the similarity between users a and b can be calculated by the following formula, wherein H(a) is the set of feature values of user a, H(b) is the set of feature values of user b, and Ca is the confidence coefficient.
sim(a, b) = Ca · (H(a) · H(b)) / (‖H(a)‖ · ‖H(b)‖)
The invention establishes an index database based on the ten-element array and the eleven-element array, and judges the health or the disease of the person to be tested.
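Purely as an illustration (not part of the patent), the ten-element and eleven-element records and the confidence-weighted cosine similarity described above might be represented in code as follows; the field names mirror the symbols Hn, Sn, Dn, An, Vn, Fn, Tn, Vn', Cn, Pn and Bn, and everything else is an assumption.

```python
from dataclasses import dataclass
from typing import Sequence
import numpy as np

@dataclass
class HealthSoundRecord:
    """Rough analogue of the EPn / GPn arrays described above."""
    health_type: str                 # Hn
    health_state: str                # Sn
    description: str                 # Dn
    voiceprint: np.ndarray           # An, spectrogram (time x frequency)
    speed: Sequence[float]           # Vn
    tone: Sequence[float]            # Fn
    timbre: Sequence[float]          # Tn
    loudness: Sequence[float]        # Vn'
    environment: Sequence[float]     # Cn
    signs: Sequence[float]           # Pn
    similarity: float = 0.0          # Bn (used only in the eleven-element GPn form)

def confidence_cosine(h_a, h_b, ca=1.0):
    """Cosine similarity of two feature-value sets, scaled by a confidence
    coefficient Ca, following the description of the formula above."""
    h_a, h_b = np.asarray(h_a, float), np.asarray(h_b, float)
    denom = np.linalg.norm(h_a) * np.linalg.norm(h_b)
    return 0.0 if denom == 0 else ca * float(h_a @ h_b) / denom
```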
The establishment of the database of the person to be tested or the disease database also takes sex, age, region, accent and occupation into account. Meanwhile, the database of the person to be tested also considers the factors influencing the sound, such as sound content, acquisition equipment, transmission channels, environmental noise, recording playback, time span, sampling duration and the like.
Besides direct matching based on the database of the person to be tested or the disease database, the method can also be used for analyzing each variable in the sound characteristics of the database of the person to be tested or the disease database:
for example, the method performs pattern analysis on states such as health, cold and pneumonia in the voice samples and constructs a diagnosis pattern oriented to a specific disease type or state, which is used for intelligent diagnosis of that specific health state or disease type and improves efficiency and pertinence;
alternatively, for a specific health state or illness, such as the late stage of influenza, speech rate, amplitude energy, pitch frequency and period, formant frequency and the like can be extracted from the related sound characteristics such as voiceprint, timbre and tone to form influenza-oriented features. The change in speech rate caused by influenza is analyzed and compared; silent segments are also a component of the illness and should be extracted for analysis. The amplitude energy of the speech signal is obtained mainly by extracting the amplitude of each frame and analyzing how it changes over time; to avoid the influence of silence and noise, only the mean of the absolute values of frames whose average amplitude exceeds a threshold is analyzed. The pitch frequency is the vocal-cord vibration frequency, and the pitch period is the period of the pitch frequency, i.e. the reciprocal of the vocal-cord vibration frequency. The formants are the resonance frequencies produced as voiced sound passes through the vocal tract, including the position and width of those frequencies; because different diseases affect the vocal tract differently, the positions of the formants also differ.
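The amplitude-energy step described here (averaging the absolute amplitude of each frame and only counting frames whose mean amplitude exceeds a threshold, so that silence and noise are excluded) can be sketched as follows; the threshold value is an assumption, not taken from the patent.

```python
import numpy as np

def mean_amplitude_energy(frames, threshold=0.02):
    """frames: 2-D array (n_frames x samples). Returns the per-frame mean
    absolute amplitude and the overall mean computed only over frames whose
    mean amplitude exceeds the threshold, excluding silence and low noise."""
    per_frame = np.mean(np.abs(frames), axis=1)      # average |amplitude| per frame
    voiced = per_frame[per_frame > threshold]
    return per_frame, (voiced.mean() if voiced.size else 0.0)
```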
As an embodiment of the invention, a mobile software system is designed on the basis of the body condition determination system provided by the invention. Fig. 2 is a schematic diagram of the interface of this mobile software system; as shown in Fig. 2, data such as the heart rate, blood pressure and blood lipids of the person to be tested can be clearly displayed;
(1) a general-purpose mobile device, namely a smartphone, is used as the on-body terminal for intelligent health and disease diagnosis; the 'color cloud angel' (the name of the mobile software system) is a mobile application composed of a mobile end and a server end, which respectively carry out the human-computer interaction and intelligent diagnosis tasks. The mobile end is an Android application: its presentation layer and the associated business logic are implemented on the AngularJS framework, while the underlying native components are implemented with the Android SDK and provide persistent storage of the healthy-sound characteristic data and the big data of the person to be tested as well as calls to system components; the presentation layer and the native components are connected by the Ionic framework. Under this architecture, the 'color cloud angel' can be ported to other platforms simply by replacing the code related to the Android SDK. The 'color cloud angel' server end consists of a business-logic layer and a data layer: the business-logic layer runs on the JVM and includes the intelligent diagnosis program and API interfaces, and the data layer mainly stores the healthy-sound characteristic data and the big data of the person to be tested.
(2) The 'color cloud angel' uses two human-computer interaction terminals, namely the display screen of the smartphone and an externally connected 5-inch large-screen liquid crystal display; it uses the sound-collection hardware components of the smartphone; for environmental and sign data it uses the weather and location applications carried by the smartphone together with external wearable devices such as the Fitbit Flex pedometer, the Jawbone UP tracker, the Wahoo Fitness heart-rate belt, Google Glass, the Guokr SmartWatch, the Apple iWatch, smart water cups, brain-wave instruments and weighing devices; the main intelligent-diagnosis program can call these local applications, operate on their input and output data, and operate the wearable devices through their interfaces.
(3) The color cloud angel adopts the sound recording application of a smart phone, and operates input and output data of the smart phone through an intelligent diagnosis main program;
(4) the 'color cloud angel' analyzes the timbre, tone and loudness based on general speech recognition software and the speech spectrum analysis functions of Matlab, and implements the voiceprint recognition program in a Java SDK environment of a voiceprint recognition development platform;
(5) the 'color cloud angel' adopts a relational database sqlite at a server end to store a database of a person to be tested and a disease database;
(6) the 'color cloud angel' develops the intelligent diagnosis program module as server-side software; based on the diagnostic standards and enhanced pattern-recognition rules of the database of the person to be tested or the disease database, it performs pattern matching on the recognized sound data group by retrieving the database of the person to be tested or the disease database, and gives a diagnosis result and a confidence level according to the degree of matching; the judgment result is further excluded, confirmed or corrected by calling the big data of the person to be tested.
(7) The color cloud angel operates a sound acquisition terminal and wearable equipment through a hardware interface, calls sound acquisition, voice print recognition enhancement and intelligent diagnosis to record, preprocess and recognize sound characteristics, calls an intelligent diagnosis module to match a sound data set to judge health or diseases, outputs the health or the diseases through a human-computer interaction terminal, and outputs the types of the diseases if a person to be detected is in the diseases.
Fig. 3 is a flowchart of a body condition determining method according to the present invention, and as shown in fig. 3, a method for determining a body condition based on sound includes:
step 301: collecting sound information of a person to be detected;
step 302: carrying out noise reduction processing on the sound information to obtain noise reduction sound;
step 303: analyzing the noise reduction sound information to obtain sound characteristics; the sound characteristics include sound speed, timbre, tone, and loudness; analyzing the noise reduction sound information by using an audio spectrum analysis function to obtain sound characteristics;
step 304: integrating a plurality of sound characteristics, environment data and physical sign data into a sound data group; the environmental data comprises the geographical position, the climate, the temperature and the humidity of the person to be detected; the physical sign data comprises the body temperature, brain waves, heartbeats and blood pressure of the person to be measured;
step 305: establishing a database of the testee and a disease database; the database of the testee is established according to the sound data sets of the testee under different diseases; the disease database is established according to sound data sets of a plurality of disease patients under different diseases; the establishing of the database of the testee specifically comprises the following steps: integrating the sound characteristics of the disease of the person to be tested; calculating an average sound characteristic according to the sound characteristics; the average sound features comprise an average value sequence of sound speed, an average value sequence of timbre, an average value sequence of tone and an average value sequence of loudness; acquiring environmental data and physical sign data of the person to be tested; constructing, according to the average sound characteristics, the environmental data and the physical sign data, a disease sound data group corresponding to each disease sound characteristic; and establishing the database of the testee, which is established according to a plurality of sound data sets of the testee under different diseases.
The establishing of the disease database specifically comprises the following steps: integrating a plurality of disease sound characteristics; calculating an average disease sound characteristic from a plurality of the disease sound characteristics; the average disease sound features comprise an average value sequence of sound velocity, an average value sequence of tone color, an average value sequence of tone and an average value sequence of loudness; acquiring environmental data and disease sign data of a plurality of disease personnel; according to the average disease sound characteristics, the environmental data of the disease personnel and the disease sign data, constructing a disease sound data group corresponding to each disease sound characteristic; and establishing a disease database according to the disease sound data set.
Step 306: judging whether the similarity between the sound data set and the sound data set of the database of the person to be tested is within a first preset similarity range, if so, executing a step 307, and if not, executing a step 308;
step 307: determining a disease type of the subject; and storing the sound data group used for determining the disease type of the subject into the subject database so as to update the subject database.
Step 308: judging whether the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range, if so, executing step 309, and if not, executing step 310;
step 309: determining a disease type of the subject; storing the sound data set for determining the disease type of the subject to the disease database to update the disease database;
step 310: determining that the subject is healthy.
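Tying steps 301 to 310 together, a skeletal pipeline might look like the sketch below; the three callables stand in for the noise-reduction, parsing and similarity-matching steps described above, and the database layout is an assumption, not code from the patent.

```python
def determine_condition(raw_audio, env_data, sign_data,
                        denoise, parse_features, match_database,
                        subject_db, disease_db):
    """Skeleton of steps 301-310: denoise, parse, integrate, two-stage match,
    then update whichever database produced the match."""
    denoised = denoise(raw_audio)                                         # step 302
    features = parse_features(denoised)                                   # step 303
    group = {"features": features, "env": env_data, "signs": sign_data}   # step 304
    disease = match_database(group, subject_db)                           # step 306
    if disease is not None:
        subject_db.append({"group": group, "disease": disease})           # step 307
        return disease
    disease = match_database(group, disease_db)                           # step 308
    if disease is not None:
        disease_db.append({"group": group, "disease": disease})           # step 309
        return disease
    return "healthy"                                                      # step 310
```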
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (5)

1. A sound-based physical condition determination system, comprising:
the voice acquisition module is used for acquiring voice information of a person to be detected;
the noise reduction module is used for carrying out noise reduction processing on the sound information to obtain noise reduction sound;
the analysis module is used for analyzing the noise reduction sound information to obtain sound characteristics; the sound characteristics include sound speed, timbre, tone, and loudness;
the integrated module is used for integrating a plurality of sound characteristics, environment data and sign data into a sound data group; the environmental data comprises the geographical position, the climate, the temperature and the humidity of the person to be detected; the physical sign data comprises the body temperature, brain waves, heartbeats and blood pressure of the person to be measured;
the database establishing module is used for establishing a database of a person to be tested and a disease database; the database of the testee is established according to the sound data sets of the testee under different diseases; the disease database is established according to sound data sets of a plurality of disease patients under different diseases;
for the preprocessed sound information, a speech spectrum analysis function is adopted, a hidden Markov model and a support vector machine algorithm are comprehensively applied, the voiceprint, the tone color and the loudness of the sound information are analyzed, and the voiceprint, the tone color and the loudness are stored in the form of the following ten-element array, namely:
EPn=(Hn,Sn,Dn,Vn,An,Fn,Tn,Vn’,Cn,Pn)
wherein, EPn represents the health sound characteristic of the person to be tested with the serial number n; the ten-element array elements are expressed as entity objects, vector objects or data classes; where Hn is the health type of the character type data type; Sn is a health state variable of a character type data type; Dn is a sound characterization of the character type data; An represents a voiceprint object, is a spectrogram of voice of a person to be detected, is represented by an imaged voice signal, the horizontal axis of the voiceprint object represents time, the vertical axis of the voiceprint object represents frequency, and the amplitude of the voice at each frequency point is distinguished by colors; Vn is a sound velocity variable, Fn is a tone variable, Tn is a timbre variable, Vn' is a loudness variable, Cn is an environment data vector, Pn is a sign data vector, and the sign data describes personalized data of a person to be tested forming a voiceprint combination, and comprises physiological data from a physical examination, a medical record or wearable equipment;
constructing a database of the testee: integrating a database of a person to be tested or a disease database of a plurality of persons to be tested, averaging values such as voiceprints, tone, loudness and the like, averaging sign data, increasing similarity variables of the persons to be tested to express the similarity of different persons to be tested in the same health state n to form an eleven-element array, expressing sound characteristic arrays corresponding to different health and disease types of different persons to be tested, and forming the database of the person to be tested or the disease database;
GPn=(Hn,Sn,Dn,An,Vn,Fn,Tn,Vn’,Cn,Pn,Bn)
wherein GPn represents a generic health sound signature; eleven-element array elements are expressed as entity objects, vector objects or data classes; where Hn is the health type of the character type data type; sn is a health state variable of a character type data type; dn is a voice characterization of a character-type or attribute-type data type; an represents a voiceprint object, is a spectrogram of the voice of the person to be detected, is represented by An imaged voice signal, the horizontal axis of the voiceprint object represents time, the vertical axis of the voiceprint object represents frequency, and the amplitude of the voice at each frequency point is distinguished by colors; vn is a sound speed variable, Fn is a pitch variable, Tn is a tone variable, Vn' is a loudness variable, Cn is an environmental data vector, Pn is a vital sign data vector, the vital sign data includes physiological data from a physical examination, medical record, or wearable device; bn is similarity, expressing the similarity of the same health type of different testees; the similarity adopts cosine similarity to calculate the similarity of the two, and a confidence coefficient is introduced to improve the probability of the similarity, namely the similarity between the users a and b can be calculated by the following formula, wherein H (a) represents the characteristic value set of the user a, H (b) represents the characteristic value set of the user b, and Ca is the confidence coefficient;
sim(a, b) = Ca · (H(a) · H(b)) / (‖H(a)‖ · ‖H(b)‖)
establishing an index database based on the ten-element array and the eleven-element array, and judging the health or the disease of the person to be tested;
the first judgment module is used for judging whether the similarity between the sound data set and the sound data set of the database of the person to be detected is within a first preset similarity range or not to obtain a first judgment result;
a first determining module, configured to determine a disease type of the subject if the first determination result indicates that the similarity between the sound data set and the sound data set of the subject database is within the first preset similarity range;
a second judging module, configured to, if the first judgment result indicates that the similarity between the sound data set and the sound data set of the database of the person to be tested is not within the first preset similarity range, judge whether the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range, and obtain a second judgment result;
and the second determining module is used for determining the disease type of the person to be detected if the similarity between the sound data set and the sound data set of the disease database is within a second preset similarity range.
2. The physical status determination system according to claim 1, wherein the parsing module comprises in particular a parsing unit;
and the analysis unit is used for analyzing the noise reduction sound information by adopting a voice spectrum analysis function to obtain sound characteristics.
3. The system for determining physical condition according to claim 1, wherein the database building module specifically comprises:
the sound characteristic integration unit is used for integrating the sound characteristics of the patient under the disease;
an average sound feature calculation unit configured to calculate an average sound feature from the sound features; the average sound features comprise an average value sequence of sound speed, an average value sequence of timbre, an average value sequence of tone and an average value sequence of loudness;
the environment data and sign data acquisition unit is used for acquiring the environment data and sign data of the person to be measured;
the voice data set building unit is used for building a disease voice data set corresponding to each disease voice characteristic according to the average voice characteristic, the environment data and the physical sign data;
the testee database establishing unit is used for establishing the database of the testee; the database of the testee is established according to a plurality of sound data sets of the testee under different diseases.
4. The system for determining physical condition according to claim 1, wherein the database building module specifically comprises:
the disease sound characteristic integration module is used for integrating a plurality of disease sound characteristics;
the average disease sound characteristic calculation module is used for calculating average disease sound characteristics according to the plurality of disease sound characteristics; the average disease sound features comprise a sound speed average value sequence, a timbre average value sequence, a tone average value sequence and a loudness average value sequence;
the disease data acquisition module is used for acquiring environmental data and disease sign data of a plurality of disease personnel;
the to-be-detected disease sound data group building module is used for building a disease sound data group corresponding to each disease sound characteristic according to the average disease sound characteristic, the environmental data of the disease personnel and the disease sign data;
and the disease database establishing module is used for establishing a disease database according to the disease sound data set.
5. The condition determining system according to claim 4, further comprising:
a subject database updating subunit, configured to store the sound data group for determining the disease type of the subject into the subject database to update the subject database;
alternatively,
and the disease database updating subunit is used for storing the sound data group for determining the disease type of the testee into the disease database so as to update the disease database.
CN201710877854.1A 2017-09-26 2017-09-26 Body condition determining system and method based on sound Active CN107622797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710877854.1A CN107622797B (en) 2017-09-26 2017-09-26 Body condition determining system and method based on sound

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710877854.1A CN107622797B (en) 2017-09-26 2017-09-26 Body condition determining system and method based on sound

Publications (2)

Publication Number Publication Date
CN107622797A CN107622797A (en) 2018-01-23
CN107622797B true CN107622797B (en) 2020-07-28

Family

ID=61090558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710877854.1A Active CN107622797B (en) 2017-09-26 2017-09-26 Body condition determining system and method based on sound

Country Status (1)

Country Link
CN (1) CN107622797B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6780182B2 (en) 2015-10-08 2020-11-04 コルディオ メディカル リミテッド Evaluation of lung disease by voice analysis
CN108553772A (en) * 2018-04-23 2018-09-21 苏州登阳信息技术有限公司 A kind of unmanned plane fire-fighting system based on Internet of Things
CN108962389A (en) * 2018-06-21 2018-12-07 上海掌门科技有限公司 Method and system for indicating risk
CN108937866B (en) * 2018-06-29 2020-03-20 出门问问信息科技有限公司 Sleep state monitoring method and device
US10847177B2 (en) 2018-10-11 2020-11-24 Cordio Medical Ltd. Estimating lung volume by speech analysis
US11024327B2 (en) 2019-03-12 2021-06-01 Cordio Medical Ltd. Diagnostic techniques based on speech models
US11011188B2 (en) 2019-03-12 2021-05-18 Cordio Medical Ltd. Diagnostic techniques based on speech-sample alignment
US11484211B2 (en) 2020-03-03 2022-11-01 Cordio Medical Ltd. Diagnosis of medical conditions using voice recordings and auscultation
CN111599463B (en) * 2020-05-09 2023-07-14 吾征智能技术(北京)有限公司 Intelligent auxiliary diagnosis system based on sound cognition model
US11417342B2 (en) 2020-06-29 2022-08-16 Cordio Medical Ltd. Synthesizing patient-specific speech models
CN111831836A (en) * 2020-07-10 2020-10-27 深圳鑫想科技有限责任公司 Method and intelligent system for simulating interaction between mother and infant
CN112201351A (en) * 2020-09-04 2021-01-08 广东科学技术职业学院 Method, device and medium for health prompt based on sound collection and analysis
CN113948109B (en) * 2021-10-14 2023-03-28 广州蓝仕威克软件开发有限公司 System for recognizing physiological phenomenon based on voice
CN116344023B (en) * 2023-03-17 2023-11-14 浙江普康智慧养老产业科技有限公司 Remote monitoring system based on wisdom endowment medical treatment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428347A (en) * 2012-05-17 2013-12-04 上海闻泰电子科技有限公司 Method of mobile phone user identification system
CN104382570A (en) * 2014-12-16 2015-03-04 张剑 Digitized full-automatic health condition detection device
CN105596016A (en) * 2015-12-23 2016-05-25 王嘉宇 Human body psychological and physical health monitoring and managing device and method
CN105810213A (en) * 2014-12-30 2016-07-27 浙江大华技术股份有限公司 Typical abnormal sound detection method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170086778A1 (en) * 2015-09-29 2017-03-30 International Business Machines Corporation Capture and analysis of body sounds

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428347A (en) * 2012-05-17 2013-12-04 上海闻泰电子科技有限公司 Method of mobile phone user identification system
CN104382570A (en) * 2014-12-16 2015-03-04 张剑 Digitized full-automatic health condition detection device
CN105810213A (en) * 2014-12-30 2016-07-27 浙江大华技术股份有限公司 Typical abnormal sound detection method and device
CN105596016A (en) * 2015-12-23 2016-05-25 王嘉宇 Human body psychological and physical health monitoring and managing device and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"嗓音分析在疾病诊断中的应用";彭策;《生物医学工程学杂质》;20071231;第24卷(第6期);第1-2节 *

Also Published As

Publication number Publication date
CN107622797A (en) 2018-01-23

Similar Documents

Publication Publication Date Title
CN107622797B (en) Body condition determining system and method based on sound
US10010288B2 (en) Screening for neurological disease using speech articulation characteristics
US6480826B2 (en) System and method for a telephonic emotion detection that provides operator feedback
US6697457B2 (en) Voice messaging system that organizes voice messages based on detected emotion
US6353810B1 (en) System, method and article of manufacture for an emotion detection system improving emotion recognition
US6427137B2 (en) System, method and article of manufacture for a voice analysis system that detects nervousness for preventing fraud
CN108806720B (en) Microphone, data processor, monitoring system and monitoring method
CN110494916A (en) Oral regular screening for heart disease
US20080045805A1 (en) Method and System of Indicating a Condition of an Individual
AU2013274940B2 (en) Cepstral separation difference
Reggiannini et al. A flexible analysis tool for the quantitative acoustic assessment of infant cry
CN113317763A (en) Multi-modal Parkinson's disease detection device and computer-readable storage medium
Costantini et al. Deep learning and machine learning-based voice analysis for the detection of COVID-19: A proposal and comparison of architectures
KR20080040803A (en) Method, apparatus, and system for diagnosing health status of mobile terminal users
Usman et al. Heart rate detection and classification from speech spectral features using machine learning
Bugdol et al. Prediction of menarcheal status of girls using voice features
CN108766462B (en) Voice signal feature learning method based on Mel frequency spectrum first-order derivative
Al-Dhief et al. Dysphonia detection based on voice signals using naive bayes classifier
CN108601567A (en) Estimation method, estimating program, estimating unit and hypothetical system
JP2022145373A (en) Voice diagnosis system
Dubey et al. Sinusoidal model-based hypernasality detection in cleft palate speech using CVCV sequence
RU2559689C2 (en) Method of determining risk of development of individual's disease by their voice and hardware-software complex for method realisation
Vatanparvar et al. Speechspiro: Lung function assessment from speech pattern as an alternative to spirometry for mobile health tracking
JP7307507B2 (en) Pathological condition analysis system, pathological condition analyzer, pathological condition analysis method, and pathological condition analysis program
JP2023517175A (en) Diagnosing medical conditions using voice recordings and listening to sounds from the body

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant