CN117393156A - Multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing - Google Patents


Info

Publication number
CN117393156A
Authority
CN
China
Prior art keywords
data
pathology
patient
auscultation
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311700401.3A
Other languages
Chinese (zh)
Other versions
CN117393156B (en)
Inventor
张帅军 (Zhang Shuaijun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Haorui Technology Co ltd
Original Assignee
Zhuhai Haorui Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Haorui Technology Co ltd filed Critical Zhuhai Haorui Technology Co ltd
Priority to CN202311700401.3A priority Critical patent/CN117393156B/en
Publication of CN117393156A publication Critical patent/CN117393156A/en
Application granted granted Critical
Publication of CN117393156B publication Critical patent/CN117393156B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 — ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 — ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • G — PHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 — ICT specially adapted for the handling or processing of medical images
    • G16H30/20 — ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention belongs to the technical field of telemedicine and discloses a multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing. A patient end and a doctor end are connected through a cloud platform; m groups of historical patient medical data are collected, comprising auscultation digital signals, patient pathology images, patient sounds, and electronic medical records. The cloud platform processes the auscultation digital signals and patient sounds to obtain training data. The patient end collects first pathology data corresponding to the training data, second pathology data corresponding to the patient pathology images, and first disease data corresponding to the electronic medical records. The cloud platform trains an auxiliary analysis model for predicting the first pathology data based on the training data, an auxiliary recognition model for recognizing the second pathology data based on the patient pathology images, and a medical history extraction model for extracting the first disease data based on the electronic medical records. The system realizes intelligent auxiliary diagnosis and improves the accuracy and speed of diagnosis.

Description

Multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing
Technical Field
The invention relates to the technical field of telemedicine, in particular to a multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing.
Background
In the prior art, for example, the patent application with publication number CN112530574A discloses a medical Internet of Things remote service system based on cloud computing, comprising a display screen, a stethoscope, a movable camera and a microphone. Patients and doctors are matched through cloud computing; when a doctor needs to observe a patient's condition in detail, the movable arm is adjusted to a suitable angle, the receiver is pulled out of the storage barrel, and the beating frequency received by the receiver is converted into an electric signal and transmitted over the network to an audio interface at the doctor's end, so that the patient can be auscultated. Through auscultation and lens observation realized by cloud computing, medical resources are transferred in time, helping more patients. As another example, the patent with application publication number CN112309561A discloses a standard remote diagnosis system comprising a flexible light system, an auscultation waistcoat, a percussion hammer, a small ultrasonic probe and a cloud computing system, enabling face-to-face real-time remote diagnosis between a patient and a higher-level hospital; the timeliness, standardization and accuracy of the diagnosis result are largely ensured, data retention, comparison and standardized interpretation are facilitated, and support is provided for hospital big data.
However, the above technologies consider only auscultation and lens observation, and cannot provide comprehensive medical data of patients. In addition, they require a doctor to carry out real-time remote diagnosis of each patient; since a doctor's time is limited, a large number of remote diagnosis requests cannot be processed efficiently. Such systems are used only for data storage and transmission and cannot process the data, so no auxiliary-diagnosis effect is achieved; the diagnosis result then depends solely on the doctor's medical experience, is easily influenced by subjective factors, and carries risks of misdiagnosis and missed diagnosis.
In view of the above, the present invention proposes a multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing to solve the above-mentioned problems.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing, comprising:
the connection module is used for connecting the patient end and the doctor end through the cloud platform;
the data acquisition module is used for acquiring m groups of historical patient medical data at the patient end and sending the data to the cloud platform; historical patient medical data includes auscultation digital signals, patient pathology images, patient sounds, and electronic medical records;
the cloud platform processes auscultation digital signals and patient sounds to obtain training data;
the pathology acquisition module is used for acquiring first pathology data corresponding to the training data, second pathology data corresponding to the pathology image of the patient and first disease data corresponding to the electronic medical record by the patient end and sending the first pathology data, the second pathology data and the first disease data to the cloud platform;
the cloud platform trains an auxiliary analysis model for predicting first pathology data based on training data;
the cloud platform trains an auxiliary identification model for identifying second pathology data based on the pathology image of the patient;
The cloud platform trains a medical history extraction model for extracting the first disease data based on the electronic medical record;
the data analysis module is used for sending the patient medical data acquired in real time to the platform end, and the platform end respectively inputs the real-time patient medical data into the auxiliary analysis model, the auxiliary identification model and the medical history extraction model to acquire corresponding first pathology data, second pathology data and first disease data and judge whether to generate analysis instructions and similar instructions.
Further, the auscultation digital signals include heart sound signals, breath sound signals, and bowel sound signals.
Further, the training data includes auscultation digital signal characteristic parameters and patient sound characteristic parameters; the patient sound characteristic parameters include volume, pitch, and pace of speech.
Further, the method for processing auscultation digital signals comprises the following steps:
s101: preprocessing the collected auscultation digital signals, wherein the preprocessing comprises filtering and denoising;
s102: performing time-frequency analysis on the preprocessed auscultation digital signals by adopting fast Fourier transform, and analyzing the change of the auscultation digital signals in time and frequency;
the fast Fourier transform is $X(k)=\sum_{n=0}^{N-1}x(n)\,e^{-\frac{2\pi i}{N}kn}$, where $X(k)$ represents the $k$-th frequency component, $x(n)$ represents the $n$-th auscultation digital signal sample, $N$ represents the total number of auscultation digital signal samples, $e$ is the natural logarithmic constant, $i$ is the imaginary unit, $\pi$ is the circumference ratio, and $k=0,1,\dots,N-1$;
S103: extracting auscultation digital signal characteristic parameters from a time-frequency analysis result;
the auscultation digital signal characteristic parameters comprise auscultation period, auscultation average value and auscultation standard deviation;
the peak values of the auscultation digital signal, the times corresponding to the peaks, and the total time are obtained from the time-frequency analysis result; the time difference between two adjacent peaks is taken as an auscultation period $T_j$, and $A$ auscultation periods are acquired from the auscultation digital signal; dividing $A$ by the total time gives the auscultation mean $\mu$; the auscultation standard deviation $\sigma$ is $\sigma=\sqrt{\frac{1}{A}\sum_{j=1}^{A}\left(T_{j}-\mu\right)^{2}}$, where $T_{j}$ is the $j$-th auscultation period and $j=1,2,\dots,A$;
Further, the method for processing patient sound comprises:
s201: converting the collected patient sound into a digital form to obtain a sound signal;
s202: preprocessing the sound signal, wherein the preprocessing comprises filtering and denoising;
s203: acquiring tone and volume from the preprocessed sound signal, wherein the tone is the fundamental frequency and the volume is the amplitude;
s204: framing the preprocessed sound signals by using a window function to obtain signal fragments in a plurality of time windows;
S205: performing fast Fourier transform on the signals in each time window to obtain frequency domain information;
s206: extracting speech rate from the frequency domain information; the speech rate is zero crossing rate.
Further, the training process of the auxiliary analysis model comprises:
converting the training data and the first pathology data into a corresponding set of feature vectors;
taking each group of feature vectors as input of the auxiliary analysis model, wherein the auxiliary analysis model takes a group of predicted first pathology data corresponding to each group of training data as output, and takes the actual first pathology data corresponding to each group of training data as the prediction target, the actual first pathology data being the acquired first pathology data corresponding to the training data; taking minimization of the sum of the prediction errors over the training data as the training target; the auxiliary analysis model is trained until the sum of the prediction errors converges, and training is then stopped; the auxiliary analysis model is a deep neural network model.
Further, the training process of the auxiliary recognition model comprises the following steps:
s301: constructing an auxiliary recognition model;
determining a model structure of an auxiliary recognition model, wherein the model structure comprises an input layer, a convolution layer, a pooling layer, a full connection layer and an output layer; defining the format and size of an input layer; adding a convolution layer in the auxiliary recognition model for extracting characteristics of input data, wherein the convolution layer comprises convolution operation, an activation function and normalization operation; adding a pooling layer after the convolution layer; adding a full connection layer after the pooling layer; adding an output layer at the end of the auxiliary recognition model;
S302: training the auxiliary recognition model based on the patient pathology image;
marking the patient pathology image as a training image, labeling each training image, labeling second pathology data corresponding to the patient pathology image, and respectively converting each second pathology data into a digital label; dividing the marked training image into a training set and a testing set; training the auxiliary recognition model by using a training set, testing the auxiliary recognition model by using a testing set, presetting an error threshold, and outputting the auxiliary recognition model meeting the prediction error when the prediction error is smaller than the preset error threshold; the auxiliary recognition model is a convolutional neural network model.
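The labeling and data-splitting steps of S302 can be sketched as follows. This is a minimal illustration in Python; the label values, the 80/20 split ratio, and the function names are assumptions for illustration, not taken from the specification:

```python
import random

def encode_labels(second_pathology_data):
    # Map each distinct second-pathology label (e.g. "eczema") to a digital label.
    classes = sorted(set(second_pathology_data))
    class_to_id = {name: idx for idx, name in enumerate(classes)}
    return [class_to_id[name] for name in second_pathology_data], class_to_id

def split_train_test(samples, labels, train_ratio=0.8, seed=42):
    # Shuffle deterministically, then divide the annotated training images
    # into a training set and a testing set.
    indices = list(range(len(samples)))
    random.Random(seed).shuffle(indices)
    cut = int(len(indices) * train_ratio)
    train_idx, test_idx = indices[:cut], indices[cut:]
    train = ([samples[i] for i in train_idx], [labels[i] for i in train_idx])
    test = ([samples[i] for i in test_idx], [labels[i] for i in test_idx])
    return train, test
```

The encoded digital labels would then serve as the prediction targets of the convolutional network, with the preset error threshold applied to the test-set prediction error.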
Further, the method for judging whether to generate the analysis instruction comprises the following steps:
a. constructing a medical knowledge graph;
b. converting the first pathology data and the second pathology data into a medical knowledge-graph representation;
c. calculating the shortest path length of the first pathology data and the second pathology data;
searching the shortest path from the node where the first pathology data is located to the node where the second pathology data is located by using a breadth first search algorithm, and recording the number of the nodes passing through in the searching process, wherein the number of the nodes is the shortest path length from the node where the first pathology data is located to the node where the second pathology data is located;
d. Calculating the degree of association score of the first pathology data and the second pathology data, and marking it as the first degree of association; the calculated shortest path length is normalized to obtain the degree of association score of the first pathology data and the second pathology data, namely $S=\frac{1}{1+d}$, where $S$ is the degree of association score and $d$ is the shortest path length;
comparing the first association degree with a preset association degree threshold value;
if the first association degree is smaller than the association degree threshold value, an analysis instruction is not generated;
and if the first association degree is greater than or equal to the association degree threshold value, generating an analysis instruction.
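Steps c and d above can be sketched in Python as follows, assuming the knowledge graph is stored as an adjacency list and the score is the common $1/(1+d)$ normalization of the shortest path length. Note that this sketch counts edges on the path, whereas the text counts nodes passed through; the two differ by one:

```python
from collections import deque

def shortest_path_length(graph, start, goal):
    # Breadth-first search over the medical knowledge graph; returns the
    # number of edges on the shortest path, or None if goal is unreachable.
    if start == goal:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor == goal:
                return dist + 1
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

def association_score(d):
    # Normalize the shortest path length d into a score in (0, 1].
    return 1.0 / (1.0 + d)

def should_generate_analysis_instruction(score, threshold):
    # The analysis instruction is generated when the first degree of
    # association reaches the preset threshold.
    return score >= threshold
```

A shorter path between the two pathology nodes yields a score closer to 1, i.e. a stronger association.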
Further, the method for judging whether to generate the similar instruction comprises the following steps:
if an analysis instruction is generated, the first pathology data and the second pathology data are input into the medical knowledge graph to acquire corresponding second disease data, i.e. the disease corresponding to the first pathology data and the second pathology data; the first disease data and the second disease data are then input into the medical knowledge graph, the degree of association score between the first disease data and the second disease data is calculated, and this score is marked as the second degree of association;
comparing the second association with an association threshold;
if the second association degree is smaller than the association degree threshold value, a similar instruction is not generated;
And if the second association degree is greater than or equal to the association degree threshold value, generating a similar instruction.
Further, if the analysis instruction is not generated, the first pathology data, the second pathology data and the first disease data are sent to a doctor side;
if a similar instruction is generated, the first disease data and the second disease data are sent to a doctor side;
if the similar instruction is not generated, the second disease data is sent to the doctor side.
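The routing rules above can be summarized in one small function (a sketch; the dictionary keys and function name are illustrative assumptions):

```python
def select_data_for_doctor(analysis_generated, similar_generated,
                           first_pathology, second_pathology,
                           first_disease, second_disease):
    # Decide which data the cloud platform sends to the doctor end,
    # depending on which instructions were generated.
    if not analysis_generated:
        # No analysis instruction: forward the raw analysis results.
        return {"first_pathology": first_pathology,
                "second_pathology": second_pathology,
                "first_disease": first_disease}
    if similar_generated:
        # Similar instruction: the medical history matches the inferred disease.
        return {"first_disease": first_disease,
                "second_disease": second_disease}
    # Analysis instruction but no similar instruction.
    return {"second_disease": second_disease}
```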
The intelligent multi-dimensional remote auscultation and diagnosis method based on cloud computing is realized based on the intelligent multi-dimensional remote auscultation and diagnosis system based on cloud computing, and comprises the following steps:
connecting a patient end and a doctor end through a cloud platform;
the patient end collects m groups of historical patient medical data and sends the data to the cloud platform; historical patient medical data includes auscultation digital signals, patient pathology images, patient sounds, and electronic medical records;
the cloud platform processes auscultation digital signals and patient sounds to obtain training data;
the method comprises the steps that a patient end collects first pathology data corresponding to training data, second pathology data corresponding to pathology images of a patient and first disease data corresponding to electronic medical records, and sends the first pathology data, the second pathology data and the first disease data to a cloud platform;
the cloud platform trains an auxiliary analysis model for predicting the first pathology data based on the training data;
The cloud platform trains an auxiliary identification model for identifying second pathology data based on the pathology image of the patient;
the cloud platform trains a medical history extraction model for extracting first disease data based on the electronic medical record;
the patient end sends the patient medical data acquired in real time to the platform end, the platform end respectively inputs the real-time patient medical data into the auxiliary analysis model, the auxiliary identification model and the medical history extraction model, acquires corresponding first pathology data, second pathology data and first disease data, and judges whether to generate an analysis instruction and a similar instruction.
An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the cloud computing based multi-dimensional remote auscultation and diagnosis intelligence method when executing the computer program.
A computer readable storage medium having stored thereon a computer program which when executed performs the cloud computing based multi-dimensional remote auscultation and diagnosis intelligence method.
The intelligent multi-dimensional remote auscultation and diagnosis system based on cloud computing has the technical effects and advantages that:
Through the cloud platform, a patient and a doctor can be connected remotely, realizing remote diagnosis, which is convenient for patients in remote areas or patients needing long-term monitoring. The comprehensive multi-dimensional data comprise multiple data types such as auscultation digital signals, patient pathology images, patient sounds and electronic medical records, provide more comprehensive patient information, and help diagnose a patient's diseases comprehensively. Meanwhile, intelligent auxiliary diagnosis can be realized according to the knowledge graph, providing auxiliary diagnosis for doctors, reducing their workload, and improving the accuracy and speed of diagnosis.
Drawings
Fig. 1 is a schematic diagram of a multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of endpoint connection according to embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of a system architecture according to an embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of a multi-dimensional remote auscultation and diagnosis intelligent method based on cloud computing according to embodiment 2 of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to embodiment 3 of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1, the multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing in this embodiment includes a connection module, a data acquisition module, a data processing module, a pathology acquisition module, a first model training module, a second model training module, a third model training module, and a data analysis module; each module is connected in a wired and/or wireless mode, so that data transmission among the modules is realized;
the connection module is used for connecting the patient end and the doctor end through the cloud platform, and please refer to fig. 2;
the cloud platform is a service platform constructed based on cloud computing technology; it is a distributed computing architecture consisting of multiple servers, storage devices and network devices, and provides computing, storage and service resources through virtualization; the patient end and the doctor end are connected through the network provided by cloud computing for real-time data transmission and communication; at the same time, patient medical data can be stored, processed and analyzed, and the security and privacy of the patient medical data can be ensured;
the data acquisition module is used for acquiring m groups of historical patient medical data at a patient end, wherein m is an integer greater than 1; historical patient medical data includes auscultation digital signals, patient pathology images, patient sounds, and electronic medical records; transmitting the historical patient medical data to a cloud platform;
The auscultation digital signals comprise heart sound signals, breathing sound signals and bowel sound signals; the auscultation digital signal is obtained by an electronic stethoscope, and the electronic stethoscope obtains heart sounds, respiratory sounds and borborygmus sounds of a patient and converts the heart sounds, respiratory sounds and borborygmus sounds into digital signals; heart sound signals reflect the heart function of a patient, including heart rate, heart rhythm, and possibly heart disease, from which the patient's heart health can be assessed; the breath sound signal reflects the respiratory function of the patient and possibly respiratory diseases, such as wheezing, dyspnea, etc., by which the patient's lung health can be assessed; the borborygmus signal reflects the digestive system functions of the patient, including intestinal peristalsis and possibly intestinal diseases, through which the digestive system health of the patient can be assessed;
the patient pathology image is captured by a mobile device; the patient takes an image of his or her own pathology via the mobile device. Examples include skin lesions (eczema, rash, warts, sores and the like), where photographs can record the appearance, color, size and distribution of the lesions, helping doctors diagnose skin diseases and plan treatment; trauma (wounds, fractures, contusions and the like), where photographs or videos can record the appearance and changes of the injured part, helping doctors assess the injury and formulate a treatment scheme; and ocular pathology (red eye, eyelid swelling, ocular foreign-body sensation and the like), where eye photographs or videos can record the symptoms, helping ophthalmologists diagnose and treat eye diseases;
Patient sound is recorded and acquired by mobile equipment; the patient records own voice through the mobile device, and the voice of the patient such as speaking voice, cough voice and the like is helpful for the doctor to diagnose the speech disorder, the throat diseases and the respiratory system diseases;
the electronic medical record is filled in by the patient according to his or her symptoms and includes the patient's personal information, past medical history and other information; if the patient has been treated at another hospital, the electronic medical record can be obtained from that hospital's medical information system, and it then also includes diagnosis results, doctor's advice, medicine prescriptions and other information, so as to comprehensively record the patient's condition;
the cloud platform processes auscultation digital signals and patient sounds to obtain training data;
the training data comprises auscultation digital signal characteristic parameters and patient sound characteristic parameters; patient sound characteristic parameters include volume, pitch, and speech rate;
the method for processing auscultation digital signals comprises the following steps:
s101: preprocessing the collected auscultation digital signals, wherein the preprocessing comprises filtering and denoising;
s102: performing time-frequency analysis on the preprocessed auscultation digital signals by adopting fast Fourier transform, and analyzing the change of the auscultation digital signals in time and frequency;
the fast Fourier transform is $X(k)=\sum_{n=0}^{N-1}x(n)\,e^{-\frac{2\pi i}{N}kn}$, where $X(k)$ represents the $k$-th frequency component, $x(n)$ represents the $n$-th auscultation digital signal sample, $N$ represents the total number of auscultation digital signal samples, $e$ is the natural logarithmic constant, $i$ is the imaginary unit, $\pi$ is the circumference ratio, and $k=0,1,\dots,N-1$;
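The discrete transform used in S102 can be evaluated directly as a sketch. A real FFT computes the same result in O(N log N); this naive pure-Python form simply mirrors the summation in the text:

```python
import cmath

def dft(signal):
    # Direct evaluation of X(k) = sum_{n=0}^{N-1} x(n) * exp(-2*pi*i*k*n / N)
    # for every frequency component k = 0, ..., N-1.
    N = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]
```

For a constant signal, all energy lands in component X(0), which is a quick sanity check on the implementation.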
S103: extracting auscultation digital signal characteristic parameters from a time-frequency analysis result;
the auscultation digital signal characteristic parameters comprise auscultation period, auscultation average value and auscultation standard deviation;
the peak values of the auscultation digital signal, the times corresponding to the peaks, and the total time are obtained from the time-frequency analysis result; the time difference between two adjacent peaks is taken as an auscultation period $T_j$, and $A$ auscultation periods are acquired from the auscultation digital signal; dividing $A$ by the total time gives the auscultation mean $\mu$; the auscultation standard deviation $\sigma$ is $\sigma=\sqrt{\frac{1}{A}\sum_{j=1}^{A}\left(T_{j}-\mu\right)^{2}}$, where $T_{j}$ is the $j$-th auscultation period and $j=1,2,\dots,A$;
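The characteristic-parameter extraction in S103 can be sketched as follows, assuming the peak times have already been obtained from the time-frequency analysis. Per the text, the "auscultation mean" is A divided by the total time (a rate), while the standard deviation is computed over the individual periods; both the symbol names and the use of the average period in the deviation are reconstructions:

```python
import math

def auscultation_features(peak_times, total_time):
    # Periods T_j are the time differences between adjacent peaks.
    periods = [t2 - t1 for t1, t2 in zip(peak_times, peak_times[1:])]
    A = len(periods)
    mean = A / total_time           # auscultation mean as stated in the text
    mean_period = sum(periods) / A  # average period, used for the deviation
    std = math.sqrt(sum((T - mean_period) ** 2 for T in periods) / A)
    return periods, mean, std
```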
The method for processing patient sound comprises the following steps:
s201: converting the collected patient sound into a digital form to obtain a sound signal;
s202: preprocessing the sound signal, wherein the preprocessing comprises filtering and denoising;
s203: acquiring tone and volume from the preprocessed sound signal, wherein the tone is the fundamental frequency and the volume is the amplitude;
s204: framing the preprocessed sound signals by using a window function to obtain signal fragments in a plurality of time windows; framing is typically performed using a window of 20-30 milliseconds, window functions such as hamming windows, hanning windows, etc.;
S205: performing fast Fourier transform on the signals in each time window to obtain frequency domain information;
s206: extracting speech rate from the frequency domain information; speech speed is zero-crossing rate; the method for calculating the zero crossing rate comprises the following steps:
def calculate_zero_crossing_rate(signal):
    # Count the sign changes between consecutive samples of the sound signal.
    zero_crossings = 0
    for i in range(1, len(signal)):
        if (signal[i] >= 0 and signal[i-1] < 0) or (signal[i] < 0 and signal[i-1] >= 0):
            zero_crossings += 1
    # Normalize by the number of consecutive sample pairs examined.
    zero_crossing_rate = zero_crossings / (len(signal) - 1)
    return zero_crossing_rate
the system comprises a pathology acquisition module, a diagnosis module and a diagnosis module, wherein the patient end receives training data sent by a cloud platform and acquires first pathology data corresponding to the training data, second pathology data corresponding to a pathology image of the patient and first disease data corresponding to an electronic medical record; transmitting the first pathology data, the second pathology data and the first disease data to a cloud platform;
wherein, the plurality of sets of training data can correspond to one first pathology data, the plurality of patient pathology images can correspond to one second pathology data, and one electronic medical record can correspond to a plurality of first disease data; the first pathology data and the second pathology data are pathology corresponding to the patient, the first disease data are disease history corresponding to the patient, training data, pathology images of the patient and electronic medical records are analyzed by a person skilled in the art, and the corresponding first pathology data, second pathology data and first disease data are obtained one by one;
First pathology data include, for example, arrhythmia, heart murmur, asthma, lung infection, intestinal obstruction, flatulence, bronchitis and the like; second pathology data include, for example, eczema, sores, wounds, fractures, red eye, eyelid swelling and the like; first disease data include, for example, hypertension, gastritis, respiratory infections and the like;
The cloud platform trains an auxiliary analysis model for predicting first pathology data based on training data;
the training process of the auxiliary analysis model comprises the following steps:
converting the training data and the first pathology data into a corresponding set of feature vectors;
Taking each group of feature vectors as input to the auxiliary analysis model, the model outputs a group of predicted first pathology data for each group of training data, with the actual first pathology data corresponding to each group of training data as the prediction target; the actual first pathology data is the acquired first pathology data corresponding to that training data. Minimising the sum of the prediction errors over the training data is the training objective, where the prediction error is calculated as E = Σ_{i=1}^{n} (ŷ_i − y_i)², in which E is the prediction error, n is the number of feature-vector groups in the training data, ŷ_i is the predicted first pathology data corresponding to the i-th group of training data, and y_i is the actual first pathology data corresponding to the i-th group. The auxiliary analysis model is trained until the sum of the prediction errors converges, at which point training stops;
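The training objective, minimising the sum of prediction errors over all feature-vector groups, can be sketched as follows; squared error is assumed here, and the function name is illustrative rather than part of the described system:

```python
def prediction_error(predicted, actual):
    """Sum of squared differences between predicted and actual first
    pathology data over all feature-vector groups:
    E = sum_i (yhat_i - y_i)**2 (squared error assumed)."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))
```

Training then iterates until this sum stops decreasing, i.e. reaches convergence.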
The auxiliary analysis model is specifically a deep neural network model comprising an input layer, hidden layers and an output layer; each hidden layer contains a plurality of neurons, and each neuron is connected to the neurons of the next layer, with each connection carrying a weight that determines the importance and influence of the data transmitted through the network; an activation function is applied to each neuron of the hidden and output layers, introducing nonlinearity and allowing the network to learn more complex patterns and features;
The cloud platform trains an auxiliary identification model for identifying second pathology data based on the pathology image of the patient;
the training process of the auxiliary recognition model comprises the following steps:
S301: constructing an auxiliary recognition model;
determining a model structure of an auxiliary recognition model, wherein the model structure comprises an input layer, a convolution layer, a pooling layer, a full connection layer and an output layer; defining the format and size of an input layer; adding a convolution layer in the auxiliary recognition model for extracting characteristics of input data, wherein the convolution layer comprises convolution operation, an activation function and normalization operation; adding a pooling layer behind the convolution layer for reducing the dimension of the feature map, accelerating the calculation speed and retaining important features; adding a full connection layer after the pooling layer for classification or regression tasks; adding an output layer at the end of the auxiliary recognition model, using a softmax activation function to perform classification tasks, and using a linear activation function to perform regression tasks;
after the model structure of the auxiliary recognition model is constructed, determining a loss function, an optimizer and an evaluation index to compile the auxiliary recognition model;
s302: training the auxiliary recognition model based on the patient pathology image;
Marking the patient pathology images as training images and annotating each training image with the second pathology data corresponding to the patient pathology image; each item of second pathology data is converted into a digital label, for example: eczema is converted to 1, trauma to 2, red eye to 3. The annotated training images are divided into a training set and a test set, with 70% of the training images used as the training set and 30% as the test set; the auxiliary recognition model is trained with the training set and tested with the test set. An error threshold is preset, and when the prediction error is smaller than the preset error threshold, the auxiliary recognition model meeting the threshold is output; the prediction error is calculated as E = Σ_{i=1}^{n} (ŷ_i − y_i)², where E is the prediction error, n is the number of training images, ŷ_i is the predicted annotation corresponding to the i-th training image, and y_i is the actual label corresponding to the i-th training image. The loss function is the prediction error function, and the evaluation index is the prediction error; the error threshold is preset according to the accuracy required of the auxiliary recognition model;
the auxiliary recognition model is specifically a convolutional neural network model;
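The label encoding and 70/30 split described in step S302 can be sketched as follows; the label map follows the example in the text (eczema to 1, trauma to 2, red eye to 3), and the helper names are illustrative:

```python
# Hypothetical label map, taken from the example conversions in the text.
LABEL_MAP = {"eczema": 1, "trauma": 2, "red eye": 3}

def encode_labels(pathology_names):
    """Convert second pathology data into digital labels."""
    return [LABEL_MAP[name] for name in pathology_names]

def split_train_test(samples, train_ratio=0.7):
    """70/30 split of the annotated training images (step S302)."""
    cut = round(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]
```

A real pipeline would shuffle before splitting; the split here is kept deterministic for clarity.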
the cloud platform trains a medical history extraction model for extracting the first disease data based on the electronic medical record;
the training process of the medical history extraction model comprises the following steps:
S401: constructing a medical history extraction model;
Importing a deep learning framework (e.g., TensorFlow, Keras or PyTorch) and related libraries; defining the model structure of the medical history extraction model, which comprises an embedding layer, RNN layers and an output layer; because the input to the medical history extraction model is an electronic medical record, which is text data, an embedding layer must be added to convert the vocabulary into dense vector representations; a plurality of RNN layers are added after the embedding layer and parameterized according to the electronic medical record; the RNN layers process the text data in the medical record and capture the temporal order and context information in the medical history description, so that the patient's medical history information is extracted more accurately; the output layer is added at the end of the medical history extraction model, and since no text classification is required, no fully connected layer needs to be added;
S402: training a medical history extraction model based on the electronic medical record; the medical history extraction model is specifically a cyclic neural network model;
The first disease data is extracted from the electronic medical record, i.e., the patient's medical history is extracted from the electronic medical record; since models for extracting a patient's medical history from electronic medical records are prior art, the training process of the medical history extraction model is not repeated here;
the data analysis module is used for sending the patient medical data acquired in real time to the platform end, and the platform end respectively inputs the real-time patient medical data into the auxiliary analysis model, the auxiliary identification model and the medical history extraction model to acquire corresponding first pathology data, second pathology data and first disease data; analyzing the first pathology data, the second pathology data and the first disease data, and judging whether to generate an analysis instruction and a similar instruction;
the method for judging whether to generate the analysis instruction comprises the following steps:
a. constructing a medical knowledge graph;
Collecting data from a plurality of sources, including medical literature, clinical practice, medical databases, expert opinions and the like, the data including disease names and symptom descriptions; performing preprocessing operations such as cleaning, de-duplication and standardization on the collected data, specifically including word segmentation and part-of-speech tagging; performing entity recognition and relation extraction with natural language processing, i.e., identifying disease names and symptom descriptions together with the associations between them; for example, fever and cough are typical manifestations of influenza, so the degree of association between fever, cough and influenza is high; representing the entities and relations in structured form to build the data model of the medical knowledge graph, organizing the associations between entities with graph theory, with diseases and symptoms as nodes connected by different types of relations; storing the constructed medical knowledge graph in the graph database of the cloud platform to support efficient data query and retrieval;
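Step a can be sketched with a plain adjacency structure standing in for the graph database; the class and relation names are illustrative, and the fever/cough/influenza edges follow the example above:

```python
from collections import defaultdict

class MedicalKnowledgeGraph:
    """Minimal in-memory stand-in for the graph database described in
    step a; diseases and symptoms are nodes, relations are edges."""

    def __init__(self):
        self.adj = defaultdict(set)  # entity -> directly related entities

    def add_relation(self, disease, symptom):
        # undirected edge: disease <-> symptom
        self.adj[disease].add(symptom)
        self.adj[symptom].add(disease)

kg = MedicalKnowledgeGraph()
kg.add_relation("influenza", "fever")   # example association from the text
kg.add_relation("influenza", "cough")
```

A production system would persist this in a graph database with typed relations, as the text states; the adjacency sets here only capture the connectivity needed for the path computations in step c.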
b. Converting the first pathology data and the second pathology data into a medical knowledge-graph representation;
mapping the first pathology data and the second pathology data to entities in the constructed medical knowledge graph through entity identification;
c. calculating the shortest path length of the first pathology data and the second pathology data;
searching the shortest path from the node where the first pathology data is located to the node where the second pathology data is located by using a breadth first search algorithm, and recording the number of nodes passing through in the searching process, wherein the number of nodes is the shortest path length from the node where the first pathology data is located to the node where the second pathology data is located; the shortest path length reflects the association degree of the first pathology data and the second pathology data, and the smaller the shortest path length is, the higher the association degree of the first pathology data and the second pathology data is;
the breadth-first search algorithm includes:
Setting up a queue, marking the node where the first pathology data is located as visited, adding it to the queue, and initializing the path length to 0; taking the first node out of the queue; checking whether this node is the node where the second pathology data is located, and if so, recording the current path length and ending the search; if not, traversing all unvisited neighbour nodes of this node, marking them as visited, adding them to the queue, and increasing the path length; repeating the search in this manner until the node where the second pathology data is located is found, then recording the current path length and ending the search;
d. Calculating the association degree score of the first pathology data and the second pathology data, and marking it as the first association degree; normalizing the calculated shortest path length yields the relevance score of the first pathology data and the second pathology data, namely S = 1/(1 + L), where S is the relevance score and L is the shortest path length;
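Steps c and d can be sketched as follows, assuming the normalisation S = 1/(1 + L); the function names are illustrative:

```python
from collections import deque

def shortest_path_length(adj, start, goal):
    """Breadth-first search over an adjacency mapping; returns the
    number of edges on the shortest path from start to goal, or
    None if no path exists (step c of the text)."""
    if start == goal:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt == goal:
                return dist + 1
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def relevance_score(path_length):
    # assumed normalisation S = 1 / (1 + L): shorter path, higher score
    return 1.0 / (1.0 + path_length)
```

For example, with fever and cough both linked to influenza, the fever-to-cough path has length 2 and relevance score 1/3.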
comparing the first association degree with a preset association degree threshold value;
If the first association degree is smaller than the association degree threshold, no analysis instruction is generated; the association between the first pathology data and the second pathology data is low, and the patient's disease cannot be analysed from them;
if the first association degree is greater than or equal to the association degree threshold, an analysis instruction is generated; the association between the first pathology data and the second pathology data is high, and the patient's disease can be analysed from them;
It should be noted that the association degree threshold is preset by a person skilled in the art: a plurality of similar patient diseases are collected in advance, together with a plurality of typical symptoms for each disease; the relevance score of every pair of symptoms among the symptoms of each disease is calculated, and the scores within each disease are averaged; these per-disease averages are then averaged across all the collected diseases, and the resulting mean is used as the association degree threshold;
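One reading of this two-stage averaging, under the assumption that symptom-pair scores are first averaged within each disease and the per-disease means are then averaged across diseases, can be sketched as:

```python
from statistics import mean

def association_threshold(disease_pair_scores):
    """disease_pair_scores maps each disease to the relevance scores
    of its symptom pairs (interpretation of the text, not a verbatim
    formula): average within each disease, then across diseases."""
    per_disease_means = [mean(scores) for scores in disease_pair_scores.values()]
    return mean(per_disease_means)
```

The disease and score values below are purely illustrative.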
The method for judging whether to generate the similar instruction comprises the following steps:
if the analysis instruction is not generated, the first pathology data, the second pathology data and the first disease data are sent to a doctor end, and the doctor performs disease diagnosis on the patient;
If an analysis instruction is generated, the first pathology data and the second pathology data are input into the medical knowledge graph to obtain the corresponding second disease data; the second disease data is the disease corresponding to the first pathology data and the second pathology data, for example heart disease, heart valve disease, airway inflammation, conjunctivitis and the like; the first disease data and the second disease data are input into the medical knowledge graph, the relevance score between the first disease data and the second disease data is calculated, and the score is marked as the second association degree; the calculation of the second association degree is identical to that of the first association degree and is not repeated here;
comparing the second association with an association threshold;
If the second association degree is smaller than the association degree threshold, no similar instruction is generated; the association between the first disease data and the second disease data is low, the disease the patient currently suffers from differs from the past medical history, and a doctor is required for further diagnosis;
if the second association degree is greater than or equal to the association degree threshold, a similar instruction is generated; the association between the first disease data and the second disease data is high, the current disease is similar to the past medical history, and the patient's previous disease was not cured;
if a similar instruction is generated, the first disease data and the second disease data are sent to the doctor end; the doctor can thus review both the current disease and the past medical history, and since the patient has developed a similar disease after the previous treatment scheme, the doctor needs to adjust the treatment plan;
if no similar instruction is generated, the second disease data is sent to the doctor end, which facilitates the doctor's diagnosis of the patient's current disease;
the doctor can also acquire the real-time patient medical data stored in the cloud platform from the doctor end, so that the doctor can comprehensively know the condition of the patient and make more accurate diagnosis;
referring to fig. 3, a specific system architecture diagram of the present embodiment is shown;
according to the embodiment, through the cloud platform, a patient and a doctor can be remotely connected, remote diagnosis is realized, and diagnosis is convenient for a remote area or a patient needing long-time monitoring; the comprehensive multidimensional data comprise a plurality of data types such as auscultation digital signals, patient pathology images, patient sounds, electronic medical records and the like, can provide more comprehensive patient information, and is helpful for comprehensively diagnosing patient diseases; meanwhile, intelligent auxiliary diagnosis can be realized according to the knowledge graph, auxiliary diagnosis is provided for doctors, the workload of the doctors is reduced, and the accuracy and the speed of diagnosis are improved.
Example 2
Referring to fig. 4, this embodiment provides a multi-dimensional remote auscultation and diagnosis intelligent method based on cloud computing; for matters not described in detail here, see embodiment 1. The method includes:
connecting a patient end and a doctor end through a cloud platform;
the patient end collects m groups of historical patient medical data and sends the data to the cloud platform; historical patient medical data includes auscultation digital signals, patient pathology images, patient sounds, and electronic medical records;
the cloud platform processes auscultation digital signals and patient sounds to obtain training data;
the method comprises the steps that a patient end collects first pathology data corresponding to training data, second pathology data corresponding to pathology images of a patient and first disease data corresponding to electronic medical records, and sends the first pathology data, the second pathology data and the first disease data to a cloud platform;
the cloud platform trains an auxiliary analysis model for predicting the first pathology data based on the training data;
the cloud platform trains an auxiliary identification model for identifying second pathology data based on the pathology image of the patient;
the cloud platform trains a medical history extraction model for extracting first disease data based on the electronic medical record;
the patient end sends the patient medical data acquired in real time to the platform end, the platform end respectively inputs the real-time patient medical data into the auxiliary analysis model, the auxiliary identification model and the medical history extraction model, acquires corresponding first pathology data, second pathology data and first disease data, and judges whether to generate an analysis instruction and a similar instruction.
Further, the auscultation digital signals include heart sound signals, breath sound signals, and bowel sound signals.
Further, the training data includes auscultation digital signal characteristic parameters and patient sound characteristic parameters; the patient sound characteristic parameters include volume, pitch, and pace of speech.
Further, the method for processing auscultation digital signals comprises the following steps:
S101: preprocessing the collected auscultation digital signals, wherein the preprocessing comprises filtering and denoising;
S102: performing time-frequency analysis on the preprocessed auscultation digital signals by adopting fast Fourier transform, and analyzing the change of the auscultation digital signals in time and frequency;
The fast Fourier transform is X(k) = Σ_{n=0}^{N−1} x(n)·e^(−i·2πkn/N), k = 0, 1, …, N−1, where X(k) denotes the k-th frequency component, x(n) denotes the n-th sample of the auscultation digital signal, N denotes the total number of samples, e is the base of the natural logarithm, i is the imaginary unit, and π is the circumference ratio;
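The transform can be evaluated directly as written; the sketch below is a direct O(N²) implementation for illustration, whereas a real system would use an FFT routine to obtain the same result in O(N log N):

```python
import cmath

def dft(x):
    """Direct evaluation of the transform in the text:
    X(k) = sum_{n=0}^{N-1} x(n) * exp(-i*2*pi*k*n/N).
    Illustrative only; an FFT library gives the same values faster."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]
```

For a constant signal, all the energy lands in the k = 0 component, as expected.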
S103: extracting auscultation digital signal characteristic parameters from a time-frequency analysis result;
the auscultation digital signal characteristic parameters comprise auscultation period, auscultation average value and auscultation standard deviation;
Obtaining the peak values of the auscultation digital signal, the times corresponding to the peaks, and the total time from the time-frequency analysis result; taking the time difference between two adjacent peaks as an auscultation period T_j, and acquiring A auscultation periods from the auscultation digital signal; dividing A by the total time gives the auscultation mean μ; the auscultation standard deviation is σ = sqrt((1/A)·Σ_{j=1}^{A} (T_j − μ)²), where T_j is the j-th auscultation period, j = 1, 2, …, A;
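The characteristic-parameter extraction in step S103 can be sketched as follows, reading the text literally (the auscultation mean is A divided by the total time, and the standard deviation measures the spread of the A periods around that mean); peak detection itself is assumed to have been done already:

```python
import math

def auscultation_features(peak_times, total_time):
    """peak_times: times (in seconds) of successive signal peaks from
    the time-frequency analysis. Returns the periods, the auscultation
    mean and the auscultation standard deviation as defined in the text."""
    # each auscultation period is the gap between two adjacent peaks
    periods = [b - a for a, b in zip(peak_times, peak_times[1:])]
    A = len(periods)
    mu = A / total_time  # auscultation mean, per the text
    sigma = math.sqrt(sum((t - mu) ** 2 for t in periods) / A)
    return periods, mu, sigma
```

With evenly spaced peaks at 0, 1, 2 and 3 seconds over a 3-second total, the periods are all 1 s, the mean is 1 and the standard deviation is 0.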
Further, the method for processing patient sound comprises:
S201: converting the collected patient sound into a digital form to obtain a sound signal;
S202: preprocessing the sound signal, wherein the preprocessing comprises filtering and denoising;
S203: acquiring tone and volume from the preprocessed sound signal, wherein the tone is the fundamental frequency and the volume is the amplitude;
S204: framing the preprocessed sound signals by using a window function to obtain signal fragments in a plurality of time windows;
S205: performing fast Fourier transform on the signals in each time window to obtain frequency domain information;
S206: extracting the speech rate from the frequency domain information; the speech rate is the zero-crossing rate.
Further, the training process of the auxiliary analysis model comprises:
converting the training data and the first pathology data into a corresponding set of feature vectors;
Taking each group of feature vectors as input to the auxiliary analysis model, the model outputs a group of predicted first pathology data for each group of training data, with the actual first pathology data corresponding to each group of training data as the prediction target, where the actual first pathology data is the acquired first pathology data corresponding to that training data; minimising the sum of the prediction errors over the training data is the training target; the auxiliary analysis model is trained until the sum of the prediction errors converges, and training then stops; the auxiliary analysis model is a deep neural network model.
Further, the training process of the auxiliary recognition model comprises the following steps:
S301: constructing an auxiliary recognition model;
determining a model structure of an auxiliary recognition model, wherein the model structure comprises an input layer, a convolution layer, a pooling layer, a full connection layer and an output layer; defining the format and size of an input layer; adding a convolution layer in the auxiliary recognition model for extracting characteristics of input data, wherein the convolution layer comprises convolution operation, an activation function and normalization operation; adding a pooling layer after the convolution layer; adding a full connection layer after the pooling layer; adding an output layer at the end of the auxiliary recognition model;
S302: training the auxiliary recognition model based on the patient pathology image;
marking the patient pathology image as a training image, labeling each training image, labeling second pathology data corresponding to the patient pathology image, and respectively converting each second pathology data into a digital label; dividing the marked training image into a training set and a testing set; training the auxiliary recognition model by using a training set, testing the auxiliary recognition model by using a testing set, presetting an error threshold, and outputting the auxiliary recognition model meeting the prediction error when the prediction error is smaller than the preset error threshold; the auxiliary recognition model is a convolutional neural network model.
Further, the method for judging whether to generate the analysis instruction comprises the following steps:
a. constructing a medical knowledge graph;
b. converting the first pathology data and the second pathology data into a medical knowledge-graph representation;
c. calculating the shortest path length of the first pathology data and the second pathology data;
searching the shortest path from the node where the first pathology data is located to the node where the second pathology data is located by using a breadth first search algorithm, and recording the number of the nodes passing through in the searching process, wherein the number of the nodes is the shortest path length from the node where the first pathology data is located to the node where the second pathology data is located;
d. Calculating the association degree score of the first pathology data and the second pathology data, and marking it as the first association degree; normalizing the calculated shortest path length yields the relevance score of the first pathology data and the second pathology data, namely S = 1/(1 + L), where S is the relevance score and L is the shortest path length;
comparing the first association degree with a preset association degree threshold value;
if the first association degree is smaller than the association degree threshold value, an analysis instruction is not generated;
and if the first association degree is greater than or equal to the association degree threshold value, generating an analysis instruction.
Further, the method for judging whether to generate the similar instruction comprises the following steps:
If an analysis instruction is generated, the first pathology data and the second pathology data are input into the medical knowledge graph to obtain corresponding second disease data; the second disease data is the disease corresponding to the first pathology data and the second pathology data; the first disease data and the second disease data are input into the medical knowledge graph, the relevance score between the first disease data and the second disease data is calculated, and the score is marked as the second association degree;
comparing the second association with an association threshold;
if the second association degree is smaller than the association degree threshold value, a similar instruction is not generated;
and if the second association degree is greater than or equal to the association degree threshold value, generating a similar instruction.
Further, if no analysis instruction is generated, the first pathology data, the second pathology data and the first disease data are sent to the doctor end;
if a similar instruction is generated, the first disease data and the second disease data are sent to the doctor end;
if no similar instruction is generated, the second disease data is sent to the doctor end.
Example 3
Referring to fig. 5, the disclosure provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent method provided by the above methods.
Since the electronic device described in this embodiment is an electronic device used for implementing the cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent method in this embodiment, those skilled in the art can understand the specific implementation manner of the electronic device and various variations thereof based on the cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent method described in this embodiment, so how to implement the method in this embodiment will not be described in detail herein. As long as the person skilled in the art implements the electronic device adopted by the intelligent multi-dimensional remote auscultation and diagnosis method based on cloud computing in the embodiment of the application, the electronic device belongs to the scope of protection intended by the application.
Example 4
The embodiment discloses a computer-readable storage medium storing a computer program which, when executed by a processor, implements the cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent method provided by the above methods.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any other combination. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. When the computer instructions or computer program are loaded or executed on a computer, the processes or functions described in accordance with embodiments of the present invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center over a wired network or a wireless network. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more sets of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely one, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Finally: the foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (13)

1. A multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing, characterized by comprising:
the connection module is used for connecting the patient end and the doctor end through the cloud platform;
the data acquisition module is used for acquiring m groups of historical patient medical data at a patient end and sending the data to the cloud platform, wherein m is an integer greater than 1; historical patient medical data includes auscultation digital signals, patient pathology images, patient sounds, and electronic medical records;
the cloud platform processes auscultation digital signals and patient sounds to obtain training data;
the pathology acquisition module is used for acquiring first pathology data corresponding to the training data, second pathology data corresponding to the pathology image of the patient and first disease data corresponding to the electronic medical record by the patient end and sending the first pathology data, the second pathology data and the first disease data to the cloud platform;
the cloud platform trains an auxiliary analysis model for predicting first pathology data based on training data;
the cloud platform trains an auxiliary identification model for identifying second pathology data based on the pathology image of the patient;
the cloud platform trains a medical history extraction model for extracting the first disease data based on the electronic medical record;
the data analysis module is used for sending the patient medical data acquired in real time to the cloud platform; the cloud platform respectively inputs the real-time patient medical data into the auxiliary analysis model, the auxiliary identification model, and the medical history extraction model to obtain the corresponding first pathology data, second pathology data, and first disease data, and judges whether to generate an analysis instruction and a similar instruction.
2. The cloud computing based multi-dimensional remote auscultation and diagnosis intelligent system of claim 1, wherein said auscultation digital signals include heart sound signals, breath sound signals, and bowel sound signals.
3. The cloud computing based multi-dimensional remote auscultation and diagnosis intelligent system of claim 2, wherein the training data comprises auscultation digital signal characteristic parameters and patient sound characteristic parameters; the patient sound characteristic parameters include volume, pitch, and pace of speech.
4. The cloud computing based multi-dimensional remote auscultation and diagnosis intelligent system according to claim 3, wherein said method of processing auscultation digital signals comprises:
s101: preprocessing the collected auscultation digital signals, wherein the preprocessing comprises filtering and denoising;
s102: performing time-frequency analysis on the preprocessed auscultation digital signals by adopting fast Fourier transform, and analyzing the change of the auscultation digital signals in time and frequency;
the fast Fourier transform is X(k) = Σ_{n=0}^{N-1} x(n)·e^(−i·2π·k·n/N), wherein X(k) represents the k-th frequency component, x(n) represents the n-th sample of the auscultation digital signal, N represents the total number of samples of the auscultation digital signal, e is the base of the natural logarithm, i is the imaginary unit, π is the circle ratio, and k = 0, 1, …, N−1;
S103: extracting auscultation digital signal characteristic parameters from a time-frequency analysis result;
the auscultation digital signal characteristic parameters comprise auscultation period, auscultation average value and auscultation standard deviation;
obtaining the peak values of the auscultation digital signal, the times corresponding to the peak values, and the total time from the time-frequency analysis result; taking the time difference between two adjacent peak values as an auscultation period T_j, and acquiring A auscultation periods from the auscultation digital signal; dividing A by the total time to obtain the auscultation mean; the auscultation standard deviation is σ = sqrt( (1/A)·Σ_{j=1}^{A} (T_j − T̄)² ), wherein T_j is the j-th auscultation period, T̄ is the mean of the A auscultation periods, and j = 1, 2, …, A.
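The feature-extraction steps of claim 4 can be sketched as follows. This is an illustrative sketch assuming NumPy/SciPy; the peak-height threshold is an arbitrary choice not specified in the claim:

```python
import numpy as np
from scipy.signal import find_peaks

def auscultation_features(signal, fs):
    """Sketch of the claim-4 feature parameters for a 1-D auscultation signal.

    Returns (spectrum, periods, mean_rate, period_std)."""
    n = len(signal)
    # S102: FFT, X(k) = sum_n x(n) * exp(-i*2*pi*k*n/N)
    spectrum = np.fft.fft(signal)
    # S103: peak times; differences of adjacent peak times are the periods T_j
    peaks, _ = find_peaks(signal, height=0.5 * np.max(signal))
    peak_times = peaks / fs
    periods = np.diff(peak_times)            # the A auscultation periods
    total_time = n / fs
    mean_rate = len(periods) / total_time    # "A divided by the total time"
    period_std = np.std(periods)             # sqrt(mean((T_j - mean(T))^2))
    return spectrum, periods, mean_rate, period_std
```

For a clean 1 Hz sinusoid this yields periods of one second and a near-zero standard deviation, matching the definitions above.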
5. The cloud computing based multi-dimensional remote auscultation and diagnosis intelligent system of claim 4, wherein said method of processing patient sounds comprises:
s201: converting the collected patient sound into a digital form to obtain a sound signal;
s202: preprocessing the sound signal, wherein the preprocessing comprises filtering and denoising;
s203: acquiring tone and volume from the preprocessed sound signal, wherein the tone is basic frequency, and the volume is amplitude;
s204: framing the preprocessed sound signals by using a window function to obtain signal fragments in a plurality of time windows;
S205: performing fast Fourier transform on the signals in each time window to obtain frequency domain information;
s206: extracting speech rate from the frequency domain information; the speech rate is zero crossing rate.
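Steps S201 to S206 of claim 5 might look like the following sketch. It assumes NumPy; the spectral-peak pitch estimate and the frame/hop sizes are illustrative assumptions, not part of the claim:

```python
import numpy as np

def sound_features(x, fs, frame_len=1024, hop=512):
    """Sketch of claim-5 processing for a digitized patient-sound signal x."""
    # S203: volume = amplitude; pitch = fundamental frequency
    # (approximated here by the dominant spectral bin, a simplification)
    volume = np.max(np.abs(x))
    spec = np.abs(np.fft.rfft(x))
    peak_bin = np.argmax(spec[1:]) + 1       # skip the DC bin
    pitch_hz = peak_bin * fs / len(x)
    # S204: frame the signal with a (Hamming) window function
    frames = [x[i:i + frame_len] * np.hamming(frame_len)
              for i in range(0, len(x) - frame_len + 1, hop)]
    # S205: per-frame FFT gives the frequency-domain information
    frame_specs = [np.abs(np.fft.rfft(f)) for f in frames]
    # S206: zero-crossing rate as the speech-rate feature
    zcr = np.mean(np.abs(np.diff(np.sign(x)))) / 2
    return volume, pitch_hz, frame_specs, zcr
```

A 50 Hz test tone sampled at 8 kHz gives a pitch estimate of 50 Hz and a zero-crossing rate of about 0.0125 crossings per sample.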
6. The cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent system of claim 5, wherein the training process of the auxiliary analysis model comprises:
converting the training data and the first pathology data into a corresponding set of feature vectors;
taking each group of feature vectors as input of the auxiliary analysis model, wherein the auxiliary analysis model outputs a group of predicted first pathology data for each group of training data and takes the actual first pathology data corresponding to each group of training data as the prediction target, the actual first pathology data being the acquired first pathology data corresponding to the training data; taking minimization of the sum of the prediction errors over the training data as the training objective; training the auxiliary analysis model until the sum of the prediction errors converges, then stopping training; the auxiliary analysis model is a deep neural network model.
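The training loop of claim 6, with the sum of prediction errors as the objective and convergence as the stopping rule, can be illustrated with a toy one-hidden-layer regressor on synthetic data (the actual model is a deep neural network whose structure the claim does not fix; all sizes here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: feature vectors derived from training data -> first pathology data.
X = rng.normal(size=(64, 8))            # 64 samples, 8 features
y = X @ rng.normal(size=(8, 1)) * 0.5   # synthetic prediction targets

# One hidden layer as a minimal stand-in for the deep neural network.
W1 = rng.normal(scale=0.1, size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1)); b2 = np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for step in range(500):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))   # the prediction-error objective
    # Backpropagation for the squared-error objective.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1 = X.T @ g_h; gb1 = g_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    # Stop once the error sum converges, mirroring the claim-6 stopping rule.
    if step > 1 and abs(losses[-2] - losses[-1]) < 1e-9:
        break
```

The loss sequence decreases until the change between iterations is negligible, at which point training stops.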
7. The cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent system of claim 6, wherein the training process of the auxiliary recognition model comprises:
S301: constructing an auxiliary recognition model;
determining a model structure of an auxiliary recognition model, wherein the model structure comprises an input layer, a convolution layer, a pooling layer, a full connection layer and an output layer; defining the format and size of an input layer; adding a convolution layer in the auxiliary recognition model for extracting characteristics of input data, wherein the convolution layer comprises convolution operation, an activation function and normalization operation; adding a pooling layer after the convolution layer; adding a full connection layer after the pooling layer; adding an output layer at the end of the auxiliary recognition model;
s302: training the auxiliary recognition model based on the patient pathology image;
marking the patient pathology images as training images and labeling each training image with the second pathology data corresponding to the patient pathology image, converting each item of second pathology data into a digital label; dividing the labeled training images into a training set and a testing set; training the auxiliary recognition model with the training set and testing it with the testing set; presetting an error threshold, and outputting the auxiliary recognition model when its prediction error is smaller than the preset error threshold; the auxiliary recognition model is a convolutional neural network model.
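The layer sequence of claim 7 (input, convolution with activation and normalization, pooling, fully connected layer, output) can be traced with a minimal single-channel forward pass. All sizes and the four-class label space are illustrative assumptions, not values from the claim:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(img, kern):
    """Valid 2-D convolution (single channel, stride 1)."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def max_pool(x, size=2):
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Input layer: a 28x28 single-channel "pathology image" patch (synthetic).
img = rng.normal(size=(28, 28))
kern = rng.normal(size=(3, 3))                      # convolution-layer weights
feat = relu(conv2d(img, kern))                      # convolution + activation
feat = (feat - feat.mean()) / (feat.std() + 1e-8)   # normalization operation
pooled = max_pool(feat)                             # pooling layer: 26x26 -> 13x13
flat = pooled.ravel()                               # 169 values into the FC layer
W_fc = rng.normal(size=(flat.size, 4))              # fully connected layer, 4 digital labels
logits = flat @ W_fc                                # output layer
probs = np.exp(logits - logits.max()); probs /= probs.sum()  # class probabilities
```

The shapes make the data flow concrete: 28x28 input, 26x26 feature map after a 3x3 convolution, 13x13 after 2x2 pooling, then a 4-way output.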
8. The cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent system according to claim 7, wherein said method for judging whether to generate an analysis instruction comprises:
a. constructing a medical knowledge graph;
b. converting the first pathology data and the second pathology data into a medical knowledge-graph representation;
c. calculating the shortest path length of the first pathology data and the second pathology data;
searching the shortest path from the node where the first pathology data is located to the node where the second pathology data is located by using a breadth first search algorithm, and recording the number of the nodes passing through in the searching process, wherein the number of the nodes is the shortest path length from the node where the first pathology data is located to the node where the second pathology data is located;
d. calculating the association degree score of the first pathology data and the second pathology data, and marking the association degree score as the first association degree; normalizing the calculated shortest path length to obtain the association degree score of the first pathology data and the second pathology data, namely S = 1/(1 + L), wherein S is the association degree score and L is the shortest path length;
comparing the first association degree with a preset association degree threshold value;
if the first association degree is smaller than the association degree threshold value, an analysis instruction is not generated;
And if the first association degree is greater than or equal to the association degree threshold value, generating an analysis instruction.
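The breadth-first shortest-path search and threshold comparison of claim 8 can be sketched on a toy graph fragment. The node names and threshold value are invented for illustration, and S = 1/(1 + L) is one common way to normalize a path length into a (0, 1] score:

```python
from collections import deque

# Toy fragment of a medical knowledge graph as an adjacency list (illustrative names).
graph = {
    "murmur": ["valve_disorder"],
    "valve_disorder": ["murmur", "heart_failure"],
    "lung_opacity": ["pneumonia"],
    "pneumonia": ["lung_opacity", "heart_failure"],
    "heart_failure": ["valve_disorder", "pneumonia"],
}

def shortest_path_len(graph, src, dst):
    """Breadth-first search; returns the number of hops on the shortest path."""
    if src == dst:
        return 0
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nb in graph.get(node, []):
            if nb == dst:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return float("inf")

L = shortest_path_len(graph, "murmur", "pneumonia")  # first vs. second pathology node
score = 1.0 / (1.0 + L)                              # normalized association degree
generate_analysis = score >= 0.25                    # compare with a preset threshold
```

Here the shortest path has three hops, giving a score of 0.25, which meets the (illustrative) threshold and so would trigger an analysis instruction.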
9. The cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent system according to claim 8, wherein said method for judging whether to generate a similar instruction comprises:
if an analysis instruction is generated, inputting the first pathology data and the second pathology data into the medical knowledge graph to acquire the corresponding second disease data, the second disease data being the disease corresponding to both the first pathology data and the second pathology data; inputting the first disease data and the second disease data into the medical knowledge graph, calculating the association degree score between the first disease data and the second disease data, and marking this score as the second association degree;
comparing the second association degree with the association degree threshold;
if the second association degree is smaller than the association degree threshold value, a similar instruction is not generated;
and if the second association degree is greater than or equal to the association degree threshold value, generating a similar instruction.
10. The cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent system of claim 9, wherein if no analysis instruction is generated, the first pathology data, the second pathology data, and the first disease data are sent to a doctor side;
If a similar instruction is generated, the first disease data and the second disease data are sent to a doctor side;
if the similar instruction is not generated, the second disease data is sent to the doctor side.
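The routing logic of claims 8 to 10, i.e. which data reach the doctor side under each combination of instructions, can be condensed into one function (a sketch with illustrative names):

```python
def route_outputs(first_assoc, second_assoc, threshold,
                  first_path, second_path, first_disease, second_disease):
    """Sketch of the claim 8-10 routing of results to the doctor side."""
    if first_assoc < threshold:          # no analysis instruction generated
        return {"pathology": (first_path, second_path), "disease": first_disease}
    # Analysis instruction generated: second disease data looked up in the graph.
    if second_assoc >= threshold:        # similar instruction generated
        return {"disease": (first_disease, second_disease)}
    return {"disease": second_disease}   # analysis but no similar instruction
```

For example, a below-threshold first association degree routes both pathology data items and the first disease data to the doctor side, matching claim 10.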
11. A multi-dimensional remote auscultation and diagnosis intelligent method based on cloud computing, characterized in that it is implemented by the cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent system described above, the method comprising:
connecting a patient end and a doctor end through a cloud platform;
the patient end collects m groups of historical patient medical data and sends the data to the cloud platform; historical patient medical data includes auscultation digital signals, patient pathology images, patient sounds, and electronic medical records;
the cloud platform processes auscultation digital signals and patient sounds to obtain training data;
the method comprises the steps that a patient end collects first pathology data corresponding to training data, second pathology data corresponding to pathology images of a patient and first disease data corresponding to electronic medical records, and sends the first pathology data, the second pathology data and the first disease data to a cloud platform;
the cloud platform trains an auxiliary analysis model for predicting the first pathology data based on the training data;
the cloud platform trains an auxiliary identification model for identifying second pathology data based on the pathology image of the patient;
The cloud platform trains a medical history extraction model for extracting first disease data based on the electronic medical record;
the patient end sends the patient medical data acquired in real time to the cloud platform; the cloud platform respectively inputs the real-time patient medical data into the auxiliary analysis model, the auxiliary identification model, and the medical history extraction model, obtains the corresponding first pathology data, second pathology data, and first disease data, and judges whether to generate an analysis instruction and a similar instruction.
12. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent method of claim 11.
13. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed, implements the cloud computing-based multi-dimensional remote auscultation and diagnosis intelligent method of claim 11.
CN202311700401.3A 2023-12-12 2023-12-12 Multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing Active CN117393156B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311700401.3A CN117393156B (en) 2023-12-12 2023-12-12 Multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing


Publications (2)

Publication Number Publication Date
CN117393156A true CN117393156A (en) 2024-01-12
CN117393156B CN117393156B (en) 2024-04-05

Family

ID=89465254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311700401.3A Active CN117393156B (en) 2023-12-12 2023-12-12 Multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing

Country Status (1)

Country Link
CN (1) CN117393156B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118213092A (en) * 2024-03-18 2024-06-18 暨南大学附属第一医院(广州华侨医院) Remote medical supervision system for chronic wound diseases
CN118213092B (en) * 2024-03-18 2024-09-27 暨南大学附属第一医院(广州华侨医院) Remote medical supervision system for chronic wound diseases

Citations (6)

Publication number Priority date Publication date Assignee Title
CN113972005A (en) * 2021-11-19 2022-01-25 北京明略软件系统有限公司 Artificial intelligence auxiliary diagnosis and treatment method and system, storage medium and electronic equipment
CN114639479A (en) * 2022-03-16 2022-06-17 南京海彬信息科技有限公司 Intelligent diagnosis auxiliary system based on medical knowledge map
CN115083599A (en) * 2022-07-12 2022-09-20 南京云创大数据科技股份有限公司 Knowledge graph-based preliminary diagnosis and treatment method for disease state
CN116129182A (en) * 2023-01-10 2023-05-16 南京大学 Multi-dimensional medical image classification method based on knowledge distillation and neighbor classification
EP4239647A1 (en) * 2022-03-03 2023-09-06 Tempus Labs, Inc. Systems and methods for deep orthogonal fusion for multimodal prognostic biomarker discovery
CN117151215A (en) * 2023-09-25 2023-12-01 郑州轻工业大学 Coronary heart disease multi-mode data characteristic extraction method based on knowledge graph




Similar Documents

Publication Publication Date Title
Lella et al. Automatic diagnosis of COVID-19 disease using deep convolutional neural network with multi-feature channel from respiratory sound data: cough, voice, and breath
EP3776586B1 (en) Managing respiratory conditions based on sounds of the respiratory system
Belkacem et al. End-to-end AI-based point-of-care diagnosis system for classifying respiratory illnesses and early detection of COVID-19: A theoretical framework
JP6435257B2 (en) Method and apparatus for processing patient sounds
JP7300802B2 (en) Augmented reality presentations associated with patient medical conditions and/or treatments
US11275757B2 (en) Systems and methods for capturing data, creating billable information and outputting billable information
US20240161769A1 (en) Method for Detecting and Classifying Coughs or Other Non-Semantic Sounds Using Audio Feature Set Learned from Speech
WO2021015381A1 (en) Pulmonary function estimation
CN111091906A (en) Auxiliary medical diagnosis method and system based on real world data
TWI521467B (en) Nursing decision support system
Xia et al. Exploring machine learning for audio-based respiratory condition screening: A concise review of databases, methods, and open issues
Gupta StrokeSave: a novel, high-performance mobile application for stroke diagnosis using deep learning and computer vision
Mukherjee et al. Lung health analysis: adventitious respiratory sound classification using filterbank energies
CN117393156B (en) Multi-dimensional remote auscultation and diagnosis intelligent system based on cloud computing
Dutta et al. A Fine-Tuned CatBoost-Based Speech Disorder Detection Model
Abhishek et al. ESP8266-based Real-time Auscultation Sound Classification
KR20220170673A (en) Apparatus and method for analysing disease of lung based on pulmonary sound
Zhang et al. Towards Open Respiratory Acoustic Foundation Models: Pretraining and Benchmarking
Ali et al. Detection of crackle and wheeze in lung sound using machine learning technique for clinical decision support system
Abhishek et al. The Auscultation Sound Classification Era of the Future
Vishnyakou et al. Structure and components of internet of things network for it patient diagnostics
CN116612885B (en) Prediction device for acute exacerbation of chronic obstructive pulmonary disease based on multiple modes
Ghaffarzadegan et al. Active Learning for Abnormal Lung Sound Data Curation and Detection in Asthma
Sreerama Respiratory Sound Analysis for the Evidence of Lung Health
CN118173292A (en) AI self-training-based remote diagnosis and treatment system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant