CN111933274A - Disease classification diagnosis method and device, electronic equipment and storage medium - Google Patents

Disease classification diagnosis method and device, electronic equipment and storage medium

Info

Publication number
CN111933274A
Authority
CN
China
Prior art keywords
focus
image data
training
data set
model
Prior art date
Legal status
Pending
Application number
CN202010683824.9A
Other languages
Chinese (zh)
Inventor
郭晏
郭振
李东芳
吕彬
吕传峰
Current Assignee
SHANDONG EYE INSTITUTE
Ping An Technology Shenzhen Co Ltd
Original Assignee
SHANDONG EYE INSTITUTE
Ping An Technology Shenzhen Co Ltd
Application filed by SHANDONG EYE INSTITUTE, Ping An Technology Shenzhen Co Ltd filed Critical SHANDONG EYE INSTITUTE
Priority to CN202010683824.9A
Publication of CN111933274A

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Abstract

The invention relates to big data technology, and discloses a disease classification diagnosis method, which comprises the following steps: training two pre-constructed convolutional neural network models by utilizing a historical patient original image data set; acquiring a test image data set and inputting it into the two trained models to obtain a focus distribution thermodynamic diagram set and a focus part structure gray level map set; training a pre-constructed third convolutional neural network model by using the test image data set, the focus distribution thermodynamic diagram set and the focus part structure gray level map set to obtain a focus classification model; and diagnosing image data of a patient to be detected by using the focus classification model to obtain a diagnosed focus. The invention also relates to blockchain technology, and the model training data and the image data of the patient to be detected can be stored in a blockchain. The invention further provides a disease classification diagnosis device, an electronic device and a computer-readable storage medium. The invention can improve the accuracy of disease classification diagnosis.

Description

Disease classification diagnosis method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a disease classification diagnosis method and device, electronic equipment and a storage medium.
Background
With the advent of the artificial intelligence era, artificial intelligence has been widely applied to the field of auxiliary diagnosis based on medical images: a focus classification model is obtained by training a convolutional neural network model, the existence probabilities of various focuses in a patient image are output, and classification diagnosis of the various focuses is thereby realized.
Disclosure of Invention
The invention provides a disease classification diagnosis method, a disease classification diagnosis device, electronic equipment and a computer readable storage medium, and mainly aims to improve the accuracy of disease classification diagnosis.
In order to achieve the above object, the present invention provides a disease classification diagnosis method, comprising:
acquiring a historical patient original image data set, and training a pre-constructed first convolution neural network model by using the original image data set to obtain a focus distribution thermodynamic diagram regression model;
acquiring a test image data set, and inputting the test image data set into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set;
training a pre-constructed second convolutional neural network model by using the original image data to obtain a focus part structure gray map model;
inputting the test image data set into the focus part structure gray level image model to obtain a focus part structure gray level image set;
training a pre-constructed third convolutional neural network model by utilizing the test image data set, the focus distribution thermodynamic diagram set and the focus part structure gray level image set to obtain a focus classification model;
when receiving the image data of the patient to be detected, diagnosing the image data by using the focus classification model, and outputting the diagnosis focus of the patient to be detected.
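For illustration only, the data flow of the above steps can be sketched as follows with small stand-in networks; the layer structures, tensor shapes and the channel-wise stacking of the three inputs are assumptions made for the sketch and are not prescribed by the method:

import torch
import torch.nn as nn

# Toy stand-ins for the three models described in the steps above.
heatmap_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())    # focus distribution thermodynamic diagram
graymap_model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())    # focus part structure gray level map
classifier = nn.Sequential(nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(),         # 3 image + 1 heat map + 1 gray map channels
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(16, 4), nn.Sigmoid())                    # 4 example focus types

test_images = torch.rand(2, 3, 64, 64)           # stand-in for the test image data set
with torch.no_grad():
    heatmaps = heatmap_model(test_images)        # focus distribution thermodynamic diagram set
    graymaps = graymap_model(test_images)        # focus part structure gray level map set

# The classifier input combines the three sources (here stacked as channels).
classifier_input = torch.cat([test_images, heatmaps, graymaps], dim=1)

# After training, the classifier outputs per-focus disease probabilities.
probabilities = classifier(classifier_input)
print(probabilities.shape)                       # torch.Size([2, 4])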
Optionally, the acquiring a historical patient original image data set, and training a pre-constructed first convolution neural network model by using the original image data set to obtain a focus distribution thermodynamic diagram regression model includes:
taking the original image data set as a first training set, and marking the focus position of the original image data set to obtain a first label set;
and training the first convolution neural network model by using the first training set and the first label set to obtain the focus distribution thermodynamic diagram regression model.
Optionally, the training the first convolutional neural network model by using the first training set and the first label set to obtain a focus distribution thermodynamic diagram regression model, including:
a: performing convolution pooling operation on the first training set according to a preset first convolution pooling number to obtain a first dimension reduction data set;
b: according to a preset first deconvolution frequency, performing deconvolution operation on the first dimensionality reduction data set to obtain a first dimensionality increasing data set;
c: and calculating the first ascending-dimensional data set by using a preset first activation function to obtain a first predicted value, and calculating the first predicted value and a tag value contained in the first tag set as input parameters of a pre-constructed first loss function to obtain a first loss value.
D: comparing the first loss value with a preset first loss threshold value, and if the first loss value is greater than or equal to the first loss threshold value, returning to the step A; and if the first loss value is smaller than the first loss threshold value, obtaining the focus distribution thermodynamic diagram regression model.
Optionally, the training of a pre-constructed third convolutional neural network model by using the test image dataset, the lesion distribution thermal force atlas and the lesion site structure gray level atlas to obtain a lesion classification model includes:
summarizing the test image data set, the focus distribution thermal force atlas and the focus part structure gray level atlas to obtain a third training set;
performing lesion type marking on the third training set to obtain a third label set;
and training the third convolutional neural network model by using the third training set and the third label set to obtain the focus classification model.
Optionally, training the third convolutional neural network model using the third training set and the third label set, including:
x: according to preset depth separable convolution pooling times, performing depth separable convolution pooling operation on the third training set to obtain a third dimension reduction data set;
y: and calculating the third dimensionality reduction data set by using a preset third activation function to obtain a third predicted value, and calculating to obtain a third loss value by using the third predicted value and the third label value as input parameters of a pre-constructed third loss function.
Z: comparing the third loss value with a preset third loss threshold value, and if the third loss value is greater than or equal to the third loss threshold value, returning to X; and if the third loss value is smaller than the third loss threshold value, obtaining the lesion classification model.
Optionally, the performing a deep separable convolution pooling operation on the third training set to obtain a third reduced-dimension data set includes:
performing a grouping convolution operation on the third training set to obtain a depth convolution data set;
performing point-by-point convolution operation on the depth convolution data set to obtain a point-by-point convolution data set;
and carrying out average pooling operation on the point-by-point convolution data sets to obtain the third dimension reduction data set.
Optionally, when receiving image data of a patient to be detected, diagnosing the image data by using the lesion classification model, and outputting a diagnosis lesion of the patient to be detected, the method includes:
inputting the image data into the lesion classification model, and outputting the disease probability of at least one preset lesion;
identifying a confidence threshold of the at least one lesion by using the Youden index principle;
comparing the disease probability with the confidence threshold of the corresponding lesion;
and selecting the focus with the disease probability larger than the corresponding focus confidence threshold value as the diagnosis focus of the patient to be detected.
In order to solve the above problems, the present invention also provides a disease classification diagnosis apparatus including:
the thermodynamic diagram generation module is used for acquiring a historical patient original image data set, and training a pre-constructed first convolution neural network model by using the original image data set to obtain a focus distribution thermodynamic diagram regression model; acquiring a test image data set, and inputting the test image data set into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set;
the gray level image generation module is used for training a pre-constructed second convolutional neural network model by utilizing the original image data to obtain a focus part structure gray level image model; inputting the test image data set into the focus part structure gray level image model to obtain a focus part structure gray level image set;
the classification model training module is used for training a pre-constructed third convolution neural network model by utilizing the test image data set, the focus distribution thermal power atlas and the focus position structure gray level atlas to obtain a focus classification model;
and the model detection module is used for diagnosing the image data by using the focus classification model and outputting the diagnosis focus of the patient to be detected when receiving the image data of the patient to be detected.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the disease classification diagnosis method.
In order to solve the above problem, the present invention also provides a computer-readable storage medium including a stored data area and a stored program area, the stored data area storing data created according to the use of blockchain nodes, the stored program area storing a computer program, the computer-readable storage medium having stored therein at least one instruction, the at least one instruction being executed by a processor in an electronic device to implement the disease classification diagnosis method described above.
The method obtains a focus distribution thermodynamic diagram regression model by training with a historical patient original image data set, and calculates a test image data set by using the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set, so that the focus position can be defined and identified; the method further obtains a focus part structure gray level map model by training with the original image data set, and calculates the test image data set by using the focus part structure gray level map model to obtain a focus part structure gray level map set, so that the focus part can be defined and identified. Because the focus position and the focus part are defined and identified, the range of focus feature extraction in the focus classification model training is narrowed, the training precision of the focus classification model is improved, and therefore the accuracy of disease classification diagnosis is improved.
Drawings
FIG. 1 is a schematic flow chart of a disease classification diagnosis method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a disease classification and diagnosis apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an internal structure of an electronic device for implementing a disease classification diagnosis method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a disease classification diagnosis method. Referring to fig. 1, a flow chart of a disease classification diagnosis method according to an embodiment of the present invention is shown. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the disease classification diagnosis method includes:
s1, obtaining a historical patient original image data set, and training a pre-constructed first convolution neural network model by using the original image data set to obtain a focus distribution thermodynamic diagram regression model.
In an embodiment of the present invention, the historical patient raw image data set is a set of image data of a plurality of historical patients. The historical patient raw image data set may be obtained from a patient repository at a hospital.
Preferably, in the embodiment of the present invention, the first convolutional neural network model may be a full convolutional neural network model.
In detail, in the embodiment of the present invention, the original image data set is used as a first training set, and lesion position labeling is performed on the original image data set to obtain a first label set. Preferably, the embodiment of the present invention may use the LabelMe image annotation tool to manually perform lesion position marking.
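As an illustration of how such position labels can be turned into regression targets for the first model, the following sketch converts LabelMe annotations (assuming its standard JSON layout with a "shapes" list of "points") into a Gaussian heat map; the function name and the Gaussian target form are assumptions for illustration:

import json
import numpy as np

def labelme_points_to_heatmap(json_path, height, width, sigma=10.0):
    """Convert LabelMe annotations into a Gaussian heat map regression target.

    Each annotated shape contributes a Gaussian bump centred on its centroid,
    which can serve as a lesion position label for the heat map regression model.
    """
    with open(json_path, "r", encoding="utf-8") as f:
        annotation = json.load(f)
    heatmap = np.zeros((height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for shape in annotation.get("shapes", []):
        points = np.asarray(shape["points"], dtype=np.float32)   # [[x, y], ...]
        cx, cy = points.mean(axis=0)
        heatmap += np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return np.clip(heatmap, 0.0, 1.0)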
Further, training the first convolutional neural network model using the first training set and the first label set according to the embodiment of the present invention includes:
step A: performing convolution pooling operation on the first training set according to a preset first convolution pooling number to obtain a first dimension reduction data set;
and B: according to a preset first deconvolution frequency, performing deconvolution operation on the first dimensionality reduction data set to obtain a first dimensionality increasing data set;
and C: and calculating the first ascending-dimensional data set by using a preset first activation function to obtain a first predicted value, and calculating the first predicted value and a tag value contained in the first tag set as input parameters of a pre-constructed first loss function to obtain a first loss value.
Step D: comparing the first loss value with a preset first loss threshold value, and if the first loss value is greater than or equal to the first loss threshold value, returning to the step A; and if the first loss value is smaller than the first loss threshold value, obtaining the focus distribution thermodynamic diagram regression model.
In detail, the convolution pooling operation includes: convolution operations and pooling operations.
Further, the convolution operation is:
ω' = (ω - k + 2f)/s + 1
wherein ω' is the first convolution data set, ω is the first training set, k is the size of the preset convolution kernel, s is the stride of the preset convolution operation, and f is the preset zero-padding amount;
preferably, in the embodiment of the present invention, the pooling operation is a maximal pooling operation performed on the first convolution data set to obtain the first dimension reduction data set.
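Read as the standard single-dimension output size relation, the above convolution formula can be checked numerically; the kernel, stride and padding values below are arbitrary examples:

def conv_output_size(w, k, s, f):
    # One spatial dimension: ω' = (ω - k + 2f)/s + 1
    return (w - k + 2 * f) // s + 1

print(conv_output_size(224, 3, 1, 1))   # 224: a 3x3 kernel with stride 1 and padding 1 keeps the size
print(conv_output_size(224, 3, 2, 1))   # 112: stride 2 halves the spatial size
print(conv_output_size(112, 2, 2, 0))   # 56: a 2x2 pooling step with stride 2 halves it again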
Further, in a preferred embodiment of the present invention, the first activation function includes:
μ_t = 1/(1 + e^(-s))
wherein μ_t represents the first predicted value, and s represents data in the first up-dimensioned data set.
In detail, the first loss function according to the preferred embodiment of the present invention includes:
T = (1/n) Σ_(t=1)^(n) (y_t - μ_t)^2
wherein T represents the first loss value, n is the number of data in the first training set, t is a positive integer, y_t is the first label value, and μ_t is the first predicted value.
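Putting steps A to D together, a compact training sketch for the first model is given below; the encoder and decoder sizes, the sigmoid output, the mean squared error loss and the Adam optimizer are illustrative assumptions rather than fixed choices of this embodiment:

import torch
import torch.nn as nn

class HeatmapRegressor(nn.Module):
    """Small full convolutional network: convolution pooling then deconvolution (steps A and B)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                        # step A: convolution + pooling (dimension reduction)
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.decoder = nn.Sequential(                        # step B: deconvolution (dimension raising)
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2))
    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))  # step C: activation gives the predicted heat map

def train_heatmap_regressor(images, heatmap_labels, loss_threshold=0.01, max_epochs=100):
    model = HeatmapRegressor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()                                    # assumed form of the first loss function
    for _ in range(max_epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(images), heatmap_labels)         # step C: first loss value
        loss.backward()
        optimizer.step()
        if loss.item() < loss_threshold:                      # step D: stop once below the first loss threshold
            break
    return model

# Example with random stand-in data:
images = torch.rand(4, 3, 64, 64)
labels = torch.rand(4, 1, 64, 64)
model = train_heatmap_regressor(images, labels)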
And S2, acquiring a test image data set, and inputting the test image data set into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set.
In an embodiment of the invention, the test image data set is a set of image data of another part of the historical patients different from the original image data set.
The definition and identification of the focus position are realized through the focus distribution thermodynamic diagram regression model, and the model training precision of subsequent focus classification is improved.
And S3, training a pre-constructed second convolutional neural network model by using the original image data to obtain a focus part structure gray level map model.
In detail, the original image data set is used as a second training set, and lesion site marking is performed on the original image data set to obtain a second label set, where the lesion site is the human body part in which a lesion is located; for example, if the lesion is in the lung of the human body, the lung is the lesion site.
Preferably, the embodiment of the present invention may use the LabelMe image annotation tool to manually perform lesion site marking.
Further, the training the second convolutional neural network model by using the second training set and the second label set in the embodiment of the present invention includes:
s31: performing convolution pooling operation on the second training set according to a preset second convolution pooling number to obtain a second dimension reduction data set;
s32: according to a preset second deconvolution frequency, performing deconvolution operation on the second dimensionality reduction data set to obtain a second dimensionality increasing data set;
s33: and calculating the second ascending-dimensional data set by using a preset second activation function to obtain a second predicted value, and calculating to obtain a second loss value by using the second predicted value and the second label value as input parameters of a second loss function which is pre-constructed.
S34: and comparing the second loss value with a preset second loss threshold value, and if the second loss value is greater than or equal to the second loss threshold value, returning to the step S31. And if the second loss value is smaller than the second loss threshold value, obtaining the focus part structure gray level graph model.
In detail, in the preferred embodiment of the present invention, the second loss function E can be calculated by using the following formula:
E = 1 - 2|b∩c|/(|b| + |c|)
wherein b represents a set of all second label values, and c represents a set of all second predicted values.
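One common instantiation of a loss defined over a label set b and a predicted set c is a Dice-style overlap loss; the sketch below assumes that reading and is only illustrative:

import torch

def dice_loss(pred, target, eps=1e-6):
    """Dice-style overlap loss between a predicted map c and a label map b (assumed form of E)."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

b = torch.tensor([[0., 1., 1.], [0., 0., 1.]])          # second label values
c = torch.tensor([[0.1, 0.8, 0.9], [0.2, 0.1, 0.7]])    # second predicted values
print(dice_loss(c, b))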
And S4, inputting the test image data set into the focus part structure gray level image model to obtain a focus part structure gray level image set.
The focus part is defined and identified through the focus part structure gray level graph model, and the precision of subsequent focus classification model training is improved.
And S5, training a pre-constructed third convolutional neural network model by utilizing the test image data set, the focus distribution thermodynamic diagram set and the focus part structure gray level image set to obtain a focus classification model.
In the embodiment of the invention, the test image data set, the focus distribution thermodynamic diagram set and the focus part structure gray level image set are collected to obtain a third training set, and focus type marking is carried out on the third training set to obtain a third label set. Preferably, the embodiment of the present invention may use the LabelMe image annotation tool to manually perform focus type marking.
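One plausible way to collect the three sources into a single training input is to stack the heat map and the gray level map as additional channels of each test image, as in the following sketch; the fusion scheme, shapes and label encoding are assumptions for illustration:

import torch

# Illustrative assembly of the third training set: each test image is stacked
# with its heat map and gray level map along the channel dimension.
test_images = torch.rand(8, 3, 64, 64)    # test image data set
heatmaps = torch.rand(8, 1, 64, 64)       # focus distribution thermodynamic diagram set
graymaps = torch.rand(8, 1, 64, 64)       # focus part structure gray level image set
third_training_set = torch.cat([test_images, heatmaps, graymaps], dim=1)   # shape (8, 5, 64, 64)

# Third label set: e.g. multi-hot focus type labels over 4 example focus types.
third_label_set = torch.randint(0, 2, (8, 4)).float()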
In the preferred embodiment of the present invention, the third convolutional neural network model can be constructed by using a depth separable convolutional network model.
Further, the training the third convolutional neural network model by using the third training set and the third label set in the embodiment of the present invention includes:
s51: according to preset depth separable convolution pooling times, performing depth separable convolution pooling operation on the third training set to obtain a third dimension reduction data set;
s52: and calculating the third dimensionality reduction data set by using a preset third activation function to obtain a third predicted value, and calculating the third predicted value and the third label value as input parameters of a pre-constructed third loss function to obtain a third loss value.
S53: comparing the third loss value with a preset third loss threshold value, and returning to S51 if the third loss value is greater than or equal to the third loss threshold value; and if the third loss value is smaller than the third loss threshold value, obtaining the lesion classification model.
In detail, the depth separable convolution pooling operation includes: and performing grouping convolution operation on the third training set to obtain a depth convolution data set, performing point-by-point convolution operation on the depth convolution data set to obtain a point-by-point convolution data set, and performing average pooling operation on the point-by-point convolution data set to obtain the third dimension reduction data set.
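A minimal sketch of the grouping convolution, point-by-point convolution and average pooling sequence is given below; the channel counts and kernel sizes are arbitrary examples:

import torch
import torch.nn as nn

class DepthSeparableConvPool(nn.Module):
    """Grouping (depthwise) convolution, then point-by-point (1x1) convolution, then average pooling."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Grouping convolution: one filter group per input channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels)
        # Point-by-point convolution mixes the channels.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Average pooling reduces the spatial dimensions.
        self.pool = nn.AvgPool2d(kernel_size=2)
    def forward(self, x):
        return self.pool(self.pointwise(self.depthwise(x)))

x = torch.rand(2, 5, 64, 64)                    # e.g. image + heat map + gray map channels
print(DepthSeparableConvPool(5, 32)(x).shape)   # torch.Size([2, 32, 32, 32])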
In the preferred embodiment of the present invention, the third activation function can be calculated by using the following formula:
f(x)=max(0,x)
wherein f (x) is the third predicted value and x is data in the third reduced-dimension dataset.
In the preferred embodiment of the present invention, the third loss function can be calculated by using the following formula:
L = -(1/N) Σ_(i=1)^(N) h_i·log(m_i)
wherein N is the number of data included in the third training set, i is a positive integer, h_i is the third label value, and m_i is the third predicted value.
By taking the focus distribution thermodynamic diagram set and the focus part structure gray level image set as additional training data, the range of feature extraction in model training is narrowed, and the training precision of the focus classification model is improved.
And S6, when the image data of the patient to be detected is received, diagnosing the image data by using the lesion classification model, and outputting the diagnosed lesion of the patient to be detected.
In one embodiment of the present invention, the training data for the lesion classification model and the image data of the patient to be diagnosed may be stored in a blockchain.
In an embodiment of the present invention, the image data of the patient to be detected is medical image data of the patient, for example: fundus color photography, fundus OCT, etc.
Further, the image data is input into the lesion classification model, and the disease probability of at least one preset lesion is output; a confidence threshold of the at least one lesion is identified by using the Youden index principle; the disease probability is compared with the confidence threshold of the corresponding lesion; and the lesion whose disease probability is greater than the corresponding confidence threshold is selected as the diagnosed lesion of the patient to be detected.
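For reference, the Youden index J = true positive rate - false positive rate selects, for each lesion, the probability threshold that maximizes J on labeled validation data; the sketch below implements that selection and the subsequent comparison, and all variable names and values are illustrative assumptions:

import numpy as np

def youden_threshold(y_true, y_score):
    """Return the score threshold maximizing Youden's J = TPR - FPR for one lesion."""
    thresholds = np.unique(y_score)
    best_t, best_j = thresholds[0], -1.0
    positives = (y_true == 1)
    for t in thresholds:
        pred = y_score >= t
        tpr = (pred & positives).sum() / max(positives.sum(), 1)
        fpr = (pred & ~positives).sum() / max((~positives).sum(), 1)
        j = tpr - fpr
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Per-lesion confidence thresholds from validation data, then diagnosis of a new patient.
val_labels = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])                   # two example lesion types
val_scores = np.array([[0.9, 0.2], [0.3, 0.7], [0.8, 0.6], [0.1, 0.4]])
thresholds = np.array([youden_threshold(val_labels[:, k], val_scores[:, k])
                       for k in range(val_labels.shape[1])])
patient_probs = np.array([0.85, 0.5])                                     # output of the lesion classification model
diagnosed = patient_probs > thresholds                                    # lesions reported for the patient
print(thresholds, diagnosed)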
In the embodiment of the invention, a historical patient original image data set is acquired, and a pre-constructed first convolutional neural network model is trained by using the original image data set to obtain a focus distribution thermodynamic diagram regression model; a test image data set is acquired and input into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set, so that the focus position is defined and identified; a pre-constructed second convolutional neural network model is trained by using the original image data set to obtain a focus part structure gray level map model, and the test image data set is input into the focus part structure gray level map model to obtain a focus part structure gray level map set, so that the focus part is defined and identified; a pre-constructed third convolutional neural network model is trained by using the test image data set, the focus distribution thermodynamic diagram set and the focus part structure gray level map set to obtain a focus classification model, where the focus distribution thermodynamic diagram set and the focus part structure gray level map set serve as additional inputs of model training to improve the precision of model training; and the image data of the patient to be detected is diagnosed by using the focus classification model to obtain the focus classification of the patient. Because the focus position and the focus part are defined and identified, the range of focus feature extraction in the focus classification model training is narrowed, the training precision of the focus classification model is improved, and therefore the accuracy of disease classification diagnosis is improved.
FIG. 2 is a functional block diagram of the disease classification diagnosis apparatus according to the present invention.
The disease classification diagnosis apparatus 100 according to the present invention may be installed in an electronic device. According to the realized functions, the disease classification diagnosis device can comprise a thermodynamic diagram generation module 101, a gray scale diagram generation module 102, a classification model training module 103 and a model detection module 104. A module according to the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the thermodynamic diagram generation module 101 is configured to acquire an original image data set of a historical patient, and train a pre-constructed first convolution neural network model with the original image data set to obtain a focus distribution thermodynamic diagram regression model; and acquiring a test image data set, and inputting the test image data set into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set.
In an embodiment of the present invention, the historical patient raw image data set is a set of image data of a plurality of historical patients. The historical patient raw image data set may be obtained from a patient repository at a hospital.
Preferably, in the embodiment of the present invention, the first convolutional neural network model may be a full convolutional neural network model.
In detail, in the embodiment of the present invention, the original image data set is used as a first training set, and lesion position labeling is performed on the original image data set to obtain a first label set. Preferably, the embodiment of the present invention may use the LabelMe image annotation tool to manually perform lesion position marking.
Further, training the first convolutional neural network model using the first training set and the first label set according to the embodiment of the present invention includes:
step A: performing convolution pooling operation on the first training set according to a preset first convolution pooling number to obtain a first dimension reduction data set;
and B: according to a preset first deconvolution frequency, performing deconvolution operation on the first dimensionality reduction data set to obtain a first dimensionality increasing data set;
and C: and calculating the first ascending-dimensional data set by using a preset first activation function to obtain a first predicted value, and calculating the first predicted value and a tag value contained in the first tag set as input parameters of a pre-constructed first loss function to obtain a first loss value.
Step D: comparing the first loss value with a preset first loss threshold value, and if the first loss value is greater than or equal to the first loss threshold value, returning to the step A; and if the first loss value is smaller than the first loss threshold value, obtaining the focus distribution thermodynamic diagram regression model.
In detail, the convolution pooling operation includes: convolution operations and pooling operations.
Further, the convolution operation is:
ω' = (ω - k + 2f)/s + 1
wherein ω' is the first convolution data set, ω is the first training set, k is the size of the preset convolution kernel, s is the stride of the preset convolution operation, and f is the preset zero-padding amount;
preferably, in the embodiment of the present invention, the pooling operation is a maximal pooling operation performed on the first convolution data set to obtain the first dimension reduction data set.
Further, in a preferred embodiment of the present invention, the first activation function includes:
μ_t = 1/(1 + e^(-s))
wherein μ_t represents the first predicted value, and s represents data in the first up-dimensioned data set.
In detail, the first loss function according to the preferred embodiment of the present invention includes:
T = (1/n) Σ_(t=1)^(n) (y_t - μ_t)^2
wherein T represents the first loss value, n is the number of data in the first training set, t is a positive integer, y_t is the first label value, and μ_t is the first predicted value.
In the embodiment of the invention, a test image data set is obtained and is input into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set. The test image dataset is a collection of image data of another portion of the historical patients that is different from the original image dataset.
The definition and identification of the focus position are realized through the focus distribution thermodynamic diagram regression model, and the model training precision of subsequent focus classification is improved.
The gray-scale map generation module 102 is configured to train a pre-constructed second convolutional neural network model with the original image data to obtain a focus site structure gray-scale map model; and inputting the test image data set into the focus part structure gray level image model to obtain a focus part structure gray level image set.
In detail, the original image data set is used as a second training set, and lesion site marking is performed on the original image data set to obtain a second label set, where the lesion site is the human body part in which a lesion is located; for example, if the lesion is in the lung of the human body, the lung is the lesion site.
Preferably, the embodiment of the present invention may use the LabelMe image annotation tool to manually perform lesion site marking.
Further, the training the second convolutional neural network model by using the second training set and the second label set in the embodiment of the present invention includes:
s31: performing convolution pooling operation on the second training set according to a preset second convolution pooling number to obtain a second dimension reduction data set;
s32: according to a preset second deconvolution frequency, performing deconvolution operation on the second dimensionality reduction data set to obtain a second dimensionality increasing data set;
s33: and calculating the second ascending-dimensional data set by using a preset second activation function to obtain a second predicted value, and calculating to obtain a second loss value by using the second predicted value and the second label value as input parameters of a second loss function which is pre-constructed.
S34: and comparing the second loss value with a preset second loss threshold value, and if the second loss value is greater than or equal to the second loss threshold value, returning to the step S31. And if the second loss value is smaller than the second loss threshold value, obtaining the focus part structure gray level graph model.
In detail, in the preferred embodiment of the present invention, the second loss function E can be calculated by using the following formula:
E = 1 - 2|b∩c|/(|b| + |c|)
wherein b represents a set of all second label values, and c represents a set of all second predicted values.
According to the embodiment of the invention, the test image data set is input into the focus part structure gray level image model to obtain a focus part structure gray level image set.
The focus part is defined and identified through the focus part structure gray level graph model, and the precision of subsequent focus classification model training is improved.
The classification model training module 103 is configured to train a pre-constructed third convolutional neural network model by using the test image data set, the lesion distribution thermal map set, and the lesion site structure gray scale map set to obtain a lesion classification model.
In the embodiment of the invention, the test image data set, the focus distribution thermal force atlas and the focus position structure gray level atlas are collected to obtain a third training set, and focus type marking is carried out on the third training set to obtain a third label set. Preferably, the embodiment of the present invention may use the LabelMe image annotation tool to manually perform focus type marking.
In the preferred embodiment of the present invention, the third convolutional neural network model can be constructed by using a depth separable convolutional network model.
Further, the training the third convolutional neural network model by using the third training set and the third label set in the embodiment of the present invention includes:
s51: according to preset depth separable convolution pooling times, performing depth separable convolution pooling operation on the third training set to obtain a third dimension reduction data set;
s52: and calculating the third dimensionality reduction data set by using a preset third activation function to obtain a third predicted value, and calculating the third predicted value and the third label value as input parameters of a pre-constructed third loss function to obtain a third loss value.
S53: comparing the third loss value with a preset third loss threshold value, and returning to S51 if the third loss value is greater than or equal to the third loss threshold value; and if the third loss value is smaller than the third loss threshold value, obtaining the lesion classification model.
In detail, the depth separable convolution pooling operation includes: and performing grouping convolution operation on the third training set to obtain a depth convolution data set, performing point-by-point convolution operation on the depth convolution data set to obtain a point-by-point convolution data set, and performing average pooling operation on the point-by-point convolution data set to obtain the third dimension reduction data set.
In the preferred embodiment of the present invention, the third activation function can be calculated by using the following formula:
f(x)=max(0,x)
wherein f (x) is the third predicted value and x is data in the third reduced-dimension dataset.
In the preferred embodiment of the present invention, the third loss function can be calculated by using the following formula:
L = -(1/N) Σ_(i=1)^(N) h_i·log(m_i)
wherein N is the number of data included in the third training set, i is a positive integer, h_i is the third label value, and m_i is the third predicted value.
By taking the focus distribution thermal force atlas and the focus position structure gray level atlas as additional training sets, the range of feature extraction in model training is reduced, and the training precision of the focus classification model is improved.
The model detection module 104 is configured to, when receiving image data of a patient to be detected, diagnose the image data by using the lesion classification model, and output a diagnosed lesion of the patient to be detected.
In one embodiment of the present invention, the training data for the lesion classification model and the image data of the patient to be diagnosed may be stored in a blockchain.
In an embodiment of the present invention, the image data of the patient to be detected is medical image data of the patient, for example: fundus color photography, fundus OCT, etc.
Further, the image data is input into the lesion classification model, and the disease probability of at least one preset lesion is output; a confidence threshold of the at least one lesion is identified by using the Youden index principle; the disease probability is compared with the confidence threshold of the corresponding lesion; and the lesion whose disease probability is greater than the corresponding confidence threshold is selected as the diagnosed lesion of the patient to be detected.
In the embodiment of the invention, a historical patient original image data set is acquired, and a pre-constructed first convolutional neural network model is trained by using the original image data set to obtain a focus distribution thermodynamic diagram regression model; a test image data set is acquired and input into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set, so that the focus position is defined and identified; a pre-constructed second convolutional neural network model is trained by using the original image data set to obtain a focus part structure gray level map model, and the test image data set is input into the focus part structure gray level map model to obtain a focus part structure gray level map set, so that the focus part is defined and identified; a pre-constructed third convolutional neural network model is trained by using the test image data set, the focus distribution thermodynamic diagram set and the focus part structure gray level map set to obtain a focus classification model, where the focus distribution thermodynamic diagram set and the focus part structure gray level map set serve as additional inputs of model training to improve the precision of model training; and the image data of the patient to be detected is diagnosed by using the focus classification model to obtain the focus classification of the patient. Because the focus position and the focus part are defined and identified, the range of focus feature extraction in the focus classification model training is narrowed, the training precision of the focus classification model is improved, and therefore the accuracy of disease classification diagnosis is improved.
Fig. 3 is a schematic structural diagram of an electronic device for implementing the disease classification diagnosis method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a disease classification diagnostic program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of the disease classification and diagnosis program 12, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., a disease classification and diagnosis program 12, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 3 shows only an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The disease classification diagnostic program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
acquiring a historical patient original image data set, and training a pre-constructed first convolution neural network model by using the original image data set to obtain a focus distribution thermodynamic diagram regression model;
acquiring a test image data set, and inputting the test image data set into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set;
training a pre-constructed second convolutional neural network model by using the original image data to obtain a focus part structure gray map model;
inputting the test image data set into the focus part structure gray level image model to obtain a focus part structure gray level image set;
training a pre-constructed third convolutional neural network model by utilizing the test image data set, the focus distribution thermal force image set and the focus position structure gray level image set to obtain a focus classification model;
when receiving the image data of the patient to be detected, diagnosing the image data by using the focus classification model, and outputting the diagnosis focus of the patient to be detected.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
Further, the computer usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, and the like are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A method for disease classification diagnosis, the method comprising:
acquiring a historical patient original image data set, and training a pre-constructed first convolution neural network model by using the original image data set to obtain a focus distribution thermodynamic diagram regression model;
acquiring a test image data set, and inputting the test image data set into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set;
training a pre-constructed second convolutional neural network model by using the original image data to obtain a focus part structure gray map model;
inputting the test image data set into the focus part structure gray level image model to obtain a focus part structure gray level image set;
training a pre-constructed third convolutional neural network model by utilizing the test image data set, the focus distribution thermal force image set and the focus position structure gray level image set to obtain a focus classification model;
when receiving the image data of the patient to be detected, diagnosing the image data by using the focus classification model, and outputting the diagnosis focus of the patient to be detected.
2. The method for disease classification and diagnosis of claim 1, wherein the obtaining of a raw image data set of a historical patient, training a pre-constructed first convolution neural network model using the raw image data set to obtain a lesion distribution thermodynamic regression model, comprises:
taking the original image data set as a first training set, and marking the focus position of the original image data set to obtain a first label set;
and training the first convolution neural network model by using the first training set and the first label set to obtain the focus distribution thermodynamic diagram regression model.
3. The method for disease classification and diagnosis of claim 2, wherein the training of the first convolutional neural network model using the first training set and the first label set to obtain a lesion distribution thermodynamic regression model comprises:
a: performing convolution pooling operation on the first training set according to a preset first convolution pooling number to obtain a first dimension reduction data set;
b: according to a preset first deconvolution frequency, performing deconvolution operation on the first dimensionality reduction data set to obtain a first dimensionality increasing data set;
c: calculating the first ascending-dimensional data set by using a preset first activation function to obtain a first predicted value, and calculating the first predicted value and a tag value contained in the first tag set as input parameters of a pre-constructed first loss function to obtain a first loss value;
d: comparing the first loss value with a preset first loss threshold value, and if the first loss value is greater than or equal to the first loss threshold value, returning to the step A; and if the first loss value is smaller than the first loss threshold value, obtaining the focus distribution thermodynamic diagram regression model.
4. The method for classifying and diagnosing diseases as claimed in claim 1, wherein the training of a pre-constructed third convolutional neural network model using the test image data set, the lesion distribution thermal force map set and the lesion site structure gray scale map set to obtain a lesion classification model comprises:
combining the test image data set, the focus distribution thermodynamic diagram set and the focus part structure grayscale map set to obtain a third training set;
performing focus type marking on the third training set to obtain a third label set;
and training the third convolutional neural network model by using the third training set and the third label set to obtain the focus classification model.
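A sketch of assembling the third training set follows. Stacking the three inputs as image channels is an assumption for illustration, since the claim only states that the three sets are combined; the number of focus types and the multi-hot label layout are likewise hypothetical.

```python
# Sketch: pairing each test image with its two auxiliary maps and a focus-type label.
import numpy as np

n, h, w = 16, 128, 128
test_images = np.random.rand(n, h, w).astype(np.float32)   # test image data set (toy)
heatmaps = np.random.rand(n, h, w).astype(np.float32)      # from the regression model
gray_maps = np.random.rand(n, h, w).astype(np.float32)     # from the grayscale map model

# Third training set: one (3, h, w) stack of image + heat map + grayscale map per sample.
third_training_set = np.stack([test_images, heatmaps, gray_maps], axis=1)

# Third label set: one multi-hot focus-type vector per sample (4 focus types assumed).
num_focus_types = 4
third_label_set = np.random.randint(0, 2, size=(n, num_focus_types))

print(third_training_set.shape, third_label_set.shape)     # (16, 3, 128, 128) (16, 4)
```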
5. The disease classification diagnosis method of claim 4, wherein the training the third convolutional neural network model by using the third training set and the third label set comprises:
X: performing a depthwise separable convolution pooling operation on the third training set according to a preset depthwise separable convolution pooling count to obtain a third dimension reduction data set;
Y: calculating the third dimension reduction data set by using a preset third activation function to obtain a third predicted value, and calculating a third loss value by taking the third predicted value and a label value contained in the third label set as input parameters of a pre-constructed third loss function;
Z: comparing the third loss value with a preset third loss threshold; if the third loss value is greater than or equal to the third loss threshold, returning to step X; and if the third loss value is smaller than the third loss threshold, obtaining the focus classification model.
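A minimal sketch of the X-Z loop for the third network follows, assuming a sigmoid activation folded into a binary cross-entropy loss, an illustrative four-type focus vocabulary and toy data; only the X-then-Y-then-Z structure comes from the claim.

```python
# Minimal runnable sketch of steps X-Z (hypothetical channel counts, loss and data).
import torch
import torch.nn as nn

num_focus_types = 4                              # assumed number of focus classes
classifier = nn.Sequential(
    # X: depthwise separable convolution (grouped conv + pointwise conv) plus pooling
    nn.Conv2d(3, 3, 3, padding=1, groups=3),     # grouped (depthwise) convolution
    nn.Conv2d(3, 16, 1),                         # pointwise 1x1 convolution
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # average pooling -> reduced features
    nn.Linear(16, num_focus_types),
)
loss_fn = nn.BCEWithLogitsLoss()                 # third loss function (sigmoid + BCE assumed)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

inputs = torch.rand(8, 3, 128, 128)              # third training set (image + two maps)
labels = torch.randint(0, 2, (8, num_focus_types)).float()   # third label set
loss_threshold = 0.1                             # preset third loss threshold

for step in range(100):                          # Z: loop back to X until the loss is low enough
    logits = classifier(inputs)                  # Y: forward pass ...
    loss = loss_fn(logits, labels)               # ... and third loss value
    if loss.item() < loss_threshold:             # below threshold -> focus classification model
        break
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```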
6. The disease classification diagnosis method of claim 5, wherein the performing a depthwise separable convolution pooling operation on the third training set to obtain the third dimension reduction data set comprises:
performing a grouped convolution operation on the third training set to obtain a depthwise convolution data set;
performing a pointwise convolution operation on the depthwise convolution data set to obtain a pointwise convolution data set;
and performing an average pooling operation on the pointwise convolution data set to obtain the third dimension reduction data set.
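The decomposition in claim 6 maps directly onto a grouped (depthwise) convolution, a pointwise 1x1 convolution and an average pooling step. The channel counts below are illustrative only; the parameter comparison at the end shows why the factorized form is cheaper than a single standard convolution.

```python
# Sketch of the claim-6 decomposition with illustrative channel counts.
import torch
import torch.nn as nn

in_ch, out_ch = 3, 16
depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)  # grouped conv
pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)                          # pointwise conv
pool = nn.AvgPool2d(kernel_size=2)                                           # average pooling

x = torch.rand(1, in_ch, 128, 128)      # one sample from the third training set
x = depthwise(x)                        # -> depthwise convolution data
x = pointwise(x)                        # -> pointwise convolution data
x = pool(x)                             # -> third dimension reduction data
print(x.shape)                          # torch.Size([1, 16, 64, 64])

# Payoff of the decomposition: far fewer weights than one standard 3x3 convolution.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(depthwise) + count(pointwise))   # 448 vs. 94
```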
7. The disease classification diagnosis method according to any one of claims 1 to 5, wherein the diagnosing the image data by using the focus classification model and outputting the diagnosed focus of the patient to be detected when the image data of the patient to be detected is received comprises:
inputting the image data into the focus classification model, and outputting a disease probability of at least one preset focus;
determining a confidence threshold of the at least one focus by using the Youden index principle;
comparing the disease probability with the confidence threshold of the corresponding focus;
and selecting the focus whose disease probability is greater than the confidence threshold of the corresponding focus as the diagnosed focus of the patient to be detected.
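A sketch of the Youden-index thresholding of claim 7: for each focus type, the confidence threshold maximizing J = sensitivity + specificity - 1 is taken from a labelled validation set (random placeholder data below, not from the patent), and only focus types whose predicted disease probability exceeds their threshold are reported.

```python
# Sketch: per-focus confidence thresholds via Youden's index, then selection.
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_score):
    """Threshold maximizing J = TPR - FPR (i.e. sensitivity + specificity - 1)."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

rng = np.random.default_rng(0)
num_focus_types = 4
val_labels = rng.integers(0, 2, size=(200, num_focus_types))                 # ground truth
val_scores = np.clip(val_labels * 0.6 + rng.random((200, num_focus_types)) * 0.5, 0, 1)

thresholds = np.array([youden_threshold(val_labels[:, k], val_scores[:, k])
                       for k in range(num_focus_types)])

patient_probs = np.array([0.15, 0.72, 0.40, 0.91])   # classifier output for one patient
diagnosed = np.where(patient_probs > thresholds)[0]   # indices of focus types to report
print(thresholds, diagnosed)
```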
8. A disease classification diagnostic apparatus, characterized in that the apparatus comprises:
the thermodynamic diagram generation module is used for acquiring a historical patient original image data set, and training a pre-constructed first convolutional neural network model by using the original image data set to obtain a focus distribution thermodynamic diagram regression model; acquiring a test image data set, and inputting the test image data set into the focus distribution thermodynamic diagram regression model to obtain a focus distribution thermodynamic diagram set;
the grayscale map generation module is used for training a pre-constructed second convolutional neural network model by using the original image data set to obtain a focus part structure grayscale map model; inputting the test image data set into the focus part structure grayscale map model to obtain a focus part structure grayscale map set;
the classification model training module is used for training a pre-constructed third convolutional neural network model by using the test image data set, the focus distribution thermodynamic diagram set and the focus part structure grayscale map set to obtain a focus classification model;
and the model detection module is used for diagnosing the image data by using the focus classification model and outputting the diagnosed focus of the patient to be detected when image data of the patient to be detected is received.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the disease classification diagnosis method according to any one of claims 1 to 7.
10. A computer-readable storage medium comprising a stored data area and a stored program area, the stored data area storing data created according to the use of blockchain nodes and the stored program area storing a computer program, wherein the computer program, when executed by a processor, implements the disease classification diagnosis method according to any one of claims 1 to 7.
CN202010683824.9A 2020-07-15 2020-07-15 Disease classification diagnosis method and device, electronic equipment and storage medium Pending CN111933274A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010683824.9A CN111933274A (en) 2020-07-15 2020-07-15 Disease classification diagnosis method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010683824.9A CN111933274A (en) 2020-07-15 2020-07-15 Disease classification diagnosis method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111933274A true CN111933274A (en) 2020-11-13

Family

ID=73313439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010683824.9A Pending CN111933274A (en) 2020-07-15 2020-07-15 Disease classification diagnosis method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111933274A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017086413A (en) * 2015-11-09 2017-05-25 株式会社日立製作所 X-ray image diagnostic apparatus, image processing device, image processing program, method for processing x-ray image, and stent treatment support system
CN108563975A (en) * 2017-07-31 2018-09-21 汉鼎宇佑互联网股份有限公司 A kind of Dense crowd Population size estimation method based on deep learning
CN108109152A (en) * 2018-01-03 2018-06-01 深圳北航新兴产业技术研究院 Medical Images Classification and dividing method and device
CN109447966A (en) * 2018-10-26 2019-03-08 科大讯飞股份有限公司 Lesion localization recognition methods, device, equipment and the storage medium of medical image
CN109493954A (en) * 2018-12-20 2019-03-19 广东工业大学 A kind of SD-OCT image retinopathy detection system differentiating positioning based on classification
CN110136103A (en) * 2019-04-24 2019-08-16 平安科技(深圳)有限公司 Medical image means of interpretation, device, computer equipment and storage medium
CN110264443A (en) * 2019-05-20 2019-09-20 平安科技(深圳)有限公司 Eye fundus image lesion mask method, device and medium based on feature visualization
CN111062947A (en) * 2019-08-14 2020-04-24 深圳市智影医疗科技有限公司 Deep learning-based X-ray chest radiography focus positioning method and system
CN110838125A (en) * 2019-11-08 2020-02-25 腾讯医疗健康(深圳)有限公司 Target detection method, device, equipment and storage medium of medical image
CN111242131A (en) * 2020-01-06 2020-06-05 北京十六进制科技有限公司 Method, storage medium and device for image recognition in intelligent marking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OU, Pan et al.: "Research on hand pose recognition based on heat maps", Application Research of Computers, No. 1, 30 June 2020 (2020-06-30), pages 2 - 3 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113768520A (en) * 2021-09-18 2021-12-10 中国科学院自动化研究所 Training method and device for electroencephalogram detection model
CN114782337A (en) * 2022-04-08 2022-07-22 平安国际智慧城市科技股份有限公司 OCT image recommendation method, device, equipment and medium based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN111932482B (en) Method and device for detecting target object in image, electronic equipment and storage medium
CN111932564B (en) Picture identification method and device, electronic equipment and computer readable storage medium
CN111932547B (en) Method and device for segmenting target object in image, electronic device and storage medium
WO2021151306A1 (en) Method and apparatus for smart analysis of question and answer linguistic material, electronic device, and readable storage medium
CN111932534B (en) Medical image picture analysis method and device, electronic equipment and readable storage medium
CN111932562B (en) Image identification method and device based on CT sequence, electronic equipment and medium
CN111915609A (en) Focus detection analysis method, device, electronic equipment and computer storage medium
CN112465060A (en) Method and device for detecting target object in image, electronic equipment and readable storage medium
CN112885423A (en) Disease label detection method and device, electronic equipment and storage medium
CN111933274A (en) Disease classification diagnosis method and device, electronic equipment and storage medium
CN112308853A (en) Electronic equipment, medical image index generation method and device and storage medium
CN112435755A (en) Disease analysis method, disease analysis device, electronic device, and storage medium
CN115206512A (en) Hospital information management method and device based on Internet of things
CN111652209A (en) Damage detection method, device, electronic apparatus, and medium
CN114708461A (en) Multi-modal learning model-based classification method, device, equipment and storage medium
CN111814743A (en) Handwriting recognition method and device and computer readable storage medium
CN114511569B (en) Tumor marker-based medical image identification method, device, equipment and medium
CN115760656A (en) Medical image processing method and system
CN112233194B (en) Medical picture optimization method, device, equipment and computer readable storage medium
CN114373541A (en) Intelligent traditional Chinese medicine diagnosis method, system and storage medium based on distribution
CN114757787A (en) Vehicle insurance personal injury damage assessment method and device based on big data, electronic equipment and medium
CN113590845A (en) Knowledge graph-based document retrieval method and device, electronic equipment and medium
CN114864032B (en) Clinical data acquisition method and device based on HIS system
CN114400089A (en) Big data based reading data analysis method, device, equipment and storage medium
CN114398277A (en) Test information marking method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination