CN112259199A - Medical image classification model training method, system, storage medium and medical image processing device - Google Patents

Medical image classification model training method, system, storage medium and medical image processing device

Info

Publication number
CN112259199A
CN112259199A
Authority
CN
China
Prior art keywords
medical
lung
training
classification model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011178984.4A
Other languages
Chinese (zh)
Inventor
苏炯龙
胡华峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong Liverpool University
Original Assignee
Xian Jiaotong Liverpool University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong Liverpool University filed Critical Xian Jiaotong Liverpool University
Priority to CN202011178984.4A priority Critical patent/CN112259199A/en
Publication of CN112259199A publication Critical patent/CN112259199A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Abstract

The invention discloses a training method, a system, a storage medium and a medical image processing device for a medical image classification model. The training method of the medical image classification model comprises the following steps: S1, extracting image blocks from each acquired ordinary RGB color map showing a type of interstitial lung disease or a normal lung, and labeling the image blocks; S2, analyzing the correlation between lung type and medical position data; S3, expanding the image blocks extracted in S1 into three-channel Hounsfield unit maps according to the Hounsfield units corresponding to human organs; and S4, inputting the results obtained in S2 and S3 into a medical image classification model based on a composite convolutional neural network for training until the model converges. Compared with existing traditional learning models, the method greatly improves classification efficiency and accuracy and has strong practicability; compared with existing deep learning models, it adds medical and positional information, making it better suited to the classification of medical images.

Description

Medical image classification model training method, system, storage medium and medical image processing device
Technical Field
The invention relates to the technical field of medical image processing, in particular to a training method, a training system, a storage medium and a medical image processing device for a medical image classification model.
Background
With the development of computer technology, automatic image recognition and classification by software algorithms has become preliminarily feasible. Image analysis systems are widely used in medicine for detection and classification in medical images, but when confronted with real, complex disease information, neither their classification accuracy nor their false positive rate is sufficient to support real-life application.
Interstitial lung disease covers more than 200 disorders that cause scarring of lung tissue, typically affecting the lung parenchyma, the small airways and the alveoli. Preliminary classification of interstitial lung disease using high-resolution computed tomography is generally considered the most appropriate approach. However, certain types of interstitial lung disease may be misdiagnosed when a radiologist relies on subjective reading of lung CT scans. Computer-aided detection systems are therefore a powerful aid to improving the classification of interstitial lung disease. Traditional methods extract features describing lung texture, such as first-order gray-level statistics, gray-level co-occurrence matrices and fractal analysis; however, these features fall short in classification accuracy and do not achieve full automation.
Disclosure of Invention
The present invention is directed to solving the above-mentioned problems in the prior art, and provides a method and a system for training a medical image classification model, a storage medium, and a medical image processing apparatus.
The purpose of the invention is realized by the following technical scheme:
the training method of the medical image classification model comprises the following steps:
s1, acquiring common RGB color maps converted from lung CT sectional maps of different types of interstitial lung diseases and lung CT sectional maps of normal lungs, extracting image blocks with one type of interstitial lung diseases or normal lungs from each acquired common RGB color map, and marking the image blocks;
s2, extracting medical position data of each image block, and analyzing the correlation between the lung type and the medical position data;
S3, expanding the image blocks extracted in S1 into three-channel Hounsfield unit maps according to the Hounsfield units corresponding to human organs;
and S4, inputting the correlation information between lung type and medical position data obtained in S2 and the three-channel Hounsfield unit maps obtained in S3 into a medical image classification model based on a composite convolutional neural network for training until the model converges.
Preferably, in the training method of the medical image classification model, the types of interstitial lung disease include emphysema, ground-glass opacity, pulmonary fibrosis and pulmonary micro-nodules.
Preferably, in the training method of the medical image classification model, S1 includes extracting the image blocks by sliding a window line by line over each ordinary RGB color map, where the size of the sliding window is 32 × 32.
Preferably, in the method for training a medical image classification model, in S1, when the area of an interstitial lung disease region or a normal lung region accounts for not less than 75% of the sliding window, the corresponding image block is extracted and labeled; image blocks corresponding to the same interstitial lung disease share the same label, while image blocks corresponding to different interstitial lung diseases and to the normal lung use different labels.
Preferably, in the method for training a medical image classification model, the step S2 includes
S21, extracting the medical position data corresponding to each image block, and storing the medical position data together with the image block's label as a character pair;
S22, statistically analyzing all character pairs to verify the correlation between lung type and medical position data.
Preferably, in the method for training a medical image classification model, S3 includes generating, from each ordinary RGB color map, a three-channel Hounsfield unit map that distinguishes different regions of the lung according to the Hounsfield units corresponding to human organs; the three channels include a normal channel showing the general characteristics of the lungs, a low-attenuation channel delineating low-intensity regions, and a high-attenuation channel delineating high-intensity regions.
Preferably, in the method for training a medical image classification model, the step S4 includes
S41, extracting image features from each three-channel Hounsfield unit map using the composite convolutional neural network;
S42, processing the extracted image features with a nonlinear transformation using an activation function, including but not limited to ReLU;
S43, obtaining lower-resolution versions of the three-channel Hounsfield unit map using max pooling layers, and repeating the methods of steps S41 and S42 to extract image features at different resolutions and perform down-sampling;
S44, applying the Softmax function

$\sigma(\mathbf{z})_j = \dfrac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$

to the last convolutional layer of the model to obtain the probability that each three-channel Hounsfield unit map belongs to each lung category;
S45, generating a position vector from the correlation information between lung type and medical position data, and multiplying it with the probability of belonging to each lung category to obtain the final probability that each three-channel Hounsfield unit map belongs to each lung category;
S46, training the model using the cross-entropy loss function

$L = -\sum_{i} y_i \log \hat{y}_i$

where $y_i$ is the one-hot ground-truth label and $\hat{y}_i$ is the predicted probability.
Preferably, in the training method of the medical image classification model, the composite convolutional neural network of S41 includes an Xception convolutional neural network and an Inception-v3 convolutional neural network.
Preferably, the training method for the medical image classification model further includes S5, automatically selecting the optimal hyper-parameters using a grid search method to determine the optimized model.
A training system for a medical image classification model, comprising
The image block generation module is used for acquiring common RGB color maps converted from lung CT sectional maps of different types of interstitial lung diseases and lung CT sectional maps of normal lungs, extracting an image block with one type of interstitial lung diseases or normal lungs from each acquired common RGB color map and marking the image block;
the correlation analysis module is used for extracting the medical position data of each image block and analyzing the correlation between the lung type and the medical position data;
the Henschel unit diagram generating module is used for expanding the image block of the image block generating module into a three-channel Henschel unit diagram according to the Henschel unit corresponding to the human organ;
and the model training module is used for inputting the results obtained by the processing of the correlation analysis module and the Henry unit diagram generation module into a medical image classification model based on the composite convolutional neural network for training until the model converges.
A storage medium storing a program for implementing any of the above methods.
The medical image processing device comprises the medical image classification model obtained by training through any one of the above methods.
The technical scheme of the invention has the advantages that:
compared with the traditional manual observation distinguishing method, the method disclosed by the invention greatly improves the efficiency and saves the working time and cost; compared with the existing traditional learning model, the method greatly improves the classification efficiency and accuracy and has strong practicability; compared with the existing deep learning model, additional medical and position information is added, so that the method is more suitable for classification of medical images.
Drawings
Fig. 1 shows image blocks of the four interstitial lung diseases and the normal lung selected in step S1, wherein (a) is a normal lung; (b) emphysema; (c) ground-glass opacity; (d) pulmonary fibrosis; (e) pulmonary micro-nodules.
FIG. 2 shows the Hounsfield unit maps of the three different channels used in S3, where (f) is the original CT image; (g) the low-attenuation channel Hounsfield unit map; (h) the normal channel Hounsfield unit map; (i) the high-attenuation channel Hounsfield unit map.
Detailed Description
Objects, advantages and features of the present invention will be illustrated and explained by the following non-limiting description of preferred embodiments. The embodiments are merely exemplary for applying the technical solutions of the present invention, and any technical solution formed by replacing or converting the equivalent thereof falls within the scope of the present invention claimed.
In the description of the schemes, it should be noted that the terms "center", "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the devices or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the embodiment, the operator is used as a reference, and the direction close to the operator is a proximal end, and the direction away from the operator is a distal end.
The training method of the medical image classification model disclosed by the invention is explained with reference to the accompanying drawings, and comprises the following steps:
s1, common RGB color maps converted from lung CT sectional maps of different types of interstitial lung diseases and lung CT sectional maps of normal lungs are obtained, image blocks with one type of interstitial lung diseases or normal lungs are extracted from each obtained common RGB color map, and the image blocks are marked.
Specifically, the method comprises the following steps:
the doctor selects x lung CT sectional images, and the x lung CT sectional images are led into the computer, and simultaneously, the sectional position information of each CT sectional image on the lung is led in.
S11, the computer converts the acquired x lung CT sectional images into common RGB color images.
S12, y lung CT sectional views with typical interstitial lung diseases and normal lungs are selected from the x lung CT sectional views. Since the most common interstitial lung diseases include emphysema, ground-glass opacity, pulmonary fibrosis and pulmonary micro-nodules, the selected lung CT sectional views with interstitial lung disease are those of these four types. Of course, in other embodiments the step of selecting y views from x views is not necessary: the lung CT sectional views imported into the computer may already be those of the four types of interstitial lung disease and of the normal lung, in which case the acquired views still need to be converted into ordinary RGB color maps.
S13, a sliding window is slid line by line over each ordinary RGB color map converted from a lung CT sectional image to extract image blocks, and the information corresponding to each image block is stored. This information includes the type of interstitial lung disease shown in the image block, the coordinates of the image block on the ordinary RGB color map, and the medical position data corresponding to the image block. The medical position data is the position of the region shown in the image block within the real lung; the medical positions include the lung apex, the lung base, diffuse positions, the peripheral lung region and the subpleural region.
The size of the sliding window is preferably 32 × 32, although other sizes may be adopted in other embodiments. This size matches pre-trained models existing in the prior art, reducing unnecessary parameter fitting; it can also slide flexibly over the original map, being neither so large that it covers an unnecessarily wide range nor so small that it fails to capture key feature information.
When the area of a given interstitial lung disease region or normal lung region accounts for not less than 75% of the sliding window, the corresponding image block is extracted and labeled. Different types of interstitial lung disease or normal lung regions are labeled with different numbers. For example, in one possible embodiment, the image block corresponding to emphysema is labeled 1, ground-glass opacity 2, pulmonary fibrosis 3, pulmonary micro-nodules 4, and the normal lung region 5; in other embodiments, other numbers and/or letters and/or symbols may be used. Windows that do not meet the area-ratio requirement are discarded.
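The patch-extraction rule above (32 × 32 window, 75% area threshold, numeric labels) can be sketched as follows; the stride and the mask encoding (0 = unlabeled background) are illustrative assumptions, since the patent fixes only the window size and the threshold:

```python
import numpy as np

def extract_patches(image, mask, window=32, stride=32, min_ratio=0.75):
    """Slide a window over the RGB map and keep patches whose labeled
    region (from an annotation mask; 0 = unlabeled) covers at least
    min_ratio of the window. Returns (patch, label, (row, col)) tuples."""
    h, w = mask.shape
    patches = []
    for r in range(0, h - window + 1, stride):
        for c in range(0, w - window + 1, stride):
            sub = mask[r:r + window, c:c + window]
            labels, counts = np.unique(sub, return_counts=True)
            k = counts.argmax()
            # keep the window only if its dominant label is a disease/normal
            # class and covers at least the required fraction of the area
            if labels[k] != 0 and counts[k] / sub.size >= min_ratio:
                patches.append((image[r:r + window, c:c + window],
                                int(labels[k]), (r, c)))
    return patches
```

Windows dominated by unlabeled pixels, or whose dominant class falls below the 75% threshold, are simply skipped, mirroring the removal rule above.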
All obtained image blocks are used to train the composite convolutional neural network under 5-fold cross-validation, which validates the model and tunes the parameters of the self-learning model.
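The 5-fold cross-validation used above can be sketched as plain index arithmetic; the interleaved assignment of patches to folds is an illustrative assumption:

```python
def k_fold_splits(n, k=5):
    """Yield (train_indices, val_indices) pairs for k-fold
    cross-validation over n image blocks: each fold serves once as the
    validation set while the remaining folds form the training set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, folds[i]
```

Each of the five resulting models sees 80% of the patches for training and is validated on the held-out 20%.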
S2, extracting the medical position data of each image block and analyzing the correlation between lung type and medical position data. Because the original lung CT sectional view is an image of a section through the real (three-dimensional) lung, there is necessarily some relation between different lung types and the medical position data, so the correlation between lung type (here, the four interstitial lung diseases or the normal lung) and the medical position data needs to be verified.
The concrete steps are as follows:
S21, extracting the medical position data corresponding to each image block, and storing the medical position data together with the image block's label as a character pair;
S22, statistically analyzing all character pairs to verify the correlation between lung type and medical position data; the probability of each interstitial lung disease occurring at each position can thus be obtained. The statistical analysis mainly uses the chi-square test and Cramér's V, a chi-square-based measure of association; these statistical methods are well-known techniques, are not the innovation of this scheme, and are not described further here.
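A minimal sketch of the S22 statistical analysis, assuming the character pairs have been tallied into a lung-type × location contingency table; `chi_square_stat` and `cramers_v` are hypothetical helper names, not from the patent:

```python
import numpy as np

def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table whose rows
    are lung types and whose columns are medical locations, built by
    counting the (location, label) character pairs from step S21."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()  # expected counts under independence
    return float(((table - expected) ** 2 / expected).sum())

def cramers_v(table):
    """Cramér's V effect size (0 = independent, 1 = perfectly associated)
    derived from the chi-square statistic."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi_square_stat(table) / (n * k)))
```

A table of independent counts yields a statistic of zero, while a strongly location-dependent disease yields a large statistic and a Cramér's V near 1.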
S3, the image blocks extracted in S1 are expanded into three-channel Hounsfield unit maps according to the Hounsfield units corresponding to human organs. Specifically, a three-channel Hounsfield unit map that distinguishes different regions of the lung is generated according to the Hounsfield units corresponding to human organs; the three channels include a normal channel showing the general characteristics of the lungs, a low-attenuation channel delineating low-intensity regions, and a high-attenuation channel delineating high-intensity regions. Each acquired image block is then converted from the ordinary RGB color map into the three-channel Hounsfield unit map. The three-channel model is used because, in the three-channel Hounsfield unit map, different organ tissues, air and bone have clearer Hounsfield unit differences, so the three channels can separately enhance the distinguishability of lung organs and skeletal muscle and thereby improve feature extraction accuracy. The conversion from ordinary RGB color map to three-channel Hounsfield unit map is prior art and is not described further here.
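A minimal sketch of the three-channel expansion, assuming each channel is a clipped linear intensity window over the Hounsfield scale; the numeric window bounds are illustrative assumptions, as the patent specifies only the three-channel scheme:

```python
import numpy as np

def hu_three_channel(hu_slice,
                     windows=((-1400, -950), (-1000, 400), (-160, 240))):
    """Expand a Hounsfield-unit CT slice into three channels: a
    low-attenuation window, a normal lung window and a high-attenuation
    window. Each channel linearly rescales its HU window to [0, 1],
    clipping values that fall outside the window."""
    def apply_window(img, lo, hi):
        return np.clip((img - lo) / float(hi - lo), 0.0, 1.0)
    return np.stack([apply_window(hu_slice, lo, hi) for lo, hi in windows],
                    axis=-1)
```

Air-filled regions saturate differently from dense fibrotic tissue across the three windows, which is what lets the channels separate low- and high-intensity lung regions.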
S4, the correlation information between lung type and medical position data obtained in S2 and the three-channel Hounsfield unit maps obtained in S3 are input into a medical image classification model based on the composite convolutional neural network for training until the model converges.
The composite convolutional neural network consists of two classical convolutional neural networks (an Xception convolutional neural network and an Inception-v3 convolutional neural network). Its function is to generate two different feature sets for each input image block, extract the features of the multiple interstitial lung diseases in the different convolutional neural networks, and self-learn the relationship between the extracted features, the medical position data of the region of interest (the four interstitial lung diseases and the normal lung) and the given labels.
The S4 specifically comprises
S41, extracting image features from each three-channel Hounsfield unit map using the composite convolutional neural network;
S42, processing the extracted image features with a nonlinear transformation using an activation function, including but not limited to ReLU;
S43, obtaining lower-resolution versions of the three-channel Hounsfield unit map using max pooling layers, and repeating the methods of steps S41 and S42 to extract image features at different resolutions and perform down-sampling;
S44, applying the Softmax function

$\sigma(\mathbf{z})_j = \dfrac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$

to the last convolutional layer of the model to obtain the probability that each three-channel Hounsfield unit map belongs to each lung category (normal lung or one of the four interstitial lung diseases).
S45, a position vector is generated from the correlation information between lung type and medical position data determined in step S2 and multiplied with the probability of belonging to each lung category, giving the final probability that each three-channel Hounsfield unit map belongs to each lung category.
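Step S45 can be sketched as follows, assuming the position vector acts as an elementwise prior on the Softmax probabilities; the final renormalization is added here for readability and does not change which class has the maximum probability:

```python
import numpy as np

def classify_with_location(logits, location_prior):
    """Combine the network's class scores with a position vector of
    per-location disease probabilities (from the S2 correlation
    analysis): softmax(logits) * prior, then renormalize."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # numerically stable Softmax
    p = np.exp(z) / np.exp(z).sum()
    q = p * np.asarray(location_prior, dtype=float)
    return q / q.sum()
```

When the network is undecided between two classes, the location prior breaks the tie in favor of the disease more common at that medical position.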
S46, training the model using the cross-entropy loss function

$L = -\sum_{i} y_i \log \hat{y}_i$

where $y_i$ is the one-hot ground-truth label and $\hat{y}_i$ is the predicted probability.
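For a single hard-labeled image block, the cross-entropy loss of S46 reduces to the negative log of the probability assigned to the true lung class, as this small sketch shows:

```python
import numpy as np

def cross_entropy(probs, label, eps=1e-12):
    """Cross-entropy loss for one image block: -log of the predicted
    probability of the true class; eps guards against log(0)."""
    return float(-np.log(probs[label] + eps))
```

A confident correct prediction gives a loss near zero; a uniform prediction over two classes gives log 2.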
Finally, S5, the optimal hyper-parameters are automatically selected using a grid search method; this is a well-known technique and is not described further here.
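The grid search of S5 can be sketched as exhaustive enumeration over a hyper-parameter grid; `evaluate` stands in for a full train-and-validate run, which the patent leaves unspecified:

```python
from itertools import product

def grid_search(evaluate, grid):
    """Try every combination of hyper-parameter values in `grid`
    (a dict of name -> list of candidate values) and return the
    combination with the best validation score."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The cost grows multiplicatively with each added hyper-parameter, which is why the grid is usually kept small.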
The medical image classification model of this scheme learns the Hounsfield units corresponding to real lung CT pictures to generate corresponding new color maps for preliminary detection, optimizes the preliminary detection result by extracting the position distributions of the different interstitial lung diseases and generating different disease probabilities for different regions, and finally selects the class with the maximum probability to complete the classification of the medical image.
When classifying new interstitial lung disease image blocks, the composite convolutional neural network is combined with the specific medical position data of the new image blocks to predict and select the class with the maximum probability. A classification accuracy of 92.3% is finally obtained.
The scheme further discloses a training system of the medical image classification model, which comprises
The image block generation module is used for acquiring common RGB color maps converted from lung CT sectional maps of different types of interstitial lung diseases and lung CT sectional maps of normal lungs, extracting an image block with one type of interstitial lung diseases or normal lungs from each common RGB color map and marking the image block;
the correlation analysis module is used for extracting the medical position data of each image block and analyzing the correlation between the lung type and the medical position data;
the Henschel unit diagram generating module is used for expanding the image block of the image block generating module into a three-channel Henschel unit diagram according to the Henschel unit corresponding to the human organ;
and the model training module is used for inputting the results obtained by the processing of the correlation analysis module and the Henry unit diagram generation module into a medical image classification model based on the composite convolutional neural network for training until the model converges.
The present invention further discloses a storage medium storing a program for implementing the method according to the above embodiment.
Finally, the scheme also discloses a medical image processing device which comprises the storage medium storing the program for implementing the model training method, or comprises the trained medical image classification model described above.
The invention has various embodiments, and all technical solutions formed by adopting equivalent transformation or equivalent transformation are within the protection scope of the invention.

Claims (12)

1. The training method of the medical image classification model is characterized by comprising the following steps: the method comprises the following steps:
s1, acquiring common RGB color maps converted from lung CT sectional maps of different types of interstitial lung diseases and lung CT sectional maps of normal lungs, extracting image blocks with one type of interstitial lung diseases or normal lungs from each acquired common RGB color map, and marking the image blocks;
s2, extracting medical position data of each image block, and analyzing the correlation between the lung type and the medical position data;
S3, expanding the image blocks extracted in S1 into three-channel Hounsfield unit maps according to the Hounsfield units corresponding to human organs;
and S4, inputting the correlation information between lung type and medical position data obtained in S2 and the three-channel Hounsfield unit maps obtained in S3 into a medical image classification model based on a composite convolutional neural network for model training until the model converges.
2. The method for training a medical image classification model according to claim 1, characterized in that: the types of interstitial lung disease include emphysema, ground-glass opacity, pulmonary fibrosis and pulmonary micro-nodules.
3. The method for training a medical image classification model according to claim 1, characterized in that: S1 includes extracting the image blocks by sliding a window line by line over each ordinary RGB color map, where the size of the sliding window is 32 × 32.
4. The method for training a medical image classification model according to claim 3, characterized in that: in S1, when the area of an interstitial lung disease region or a normal lung region accounts for not less than 75% of the sliding window, the corresponding image block is extracted and labeled; image blocks corresponding to the same interstitial lung disease share the same label, while image blocks corresponding to different interstitial lung diseases and to the normal lung use different labels.
5. The method for training a medical image classification model according to claim 4, characterized in that: the step of S2 includes
S21, extracting medical position data corresponding to each image block, and storing the medical position data and the marks of the image blocks as a character pair;
S22, statistically analyzing all character pairs to verify the correlation between lung type and medical position data.
6. The method for training a medical image classification model according to claim 1, characterized in that: in the S3, the three channels include a normal channel for exhibiting general characteristics of the lungs, a low attenuation channel for describing a low intensity region, and a high attenuation channel for describing a high intensity region.
7. The method for training a medical image classification model according to claim 1, characterized in that: said S4 includes
S41, extracting image features from each three-channel Hounsfield unit map using the composite convolutional neural network;
S42, processing the extracted image features with a nonlinear transformation using an activation function, including but not limited to ReLU;
S43, obtaining lower-resolution versions of the three-channel Hounsfield unit map using max pooling layers, and repeating the methods of steps S41 and S42 to extract image features at different resolutions and perform down-sampling;
S44, applying the Softmax function

$\sigma(\mathbf{z})_j = \dfrac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}}$

to the last convolutional layer of the model to obtain the probability that each three-channel Hounsfield unit map belongs to each lung category;
S45, generating a position vector from the correlation information between lung type and medical position data, and multiplying it with the probability of belonging to each lung category to obtain the final probability that each three-channel Hounsfield unit map belongs to each lung category;
S46, training the model using the cross-entropy loss function

$L = -\sum_{i} y_i \log \hat{y}_i$

where $y_i$ is the one-hot ground-truth label and $\hat{y}_i$ is the predicted probability.
8. The method for training a medical image classification model according to claim 7, characterized in that: the composite convolutional neural network of S41 comprises an Xception convolutional neural network and an Inception-v3 convolutional neural network.
9. The method for training a medical image classification model according to any one of claims 1 to 8, characterized by: S5, automatically selecting the optimal hyper-parameters using a grid search method to determine the optimized model.
10. A training system for a medical image classification model, characterized in that it comprises
The image block generation module is used for acquiring common RGB color maps converted from lung CT sectional maps of different types of interstitial lung diseases and lung CT sectional maps of normal lungs, extracting an image block with one type of interstitial lung diseases or normal lungs from each acquired common RGB color map and marking the image block;
the correlation analysis module is used for extracting the medical position data of each image block and analyzing the correlation between the lung type and the medical position data;
the Henschel unit diagram generation module is used for expanding the image blocks extracted by the image block generation module into a three-channel Henschel unit diagram according to Henschel units corresponding to human organs;
and the model training module is used for inputting the results obtained by the processing of the correlation analysis module and the Henry unit diagram generation module into a medical image classification model based on the composite convolutional neural network for training until the model converges.
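The three-channel expansion performed by the Hounsfield unit map generation module might look like the sketch below. The specific HU window ranges are illustrative assumptions — the patent states only that the channels correspond to Hounsfield units of human organs, without listing the ranges:

```python
import numpy as np

# Hypothetical Hounsfield-unit windows, one per output channel.
HU_WINDOWS = [(-1000, -400),  # air / aerated lung
              (-400, 200),    # soft tissue
              (200, 1000)]    # bone / calcification

def hu_to_three_channels(hu_patch):
    """Expand a single-channel HU patch into a three-channel map:
    each channel clips the patch to one HU window and rescales it
    to [0, 1]."""
    channels = []
    for lo, hi in HU_WINDOWS:
        ch = np.clip(hu_patch, lo, hi)
        channels.append((ch - lo) / (hi - lo))
    return np.stack(channels, axis=-1)

# A 2x2 HU patch spanning air, soft tissue, and bone densities
patch = np.array([[-1000.0, 0.0], [300.0, 1000.0]])
out = hu_to_three_channels(patch)
```

Windowing each tissue range into its own channel lets the network receive a fixed-range input while preserving contrast within each density band, instead of compressing the full ±1000 HU span into one channel.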
11. A storage medium, characterized in that: a program implementing the method of any one of claims 1 to 9 is stored thereon.
12. A medical image processing apparatus, characterized in that: it comprises a medical image classification model trained by the method of any one of claims 1 to 9.
CN202011178984.4A 2020-10-29 2020-10-29 Medical image classification model training method, system, storage medium and medical image processing device Pending CN112259199A (en)

Publications (1)

Publication Number Publication Date
CN112259199A true CN112259199A (en) 2021-01-22

Family

ID=74262804


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110280457A1 (en) * 2010-05-11 2011-11-17 The University Of Copenhagen Classification of medical diagnostic images
CN107280697A (en) * 2017-05-15 2017-10-24 北京市计算中心 Lung neoplasm grading determination method and system based on deep learning and data fusion
CN107680082A (en) * 2017-09-11 2018-02-09 宁夏医科大学 Lung tumor identification method based on depth convolutional neural networks and global characteristics
CN108876779A (en) * 2018-06-22 2018-11-23 中山仰视科技有限公司 Deep-learning-based lung cancer early prediction method and electronic equipment
CN109461495A (en) * 2018-11-01 2019-03-12 腾讯科技(深圳)有限公司 Medical image recognition method, model training method, and server

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315378A (en) * 2023-11-29 2023-12-29 北京大学第三医院(北京大学第三临床医学院) Grading judgment method for pneumoconiosis and related equipment
CN117315378B (en) * 2023-11-29 2024-03-12 北京大学第三医院(北京大学第三临床医学院) Grading judgment method for pneumoconiosis and related equipment

Similar Documents

Publication Publication Date Title
CN109785303B (en) Rib marking method, device and equipment and training method of image segmentation model
CN112101451B (en) Breast cancer tissue pathological type classification method based on generation of antagonism network screening image block
CN108205806B (en) Automatic analysis method for three-dimensional craniofacial structure of cone beam CT image
Chan et al. Texture-map-based branch-collaborative network for oral cancer detection
CN110956635A (en) Lung segment segmentation method, device, equipment and storage medium
CN109859233A (en) The training method and system of image procossing, image processing model
CN109670510A (en) Deep-learning-based gastroscopic biopsy pathological data screening system and method
CN108062749B (en) Identification method and device for levator ani fissure hole and electronic equipment
CN107851194A (en) Visual representation learning for brain tumor classification
CN109671068B (en) Abdominal muscle labeling method and device based on deep learning
US8285013B2 (en) Method and apparatus for detecting abnormal patterns within diagnosis target image utilizing the past positions of abnormal patterns
CN110110808B (en) Method and device for performing target labeling on image and computer recording medium
CN110008992B (en) Deep learning method for prostate cancer auxiliary diagnosis
US20220335600A1 (en) Method, device, and storage medium for lesion segmentation and recist diameter prediction via click-driven attention and dual-path connection
CN111539956A (en) Cerebral hemorrhage automatic detection method based on brain auxiliary image and electronic medium
CN115578372A (en) Bone age assessment method, device and medium based on target detection and convolution transformation
CN111062936B (en) Quantitative index evaluation method for facial deformation diagnosis and treatment effect
CN112259199A (en) Medical image classification model training method, system, storage medium and medical image processing device
CN114372962A (en) Laparoscopic surgery stage identification method and system based on double-particle time convolution
CN112001877A (en) Thyroid malignant nodule detection method based on deep learning
CN113205153B (en) Training method of pediatric pneumonia auxiliary diagnosis model and model obtained by training
CN110837844A (en) Pancreatic cystic tumor benign and malignant classification method based on CT image dissimilarity characteristics
CN112885464B (en) Internal nasal disease real-time auxiliary diagnosis and treatment system based on Att-Res2-CE-Net
CN115937609A (en) Corneal disease image detection and classification method and device based on local and global information
CN115147636A (en) Lung disease identification and classification method based on chest X-ray image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination