CN110298345A - Automatic region-of-interest annotation method for medical image data sets - Google Patents
Automatic region-of-interest annotation method for medical image data sets
- Publication number
- CN110298345A (application CN201910606180.0A)
- Authority
- CN
- China
- Prior art keywords
- interest
- area
- data sets
- medical images
- images data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
Abstract
The present invention relates to an automatic region-of-interest (ROI) annotation method for medical image data sets. First, an ROI detection network is constructed to obtain the ROIs in each medical image of the data set. The N ROIs with the highest confidence are then selected from among them, and a deep network extracts a feature from each of the N ROIs. The N extracted features are input separately into a perceptron, and a softmax function computes the probability that each of the N ROIs belongs to the target class. Finally, these probabilities are combined by a leaky noisy-or gate to obtain the final probability of the target class, completing the automatic ROI annotation task. The invention also solves the problem of missed ROI detections in the detection step, and thus provides more meaningful advisory output.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an automatic region-of-interest annotation method for medical image data sets.
Background
Annotating medical image data sets differs from annotating general image data sets: it requires professional medical knowledge and skills. Moreover, because of the wide gap between medicine and computer science, annotation capacity is limited and annotation quality is poor. It is therefore difficult to obtain large-scale, high-quality annotated data sets.
Summary of the invention
In view of this, the object of the present invention is to provide an automatic region-of-interest annotation method for medical image data sets that also solves the problem of missed ROI detections in the detection step, thereby providing more meaningful advisory output.
The present invention is realized by the following scheme: an automatic region-of-interest (ROI) annotation method for medical image data sets, comprising the following steps:
Step S1: construct an ROI detection network and obtain the ROIs in each medical image of the medical image data set;
Step S2: select the N ROIs with the highest confidence from the detected ROIs;
Step S3: extract a feature from each of the N ROIs using a deep network;
Step S4: input the N extracted features separately into a perceptron and compute, via a softmax function, the probability that each of the N ROIs belongs to the target class;
Step S5: combine the probabilities obtained in step S4 with a leaky noisy-or gate to obtain the final probability of the target class, thereby completing the automatic ROI annotation task.
Further, in step S1, the ROI detection network consists of a UNet network as the backbone and an RPN network as the output layer.
Further, in step S2, N is 5; when fewer than 5 ROIs are detected, the set is padded with non-ROI image patches of the same size.
Further, in step S3, the deep network reuses the UNet backbone of the ROI detection network.
Further, in step S4, the perceptron has two layers, a hidden layer with 64 units and an output layer with 1 unit; the activation function is the sigmoid function, which yields the probability that an ROI belongs to the target class.
Further, step S5 is specifically: introduce the probability P_d that an imaginary ROI belongs to the target class, and obtain the final target-class probability with the following formula, completing the annotation task:

P = 1 - (1 - P_d) * (1 - P_1) * (1 - P_2) * ... * (1 - P_N)

where P_i denotes the probability that the i-th ROI belongs to the target class.
Compared with the prior art, the present invention has the following beneficial effects: it realizes automatic annotation of medical image data sets and can provide a large amount of high-quality annotated data for AI medical imaging, while reducing the dependence on manual annotation. The invention also solves the problem of missed ROI detections in the detection step, thereby providing more meaningful advisory output.
Description of the drawings
Fig. 1 is a schematic diagram of the principle of the method according to an embodiment of the present invention.
Detailed description of embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by a person of ordinary skill in the technical field to which the application belongs.
It should also be noted that the terms used herein are merely for describing specific embodiments and are not intended to limit the exemplary embodiments of the application. As used herein, unless the context clearly indicates otherwise, the singular forms are also intended to include the plural forms; additionally, it should be understood that the terms "comprising" and/or "including", when used in this specification, indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
As shown in Fig. 1, the present embodiment provides an automatic region-of-interest (ROI) annotation method for medical image data sets, comprising the following steps:
Step S1: construct an ROI detection network and obtain the ROIs in each medical image of the medical image data set;
Step S2: select the N ROIs with the highest confidence from the detected ROIs;
Step S3: extract a feature from each of the N ROIs using a deep network;
Step S4: input the N extracted features separately into a perceptron and compute, via a softmax function, the probability that each of the N ROIs belongs to the target class;
Step S5: combine the probabilities obtained in step S4 with a leaky noisy-or gate to obtain the final probability of the target class, thereby completing the automatic ROI annotation task.
In the present embodiment, in step S1, the ROI detection network consists of a UNet network as the backbone and an RPN network as the output layer.
In the present embodiment, in step S2, N is 5; when fewer than 5 ROIs are detected, the set is padded with non-ROI image patches of the same size.
In the present embodiment, in step S3, the deep network reuses the UNet backbone of the ROI detection network.
In the present embodiment, in step S4, the perceptron has two layers, a hidden layer with 64 units and an output layer with 1 unit; the activation function is the sigmoid function, which yields the probability that an ROI belongs to the target class.
In the present embodiment, step S5 is specifically: introduce the probability P_d that an imaginary ROI belongs to the target class, and obtain the final target-class probability with the following formula, completing the annotation task:

P = 1 - (1 - P_d) * (1 - P_1) * (1 - P_2) * ... * (1 - P_N)

where P_i denotes the probability that the i-th ROI belongs to the target class.
Specifically, the present embodiment is illustrated with lung CT images; the images are 3D images. The implementation comprises the following steps:
Step 1: collect data. The data set used is the LUNA16 data set, which contains 1186 lung-nodule annotations from 888 patients.
Step 2: preprocessing, to extract the lung parenchyma image. The preprocessing steps are as follows:
1. convert the raw image to HU values;
2. binarize the image by thresholding to obtain a binary image;
3. apply morphological erosion and dilation to the binary image to obtain the lung parenchyma image.
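The three preprocessing steps above can be sketched with numpy and scipy. The rescale slope/intercept and the -320 HU threshold are illustrative assumptions, not values given by the patent:

```python
import numpy as np
from scipy import ndimage

def preprocess_ct(raw, slope=1.0, intercept=-1024.0, threshold=-320.0):
    """Sketch of the patent's preprocessing: HU conversion, thresholding,
    then morphological erosion and dilation to clean the lung mask.
    slope/intercept/threshold are assumed values for illustration."""
    # 1. convert raw scanner values to Hounsfield units (HU)
    hu = raw.astype(np.float32) * slope + intercept
    # 2. binarize: voxels darker than the threshold are candidate lung tissue
    mask = hu < threshold
    # 3. erosion removes small speckles, dilation restores the lung border
    mask = ndimage.binary_erosion(mask, iterations=2)
    mask = ndimage.binary_dilation(mask, iterations=2)
    # keep only parenchyma voxels; everything else is set to the darkest value
    return np.where(mask, hu, hu.min())
```

In practice the erosion/dilation structuring element and iteration counts would be tuned per data set; the defaults here simply demonstrate the order of operations.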
Step 3: feed the lung parenchyma image to the network block by block, using a sliding window of size 128x128x128x1.
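A minimal block extractor for the sliding-window input above; the 128-voxel block edge comes from the patent, while the non-overlapping stride and zero-padding at the volume border are assumptions:

```python
import numpy as np

def sliding_blocks(volume, size=128, stride=128):
    """Yield (z, y, x) origins and zero-padded cubic blocks of a 3D volume.

    The stride and zero-padding policy are assumptions; the patent only
    specifies the 128x128x128x1 window size.
    """
    Z, Y, X = volume.shape
    for z in range(0, Z, stride):
        for y in range(0, Y, stride):
            for x in range(0, X, stride):
                block = np.zeros((size, size, size), dtype=volume.dtype)
                src = volume[z:z + size, y:y + size, x:x + size]
                # partial border blocks are zero-padded up to the full size
                block[:src.shape[0], :src.shape[1], :src.shape[2]] = src
                yield (z, y, x), block
```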
Step 4: use a UNet network together with an RPN network as the ROI detection network model and extract the ROIs of each CT image, as follows:
The UNet network is divided into a downsampling part and an upsampling part.
Downsampling part:
The first block consists of two 3D convolutional layers with kernel size (3, 3, 3), 24 output channels, batch normalization (BN), and ReLU activation.
The second block is a residual block composed of 2 residual units; each residual unit consists of two 3D convolutional layers with kernel size (3, 3, 3), 32 output channels, BN, and ReLU activation.
Intermediate layer: max pooling with kernel size (2, 2, 2) and stride 2.
The third block is a residual block composed of 2 residual units; each residual unit consists of two 3D convolutional layers with kernel size (3, 3, 3), 64 output channels, BN, and ReLU activation.
Intermediate layer: max pooling with kernel size (2, 2, 2) and stride 2.
The fourth block is a residual block composed of 3 residual units; each residual unit consists of two 3D convolutional layers with kernel size (3, 3, 3), 64 output channels, BN, and ReLU activation.
Intermediate layer: max pooling with kernel size (2, 2, 2) and stride 2.
The fifth block is a residual block composed of 3 residual units; each residual unit consists of two 3D convolutional layers with kernel size (3, 3, 3), 64 output channels, BN, and ReLU activation.
Intermediate layer: max pooling with kernel size (2, 2, 2) and stride 2.
Upsampling part:
In the first block, the first layer is a 3D deconvolution layer with 64 input and output channels, kernel size (2, 2, 2), stride 2, BN, and ReLU activation, followed by a concatenation layer that concatenates the output of the fourth downsampling block with the output of this first layer.
The second block is a residual block composed of 3 residual units; each residual unit consists of two 3D convolutional layers with kernel size (3, 3, 3), 64 output channels, BN, and ReLU activation.
In the third block, the first layer is a 3D deconvolution layer with 64 input and output channels, kernel size (2, 2, 2), stride 2, BN, and ReLU activation, followed by a concatenation layer that concatenates the output of the third downsampling block with the output of this first layer.
The fourth block is a residual block composed of 3 residual units; each residual unit consists of two 3D convolutional layers with kernel size (3, 3, 3), 64 output channels, BN, and ReLU activation.
The fifth block consists of two 3D convolutional layers with kernel size (1, 1, 1) and 64 and 15 output channels respectively, with BN and ReLU activation; the output feature size is 32x32x32x15.
The output features of the UNet network are reshaped to 32x32x32x3x5, where 3 is the number of anchors, whose scales are 10, 30, and 60 respectively, and 5 is the number of regression values. The loss against the ground-truth data is then computed to train the model. The loss function used is:

L = L_cls + p * L_reg

where L_cls is the classification loss, computed with the cross-entropy function, and L_reg is the regression loss, computed with the L1 function.
Step 5: test the images with the trained ROI detection model to obtain the 5 ROIs with the highest confidence; when fewer than 5 ROIs are found, pad with non-ROI regions from the image so that 5 regions are kept.
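Selecting the five highest-confidence ROIs with padding can be sketched as follows; using a fixed image corner as the non-ROI filler patch is an assumption about how the "non-interest regions" are chosen:

```python
import numpy as np

def top_n_rois(rois, scores, image, n=5, patch_shape=(24, 24, 24)):
    """Return the n highest-confidence ROI patches, padded with non-ROI
    patches cut from the image when fewer than n ROIs were detected."""
    order = np.argsort(scores)[::-1][:n]   # indices, highest score first
    selected = [rois[i] for i in order]
    while len(selected) < n:
        # pad with a same-size patch from a fixed image corner
        # (an assumed stand-in for a genuine non-ROI region)
        selected.append(image[:patch_shape[0], :patch_shape[1], :patch_shape[2]])
    return selected
```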
Step 6: extract features from the CT image and obtain the 5 highest-confidence probabilities, specifically:
1. train the feature-extraction network by reusing the UNet network;
2. input each of the 5 ROIs, of size 24x24x24x128, into the trained feature-extraction model to obtain a 128-D feature;
3. input each 128-D feature into a two-layer perceptron, with 64 hidden units, one output unit, and sigmoid activation, to obtain the 5 ROI probabilities for the patient.
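The two-layer perceptron of sub-step 3 (128-D input, 64 hidden units, 1 sigmoid output) in plain numpy; the random weights are placeholders, since the patent trains them, and applying the sigmoid to the hidden layer as well is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoLayerPerceptron:
    """128-D feature -> 64 hidden units -> 1 sigmoid output probability."""

    def __init__(self, in_dim=128, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        # placeholder weights; in the patent these come from training
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, feature):
        h = sigmoid(feature @ self.w1 + self.b1)       # hidden layer
        return float(sigmoid(h @ self.w2 + self.b2))   # target-class probability
```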
Step 7: compute the final target-class probability with the leaky noisy-or gate to obtain the likelihood of the target class and complete the annotation task. Specifically, introduce the probability P_d that an imaginary ROI belongs to the target class, and obtain the final target-class probability with the following formula to complete the automatic annotation:

P = 1 - (1 - P_d) * (1 - P_1) * (1 - P_2) * ... * (1 - P_5)
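The leaky noisy-or gate of step 7, following the standard form P = 1 - (1 - P_d) * prod(1 - P_i): the output is high unless every detected ROI and the imaginary "leak" ROI fail to be the target class, which is what lets the method tolerate missed detections. The default value of P_d is an assumption; in practice it would be learned or tuned:

```python
import numpy as np

def leaky_noisy_or(roi_probs, p_dummy=0.1):
    """Combine per-ROI target-class probabilities with a leaky noisy-or gate.

    roi_probs: probabilities P_i for the N detected ROIs.
    p_dummy:   probability P_d of the imaginary (leak) ROI, covering the
               case where the true target was missed by the detector.
    """
    roi_probs = np.asarray(roi_probs, dtype=float)
    return float(1.0 - (1.0 - p_dummy) * np.prod(1.0 - roi_probs))
```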
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor produce a device for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are executed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above is only a preferred embodiment of the present invention and does not limit the invention to this form. Any person skilled in the art may use the technical content disclosed above to change or modify it into an equivalent embodiment of equivalent variation. However, any simple modification, equivalent variation, or adaptation made to the above embodiments according to the technical essence of the present invention, without departing from the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.
Claims (6)
1. An automatic region-of-interest (ROI) annotation method for medical image data sets, characterized by comprising the following steps:
Step S1: constructing an ROI detection network and obtaining the ROIs in each medical image of the medical image data set;
Step S2: selecting the N ROIs with the highest confidence from the detected ROIs;
Step S3: extracting a feature from each of the N ROIs using a deep network;
Step S4: inputting the N extracted features separately into a perceptron and computing, via a softmax function, the probability that each of the N ROIs belongs to the target class;
Step S5: combining the probabilities obtained in step S4 with a leaky noisy-or gate to obtain the final probability of the target class, thereby completing the automatic ROI annotation task.
2. The automatic region-of-interest annotation method for medical image data sets according to claim 1, characterized in that, in step S1, the ROI detection network consists of a UNet network as the backbone and an RPN network as the output layer.
3. The automatic region-of-interest annotation method for medical image data sets according to claim 1, characterized in that, in step S2, N is 5; when fewer than 5 ROIs are detected, the set is padded with non-ROI images of the same size.
4. The automatic region-of-interest annotation method for medical image data sets according to claim 1, characterized in that, in step S3, the deep network reuses the UNet backbone of the ROI detection network.
5. The automatic region-of-interest annotation method for medical image data sets according to claim 1, characterized in that, in step S4, the perceptron has two layers, a hidden layer with 64 units and an output layer with 1 unit; the activation function is the sigmoid function, yielding the probability that an ROI belongs to the target class.
6. The automatic region-of-interest annotation method for medical image data sets according to claim 1, characterized in that step S5 is specifically: introducing the probability P_d that an imaginary ROI belongs to the target class, and obtaining the final target-class probability with the following formula, completing the annotation task:

P = 1 - (1 - P_d) * (1 - P_1) * (1 - P_2) * ... * (1 - P_N)

where P_i denotes the probability that the i-th ROI belongs to the target class.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910606180.0A CN110298345A (en) | 2019-07-05 | 2019-07-05 | A kind of area-of-interest automatic marking method of medical images data sets |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110298345A true CN110298345A (en) | 2019-10-01 |
Family
ID=68030548
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910606180.0A Pending CN110298345A (en) | 2019-07-05 | 2019-07-05 | A kind of area-of-interest automatic marking method of medical images data sets |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110298345A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348082A (en) * | 2020-11-06 | 2021-02-09 | 上海依智医疗技术有限公司 | Deep learning model construction method, image processing method and readable storage medium |
CN114119519A (en) * | 2021-11-16 | 2022-03-01 | 高峰 | Collateral circulation assessment method |
CN115240014A (en) * | 2022-09-21 | 2022-10-25 | 山东大学齐鲁医院 | Medical image classification system based on residual error neural network |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180068198A1 (en) * | 2016-09-06 | 2018-03-08 | Carnegie Mellon University | Methods and Software for Detecting Objects in an Image Using Contextual Multiscale Fast Region-Based Convolutional Neural Network |
CN108171233A (en) * | 2016-12-07 | 2018-06-15 | 三星电子株式会社 | Use the method and apparatus of the object detection of the deep learning model based on region |
US20180247405A1 (en) * | 2017-02-27 | 2018-08-30 | International Business Machines Corporation | Automatic detection and semantic description of lesions using a convolutional neural network |
CN108876791A (en) * | 2017-10-23 | 2018-11-23 | 北京旷视科技有限公司 | Image processing method, device and system and storage medium |
CN109271539A (en) * | 2018-08-31 | 2019-01-25 | 华中科技大学 | A kind of image automatic annotation method and device based on deep learning |
CN109410273A (en) * | 2017-08-15 | 2019-03-01 | 西门子保健有限责任公司 | According to the locating plate prediction of surface data in medical imaging |
CN109523552A (en) * | 2018-10-24 | 2019-03-26 | 青岛智能产业技术研究院 | Three-dimension object detection method based on cone point cloud |
CN109785300A (en) * | 2018-12-27 | 2019-05-21 | 华南理工大学 | A kind of cancer medical image processing method, system, device and storage medium |
US20190205606A1 (en) * | 2016-07-21 | 2019-07-04 | Siemens Healthcare Gmbh | Method and system for artificial intelligence based medical image segmentation |
- 2019-07-05: application CN201910606180.0A filed (publication CN110298345A, status: pending)
Non-Patent Citations (1)
Title |
---|
FANGZHOU LIAO et al.: "Evaluate the Malignancy of Pulmonary Nodules Using the 3D Deep Leaky Noisy-or Network", https://arxiv.org/pdf/1711.08324.pdf *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348082A (en) * | 2020-11-06 | 2021-02-09 | 上海依智医疗技术有限公司 | Deep learning model construction method, image processing method and readable storage medium |
CN114119519A (en) * | 2021-11-16 | 2022-03-01 | 高峰 | Collateral circulation assessment method |
CN115240014A (en) * | 2022-09-21 | 2022-10-25 | 山东大学齐鲁医院 | Medical image classification system based on residual error neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109558832B (en) | Human body posture detection method, device, equipment and storage medium | |
EP4002198A1 (en) | Posture acquisition method and device, and key point coordinate positioning model training method and device | |
CN110298345A (en) | A kind of area-of-interest automatic marking method of medical images data sets | |
CN113240691B (en) | Medical image segmentation method based on U-shaped network | |
CN109087306A (en) | Arteries iconic model training method, dividing method, device and electronic equipment | |
CN107845129A (en) | Three-dimensional reconstruction method and device, the method and device of augmented reality | |
CN109978077A (en) | Visual identity methods, devices and systems and storage medium | |
Mamdouh et al. | A New Model for Image Segmentation Based on Deep Learning. | |
Cirik et al. | Following formulaic map instructions in a street simulation environment | |
CN104112131B (en) | Method and device for generating training samples used for face detection | |
CN109993701A (en) | A method of the depth map super-resolution rebuilding based on pyramid structure | |
CN106570928A (en) | Image-based re-lighting method | |
Liu et al. | Video decolorization based on the CNN and LSTM neural network | |
CN106023079B (en) | The two stages human face portrait generation method of joint part and global property | |
Chen et al. | Cosa: Concatenated sample pretrained vision-language foundation model | |
Liu et al. | Fabric defect detection using fully convolutional network with attention mechanism | |
CN103473562A (en) | Automatic training and identifying system for specific human body action | |
Yang et al. | Shapeediter: a stylegan encoder for face swapping | |
Wang et al. | Personalized Hand Modeling from Multiple Postures with Multi‐View Color Images | |
CN110796150A (en) | Image emotion recognition method based on emotion significant region detection | |
CN104361621A (en) | Motion editing method based on four-dimensional spherical trajectory | |
Qiu et al. | Multi-scale Fusion for Visible Watermark Removal | |
Wu et al. | Marker-removal Networks to Collect Precise 3D Hand Data for RGB-based Estimation and its Application in Piano | |
CN117094895B (en) | Image panorama stitching method and system | |
CN108170270A (en) | A kind of gesture tracking method of VR helmets |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191001 |