US11830195B2 - Training label image correction method, trained model creation method, and image analysis device - Google Patents
- Publication number: US11830195B2 (application US17/265,760)
- Authority
- US
- United States
- Prior art keywords
- label
- image
- training
- areas
- label image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G06T7/11 — Region-based segmentation
- G06T7/00 — Image analysis
- G06T7/0012 — Biomedical image inspection
- G06T2207/20081 — Training; Learning
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N20/00 — Machine learning
- G06N20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N20/20 — Ensemble learning
- G06N3/045 — Neural networks; Combinations of networks
- G06N3/08 — Neural networks; Learning methods
- G06V10/421 — Global feature extraction by analysing segments intersecting the pattern
- G06V10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V20/695 — Microscopic objects, e.g. biological cells: preprocessing, e.g. image segmentation
- G06V20/698 — Microscopic objects, e.g. biological cells: matching; classification
- G06V2201/033 — Recognition of patterns in medical or anatomical images of skeletal patterns
Definitions
- the present invention relates to a training label image correction method, a trained model creation method, and an image analysis device.
- training label image correction in machine learning is known.
- Such training label image correction is disclosed in Japanese Patent Laid-Open No. 2014-022837, for example.
- Japanese Patent Laid-Open No. 2014-022837 discloses constructing, by machine learning, a classifier (trained model) that performs image classification: it assigns a label indicating whether a detection target appears (a positive example) or does not appear (a negative example) in a given video section of video data such as a TV program. Training data including video and a label indicating whether it is a positive or negative example is used to construct the classifier.
- the label of the training data may contain errors and/or omissions. Therefore, Japanese Patent Laid-Open No. 2014-022837 discloses that the label of the initial training data, or of training data whose label has already been set, is corrected based on a user input or on the result of detecting the training data by another classifier.
- the classifier disclosed in Japanese Patent Laid-Open No. 2014-022837 simply classifies the input video and assigns a label in units of video. As a technique different from such whole-image classification, there is image segmentation.
- image segmentation is a technique for dividing an image into a plurality of areas rather than labeling the image as a whole: an input image is divided into a plurality of label areas by assigning a label indicating a detection target to each area in which the detection target appears.
- unlike simple area division (boundary detection, such as edge detection) that does not assign labels, the image segmentation in this specification is called semantic segmentation because a label gives a meaning to each area.
- for machine learning of such segmentation, an input image and a training label image in which the input image has been divided into areas for each label are used.
- the training label image in this case is created, for example, by extracting the label areas from an original image by a threshold process.
- creation of the training label image is shared by a large number of label creators, and each label creator manually adjusts a threshold to create the training label image.
- the creation of the training label image is subject to individual differences in threshold setting, for example, so a given label area may be extracted too large or too small. Furthermore, depending on the image, the boundaries of the label areas may be ambiguous in the first place, in which case the boundaries tend to vary between creators. Therefore, the boundaries in the training label image need to be corrected.
- the present invention is intended to solve at least one of the above problems.
- the present invention aims to provide a training label image correction method, a trained model creation method, and an image analysis device capable of easily performing correction of a training label image used for machine learning of an image segmentation process while ensuring the accuracy.
- the training label image correction method is a training label image correction method in machine learning for performing a segmentation process on an image, and includes performing the segmentation process on an input image of training data including the input image and a training label image by a trained model using the training data to create a determination label image divided into a plurality of label areas, comparing labels of corresponding portions in the created determination label image and the training label image with each other, and correcting the label areas included in the training label image based on label comparison results.
- the training label image can be corrected based on a comparison between the determination label image created by the trained model trained using the training data including the training label image containing variations in the boundaries of the label areas and the training label image. That is, under ordinary circumstances, machine learning is performed on the premise that the label areas of the training label image are correct, and the trained model is created.
- in the segmentation results (determination label image) produced by a trained model that has undergone sufficient training, the boundary portions of the label areas reflect an intermediate result of the boundary variations across many training label images, and thus the validity of the boundaries in the determination label image can be considered to be higher than that in an individual training label image containing the variations.
- the label areas of the training label image are corrected based on the comparison results between the determination label image and the training label image with reference to the determination label image such that the variations in the boundaries can be reduced based on consistent criteria (the boundaries of the label areas of the determination label image) by automatic correction using a computer instead of a label correction operator. Accordingly, correction of the training label image used for machine learning of the image segmentation process can be simplified while ensuring the accuracy.
- the correcting the label areas may include correcting ranges of the label areas in the training label image so as to be closer to the label areas of the determination label image based on the label comparison results. Accordingly, the ranges of the label areas in the training label image are simply matched with the label areas of the determination label image or are simply set to intermediate ranges between the label areas of the training label image and the label areas of the determination label image, for example, such that a correction can be easily and effectively made to reduce the variations in the label areas of the individual training label image.
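As an illustration, one possible way to move a training label area toward the determination label area can be sketched in Python (NumPy). The patent does not prescribe a particular formula; `toward_determination` and its "intermediate range" construction (agreement pixels grown by one pixel, clipped to the union) are assumptions chosen for clarity:

```python
import numpy as np

def toward_determination(train_mask: np.ndarray, det_mask: np.ndarray,
                         match_exactly: bool = False) -> np.ndarray:
    """Move a training label area closer to the determination label area.

    match_exactly=True simply adopts the determination label area.
    Otherwise an intermediate range is returned: the pixels on which both
    images agree, grown by one pixel but clipped to their union (one of
    many possible "intermediate" constructions).
    """
    if match_exactly:
        return det_mask.copy()
    core = train_mask & det_mask            # pixels both images agree on
    grown = core.copy()                     # one-pixel, 4-connected dilation
    grown[1:, :] |= core[:-1, :]
    grown[:-1, :] |= core[1:, :]
    grown[:, 1:] |= core[:, :-1]
    grown[:, :-1] |= core[:, 1:]
    return grown & (train_mask | det_mask)  # never leave the union
```

The result always lies between the intersection and the union of the two label areas, which is the sense in which it is "intermediate".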
- the correcting the label areas may include correcting the label areas by at least one of expansion or contraction of the label areas in the training label image. Accordingly, the label areas of the training label image can be corrected by a simple process of expanding or contracting the ranges of the label areas in the training label image based on the label comparison results.
- the expansion of the label areas indicates increasing the areas of the label areas, and the contraction of the label areas indicates reducing the areas of the label areas.
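The expansion and contraction described above correspond to standard morphological dilation and erosion. A minimal NumPy sketch (one-pixel step, 4-connected neighborhood, simplified border handling) might look like this:

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """Expand a binary label area by one pixel (4-connected)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # take in the pixel above
    out[:-1, :] |= mask[1:, :]   # ... below
    out[:, 1:] |= mask[:, :-1]   # ... to the left
    out[:, :-1] |= mask[:, 1:]   # ... to the right
    return out

def erode(mask: np.ndarray) -> np.ndarray:
    """Contract a binary label area by one pixel (4-connected)."""
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]   # keep only pixels whose neighbors
    out[:-1, :] &= mask[1:, :]   # all carry the label
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    return out
```

Library routines such as `scipy.ndimage.binary_dilation`/`binary_erosion` implement the same operations with configurable structuring elements.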
- the comparing the labels may include acquiring a matching portion and a non-matching portion with the training label image in the determination label image for a label of interest by comparing the determination label image with the training label image, and the correcting the label areas may include correcting a range of a label area with the label of interest based on the matching portion and the non-matching portion.
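The matching and non-matching portions for a label of interest reduce to elementwise boolean operations; a sketch (the per-pixel granularity is one option the specification allows):

```python
import numpy as np

def compare_labels(det_label_img: np.ndarray, train_label_img: np.ndarray,
                   label_of_interest) -> tuple:
    """Compare the determination and training label images for one label of
    interest; return boolean maps of the matching and non-matching portions."""
    det = det_label_img == label_of_interest
    train = train_label_img == label_of_interest
    return det & train, det ^ train   # matching, non-matching
```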
- the comparing the labels may include acquiring a non-detection evaluation value for evaluating undetected labels in the determination label image and a false-detection evaluation value for evaluating falsely detected labels in the determination label image based on the matching portion and the non-matching portion, and the correcting the label areas may include correcting the label areas of the training label image based on a comparison between the non-detection evaluation value and the false-detection evaluation value.
- Non-detection indicates that a portion with a label of interest in the training label image has not been detected as the corresponding label area in the determination label image, and when there are many undetected areas, it is estimated that the label areas of the training label image are larger than the label areas of the determination label image.
- False detection indicates that a different label has been assigned to a portion in the training label image corresponding to a portion detected as a label area of interest in the determination label image, and when there are many falsely detected areas, it is estimated that the label areas of the training label image are smaller than the label areas of the determination label image. Therefore, the non-detection evaluation value and the false-detection evaluation value are compared such that it is possible to understand how the label areas of the training label image should be corrected (whether it should be increased or decreased), and thus the label areas of the individual training label image can be more appropriately corrected.
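The two evaluation values above can be realized, in the simplest case, as pixel counts of the two kinds of disagreement; the patent leaves the concrete metric open, so plain counts are an assumption here:

```python
import numpy as np

def evaluation_values(train_mask: np.ndarray, det_mask: np.ndarray) -> tuple:
    """Non-detection / false-detection evaluation values for one label,
    taken here simply as pixel counts."""
    undetected = int((train_mask & ~det_mask).sum())      # labeled in training, missed by model
    false_detected = int((~train_mask & det_mask).sum())  # detected by model, absent in training
    return undetected, false_detected
```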
- the correcting the label areas may include expanding the label areas of the training label image when it is determined by the comparison between the non-detection evaluation value and the false-detection evaluation value that there are more falsely detected labels than undetected labels in the determination label image. Accordingly, when there are many falsely detected areas and it is estimated that the label areas of the training label image are smaller than the label areas of the reference determination label image, the label areas of the training label image are expanded such that a correction can be easily made to reduce the variations in the boundaries of the label areas.
- the correcting the label areas may include contracting the label areas of the training label image when it is determined by the comparison between the non-detection evaluation value and the false-detection evaluation value that there are more undetected labels than falsely detected labels in the determination label image. Accordingly, when there are many undetected areas and it is estimated that the label areas of the training label image are larger than the label areas of the reference determination label image, the label areas of the training label image are contracted such that a correction can be easily made to reduce the variations in the boundaries of the label areas.
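The two rules above combine into one correction step. In this sketch the evaluation values are plain pixel counts and the correction is a one-pixel morphological change; both are illustrative choices, not requirements of the claims:

```python
import numpy as np

def correct_label_area(train_mask: np.ndarray, det_mask: np.ndarray) -> np.ndarray:
    """Expand the training label area when false detections dominate,
    contract it when non-detections dominate."""
    undetected = (train_mask & ~det_mask).sum()
    false_detected = (~train_mask & det_mask).sum()
    out = train_mask.copy()
    if false_detected > undetected:      # training area estimated too small
        out[1:, :] |= train_mask[:-1, :]   # one-pixel dilation
        out[:-1, :] |= train_mask[1:, :]
        out[:, 1:] |= train_mask[:, :-1]
        out[:, :-1] |= train_mask[:, 1:]
    elif undetected > false_detected:    # training area estimated too large
        out[1:, :] &= train_mask[:-1, :]   # one-pixel erosion
        out[:-1, :] &= train_mask[1:, :]
        out[:, 1:] &= train_mask[:, :-1]
        out[:, :-1] &= train_mask[:, 1:]
    return out
```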
- the aforementioned configuration including acquiring the non-detection evaluation value and the false-detection evaluation value may further include excluding, from the training data set, training data whose training label image is determined to have more than a predetermined threshold number of undetected labels, more than a predetermined threshold number of falsely detected labels, or both. Accordingly, when the number of undetected labels and/or the number of falsely detected labels is excessively large in the comparison with the reference determination label image, the training label image is considered to be inappropriate as training data.
- the training label image in which at least one of the number of undetected labels or the number of falsely detected labels is large is excluded from the training data set such that the training data corresponding to a so-called statistical outlier that is a factor that lowers the accuracy of machine learning can be excluded.
- the quality of the training data set can be improved, and highly accurate machine learning can be performed.
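A minimal sketch of this exclusion step, assuming the non-detection and false-detection counts have already been computed per training item (the dictionary keys and thresholds are hypothetical):

```python
def exclude_outliers(training_set: list, max_undetected: int,
                     max_false_detected: int) -> list:
    """Keep only training items whose label image does not disagree too much
    with the determination label image (counts precomputed per item)."""
    return [
        item for item in training_set
        if item["undetected"] <= max_undetected
        and item["false_detected"] <= max_false_detected
    ]
```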
- the aforementioned training label image correction method may further include setting priorities between labels of the training label image, and the correcting the label areas may include adjusting amounts of correction of the label areas according to the priorities.
- the segmentation process is to distinguish and label an area in an image in which a detection target appears, and thus different priorities can be set for the labels according to the purpose of the process in order to distinguish between the detection target (priority: high) and a non-detection target (priority: low), for example. Therefore, with the configuration described above, the amount of correction of a label with a higher priority can be intentionally biased such that detection omissions can be significantly reduced or prevented as much as possible, for example. Consequently, the training label image can be appropriately corrected according to the purpose of the segmentation process.
- the correcting the label areas may include correcting the label areas of the training label image while expansion of the label areas having a lower priority to the label areas having a higher priority is prohibited. Accordingly, even when the label areas having a lower priority are corrected by expansion, the label areas having a higher priority can be preserved. Therefore, it is possible to obtain the training label image (training data) capable of being learned, in which detection omissions of the label areas with a higher priority are significantly reduced or prevented while the variations in the boundaries are significantly reduced or prevented.
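The prohibition can be enforced by masking out the higher-priority area after growing the lower-priority one; a sketch with a one-pixel expansion step (an assumption, as before):

```python
import numpy as np

def expand_low_priority(low_mask: np.ndarray, high_mask: np.ndarray) -> np.ndarray:
    """Expand a lower-priority label area by one pixel while prohibiting
    expansion into the higher-priority label area."""
    grown = low_mask.copy()
    grown[1:, :] |= low_mask[:-1, :]
    grown[:-1, :] |= low_mask[1:, :]
    grown[:, 1:] |= low_mask[:, :-1]
    grown[:, :-1] |= low_mask[:, 1:]
    return grown & ~high_mask   # the higher-priority area is preserved
```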
- the correcting the label areas may include dividing the training label image into a plurality of partial images, and correcting the label areas for at least one partial image of the divided training label image. Accordingly, only a specific portion of the training label image can be corrected, or each small portion can be corrected individually.
- the comparing the labels may include comparing the labels of the corresponding portions of the determination label image and the training label image with each other for every image pixel or every plurality of adjacent image pixels. Accordingly, the determination label image and the training label image can be compared with each other for each small area of a unit of one pixel or a unit of multiple pixels, and thus the variations in the label areas of the training label image can be evaluated more accurately.
- training label image correction method may further include creating the determination label image by the trained model using the training data including the corrected training label image, and the correcting the label areas may include correcting the label areas included in the corrected training label image again based on comparison results between the created determination label image and the corrected training label image. Accordingly, machine learning is performed using the corrected training label image, and then the determination label image is created again such that the label areas of the training label image can be repeatedly corrected. The label areas are repeatedly corrected such that the variations in the label areas of the training label image can be further reduced.
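The repeated train-then-correct cycle can be expressed as a small driver loop. The three function arguments are stand-ins for the patent's model training, segmentation, and label-area correction steps, not a specific API:

```python
def iterate_correction(input_images, train_labels, train_fn, predict_fn,
                       correct_fn, rounds=2):
    """Repeat: (re)train on the current labels, re-run segmentation to get
    determination label images, then correct the training labels."""
    labels = list(train_labels)
    for _ in range(rounds):
        model = train_fn(input_images, labels)
        determinations = [predict_fn(model, x) for x in input_images]
        labels = [correct_fn(t, d) for t, d in zip(labels, determinations)]
    return labels
```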
- a trained model creation method is a trained model creation method by machine learning for performing a segmentation process on an image, and includes acquiring a pre-trained model by the machine learning using training data including an input image and a training label image, performing the segmentation process on the input image of the training data by the acquired pre-trained model to create a determination label image divided into a plurality of label areas, comparing labels of corresponding portions in the created determination label image and the training label image with each other, correcting the label areas included in the training label image based on label comparison results, and creating a trained model by performing the machine learning using the training data including the corrected training label image.
- the creating the trained model can include both (1) creating the trained model by performing the machine learning (additional training) on the pre-trained model using the training data including the corrected training label image and (2) creating the trained model by performing the machine learning on an untrained training model using the training data including the corrected training label image.
- the label areas of the training label image are corrected based on the comparison results between the determination label image and the training label image with reference to the determination label image such that the variations in the boundaries can be reduced based on consistent criteria (the boundaries of the label areas of the determination label image) even by automatic correction using a computer. Accordingly, correction of the training label image used for machine learning of the image segmentation process can be simplified while ensuring the accuracy. Furthermore, the machine learning is performed on the pre-trained model using the training data including the corrected training label image such that the trained model with significantly reduced determination variations in the boundary portions of the label areas, which is capable of a high-quality segmentation process, can be obtained.
- An image analysis device includes an image input configured to receive an input of an analysis image, an analysis processor configured to perform a segmentation process on the analysis image using a trained model by machine learning to create a label image divided into a plurality of label areas, a storage configured to store the trained model, a determination image creating unit configured to perform the segmentation process on an input image of training data including the input image and a training label image with the trained model stored in the storage as a pre-trained model to create a determination label image, a comparator configured to compare labels of corresponding portions in the determination label image created by the determination image creating unit and the training label image with each other, and a label corrector configured to correct the label areas included in the training label image based on label comparison results by the comparator.
- the label areas of the training label image are corrected based on the comparison results between the determination label image and the training label image with reference to the determination label image created by the pre-trained model such that the variations in the boundaries can be reduced based on consistent criteria (the boundaries of the label areas of the determination label image) even by automatic correction using a computer. Accordingly, correction of the training label image used for machine learning of the image segmentation process can be simplified while ensuring the accuracy.
- FIG. 1 is a diagram for illustrating a training label image correction method according to a first embodiment.
- FIG. 2 is a diagram showing the overview of machine learning and a segmentation process.
- FIG. 3 is a diagram showing examples of an X-ray image of a bone and a label image of two classes.
- FIG. 4 is a diagram showing examples of a cell image and a label image of three classes.
- FIG. 5 is a diagram for illustrating a training label image.
- FIG. 6 is a graph for illustrating a change in the loss function with an increase in the number of times of learning.
- FIG. 7 is a diagram showing examples of an input image, a determination image, and the training label image.
- FIG. 8 is a diagram for illustrating a variation in the boundary of a label area of the training label image.
- FIG. 9 is a diagram for illustrating correction of label areas of the training label image.
- FIG. 10 is a flowchart for illustrating a method for correcting the label areas of the training label image.
- FIG. 11 is a diagram for illustrating a method for comparing the training label image with the determination image.
- FIG. 12 is a schematic view showing an example of contracting the label area of the training label image.
- FIG. 13 is a schematic view showing an example of expanding the label area of the training label image.
- FIG. 14 is a flowchart for illustrating a trained model creation method.
- FIG. 15 is a block diagram for illustrating a training data processor.
- FIG. 16 is a block diagram showing a first example of an image analysis device.
- FIG. 17 is a flowchart for illustrating an image analysis process of the image analysis device.
- FIG. 18 is a block diagram showing a second example of an image analysis device.
- FIG. 19 is a diagram for illustrating the label priorities in a second embodiment.
- FIG. 20 is a diagram for illustrating adjustment of the amounts of correction of label areas according to the label priorities.
- FIG. 21 is a diagram showing a modified example in which label area correction is performed for each partial image.
- FIG. 22 is a schematic view showing a modified example in which training label image correction and trained model creation are performed on the server side.
- FIG. 23 is a schematic view showing a modified example in which training label image correction is performed on the image analysis device side and trained model creation is performed on the server side.
- a training label image correction method, a trained model creation method, and an image analysis device are now described with reference to FIGS. 1 to 13 .
- the training label image correction method according to the first embodiment shown in FIG. 1 is a method for correcting a training label image in machine learning for performing an image segmentation process.
- performing the segmentation process may be paraphrased as “performing area division”.
- the image segmentation process is performed by a trained model 2 created by machine learning.
- the trained model 2 performs the segmentation process on an input image (an analysis image 15 or an input image 11 ) and outputs a label image (a label image 16 or a determination label image 14 ) divided into a plurality of label areas 13 (see FIGS. 3 and 4 ).
- as a machine learning method, any method such as a fully convolutional network (FCN), a neural network, a support vector machine (SVM), or boosting can be used.
- Such a trained model 2 includes an input layer into which an image is input, a convolution layer, and an output layer.
- machine learning is performed using a training data set 1 (see FIG. 1 ) that includes a large number of training data 10 .
- the training data 10 used for machine learning includes at least the input image 11 and a training label image 12 .
- the input image 11 is an original image before performing the segmentation process.
- the training label image 12 is created as a correct image to be generated as a result of the segmentation process on the input image 11 . That is, the training label image 12 is a label image obtained by dividing the input image 11 into a plurality of label areas 13 .
- Each of the label areas 13 is an area (a portion of an image) including a group of pixels with a common label in an image.
- the label is information indicating a meaning indicated by an image portion including the label area 13 . Segmentation is performed by assigning a label to each pixel in an image.
- the label may be assigned in units of a group (pixel group) of a plurality of pixels.
- the type of label is called a class.
- the number of classes is not particularly limited as long as there are a plurality of (two or more) classes.
- the classes include two classes, a “detection target” label and a “non-detection target” label.
- in FIG. 3, an example is shown in which, using an X-ray image of the human pelvis (a base of the femur) as the input image 11, area division into a label area 13 a of a “bone” as a detection target and a label area 13 b of a “portion other than the bone” has been performed.
- FIG. 4 shows an example in which using an image (cell image) of a pluripotent stem cell, such as an iPS cell or an ES cell, as the input image 11 , area division into three classes is performed.
- area division into three classes including a label area 13 c of an “undifferentiated cell”, which is a cell that maintains pluripotency, a label area 13 d of an “undifferentiated deviant cell”, which is a cell that has deviated from the undifferentiated state (a cell that has started differentiation or is likely to differentiate), and a label area 13 e of a “background” other than those has been performed.
- the input image 11 is input to a training model, the training label image 12 is output, and a conversion process (segmentation process) from the input image 11 to the training label image 12 , which is a correct answer, is learned by the training model.
- the trained model 2 capable of the segmentation process is generated by performing machine learning. Consequently, the analysis image 15 to be analyzed is input into the trained model 2 such that the segmentation process for the analysis image 15 is performed using the trained model 2 by machine learning, and the label image divided into the plurality of label areas 13 is output.
- FIG. 5 shows an example of creating the training label image 12 in which area division into three classes has been performed with respect to the input image 11 of the pluripotent stem cell shown in FIG. 4 .
- a cell membrane staining image 91 in which a cell area has been stained with a staining agent and a nuclear staining image 92 in which a nuclear staining area of an undifferentiated cell has been stained with an undifferentiated marker are acquired, and after the cell membrane staining image 91 and the nuclear staining image 92 are binarized by a threshold process, a difference between the two images is acquired such that the training label image 12 is created.
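The binarize-and-difference procedure described above can be sketched as follows. This is an illustrative sketch only: the function name, the threshold values, and the class numbering are assumptions, not part of the described method.

```python
import numpy as np

def make_training_label_image(membrane_img, nuclear_img,
                              membrane_thresh=0.5, nuclear_thresh=0.5):
    """Binarize the two staining images by a threshold process and take
    their difference to form a 3-class training label image.

    Assumed labels: 0 = background, 1 = undifferentiated cell (nuclear
    marker present), 2 = undifferentiated deviant cell (cell area only).
    """
    cell_area = membrane_img >= membrane_thresh   # cell membrane staining image 91
    undiff = nuclear_img >= nuclear_thresh        # nuclear staining image 92
    label = np.zeros(membrane_img.shape, dtype=np.uint8)
    label[cell_area] = 2                          # cell area without marker
    label[cell_area & undiff] = 1                 # difference: undifferentiated
    return label
```

As the surrounding text notes, the thresholds must be tuned to the degree of staining, which is exactly where the boundary variations of the label areas originate.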
- the training label image 12 is created based on a staining image of a cell, and thus there is an inevitable variation in the degree of staining. Therefore, it is necessary to manually adjust a threshold according to the degree of staining, and depending on setting of the threshold, a boundary portion of the cell becomes relatively large or small with respect to another training label image 12 and varies.
- in the cell membrane staining image 91 , a cytoskeleton as well as a nucleus is stained, whereas in the nuclear staining image 92 , only a nucleus is stained, and thus a stained area may differ slightly between an undifferentiated deviant cell and an undifferentiated cell.
- Such a difference may cause variations in the boundaries (sizes) of the label areas 13 in the training label image 12 .
- in the case of the X-ray image of the bone shown in FIG. 3 , there is no difference in a stained area due to a difference in a staining method, but variations in the boundary portions due to the threshold process occur similarly to the example of the cell image.
- when the boundaries of the label areas 13 in individual training label images 12 vary, the criteria for which portions in an image are defined as boundaries also vary. Therefore, in the trained model 2 that has been trained using the training data set 1 including training label images 12 in which the boundaries of the label areas 13 vary, the boundaries of the label areas 13 may be blurred or may have an unnatural shape in the generated label image 16 (see FIG. 2 ).
- the trained model 2 (hereinafter referred to as a pre-trained model 2 a ) trained using the training data 10 including the training label image 12 in which the boundaries of the label areas 13 have varied is used to correct the label areas 13 of the training label image 12 .
- the method for correcting the training label image in the first embodiment includes the steps shown in FIG. 1 , which are described in detail below.
- the pre-trained model 2 a that generates the determination label image 14 is a trained model trained using the training data 10 including the training label image 12 in which the boundaries of the label areas 13 vary.
- as the trained model 2 (pre-trained model 2 a ) that generates the determination label image 14 , a trained model that has been trained using the training data set 1 including the training data 10 having a sufficient score, and in which the training has appropriately converged, is used. That is, in the pre-trained model 2 a , machine learning itself has been performed appropriately, and the segmentation process can be performed with sufficient accuracy except at the boundaries of the label areas 13 . In the field of machine learning, the accuracy of the pre-trained model 2 a can be evaluated by obtaining an error (loss function) between the segmentation result (here, the determination label image 14 ) for each input image 11 and the training label image 12 of that input image 11 .
- as shown in the figure, the value of the loss function decreases with the number of times of training and converges at a value corresponding to a limit caused by various inevitable factors, including the variations in the boundaries of the label areas 13 .
- in that figure, the vertical axis represents the loss function logarithmically, and the horizontal axis represents the number of training iterations.
- the training label image 12 is a training label image created according to consistent criteria to the extent that training of the pre-trained model 2 a can appropriately converge. That is, a large number of training label images 12 included in the training data set 1 have variations in the boundaries of the label areas 13 due to a factor such as the above threshold process, but statistical bias is small enough to allow training to appropriately converge. As can be seen from the images of the pelvis shown in FIG. 3 and the cell images shown in FIG. 4 , a ratio of the boundary portions to the entire label areas 13 is very small, and the variations in the boundaries do not interfere with training convergence.
- the input image 11 is input to the pre-trained model 2 a , and the segmentation process is performed such that the determination label image 14 is created as an output.
- the determination label image 14 is an image obtained by dividing the input image 11 into a plurality of label areas 13 , similarly to the training label image 12 .
- the labels of the corresponding portions in the determination label image 14 and the training label image 12 are compared with each other.
- the corresponding portions refer to portions that can be regarded as images of the same portions in the determination label image 14 and the training label image 12 , and indicate the same coordinates as long as the imaging field of view of each image is the same.
- the label comparison between the corresponding portions of the determination label image 14 and the training label image 12 can be performed for every image pixel or for every plurality of adjacent image pixels.
- the plurality of adjacent pixels include a certain pixel and surrounding pixels, and may be a square area of 4 (2 ⁇ 2) pixels or 9 (3 ⁇ 3) pixels, for example.
- the labels of the corresponding portions of the determination label image 14 and the training label image 12 are compared with each other for each pixel.
- a matching portion and a non-matching portion of the determination label image 14 with the training label image 12 are acquired for a label of interest by comparing the determination label image 14 with the training label image 12 .
- a range of the label area 13 with the label of interest is corrected based on the matching portion and the non-matching portion as the comparison results.
- the label areas 13 included in the training label image 12 are corrected by replacing the labels assigned to the label areas 13 to be corrected with other labels, for example. As shown in FIG. 1 , the label areas 13 are corrected such that a corrected training label image 12 a is created. The created corrected training label image 12 a replaces the original (uncorrected) training label image 12 in the training data 10 (that is, the training label image 12 is updated).
- the training label image 12 in which the boundaries of the label areas 13 vary in a direction in which the label areas 13 become relatively large and the training label image 12 in which the boundaries of the label areas 13 vary in a direction in which the label areas 13 become relatively small are distributed substantially evenly (exhibit substantially normal distribution), as shown in FIG. 8 .
- the horizontal axis of FIG. 8 represents the magnitude of the variation, and the vertical axis represents the number (frequency) of training label images having the corresponding variation.
- the boundaries of the label areas 13 of the determination label image 14 output by the pre-trained model 2 a are expected to be consistently intermediate results in the distribution of FIG. 8 . Therefore, the boundary portions of the label areas 13 of an individual training label image 12 can be regarded as varying, with respect to the label areas 13 of the determination label image 14 generated by the pre-trained model 2 a , by their differences from the determination label image 14 .
- the ranges of the label areas 13 in the training label image 12 are brought closer to the label areas 13 of the determination label image 14 as shown in FIG. 9 based on the comparison results of the labels.
- the correction of the label areas 13 is performed by at least one of expansion or contraction of the label areas 13 in the training label image 12 . That is, for the training label image 12 in which the label area 13 is set relatively large (see a boundary 18 b ) with respect to a boundary 18 a of the label area 13 of the determination label image 14 , a correction is made to contract the label area 13 , and for the training label image 12 in which the label area 13 is set relatively small, a correction is made to expand the label area 13 .
- the label areas 13 can be corrected by a morphology process, for example.
- the morphology process determines pixel values of corresponding pixels in the output image (here, the corrected training label image 12 a ) based on a comparison between pixels of the input image (here, the training label image 12 ) and pixels adjacent thereto.
- when an expansion process is performed, the same label as that of the label area 13 is assigned to the pixels at the boundary of the label area 13 detected by the comparison with the adjacent pixels.
- when a contraction process is performed, the label is removed from the pixels at the boundary of the label area 13 detected by the comparison with the adjacent pixels, and the same label as the label outside the boundary is assigned.
- the amount of correction by the morphology process (how many pixels to expand or contract the boundary) is not particularly limited, and a correction can be made for one pixel or a plurality of pixels.
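A minimal sketch of such a one-pixel morphology step, using a 4-neighborhood and plain NumPy; the function names are illustrative, and a real implementation might instead use a dedicated image-processing library:

```python
import numpy as np

def dilate(mask):
    """Expansion process: assign the label to pixels adjacent to the
    boundary of the label area (4-neighborhood, one pixel)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]   # propagate label downward
    out[:-1, :] |= mask[1:, :]   # propagate label upward
    out[:, 1:] |= mask[:, :-1]   # propagate label rightward
    out[:, :-1] |= mask[:, 1:]   # propagate label leftward
    return out

def erode(mask):
    """Contraction process: remove the label from boundary pixels.
    Erosion is dilation of the complement, complemented again."""
    return ~dilate(~mask)
```

Correcting the boundary by N pixels, as the text allows, corresponds to applying the chosen operation N times.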
- in step S 1 , the training data 10 including the training label image 12 to be corrected is selected from the training data set 1 .
- Step S 2 is the above step of creating the determination label image 14 .
- the determination label image 14 is created from the input image 11 by the pre-trained model 2 a.
- Step S 3 is the above step of comparing the labels of the determination label image 14 and the training label image 12 .
- as an example, the label comparison is described using a confusion matrix 21 of two-class (positive and negative) classification shown in FIG. 11 .
- the vertical direction of the matrix shows the result of the segmentation in the training label image 12
- the horizontal direction of the matrix shows the result of the segmentation in the determination label image 14 .
- TP represents the total number of matching pixels between the determination label image 14 and the training label image 12 to which a “positive” label has been assigned.
- TN represents the total number of matching pixels between the determination label image 14 and the training label image 12 to which a “negative” label has been assigned.
- TP and TN each represent the number of pixels in the matching portion.
- FP represents the total number of pixels to which a “positive” label has been assigned in the determination label image 14 and a “negative” label has been assigned in the training label image 12 .
- FN represents the total number of pixels to which a “negative” label has been assigned in the determination label image 14 and a “positive” label has been assigned in the training label image 12 . Therefore, FP and FN each represent the number of pixels in the non-matching portion.
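The per-pixel tallies TP, TN, FP, and FN described above can be computed directly from the two label images. This sketch assumes boolean arrays (True = "positive") over the same imaging field of view, so corresponding pixels share the same coordinates:

```python
import numpy as np

def confusion_counts(determination, training):
    """Count matching and non-matching pixels between the determination
    label image and the training label image."""
    tp = int(np.sum(determination & training))    # matching "positive" pixels
    tn = int(np.sum(~determination & ~training))  # matching "negative" pixels
    fp = int(np.sum(determination & ~training))   # falsely detected pixels
    fn = int(np.sum(~determination & training))   # undetected pixels
    return tp, tn, fp, fn
```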
- an evaluation value for evaluating the degree of deviation between the determination label image 14 and the training label image 12 is calculated. Specifically, in the step of comparing the labels, a non-detection evaluation value 22 for evaluating undetected labels and a false-detection evaluation value 23 for evaluating falsely detected labels in the determination label image 14 are acquired based on the matching portion and the non-matching portion.
- the label areas 13 of the training label image 12 are corrected (assuming that it is incorrect), but the non-detection evaluation value 22 and the false-detection evaluation value 23 are evaluation values for evaluating the undetected and falsely detected labels in the determination label image 14 assuming that each label area 13 of the training label image 12 is correct.
- the non-matching portion with the training label image 12 in the determination label image 14 is a portion that is undetected or falsely detected. Therefore, the non-detection evaluation value 22 and the false-detection evaluation value 23 are values for numerically evaluating matching and non-matching with the training label image 12 in the determination label image 14 .
- the non-detection evaluation value 22 and the false-detection evaluation value 23 are not particularly limited as long as the same are numerical values indicating many (or few) undetected labels and many (or few) falsely detected labels, respectively.
- the non-detection evaluation value 22 is a detection rate (recall or true positive rate; TPR) expressed by the following formula (1).
- the detection rate is also called sensitivity.
- K = TP/(TP+FN)  (1)
- the detection rate refers to the “ratio of items (pixels) correctly classified as positive to items (pixels) that should be classified as positive”, and indicates how few undetected labels there are. That is, when the detection rate is used as the non-detection evaluation value 22 , a larger value indicates fewer undetected labels.
- the false-detection evaluation value 23 is a precision expressed by the following formula (2).
- G = TP/(TP+FP)  (2)
- the precision refers to the “ratio of items that are actually positive among the items classified as positive”, and indicates how few falsely detected labels there are. That is, when the precision is used as the false-detection evaluation value 23 , a larger value indicates fewer falsely detected labels.
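Formulas (1) and (2) translate directly into code; a sketch using the pixel counts from the confusion matrix (variable names are illustrative):

```python
def detection_rate(tp, fn):
    """Non-detection evaluation value K = TP / (TP + FN), formula (1)."""
    return tp / (tp + fn)

def precision(tp, fp):
    """False-detection evaluation value G = TP / (TP + FP), formula (2)."""
    return tp / (tp + fp)
```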
- whether the label areas 13 of the training label image 12 are set relatively large or relatively small can be determined by comparing the non-detection evaluation value 22 and the false-detection evaluation value 23 . Therefore, in the step of correcting the label areas 13 , the label areas 13 of the training label image 12 are corrected based on the comparison of the non-detection evaluation value 22 and the false-detection evaluation value 23 .
- the relationship between the comparison results of the non-detection evaluation value 22 and the false-detection evaluation value 23 and the size of each label area of the determination label image 14 and the training label image 12 is described using the simplified virtual examples shown in FIGS. 12 and 13 .
- a colored portion shows the label area 13 of each image
- a hatched portion shows a label area 13 f of the determination label image 14 superimposed on the training label image 12 .
- when the label area 13 g with a positive label in the training label image 12 is larger than the label area 13 f with a positive label in the determination label image 14 , the boundary of the label area 13 g is outside the label area 13 f (the outside of the label area 13 f is undetected).
- in this case, in step S 4 of correcting the label areas 13 , the label areas 13 of the training label image 12 are contracted.
- conversely, when the label area 13 g with a positive label in the training label image 12 is smaller than the label area 13 f with a positive label in the determination label image 14 , the boundary of the label area 13 g is inside the label area 13 f (the outside of the label area 13 g is falsely detected).
- in this case, in step S 4 of correcting the label areas 13 , the label areas 13 of the training label image 12 are expanded.
- the ranges of the label areas 13 of the training label image 12 are expanded such that they can be brought closer to the label areas 13 of the determination label image 14 .
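One way to turn the comparison of K and G into a correction decision, consistent with the two cases above. This decision rule is a sketch inferred from the text, with an illustrative tolerance parameter:

```python
def choose_correction(k, g, tol=0.05):
    """Decide the correction direction from the non-detection evaluation
    value K and the false-detection evaluation value G.

    K < G: many undetected pixels, so the training label area is
    relatively large -> contract it. G < K: many falsely detected
    pixels, so it is relatively small -> expand it."""
    if k + tol < g:
        return "contract"
    if g + tol < k:
        return "expand"
    return "keep"
```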
- the criteria for determining that the precision G≈1 and the detection rate K≈1 may be 0.8 or more, or 0.9 or more, for example.
- a step of excluding the training data 10 including the training label image 12 determined to have at least one of more than a predetermined threshold number of undetected labels or more than a predetermined threshold number of falsely detected labels from the training data set 1 is further included.
- when at least one of the number of undetected labels or the number of falsely detected labels is greater than the predetermined threshold number, at least one of the detection rate K or the precision G falls below a predetermined threshold Th (K&lt;Th and/or G&lt;Th).
- the predetermined threshold can be set according to the number of classes or the accuracy (loss function) of the pre-trained model 2 a . For example, in the case of two-class classification, the threshold can be set to 0.5.
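The exclusion step can be sketched as a simple filter over the training data set; the tuple layout and the threshold value are assumptions for illustration:

```python
def exclude_poor_training_data(records, th=0.5):
    """Exclude training data whose detection rate K or precision G is
    below the predetermined threshold Th (K < Th and/or G < Th),
    i.e., data with too many undetected or falsely detected labels.

    `records` is an assumed list of (training_data_id, K, G) tuples."""
    return [(i, k, g) for (i, k, g) in records if k >= th and g >= th]
```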
- in step S 4 , the label areas 13 of the training label image 12 are corrected.
- Corrections can be made on the training label images 12 of all the training data 10 included in the training data set 1 . That is, in step S 5 , it is determined whether or not the training label images 12 of all the training data 10 have been corrected. When there is uncorrected training data (training label image 12 ), the process returns to step S 1 , and the operations in step S 2 to step S 4 are performed on the next selected training data 10 . When the correction process for the training label images 12 of all the training data 10 is completed, the correction process for the trained model 2 is completed. In addition, a correction can be made only on a specific training label image 12 .
- thresholds for determining whether or not a correction is made are set for the loss function or the non-detection evaluation value 22 and the false-detection evaluation value 23 such that a correction can be made only on a specific training label image 12 having the loss function, or the non-detection evaluation value 22 and the false-detection evaluation value 23 that exceed the thresholds.
- the label areas 13 of the training label image 12 can be corrected once or a plurality of times.
- the step of creating the determination label image 14 by the trained model 2 using the training data 10 including the corrected training label image 12 (corrected training label image 12 a ) is further included, and in the step of correcting the label areas, the label areas 13 included in the corrected training label image 12 a are corrected again based on the comparison results between the created determination label image 14 and the corrected training label image 12 a .
- the details are described below.
- the trained model creation method according to the first embodiment includes the following steps.
- FIG. 14 shows a process flow for executing the trained model creation method.
- in step S 11 , the pre-trained model 2 a is acquired.
- in step S 12 , the training label image 12 included in the training data set 1 is corrected. That is, the training label image 12 of each training data 10 is corrected by performing the process operations in step S 1 to step S 5 shown in FIG. 10 .
- in step S 13 , the corrected training data set 1 including the corrected training label image 12 a is acquired.
- in step S 14 , machine learning is performed using the training data set 1 including the corrected training label image 12 a . Consequently, the pre-trained model 2 a is additionally trained using the training data 10 including the corrected training label image 12 a , in which the variations in the boundaries of the label areas 13 have been significantly reduced or prevented, such that the trained model 2 is generated.
- in step S 15 , the created trained model 2 is stored in a storage of the computer.
- the machine learning is not limited to additional training of the pre-trained model 2 a ; machine learning may be performed on another trained model different from the pre-trained model 2 a or on an untrained training model.
- the trained model 2 can be created repeatedly a predetermined number of times. At this time, the correction process for the training label image 12 is also repeatedly performed. That is, in step S 16 , it is determined whether or not the training has been repeated the predetermined number of times. When the predetermined number of times has not been reached, the process returns to step S 11 , and the trained model 2 is created again.
- in step S 11 , the trained model 2 stored in the immediately preceding step S 15 is acquired as the pre-trained model 2 a this time. Then, in step S 12 , the corrected training label image 12 a is corrected again using the acquired pre-trained model 2 a and the training data set 1 including the corrected training label image 12 a.
- in step S 14 , machine learning is performed on the pre-trained model 2 a using the training data set 1 including the re-corrected training label image 12 a .
- the operations in step S 11 to step S 15 are repeated in this manner such that the trained model 2 is created by machine learning using the training data set 1 including the updated corrected training label image 12 a , while the training label image 12 included in the training data set 1 is repeatedly corrected and updated.
- in step S 16 , when the training has been repeated the predetermined number of times, the process is terminated.
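The repeated correct-and-retrain flow of steps S 11 to S 16 can be sketched as a loop; `correct_labels` and `train` are assumed stand-ins for the correction process of FIG. 10 and the machine-learning step, not interfaces defined by the document:

```python
def create_trained_model(model, dataset, correct_labels, train, repeats=3):
    """Iteratively correct the training label images and re-train.

    Each pass acquires the current model as the pre-trained model
    (step S11), corrects the training label images with it (steps
    S12-S13), and re-trains on the corrected data set (steps S14-S15);
    the loop count implements the step S16 check."""
    for _ in range(repeats):
        dataset = correct_labels(model, dataset)  # correct label areas
        model = train(model, dataset)             # machine learning
    return model
```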
- the above training label image correction method and trained model creation method are executed by a computer (such as a personal computer, a workstation, or a supercomputer) in which specified software (program) is installed in a storage, or a computer system including a plurality of computers.
- the computer includes a processor such as a CPU, a GPU, or a specially designed FPGA, and a storage such as a ROM, a RAM, or a volatile or non-volatile storage device (such as an HDD or an SSD).
- a processor 50 a of the computer functions as a training data processor 50 to execute the training label image correction method and the trained model creation method by executing programs stored in a storage 54 .
- the training data processor 50 that executes the training label image correction method according to the first embodiment includes functional blocks to perform the operations in the steps (step S 1 to step S 5 in FIG. 10 ) of the training label image correction method, and can execute the training label image correction method by the process using various data (the pre-trained model 2 a and the training data set 1 ) stored in the storage 54 .
- the training data processor 50 that executes the trained model creation method according to the first embodiment includes functional blocks to perform the operations in the steps (step S 11 to step S 16 in FIG. 14 ) of the trained model creation method, and can execute the trained model creation method by the process using various data stored in the storage 54 .
- the training data processor 50 includes a determination image creating unit 51 , a comparator 52 , and a label corrector 53 as the functional blocks.
- the training data processor 50 that executes the trained model creation method according to the first embodiment further includes a training unit 55 as the functional block.
- the training data processor 50 does not need to include the training unit 55 .
- Various data such as the training data set 1 and the trained model 2 (pre-trained model 2 a ) and programs are stored in the storage 54 .
- the training data processor 50 selects the training data 10 in step S 1 of FIG. 10 .
- the determination image creating unit 51 is configured to create the determination label image 14 by performing the segmentation process on the input image 11 of the training data 10 including the input image 11 and the training label image 12 with the trained model 2 stored in the storage 54 as the pre-trained model 2 a .
- the determination image creating unit 51 performs the process operation in step S 11 of FIG. 14 and the process operations in step S 1 and step S 2 of FIG. 10 .
- the comparator 52 is configured to compare the labels of the corresponding portions in the determination label image 14 created by the determination image creating unit 51 and the training label image 12 with each other. That is, the comparator 52 performs the process operation in step S 3 of FIG. 10 .
- the label corrector 53 is configured to correct the label areas 13 included in the training label image 12 based on the label comparison results by the comparator 52 . That is, the label corrector 53 performs the process operation in step S 4 of FIG. 10 .
- the training data processor 50 repeats the selection of the training data 10 and the correction of the training label image 12 until all the training data 10 is selected in step S 5 of FIG. 10 .
- the training unit 55 is configured to create (update) the trained model 2 by performing machine learning using the training data 10 including the corrected training label image 12 a . That is, the training unit 55 performs the process operations in step S 13 to step S 15 of FIG. 14 .
- the training data processor 50 repeats the correction of the training label image 12 and the re-training of the trained model 2 the predetermined number of times in step S 16 of FIG. 14 .
- an image analysis device 100 shown in FIG. 16 is a cell analysis device that analyzes a cell image obtained by imaging a cell.
- the image analysis device 100 is configured to perform cell segmentation on an in-line holographic microscopic (IHM) phase image, which is the analysis image in which a cell has been imaged.
- the image analysis device 100 creates the label image 16 divided for each area of the cell to be detected from the cell image by the segmentation process.
- the image analysis device 100 includes an imager 110 , a controller 120 , an operation unit 130 , which is a user interface, a display 140 , and the training data processor 50 .
- the imager 110 is an in-line holographic microscope, and includes a light source 111 including a laser diode, for example, and an image sensor 112 . At the time of imaging, a culture plate 113 including a cell colony (or a single cell) 114 is arranged between the light source 111 and the image sensor 112 .
- the controller 120 is configured to control the operation of the imager 110 and process data acquired by the imager 110 .
- the controller 120 is a computer including a processor and a storage 126 , and the processor includes an imaging controller 121 , a cell image creating unit 122 , and an image analyzer 123 as functional blocks.
- the image analyzer 123 includes an image input 124 that receives an input of the analysis image 15 and an analysis processor 125 that performs a segmentation process for the analysis image 15 using a trained model 2 by machine learning to create a label image divided into a plurality of label areas 13 as lower functional blocks.
- the training data processor 50 has the same configuration as that shown in FIG. 15 , and thus description thereof is omitted.
- the trained model 2 created in the training data processor 50 is stored in the storage 126 of the controller 120 and used for the cell segmentation process by the image analyzer 123 .
- the controller 120 controls the imager 110 with the imaging controller 121 to acquire hologram data.
- the imager 110 radiates coherent light from the light source 111 , and acquires, with the image sensor 112 , an image of interference fringes of light transmitted through the culture plate 113 and the cell colony 114 and light transmitted through an area in the vicinity of the cell colony 114 on the culture plate 113 .
- the image sensor 112 acquires hologram data (two-dimensional light intensity distribution data of a hologram formed on a detection surface).
- the cell image creating unit 122 calculates phase information by performing an arithmetic process for phase recovery on the hologram data acquired by the imager 110 . Furthermore, the cell image creating unit 122 creates the IHM phase image (analysis image 15 ) based on the calculated phase information.
- Known techniques can be used for the calculation of the phase information and a method for creating the IHM phase image, and thus detailed description thereof is omitted.
- the cell image shown in FIG. 4 is an example of the IHM phase image targeting an undifferentiated deviant cell colony in an iPS cell.
- the segmentation process is performed on the IHM phase image by the image analyzer 123 such that an area division process of dividing the IHM phase image (analysis image 15 ) into the label area 13 c of the undifferentiated cell, the label area 13 d of the undifferentiated deviant cell, and the label area 13 e of the background is automatically performed.
- in step S 21 , the image input 124 acquires the IHM phase image, which is the analysis image 15 .
- in step S 22 , the analysis processor 125 performs the segmentation process on the analysis image 15 using the trained model 2 stored in the storage 126 .
- the label image 16 , in which the analysis image 15 input this time has been divided into the plurality of label areas 13 , is created.
- in step S 23 , the image analyzer 123 stores the created label image 16 in the storage 126 , and outputs the created label image 16 to the display 140 and an external server, for example.
- in the image analyzer 123 , in addition to the segmentation process, various analysis processes based on the results of the segmentation process can be performed. For example, the image analyzer 123 estimates at least one of the area of the cell area, the number of cells, or the density of the cells based on the label image 16 obtained by the segmentation process.
- an image analysis device 200 shown in FIG. 18 is a bone image analysis device that analyzes an X-ray image of an area including the bone of a subject.
- the image analysis device 200 is configured to perform bone segmentation on an X-ray image, which is the analysis image 15 in which the bone has been imaged.
- the image analysis device 200 creates the label image 16 of the bone obtained by dividing a bone area from the X-ray image by the segmentation process.
- the image analysis device 200 includes an imager 210 , a controller 220 , an operation unit 230 , a display 240 , and the training data processor 50 .
- the imager 210 includes a table 211 , an X-ray irradiator 212 , and an X-ray detector 213 .
- the table 211 is configured to support a subject O (person).
- the X-ray irradiator 212 is configured to irradiate the subject O with X-rays.
- the X-ray detector 213 includes a flat panel detector (FPD), for example, and is configured to detect X-rays radiated from the X-ray irradiator 212 and transmitted through the subject O.
- the controller 220 is a computer including a processor and a storage 226 , and the processor includes an imaging controller 221 , an X-ray image creating unit 222 , and an image analyzer 223 as functional blocks.
- the image analyzer 223 includes an image input 224 and an analysis processor 225 as lower functional blocks, similarly to the example shown in FIG. 16 .
- the training data processor 50 has the same configuration as that shown in FIG. 15 , and thus description thereof is omitted.
- the trained model 2 created in the training data processor 50 is stored in the storage 226 of the controller 220 and used for the bone segmentation process by the image analyzer 223 .
- the training label image 12 for performing the bone segmentation process can be created using CT image (computed tomography image) data of the subject in addition to the X-ray image. Specifically, a virtual projection is performed on the CT image data of the subject by simulating the geometrical conditions of the X-ray irradiator 212 and the X-ray detector 213 to generate a plurality of DRR images (digitally reconstructed radiographs) showing the area including the bone. An area in which the CT value is above a certain level is then labeled as the bone area, such that the training label image 12 can be created.
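The thresholding step described above, labeling an area where the CT value exceeds a certain level as bone, could be sketched as follows. This is an illustrative sketch, not the patent's code; the threshold value of 200 is an assumed example, and in practice it would be chosen for the CT value scale in use:

```python
# Illustrative sketch: create a binary bone label from a CT slice by marking
# pixels whose CT value is at or above an assumed threshold.
def make_bone_label(ct_slice, threshold=200):
    return [[1 if v >= threshold else 0 for v in row] for row in ct_slice]

ct = [
    [-1000, 50, 300],
    [120, 700, 40],
]
bone_label = make_bone_label(ct)
```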
- the training data processor 50 can acquire a CT image from a CT imaging device (not shown) and generate the training label image 12 based on the acquired CT image.
- in step S21, the image input 224 acquires the X-ray image, which is the analysis image 15.
- in step S22, the analysis processor 225 performs the segmentation process on the analysis image 15 by the trained model 2 stored in the storage 226.
- the label image 16, in which the analysis image 15 input this time is divided into the plurality of label areas 13, is created.
- in step S23, the image analyzer 223 stores the created label image 16 in the storage 226 and outputs the created label image 16 to the display 240 and an external server, for example.
- in addition to the segmentation process, the image analyzer 223 can perform various analysis processes based on the results of the segmentation process. For example, the image analyzer 223 estimates the density of the detected bone based on the label image 16 (see FIG. 3) obtained by the segmentation process.
- the training data processor 50 and each of the controllers 120 and 220 may be constructed by dedicated hardware for performing each process.
- the determination label image 14 is created, the labels of the corresponding portions in the created determination label image 14 and the training label image 12 are compared with each other, and the label areas 13 included in the training label image 12 are corrected based on the label comparison results.
- the label areas 13 of the training label image 12 are corrected based on the comparison results between the determination label image 14 and the training label image 12, using the determination label image 14 as a reference. Thus, the variations in the boundaries can be reduced based on consistent criteria by automatic correction using a computer instead of a label correction operator. Accordingly, correction of the training label image 12 used for machine learning of the image segmentation process can be simplified while ensuring the accuracy.
- the ranges of the label areas 13 in the training label image 12 are corrected so as to be closer to the label areas 13 of the determination label image 14 based on the label comparison results. Accordingly, the ranges of the label areas 13 in the training label image 12 are simply matched with the label areas 13 of the determination label image 14 or are simply set to intermediate ranges between the label areas 13 of the training label image 12 and the label areas 13 of the determination label image 14 , for example, such that a correction can be easily and effectively made to reduce the variations in the label areas 13 of the individual training label image 12 .
- correction of the label areas 13 is performed by at least one of expansion or contraction of the label areas 13 in the training label image 12 . Accordingly, the label areas 13 of the training label image 12 can be corrected by a simple process of expanding or contracting the ranges of the label areas 13 in the training label image 12 based on the label comparison results.
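Expansion and contraction of a binary label area correspond to morphological dilation and erosion. Below is a minimal pure-Python sketch with a 4-neighborhood structuring element; the function names are illustrative, and a real pipeline would more likely use a library routine such as `scipy.ndimage.binary_dilation`:

```python
def dilate(mask):
    # Expand the label area by one pixel in each direction (4-neighborhood).
    rows, cols = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 0:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = r + dy, c + dx
                    if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] == 1:
                        out[r][c] = 1
                        break
    return out

def erode(mask):
    # Contract the label area by one pixel, using the duality
    # erode(A) = complement(dilate(complement(A))). Note that in this
    # simplified version, out-of-bounds pixels are ignored rather than
    # treated as background, so border pixels are not eroded away.
    inverted = [[1 - v for v in row] for row in mask]
    return [[1 - v for v in row] for row in dilate(inverted)]

plus = dilate([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
```

Dilating a single center pixel yields a plus shape, and eroding that plus shape recovers the single center pixel.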
- the range of the label area 13 to which the label of interest has been assigned is corrected based on the matching portion and the non-matching portion with the training label image 12 in the determination label image 14 . Accordingly, it is possible to understand, from the matching portions and the non-matching portions between the determination label image 14 and the training label image 12 , how the label area 13 of the training label image 12 deviates from the reference determination label image 14 . Therefore, the label area 13 can be easily corrected based on the matching portion and the non-matching portion such that the variation in the boundary is reduced.
- the non-detection evaluation value 22 for evaluating undetected labels in the determination label image 14 and the false-detection evaluation value 23 for evaluating falsely detected labels in the determination label image 14 are acquired based on the matching portion and the non-matching portion, and in the step of correcting the label areas 13 , the label areas 13 of the training label image 12 are corrected based on the comparison between the non-detection evaluation value 22 and the false-detection evaluation value 23 .
- the non-detection evaluation value 22 and the false-detection evaluation value 23 are compared such that it is possible to determine how the label areas 13 of the training label image 12 should be corrected (whether they should be expanded or contracted), and thus the label areas 13 of each individual training label image 12 can be corrected more appropriately.
- when the comparison indicates that falsely detected labels outnumber undetected labels, the label areas 13 of the training label image 12 are expanded. Accordingly, when there are many falsely detected areas and it is estimated that the label areas 13 of the training label image 12 are smaller than the label areas 13 of the reference determination label image 14, the label areas 13 of the training label image 12 are expanded such that a correction can be easily made to reduce the variations in the boundaries of the label areas 13.
- when the comparison indicates that undetected labels outnumber falsely detected labels, the label areas 13 of the training label image 12 are contracted. Accordingly, when there are many undetected areas and it is estimated that the label areas 13 of the training label image 12 are larger than the label areas 13 of the reference determination label image 14, the label areas 13 of the training label image 12 are contracted such that a correction can be easily made to reduce the variations in the boundaries of the label areas 13.
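The two correction rules above reduce to a single decision. As an illustration only (the patent does not give this exact rule), treating the evaluation values as counts of undetected and falsely detected pixels, the direction of correction follows from their comparison:

```python
def correction_direction(undetected_count, false_detected_count):
    # Hypothetical rule sketched from the discussion above: many false
    # detections imply the training label areas are too small relative to
    # the reference (expand); many non-detections imply they are too
    # large (contract).
    if false_detected_count > undetected_count:
        return "expand"
    if undetected_count > false_detected_count:
        return "contract"
    return "keep"
```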
- the step of excluding the training data 10 including the training label image 12 determined to have at least one of more than a predetermined threshold number of undetected labels or more than a predetermined threshold number of falsely detected labels from the training data set is further included. Accordingly, when at least one of the number of undetected labels or the number of falsely detected labels is excessively large in the comparison with the reference determination label image 14 , the training label image 12 is considered to be inappropriate as the training data 10 . Therefore, the training label image 12 in which at least one of the number of undetected labels or the number of falsely detected labels is large is excluded from the training data set such that the training data 10 that is a factor that lowers the accuracy of machine learning can be excluded. Thus, the quality of the training data set can be improved, and highly accurate machine learning can be performed.
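The exclusion step can be sketched as a simple threshold filter over the training data set. The dictionary keys and threshold values below are illustrative assumptions, not the patent's data format:

```python
def filter_training_set(samples, max_undetected, max_false_detected):
    # Keep only samples whose training label image does not exceed either
    # threshold when compared with the reference determination label image.
    return [s for s in samples
            if s["undetected"] <= max_undetected
            and s["false_detected"] <= max_false_detected]

samples = [
    {"id": "a", "undetected": 3, "false_detected": 2},
    {"id": "b", "undetected": 40, "false_detected": 1},   # too many undetected
    {"id": "c", "undetected": 2, "false_detected": 55},   # too many false
]
kept = filter_training_set(samples, max_undetected=10, max_false_detected=10)
```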
- the labels of the corresponding portions of the determination label image 14 and the training label image 12 are compared with each other for each image pixel. Accordingly, the determination label image 14 and the training label image 12 can be compared with each other on a pixel-by-pixel basis, and thus the variations in the label areas 13 of the training label image 12 can be evaluated more accurately.
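For a two-class case, the pixel-by-pixel comparison amounts to tallying the four cells of the confusion matrix. A sketch under the assumption that 1 marks the label of interest and 0 marks background:

```python
def pixel_confusion(training, determination):
    # TP: labeled in both images; FN: in the training label image but not
    # detected; FP: detected but not in the training label image; TN: in
    # neither (background in both).
    tp = fn = fp = tn = 0
    for t_row, d_row in zip(training, determination):
        for t, d in zip(t_row, d_row):
            if t == 1 and d == 1:
                tp += 1
            elif t == 1:
                fn += 1
            elif d == 1:
                fp += 1
            else:
                tn += 1
    return tp, fn, fp, tn

counts = pixel_confusion([[1, 1, 0], [0, 1, 0]],
                         [[1, 0, 0], [1, 1, 0]])
```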
- the step of creating the determination label image 14 by the trained model 2 using the training data 10 including the corrected training label image (corrected training label image 12 a ) is further included, and in the step of correcting the label areas 13 , the label areas 13 included in the corrected training label image 12 a are corrected again based on the comparison results between the created determination label image 14 and the corrected training label image 12 a . Accordingly, machine learning is performed using the corrected training label image 12 a , and then the determination label image 14 is created again such that the label areas 13 of the training label image 12 can be repeatedly corrected. The label areas 13 are repeatedly corrected such that the variations in the label areas 13 of the training label image 12 can be further reduced.
- the label areas 13 included in the training label image 12 are corrected based on the label comparison results between the determination label image 14 and the training label image 12 . Accordingly, the correction of the training label image 12 used for machine learning of the image segmentation process can be simplified while ensuring the accuracy. Furthermore, the trained model 2 is created by performing machine learning on the pre-trained model 2 a using the training data 10 including the corrected training label image 12 a , and thus the trained model 2 with significantly reduced determination variations in the boundary portions of the label areas 13 , which is capable of a high-quality segmentation process, can be obtained.
- the determination image creating unit 51 configured to create the determination label image 14, the comparator 52 configured to compare the labels of the corresponding portions in the determination label image 14 created by the determination image creating unit 51 and the training label image 12 with each other, and the label corrector 53 configured to correct the label areas 13 included in the training label image 12 based on the label comparison results by the comparator 52 are provided. Accordingly, the correction of the training label image 12 used for machine learning of the image segmentation process can be simplified while ensuring the accuracy.
- a training label image correction method is now described with reference to FIGS. 19 and 20 .
- in the second embodiment, an example is described in which, in addition to the training label image correction method according to the first embodiment, the amount of correction is adjusted according to the label priorities when a correction is made by expanding or contracting label areas.
- the training label image correction method further includes a step of setting priorities 60 between labels of a training label image 12 . Furthermore, in a step of correcting label areas 13 , the amounts of correction of the label areas 13 are adjusted according to the priorities 60 .
- FIG. 19 shows an example in which three labels of an undifferentiated cell, an undifferentiated deviant cell, and a background are set on a cell image of a pluripotent stem cell, and the cell image is divided by a segmentation process into label areas 13 to which the three labels have been assigned.
- this cell image is used, when a pluripotent stem cell is cultured, to prevent undifferentiated cells that maintain pluripotency from differentiating by finding undifferentiated deviant cells in a cell colony and removing them, so that only the undifferentiated cells are cultured. That is, it is desired to reliably find undifferentiated deviant cells, and thus the priorities 60 are set in ascending order of background, undifferentiated cell, and undifferentiated deviant cell.
- in the step of correcting the label areas 13, when the label areas 13 of the training label image 12 are corrected, expansion of the label areas 13 having a lower priority into the label areas 13 having a higher priority is prohibited. That is, the amounts of correction of the label areas 13 are adjusted according to the priorities 60 such that the label areas 13 having a higher priority are not reduced by the expansion of the label areas 13 having a lower priority.
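The prohibition can be implemented as a guarded dilation: when expanding a target label, any pixel whose current label has a higher priority is left untouched. The sketch below is illustrative; the label names and priority values are assumptions matching the cell-image example:

```python
# Assumed priorities, ascending: background < undifferentiated < deviant.
PRIORITY = {"background": 0, "undifferentiated": 1, "deviant": 2}

def expand_label(label_map, target, priority=PRIORITY):
    # One dilation step of `target`; pixels holding a higher-priority label
    # are never overwritten, so higher-priority areas are preserved.
    rows, cols = len(label_map), len(label_map[0])
    out = [row[:] for row in label_map]
    for r in range(rows):
        for c in range(cols):
            current = label_map[r][c]
            if current == target or priority[current] > priority[target]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = r + dy, c + dx
                if (0 <= ny < rows and 0 <= nx < cols
                        and label_map[ny][nx] == target):
                    out[r][c] = target
                    break
    return out

grid = [["deviant", "background"],
        ["undifferentiated", "background"]]
expanded = expand_label(grid, "undifferentiated")
```

In this example the "undifferentiated" area absorbs an adjacent background pixel but cannot overwrite the higher-priority "deviant" pixel.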
- the remaining configurations of the second embodiment are similar to those of the first embodiment.
- alternatively, expansion of the label areas 13 having a lower priority into the label areas 13 having a higher priority need not be prohibited entirely.
- in that case, the amount of expansion from the label area 13 c having a lower priority into the label area 13 d having a higher priority is made smaller than the amount of expansion from the label area 13 d having a higher priority into the label area 13 c having a lower priority. Even in this manner, the correction in the direction in which the label areas 13 having a label with a higher priority are reduced can be restricted.
- correction of the training label image 12 used for machine learning of the image segmentation process can be simplified while ensuring the accuracy.
- the step of setting the priorities 60 between the labels of the training label image 12 is further included, and in the step of correcting the label areas 13 , the amounts of correction of the label areas 13 are adjusted according to the priorities 60 . Accordingly, the amount of correction of a label with a higher priority can be intentionally biased such that detection omissions can be significantly reduced or prevented as much as possible. Consequently, the training label image 12 can be appropriately corrected according to the purpose of the segmentation process.
- when the label areas 13 of the training label image 12 are corrected, expansion of the label areas 13 having a lower priority into the label areas 13 having a higher priority is prohibited. Accordingly, even when the label areas 13 having a lower priority (the label area 13 c, for example) are corrected by expansion, the label areas 13 having a higher priority (the label area 13 d, for example) can be preserved. Therefore, it is possible to obtain the training label image 12 (training data 10) capable of being learned, in which detection omissions of the label areas 13 with a higher priority are significantly reduced or prevented while variations in the boundaries are significantly reduced or prevented according to the purpose of the segmentation process.
- the present invention is not limited to this.
- the correction of the training label image according to the present invention can be applied to any image as long as the same is a training label image used for machine learning of the segmentation process.
- the correction of the training label image according to the present invention is to correct variations in the boundaries of the label areas, and thus it is particularly suitable in the field in which high accuracy is required at the boundary portions of the label areas.
- the present invention is particularly suitable for a medical image used in the medical field (the healthcare field or the medical science field).
- the medical image is used to diagnose a disease by a doctor, for example, and thus it is desired to significantly reduce or prevent the blurring or the unnatural shapes of the boundaries as much as possible even when the boundary portions of the label areas are fine. Therefore, the present invention, which can reduce variations in the boundaries of the label areas by correcting the training label image, is particularly suitable when used for segmentation of a medical image.
- as the medical image, in addition to a cell image and an X-ray image of a bone and other portions, an image showing a tumor (such as an endoscopic image) may be used in a case in which a segmentation process is performed with a tumor area treated as a detection target, for example.
- the present invention is not limited to this.
- a portion of the training label image 12 may be compared with the determination label image 14 , and a correction may be made.
- in a modified example of the training label image correction method shown in FIG. 21, in the step of correcting the label areas 13, the training label image 12 is divided into a plurality of partial images 17, and the label areas 13 are corrected for at least one partial image 17 of the divided training label image 12.
- the training label image 12 is divided into a matrix so as to have 6×6 rectangular partial images 17 a .
- a determination label image 14 is also divided into partial images 17 b , the training label image 12 and the determination label image 14 are compared with each other for each partial image 17 ( 17 a , 17 b ), and the partial images 17 of the training label image 12 are corrected based on the comparison results.
- the shape of each partial image 17 and the number of divisions are not particularly limited and are arbitrary. The correction may be made only for one or more specific partial images 17, or for all the partial images 17. With the configuration of this modified example, only a specific portion of the training label image 12 can be corrected, or each small portion can be corrected individually.
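Dividing a label image into a matrix of rectangular partial images is a simple tiling operation. A sketch assuming the image dimensions divide evenly by the grid size (function name is illustrative):

```python
def split_into_tiles(image, n_rows, n_cols):
    # Divide an image (list of rows) into an n_rows x n_cols grid of
    # rectangular tiles, returned in row-major order.
    h, w = len(image), len(image[0])
    th, tw = h // n_rows, w // n_cols
    return [[row[j * tw:(j + 1) * tw] for row in image[i * th:(i + 1) * th]]
            for i in range(n_rows) for j in range(n_cols)]

image = [[r * 4 + c for c in range(4)] for r in range(4)]
tiles = split_into_tiles(image, 2, 2)
```

Each tile of the training label image can then be compared and corrected against the corresponding tile of the determination label image.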
- the present invention is not limited to this.
- the correction process for the training label image 12 may be performed by another device other than the image analysis devices.
- although an example in which the image analysis devices 100 and 200 create the trained model 2 by machine learning has been shown in each of the aforementioned first and second embodiments, the trained model 2 may be created by another device other than the image analysis devices.
- a training data processor 50 is provided as a server device separate from an image analysis device 300 , and is connected to the image analysis device 300 via a network.
- the training data processor 50 acquires, from the image analysis device 300 , a newly captured analysis image 15 as an input image 11 of training data 10 for machine learning.
- a training label image 12 is created from the input image 11 by the training data processor 50 or using another image processing device, and the training data 10 including the input image 11 and the training label image 12 is added to a training data set 1 .
- the training data processor 50 creates a determination label image 14 from the input image 11 by a pre-trained model 2 a , compares it with the training label image 12 , and corrects the training label image 12 based on the comparison results.
- the training data processor 50 re-trains the pre-trained model 2 a using the corrected training label image 12 and creates (updates) a trained model 2 .
- the training data processor 50 transmits the newly created trained model 2 to the image analysis device 300 via the network.
- the image analysis device 300 can perform a segmentation process using the newly created trained model 2 .
- in another modified example, the training label image 12 is corrected in the image analysis device 300, and the trained model 2 is created on the training data processor 50 side.
- the training label image 12 is created using a newly captured analysis image 15 as an input image 11 of training data 10 .
- the training data 10 including the input image 11 and the training label image 12 is added to a training data set 1 .
- the image analysis device 300 creates a determination label image 14 from the input image 11 by a pre-trained model 2 a , compares it with the training label image 12 , and corrects the training label image 12 based on the comparison results.
- the image analysis device 300 transmits the training data 10 including the input image 11 and a corrected training label image 12 a to the training data processor 50 via a network.
- the training data processor 50 adds the transmitted training data 10 to the training data set 1 , re-trains the pre-trained model 2 a , and creates (updates) the trained model 2 .
- the training data processor 50 transmits the newly created trained model 2 to the image analysis device 300 via the network.
- the image analysis device 300 can perform a segmentation process using the newly created trained model 2 .
- the training label image correction method and the trained model creation method according to the present invention may be executed in the form of a so-called cloud service or the like by the cooperation of a plurality of computers connected to the network.
- the present invention is not limited to this.
- the label areas may be corrected by a method other than the morphology process.
- the label areas 13 of the training label image 12 may be replaced with the label areas 13 of the determination label image 14 .
- the boundaries of the label areas 13 of the corrected training label image 12 a may be corrected so as to be located between the boundaries of the label areas 13 of the training label image 12 and the boundaries of the label areas 13 of the determination label image 14 . In these cases, it is not always necessary to use the non-detection evaluation value 22 and the false-detection evaluation value 23 unlike the first embodiment.
- K in the above formula (4) is the detection rate shown in the above formula (1), and G is the precision shown in the above formula (2).
- the IoU and the F value are composite indexes for evaluating both undetected and falsely detected labels, and correspond to both the non-detection evaluation value 22 and the false-detection evaluation value 23.
- the IoU or F value is used in combination with the detection rate K or the precision G such that undetected labels and falsely detected labels can be evaluated separately. For example, when the IoU and the detection rate K are used, it is determined that there are few falsely detected labels and many undetected labels when K ≈ IoU < 1, and that there are few undetected labels and many falsely detected labels when K ≈ 1 and K > IoU.
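Formulas (1)–(4) and the combined use of the IoU with the detection rate can be checked numerically. A sketch (function and variable names are illustrative) for the two-class case:

```python
def segmentation_metrics(tp, fn, fp):
    k = tp / (tp + fn)             # detection rate, formula (1)
    g = tp / (tp + fp)             # precision, formula (2)
    iou = tp / (tp + fn + fp)      # formula (3)
    f_value = 2 * k * g / (k + g)  # F value, formula (4)
    return k, g, iou, f_value

# Case with undetected labels but no false detections: precision G is 1,
# K equals IoU, and both are below 1, matching the discussion above.
k, g, iou, f_value = segmentation_metrics(tp=8, fn=2, fp=0)
```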
- the non-detection evaluation value 22 and the false-detection evaluation value 23 have been described on the premise of the two-class confusion matrix 21 shown in FIG. 11 , but the same are also applicable to a multi-class segmentation process of three or more classes. Although the description is omitted, even in the case of three or more classes, the non-detection evaluation value and the false-detection evaluation value such as the detection rate and the precision may be obtained from a confusion matrix according to the number of classes.
Abstract
Description
- Patent Document 1: Japanese Patent Laid-Open No. 2014-022837
- (1) a step of performing the segmentation process on the input image 11 of the training data 10 including the input image 11 and the training label image 12 by the trained model 2 using the training data 10 to create the determination label image 14 divided into a plurality of label areas 13,
- (2) a step of comparing the labels of the corresponding portions in the created determination label image 14 and the training label image 12 with each other, and
- (3) a step of correcting the label areas 13 included in the training label image 12 based on the label comparison results.
K=TP/(TP+FN) (1)
G=TP/(TP+FP) (2)
- (1) a step of acquiring the pre-trained model 2 a by machine learning using the training data 10 including the input image 11 and the training label image 12,
- (2) a step of performing the segmentation process on the input image 11 of the training data 10 by the acquired pre-trained model 2 a to create the determination label image 14 divided into the plurality of label areas 13,
- (3) a step of comparing the labels of the corresponding portions in the created determination label image 14 and the training label image 12 with each other,
- (4) a step of correcting the label areas 13 included in the training label image 12 based on the label comparison results, and
- (5) a step of creating the trained model 2 by performing machine learning using the training data 10 including the corrected training label image 12 a.
IoU=TP/(TP+FN+FP) (3)
F value=2K×G/(K+G) (4)
- 1: training data set
- 2: trained model
- 2 a: pre-trained model
- 10: training data
- 11: input image
- 12: training label image
- 12 a: corrected training label image
- 13, 13 a, 13 b, 13 c, 13 d, 13 e, 13 f, 13 g: label area
- 14: determination label image
- 15: analysis image
- 16: label image
- 17, 17 a, 17 b: partial image
- 22: non-detection evaluation value
- 23: false-detection evaluation value
- 51: determination image creating unit
- 52: comparator
- 53: label corrector
- 54: storage
- 60: priority
- 100, 200: image analysis device
- 124, 224: image input
- 125, 225: analysis processor
- 126, 226: storage
Claims (15)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/029473 WO2020031243A1 (en) | 2018-08-06 | 2018-08-06 | Method for correcting teacher label image, method for preparing learned model, and image analysis device |
Publications (2)
Publication Number | Publication Date |
---|---|
US20210272288A1 US20210272288A1 (en) | 2021-09-02 |
US11830195B2 true US11830195B2 (en) | 2023-11-28 |
Family
ID=69415200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/265,760 Active 2039-08-15 US11830195B2 (en) | 2018-08-06 | 2018-08-06 | Training label image correction method, trained model creation method, and image analysis device |
Country Status (5)
Country | Link |
---|---|
US (1) | US11830195B2 (en) |
JP (1) | JP6996633B2 (en) |
KR (1) | KR102565074B1 (en) |
CN (1) | CN112424822A (en) |
WO (1) | WO2020031243A1 (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5594809A (en) * | 1995-04-28 | 1997-01-14 | Xerox Corporation | Automatic training of character templates using a text line image, a text line transcription and a line image source model |
US20110099133A1 (en) * | 2009-10-28 | 2011-04-28 | Industrial Technology Research Institute | Systems and methods for capturing and managing collective social intelligence information |
JP2014022837A (en) | 2012-07-13 | 2014-02-03 | Nippon Hoso Kyokai <Nhk> | Learning device and program |
US20160026900A1 (en) * | 2013-04-26 | 2016-01-28 | Olympus Corporation | Image processing device, information storage device, and image processing method |
US20160379091A1 (en) * | 2015-06-23 | 2016-12-29 | Adobe Systems Incorporated | Training a classifier algorithm used for automatically generating tags to be applied to images |
US9558423B2 (en) * | 2013-12-17 | 2017-01-31 | Canon Kabushiki Kaisha | Observer preference model |
US20190030371A1 (en) * | 2017-07-28 | 2019-01-31 | Elekta, Inc. | Automated image segmentation using dcnn such as for radiation therapy |
JP2019101535A (en) * | 2017-11-29 | 2019-06-24 | コニカミノルタ株式会社 | Teacher data preparation device and method thereof and image segmentation device and method thereof |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102360494B (en) * | 2011-10-18 | 2013-09-04 | 中国科学院自动化研究所 | Interactive image segmentation method for multiple foreground targets |
JP2015087903A (en) * | 2013-10-30 | 2015-05-07 | ソニー株式会社 | Apparatus and method for information processing |
JP6291844B2 (en) * | 2014-01-06 | 2018-03-14 | 日本電気株式会社 | Data processing device |
JP6702716B2 (en) * | 2015-12-21 | 2020-06-03 | キヤノン株式会社 | Image processing device, image processing method, and program |
CN106023220B (en) * | 2016-05-26 | 2018-10-19 | 史方 | A kind of vehicle appearance image of component dividing method based on deep learning |
JP6622172B2 (en) * | 2016-11-17 | 2019-12-18 | 株式会社東芝 | Information extraction support device, information extraction support method, and program |
2018
- 2018-08-06 WO PCT/JP2018/029473 patent/WO2020031243A1/en active Application Filing
- 2018-08-06 CN CN201880095420.9A patent/CN112424822A/en active Pending
- 2018-08-06 US US17/265,760 patent/US11830195B2/en active Active
- 2018-08-06 KR KR1020207035363A patent/KR102565074B1/en active IP Right Grant
- 2018-08-06 JP JP2020535357A patent/JP6996633B2/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5594809A (en) * | 1995-04-28 | 1997-01-14 | Xerox Corporation | Automatic training of character templates using a text line image, a text line transcription and a line image source model |
US20110099133A1 (en) * | 2009-10-28 | 2011-04-28 | Industrial Technology Research Institute | Systems and methods for capturing and managing collective social intelligence information |
JP2014022837A (en) | 2012-07-13 | 2014-02-03 | Nippon Hoso Kyokai <Nhk> | Learning device and program |
US20160026900A1 (en) * | 2013-04-26 | 2016-01-28 | Olympus Corporation | Image processing device, information storage device, and image processing method |
US9552536B2 (en) * | 2013-04-26 | 2017-01-24 | Olympus Corporation | Image processing device, information storage device, and image processing method |
US9558423B2 (en) * | 2013-12-17 | 2017-01-31 | Canon Kabushiki Kaisha | Observer preference model |
US20160379091A1 (en) * | 2015-06-23 | 2016-12-29 | Adobe Systems Incorporated | Training a classifier algorithm used for automatically generating tags to be applied to images |
US9767386B2 (en) * | 2015-06-23 | 2017-09-19 | Adobe Systems Incorporated | Training a classifier algorithm used for automatically generating tags to be applied to images |
US20190030371A1 (en) * | 2017-07-28 | 2019-01-31 | Elekta, Inc. | Automated image segmentation using dcnn such as for radiation therapy |
JP2019101535A (en) * | 2017-11-29 | 2019-06-24 | コニカミノルタ株式会社 | Teacher data preparation device and method thereof and image segmentation device and method thereof |
Non-Patent Citations (1)
Title |
---|
Written Opinion of the International Searching Authority for PCT application No. PCT/JP2018/029473, dated Sep. 25, 2018, submitted with a machine translation.
Also Published As
Publication number | Publication date |
---|---|
JP6996633B2 (en) | 2022-01-17 |
WO2020031243A1 (en) | 2020-02-13 |
US20210272288A1 (en) | 2021-09-02 |
JPWO2020031243A1 (en) | 2021-08-02 |
KR20210008051A (en) | 2021-01-20 |
CN112424822A (en) | 2021-02-26 |
KR102565074B1 (en) | 2023-08-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11830195B2 (en) | Training label image correction method, trained model creation method, and image analysis device | |
CN106127730B (en) | Automated region of interest detection using machine learning and extended Hough transform | |
US7949181B2 (en) | Segmentation of tissue images using color and texture | |
JP6564018B2 (en) | Radiation image lung segmentation technology and bone attenuation technology | |
US9940545B2 (en) | Method and apparatus for detecting anatomical elements | |
US10223795B2 (en) | Device, system and method for segmenting an image of a subject | |
US20150305702A1 (en) | Radiation tomographic image generating apparatus, and radiation tomographic image generating method | |
US20110188743A1 (en) | Image processing apparatus, image processing method, image processing system, and recording medium | |
KR101700887B1 (en) | Device and method for counting cells using image processing and computer-readable recording medium storing program to implement the method | |
US20200193608A1 (en) | Method for Segmentation of Grayscale Images and Segmented Area Tracking | |
US11715208B2 (en) | Image segmentation | |
JP2016195755A (en) | Medical image processor, medical image processing method, and medical imaging device | |
US20220414869A1 (en) | Detecting and segmenting regions of interest in biomedical images using neural networks | |
Arabi et al. | Comparison of atlas-based bone segmentation methods in whole-body PET/MRI | |
WO2022271838A1 (en) | Detecting and segmenting regions of interest in biomedical images using neural networks | |
EP2693397A1 (en) | Method and apparatus for noise reduction in an imaging system | |
Becker et al. | From time lapse-data to genealogic trees: Using different contrast mechanisms to improve cell tracking | |
Bobotov et al. | Segmentation of Brain Tumors from Magnetic Resonance Images using Adaptive Thresholding and Graph Cut Algorithm | |
Jian et al. | Cloud image processing and analysis based flatfoot classification method | |
JP2005020337A (en) | Method, apparatus and program for image processing | |
CN112863641A (en) | Radiation therapy system and offset determination method and device of radiation source thereof | |
CN110728678A (en) | Image area classification method, system, device and storage medium | |
Drazkowska et al. | Application of Convolutional Neural Networks to femur tracking in a sequence of X-ray images | |
US11666299B2 (en) | Controlling a medical X-ray device | |
US10255678B2 (en) | Medical image processing apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: SHIMADZU CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TAKAHASHI, WATARU;AKAZAWA, AYAKO;OSHIKAWA, SHOTA;SIGNING DATES FROM 20210118 TO 20210129;REEL/FRAME:055812/0378 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |