WO2022185905A1 - 画像診断支援装置、画像診断支援方法、遠隔診断支援システム、ネット受託サービスシステム - Google Patents
- Publication number
- WO2022185905A1 (application PCT/JP2022/006024; priority JP2022006024W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature amount
- dictionary
- feature
- target image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30024—Cell structures in vitro; Tissue sections in vitro
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- the present invention relates to an image diagnosis support device, an image diagnosis support system, and an image diagnosis support method.
- the present invention relates to image processing technology for detecting specific tissues and cells (for example, cancer).
- pathological diagnosis by microscopic observation of tissue specimens of lesions occupies an important position in medical care.
- in pathological diagnosis, much of the process from specimen preparation to diagnosis relies on manual labor, and automation is difficult.
- in diagnosis, the pathologist's competence and experience are important, and the result depends on the ability of the individual pathologist.
- meanwhile, there is a shortage of pathologists in the medical field due to factors such as the increase in cancer patients accompanying the aging of the population. For these reasons, the need for image processing technology and remote diagnosis that support pathological diagnosis is increasing.
- Patent Document 1, for example, describes a technique for determining whether or not an image shows pathological tissue.
- in that technique, a low-magnification image is generated from a high-magnification image; after the images are simply classified using the low-magnification image, the tissue/cell is classified using the high-magnification image from which the low-magnification image was generated.
- however, when the tumor detection rate is brought close to 100%, non-tumor regions can no longer be reliably classified as non-tumor: non-tumor is judged as tumor, resulting in enormous over-detection of tumors. In addition, it cannot be determined whether the tumor or pathological tissue in the image is unlearned.
- the present invention has been made in view of such circumstances, and aims to provide a technique for suppressing over-detection while determining an object with high accuracy. Another object is to provide a technique for determining whether an object included in an image has not yet been learned. A further object is to provide a technique for presenting the reason for identifying an object.
- an image diagnosis support apparatus according to one aspect includes a processor that executes a program for performing image processing on a target image, and a memory that stores the result of the image processing. The processor executes: a process of inputting an image; a process of extracting a feature amount of an object from the target image; a process of extracting feature amounts of learning images to create a feature amount dictionary; a process of identifying the target image from the feature amount and calculating an identification value; a process of identifying the feature amount similarity of the target image using the feature amount dictionary and calculating a feature amount similarity identification value; and a process of determining, for each target image, the presence or absence of an object and the likelihood of the object using the identification value and the feature amount similarity identification value.
- an image diagnosis support apparatus according to another aspect includes a processor that executes a program for performing image processing on a target image, and a memory that stores the result of the image processing. The processor executes: a process of inputting an image; a process of extracting a feature amount of an object from the target image; a process of extracting feature amounts of learning images to create a feature amount dictionary; a process of determining, using the feature amount dictionary and the feature amount, whether or not the target image is unlearned; a process of identifying the target image from the feature amount and calculating an identification value; and a process of identifying the feature amount similarity of the target image using the feature amount dictionary.
- an image diagnosis support apparatus according to still another aspect includes a processor that executes a program for performing image processing on a target image, and a memory that stores the result of the image processing. The processor executes: a process of inputting an image; a process of extracting a feature amount of an object from the target image; a process of extracting feature amounts of learning images to create a feature amount dictionary; a process of identifying the target image from the feature amount and calculating an identification value; a process of identifying the feature amount similarity of the target image using the feature amount dictionary and calculating a feature amount similarity identification value; a process of calculating the similarity between the target image and the learning images using the feature amount dictionary, and presenting the reason for identifying the target image using the calculated similarity; and a determination process of determining, for each target image, the presence or absence of an object and the likelihood of the object using the identification value and the feature amount similarity identification value.
- according to the present invention, it is possible to suppress over-detection while determining an object with high accuracy. According to another aspect of the present invention, it is possible to determine whether an object included in an image has not yet been learned. According to yet another aspect of the present invention, it is further possible to present the reason for identifying the object.
- FIG. 1 is a block diagram showing functions of an image diagnosis support apparatus according to a first embodiment of the present invention
- FIG. 1 is a diagram showing a hardware configuration example of an image diagnosis support apparatus according to first and second embodiments of the present invention
- FIG. 3 is a diagram for explaining an example of obtaining a feature amount;
- FIG. 5 is a diagram for explaining an example of the operation of a drawing unit;
- FIG. 4 is a flowchart for explaining the operation of an identification unit;
- 4 is a flowchart for explaining the overall operation of the diagnostic imaging support apparatus according to the first embodiment;
- FIG. 10 is a diagram for explaining an example of a determination result display of a drawing unit;
- FIG. 5 is a block diagram showing functions of an image diagnosis support apparatus according to a second embodiment of the present invention;
- FIG. 10 is a diagram for explaining an example of the operation of an identification reason presenting unit;
- FIG. 9 is a flowchart for explaining the overall operation of the diagnostic imaging support apparatus according to the second embodiment;
- FIG. 1 is a diagram showing a schematic configuration of a remote diagnosis support system equipped with an image diagnosis support apparatus of the present invention;
- FIG. 1 is a diagram showing a schematic configuration of an Internet consignment service providing system equipped with an image diagnosis support apparatus of the present invention;
- the embodiment of the present invention determines whether or not a target image is unlearned using a feature amount dictionary of learning images, and further determines the target image using the identification result of a classifier together with the identification result obtained with the feature amount dictionary.
- the embodiment relates to an image diagnosis support apparatus and method that can determine an object (tumor, etc.) in an image, suppress over-detection while determining the object with high accuracy, and present the reason for identifying the object. More specifically, even when the object (tumor, etc.) detection rate in the target image is brought close to 100%, where over-detection of the object could not otherwise be suppressed, over-detection can be suppressed by creating a feature amount dictionary from the learning images.
- the embodiments of the present invention may be implemented by software running on a general-purpose computer, dedicated hardware, or a combination of software and hardware.
- each processing unit (e.g., the feature extraction unit) may be described below as the subject (operating entity) of processing; however, since a processor (CPU, etc.) executes the program and performs the processing defined by the program using a memory and a communication port (communication control device), the explanation may also be given with the processor as the subject.
- FIG. 1 is a block diagram showing the functional configuration of an image diagnosis support apparatus according to an embodiment of the present invention.
- the image diagnosis support apparatus 1 includes an input unit 10, a feature extraction unit 11, an identification unit 12, a feature amount similarity identification unit 13, an unlearned determination unit 14, an identification result determination unit 15, a drawing unit 16, a recording unit 17, a control unit 91, and a memory 90.
- the image diagnosis support apparatus may be mounted in an image acquisition apparatus having an imaging unit, in a tissue/cell image acquisition apparatus such as a virtual slide system, or (as in the embodiments described later) in a server connected to the image acquisition apparatus via a network.
- the input unit 10, the feature extraction unit 11, the identification unit 12, the feature amount similarity identification unit 13, the unlearned determination unit 14, the identification result determination unit 15, the drawing unit 16, and the recording unit 17 in the image diagnosis support apparatus 1 may each be implemented by a program or by modularized hardware.
- Image data is input to the input unit 10.
- for example, the input unit 10 may acquire encoded still image data in JPG, JPEG 2000, PNG, BMP or similar formats captured at predetermined time intervals by an imaging means such as a camera built into a microscope, and use the acquired images as input images.
- the input unit 10 may also extract still image data of frames at predetermined intervals from moving image data in Motion JPEG, MPEG, H.264, HD/SDI or similar formats, and use the extracted images as input images.
- the input unit 10 may use an image acquired by the imaging means via a bus, a network, or the like as the input image. Further, the input unit 10 may use an image already stored in a detachable recording medium as an input image.
- the feature extraction unit 11 calculates a feature amount related to an object (tissue, cell, etc.) from within the image (tissue/cell image, etc.). Also, during learning, feature amounts of all learning images are calculated, and a feature amount dictionary 50 storing those feature amounts is created.
- the feature extraction unit 11 may create a feature amount dictionary using feature amounts of arbitrary layers in a machine learning network.
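As a concrete sketch, the feature amount dictionary can be pictured as a per-group store of feature vectors extracted from every learning image. The extractor below is a trivial stand-in (mean and max of the pixels) for the intermediate-layer network features the text describes; all names and data structures are illustrative, not from the patent.

```python
def extract_features(image):
    """Stand-in for intermediate-layer network features: mean and max of the pixels."""
    flat = [p for row in image for p in row]
    return [sum(flat) / len(flat), max(flat)]

def build_feature_dictionary(learning_images):
    """Build the feature amount dictionary: group label -> list of feature vectors.

    learning_images: list of (group_label, 2-D pixel list) pairs.
    """
    dictionary = {}
    for group, image in learning_images:
        dictionary.setdefault(group, []).append(extract_features(image))
    return dictionary
```

At identification time the same extractor would be applied to the target image, and the resulting vector compared against every entry in the dictionary.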
- the identification unit 12 uses all the learning images and a known machine learning technique (for example, a Convolutional Neural Network) to create a classifier for identifying whether an image shows the object, and stores the classifier in the memory 90.
- the identification unit 12 reads the classifier from the memory 90 and, using the feature amount extracted from the input image, calculates the object-likeness (degree of tissue or cell abnormality, etc.) and the identification result CC, which is the identification value, thereby classifying whether the input image contains the object to be detected (normal tissue or abnormal tissue, normal cell or abnormal cell, etc.).
- the feature amount similarity identification unit 13 uses the feature amount dictionary 50 to calculate the similarity between the feature amount of the input image extracted by the feature extraction unit 11 and all feature amounts in the dictionary, finds the most similar feature amount, and uses that feature amount and the classifier to obtain the feature amount similarity identification result CF, which is the identification value.
- the unlearned determination unit 14 uses the feature amount dictionary 50 to calculate the similarity between the feature amount of the input image extracted by the feature extraction unit 11 and all feature amounts in the dictionary, and determines from the calculated similarity whether the object in the input image has been learned.
- the identification result determination unit 15 uses the identification result CC obtained by the identification unit 12 and the feature amount similarity identification result CF obtained by the feature amount similarity identification unit 13 to determine whether the object in the input image is an object to be detected (tumor, etc.), an object other than one to be detected (non-tumor, etc.), undeterminable, or unlearned.
- the drawing unit 16 draws a detection frame on the image so as to surround the object (tumor, abnormal tissue, abnormal cell, etc.) determined by the identification result determination unit 15.
- the recording unit 17 stores in the memory 90 the image obtained by the drawing unit 16 drawing the detection frame on the input image.
- the identification unit 12 learns to identify an object such as normal tissue or a normal cell in the input image as normal, and abnormal tissue or an abnormal cell as abnormal. Each parameter necessary for identification (filter coefficients, offset values, etc.) is calculated by machine learning.
- the control unit 91 is realized by a processor and connected to each element within the image diagnosis support apparatus 1.
- each component of the diagnostic imaging support apparatus 1 operates autonomously or according to an instruction from the control unit 91.
- using the identification value calculated by the identification unit 12 and the feature amount similarity identification value, the identification result determination unit 15 determines whether the object in the input image is an object to be detected (for example, a tumor, abnormal tissue, or an abnormal cell) or an object other than one to be detected (for example, non-tumor, normal tissue, or a normal cell).
- FIG. 2 is a diagram showing a hardware configuration example of the diagnostic imaging support apparatus 1 according to the embodiment of the present invention.
- the image diagnosis support apparatus 1 includes a CPU (processor) 201 that executes various programs, a memory 202 that stores the programs, a storage device 203 (corresponding to the memory 90) that stores various data, an output device 204 that outputs post-detection images, an input device 205 for inputting user instructions, images, and the like, and a communication device 206 for communicating with other devices.
- the CPU 201 reads and executes various programs from the memory 202 as necessary.
- the memory 202 stores, as programs, the input unit 10, the feature extraction unit 11, the identification unit 12, the feature amount similarity identification unit 13, the unlearned determination unit 14, the identification result determination unit 15, the drawing unit 16, and the recording unit 17. However, the memory 202 of the diagnostic imaging support apparatus 1 of the first embodiment does not include the identification reason presenting unit 20.
- the storage device 203 stores the image to be processed; the identification result and its numerical value for the input image generated by the identification unit 12; the feature amount similarity identification result and its numerical value generated by the feature amount similarity identification unit 13; the determination result of whether or not the input image is unlearned generated by the unlearned determination unit 14; the determination result and its numerical value generated by the identification result determination unit 15; the position information of the detection frame generated by the drawing unit 16; the parameters of equations (1) and (2), described later, generated by the identification unit 12; and the like.
- the output device 204 is composed of devices such as a display, printer, and speaker. For example, the output device 204 displays data generated by the drawing unit 16 on the display screen.
- the input device 205 is composed of devices such as a keyboard, mouse, and microphone.
- a user's instruction (including selection of an image to be processed) is input to the diagnostic imaging support apparatus 1 through the input device 205.
- the communication device 206 is not an essential component of the image diagnosis support apparatus 1; when a communication device is included in a personal computer or the like connected to the image acquisition apparatus, the image diagnosis support apparatus 1 does not have to hold the communication device 206.
- the communication device 206 receives data (including images) transmitted from another device (e.g., a server) connected via a network, and stores the data in the storage device 203.
- the image diagnosis support apparatus 1 of the present invention calculates feature amounts of objects (tissues, cells, etc.) in an input image and uses them to obtain the identification result of object-likeness in the input image. It also uses those feature amounts together with the feature amount dictionary to calculate the feature amount similarity identification result and the unlearned determination result for the object in the input image, and determines the object-likeness (tumor-likeness, tissue or cell abnormality-likeness, etc.) in the input image using the identification result and the feature amount similarity identification result.
- Feature extraction unit 11: obtains the feature amount of the input image.
- FIG. 3 shows an example of obtaining a feature amount.
- CNN in FIG. 3 represents a Convolutional Neural Network.
- the feature amount FAi of the object (e.g., tissue, cell) in the input image A1 is obtained from equation (1). In addition, the feature amounts FAi of all the learning images are calculated by machine learning in advance, and the feature amount dictionary 50 is created from those feature amounts.
- the filter coefficient wj in equation (1) is a coefficient obtained by machine learning or the like so as to identify objects other than those to be detected (for example, normal tissue or normal cells) as such, and objects to be detected (for example, abnormal tissue or abnormal cells) as such.
- in equation (1), pj is a pixel value, bi is an offset value, m is the number of filter coefficients, and h is a nonlinear function:

  fi = h( Σj=1..m (pj × wj) + bi ) … (1)

- the feature amount fi of an arbitrary filter i is obtained by evaluating equation (1) at each position from the upper left to the lower right of the target image.
- the matrix of the feature quantity fi obtained by the feature extractor A is assumed to be the feature quantity FAi of the input image A1.
- a method for creating the feature extractor A will be described in the identifying unit 12, which will be described later.
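Read literally, equation (1) slides one filter over the image and applies a nonlinearity to each weighted sum plus offset. A minimal sketch, assuming h is ReLU (the text only says "nonlinear function") and a square filter:

```python
def relu(x):
    # Assumed choice for the nonlinear function h in equation (1).
    return max(0.0, x)

def filter_feature(image, w, b, h=relu):
    """Equation (1) sketch for a single filter i:
    f_i = h(sum_j p_j * w_j + b_i), evaluated at every position from the
    upper left to the lower right of the image.

    image: 2-D list of pixel values p_j; w: k x k filter coefficients w_j;
    b: offset value b_i; h: nonlinear function (assumed ReLU).
    """
    k = len(w)
    rows, cols = len(image), len(image[0])
    out = []
    for r in range(rows - k + 1):
        row = []
        for c in range(cols - k + 1):
            s = sum(image[r + u][c + v] * w[u][v]
                    for u in range(k) for v in range(k))
            row.append(h(s + b))
        out.append(row)
    return out
```

The matrix of such fi values over all filters corresponds to the feature amount FAi produced by the feature extractor A.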
- the identification unit 12 performs logistic regression processing using the feature amount FAi (matrix f) obtained by the feature extraction unit 11, and calculates the object-likeness value with equation (2) to determine whether the object (e.g., tissue/cell) in the input image A1 is an object to be detected (tumor, etc.) or an object other than one to be detected (non-tumor, etc.).
- in equation (2), w is a matrix of weights, b is an offset value, g is a nonlinear function, and y is the calculation result:

  y = g( w · f + b ) … (2)

  Machine learning calculates the weight w and the offset value b.
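Equation (2) as described is a logistic-regression read-out over the flattened feature amount. A minimal sketch, with g assumed to be the sigmoid (the text only calls it a nonlinear function):

```python
import math

def sigmoid(x):
    # Assumed choice for the nonlinear function g in equation (2).
    return 1.0 / (1.0 + math.exp(-x))

def identify(f, w, b, g=sigmoid):
    """Equation (2) sketch: y = g(w . f + b).

    f: flattened feature amount FAi; w: learned weights; b: offset value.
    A y close to 1 is read as "object to be detected" (e.g., tumor).
    """
    return g(sum(wi * fi for wi, fi in zip(w, f)) + b)
```

The identification result CC would then follow from thresholding y, e.g., treating y >= 0.5 as "object to be detected".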
- the identification unit 12 uses known machine learning techniques to learn the feature amounts of objects (e.g., tissues, cells) so as to determine whether an object in the input image is an object to be detected (for example, abnormal tissue or abnormal cells) or an object other than one to be detected (for example, normal tissue or normal cells).
- Convolutional Neural Network may be used as a machine learning technique.
- the identification unit 12 learns in advance, by machine learning using input images A1 (for example, HE-stained images), to detect objects (for example, abnormal tissue or abnormal cells).
- the identification unit 12 repeats the processing of the feature extraction unit 11 and the identification unit 12 over a plurality of learning images to obtain the weight w, the filter coefficients wj, and the offset values b and bi, and creates the feature extractor A that calculates the feature amount FAi from the input image A1. The identification unit 12 then uses the feature extractor A to calculate the identification result of the target image.
- the identification unit 12 stores the obtained weight w, filter coefficients wj, and offset values b and bi in the memory 90.
- the feature amount similarity identification unit 13 uses the feature amount FAi of the input image obtained by the feature extraction unit 11 and the learning-image feature amounts FXi (X = 1 to N, where N is the number of groups) for each group in the feature amount dictionary 50 (for example, a group of objects to be detected and a group of other objects), and obtains, by equation (3), the feature amount FXi most similar to FAi and the SX value for each group. Next, the smallest SX value (Smin) among all groups and its group number are obtained, and that group number is set as the feature amount similarity identification result.
- aj (j is 1 to m) indicates the value of each dimension of the feature amount FAi
- bj indicates the value of each dimension of the feature amount FXi.
- for example, for the feature amount FAi and the feature amounts F1i, the most similar feature amount F1i and its S1 are obtained by equation (3). Likewise, for the feature amount FAi and the feature amounts F2i, the most similar feature amount F2i and its S2 are obtained. Then the minimum value Smin of S1 and S2 is found.
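The per-group nearest-neighbour search of equation (3) can be sketched as follows. The distance metric is an assumption: this excerpt only says that a smaller SX means more similar, so plain Euclidean distance over the m feature dimensions (aj, bj) is used here.

```python
import math

def distance(a, b):
    # Stand-in for equation (3): Euclidean distance over the feature
    # dimensions aj and bj (the exact metric is not disclosed in this excerpt).
    return math.sqrt(sum((aj - bj) ** 2 for aj, bj in zip(a, b)))

def most_similar(fa, dictionary):
    """For each group X, find the dictionary entry FXi closest to FAi and its SX,
    then return (Smin, group with the smallest SX); that group number becomes
    the feature amount similarity identification result CF."""
    s = {group: min(distance(fa, fx) for fx in feats)
         for group, feats in dictionary.items()}
    group = min(s, key=s.get)
    return s[group], group
```

Smin is then passed to the unlearned determination unit 14, and the group label to the identification result determination unit 15.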
- Unlearned determination unit 14 compares the minimum value Smin obtained by the feature amount similarity identification unit 13 with a threshold value H; if Smin ≤ H, it sets the unlearned determination result to, for example, 0 (learned), and if Smin > H, to, for example, -1 (unlearned).
- Identification result determination unit 15 obtains the determination result of the input image using the identification result obtained by the identification unit 12, the feature amount similarity identification result obtained by the feature amount similarity identification unit 13, and the unlearned determination result obtained by the unlearned determination unit 14. That is, when the identification result and the feature amount similarity identification result indicate the same object to be detected (tumor, etc.), the object in the input image is determined to be the object to be detected, and the determination result is set to, for example, 1. When both indicate the same object other than one to be detected (non-tumor, etc.), the object is determined to be other than the object to be detected, and the determination result is set to, for example, 0.
- when the identification result and the feature amount similarity identification result do not match, the object in the input image is determined to be undeterminable, and the determination result is set to, for example, 2. However, if the unlearned determination result is -1, the input image is determined to be unlearned regardless of the identification result and the feature amount similarity identification result, and the determination result is set to, for example, -1 (unlearned).
- because the identification result determination unit 15 determines that objects which resemble objects to be detected but are not actually to be detected are undeterminable, it can suppress objects other than those to be detected from being determined as objects to be detected, thereby suppressing over-detection.
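The decision rules above (agreement yields detect/other, disagreement yields undeterminable, unlearned overrides everything) can be sketched in a few lines. The numeric codes follow the examples in the text; the labels "detect"/"other" are illustrative:

```python
def determine(cc, cf, smin, threshold_h):
    """Identification result determination sketch.

    cc: identification result from the classifier; cf: feature amount
    similarity identification result (group labels, illustrative);
    smin: minimum dictionary distance; threshold_h: unlearned threshold H.
    Returns: 1 = object to be detected, 0 = other than the object,
    2 = undeterminable, -1 = unlearned."""
    if smin > threshold_h:       # unlearned overrides everything
        return -1
    if cc == cf == "detect":     # both say object to be detected
        return 1
    if cc == cf == "other":      # both say other than the object
        return 0
    return 2                     # disagreement: undeterminable
```

Because a disagreement between CC and CF yields 2 (undeterminable) rather than 1 (tumor), objects that merely resemble the object to be detected are not counted as detections; this is how over-detection is suppressed.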
- Drawing unit 16: as shown in FIG. , the drawing unit 16 draws a detection frame in the input target image so as to surround each object to be detected (for example, a tumor, abnormal tissue, or abnormal cell).
- when the user clicks the mouse on a drawn detection frame and inputs a correct label (for example, non-tumor), the target image and the correct label are stored in the memory 90 as a set, as a new learning image.
- further, the drawing unit 16 may draw a plurality of determination results on the target image, as shown in FIG. 8(B). In this case, objects to be detected (for example, tumors) and portions determined to be undeterminable or unlearned are highlighted on the screen to prompt the user to confirm them with priority.
- when no object is detected, the input target image is displayed as it is, without a detection frame being drawn on it.
- the GUI (graphical user interface) shown in FIG. 11 displays the result of the object-likeness determination (for example, lesion-likeness determination).
- FIG. 11 shows an example for the breast: a diagram showing the classification results of non-tumor and tumor.
- the identification result determination unit 15 classifies the input target image as including a tumor, which is an abnormal tissue/cell, and calculates the object-likeness value of the tumor as 0.89.
- (v) Recording unit 17: stores the coordinate information for drawing the detection frame on the target image input from the drawing unit 16, together with the target image, in the memory 90.
- FIG. 9 is a flow chart for explaining operations during machine learning of the feature extraction unit 11 and the identification unit 12 of the diagnostic imaging support apparatus 1 according to the embodiment of the present invention.
- In the following, the feature extraction unit 11 and the identification unit 12 are described as the operating entities, but the CPU 201 may instead be the operating entity, executing each processing unit as a program.
- Step 901 The input unit 10 receives an image input for learning and outputs the input image A1 to the feature extraction unit 11 .
- Step 902 The feature extraction unit 11 obtains, using the filter of Equation (1), the feature amount FAi of the objects (for example, tissues, cells) in the input image A1. It also calculates the feature amounts FAi of all learning images and creates the feature amount dictionary 50 from them.
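The feature amount dictionary 50 groups the learning-image feature amounts FXi by group number (for example, 1: objects to be detected, 0: others). A minimal sketch of its construction, assuming the feature vectors have already been computed by the feature extractor (the vectors and labels below are hypothetical):

```python
import numpy as np

def build_feature_dictionary(features, labels):
    """Group learning-image feature amounts FXi by group number (class label).

    features : list of 1-D numpy feature vectors, one per learning image
    labels   : group number per image (e.g. 1: tumor, 0: non-tumor)
    Returns {group_number: [feature vectors belonging to that group]}.
    """
    dictionary = {}
    for f, g in zip(features, labels):
        dictionary.setdefault(g, []).append(np.asarray(f, dtype=float))
    return dictionary

# Hypothetical 3-dimensional feature amounts for four learning images.
feats = [np.array([0.9, 0.1, 0.4]), np.array([0.8, 0.2, 0.5]),
         np.array([0.1, 0.9, 0.2]), np.array([0.2, 0.8, 0.1])]
labels = [1, 1, 0, 0]
dictionary50 = build_feature_dictionary(feats, labels)
```

At inference time, this per-group structure is what the feature amount similarity identification unit 13 searches in step S1007.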
- Step 903 By machine learning, the identification unit 12 creates the feature extractor A, which calculates the feature amount FAi of the objects (for example, tissues, cells) in the input image A1 using the filter of Equations (1) and (2).
- Step 904 With f as the matrix composed of the feature amounts FAi, the weight w, the filter coefficients wj, and the offset values b and bi are obtained.
- Step 905 The identification unit 12 stores the calculated weight w of the feature extractor A, filter coefficient wj, offset values b and bi in the memory 90 .
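Steps 901-905 amount to fitting the logistic-regression layer (weight w, offset b) on the extracted feature amounts. A toy gradient-descent sketch under the assumption that the feature amounts FAi have already been computed by the feature extractor (the training data below is hypothetical):

```python
import numpy as np

def train_logistic_layer(F, t, lr=0.5, epochs=500):
    """Fit weight w and offset b so that sigmoid(w.F[i] + b) matches label t[i].

    F : (n_samples, n_dims) matrix of feature amounts FAi
    t : labels (1: object to be detected, 0: other)
    """
    n, d = F.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        y = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid assumed as g
        grad = y - t                            # cross-entropy gradient
        w -= lr * (F.T @ grad) / n
        b -= lr * grad.mean()
    return w, b

# Hypothetical feature amounts and labels for four learning images.
F = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
t = np.array([1.0, 1.0, 0.0, 0.0])
w, b = train_logistic_layer(F, t)

# The fitted layer separates the training samples (cf. step S1004 with Th1 = 0.5).
pred = (1.0 / (1.0 + np.exp(-(F @ w + b))) >= 0.5).astype(float)
```

The learned w and b are what step 905 stores in the memory 90 for later use in steps S1003-S1004.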
- FIG. 10 is a flowchart for explaining the operation of the diagnostic imaging support apparatus 1 according to this embodiment.
- each processing unit (the input unit 10, the feature extraction unit 11, etc.) will be described as an operating entity, but the CPU 201 may be an operating entity and the CPU 201 may execute each processing unit as a program.
- Step S1001 The input section 10 outputs the input image A1 to the feature extraction section 11 .
- Step S1002 The feature extraction unit 11 reads the filter coefficients wj and the offsets bi of the feature extractor A from the memory 90 and, using the filter of Equation (1), obtains the feature amount FAi of the objects to be detected (for example, tissues, cells) in the input image A1.
- Step S1003 The identification unit 12 reads the weight w and the offset b from the memory 90, and calculates the calculation result y by using the equation (2) when f is the matrix composed of the feature amount FAi.
- Step S1004 The identification unit 12 compares the calculated result y with the threshold Th1. That is, if y ≥ Th1, the process proceeds to step S1005. On the other hand, if y < Th1, the process proceeds to step S1006.
- Step S1005 The identifying unit 12 sets an object to be detected (for example, 1: tumor) in the identification result cc.
- Step S1006 The identification unit 12 sets the identification result cc to something other than an object to be detected (eg, 0: non-tumor).
- Step S1007 Using the feature amount FAi of the input image A1 calculated by the feature extraction unit 11, the feature amounts FXi (X = 1 to N) of the learning images for each group in the feature amount dictionary 50, and Equation (3), the feature amount similarity identification unit 13 obtains, for each group, the feature amount FXi most similar to FAi and its SX value. It then finds the minimum SX value across all groups and the corresponding group number GN, and sets that group number (for example, 1: object to be detected (tumor, etc.); 0: other than an object to be detected (non-tumor, etc.)) in the feature amount similarity identification result cf.
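Equation (3) is described later in the document as a Manhattan distance, SX = Σ|aj − bj|, between the input feature amount and a dictionary feature amount. A minimal sketch of the per-group search of step S1007, with a hypothetical two-group dictionary:

```python
import numpy as np

def most_similar_per_group(fa, dictionary):
    """For each group, find the dictionary feature FXi closest to fa under
    Equation (3) (Manhattan distance), then return the overall minimum Smin
    and its group number GN."""
    best = {}  # group number -> (SX value, most similar FXi)
    for group, feats in dictionary.items():
        dists = [float(np.sum(np.abs(fa - fx))) for fx in feats]
        i = int(np.argmin(dists))
        best[group] = (dists[i], feats[i])
    gn = min(best, key=lambda g: best[g][0])
    smin = best[gn][0]
    return smin, gn, best

# Hypothetical dictionary: group 1 (tumor) vs. group 0 (non-tumor).
dictionary50 = {1: [np.array([0.9, 0.1]), np.array([0.8, 0.2])],
                0: [np.array([0.1, 0.9])]}
fa = np.array([0.85, 0.15])
smin, gn, _ = most_similar_per_group(fa, dictionary50)
cf = gn  # feature amount similarity identification result
```

Smin then feeds the unlearned check of step S1008 (Smin vs. threshold H), and cf feeds the decision of steps S1011-S1017.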
- Step S1008 The unlearned determination unit 14 compares Smin, the minimum SX value obtained by the feature amount similarity identification unit 13, with the threshold H. That is, if Smin ≤ H, the process proceeds to step S1009. On the other hand, if Smin > H, the process proceeds to step S1010.
- Step S1009 The unlearned determination unit 14 sets the unlearned determination result ul to "learned" (eg, 0).
- Step S1010 The unlearned determination unit 14 sets the unlearned determination result ul to unlearned (for example, -1).
- Step S1011 The identification result determination unit 15 uses the identification result cc obtained by the identification unit 12, the feature amount similarity identification result cf obtained by the feature amount similarity identification unit 13, and the unlearned determination result ul obtained by the unlearned determination unit 14. Then, the determination result jr of the input image is obtained. That is, if the unlearned determination result ul indicates that learning has been completed, the process proceeds to step S1012. On the other hand, if the unlearned determination result ul is unlearned, the process proceeds to step S1013.
- Step S1012 The identification result determination unit 15 determines whether the identification result cc obtained by the identification unit 12 and the feature amount similarity identification result cf obtained by the feature amount similarity identification unit 13 are the same object to be detected. That is, if cc and cf are the same object to be detected, the process proceeds to step S1014. On the other hand, if cc and cf are not the same object to be detected, the process proceeds to step S1015.
- Step S1013 The identification result determination unit 15 sets the determination result jr to unlearned (eg, -1: unlearned).
- Step S1014 The identification result determination unit 15 sets an object to be detected (for example, 1: tumor) in the determination result jr.
- Step S1015 The identification result determination unit 15 determines whether the identification result cc obtained by the identification unit 12 and the feature amount similarity identification result cf obtained by the feature amount similarity identification unit 13 agree on something other than an object to be detected, or do not match at all. That is, if cc and cf both indicate other than an object to be detected, the process proceeds to step S1016. On the other hand, if cc and cf do not match, the process proceeds to step S1017.
- Step S1016 The identification result determination unit 15 sets the determination result jr to something other than an object to be detected (for example, 0: non-tumor).
- Step S1017 The identification result determination unit 15 sets the determination result jr to be undeterminable (for example, 2: undeterminable).
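Steps S1011-S1017 combine the three intermediate results into the final determination jr. A direct sketch of that decision table, using the same example codes (1: tumor, 0: non-tumor, 2: undeterminable, -1: unlearned):

```python
def decide(cc, cf, ul):
    """Combine identification result cc, feature amount similarity result cf,
    and unlearned result ul into the determination result jr (steps S1011-S1017)."""
    if ul == -1:        # unlearned overrides everything (step S1013)
        return -1
    if cc == cf == 1:   # both say "object to be detected" (step S1014)
        return 1
    if cc == cf == 0:   # both say "other than object to be detected" (step S1016)
        return 0
    return 2            # mismatch -> undeterminable (step S1017)

results = [decide(1, 1, 0),   # agreement on tumor        -> 1
           decide(0, 0, 0),   # agreement on non-tumor    -> 0
           decide(1, 0, 0),   # disagreement              -> 2 (undeterminable)
           decide(0, 0, -1)]  # unlearned input           -> -1
```

Treating the disagreement case as undeterminable rather than trusting either classifier is what suppresses the over-detection discussed above.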
- Step S1018 The identification result determination unit 15 determines the object-likeness of the object to be detected from the determination result jr. For the breast, for example, results such as tumor or non-tumor are set in jr, so the presence or absence of a lesion (for example, a tumor) and the lesion-likeness (for example, y = 0.89; value range 0 to 1) can be obtained from the determination result jr.
- (xix) Step S1019 When the result is tumor, undeterminable, or unlearned, the drawing unit 16 draws a detection frame on the image to indicate the regions the user should check, as shown in FIGS. 7, 8A, and 8B. When the object is classified as other than an object to be detected (for example, normal tissue or normal cells), no detection frame is drawn. In addition, the drawing unit 16 displays the object-likeness value (lesion-likeness, etc.) calculated from the input image, as shown in FIG. 11.
- Step S1020 The recording unit 17 stores the coordinate information for drawing the detection frame on the target image input by the drawing unit 16 and the target image in the memory 90 (corresponding to the storage device 203).
- As described above, according to the first embodiment, the feature amounts of objects (for example, tissues, cells) in the input image are machine-learned from the input image to obtain the weights, filter coefficients, and offsets, and a classifier (consisting of the feature extractor and a logistic regression layer) that classifies whether an object is one to be detected is created. By combining this classifier with the identification result based on the similarity between the input image and the learning images computed with the feature amount dictionary, the presence or absence of an object to be detected and its object-likeness are determined. This makes it possible to classify objects to be detected (for example, tumors, abnormal tissues, abnormal cells) with high accuracy.
- FIG. 12 is a diagram showing a configuration example of an image diagnosis support apparatus 1 according to a second embodiment.
- The diagnostic imaging support apparatus 1 according to the second embodiment shares most of its configuration with the apparatus of the first embodiment (see FIG. 1), but the operation of the drawing unit 26 differs, and an identification reason presentation unit 20 is added as a new component. The description below therefore focuses on the differences from FIG. 1.
- Using the feature amount dictionary, the image diagnosis support apparatus 1 calculates the similarity between the input image and the learning images and uses that similarity to display, on the screen, the reason for the identification of the input image.
- (i) Identification reason presentation unit 20: When the identification reason button shown in FIG. 13A (A) is pressed, the identification reason presentation unit 20 displays, as shown in FIG. 13B (B), the feature amount of the input image, the feature amount of the learning image most similar to it (for example, an m-dimensional feature amount whose per-dimension values mi satisfy -1 ≤ mi ≤ 1), and the similarity identification score. The similarity identification score SS is obtained from Equation (4) using the SX value computed by Equation (3). If the learning images themselves are also stored in the feature amount dictionary, the input image and the learning image most similar to it are displayed on the screen as well.
- (ii) Drawing unit 26: The drawing unit 26 has the same functions as the drawing unit 16, except that it displays FIG. 13A (A) instead of FIG. 11, adding an identification reason button to the display contents of FIG. 11.
- The diagnostic imaging support apparatus 1 according to the second embodiment has the same hardware configuration as that shown in FIG. 2, but unlike the first embodiment, its memory 202 contains the identification reason presentation unit 20.
- The storage device 203 of the diagnostic imaging support apparatus 1 stores the similarity identification score calculated by the identification reason presentation unit 20, the learning images, the position information of the detection frames generated by the drawing unit 26, and the like.
- FIG. 14 is a flowchart for explaining the operation of the diagnostic imaging support apparatus 1 according to this embodiment.
- each processing unit (the input unit 10, the feature extraction unit 11, etc.) will be described as an operating entity, but the CPU 201 may be an operating entity and the CPU 201 may execute each processing unit as a program. Since steps S1401 to S1418 shown in FIG. 14 are the same as steps S1001 to S1018 shown in FIG. 10, processing after step S1419 will be described below.
- Step S1419 As in the first embodiment, when the result is tumor, undeterminable, or unlearned, the drawing unit 26 draws a detection frame on the image to indicate the regions the user should check, as shown in FIGS. 7, 8A, and 8B. When the object is classified as other than an object to be detected (for example, normal tissue or normal cells), no detection frame is drawn. In the second embodiment, the drawing unit 26 displays the object-likeness value (lesion-likeness, etc.) calculated from the input image, as shown in FIG. 13A (A).
- Step S1420 Using Equation (4), the identification reason presentation unit 20 calculates the similarity identification score SS from the feature amount of the input image and the feature amount of the learning image most similar to it. When the identification reason button shown in FIG. 13A (A) is pressed, these feature amounts and the score SS are displayed as shown in FIG. 13B (B). If the feature amount dictionary also stores the learning images, the input image and the learning image most similar to it (the determination image) are displayed on the screen as well.
- Step S1421 The recording unit 17 stores the coordinate information for drawing the detection frame on the target image input by the drawing unit 26, the target image, and the identification score of the degree of similarity calculated by the identification reason presentation unit 20 in the memory 90 (the storage device 203). equivalent).
- As described above, according to the second embodiment, as in the first, the feature amounts of objects (for example, tissues, cells) in the input image are machine-learned to obtain the weights, filter coefficients, and offsets, a classifier (feature extractor plus logistic regression layer) is created, and the similarity between the input image and the learning images is calculated with the feature amount dictionary; combining the classifier output with the similarity-based identification result determines the presence or absence of an object to be detected and its object-likeness, enabling high-accuracy classification of objects to be detected (for example, tumors, abnormal tissues, abnormal cells) from images. In addition, since the similarity identification score is calculated from the feature amount of the input image and the learning images in the feature amount dictionary, the feature amounts indicating the reason for identification and the determination image can be presented.
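The exact form of Equation (4) is not reproduced in this excerpt; any monotone mapping from the distance SX of the most similar learning image to a bounded score would serve the presentation purpose described here. A purely illustrative stand-in, assuming SS = 1 / (1 + SX):

```python
def similarity_identification_score(sx):
    """Hypothetical stand-in for Equation (4): map the distance SX of the most
    similar learning image to a similarity score in (0, 1]. The patent's actual
    Equation (4) is not reproduced in this excerpt; this mapping is assumed."""
    return 1.0 / (1.0 + sx)

ss_close = similarity_identification_score(0.1)  # very similar learning image
ss_far = similarity_identification_score(5.0)    # dissimilar learning image
```

Smaller distances yield scores near 1, so the displayed score rises with how closely the most similar dictionary entry matches the input image.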
- FIG. 15 is a functional block diagram showing the configuration of a remote diagnosis support system 1500 according to a third embodiment.
- the remote diagnosis support system 1500 has a server 1503 and an image acquisition device 1505 .
- The image acquisition device 1505 is, for example, a virtual slide device or a camera-equipped personal computer, and includes an imaging unit 1501 that captures image data and a display unit 1504.
- the image acquisition device 1505 has a communication device (not shown) that transmits image data to the server or the like 1503 and receives data transmitted from the server or the like 1503 .
- The server or the like 1503 includes the image diagnosis support apparatus 1, which applies the image processing of the first or second embodiment to the image data transmitted from the image acquisition device 1505, and a storage unit 1502 that stores the determination result output from the image diagnosis support apparatus 1.
- the server or the like 1503 has a communication device (not shown) that receives image data transmitted from the image acquisition apparatus 1505 and transmits determination result data to the image acquisition apparatus 1505. .
- The image diagnosis support apparatus 1 classifies the objects (for example, tissues, cells) in the image data captured by the imaging unit 1501 according to the presence or absence of objects to be detected (for example, abnormal tissues or abnormal cells such as cancer). It also determines the object-likeness (lesion-likeness, etc.) of the object to be detected according to its state (for example, its degree of progression).
- the display unit 1504 displays the determination result transmitted from the server or the like 1503 on the display screen of the image acquisition device 1505 .
- As the image acquisition device 1505, a regenerative medicine device having an imaging unit, an iPS cell culture device, an MRI, an ultrasound imaging device, or the like may also be used.
- As described above, according to the third embodiment, it becomes possible to provide a remote diagnosis support system: objects in images transmitted from facilities at other locations are classified as objects to be detected (for example, abnormal tissues or abnormal cells) or as other objects (for example, normal tissues or normal cells), the determination result is transmitted back to the facility, and the result is displayed on the display unit of the image acquisition device at that facility.
- FIG. 16 is a functional block diagram showing the configuration of a network consignment service providing system 1600 according to a fourth embodiment of the present invention.
- the network consignment service providing system 1600 has a server or the like 1603 and an image acquisition device 1605 .
- the image acquisition device 1605 is, for example, a device such as a virtual slide device or a personal computer equipped with a camera.
- The image acquisition device 1605 reads the classifier transmitted from the server or the like 1603 and classifies the objects (for example, tissues, cells) in images newly captured by its imaging unit 1601 as objects to be detected (for example, abnormal tissues or abnormal cells) or as other objects (for example, normal tissues or normal cells).
- the image acquisition device 1605 has a communication device that transmits image data to the server or the like 1603 and receives data transmitted from the server or the like 1603 .
- The server or the like 1603 includes the image diagnosis support apparatus 1, which applies the image processing of the first or second embodiment to the image data transmitted from the image acquisition device 1605, and a storage unit 1602 that stores the classifier output by the image diagnosis support apparatus 1.
- the server or the like 1603 has a communication device (not shown) that receives image data transmitted from the image acquisition apparatus 1605 and transmits a classifier to the image acquisition apparatus 1605 .
- The image diagnosis support apparatus 1 performs machine learning so that, among the objects (for example, tissues, cells) in images from facilities at other locations, objects to be detected (for example, abnormal tissues or cells) are determined as such while other objects (for example, normal tissues or cells) are not falsely detected, creating a classifier that calculates the feature amounts of those objects together with a feature amount dictionary created from the learning images.
- the storage unit 1604 stores the discriminator and the feature dictionary transmitted from the server or the like 1603 .
- The image diagnosis support apparatus 1 in the image acquisition device 1605 reads the classifier, the feature amount dictionary, and so on from the storage unit 1604 and uses them to determine whether the objects (for example, tissues, cells) in images newly captured by the imaging unit 1601 are objects to be detected (for example, abnormal tissues or abnormal cells) or other objects (for example, normal tissues or normal cells), displaying the determination result on the display screen of the output device 204 of the diagnostic imaging support apparatus 1.
- As the image acquisition device 1605, a regenerative medicine device having an imaging unit, an iPS cell culture device, an MRI, an ultrasound imaging device, or the like may also be used.
- As described above, according to the fourth embodiment, it becomes possible to provide a network contract service providing system: machine learning is performed on the objects (for example, tissues, cells) in images transmitted from facilities at other locations so that objects to be detected (for example, abnormal tissues, cells) are determined as such while other objects (for example, normal tissues, cells) are not falsely detected; the resulting classifier and a feature amount dictionary created from the learning images are then transmitted to each facility, where they are used to classify newly captured images.
- In the embodiments above, logistic regression is used to machine-learn the feature amounts of objects (for example, tissues, cells), but linear regression, Poisson regression, or the like may also be used with the same effect.
- In addition, the Manhattan distance was used to find the most similar feature amount.
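For comparison, the Manhattan distance of Equation (3) can be set side by side with the Euclidean distance on the same feature pair (the Euclidean alternative is shown only as an illustration and is not claimed by the excerpt):

```python
import numpy as np

def manhattan(a, b):
    """Distance used in the embodiments for the most-similar-feature search
    (Equation (3)): sum of absolute per-dimension differences."""
    return float(np.sum(np.abs(a - b)))

def euclidean(a, b):
    """Alternative metric, shown only for comparison."""
    return float(np.sqrt(np.sum((a - b) ** 2)))

# Hypothetical feature amounts of two images.
a = np.array([0.9, 0.1, 0.4])
b = np.array([0.1, 0.9, 0.2])
d_man = manhattan(a, b)  # 0.8 + 0.8 + 0.2 = 1.8
d_euc = euclidean(a, b)
```

The Euclidean distance never exceeds the Manhattan distance on the same pair, so swapping metrics would also require re-tuning the unlearned threshold H of step S1008.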
- Moreover, although one feature extractor was used to calculate the feature amount of the input image, two or more feature extractors may be used to calculate feature amounts, with the same effect.
- the present invention can also be implemented by software program code that implements the functions of the embodiments.
- a storage medium recording the program code is provided to the system or device, and the computer (or CPU or MPU) of the system or device reads the program code stored in the storage medium.
- the program code itself read out from the storage medium implements the functions of the above-described embodiments, and the program code itself and the storage medium storing it constitute the present invention.
- Storage media for supplying such program code include, for example, flexible disks, CD-ROMs, DVD-ROMs, hard disks, optical disks, magneto-optical disks, CD-Rs, magnetic tape, nonvolatile memory cards, and ROMs.
- Furthermore, the OS (operating system) or the like running on the computer may perform part or all of the actual processing based on the instructions of the program code, and that processing may implement the functions of the above-described embodiments. Alternatively, after the program code read from the storage medium is written to a memory on the computer, the CPU of the computer may perform part or all of the actual processing based on the instructions of the program code, and that processing may implement the functions of the above-described embodiments.
- The program code of the software that implements the functions of the embodiments may also be distributed via a network and stored in storage means such as the hard disk or memory of the system or apparatus, or in a storage medium such as a CD-RW or CD-R, and the computer (or CPU or MPU) of the system or apparatus may read and execute the program code stored in the storage means or the storage medium at the time of use.
- The control lines and information lines shown are those considered necessary for the explanation; not all control lines and information lines in the product are necessarily shown. In practice, almost all components may be considered to be interconnected.
- Reference Signs List: 1: image diagnosis support apparatus; 10: input unit; 11: feature extraction unit; 12: identification unit; 13: feature amount similarity identification unit; 14: unlearned determination unit; 15: identification result determination unit; 16: drawing unit; 17: recording unit; 20: identification reason presentation unit; 26: drawing unit; 91: control unit; 1500: remote diagnosis support system; 1600: network contract service providing system.
Description
<Functional configuration of the image diagnosis support apparatus>
FIG. 1 is a block diagram showing the functional configuration of an image diagnosis support apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram showing a hardware configuration example of the image diagnosis support apparatus 1 according to the embodiment of the present invention.
The configuration and operation of each element are described in detail below.
The feature amount of the input image is obtained. As an example, FIG. 3 shows how a feature amount is obtained; CNN in FIG. 3 stands for Convolutional Neural Network.
As shown in FIG. 5, the identification unit 12 uses the feature amount FAi (matrix f) produced by the feature extractor A obtained by the feature extraction unit 11 to calculate, by logistic regression processing according to Equation (2), the likeness value of the object to be detected (tumor-likeness, lesion-likeness, etc.), and determines whether the objects (for example, tissues and cells) in the input image A1 are objects to be detected (tumor, etc.) or not (non-tumor, etc.). In Equation (2), w is the weight matrix, b the offset value, g a nonlinear function, and y the calculation result; as described later, the weight w and the offset value b are obtained in advance by machine learning using the learning images.
Using the feature amount FAi of the input image obtained by the feature extraction unit 11 and the feature amounts FXi (X = 1 to N, where N is the group number) of the learning images for each group (for example, objects to be detected and others) in the feature amount dictionary 50, the feature amount similarity identification unit 13 obtains, by Equation (3), the feature amount FXi most similar to FAi and its SX value for each group. It then finds the minimum SX value (Smin) among all groups and the corresponding group number, and sets the group number in the feature amount similarity identification result. In Equation (3), aj (j = 1 to m) is the value of each dimension of the feature amount FAi, and bj is the value of each dimension of the feature amount FXi.
The unlearned determination unit 14 compares the minimum value Smin obtained by the feature amount similarity identification unit 13 with the threshold H and obtains the unlearned determination result: if Smin ≤ H, it sets, for example, 0 (learned); if Smin > H, it sets, for example, -1 (unlearned).
Claims (15)
- An image diagnosis support apparatus comprising:
a processor that executes a program for performing image processing on a target image; and
a memory for storing results of the image processing, wherein
the processor executes:
a process of inputting an image;
a process of extracting a feature amount of an object from the target image;
a process of extracting feature amounts of learning images to create a feature amount dictionary;
a process of identifying the target image from the feature amount to calculate an identification value;
a process of identifying the feature amount similarity of the target image using the feature amount dictionary to calculate a feature amount similarity identification value; and
a process of determining, using the identification value and the feature amount similarity identification value, the presence or absence of an object and the likelihood of the object for each target image.
- The image diagnosis support apparatus according to claim 1, wherein, in the process of creating the feature amount dictionary, the processor creates the feature amount dictionary using feature amounts of an arbitrary layer of a machine learning network.
- An image diagnosis support apparatus comprising:
a processor that executes a program for performing image processing on a target image; and
a memory for storing results of the image processing, wherein
the processor executes:
a process of inputting an image;
a process of extracting a feature amount of an object from the target image;
a process of extracting feature amounts of learning images to create a feature amount dictionary;
a process of determining, using the feature amount dictionary, whether the target image is unlearned;
a process of identifying the target image from the feature amount to calculate an identification value;
a process of identifying the feature amount similarity of the target image using the feature amount dictionary to calculate a feature amount similarity identification value; and
a determination process of determining, using the result of the unlearned determination, the identification value, and the feature amount similarity identification value, the presence or absence of an object and the likelihood of the object for each target image.
- The image diagnosis support apparatus according to claim 3, wherein, in the process of creating the feature amount dictionary, the processor creates the feature amount dictionary using feature amounts of an arbitrary layer of a machine learning network.
- An image diagnosis support apparatus comprising:
a processor that executes a program for performing image processing on a target image; and
a memory for storing results of the image processing, wherein
the processor executes:
a process of inputting an image;
a process of extracting a feature amount of an object from the target image;
a process of extracting feature amounts of learning images to create a feature amount dictionary;
a process of identifying the target image from the feature amount to calculate an identification value;
a process of identifying the feature amount similarity of the target image using the feature amount dictionary to calculate a feature amount similarity identification value;
a process of calculating a similarity between the target image and a learning image using the feature amount dictionary and presenting, using the calculated similarity, a reason for the identification of the target image; and
a determination process of determining, using the identification value and the feature amount similarity identification value, the presence or absence of an object and the likelihood of the object for each target image.
- The image diagnosis support apparatus according to claim 5, wherein, in the process of creating the feature amount dictionary, the processor creates the feature amount dictionary using feature amounts of an arbitrary layer of a machine learning network.
- An image diagnosis support method for classifying a desired object in a target image, the method comprising:
a step in which a processor that executes a program for performing image processing on the target image inputs an image obtained by imaging an object;
a step in which the processor extracts a feature amount of the object in the target image;
a step in which the processor extracts feature amounts of learning images to create a feature amount dictionary;
a step in which the processor identifies the target image from the feature amount to calculate an identification value;
a step in which the processor identifies the feature amount similarity of the target image using the feature amount dictionary to calculate a feature amount similarity identification value; and
a step in which the processor determines, using the identification value and the feature amount similarity identification value, the presence or absence of an object and the likelihood of the object for each target image.
- The image diagnosis support method according to claim 7, wherein, in the step of creating the feature amount dictionary, the feature amount dictionary is created using feature amounts of an arbitrary layer of a machine learning network.
- An image diagnosis support method for classifying a desired object in a target image, the method comprising:
a step in which a processor that executes a program for performing image processing on the target image inputs an image obtained by imaging an object;
a step in which the processor extracts a feature amount of the object in the target image;
a step in which the processor extracts feature amounts of learning images to create a feature amount dictionary;
a step in which the processor determines, using the feature amount dictionary, whether the target image is unlearned;
a step in which the processor identifies the target image from the feature amount to calculate an identification value;
a step in which the processor identifies the feature amount similarity of the target image using the feature amount dictionary to calculate a feature amount similarity identification value; and
a step in which the processor determines, using the result of the unlearned determination, the identification value, and the feature amount similarity identification value, the presence or absence of an object and the likelihood of the object for each target image.
- The image diagnosis support method according to claim 9, wherein, in the step of creating the feature amount dictionary, the feature amount dictionary is created using feature amounts of an arbitrary layer of a machine learning network.
- An image diagnosis support method for classifying a desired object in a target image, the method comprising:
a step in which a processor that executes a program for performing image processing on the target image inputs an image obtained by imaging an object;
a step in which the processor extracts a feature amount of the object in the target image;
a step in which the processor extracts feature amounts of learning images to create a feature amount dictionary;
a step in which the processor identifies the target image from the feature amount to calculate an identification value;
a step in which the processor identifies the feature amount similarity of the target image using the feature amount dictionary to calculate a feature amount similarity identification value;
a step in which the processor calculates a similarity between the target image and a learning image using the feature amount dictionary and presents, using the calculated similarity, a reason for the identification of the target image; and
a step in which the processor determines, using the identification value and the feature amount similarity identification value, the presence or absence of an object and the likelihood of the object for each target image.
- The image diagnosis support method according to claim 11, wherein, in the step of creating the feature amount dictionary, the feature amount dictionary is created using feature amounts of an arbitrary layer of a machine learning network.
- 対象画像において所望の物体を分類する画像診断支援方法であって、
前記対象画像に対して画像処理するためのプログラムを実行するプロセッサが、
物体を撮像した画像を入力するステップと、
前記プロセッサが、前記対象画像の物体の特徴量を抽出するステップと、
前記プロセッサが、学習用画像の特徴量を抽出して特徴量辞書を作成するステップと、
前記プロセッサが、前記特徴量辞書を用いて対象画像が未学習か否かを判定するステップと、
前記プロセッサが、前記特徴量から対象画像を識別して識別値を算出するステップと、
前記プロセッサが、前記特徴量辞書を用いて前記対象画像の特徴量類似度を識別して特徴量類似度識別値を算出するステップと、
前記プロセッサが、前記特徴量辞書を用いて前記対象画像と学習用画像との類似度を算出し、算出した前記類似度を用いて前記対象画像に対する識別理由を提示するステップと、
前記プロセッサが、前記未学習か否かの判定結果と前記識別値と前記特徴量類似度識別値を用いて、前記対象画像毎に物体の有無及び物体の確からしさを判定するステップと、
を実行することを特徴とする画像診断支援方法。 - 対象画像に対して画像処理するためのプログラムを実行するプロセッサと、画像処理の結果を格納するためのメモリと、を有し、前記プロセッサが、物体を撮像した画像を入力する処理と、前記対象画像の物体の特徴量を抽出する処理と、学習用画像の特徴量を抽出して特徴量辞書を作成する処理と、前記特徴量から対象画像を識別して識別値を算出する処理と、前記特徴量辞書を用いて前記対象画像の特徴量類似度を識別して特徴量類似度識別値を算出する処理と、前記識別値と前記特徴量類似度識別値を用いて、前記対象画像毎に物体の有無及び物体の確からしさを判定する判定処理と、を実行する画像診断支援装置を有するサーバーと、
an image acquisition device having an imaging device that captures image data, wherein
the image acquisition device transmits the image data to the server,
the server processes the received image data with the image diagnosis support device, stores the image of the judged object and the judgment result in a memory, and transmits them to the image acquisition device, and
the image acquisition device displays the received image of the judged object and the judgment result on a display device; the foregoing constituting a remote diagnosis support system.
- A server having an image diagnosis support device that comprises a processor executing a program for performing image processing on a target image and a memory for storing results of the image processing, the processor executing: a process of inputting an image obtained by imaging an object; a process of extracting a feature amount of the object in the target image; a process of extracting feature amounts of training images to create a feature amount dictionary; a process of identifying the target image from the feature amount to calculate an identification value; a process of identifying a feature amount similarity of the target image using the feature amount dictionary to calculate a feature amount similarity identification value; and a judgment process of determining, for each target image, the presence or absence of an object and the likelihood of the object using the identification value and the feature amount similarity identification value; and
an image acquisition device having an imaging device that captures image data and the image diagnosis support device, wherein
the image acquisition device transmits the image data to the server,
the server processes the received image data with the image diagnosis support device, stores the image of the judged object, a classifier, and a feature amount dictionary in a memory, and transmits the image of the judged object, the classifier, and the feature amount dictionary to the image acquisition device,
the image acquisition device stores the received image of the judged object, the classifier, and the feature amount dictionary, and
the image diagnosis support device in the image acquisition device judges an object in an image newly captured by the imaging device using the classifier and the feature amount dictionary, and displays the result of the judgment on a display device; the foregoing constituting a net contract service providing system.
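All of the claims above end with a judgment that fuses the identification value and the feature amount similarity identification value into a per-image decision on the presence and likelihood of an object. The patent text here does not disclose the fusion rule; a weighted sum compared against a threshold is one minimal assumption, sketched below with invented weights and scores.

```python
def judge(identification_value, similarity_identification_value,
          w_id=0.7, w_sim=0.3, threshold=0.5):
    """Combine the two scores (weights and threshold are assumptions of
    this sketch) into (object present?, likelihood) for one target image."""
    likelihood = (w_id * identification_value
                  + w_sim * similarity_identification_value)
    return likelihood >= threshold, likelihood

present, likelihood = judge(0.8, 0.6)
print(present, round(likelihood, 2))  # → True 0.74
```

In the claimed remote-diagnosis setting this judgment would run on the server (or, in the net contract service variant, on the image acquisition device using the downloaded classifier and dictionary), with the resulting image and judgment shown on the display device.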
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202280011731.9A CN116783616A (zh) | 2021-03-03 | 2022-02-15 | Image diagnosis support device, image diagnosis support method, remote diagnosis support system, and network contract service system
US18/276,275 US20240119588A1 (en) | 2021-03-03 | 2022-02-15 | Image diagnosis support device, image diagnosis support method, remote diagnosis support system, and net contract service system |
EP22762962.3A EP4303809A1 (en) | 2021-03-03 | 2022-02-15 | Image diagnosis assistance device, image diagnosis assistance method, remote diagnosis assistance system, and internet contracting service system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021033977A JP2022134681A (ja) | 2021-03-03 | 2021-03-03 | Image diagnosis support device, image diagnosis support method, remote diagnosis support system, and net contract service system
JP2021-033977 | 2021-03-03 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022185905A1 true WO2022185905A1 (ja) | 2022-09-09 |
Family
ID=83154127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/006024 WO2022185905A1 (ja) | 2021-03-03 | 2022-02-15 | Image diagnosis support device, image diagnosis support method, remote diagnosis support system, and net contract service system
Country Status (5)
Country | Link |
---|---|
US (1) | US20240119588A1 (ja) |
EP (1) | EP4303809A1 (ja) |
JP (1) | JP2022134681A (ja) |
CN (1) | CN116783616A (ja) |
WO (1) | WO2022185905A1 (ja) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000242784A (ja) * | 1999-02-18 | 2000-09-08 | Matsushita Electric Ind Co Ltd | Object recognition method and object recognition device |
JP2020009160A (ja) * | 2018-07-09 | 2020-01-16 | Hitachi High-Technologies Corporation | Machine learning device, image diagnosis support device, machine learning method, and image diagnosis support method |
2021
- 2021-03-03 JP JP2021033977A patent/JP2022134681A/ja active Pending
2022
- 2022-02-15 CN CN202280011731.9A patent/CN116783616A/zh active Pending
- 2022-02-15 US US18/276,275 patent/US20240119588A1/en active Pending
- 2022-02-15 WO PCT/JP2022/006024 patent/WO2022185905A1/ja active Application Filing
- 2022-02-15 EP EP22762962.3A patent/EP4303809A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20240119588A1 (en) | 2024-04-11 |
CN116783616A (zh) | 2023-09-19 |
JP2022134681A (ja) | 2022-09-15 |
EP4303809A1 (en) | 2024-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6324338B2 (ja) | Cell diagnosis support device, cell diagnosis support method, remote diagnosis support system, and service providing system | |
JP6979278B2 (ja) | Image diagnosis support device, image diagnosis support system, and image diagnosis support method | |
Komura et al. | Machine learning methods for histopathological image analysis | |
WO2021093448A1 (zh) | Image processing method and apparatus, server, medical image processing device, and storage medium | |
EP2973217A1 | System and method for reviewing and analyzing cytological specimens | |
JP2018072240A (ja) | Image diagnosis support device and system, and image diagnosis support method | |
Oloko-Oba et al. | Diagnosing tuberculosis using deep convolutional neural network | |
US11972560B2 | Machine learning device, image diagnosis support device, machine learning method and image diagnosis support method | |
Arjmand et al. | Deep learning in liver biopsies using convolutional neural networks | |
WO2022185905A1 (ja) | Image diagnosis support device, image diagnosis support method, remote diagnosis support system, and net contract service system | |
WO2021246013A1 (ja) | Image diagnosis method, image diagnosis support device, and computer system | |
Fan et al. | Positive-aware lesion detection network with cross-scale feature pyramid for OCT images | |
WO2021065937A1 (ja) | Machine learning device | |
Yang et al. | Leveraging auxiliary information from EMR for weakly supervised pulmonary nodule detection | |
Du et al. | False positive suppression in cervical cell screening via attention-guided semi-supervised learning | |
CN114170415A (zh) | TMB classification method and system based on deep domain adaptation of histopathological images | |
CN112967246A (zh) | X-ray image auxiliary device and method for a clinical decision support system | |
Bal et al. | Automated diagnosis of breast cancer with roi detection using yolo and heuristics | |
WO2023248788A1 (ja) | Classifier generation device and image diagnosis support device | |
WO2024045819A1 (zh) | Lesion region determination method, and model training method and apparatus | |
Umamaheswari et al. | Optimizing Cervical Cancer Classification with SVM and Improved Genetic Algorithm on Pap Smear Images. | |
Shunmuga Priya et al. | Enhanced Skin Disease Image Analysis Using Hybrid CLAHE-Median Filter and Salient K-Means Cluster | |
CN113012171A (zh) | A pulmonary nodule segmentation method based on a collaborative optimization network | |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22762962; Country of ref document: EP; Kind code of ref document: A1
| WWE | Wipo information: entry into national phase | Ref document number: 202280011731.9; Country of ref document: CN
| WWE | Wipo information: entry into national phase | Ref document number: 18276275; Country of ref document: US
| WWE | Wipo information: entry into national phase | Ref document number: 2022762962; Country of ref document: EP
| NENP | Non-entry into the national phase | Ref country code: DE
| ENP | Entry into the national phase | Ref document number: 2022762962; Country of ref document: EP; Effective date: 20231004