WO2022176665A1 - Image analysis method, image analysis device, classification device, classification system, control program, and recording medium - Google Patents
Image analysis method, image analysis device, classification device, classification system, control program, and recording medium
- Publication number
- WO2022176665A1 (PCT/JP2022/004597)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tissue
- image
- classification
- feature number
- feature
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N33/00—Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
- G01N33/48—Biological material, e.g. blood, urine; Haemocytometers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
Definitions
- the present disclosure relates to an image analysis method, an image analysis device, a classification device, a classification system, and the like that analyze tissue images obtained by imaging tissues of a living body and classify changes occurring in the tissues.
- Non-Patent Document 1 discloses a technique of applying artificial intelligence to image diagnosis of lung cancer.
- A pathologist determines the presence or absence of changes in a tissue from a tissue image of the patient's tissue, and classifies those changes. For example, based on a tissue image of the patient's lung, a pathologist determines whether the lung tissue is normal, and if it is not, decides whether the change should be classified as emphysema or as a precancerous lesion of lung adenocarcinoma.
- the progression of cancer is evaluated based on the size and depth of invasion of the lesion site.
- In lung cancer, especially lung adenocarcinoma, even an experienced pathologist may find it difficult to accurately classify premalignant lesions (or pre-invasive lesions) from tissue images of lung tissue.
- The image analysis method described in Non-Patent Document 1 was devised to improve this situation, but there is room for improvement in the accuracy with which it classifies premalignant lesions.
- One aspect of the present disclosure realizes an image analysis method, an image analysis device, a classification device, etc. that accurately classify changes occurring in tissues of a living body based on tissue images.
- According to one aspect of the present disclosure, an image analysis method analyzes a tissue image in which the cell nuclei of cells contained in a tissue of a living body and components different from the cell nuclei appear in different hues. The method includes: generating a plurality of single-color component images based on the pixel values corresponding to each of the plurality of color components constituting the tissue image; generating, from each single-color component image, a plurality of binarized images with different binarization reference values; calculating, for each binarized image, a first feature number, a second feature number, and a third feature number; and a classification step of inputting input data including those feature numbers to a classification model that models the correspondence relationship between the first feature number, the second feature number, the third feature number, and the classification related to changes occurring in the tissue, and outputting a classification result for the changes occurring in the tissue shown in the tissue image.
- According to one aspect, an image analysis apparatus processes a tissue image in which the cell nuclei of cells contained in a tissue of a living body and components different from the cell nuclei appear in different hues. The apparatus includes: a single-color component image generation unit that generates a plurality of single-color component images based on the pixel values corresponding to each of the plurality of color components constituting the tissue image; a binarization unit that generates, from each of the plurality of single-color component images, a plurality of binarized images with different binarization reference values; and a feature number calculation unit that calculates, for each of the plurality of binarized images binarized into a first pixel value and a second pixel value, a first feature number indicating the number of hole-shaped regions made up of pixels of the second pixel value surrounded by pixels of the first pixel value, a second feature number indicating the number of connected regions formed by connecting pixels of the first pixel value, and a third feature number that is a ratio between the first feature number and the second feature number.
- According to one aspect, a classification device acquires the first feature number, the second feature number, and the third feature number from the image analysis device according to ⟨2⟩ above. The classification device includes a classification unit that inputs input data, including the first feature number, the second feature number, and the third feature number calculated for each of the binarized images generated from each of the plurality of single-color component images, into a classification model that models the correspondence relationship between the first feature number, the second feature number, the third feature number, and the classification related to changes occurring in the tissue, and outputs a classification result regarding the changes occurring in the tissue shown in the tissue image.
- According to another aspect, a classification device processes a tissue image in which the cell nuclei of cells contained in a living tissue and components different from the cell nuclei appear in different hues. The classification device includes: a single-color component image generation unit that generates a plurality of single-color component images based on the pixel values corresponding to each of the plurality of color components constituting the tissue image; a binarization unit that generates, from each of the plurality of single-color component images, a plurality of binarized images with different binarization reference values; a feature number calculation unit that calculates, for each of the plurality of binarized images binarized into a first pixel value and a second pixel value, a first feature number indicating the number of hole-shaped regions made up of pixels of the second pixel value surrounded by pixels of the first pixel value, a second feature number indicating the number of connected regions formed by connecting pixels of the first pixel value, and a third feature number that is a ratio between the first feature number and the second feature number; and a classification unit that inputs input data, including the first feature number, the second feature number, and the third feature number calculated for each of the binarized images generated from each of the single-color component images, into a classification model that models the correspondence relationship between the first feature number, the second feature number, the third feature number, and the classification related to changes occurring in the tissue, and outputs a classification result regarding the changes occurring in the tissue shown in the tissue image.
- According to one aspect, a classification system includes the image analysis device according to ⟨2⟩ above, the classification device according to ⟨3⟩ above, an external device that transmits the tissue image to the image analysis device, and a presentation device that acquires the classification result output from the classification device and presents the classification result.
- The image analysis device and the classification device according to each aspect of the present disclosure may be realized by a computer.
- In this case, the computer realizes the image analysis device and the classification device by operating as each unit (software element) included in those devices.
- A control program that causes a computer to realize the image analysis device and the classification device, and a computer-readable recording medium on which the program is recorded, are also included in the scope of the present disclosure.
- FIG. 1 is a diagram showing an example of a tissue image showing normal lung tissue.
- FIG. 2 is a diagram showing an example of a tissue image showing lung tissue classified as emphysema.
- FIG. 3 is a diagram showing an example of a tissue image showing lung tissue classified as atypical adenomatous hyperplasia.
- FIG. 4 is a diagram showing an example of a tissue image showing lung tissue classified as lepidic-pattern adenocarcinoma.
- FIG. 5 is a diagram showing an example of a tissue image showing lung tissue classified as invasive adenocarcinoma.
- FIG. 6 is a schematic diagram for explaining the Betti number in the concept of homology.
- FIG. 7 is a functional block diagram illustrating a configuration example of a classification system including a classification device according to one aspect of the present disclosure.
- FIG. 8 is a flow chart showing the flow of processing performed by the classification device.
- FIG. 9 is a diagram showing an example of the data structure of training tissue images.
- FIG. 10 is a functional block diagram showing a configuration example of the main parts of a classification device that generates a classification model.
- FIG. 11 is a flow chart showing the flow of processing performed by the classification device to generate the classification model.
- FIG. 12 is a diagram showing the classification accuracy achieved by an image analysis method according to one aspect of the present disclosure.
- FIG. 13 is a functional block diagram showing another configuration example of a classification system according to one aspect of the present disclosure.
- FIG. 14 is a functional block diagram showing another configuration example of a classification system according to one aspect of the present disclosure.
- The inventors of the present disclosure targeted for analysis tissue images in which the cell nuclei of cells contained in a patient's (subject's) tissue (biological tissue) and components different from the cell nuclei (for example, the cytoplasm) appear in different hues.
- The following description uses a tissue image obtained by imaging a patient's lung tissue as an example of a tissue image to be analyzed.
- the inventors of the present disclosure then applied the concept of homology to describe the topological arrangement of cell nuclei in tissue images and to quantify changes occurring in the tissue.
- the inventors of the present disclosure focused on the fact that cell nuclei and cytoplasm appear in different hues in tissue images used for pathological diagnosis, which are images of living tissues.
- The inventors of the present disclosure extracted a partial image showing the region to be analyzed from the tissue image, and generated a plurality of single-color component images based on the pixel values corresponding to each of the plurality of color components constituting the partial image. For example, when the tissue image was a color image represented by red (R), green (G), and blue (B), a single-color component image based on the pixel values of the R component, one based on the pixel values of the G component, and one based on the pixel values of the B component were generated.
- Next, the inventors of the present disclosure generated multiple binarized images with different binarization reference values from each of the single-color component images. For each binarized image, they then calculated a one-dimensional Betti number b1 (first feature number), a zero-dimensional Betti number b0 (second feature number), and a ratio R (third feature number) between the one-dimensional Betti number b1 and the zero-dimensional Betti number b0. Here, the ratio R may be b1/b0 or b0/b1.
- The inventors of the present disclosure found that the changes occurring in the tissue shown in a tissue image can be accurately classified based on the calculated one-dimensional Betti number b1, zero-dimensional Betti number b0, and ratio R, and thereby arrived at the image analysis method according to one aspect of the present disclosure. For example, by applying the image analysis method according to one aspect of the present disclosure, emphysema and precancerous lesions of lung adenocarcinoma can be accurately classified based on tissue images.
- Changes occurring in lung tissue are classified into the following five categories (1) to (5) according to the degree of change and the progression of cancer:
  (1) normal
  (2) emphysema
  (3) atypical adenomatous hyperplasia (AAH)
  (4) lepidic pattern of adenocarcinoma (LP)
  (5) invasive adenocarcinoma (AC)
- AAH: atypical adenomatous hyperplasia
- LP: lepidic pattern of adenocarcinoma
- AC: invasive adenocarcinoma
- FIG. 1 is a diagram showing an example of a tissue image showing normal lung tissue.
- FIG. 2 is a diagram showing an example of a tissue image showing tissue classified as emphysema.
- FIG. 3 is a diagram showing an example of a tissue image showing tissue classified as atypical adenomatous hyperplasia.
- FIG. 4 is a diagram showing an example of a tissue image showing tissue classified as lepidic-pattern adenocarcinoma.
- FIG. 5 is a diagram showing an example of a tissue image showing tissue classified as invasive adenocarcinoma.
- Each tissue image has a resolution of 1600 ⁇ 1200 pixels.
- The tissue images shown in FIGS. 1 to 5 are images, magnified 100 times, of tissue sections that were taken from a patient's lung, embedded in paraffin, sliced into thin sections, and stained with HE (hematoxylin-eosin).
- HE staining is one of the methods used for staining collected tissue pieces and combines hematoxylin staining with eosin staining. Hematoxylin stains the chromatin in the cell nucleus and the ribosomes in the cytoplasm blue-purple (a first color), while eosin stains cytoplasmic components and the extracellular matrix red (a second color).
- However, the tissue images to be analyzed are not limited to images of HE-stained tissues such as those shown in FIGS. 1 to 5.
- the tissue image to be analyzed may be, for example, an image of a tissue stained using any known staining method capable of staining cell nuclei.
- the tissue image to be analyzed may be a color image to which RGB expression is applied, such as the tissue images shown in FIGS. 1 to 5, but is not limited to this.
- Alternatively, the tissue image may be a color image to which any representation other than RGB is applied, for example, a color image represented by cyan (Cy), magenta (Mg), and yellow (Ye).
- the tissue image may be an image of tissue taken from the patient's body.
- In the following, a case in which a tissue image obtained by imaging a patient's lung tissue is the analysis target will be described as an example.
- the image analysis method according to an aspect of the present disclosure can analyze, for example, tissue images of the prostate, mammary glands, gastrointestinal tract, liver, pancreas, lymph nodes, and the like.
- the image analysis method applies the concept of homology to the binarized image.
- Homology is a field of mathematics that facilitates the analysis of geometric properties, such as how figures are combined, by replacing them with algebraic expressions.
- the concept of homology is a mathematical concept that represents the connection and contact of constituent elements.
- Specifically, a tissue image is binarized using an appropriately set binarization reference value (also referred to as a binarization parameter), and the zero-dimensional Betti number b0 and the one-dimensional Betti number b1 are calculated from the binarized image. Using the calculated zero-dimensional Betti number b0 and one-dimensional Betti number b1, the degree of connection between the constituent elements of the tissue and the degree of contact between those elements can be evaluated.
- The Betti number is a topological invariant that is unrelated to the shapes of figures (here, figures correspond to the constituent elements of the tissue) and relates only to contact and separation between figures.
- When a q-dimensional singular homology group is finitely generated, it can be decomposed into the direct sum of a free abelian group and a finite abelian group.
- The rank of this free abelian group is called the Betti number.
- The zero-dimensional Betti number b0 is mathematically defined as follows.
- The number of connected components of a figure K formed by joining a finite number of line segments (such a figure is also called a one-dimensional complex) is called the zero-dimensional Betti number.
- Here, "a figure obtained by joining a finite number of points with a finite number of line segments is connected" means that any vertex of the figure can be reached from any other vertex by following the edges of the figure.
- In a binarized image, a connected region corresponds to a group of mutually connected pixels that share one of the two pixel values after binarization.
- The one-dimensional Betti number b1 is mathematically defined as follows. If the following conditions (1) and (2) are satisfied, the one-dimensional Betti number b1 of a figure K is r. (1) For a figure (a connected one-dimensional complex) K formed by joining a finite number of line segments, removing an appropriate set of r open one-dimensional simplexes (for example, line segments not including their endpoints) from K does not increase the number of connected components of K. (2) Removing any (r+1) open one-dimensional simplexes from K makes K disconnected (that is, the number of connected components of K increases by one).
- In a binarized image, a region of pixels surrounded by pixels having one pixel value after binarization (for example, surrounded by pixels whose pixel value is 0 as a result of binarization) forms a hole-shaped region, and the number of such hole-shaped regions is the one-dimensional Betti number b1.
- FIG. 6 is a schematic diagram for explaining the Betti number in the concept of homology.
- In the case of the figure M1, the number of black areas is one, so the zero-dimensional Betti number b0 of the figure M1 is 1. The number of white areas surrounded by black areas is also one, so the one-dimensional Betti number b1 of the figure M1 is 1.
- In the case of the figure M2, the number of black areas is two, so the zero-dimensional Betti number b0 of the figure M2 is 2. The number of white areas surrounded by black areas is three, so the one-dimensional Betti number b1 of the figure M2 is 3.
- In other words, the zero-dimensional Betti number b0 is the number of groups of constituent elements connected to one another, and the one-dimensional Betti number b1 is the number of spaces (hereinafter referred to as "hole-shaped regions") each surrounded by connected constituent elements forming its outer edge.
- The number of hole-shaped regions is the total number of "holes" present in the connected components.
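To make the definitions above concrete, here is a minimal Python sketch (an illustration, not the patent's implementation; the embodiments later mention CHomP as one usable program) that computes b0 and b1 for a binarized image with scipy's connected-component labeling. Pixels of one pixel value are treated as foreground; background components that do not touch the image border are the hole-shaped regions.

```python
import numpy as np
from scipy import ndimage

def betti_numbers(binary: np.ndarray) -> tuple[int, int]:
    """Compute (b0, b1) of a 2-D binarized image.

    `binary` is nonzero where a pixel has the first pixel value
    (the value whose pixels form the connected regions).
    """
    # b0: number of 8-connected foreground components (connected regions).
    b0 = ndimage.label(binary, structure=np.ones((3, 3), dtype=int))[1]

    # b1: 4-connected background components fully enclosed by the foreground,
    # i.e. background components that do not touch the image border.
    bg_labels, n_bg = ndimage.label(binary == 0)
    border = set(bg_labels[0, :]) | set(bg_labels[-1, :]) \
           | set(bg_labels[:, 0]) | set(bg_labels[:, -1])
    border.discard(0)  # label 0 marks foreground, not a background component
    b1 = n_bg - len(border)
    return b0, b1

# The figure M1 described above: one ring-shaped black region with one hole.
m1 = np.array([[1, 1, 1],
               [1, 0, 1],
               [1, 1, 1]])
print(betti_numbers(m1))  # (1, 1)
```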
- FIG. 7 is a block diagram showing an example configuration of a classification system 100 including a classification device 1 that executes an image analysis method according to one aspect of the present disclosure.
- The classification system 100 includes the classification device 1, an external device 4 that transmits a tissue image 31 to the classification device 1, and a presentation device 5 that acquires the classification result output from the classification device 1 and presents it. Note that FIG. 7 shows an example in which the classification system 100 is introduced in a medical institution H1.
- the external device 4 may be, for example, a microscope with an imaging function, or a computer connected to the microscope and capable of acquiring image data from the microscope.
- the external device 4 may be a server device within the medical institution H1 that stores and manages various medical image data, pathological image data, and the like.
- Although the case in which the classification device 1 acquires the tissue image 31 from the external device 4, which is separate from the classification device 1, is illustrated here, the configuration is not limited to this.
- For example, the classification device 1 may be built into the external device 4.
- the presentation device 5 may be a display and a speaker capable of presenting information output from the classification device 1 and the like.
- The presentation device 5 may be a display provided in the classification device 1 or the external device 4.
- Alternatively, it may be a computer, a tablet terminal, or the like used by a pathologist, laboratory technician, researcher, or the like belonging to the medical institution H1.
- the classification device 1 and the external device 4, and the classification device 1 and the presentation device 5 may be connected by wireless communication, or may be connected by wired communication.
- The classification device 1 includes a control unit 2 and a storage unit 3.
- A tissue image 31 and a classification model 33 may be stored in the storage unit 3.
- The classification model 33 will be described later.
- the storage unit 3 may store control programs for each unit executed by the control unit 2, OS programs, application programs, and the like. Further, the storage unit 3 may store various data to be read when the control unit 2 executes these programs.
- The storage unit 3 is configured by a non-volatile storage device such as a hard disk or flash memory. In addition to the storage unit 3, the classification device 1 may include a volatile storage device, such as a RAM (Random Access Memory), used as a work area for temporarily holding data while the various programs described above are executed.
- the control unit 2 may be configured by a control device such as a CPU (central processing unit) or a dedicated processor, for example.
- Each of these units can be realized by a control device such as a CPU reading a program stored in the storage unit 3, which is realized by a ROM (Read Only Memory) or the like, into a RAM (Random Access Memory) or the like and executing it.
- the control unit 2 classifies changes occurring in the tissue shown in the tissue image 31, and outputs the classification result.
- The control unit 2 includes an image acquisition unit 21, a single-color component image generation unit 22, a binarization unit 23, a feature number calculation unit 24, a classification unit 25, and an output control unit 26.
- The image acquisition unit 21 acquires a tissue image 31 obtained by imaging the tissue from the external device 4.
- the tissue image 31 may be an image of a lung tissue piece taken from the patient's body and captured at a predetermined magnification.
- the image acquisition unit 21 may acquire a partial image corresponding to the region extracted from the tissue image 31 from the external device 4 .
- The image acquisition unit 21 may store the acquired tissue image 31 in the storage unit 3.
- The image acquisition unit 21 may have known image recognition and image processing functions. These allow the image acquisition unit 21 to cut out a region showing the tissue to be analyzed from the tissue image 31, or to divide the tissue image 31 into a plurality of partial images. For example, the image acquisition unit 21 may be able to cut out the region in which the tissue appears in the tissue image 31 while distinguishing it from surrounding regions (for example, a region in which resin appears).
- The classification device 1 may be able to classify changes occurring in the imaged tissue more accurately by analyzing the tissue image 31 with its image resolution lowered to a predetermined level rather than analyzing the original tissue image 31 at its original resolution. More specifically, fine detail appearing in a high-resolution tissue image 31 can become "noise" that affects each process described later, and intentionally lowering the resolution of the tissue image 31 is effective in blocking the inflow of such fine detail from the original tissue image 31. The image acquisition unit 21 may therefore generate the tissue image 31 to be analyzed by lowering the image resolution of the acquired original tissue image 31.
- Alternatively, the classification device 1 may generate a plurality of tissue images 31 with different image resolutions from each acquired original tissue image and use all of them as analysis targets. Using a plurality of tissue images 31 with different image resolutions as analysis targets may make it possible to classify the changes occurring in the imaged tissue more accurately. To this end, the image acquisition unit 21 may generate a plurality of tissue images 31 with different image resolutions for each acquired tissue image. In this specification, generating a plurality of tissue images 31 with different image resolutions for each acquired tissue image and analyzing all of them is referred to as "multi-scale analysis".
- Conversely, generating a single tissue image 31 adjusted to a predetermined image resolution for each acquired tissue image and analyzing that single image may be referred to as "single-scale analysis".
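As a rough sketch of the multi-scale idea (assuming the Pillow library is available; the specific resolutions mirror those used in the evaluation described later):

```python
from PIL import Image

def multiscale_images(path: str,
                      sizes=((1600, 1200), (800, 600), (400, 300), (200, 150))):
    """Generate lower-resolution copies of one acquired tissue image."""
    original = Image.open(path)
    # Deliberately lowering the resolution discards fine detail that could
    # act as "noise" in the binarization and feature-number steps below.
    return [original.resize(size, Image.LANCZOS) for size in sizes]
```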
- The tissue images 31 generated by the image acquisition unit 21 with reduced image resolution may be stored in the storage unit 3 and subjected to the processing described later.
- The single-color component image generation unit 22 generates a plurality of single-color component images from the tissue image 31, based on the pixel values corresponding to each of the plurality of color components constituting the tissue image 31.
- For example, if the tissue image 31 is a color image to which RGB representation is applied, the single-color component image generation unit 22 generates, from the tissue image 31, three single-color component images based on the pixel values corresponding to each of the R, G, and B components.
- the single-color component image generator 22 may also be configured to generate a grayscale image (single-color component image) based on pixel values corresponding to the luminance components of the tissue image 31 . In the case of multi-scale analysis, the single-color component image generator 22 generates multiple single-color component images for each of the multiple tissue images 31 with different image resolutions generated by the image acquisition unit 21 .
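A minimal sketch of this step, assuming an RGB input held as a NumPy array (the BT.601 luma weights for the grayscale image are an assumption of this sketch; the text only says the grayscale image is based on the luminance components):

```python
import numpy as np

def single_color_components(rgb: np.ndarray) -> dict[str, np.ndarray]:
    """Split an H x W x 3 uint8 tissue image into four single-color images."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Grayscale from assumed BT.601 luma weights.
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    return {"R": r, "G": g, "B": b, "gray": gray}
```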
- The binarization unit 23 performs binarization processing on each single-color component image to generate a plurality of binarized images with different binarization reference values.
- the binarization unit 23 generates a plurality of binarized images with different binarization reference values from each of the plurality of single-color component images.
- Specifically, the binarization unit 23 converts pixels having pixel values greater than the binarization reference value into white pixels, and pixels having pixel values equal to or less than the binarization reference value into black pixels.
- the binarization unit 23 performs binarization processing on the single-color component image each time the reference value for binarization is changed, and generates a plurality of binarized images. That is, the binarization unit 23 generates a plurality of binarized images with different binarization reference values for all single-color component images generated from the tissue image 31 .
- For example, the binarization unit 23 sets the binarization reference value in the range of 0 to 255. When the reference value is set to a pixel value of 100, pixels with a pixel value of 100 or less have a pixel value of 0 after the binarization process, and pixels with a pixel value greater than 100 have a pixel value of 255. In one example, the binarization unit 23 may change the binarization reference value from 1 to 255 in steps of 1 for each single-color component image to generate 255 binarized images.
- Alternatively, the binarization unit 23 may change the binarization reference value in steps of 1 from 2 to 254 for each single-color component image to generate 253 binarized images, or may change the reference value in larger steps (for example, in steps of 5) to generate a smaller number of binarized images.
- The binarization unit 23 is not limited to these examples and may generate a plurality of binarized images by changing the binarization reference value for each single-color component image according to any desired rule.
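A minimal sketch of this threshold sweep (one binarized image per reference value, with pixels above the reference value mapped to white as described above):

```python
import numpy as np

def binarize_sweep(component: np.ndarray, thresholds=range(1, 256)):
    """Map one single-color component image to {reference value: binarized image}."""
    # Pixels above the reference value become white (255); pixels at or
    # below it become black (0).
    return {t: np.where(component > t, 255, 0).astype(np.uint8)
            for t in thresholds}
```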
- For each of the plurality of binarized images, the feature number calculation unit 24 calculates a one-dimensional Betti number b1 indicating the number of hole-shaped regions composed of pixels having one binarized pixel value (hereinafter referred to as the second pixel value) that are surrounded by pixels having the other binarized pixel value (hereinafter referred to as the first pixel value). The feature number calculation unit 24 also calculates a zero-dimensional Betti number b0 indicating the number of connected regions formed by connecting pixels of the first pixel value, and a ratio R between the one-dimensional Betti number b1 and the zero-dimensional Betti number b0. Although the case in which the ratio R is b1/b0 is exemplified below, the ratio R may instead be b0/b1.
- Here, a connected region is, for example, a region in which pixels whose pixel value is 0 after the binarization process are gathered adjacent to one another.
- Each connected region is surrounded by pixels whose pixel value is 255 after the binarization process, and the connected regions are independent of one another.
- A hole-shaped region is, for example, a region in which pixels having a pixel value of 255 after the binarization process are gathered adjacent to one another.
- Each hole-shaped region is surrounded by pixels whose pixel value after binarization is 0, and the hole-shaped regions are independent of one another.
- For example, when 255 binarized images are generated from a single-color component image, the feature number calculation unit 24 calculates 255 one-dimensional Betti numbers b1, 255 zero-dimensional Betti numbers b0, and 255 ratios R.
- The values of the one-dimensional Betti number b1 and the zero-dimensional Betti number b0 calculated by the feature number calculation unit 24 depend on the magnification and resolution set when the tissue image 31 was acquired, and on the area of the imaged region of the tissue image 31. It is therefore desirable that the feature number calculation unit 24 calculate the one-dimensional Betti number b1 and the zero-dimensional Betti number b0 for tissue images 31 that have the same magnification and resolution and the same imaged area.
- An existing program can be used as the feature number calculation unit 24.
- An example of such a program is CHomP.
- CHomP is freeware distributed under the GNU General Public License (GPL). However, the program is not limited to this; any program other than CHomP may be used as long as it can calculate the zero-dimensional Betti number b0 and the one-dimensional Betti number b1 of an image.
- The classification unit 25 inputs input data including the combination of the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R to the classification model described later, and outputs a classification result regarding the changes occurring in the tissue shown in the tissue image 31.
- the change occurring in the tissue may be the degree of cell differentiation calculated based on the structural characteristics, arrangement, and invasion mode of tumor cells occurring in the tissue.
- The classification model 33 models the correspondence relationship between the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R on the one hand, and the classification relating to the changes occurring in the tissue on the other. That is, the classification model 33 is generated by machine learning using combinations of the following (1) and (2) as learning data:
- (1) a training tissue image 32 obtained by imaging a tissue, to which classification information classifying the changes occurring in the imaged tissue has been assigned in advance; and (2) the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R calculated from that training tissue image 32.
- the output control unit 26 causes the presentation device 5 to present information indicating the classification results output from the classification unit 25 .
- the output control unit 26 may be configured to cause the presentation device 5 to present the tissue image 31 to be analyzed together with the information indicating the classification result.
- the output control unit 26 may be configured to control the presentation device 5 so as to present the classification result of the region extracted from the tissue image 31 at a position corresponding to the region of the tissue image 31.
- With this configuration, the classification device 1 can present the output classification result for the tissue image 31, together with the position of the region corresponding to that classification result, to users including pathologists, laboratory technologists, and researchers.
- the method of presenting the classification results to the user may be any desired mode.
- The presentation device 5 may present the classification results on a display, or the classification results may be output from a printer (not shown), a speaker (not shown), or the like.
- FIG. 8 is a flowchart showing an example of the flow of processing performed by the classification device 1.
- the image acquisition unit 21 acquires the tissue image 31 from the external device 4 (step S1).
- the image acquisition unit 21 may change the image resolution of the tissue image 31 as necessary.
- the image acquisition unit 21 may generate a plurality of tissue images 31 with different image resolutions for each acquired tissue image.
- the single-color component image generation unit 22 generates a plurality of single-color component images from the tissue image 31 based on pixel values corresponding to each of the plurality of color components forming the tissue image 31 (step S2: single-color component image generation step).
- the binarization unit 23 generates a plurality of binarized images with different binarization reference values from each single-color component image (step S3: binarization step).
- Next, the feature number calculation unit 24 calculates the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R for each of the plurality of binarized images (step S4: feature number calculation step).
- The classification unit 25 then inputs input data including the combination of the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R to the classification model 33 (step S5: classification step), and outputs a classification result regarding the changes occurring in the imaged tissue (step S6: classification step).
- As described above, the classification device 1 generates, from the tissue image 31, a plurality of single-color component images based on the pixel values corresponding to each of the plurality of color components constituting the tissue image 31, and generates, from each single-color component image, a plurality of binarized images with different binarization reference values.
- The classification device 1 calculates, for each of the binarized images, the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R between the one-dimensional Betti number b1 and the zero-dimensional Betti number b0.
- The classification device 1 then inputs the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R as one set of data to the classification model 33, and outputs a classification result regarding the changes occurring in the tissue. As a result, the classification device 1 can accurately classify the changes occurring in the tissue shown in the tissue image 31.
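Combining the sketches above, the input data for the classification model can be assembled roughly as follows (the function names refer to the earlier sketches; the exact feature ordering is an assumption):

```python
import numpy as np

def feature_vector(rgb: np.ndarray, thresholds=range(1, 256)) -> np.ndarray:
    """Flatten (b1, b0, R) over all component images and all thresholds."""
    features = []
    for component in single_color_components(rgb).values():
        for binary in binarize_sweep(component, thresholds).values():
            # Black (value 0) pixels form the connected regions; white
            # (value 255) pixels enclosed by them form the hole-shaped regions.
            b0, b1 = betti_numbers(binary == 0)
            features.extend([b1, b0, b1 / b0 if b0 else 0.0])  # R = b1/b0
    return np.asarray(features, dtype=np.float64)
```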
- When the changes occurring in the tissues of a living body are emphysema or precancerous lesions of lung adenocarcinoma, they often manifest as changes in the uniformity of cell shape and size.
- the properties of the tissue image 31 obtained by imaging the tissue of the living body are mathematically analyzed using the concept of homology, and changes occurring in the tissue are classified based on the analysis results.
- This classification result is output from the classification model 33, which is generated by machine learning using the training tissue images 32 described below. The classification results are therefore as understandable and reliable as a pathologist's judgment.
- Training tissue images 32 may be used to generate a classification model 33 .
- FIG. 9 is a diagram showing an example of the data structure of the training tissue images 32.
- The training tissue images 32 include tissue images showing tissues sampled from patients' bodies, each assigned a training tissue image ID.
- In each tissue image included in the training tissue images 32, the cell nuclei of the cells included in the tissue and components different from the cell nuclei appear in different hues.
- Classification information indicating the classification of the changes occurring in each imaged tissue is assigned in advance by medical personnel.
- Specifically, the classification information is the determination result of a pathologist who checked the tissue images included in the training tissue images 32, and indicates the classification of the changes occurring in the tissue shown in each tissue image.
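One training record might therefore be represented as follows (a sketch suggested by FIG. 9; the field names are illustrative assumptions):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TrainingTissueImage:
    """One record of the training tissue images 32."""
    image_id: str      # training tissue image ID, e.g. "T1"
    image: np.ndarray  # H x W x 3 stained tissue image
    label: str         # pathologist's classification, e.g. "AAH"
```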
- FIG. 10 is a functional block diagram showing a configuration example of the main parts of the classification device 1 that generates the classification model 33. For convenience of explanation, members having the same functions as those described with reference to FIG. 7 are given the same reference numerals, and their description is not repeated.
- Although FIG. 10 shows an example in which the training tissue images 32 are stored in advance in the storage unit 3 of the classification device 1, the configuration is not limited to this.
- For example, the image acquisition unit 21 shown in FIG. 7 may be configured to acquire the training tissue images 32 from the external device 4.
- In the example described here, the classification device 1 has the function of generating the classification model 33, but the configuration is not limited to this.
- For example, the classification model 33 may be generated by causing another computer different from the classification device 1 to execute the processing described below.
- In that case, the classification model 33 generated by the other computer may be stored in the storage unit 3 of the classification device 1 and used by the classification unit 25.
- The classification model generation unit 27 executes a machine learning algorithm using the training tissue images 32 on a classification model candidate to generate the trained classification model 33.
- The trained classification model 33 is stored in the storage unit 3.
- A known machine learning algorithm suitable for data classification can be applied to generate the classification model 33.
- For example, perceptrons, logistic regression, k-nearest neighbors, support vector machines, decision trees, random forests, or gradient boosting may be used to generate the classification model 33.
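A minimal training sketch, assuming scikit-learn and a random forest (one of the algorithm families named above); it stands in for, and greatly simplifies, the candidate-update loop of steps S11 to S21 described next:

```python
from sklearn.ensemble import RandomForestClassifier

def train_classification_model(records):
    """records: iterable of TrainingTissueImage (see the earlier sketch)."""
    X = [feature_vector(r.image) for r in records]  # (b1, b0, R) features
    y = [r.label for r in records]                  # pathologist labels
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model

# Classification (steps S5 and S6) for a new tissue image `img`:
#   result = model.predict([feature_vector(img)])[0]
```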
- FIG. 11 is a flowchart showing an example of the flow of processing performed by the classification device 1 to generate the classification model 33.
- The classification model generation unit 27 reads the training tissue images 32 from the storage unit 3 (step S11) and selects a tissue image that has not yet been selected from the training tissue images 32 (for example, the tissue image whose training tissue image ID is "T1" in FIG. 9) (step S12).
- The image resolution of the training tissue images 32 may be changed to a predetermined image resolution in advance, but this is not limiting.
- The classification model generation unit 27 may change the image resolution of the training tissue images 32 as necessary.
- Next, the single-color component image generation unit 22 generates a plurality of single-color component images from the tissue image selected by the classification model generation unit 27, based on the pixel values corresponding to each of the plurality of color components constituting the tissue image (step S13).
- the binarization unit 23 generates a plurality of binarized images with different binarization reference values for each of the plurality of single-color component images (step S14).
- Next, the feature number calculation unit 24 calculates the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R for each of the plurality of binarized images (step S15).
- The classification unit 25 inputs input data including the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R to the classification model candidate (step S16), and outputs a result of classifying the changes occurring in the tissue shown in the selected tissue image (step S17).
- The classification model generation unit 27 compares the classification result output from the classification unit 25 with the classification information corresponding to the tissue image selected in step S12 and calculates an error (step S18). The classification model generation unit 27 then updates the classification model candidate that output the classification result so that the calculated error is minimized (step S19).
- If not all tissue images included in the training tissue images 32 have been selected in step S12 (NO in step S20), the classification model generation unit 27 returns to step S12 and selects a tissue image that has not yet been selected from the training tissue images 32.
- When all tissue images have been selected (YES in step S20), the classification model generation unit 27 stores the current classification model candidate in the storage unit 3 as the trained classification model 33 (step S21).
- Upon input of the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R calculated for a given tissue image (for example, the tissue image 31 in FIG. 7), the classification model 33 generated in this way can output a highly accurate classification result regarding the changes occurring in the tissue shown in that tissue image.
- The classification model 33 may also be generated by causing a computer different from the classification device 1 to perform the processing shown in FIG. 11. In this case, the trained classification model 33 may be installed in the classification device 1.
- The image analysis method according to one aspect of the present disclosure was applied to tissue images 31 of lungs: emphysema and precancerous lesions of lung adenocarcinoma were classified based on each tissue image 31, and the classification accuracy was evaluated.
- FIG. 12 is a diagram illustrating classification accuracy by an image analysis method according to one aspect of the present disclosure.
- Ninety-four lung tissue images 31 with classification results assigned by a professional pathologist were used.
- Of these, 20 are classified as emphysema, 20 as normal, 23 as atypical adenomatous hyperplasia, 19 as lepidic-pattern adenocarcinoma, and 12 as invasive adenocarcinoma.
- the analysis accuracy shown in FIG. 12 is the result of comparing the classification result by the pathologist and the classification result output from the classification model 33 for the 94 lung tissue images 31 .
- the analytical accuracy shown in FIG. 12 is the result obtained by applying multi-scale analysis including the steps shown below.
- First, for each of the 94 tissue images 31 (image resolution 1600 × 1200), the image acquisition unit 21 generated tissue images 31 at image resolutions of 1600 × 1200 (image resolution unchanged), 800 × 600, 400 × 300, and 200 × 150.
- Next, for each tissue image 31, the single-color component image generation unit 22 generated four single-color component images: three based on the pixel values corresponding to the R, G, and B components, and a grayscale image based on the pixel values corresponding to the luminance components.
- the binarization unit 23 binarized these four types of single-color component images to generate a plurality of binarized images with different binarization reference values.
- Next, the feature number calculation unit 24 calculated the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R for each of the plurality of binarized images.
- The classification unit 25 then input the input data including the combination of the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R to the classification model 33, and output a classification result for the tissue shown in each of the 94 tissue images 31.
- As shown in FIG. 12, the classification device 1 classified the tissue images 31 that the pathologist had classified as atypical adenomatous hyperplasia with a correct answer rate of 78.3% (≈18/23).
- correct classification of atypical adenomatous hyperplasia is important because it means early detection of premalignant lesions of lung adenocarcinoma.
- the classifier 1 had only a 4.3% chance of misclassifying a tissue image 31 classified as atypical adenomatous hyperplasia by a pathologist as normal lung.
- Likewise, the classification device 1 classified the tissue images 31 that the pathologist had classified as invasive adenocarcinoma with a correct answer rate of 83.3% (≈10/12).
- Thus, the classification results obtained by applying the image analysis method according to one aspect of the present disclosure agree closely with the classification results of a pathologist, showing that emphysema and precancerous lesions of lung adenocarcinoma can be classified accurately.
- Although FIG. 7 shows an example in which the classification system 100 is introduced in the medical institution H1, the configuration is not limited to this.
- For example, a classification device 1a may be communicably connected to the external device 4 of a medical institution H2 via a communication network 50.
- A classification system 100a employing such a configuration will be described with reference to FIG. 13.
- FIG. 13 is a functional block diagram illustrating a configuration example of a classification system 100a according to one aspect of the present disclosure.
- the classification device 1a includes a communication unit 6 that functions as a communication interface with the external device 4 of the medical institution H2.
- the image acquisition unit 21 can acquire the tissue image 31 from the external device 4 of the medical institution H2 via the communication network.
- the classification device 1a transmits the classification result output from the classification unit 25 to the external device 4 via the communication network 50.
- Each tissue image 31 transmitted from the medical institution H2 to the classification device 1a may be given an image ID identifying the tissue image 31 and a classification number unique to the patient from whom the imaged tissue was collected (a patient ID). Furthermore, each tissue image 31 may be given a medical institution ID indicating the medical institution H2 that is the transmission source.
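For illustration only (the key names are assumptions, not defined by the patent), the metadata accompanying one transmitted tissue image 31 might look like:

```python
payload = {
    "image_id": "IMG-0001",     # identifies this tissue image 31
    "patient_id": "P-12345",    # patient-unique classification number
    "institution_id": "H2",     # medical institution that is the source
    "pixels": "<tissue image 31 data>",
}
```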
- With this configuration, the classification device 1a can provide the classification result obtained by analyzing the tissue image 31 acquired from each of a plurality of medical institutions to the medical institution that transmitted the image data of that tissue image 31.
- the administrator who manages the classification device 1a may charge each medical institution a predetermined fee as compensation for the service of providing the classification result estimated from the acquired tissue image 31 .
- The classification device 1 shown in FIG. 7 and the classification device 1a shown in FIG. 13 each have both an image analysis function for the tissue image 31 and a classification function using the classification model 33, but the configuration is not limited to this. For example, the classification devices 1 and 1a may be realized by combining an image analysis device 1A, which includes the image acquisition unit 21, the single-color component image generation unit 22, the binarization unit 23, and the feature number calculation unit 24, with a classification device 1B, which includes a control unit 2B.
- The control unit 2B has a classification unit 25.
- a classification system 100b employing such a configuration will be described with reference to FIG.
- FIG. 14 is a functional block diagram illustrating a configuration example of a classification system 100b according to one aspect of the present disclosure.
- the image analysis device 1A includes a communication unit 6A that functions as a communication interface with the external device 4 of the medical institution H2 and the classification device 1B.
- the image acquisition unit 21 can acquire the tissue image 31 from the external device 4 of the medical institution H2 via the communication network.
- the image analysis device 1A transmits the first feature number, the second feature number, and the third feature number calculated by the feature number calculation unit 24 to the classification device 1B via the communication network 50.
- the classification device 1B may be communicably connected to a plurality of image analysis devices 1A.
- Various IDs may be assigned to the feature numbers (including the first feature number, the second feature number, and the third feature number) transmitted from the image analysis device 1A to the classification device 1B.
- the various IDs include a classification number (patient ID) unique to the patient from whom the tissue shown in the tissue image 31 to be analyzed was collected, a medical institution ID indicating the medical institution H2 that sent each tissue image 31, and a device ID or the like specific to the image analysis device 1A that executed the image analysis.
- With this configuration, the image analysis device 1A analyzes the tissue images 31 acquired from each of a plurality of medical institutions, calculates the predetermined feature numbers, and transmits them to the classification device 1B.
- the classification device 1B can output a classification result using the number of features acquired from the image analysis device 1A and provide the classification result to each medical institution that is the source of the image data of the tissue image 31 .
- the administrator who manages the classification device 1B may charge each medical institution a predetermined fee for the service of providing classification results classified from the acquired tissue image 31 .
- The classification device 1B may distribute a computer program (hereinafter referred to as an image analysis application) that causes a computer deployed in the medical institution H2 (for example, the presentation device 5) to function as the image analysis device 1A.
- a computer in which the image analysis application is installed can function as the image analysis device 1A.
- the classification device 1B may send a notification requesting payment to the computer that has received and installed the image analysis application.
- the administrator who manages the classification device 1B can receive a predetermined fee from the medical institution H2 as compensation for the service of providing the image analysis application.
- the notification requesting payment may be sent to a credit card company or the like contracted by the user of the computer that received and installed the image analysis application.
- In this configuration as well, various IDs may be assigned to the feature numbers transmitted to the classification device 1B.
- the various IDs include a classification number (patient ID) unique to the patient from whom the tissue shown in the tissue image 31 to be analyzed was collected, a medical institution ID indicating the medical institution H2 that sent each tissue image 31, and a device ID or the like specific to the image analysis device 1A that executed the image analysis.
- In this configuration, the medical institution H2 does not need to transmit the tissue image 31 to the outside of the medical institution H2 (for example, to the image analysis device 1A).
- The medical institution H2 need only analyze each tissue image 31 using the image analysis application to calculate the first feature number, the second feature number, and the third feature number from each tissue image 31, and transmit these feature numbers to the classification device 1B.
- Because the tissue image 31 relates to the patient's diagnostic information, the protection of personal information must be considered whenever the tissue image 31 is transmitted to the outside of the medical institution H2.
- With this configuration, there is no need to transmit the tissue image 31 to the outside of the medical institution H2.
- With this configuration, the communication load can also be reduced compared to transmitting the tissue image 31 itself, since only the calculated feature numbers are sent.
- The control blocks of the classification device 1 (particularly the control unit 2) may be implemented by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be implemented by software.
- In the latter case, the classification device 1 includes a computer that executes the instructions of a program, which is software implementing each function.
- This computer includes, for example, one or more processors, and a computer-readable recording medium storing the program.
- The processor reads the program from the recording medium and executes it, thereby achieving the object of the present disclosure.
- As the processor, for example, a CPU (Central Processing Unit) can be used.
- As the recording medium, a "non-transitory tangible medium" such as a ROM (Read Only Memory), a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
- The computer may further include a RAM (Random Access Memory) into which the program is loaded.
- The program may be supplied to the computer via any transmission medium (a communication network, a broadcast wave, or the like) capable of transmitting the program.
- One aspect of the present disclosure can also be implemented in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Molecular Biology (AREA)
- Hematology (AREA)
- Biomedical Technology (AREA)
- Urology & Nephrology (AREA)
- Geometry (AREA)
- Food Science & Technology (AREA)
- Medicinal Chemistry (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Image Analysis (AREA)
Abstract
Description
Hereinafter, an embodiment of the present disclosure will be described in detail.
First, the technical concept of the image analysis method according to one aspect of the present disclosure is described below.
Here, an example of classifying emphysema and precancerous lesions of lung adenocarcinoma will be described with reference to FIGS. 1 to 5. In this example, the state of the lung tissue is classified into the following five categories:
- (1) normal
- (2) emphysema
- (3) atypical adenomatous hyperplasia (AAH)
- (4) lepidic pattern of adenocarcinoma (LP)
- (5) invasive adenocarcinoma (AC)
Next, the mathematical expressions applied to analyze a tissue image in the image analysis method according to one aspect of the present disclosure will be described.
The zero-dimensional Betti number b0 is defined mathematically as follows. In general, the number of connected components of a figure K formed by joining a finite number of line segments (also called a one-dimensional complex) is called its zero-dimensional Betti number. Saying that "a figure formed by connecting a finite number of points with a finite number of line segments is connected" means that any vertex of the figure can be reached from any other vertex by following the edges of the figure.
The one-dimensional Betti number b1 is defined mathematically as follows. The one-dimensional Betti number b1 of a figure K is r when the following conditions (1) and (2) are satisfied: (1) for a figure K formed by joining a finite number of line segments (a connected one-dimensional complex), removing some suitable set of r open one-dimensional simplexes (for example, line segments excluding both endpoints) from K does not increase the number of connected components of K; and (2) removing any (r+1) open one-dimensional simplexes from K disconnects K (that is, increases the number of connected components of K by one).
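On a binarized image, b0 reduces to counting connected foreground regions and b1 to counting enclosed background regions. The following is a minimal sketch of that computation, assuming SciPy's connected-component labeling; the disclosure does not prescribe any particular implementation, and the connectivity convention (4- versus 8-neighborhood) is a modeling choice made here for illustration.

```python
import numpy as np
from scipy import ndimage

def betti_numbers(binary: np.ndarray) -> tuple[int, int]:
    """Return (b0, b1) of a 2D binary image.

    b0 = number of connected foreground regions;
    b1 = number of hole-shaped background regions enclosed by foreground.
    """
    # b0: count connected components of the foreground (first pixel value).
    _, b0 = ndimage.label(binary)

    # b1: label the background; background components that do not touch
    # the image border are holes enclosed by the foreground.
    bg_labels, n_bg = ndimage.label(~binary)
    border = np.unique(np.concatenate([
        bg_labels[0, :], bg_labels[-1, :], bg_labels[:, 0], bg_labels[:, -1],
    ]))
    b1 = n_bg - len(border[border != 0])
    return b0, b1

# The figure M1 of FIG. 6: one black ring enclosing one white hole.
m1 = np.zeros((7, 7), dtype=bool)
m1[1:6, 1:6] = True
m1[3, 3] = False
print(betti_numbers(m1))  # -> (1, 1)
```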
Here, the zero-dimensional Betti number b0 and the one-dimensional Betti number b1 of a binarized image will be explained using the exemplary figure shown in FIG. 6. FIG. 6 is a schematic diagram for explaining Betti numbers in the concept of homology. In the figure M1 shown in FIG. 6, the number of black regions is one; therefore, the zero-dimensional Betti number b0 of the figure M1 is 1. In the figure M1, the number of white regions surrounded by the black region is also one; therefore, the one-dimensional Betti number b1 of the figure M1 is 1.
Next, the configuration of the classification system 100 will be described with reference to FIG. 7. FIG. 7 is a block diagram illustrating an example of the configuration of the classification system 100 including the classification device 1 that executes the image analysis method according to one aspect of the present disclosure.
The classification device 1 includes a control unit 2 and a storage unit 3. The storage unit 3 may store a tissue image 31 and a classification model 33. The classification model 33 will be described later.
The control unit 2 may be configured by a control device such as a CPU (central processing unit) or a dedicated processor. Each unit of the control unit 2 shown in FIG. 7 can be realized by a control device such as a CPU reading a program stored in the storage unit 3, realized by a ROM (read only memory) or the like, into a RAM (random access memory) or the like and executing it.
The image acquisition unit 21 acquires, from the external device 4, a tissue image 31 obtained by imaging tissue. When the tissue to be analyzed is lung tissue, the tissue image 31 may be an image of a piece of lung tissue collected from the patient's body, captured at a predetermined magnification. The image acquisition unit 21 may acquire, from the external device 4, a partial image corresponding to a region extracted from the tissue image 31. The image acquisition unit 21 may store the acquired tissue image 31 in the storage unit 3.
The monochromatic component image generation unit 22 generates, from the tissue image 31, a plurality of monochromatic component images based on pixel values corresponding to each of the plurality of color components constituting the tissue image 31.
The binarization unit 23 performs binarization processing on each of the monochromatic component images and generates a plurality of binarized images with different binarization reference values.
For each of the plurality of binarized images, the feature number calculation unit 24 calculates the one-dimensional Betti number b1, which indicates the number of hole-shaped regions consisting of pixels of the other post-binarization pixel value (hereinafter, the second pixel value) that are surrounded by pixels of one post-binarization pixel value (hereinafter, the first pixel value). The feature number calculation unit 24 also calculates the zero-dimensional Betti number b0, which indicates the number of connected regions formed by connected pixels of the first pixel value, and the ratio R between the one-dimensional Betti number b1 and the zero-dimensional Betti number b0. In the following, the case where the ratio R is b1/b0 is taken as an example, but the ratio R may instead be b0/b1.
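Taken together, units 22 to 24 map one tissue image to a fixed-length feature vector. The following rough sketch chains them, reusing betti_numbers from the sketch above; the specific threshold values, the thresholding direction, and the assumption of an RGB input are illustrative choices, not requirements of the disclosure.

```python
import numpy as np

def feature_vector(tissue_image: np.ndarray,
                   thresholds=(64, 96, 128, 160, 192)) -> list[float]:
    """Compute (b1, b0, R) for every color component and reference value.

    tissue_image: H x W x 3 uint8 array (e.g., an H&E-stained image).
    The threshold values are illustrative; the disclosure only requires
    several different binarization reference values.
    """
    features: list[float] = []
    for c in range(tissue_image.shape[2]):   # unit 22: monochromatic components
        component = tissue_image[:, :, c]
        for t in thresholds:                 # unit 23: multiple binarizations
            binary = component >= t
            b0, b1 = betti_numbers(binary)   # unit 24: feature numbers
            ratio = b1 / b0 if b0 else 0.0   # R = b1/b0 (b0/b1 is also allowed)
            features.extend([float(b1), float(b0), ratio])
    return features
```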
The classification unit 25 inputs input data including the combination of the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R into the classification model described later, and outputs a classification result concerning the change occurring in the tissue shown in the tissue image 31. Here, the change occurring in the tissue may be the degree of cell differentiation calculated based on the structural characteristics, arrangement, and invasion pattern of the tumor cells occurring in the tissue. The classification model 33 models the correspondence between the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R, and the classification of changes occurring in the tissue. That is, the classification model 33 is generated by machine learning using combinations of the following (1) and (2) as training data (a minimal sketch of this training step follows the list):
- (1) Training tissue images 32 obtained by imaging tissue, to which classification information classifying the change occurring in the tissue shown in each training tissue image 32 has been assigned in advance.
- (2) The one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R calculated for each of the plurality of binarized images, generated with different binarization reference values, from each training tissue image 32.
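As claim 4 below notes, any of several standard learners can serve as the classification model candidate. The sketch below uses scikit-learn's random forest as one of those listed options and reuses feature_vector from the sketch above; training_images and labels are synthetic stand-ins for the training tissue images 32 and their pre-assigned classification information, not real data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins for the training tissue images 32 and their
# classification information; real data would come from stained slides
# annotated in advance by medical professionals.
rng = np.random.default_rng(0)
training_images = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
                   for _ in range(25)]
labels = ["normal", "emphysema", "AAH", "LP", "AC"] * 5

# One feature vector per training tissue image 32.
X = [feature_vector(img) for img in training_images]

# Claim 4 permits several learners; a random forest is one of the
# listed options, chosen here purely for illustration.
model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, labels, cv=5).mean())  # rough accuracy estimate
model.fit(X, labels)
```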
The output control unit 26 causes the presentation device 5 to present information indicating the classification result output from the classification unit 25. The output control unit 26 may also be configured to cause the presentation device 5 to present the analyzed tissue image 31 together with the information indicating the classification result.
Next, the processing executed by the classification device 1 will be described with reference to FIG. 8. FIG. 8 is a flowchart showing an example of the flow of the processing performed by the classification device 1.
Training tissue images 32 may be used to generate the classification model 33. FIG. 9 is a diagram showing an example of the data structure of the training tissue images 32.
Next, the configuration of the classification device 1 while it executes the learning algorithm for generating the classification model 33 will be described with reference to FIG. 10. FIG. 10 is a functional block diagram showing an example of the main configuration of the classification device 1 that generates the classification model 33. For convenience of explanation, members having the same functions as the members described with reference to FIG. 7 are given the same reference numerals, and their description is not repeated.
The classification model generation unit 27 executes a machine learning algorithm using the training tissue images 32 on a classification model candidate and generates the (trained) classification model 33. The trained classification model 33 is stored in the storage unit 3.
Next, the process of generating the classification model 33 will be described with reference to FIG. 11. FIG. 11 is a flowchart showing an example of the flow of the processing performed by the classification device 1 to generate the classification model 33.
The results of applying the image analysis method according to one aspect of the present disclosure to lung tissue images 31 and evaluating the classification accuracy in classifying emphysema and precancerous lesions of lung adenocarcinoma based on each tissue image 31 will be described with reference to FIG. 12. FIG. 12 is a diagram showing the classification accuracy of the image analysis method according to one aspect of the present disclosure.
Other embodiments of the present disclosure are described below. For convenience of explanation, members having the same functions as the members described in the above embodiment are given the same reference numerals, and their description is not repeated.
The classification device 1B may distribute, to a computer deployed in the medical institution H2 (for example, the presentation device 5), a computer program (hereinafter, an image analysis application) for causing the computer to function as the image analysis device 1A. A computer in which the image analysis application is installed can function as the image analysis device 1A. In this case, for example, the classification device 1B may transmit a notification requesting payment to the computer that has received and installed the image analysis application. The administrator who manages the classification device 1B can thereby receive a predetermined fee from the medical institution H2 as compensation for the service of providing the image analysis application. The notification requesting payment may instead be sent to a credit card company or the like with which the user of the computer that received and installed the image analysis application has a contract.
The control blocks of the classification device 1 (particularly the control unit 2) may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software.
1A image analysis device
4 external device
5 presentation device
22 monochromatic component image generation unit
23 binarization unit
24 feature number calculation unit
25 classification unit
S2 monochromatic component image generation step
S3 binarization step
S4 feature number calculation step
S5, S6 classification steps
Claims (17)
- 1. An image analysis method comprising:
a monochromatic component image generation step of generating, from a tissue image in which the cell nuclei of cells contained in biological tissue and constituent elements different from the cell nuclei appear in different hues, a plurality of monochromatic component images based on pixel values corresponding to each of a plurality of color components constituting the tissue image;
a binarization step of generating, from each of the plurality of monochromatic component images, a plurality of binarized images with different binarization reference values;
a feature number calculation step of calculating, for each of the plurality of binarized images generated from each of the plurality of monochromatic component images, a first feature number indicating the number of hole-shaped regions consisting of pixels of a second pixel value, after binarization into a first pixel value and the second pixel value, that are surrounded by pixels of the first pixel value, a second feature number indicating the number of connected regions formed by connected pixels of the first pixel value, and a third feature number that is a ratio between the first feature number and the second feature number; and
a classification step of inputting input data including the combinations of the first feature number, the second feature number, and the third feature number calculated for each of the binarized images generated from each of the plurality of monochromatic component images into a classification model that models the correspondence between the first feature number, the second feature number, and the third feature number and a classification concerning a change occurring in the tissue, and outputting a classification result concerning the change occurring in the tissue shown in the tissue image.
- 2. The image analysis method according to claim 1, wherein the tissue image is an image of the tissue stained with a component capable of staining the cell nuclei in a first color and a component capable of staining the cytoplasm in a second color different from the first color.
- 3. The image analysis method according to claim 1 or 2, wherein the classification model is generated by machine learning using, as training data, (1) training tissue images in which the cell nuclei of cells contained in tissue and constituent elements different from the cell nuclei appear in different hues, to which classification information indicating a classification concerning the change occurring in the tissue shown in each training tissue image has been assigned in advance by medical professionals, and (2) data including the combinations of the first feature number, the second feature number, and the third feature number calculated from the training tissue images in the feature number calculation step.
- 4. The image analysis method according to claim 3, wherein the classification model is generated using a perceptron, logistic regression, the k-nearest neighbor method, a support vector machine, a decision tree, a random forest, or gradient boosting.
- 5. The image analysis method according to any one of claims 1 to 4, wherein the tissue image is an image of the tissue collected from a subject's body.
- 6. The image analysis method according to claim 5, wherein the tissue is lung tissue, and, in the classification step, when a change has occurred in the lung tissue, the change is classified as one of emphysema, atypical adenomatous hyperplasia, lepidic pattern of adenocarcinoma, and invasive adenocarcinoma.
- 7. An image analysis device comprising:
a monochromatic component image generation unit that generates, from a tissue image in which the cell nuclei of cells contained in biological tissue and constituent elements different from the cell nuclei appear in different hues, a plurality of monochromatic component images based on pixel values corresponding to each of a plurality of color components constituting the tissue image;
a binarization unit that generates, from each of the plurality of monochromatic component images, a plurality of binarized images with different binarization reference values; and
a feature number calculation unit that calculates, for each of the plurality of binarized images generated from each of the plurality of monochromatic component images, a first feature number indicating the number of hole-shaped regions consisting of pixels of a second pixel value, after binarization into a first pixel value and the second pixel value, that are surrounded by pixels of the first pixel value, a second feature number indicating the number of connected regions formed by connected pixels of the first pixel value, and a third feature number that is a ratio between the first feature number and the second feature number.
- 8. The image analysis device according to claim 7, wherein the image analysis device distributes a computer program for calculating the first feature number, the second feature number, and the third feature number from the tissue image.
- 9. The image analysis device according to claim 8, wherein the image analysis device transmits, to a computer in which the computer program has been installed, a notification requesting payment as compensation for providing the computer program.
- 10. A classification device comprising a classification unit that acquires the first feature number, the second feature number, and the third feature number from the image analysis device according to claim 7, inputs input data including the first feature number, the second feature number, and the third feature number calculated for each of the binarized images generated from each of the plurality of monochromatic component images into a classification model that models the correspondence between the first feature number, the second feature number, and the third feature number and a classification concerning the change occurring in the tissue, and outputs a classification result concerning the change occurring in the tissue shown in the tissue image.
- 11. A classification device comprising:
a monochromatic component image generation unit that generates, from a tissue image in which the cell nuclei of cells contained in biological tissue and constituent elements different from the cell nuclei appear in different hues, a plurality of monochromatic component images based on pixel values corresponding to each of a plurality of color components constituting the tissue image;
a binarization unit that generates, from each of the plurality of monochromatic component images, a plurality of binarized images with different binarization reference values;
a feature number calculation unit that calculates, for each of the plurality of binarized images generated from each of the plurality of monochromatic component images, a first feature number indicating the number of hole-shaped regions consisting of pixels of a second pixel value, after binarization into a first pixel value and the second pixel value, that are surrounded by pixels of the first pixel value, a second feature number indicating the number of connected regions formed by connected pixels of the first pixel value, and a third feature number that is a ratio between the first feature number and the second feature number; and
a classification unit that inputs input data including the first feature number, the second feature number, and the third feature number calculated for each of the binarized images generated from each of the plurality of monochromatic component images into a classification model that models the correspondence between the first feature number, the second feature number, and the third feature number and a classification concerning the change occurring in the tissue, and outputs a classification result concerning the change occurring in the tissue shown in the tissue image.
- 12. The classification device according to claim 10 or 11, wherein the classification device distributes a computer program for calculating the first feature number, the second feature number, and the third feature number from the tissue image.
- 13. The classification device according to claim 12, wherein the classification device transmits, to a computer in which the computer program has been installed, a notification requesting payment as compensation for providing the computer program.
- 14. A classification system comprising:
the image analysis device according to claim 7;
the classification device according to claim 10;
an external device that transmits the tissue image to the image analysis device; and
a presentation device that acquires the classification result output from the classification device and presents the classification result.
- 15. A control program for causing a computer to function as the image analysis device according to any one of claims 7 to 9, the control program causing the computer to function as the monochromatic component image generation unit, the binarization unit, and the feature number calculation unit.
- 16. A control program for causing a computer to function as the classification device according to claim 10, the control program causing the computer to function as the classification unit.
- 17. A computer-readable recording medium on which the control program according to claim 15 or 16 is recorded.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023500734A JPWO2022176665A1 (ja) | 2021-02-18 | 2022-02-07 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021024606 | 2021-02-18 | ||
JP2021-024606 | 2021-02-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022176665A1 true WO2022176665A1 (ja) | 2022-08-25 |
Family
ID=82930466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/004597 WO2022176665A1 (ja) | 2021-02-18 | 2022-02-07 | 画像解析方法、画像解析装置、分類装置、分類システム、制御プログラム、記録媒体 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2022176665A1 (ja) |
WO (1) | WO2022176665A1 (ja) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08304284A (ja) * | 1995-05-09 | 1996-11-22 | Suzuki Motor Corp | 抗核抗体反応判定装置 |
WO2010087112A1 (ja) * | 2009-01-27 | 2010-08-05 | 国立大学法人大阪大学 | 画像解析装置、画像解析方法、画像解析プログラムおよび記録媒体 |
WO2017010397A1 (ja) * | 2015-07-15 | 2017-01-19 | 国立大学法人大阪大学 | 画像解析装置、画像解析方法、画像解析システム、画像解析プログラム、および記録媒体 |
JP2017085966A (ja) * | 2015-11-10 | 2017-05-25 | 株式会社Screenホールディングス | 分類器構成方法および細胞の生死判定方法 |
WO2020195258A1 (ja) * | 2019-03-26 | 2020-10-01 | 国立大学法人大阪大学 | 画像解析方法、画像解析プログラム、記録媒体、画像解析装置、画像解析システム |
- 2022
- 2022-02-07 JP JP2023500734A patent/JPWO2022176665A1/ja active Pending
- 2022-02-07 WO PCT/JP2022/004597 patent/WO2022176665A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JPWO2022176665A1 (ja) | 2022-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Joseph et al. | Improved multi-classification of breast cancer histopathological images using handcrafted features and deep neural network (dense layer) | |
Laibacher et al. | M2u-net: Effective and efficient retinal vessel segmentation for real-world applications | |
Araújo et al. | Classification of breast cancer histology images using convolutional neural networks | |
CN110033456B (zh) | 一种医疗影像的处理方法、装置、设备和系统 | |
JP2022119882A (ja) | 腫瘍を識別するための畳み込みニューラルネットワークを用いた組織像の処理 | |
Laibacher et al. | M2U-Net: Effective and efficient retinal vessel segmentation for resource-constrained environments | |
Cheng et al. | Skin lesion classification using relative color features | |
Abbas et al. | Computer‐aided pattern classification system for dermoscopy images | |
De Guzman et al. | Design and evaluation of a multi-model, multi-level artificial neural network for eczema skin lesion detection | |
Salian et al. | Skin lesion classification using deep learning architectures | |
US10956795B2 (en) | Predicting recurrence in early stage non-small cell lung cancer (NSCLC) using spatial arrangement of clusters of tumor infiltrating lymphocytes and cancer nuclei | |
Møllersen et al. | Unsupervised segmentation for digital dermoscopic images | |
Kolla et al. | CNN‐Based Brain Tumor Detection Model Using Local Binary Pattern and Multilayered SVM Classifier | |
Bohaju | Brain tumor | |
Iqbal et al. | A heteromorphous deep CNN framework for medical image segmentation using local binary pattern | |
Hameed et al. | Immunohistochemical analysis of oral cancer tissue images using support vector machine | |
Maree et al. | Constructing a hybrid activation and parameter-fusion based CNN medical image classifier | |
Narhari et al. | Automated diagnosis of diabetic retinopathy enabled by optimized thresholding-based blood vessel segmentation and hybrid classifier | |
WO2022176665A1 (ja) | 画像解析方法、画像解析装置、分類装置、分類システム、制御プログラム、記録媒体 | |
Hiary et al. | Segmentation and localisation of whole slide images using unsupervised learning | |
US20230005148A1 (en) | Image analysis method, image analysis device, image analysis system, control program, and recording medium | |
Gandomkar et al. | Determining image processing features describing the appearance of challenging mitotic figures and miscounted nonmitotic objects | |
Turkeli et al. | A smart dermoscope design using artificial neural network | |
Razzaq et al. | Least complex oLSVN-based computer-aided healthcare system for brain tumor detection using MRI images | |
EP4372379A1 (en) | Pathology image analysis method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22755986 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 2023500734 Country of ref document: JP |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 17.10.2023) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 22755986 Country of ref document: EP Kind code of ref document: A1 |