WO2022004198A1 - Pathological diagnosis support device, operation method of pathological diagnosis support device, and operation program of pathological diagnosis support device - Google Patents
Pathological diagnosis support device, operation method of pathological diagnosis support device, and operation program of pathological diagnosis support device
- Publication number
- WO2022004198A1 (PCT/JP2021/019793; JP2021019793W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- pathological diagnosis
- evaluation information
- fibrosis
- fibrotic
- Prior art date
Classifications
- G01N33/4833 — Physical analysis of solid biological material, e.g. tissue samples, cell cultures
- G01N21/25 — Colour; spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
- G01N33/6893 — Immunological testing involving proteins, peptides or amino acids related to diseases not provided for elsewhere
- G01N2800/085 — Liver diseases, e.g. portal hypertension, fibrosis, cirrhosis, bilirubin
- G06T7/0012 — Biomedical image inspection
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T2200/24 — Image data processing involving graphical user interfaces [GUIs]
- G06T2207/10024 — Color image
- G06T2207/10056 — Microscopic image
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30024 — Cell structures in vitro; tissue sections in vitro
- G06T2207/30056 — Liver; hepatic
- G06T2207/30061 — Lung
- G06T2207/30101 — Blood vessel; artery; vein; vascular
- G16H30/40 — ICT specially adapted for processing medical images, e.g. editing
- G16H50/20 — ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/30 — ICT specially adapted for calculating health indices or individual health risk assessment
- G16H50/70 — ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the technology of the present disclosure relates to a pathological diagnosis support device, an operation method of the pathological diagnosis support device, and an operation program of the pathological diagnosis support device.
- a pathologist observes a specimen collected from a living tissue under a microscope and visually evaluates the degree of fibrosis.
- Such an evaluation method imposes a heavy burden on the pathologist.
- the evaluation may change depending on the skill level of the pathologist.
- In Non-Patent Document 1, a method has been proposed in which an image of a specimen (hereinafter referred to as a specimen image) is analyzed by a computer to detect a fibrotic region, and the area occupancy rate of the fibrotic region is derived and presented to a user such as a doctor.
- Non-Patent Document 1 uses a specimen image of a specimen in which the fibrotic region is stained with a dye such as Sirius red. The fibrotic region is then detected by comparing the pixel value of each RGB (Red, Green, Blue) color channel of the specimen image with a threshold value set in advance by the user.
- However, staining with a dye such as Sirius red does not dye the fibrotic region uniformly in the same color.
- The hue also changes depending on the dye concentration and the surrounding environment, such as temperature and humidity.
- Furthermore, the hue changes depending on the device used to capture the specimen image. Therefore, a method that detects the fibrotic region by simply comparing the pixel value of each RGB color channel of the specimen image with a threshold value, as in Non-Patent Document 1, has low detection accuracy for the fibrotic region.
- The pathological diagnosis support device of the present disclosure includes at least one processor. The processor acquires a specimen image of a specimen of living tissue stained with a dye, the specimen image having pixel values of a plurality of color channels; detects a fibrotic region of the living tissue by comparing the ratio of the pixel values of two color channels among the plurality of color channels with a preset threshold value; derives, based on the detected fibrotic region, evaluation information indicating the degree of fibrosis of the living tissue; and outputs the evaluation information.
- the processor calculates the ratio of the pixel values of the two color channels corresponding to the two color regions having a relatively large difference in absorption rate in the absorption spectrum of the dye.
- the processor derives the evaluation information separately for the perivascular region including the perimeter of the blood vessel passing through the living tissue and the non-perivascular region other than the perivascular region.
- Preferably, the specimen is taken from the liver, and the processor derives the evaluation information separately for the central vein peripheral region, which includes the perimeter of the central vein passing through the liver, and the non-perivascular region.
- the processor preferably derives numerical values for at least one of the area and length of the fibrotic region as evaluation information.
- the processor performs compression processing on the acquired sample image and then detects the fibrotic region.
- The operation method of the pathological diagnosis support device of the present disclosure causes a processor to execute an acquisition process for acquiring a specimen image of a specimen of living tissue stained with a dye, the specimen image having pixel values of a plurality of color channels; a detection process for detecting a fibrotic region of the living tissue by comparing the ratio of the pixel values of two color channels among the plurality of color channels with a preset threshold value; a derivation process for deriving, based on the detected fibrotic region, evaluation information indicating the degree of fibrosis; and an output process for outputting the evaluation information.
- The operation program of the pathological diagnosis support device of the present disclosure causes a processor to execute an acquisition process for acquiring a specimen image of a specimen of living tissue stained with a dye, the specimen image having pixel values of a plurality of color channels; a detection process for detecting a fibrotic region of the living tissue by comparing the ratio of the pixel values of two color channels among the plurality of color channels with a preset threshold value; a derivation process for deriving, based on the detected fibrotic region, evaluation information indicating the degree of fibrosis; and an output process for outputting the evaluation information.
- According to the technique of the present disclosure, it is possible to provide a pathological diagnosis support device, an operation method of the pathological diagnosis support device, and an operation program of the pathological diagnosis support device capable of improving the detection accuracy of the fibrotic region.
- FIGS. 29A and 29B are diagrams showing other examples of the evaluation information of the third embodiment: FIG. 29A shows evaluation information including the area occupancy rate and the length ratio of the fibrotic region in the portal vein peripheral region, and FIG. 29B shows evaluation information including the number of PP bridgings and the number of PC bridgings.
- the pathological diagnosis is performed, for example, through the following steps.
- a biopsy needle 10 is pierced into the liver LV from outside the body of patient P, and an elongated sample 11 is collected from the liver LV.
- the sample 11 is embedded in paraffin, it is sliced into a plurality of sections 12 by a microtome, and the sliced sections 12 are attached to the glass 13.
- the paraffin is removed, the section 12 is stained with a dye, and the stained section 12 is covered with a cover glass 14 to complete the specimen 20.
- Sirius Red is used as the dye.
- the liver LV is an example of "living tissue" according to the technique of the present disclosure.
- Each specimen 20 is set in an imaging device (not shown) such as a digital optical microscope and photographed.
- the sample image 21 of each sample 20 thus obtained is input to the pathological diagnosis support device 25 as the sample image group 21G.
- a patient ID (Identification Data) for uniquely identifying the patient P, a shooting date and time, and the like are attached to the sample image 21.
- the sample image 21 is a full-color image.
- the pixel value is, for example, a numerical value in the range of 0 to 255.
- the sample image 21 shows the sample 20 including the section 12.
- Section 12 has a vascular region 32 and a parenchymal region 33.
- the vascular region 32 is a region of blood vessels passing through the liver LV, such as the portal vein and the central vein.
- the vascular region 32 is hollow in section 12.
- the parenchymal region 33 is a region in which hepatocytes of liver LV are present.
- fibrotic regions 35 are present in places of the perivascular region 36, which is the region surrounding the blood vessel region 32, and the parenchymal region 33.
- In addition, a fibrotic region 35 connecting two blood vessels, that is, bridging fibrosis, is also seen, as indicated by the frame 37 of the two-dot chain line.
- Bridging fibrosis is called PP bridging when both of the two blood vessels are portal veins, and PC bridging when the two blood vessels are a portal vein and a central vein.
- the pathological diagnosis support device 25 is, for example, a desktop personal computer, and has a display 40 and an input device 41.
- the input device 41 is a keyboard, a mouse, a touch panel, or the like.
- the pathological diagnosis support device 25 analyzes the sample image 21 and detects the fibrotic region 35. Then, the evaluation information 71 (see FIG. 3 and the like) such as the area occupancy of the fibrosis region 35 is derived, and the evaluation information 71 is displayed on the display 40.
- The computer constituting the pathological diagnosis support device 25 includes a storage device 45, a memory 46, a CPU (Central Processing Unit) 47, and a communication unit 48 in addition to the display 40 and the input device 41 described above. These are interconnected via a bus line 49.
- the storage device 45 is a hard disk drive built in the computer constituting the pathological diagnosis support device 25, or connected via a cable or a network. Alternatively, the storage device 45 is a disk array in which a plurality of hard disk drives are connected. The storage device 45 stores control programs such as an operating system, various application programs, and various data associated with these programs. A solid state drive may be used instead of the hard disk drive.
- the memory 46 is a work memory for the CPU 47 to execute a process.
- the CPU 47 comprehensively controls each part of the computer by loading the program stored in the storage device 45 into the memory 46 and executing the processing according to the program.
- the communication unit 48 is a network interface that controls transmission of various information via a network such as a LAN (Local Area Network).
- the display 40 displays various screens.
- the computer constituting the pathological diagnosis support device 25 receives input of an operation instruction from the input device 41 through various screens.
- the operation program 55 is stored in the storage device 45 of the pathological diagnosis support device 25.
- the operation program 55 is an application program for making the computer function as the pathological diagnosis support device 25. That is, the operation program 55 is an example of the "operation program of the pathological diagnosis support device" according to the technique of the present disclosure.
- the storage device 45 also stores the sample image group 21G, the threshold value TH, and the like.
- The CPU 47 of the computer constituting the pathological diagnosis support device 25 cooperates with the memory 46 and the like to function as a read/write (hereinafter abbreviated as RW (Read Write)) control unit 60, an instruction receiving unit 61, a detection unit 62, a derivation unit 63, and a display control unit 64.
- the CPU 47 is an example of a "processor" according to the technique of the present disclosure.
- the RW control unit 60 controls the storage of various data in the storage device 45 and the reading of various data in the storage device 45.
- The RW control unit 60 receives the sample image group 21G transmitted from the photographing device and stores it in the storage device 45. Further, the RW control unit 60 reads out from the storage device 45 the sample image group 21G for which the user's image analysis instruction via the input device 41 has been received by the instruction receiving unit 61, and outputs it to the detection unit 62, the derivation unit 63, and the display control unit 64. That is, the RW control unit 60 is responsible for the "acquisition process" according to the technique of the present disclosure. The user gives an instruction for image analysis via the input device 41, for example, by inputting and designating the patient ID and the shooting date and time attached to the sample image 21.
- the RW control unit 60 reads the threshold value TH from the storage device 45 and outputs it to the detection unit 62.
- the instruction receiving unit 61 receives various instructions by the user via the input device 41.
- the various instructions include the aforementioned image analysis instructions.
- the detection unit 62 detects the fibrosis region 35 from each of the plurality of sample images 21 constituting the sample image group 21G based on the threshold value TH. That is, the detection unit 62 is responsible for the "detection process" according to the technique of the present disclosure.
- the detection unit 62 outputs the detection result 70 of the fibrosis region 35 to the derivation unit 63 and the display control unit 64.
- The derivation unit 63 derives evaluation information 71 indicating the degree of fibrosis of the liver LV based on the detection result 70. That is, the derivation unit 63 is responsible for the "derivation process" according to the technique of the present disclosure.
- the derivation unit 63 outputs the evaluation information 71 to the display control unit 64.
- the display control unit 64 controls to display various screens on the display 40.
- the various screens include an analysis result display screen 90 (see FIG. 12) showing the result of image analysis.
- the display control unit 64 generates an analysis result display screen 90 based on the sample image group 21G, the detection result 70, and the evaluation information 71.
- the detection unit 62 calculates the ratio PV_R / PV_G of the pixel value PV_G of the G channel corresponding to the G region and the pixel value PV_R of the R channel corresponding to the R region for each pixel 30 of the sample image 21.
- the detection unit 62 logarithmically converts the ratio PV_R / PV_G into a log (PV_R / PV_G) in order to make the numerical value easy to handle by a computer (see FIG. 5). It should be noted that the ratio PV_G / PV_R may be calculated by reversing the denominator and the numerator instead of the ratio PV_R / PV_G.
- the detection unit 62 compares the magnitude of the log (PV_R / PV_G) with the threshold value TH.
- the detection unit 62 detects the pixel 30 whose log (PV_R / PV_G) is equal to or higher than the threshold value TH as the fibrosis region 35.
- the detection unit 62 detects the pixel 30 whose log (PV_R / PV_G) is less than the threshold value TH as a non-fibrotic region.
- the threshold value TH is a value empirically obtained from past findings.
- the detection unit 62 outputs the binarized image 80 as the detection result 70.
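- The following is a minimal sketch of this detection step in Python with NumPy, assuming the specimen image 21 is an H×W×3 RGB array and that the threshold TH has been set empirically in advance; the function and variable names are illustrative, not part of the disclosure.

    import numpy as np

    def detect_fibrosis(sample_image: np.ndarray, TH: float) -> np.ndarray:
        """Return a boolean mask (True = fibrotic region 35) from an RGB specimen image.

        sample_image: uint8 array of shape (H, W, 3) holding the R, G, B channels.
        TH: threshold compared against log(PV_R / PV_G), obtained empirically.
        """
        rgb = sample_image.astype(np.float64) + 1e-6   # avoid division by zero
        pv_r = rgb[..., 0]                             # R-channel pixel values PV_R
        pv_g = rgb[..., 1]                             # G-channel pixel values PV_G
        log_ratio = np.log(pv_r / pv_g)                # log(PV_R / PV_G)
        return log_ratio >= TH                         # >= TH: fibrotic, < TH: non-fibrotic

    # The binarized image 80 can then be rendered by mapping True to white and False to black.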
- the derivation unit 63 derives a numerical value relating to at least one of the area and the length of the fibrosis region 35 as the evaluation information 71. More specifically, the evaluation information 71 includes the area occupancy of the fibrotic region 35 and the length ratio of the fibrotic region 35.
- the area occupancy of the fibrotic region 35 is a value obtained by dividing the area of the fibrotic region 35 by the area of the section 12.
- the length ratio of the fibrotic region 35 is a value obtained by dividing the length of the fibrotic region 35 by the area of the section 12.
- The derivation unit 63 detects a region (hereinafter referred to as a section region) 82 of each section 12 in the plurality of sample images 21 constituting the sample image group 21G (step ST10). For example, the derivation unit 63 determines that a portion where the difference between the pixel values of two adjacent pixels 30 is equal to or greater than a threshold value is the boundary between the section 12 and the glass 13 outside the section 12, and detects the inside of the boundary as the section region 82.
- The derivation unit 63 derives the area of the section 12 reflected in each sample image 21 by counting the number of pixels 30 existing in the section region 82 of each sample image 21 (step ST11). Further, the derivation unit 63 derives the area of the fibrotic region 35 by counting the number of pixels 30 existing in the fibrotic region 35 of each binarized image 80 corresponding to each sample image 21 (step ST12).
- The derivation unit 63 then derives the area occupancy of the fibrotic region 35 by dividing the number of pixels 30 existing in the fibrotic region 35 of each binarized image 80 counted in step ST12 by the number of pixels 30 existing in the section region 82 of each sample image 21 counted in step ST11 (step ST13).
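- In code, the area occupancy computation of steps ST11 to ST13 reduces to pixel counting. The sketch below assumes a boolean section mask (the section region 82) and a boolean fibrosis mask (from the binarized image 80), both of shape (H, W); these names are illustrative.

    import numpy as np

    def area_occupancy(fibrosis_mask: np.ndarray, section_mask: np.ndarray) -> float:
        """Area occupancy of the fibrotic region 35 = fibrotic pixels / section pixels."""
        fibrosis_pixels = np.count_nonzero(fibrosis_mask & section_mask)  # step ST12
        section_pixels = np.count_nonzero(section_mask)                   # step ST11
        return fibrosis_pixels / section_pixels                           # step ST13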
- FIGS. 10 and 11 show the procedure by which the derivation unit 63 derives the length ratio of the fibrotic region 35.
- Steps ST10 and ST11 are the same as in the procedure for deriving the area occupancy of the fibrotic region 35 shown in FIGS. 8 and 9.
- After deriving the area of the section 12 reflected in each sample image 21 in step ST11, the derivation unit 63 performs a thinning process on the fibrotic region 35 of each binarized image 80 corresponding to each sample image 21 (step ST20).
- The thinning process converts the fibrotic region 35 into a line 83 one pixel wide that passes through the center of the width of the fibrotic region 35.
- The derivation unit 63 derives the length of the fibrotic region 35 by counting the number of pixels 30 of the fibrotic region 35 after the thinning process, that is, the number of pixels 30 constituting the line 83 (step ST21). At this time, as shown in the frame 84A of the alternate long and short dash line, the derivation unit 63 may count every pixel 30 connected vertically, horizontally, or diagonally as 1. Alternatively, as shown in the frame 84B of the alternate long and short dash line, the derivation unit 63 may count each pixel 30 of the line 83 connected vertically or horizontally as 1 and each diagonally connected pixel 30 as √2. For the length of the fibrotic region 35, the method shown in the frame 84B is more accurate.
- The derivation unit 63 then derives the length ratio of the fibrotic region 35 by dividing the number of pixels 30 of the fibrotic region 35 after the thinning process counted in step ST21 by the number of pixels 30 existing in the section region 82 of each sample image 21 counted in step ST11 (step ST22).
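- A sketch of the length-ratio derivation using scikit-image for the thinning process; it uses the simple pixel count of frame 84A and notes in the comments that the frame 84B variant would additionally weight diagonally connected pixels by √2. The mask names follow the earlier sketches and are illustrative.

    import numpy as np
    from skimage.morphology import skeletonize

    def length_ratio(fibrosis_mask: np.ndarray, section_mask: np.ndarray) -> float:
        """Length ratio of the fibrotic region 35 after thinning (frame 84A style count).

        skeletonize() plays the role of the thinning process (step ST20), reducing the
        region to a one-pixel-wide line 83. Counting its pixels (step ST21) approximates
        the length; the more accurate frame 84B variant would weight diagonal steps by
        sqrt(2) instead of 1.
        """
        line = skeletonize(fibrosis_mask & section_mask)   # step ST20: line 83
        line_pixels = np.count_nonzero(line)               # step ST21
        section_pixels = np.count_nonzero(section_mask)    # step ST11
        return line_pixels / section_pixels                # step ST22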
- FIG. 12 shows an analysis result display screen 90 displayed on the display 40 under the control of the display control unit 64.
- a sample image 21 in which the fibrosis region 35 detected by the detection unit 62 is filled with blue is displayed.
- the display of the sample image 21 can be switched by operating the frame back button 91A and the frame advance button 91B.
- Evaluation information 71 is displayed at the bottom of the analysis result display screen 90. That is, the display control unit 64 is responsible for the "output processing" according to the technique of the present disclosure.
- a confirmation button 92 is provided next to the evaluation information 71.
- When the confirmation button 92 is selected, the display control unit 64 turns off the display of the analysis result display screen 90.
- the sample image 21 in which the fibrosis region 35 is color-coded and the evaluation information 71 can be stored in the storage device 45 according to the user's instruction.
- The CPU 47 of the pathological diagnosis support device 25 functions as the RW control unit 60, the instruction receiving unit 61, the detection unit 62, the derivation unit 63, and the display control unit 64.
- the RW control unit 60 reads out the sample image group 21G instructed by the user to analyze the image from the storage device 45 (step ST100).
- the sample image group 21G is output from the RW control unit 60 to the detection unit 62, the derivation unit 63, and the display control unit 64. Further, the RW control unit 60 reads out the threshold value TH from the storage device 45 and outputs the threshold value TH to the detection unit 62.
- step ST100 is an example of "acquisition processing" according to the technique of the present disclosure.
- the ratio PV_R / PV_G of the pixel value PV_G of the G channel and the pixel value PV_R of the R channel is calculated for each pixel 30 of the sample image 21 (step ST110).
- Next, the logarithm log(PV_R/PV_G) of the ratio PV_R/PV_G is compared with the threshold value TH.
- The pixel 30 whose log(PV_R/PV_G) is equal to or higher than the threshold value TH is detected as the fibrotic region 35, and the pixel 30 whose log(PV_R/PV_G) is less than the threshold value TH is detected as a non-fibrotic region (step ST120).
- the detection unit 62 generates the binarized image 80 shown in FIG. 6 as the detection result 70 of the fibrosis region 35.
- the detection result 70 is output from the detection unit 62 to the derivation unit 63 and the display control unit 64.
- step ST120 is an example of "detection processing" according to the technique of the present disclosure.
- the derivation unit 63 derives the area occupancy of the fibrosis region 35 and the length ratio of the fibrosis region 35 as evaluation information 71 (step ST130).
- the evaluation information 71 is output from the derivation unit 63 to the display control unit 64.
- step ST130 is an example of the "derivation process" according to the technique of the present disclosure.
- Under the control of the display control unit 64, the analysis result display screen 90 shown in FIG. 12 is displayed on the display 40 (step ST140). As a result, the fibrosis region 35 and the evaluation information 71 are presented to the user. The user grasps the degree of fibrosis of the liver LV of the patient P to be diagnosed by the analysis result display screen 90. Note that step ST140 is an example of "output processing" according to the technique of the present disclosure.
- the pathological diagnosis support device 25 includes a RW control unit 60, a detection unit 62, a derivation unit 63, and a display control unit 64.
- the RW control unit 60 acquires a sample image 21 showing a sample 20 of the liver LV stained with Sirius Red by reading it from the storage device 45.
- The detection unit 62 compares the ratio PV_R/PV_G of the pixel values PV_G and PV_R of the G channel and the R channel among the three RGB color channels of the sample image 21 with the preset threshold value TH, and thereby detects the fibrotic region 35 of the liver LV.
- the derivation unit 63 derives the evaluation information 71 indicating the degree of fibrosis of the liver LV based on the detected fibrosis region 35.
- the display control unit 64 outputs the evaluation information 71 by displaying the analysis result display screen 90 including the evaluation information 71 on the display 40.
- The ratio PV_R/PV_G is more robust than the pixel values PV_G and PV_R themselves against changes in staining hue caused by differences in the concentration of Sirius red, the surrounding environment, the imaging device of the sample image 21, and the like. Therefore, the detection accuracy of the fibrotic region 35 can be improved. As a result, a fibrotic region 35 that is actually bridging fibrosis, but was previously detected as interrupted because of low detection accuracy, can be reliably recognized as bridging fibrosis.
- the detection unit 62 calculates the ratio PV_R / PV_G of the pixel values PV_G and PV_R of the G channel and the R channel corresponding to the G region and the R region where the difference in absorption rate is relatively large in the absorption spectrum 75 of Sirius Red. Therefore, the dynamic range of the ratio can be widened, and the detection accuracy of the fibrosis region 35 can be further improved.
- The derivation unit 63 derives numerical values relating to at least one of the area and the length of the fibrotic region 35, specifically the area occupancy rate of the fibrotic region 35 and the length ratio of the fibrotic region 35. Therefore, the degree of fibrosis of the liver LV can be evaluated objectively. When the numerical value relating to the length of the fibrotic region 35 is relatively large, it can be estimated that fibrosis has progressed over a relatively wide range of the liver LV.
- the pigment is not limited to the exemplified Sirius Red.
- For example, iron hematoxylin, Ponceau-xylidine, acid fuchsin, and aniline blue, which are used for Masson's trichrome staining, may be used.
- In the dyes used for Masson's trichrome staining, as shown in the absorption spectrum 95 of FIG. 14, the absorption rate in the G region is relatively high, and the absorption rates in the B region and the R region are relatively low. Further, there is almost no difference between the absorption rates of the B region and the R region.
- In this case, the detection unit 62 calculates, for each pixel 30 of the sample image 21, the ratio PV_B/PV_G of the pixel value PV_B of the B channel corresponding to the B region to the pixel value PV_G of the G channel corresponding to the G region, and the ratio PV_R/PV_G of the pixel value PV_R of the R channel corresponding to the R region to the pixel value PV_G of the G channel corresponding to the G region.
- the detection unit 62 logarithmically converts the ratio PV_B / PV_G into a log (PV_B / PV_G), and logarithmically converts the ratio PV_R / PV_G into a log (PV_R / PV_G) (see FIG. 15).
- the ratio PV_G / PV_B and the ratio PV_G / PV_R may be calculated.
- the detection unit 62 compares the magnitude of the log (PV_B / PV_G) with the first threshold value TH1. Further, the detection unit 62 compares the magnitude of the log (PV_R / PV_G) with the second threshold value TH2. The detection unit 62 detects pixels 30 having a log (PV_B / PV_G) of the first threshold value TH1 or higher and a log (PV_R / PV_G) of the second threshold value TH2 or higher as the fibrosis region 35.
- the detection unit 62 detects the pixel 30 whose log (PV_B / PV_G) is less than the first threshold value TH1 or whose log (PV_R / PV_G) is less than the second threshold value TH2 as a non-fibrotic region.
- the first threshold value TH1 and the second threshold value TH2 are values empirically obtained from past findings, as in the case of the threshold value TH.
- the ratio of the pixel values of the two color channels calculated to detect the fibrosis region 35 is not limited to one type.
- Alternatively, a pixel 30 whose log(PV_B/PV_G) is less than the first threshold value TH1 and whose log(PV_R/PV_G) is less than the second threshold value TH2 may be detected as a non-fibrotic region, and a pixel 30 whose log(PV_B/PV_G) is equal to or higher than the first threshold value TH1 or whose log(PV_R/PV_G) is equal to or higher than the second threshold value TH2 may be detected as the fibrotic region 35.
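- A sketch of the two-ratio detection for Masson's trichrome staining, again in Python with NumPy; TH1 and TH2 are the empirical thresholds described above, and the strict AND condition is used (the looser OR variant described above is obtained by replacing '&' with '|'). Names are illustrative.

    import numpy as np

    def detect_fibrosis_trichrome(sample_image: np.ndarray, TH1: float, TH2: float) -> np.ndarray:
        """Detect the fibrotic region 35 using the two channel ratios PV_B/PV_G and PV_R/PV_G.

        A pixel is fibrotic when log(PV_B/PV_G) >= TH1 and log(PV_R/PV_G) >= TH2.
        """
        rgb = sample_image.astype(np.float64) + 1e-6
        pv_r, pv_g, pv_b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return (np.log(pv_b / pv_g) >= TH1) & (np.log(pv_r / pv_g) >= TH2)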
- the evaluation information 71 is not limited to the area occupancy rate of the fibrosis region 35 exemplified above and the length ratio of the fibrosis region 35.
- For example, the area occupancy of the fibrotic region 35 may be converted into one of five levels indicating the degree of fibrosis with reference to the table 97 shown in FIG. 16, and the level may be derived as evaluation information 101 (see FIG. 17).
- In the table 97, levels corresponding to ranges of the area occupancy of the fibrotic region 35 are registered. For example, level 0 is registered for an area occupancy of 0% or more and less than 5%, level 3 for 20% or more and less than 30%, and level 4 for 30% or more.
- the table 97 is stored in the storage device 45, is read out by the RW control unit 60, and is output to the derivation unit 63.
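- A sketch of the lookup implied by the table 97. Only the boundaries for levels 0, 3, and 4 are stated in the description, so the boundaries used below for levels 1 and 2 are placeholders for illustration only.

    def fibrosis_level(area_occupancy_percent: float) -> int:
        """Map the area occupancy (%) of the fibrotic region 35 to one of five levels (table 97)."""
        if area_occupancy_percent < 5:
            return 0   # 0% or more and less than 5%  -> level 0 (from the description)
        if area_occupancy_percent < 10:
            return 1   # assumed boundary, not given in the description
        if area_occupancy_percent < 20:
            return 2   # assumed boundary, not given in the description
        if area_occupancy_percent < 30:
            return 3   # 20% or more and less than 30% -> level 3 (from the description)
        return 4       # 30% or more                   -> level 4 (from the description)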
- FIG. 17 shows an analysis result display screen 100 in which a level indicating the degree of fibrosis is displayed as evaluation information 101.
- the stage of a known fibrosis evaluation index such as the new Inuyama classification and the METAVIR score may be derived and output as the evaluation information 106.
- Each stage of the new Inuyama classification and the METAVIR score is derived from an estimation formula that solves the stage, for example, using the area occupancy rate of the fibrosis region 35 and the length ratio of the fibrosis region 35 as parameters.
- Alternatively, the NAS (NAFLD Activity Score: the activity score for non-alcoholic fatty liver disease) or the like may be derived and output as evaluation information.
- the evaluation information 113 is derived separately for the perivascular region 36 and the non-perivascular region 115 other than the perivascular region 36.
- The CPU of the pathological diagnosis support device of the second embodiment functions as an extraction unit 110 in addition to the respective units 60 to 64 of the first embodiment (in FIG. 19, the derivation unit 63 and the display control unit 64 are not shown).
- the extraction unit 110 extracts the perivascular region 36 and the non-perivascular region 115 from each specimen image 21 constituting the specimen image group 21G by using the machine learning model 111.
- The machine learning model 111 is a convolutional neural network such as U-Net (U-Shaped Neural Network), SegNet, ResNet (Residual Network), or DenseNet (Densely Connected Convolutional Network).
- the machine learning model 111 is stored in the storage device 45, is read out by the RW control unit 60, and is output to the extraction unit 110.
- the machine learning model 111 uses the sample image 21 as an input image, and as shown in FIG. 20, for example, a binarized image in which the pixel 30 in the perivascular region 36 is replaced with white and the pixel 30 in the non-perivascular region 115 is replaced with black. This is a model to be used as an output image.
- the machine learning model 111 extracts a region surrounding the blood vessel region 32, for example, a region having a width of about 20 pixels, as the blood vessel peripheral region 36. Further, the machine learning model 111 extracts the pixels 30 of the section 12 other than the pixels 30 extracted as the perivascular region 36 as the non-perivascular region 115. Therefore, the non-perivascular region 115 includes the vascular region 32 and most of the parenchymal region 33.
- the machine learning model 111 is learned by using a plurality of pairs of a sample image 21 for learning and a correct answer image in which the blood vessel peripheral region 36 of the sample image 21 for learning is designated by the user.
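- The disclosure does not fix a particular framework for training the machine learning model 111. As a loosely hedged sketch, a PyTorch-style training step on pairs of learning sample images 21 and user-annotated perivascular masks could look like the following; the small convolutional network is only a stand-in for a U-Net-type encoder-decoder, and all names are illustrative.

    import torch
    import torch.nn as nn

    # Stand-in for the U-Net-type machine learning model 111: any encoder-decoder
    # segmentation network producing one logit per pixel could be used instead.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(image_batch: torch.Tensor, mask_batch: torch.Tensor) -> float:
        """One training step on (N, 3, H, W) float RGB images and (N, 1, H, W) float masks,
        where 1 = perivascular region 36 (white) and 0 = non-perivascular region 115 (black)."""
        optimizer.zero_grad()
        logits = model(image_batch)          # per-pixel logits
        loss = loss_fn(logits, mask_batch)   # binary segmentation loss
        loss.backward()
        optimizer.step()
        return loss.item()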
- The extraction unit 110 outputs, as the extraction result 112, the binarized image in which the pixels 30 of the perivascular region 36 are replaced with white and the pixels 30 of the non-perivascular region 115 are replaced with black, to the derivation unit 63 and the display control unit 64.
- The derivation unit 63 derives, as evaluation information 113, the area occupancy rate of the fibrotic region 35 in the perivascular region 36 and the length ratio of the fibrotic region 35 in the perivascular region 36.
- the derivation unit 63 derives the area occupancy rate of the fibrotic region 35 in the non-vascular peripheral region 115 and the length ratio of the fibrotic region 35 in the non-vascular peripheral region 115 as the evaluation information 113.
- The area occupancy of the fibrotic region 35 in the perivascular region 36 is derived by counting, in step ST12 shown in FIGS. 8 and 9, the number of pixels 30 existing in the fibrotic region 35 within the perivascular region 36, and dividing it by the number of pixels 30 existing in the section region 82 of each sample image 21 counted in step ST11.
- Similarly, the length ratio of the fibrotic region 35 in the perivascular region 36 is derived by counting, in step ST21 shown in FIGS. 10 and 11, the number of pixels 30 of the fibrotic region 35 within the perivascular region 36 after the thinning process, and dividing it by the number of pixels 30 existing in the section region 82 of each sample image 21 counted in step ST11.
- The area occupancy of the fibrotic region 35 in the non-perivascular region 115 and the length ratio of the fibrotic region 35 in the non-perivascular region 115 are derived in the same manner.
- the display control unit 64 displays the analysis result display screen 120 shown in FIG. 21 on the display 40.
- On the analysis result display screen 120, the specimen image 21 is displayed with the fibrotic region 35 in the perivascular region 36 filled in, for example, green and the fibrotic region 35 in the non-perivascular region 115 filled in, for example, blue. Further, as the evaluation information 113, the area occupancy rate and the length ratio of the fibrotic region 35 in the perivascular region 36 and the area occupancy rate and the length ratio of the fibrotic region 35 in the non-perivascular region 115 are displayed.
- the evaluation information 71 is derived separately for the perivascular region 36 and the non-perivascular region 115. Therefore, it is possible to distinguish and evaluate the pathological conditions of the disease in which the progress of fibrosis is mainly observed in the perivascular region 36 and the disease in which the progress of fibrosis is mainly observed in the non-vascular region 115.
- the perivascular region 36 is apt to develop fibrosis due to the influence of the drug. Therefore, if the evaluation information 71 is derived separately for the perivascular region 36 and the non-perivascular region 115 as in the second embodiment, the degree of fibrosis excluding the influence of the drug can be evaluated.
- In the second embodiment, the fibrotic region 35 in the perivascular region 36 and the fibrotic region 35 in the non-perivascular region 115 are color-coded, and the area occupancy rate and length ratio of the fibrotic region 35 in the perivascular region 36 and the area occupancy rate and length ratio of the fibrotic region 35 in the non-perivascular region 115 are displayed as the evaluation information 113, but the display is not limited to this.
- For example, on the analysis result display screen 125 shown in FIG. 23, only the fibrotic region 35 in the perivascular region 36 may be filled in, for example, green (see also FIG. 24, which is an enlarged view of the frame 126), and only the area occupancy rate and the length ratio of the fibrotic region 35 in the perivascular region 36 may be displayed as the evaluation information 127.
- Conversely, only the fibrotic region 35 in the non-perivascular region 115 may be filled in, for example, blue, and only the area occupancy rate and the length ratio of the fibrotic region 35 in the non-perivascular region 115 may be displayed as evaluation information.
- the display mode shown in FIG. 21, the display mode shown in FIG. 23, and the display mode opposite to the case of FIG. 23 may be switchably configured.
- the perivascular region 36 may be extracted without using the machine learning model 111.
- a blood vessel region 32 is extracted using a well-known image recognition technique, and a region surrounding the extracted blood vessel region 32, for example, a region having a width of 20 pixels is extracted as a blood vessel peripheral region 36.
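- A loosely hedged sketch of this non-machine-learning alternative, using morphological dilation from SciPy to obtain a band of roughly 20 pixels around an already-extracted blood vessel region 32; the mask names and the restriction to the section region 82 are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import binary_dilation

    def perivascular_band(vessel_mask: np.ndarray, section_mask: np.ndarray,
                          width: int = 20) -> np.ndarray:
        """Perivascular region 36: a ring of `width` pixels surrounding the vessel region 32."""
        dilated = binary_dilation(vessel_mask, iterations=width)  # grow the vessel by `width` pixels
        band = dilated & ~vessel_mask                             # keep only the surrounding ring
        return band & section_mask                                # restrict to the section region 82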
- the user may be allowed to specify the perivascular region 36.
- the evaluation information 133 is derived separately for the central vein peripheral region 36C including the periphery of the central vein passing through the liver LV and the non-vascular peripheral region 115.
- The extraction unit 130 of the third embodiment uses the machine learning model 131 to extract the central vein peripheral region 36C, the portal vein peripheral region 36P, and the non-perivascular region 115 from each specimen image 21 constituting the specimen image group 21G.
- the machine learning model 131 is a convolutional neural network such as U-Net, similar to the machine learning model 111 of the second embodiment.
- the machine learning model 131 is stored in the storage device 45, is read out by the RW control unit 60, and is output to the extraction unit 130.
- The machine learning model 131 takes the sample image 21 as an input image and outputs, as shown in FIG. 26, for example, a binarized image in which the pixels 30 in the central vein peripheral region 36C are replaced with white and the pixels 30 in the non-perivascular region 115 are replaced with black.
- The machine learning model 131 extracts a region surrounding the central vein region 32C, for example a region having a width of about 20 pixels, as the central vein peripheral region 36C. Similarly, the machine learning model 131 extracts a region surrounding the portal vein region 32P, for example a region having a width of about 20 pixels, as the portal vein peripheral region 36P. Further, the machine learning model 131 extracts the pixels 30 of the section 12 other than the pixels 30 extracted as the central vein peripheral region 36C and the portal vein peripheral region 36P as the non-perivascular region 115. The machine learning model 131 is trained using a plurality of pairs of a sample image 21 for learning and a correct-answer image in which the central vein peripheral region 36C and the portal vein peripheral region 36P of the sample image 21 for learning are designated by the user.
- The extraction unit 130 outputs, as the extraction result 132, the binarized image in which the pixels 30 in the central vein peripheral region 36C are replaced with white and the pixels 30 in the non-perivascular region 115 are replaced with black, to the derivation unit 63 and the display control unit 64.
- Based on the extraction result 132 from the extraction unit 130 and the detection result 70 from the detection unit 62, the derivation unit 63 derives, as evaluation information 133, the area occupancy rate of the fibrotic region 35 in the central vein peripheral region 36C and the length ratio of the fibrotic region 35 in the central vein peripheral region 36C. Further, the derivation unit 63 derives the area occupancy rate of the fibrotic region 35 in the non-perivascular region 115 and the length ratio of the fibrotic region 35 in the non-perivascular region 115 as evaluation information 133.
- The area occupancy of the fibrotic region 35 in the central vein peripheral region 36C is derived by counting, in step ST12 shown in FIGS. 8 and 9, the number of pixels 30 existing in the fibrotic region 35 within the central vein peripheral region 36C, and dividing it by the number of pixels 30 existing in the section region 82 of each sample image 21 counted in step ST11.
- Similarly, the length ratio of the fibrotic region 35 in the central vein peripheral region 36C is derived by counting, in step ST21 shown in FIGS. 10 and 11, the number of pixels 30 of the fibrotic region 35 within the central vein peripheral region 36C after the thinning process, and dividing it by the number of pixels 30 existing in the section region 82 of each sample image 21 counted in step ST11.
- the display control unit 64 displays the analysis result display screen 140 shown in FIG. 27 on the display 40.
- On the analysis result display screen 140, the specimen image 21 is displayed with the fibrotic region 35 in the central vein peripheral region 36C filled in, for example, green and the fibrotic region 35 in the non-perivascular region 115 filled in, for example, blue. Further, as the evaluation information 133, the area occupancy rate and the length ratio of the fibrotic region 35 in the central vein peripheral region 36C and the area occupancy rate and the length ratio of the fibrotic region 35 in the non-perivascular region 115 are displayed.
- In the third embodiment, the evaluation information is derived separately for the central vein peripheral region 36C and the non-perivascular region 115. Therefore, it is possible to correctly evaluate the pathological condition of a disease such as non-alcoholic steatohepatitis (NASH), in which fibrosis progresses from the central vein peripheral region 36C toward the non-perivascular region 115.
- Also in the third embodiment, only the fibrotic region 35 in the central vein peripheral region 36C may be filled in, for example, green, and only the area occupancy rate and the length ratio of the fibrotic region 35 in the central vein peripheral region 36C may be displayed as evaluation information.
- This display mode and the display mode shown in FIG. 27 may be configured to be switchable. Further, the central vein peripheral region 36C may be extracted without using the machine learning model 131, by having the user specify the central vein peripheral region 36C.
- As in the evaluation information 142 shown in FIG. 29A, not only the area occupancy rate and length ratio of the fibrotic region 35 in the central vein peripheral region 36C but also the area occupancy rate and length ratio of the fibrotic region 35 in the portal vein peripheral region 36P may be derived.
- Further, the number of bridging fibroses between portal veins, that is, the number of PP bridgings, and the number of bridging fibroses between a portal vein and a central vein, that is, the number of PC bridgings, may be derived.
- The CPU of the pathological diagnosis support device of the fourth embodiment functions as a compression unit 150 in addition to the units 60 to 64 of the first embodiment (in FIG. 30, the units other than the detection unit 62 are not shown).
- the sample image group 21G read from the storage device 45 by the RW control unit 60 is input to the compression unit 150.
- the compression unit 150 performs a compression process on each of the plurality of sample images 21 constituting the sample image group 21G.
- the compression unit 150 outputs the sample image group 21G after the compression process to the detection unit 62.
- the detection unit 62 detects the fibrosis region 35 from the compressed sample image 21, and outputs the detection result 70 to the derivation unit 63 and the like.
- the sample image 21 acquired by the RW control unit 60 is compressed by the compression unit 150, and then the fibrosis region 35 is detected by the detection unit 62. Therefore, the processing load can be reduced as compared with the case where the fibrosis region 35 is detected from the full-size specimen image 21.
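- The disclosure does not specify a particular compression method; one simple sketch is to downscale the specimen image 21 with OpenCV before running the same detection. The scale factor below is purely illustrative, and detect_fibrosis refers to the earlier detection sketch.

    import cv2
    import numpy as np

    def detect_on_compressed(sample_image: np.ndarray, TH: float, scale: float = 0.25) -> np.ndarray:
        """Downscale the specimen image 21 (compression unit 150, sketch) and then detect.

        `scale` is an illustrative reduction factor; any compression that preserves the
        channel ratios well enough for thresholding could be used instead.
        """
        small = cv2.resize(sample_image, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)  # compression step (assumed: downscaling)
        return detect_fibrosis(small, TH)                 # detection unit 62 on the smaller image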
- the specimen 20 collected from the liver LV of the patient P has been exemplified, but the present invention is not limited to this.
- it may instead be a specimen 156 prepared from a sample 155 collected from the lung LG of the patient P using the biopsy needle 10. Further, it may be a specimen collected from the skin, breast, stomach, large intestine, or the like.
- the color channels of the pixel values of the sample image 21 are not limited to the illustrated RGB color channels. Four color channels of CMYG (Cyan, Magenta, Yellow, Green) may be used instead.
- the output form of the evaluation information is not limited to display on the analysis result display screen.
- the evaluation information may be printed out on a paper medium, or a data file of the evaluation information may be output by transmission via e-mail or the like.
- the hardware configuration of the computer that constitutes the pathological diagnosis support device can be modified in various ways. It is also possible to configure the pathological diagnosis support device with a plurality of computers separated as hardware for the purpose of improving processing capacity and reliability. For example, the functions of the RW control unit 60 and the instruction reception unit 61, and the functions of the detection unit 62, the derivation unit 63, and the display control unit 64 are distributed to two computers. In this case, the pathological diagnosis support device is configured by two computers.
- the hardware configuration of the computer of the pathological diagnosis support device can be appropriately changed according to the required performance such as processing capacity, safety, and reliability.
- application programs such as the operation program 55 may be duplicated, or may be distributed across and stored in a plurality of storage devices, for the purpose of improving safety and reliability.
- as the hardware structure of the processing units that execute various processes, such as the RW control unit 60, the instruction receiving unit 61, the detection unit 62, the derivation unit 63, the display control unit 64, the extraction units 110 and 130, and the compression unit 150, the following various processors can be used.
- the various processors include the CPU 47, which is a general-purpose processor that executes software (the operation program 55) to function as the various processing units; a programmable logic device (PLD), such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; and a dedicated electric circuit, such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively to execute specific processing.
- one processing unit may be composed of one of these various processors, or may be composed of a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs and/or a combination of a CPU and an FPGA). Further, a plurality of processing units may be configured by one processor.
- as a first example in which a plurality of processing units are configured by one processor, as represented by a computer such as a client or a server, one processor may be configured by a combination of one or more CPUs and software, and this processor may function as the plurality of processing units.
- as a second example, as represented by a system on chip (SoC), a processor that realizes the functions of an entire system including a plurality of processing units with a single IC (Integrated Circuit) chip may be used.
- the various processing units are configured by using one or more of the above-mentioned various processors as a hardware structure.
- more specifically, as the hardware structure of these various processors, an electric circuit in which circuit elements such as semiconductor elements are combined can be used.
- in the technique of the present disclosure, the various embodiments and the various modification examples described above can be appropriately combined. The technique is of course not limited to each of the above embodiments, and various configurations can be adopted without departing from the gist. Furthermore, the technique of the present disclosure extends to a storage medium that non-transitorily stores the program, in addition to the program itself.
- in the present specification, "A and/or B" is synonymous with "at least one of A and B". That is, "A and/or B" may mean A alone, B alone, or a combination of A and B. Further, in the present specification, the same concept as "A and/or B" applies when three or more matters are connected by "and/or".
- Pathological diagnosis support device 10 Biopsy needle 11, 155 Sample 12 Section 13 Glass 14 Cover glass 20, 156 Specimen 21 Specimen image 21G Specimen image group 25 Pathological diagnosis support device 30 Pixel 31 Table 32 Vascular region 32C Central vein region 32P Portal vein region 33 Parenchymal region 35 Fibrotic region 36 Perivascular region 36C Central vein peripheral region 36P Portal vein peripheral region 37, 84A, 84B, 121, 126, 141 Frame 40 Display 41 Input device 45 Storage device 46 Memory 47 CPU (processor) 48 Communication unit 49 Bus line 55 Operation program (operation program of pathological diagnosis support device) 60 Read/write control unit (RW control unit) 61 Instruction receiving unit 62 Detection unit 63 Derivation unit 64 Display control unit 70 Detection result 71, 101, 106, 113, 127, 133, 142, 143 Evaluation information 75, 95 Absorption spectrum 80 Binarized image 82 Section region 83 Line 90, 100, 105, 120, 125, 140 Analysis result display screen
Abstract
Description
In FIG. 1, a pathological diagnosis is performed, for example, through the following steps. First, under the monitoring of an ultrasound observation apparatus (not shown), a biopsy needle 10 is inserted into the liver LV from outside the body of a patient P, and an elongated sample 11 is collected from the liver LV. The sample 11 is then embedded in paraffin, sliced into a plurality of sections 12 with a microtome, and the sliced sections 12 are affixed to a glass 13. Thereafter, the paraffin is removed, the sections 12 are stained with a dye, and the stained sections 12 are covered with a cover glass 14 to complete a specimen 20. In this example, Sirius red is used as the dye. The liver LV is an example of the "biological tissue" according to the technique of the present disclosure.
In the second embodiment shown in FIGS. 19 to 22, the evaluation information 113 is derived separately for the perivascular region 36 and the non-vascular peripheral region 115 other than the perivascular region 36.
In the third embodiment shown in FIGS. 25 to 28, the evaluation information 133 is derived separately for the central vein peripheral region 36C, which includes the periphery of the central vein passing through the liver LV, and the non-vascular peripheral region 115.
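Only as a schematic illustration of this region-wise derivation (the concrete derivation steps are those of the embodiments, not of this snippet), the following sketch restricts a metric to a region of interest and to its complement within the section region; the mask names and the use of the area occupancy rate as the metric are assumptions.

```python
import numpy as np


def region_wise_evaluation(fibrosis_mask, region_mask, section_mask):
    """Derive an (assumed) area occupancy rate separately for a region of interest
    (e.g. the perivascular or central vein peripheral region) and for the remaining
    non-vascular peripheral part of the section region."""
    results = {}
    complement_mask = section_mask & ~region_mask
    for name, mask in (("region_of_interest", region_mask),
                       ("non_vascular_peripheral", complement_mask)):
        pixels = np.count_nonzero(mask)
        fibrotic = np.count_nonzero(fibrosis_mask & mask)
        results[name] = fibrotic / pixels if pixels else 0.0
    return results
```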
In the fourth embodiment shown in FIG. 30, the fibrotic region 35 is detected after compression processing is applied to the sample image 21.
11, 155 Sample
12 Section
13 Glass
14 Cover glass
20, 156 Specimen
21 Specimen image
21G Specimen image group
25 Pathological diagnosis support device
30 Pixel
31 Table
32 Vascular region
32C Central vein region
32P Portal vein region
33 Parenchymal region
35 Fibrotic region
36 Perivascular region
36C Central vein peripheral region
36P Portal vein peripheral region
37, 84A, 84B, 121, 126, 141 Frame
40 Display
41 Input device
45 Storage device
46 Memory
47 CPU (processor)
48 Communication unit
49 Bus line
55 Operation program (operation program of the pathological diagnosis support device)
60 Read/write control unit (RW control unit)
61 Instruction receiving unit
62 Detection unit
63 Derivation unit
64 Display control unit
70 Detection result
71, 101, 106, 113, 127, 133, 142, 143 Evaluation information
75, 95 Absorption spectrum
80 Binarized image
82 Section region
83 Line
90, 100, 105, 120, 125, 140 Analysis result display screen
91A Frame reverse button
91B Frame advance button
92 Confirmation button
97 Table
110, 130 Extraction unit
111, 131 Machine learning model
112, 132 Extraction result
115 Non-vascular peripheral region
150 Compression unit
LG Lung
LV Liver
P Patient
PV Pixel value
ST10, ST11, ST12, ST13, ST20, ST21, ST22, ST110 Step
ST100 Step (acquisition process)
ST120 Step (detection process)
ST130 Step (derivation process)
ST140 Step (output process)
TH Threshold
TH1, TH2 First threshold, second threshold
Claims (8)
- A pathological diagnosis support device comprising at least one processor,
wherein the processor:
acquires a specimen image having pixel values of a plurality of color channels, the specimen image depicting a specimen of biological tissue stained with a dye;
detects a fibrotic region of the biological tissue by comparing a ratio of the pixel values of two color channels among the plurality of color channels with a preset threshold value;
derives, based on the detected fibrotic region, evaluation information representing a degree of fibrosis of the biological tissue; and
outputs the evaluation information.
- The pathological diagnosis support device according to claim 1, wherein the processor calculates the ratio of the pixel values of two color channels corresponding to two color regions in which a difference in absorption rate in an absorption spectrum of the dye is relatively large.
- The pathological diagnosis support device according to claim 1 or 2, wherein the processor derives the evaluation information separately for a perivascular region, which includes the periphery of a blood vessel passing through the biological tissue, and a non-vascular peripheral region other than the perivascular region.
- The pathological diagnosis support device according to claim 3, wherein the specimen is collected from a liver, and the processor derives the evaluation information separately for a central vein peripheral region, which includes the periphery of a central vein passing through the liver, and the non-vascular peripheral region.
- The pathological diagnosis support device according to any one of claims 1 to 4, wherein the processor derives, as the evaluation information, a numerical value relating to at least one of an area and a length of the fibrotic region.
- The pathological diagnosis support device according to any one of claims 1 to 5, wherein the processor detects the fibrotic region after applying compression processing to the acquired specimen image.
- An operation method of a pathological diagnosis support device, in which a processor executes:
acquisition processing of acquiring a specimen image having pixel values of a plurality of color channels, the specimen image depicting a specimen of biological tissue stained with a dye;
detection processing of detecting a fibrotic region of the biological tissue by comparing a ratio of the pixel values of two color channels among the plurality of color channels with a preset threshold value;
derivation processing of deriving, based on the detected fibrotic region, evaluation information representing a degree of fibrosis of the biological tissue; and
output processing of outputting the evaluation information.
- An operation program of a pathological diagnosis support device, the program causing a processor to execute:
acquisition processing of acquiring a specimen image having pixel values of a plurality of color channels, the specimen image depicting a specimen of biological tissue stained with a dye;
detection processing of detecting a fibrotic region of the biological tissue by comparing a ratio of the pixel values of two color channels among the plurality of color channels with a preset threshold value;
derivation processing of deriving, based on the detected fibrotic region, evaluation information representing a degree of fibrosis of the biological tissue; and
output processing of outputting the evaluation information.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022533741A JP7436666B2 (ja) | 2020-07-02 | 2021-05-25 | 病理診断支援装置、病理診断支援装置の作動方法、病理診断支援装置の作動プログラム |
CN202180045590.8A CN115769073A (zh) | 2020-07-02 | 2021-05-25 | 病理诊断辅助装置、病理诊断辅助装置的工作方法、病理诊断辅助装置的工作程序 |
EP21833599.0A EP4177605A4 (en) | 2020-07-02 | 2021-05-25 | PATHOLOGICAL DIAGNOSTIC DEVICE, OPERATING METHOD FOR THE PATHOLOGICAL DIAGNOSTIC DEVICE AND OPERATING PROGRAM FOR THE PATHOLOGICAL DIAGNOSTIC DEVICE |
US18/146,030 US20230125525A1 (en) | 2020-07-02 | 2022-12-23 | Pathological diagnosis support apparatus, operation method for pathological diagnosis support apparatus, and operation program for pathological diagnosis support apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-115065 | 2020-07-02 | ||
JP2020115065 | 2020-07-02 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/146,030 Continuation US20230125525A1 (en) | 2020-07-02 | 2022-12-23 | Pathological diagnosis support apparatus, operation method for pathological diagnosis support apparatus, and operation program for pathological diagnosis support apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022004198A1 true WO2022004198A1 (ja) | 2022-01-06 |
Family
ID=79315908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/019793 WO2022004198A1 (ja) | 2020-07-02 | 2021-05-25 | 病理診断支援装置、病理診断支援装置の作動方法、病理診断支援装置の作動プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230125525A1 (ja) |
EP (1) | EP4177605A4 (ja) |
JP (1) | JP7436666B2 (ja) |
CN (1) | CN115769073A (ja) |
WO (1) | WO2022004198A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008215820A (ja) * | 2007-02-28 | 2008-09-18 | Tokyo Institute Of Technology | スペクトルを用いた解析方法 |
JP2013113680A (ja) * | 2011-11-28 | 2013-06-10 | Keio Gijuku | 病理診断支援装置、病理診断支援方法、及び病理診断支援プログラム |
US20150339816A1 (en) * | 2013-01-08 | 2015-11-26 | Agency For Science, Technology And Research | A method and system for assessing fibrosis in a tissue |
WO2019044579A1 (ja) * | 2017-08-31 | 2019-03-07 | 国立大学法人大阪大学 | 病理診断装置、画像処理方法及びプログラム |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2772882A1 (en) * | 2013-03-01 | 2014-09-03 | Universite D'angers | Automatic measurement of lesions on medical images |
EP2980753A1 (en) * | 2014-08-01 | 2016-02-03 | Centre Hospitalier Universitaire d'Angers | Method for displaying easy-to-understand medical images |
-
2021
- 2021-05-25 WO PCT/JP2021/019793 patent/WO2022004198A1/ja active Application Filing
- 2021-05-25 EP EP21833599.0A patent/EP4177605A4/en active Pending
- 2021-05-25 CN CN202180045590.8A patent/CN115769073A/zh active Pending
- 2021-05-25 JP JP2022533741A patent/JP7436666B2/ja active Active
-
2022
- 2022-12-23 US US18/146,030 patent/US20230125525A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008215820A (ja) * | 2007-02-28 | 2008-09-18 | Tokyo Institute Of Technology | スペクトルを用いた解析方法 |
JP2013113680A (ja) * | 2011-11-28 | 2013-06-10 | Keio Gijuku | 病理診断支援装置、病理診断支援方法、及び病理診断支援プログラム |
US20150339816A1 (en) * | 2013-01-08 | 2015-11-26 | Agency For Science, Technology And Research | A method and system for assessing fibrosis in a tissue |
WO2019044579A1 (ja) * | 2017-08-31 | 2019-03-07 | 国立大学法人大阪大学 | 病理診断装置、画像処理方法及びプログラム |
Non-Patent Citations (3)
Title |
---|
CALVARUSO, V. ET AL.: "Computer-assisted image analysis of liver collagen: Relationship to Ishak scoring and hepatic venous pressure gradient", HEPATOLOGY, vol. 49, no. 4, 2009, pages 1236 - 1244, XP055325291, DOI: 10.1002/hep.22745 * |
See also references of EP4177605A4 |
VINCENZA CALVARUSO ET AL.: "Computer-Assisted Image Analysis of Liver Collagen: Relationship to Ishak Scoring and Hepatic Venous Pressure Gradient", HEPATOLOGY, no. 49, April 2009 (2009-04-01), pages 1236 - 1244, XP055325291, DOI: 10.1002/hep.22745 |
Also Published As
Publication number | Publication date |
---|---|
CN115769073A (zh) | 2023-03-07 |
EP4177605A4 (en) | 2023-12-06 |
EP4177605A1 (en) | 2023-05-10 |
JPWO2022004198A1 (ja) | 2022-01-06 |
JP7436666B2 (ja) | 2024-02-21 |
US20230125525A1 (en) | 2023-04-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11937973B2 (en) | Systems and media for automatically diagnosing thyroid nodules | |
Militello et al. | A semi-automatic approach for epicardial adipose tissue segmentation and quantification on cardiac CT scans | |
US10165987B2 (en) | Method for displaying medical images | |
JP5442542B2 (ja) | 病理診断支援装置、病理診断支援方法、病理診断支援のための制御プログラムおよび該制御プログラムを記録した記録媒体 | |
JP2002165757A (ja) | 診断支援装置 | |
Di Leo et al. | A software tool for the diagnosis of melanomas | |
US10748279B2 (en) | Image processing apparatus, image processing method, and computer readable recording medium | |
Ebadi et al. | Automated detection of pneumonia in lung ultrasound using deep video classification for COVID-19 | |
Hatanaka et al. | Improvement of automatic hemorrhage detection methods using brightness correction on fundus images | |
Hatanaka et al. | CAD scheme to detect hemorrhages and exudates in ocular fundus images | |
Aibinu et al. | Automatic diagnosis of diabetic retinopathy from fundus images using digital signal and image processing techniques | |
WO2022004198A1 (ja) | 病理診断支援装置、病理診断支援装置の作動方法、病理診断支援装置の作動プログラム | |
Tolentino et al. | Detection of circulatory diseases through fingernails using artificial neural network | |
US20230117179A1 (en) | System and method for generating an indicator from an image of a histological section | |
KR102393661B1 (ko) | 다중 영상 내시경 시스템, 그 영상 제공 방법, 및 상기 방법을 실행시키기 위한 컴퓨터 판독 가능한 프로그램을 기록한 기록 매체 | |
CN110827255A (zh) | 一种基于冠状动脉ct图像的斑块稳定性预测方法及系统 | |
Kollorz et al. | Using power watersheds to segment benign thyroid nodules in ultrasound image data | |
JP2007236956A (ja) | 内視鏡診断支援装置及び内視鏡画像処理方法 | |
CN113038868A (zh) | 医疗图像处理系统 | |
CN108280832A (zh) | 医学图像分析方法、医学图像分析系统以及存储介质 | |
JP7092888B2 (ja) | 医療画像処理システム及び学習方法 | |
Patil et al. | Implementation of segmentation of blood vessels in retinal images on FPGA | |
JP6786260B2 (ja) | 超音波診断装置及び画像生成方法 | |
Sowmiya et al. | Survey or Review on the Deep Learning Techniques for Retinal Image Segmentation in Predicting/Diagnosing Diabetic Retinopathy | |
CN114209299B (zh) | 一种基于ippg技术的人体生理参数检测通道选择的方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21833599 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2022533741 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021833599 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2021833599 Country of ref document: EP Effective date: 20230202 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |