WO2022231200A1 - Learning method for training an artificial neural network to determine a breast cancer lesion region, and computing system for performing the same - Google Patents
Learning method for training an artificial neural network to determine a breast cancer lesion region, and computing system for performing the same
- Publication number
- WO2022231200A1 (PCT/KR2022/005634, KR2022005634W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- neural network
- resolution patch
- artificial neural
- low
- layer
- Prior art date
Links
- 238000013528 artificial neural network Methods 0.000 title claims abstract description 165
- 238000000034 method Methods 0.000 title claims abstract description 64
- 230000003902 lesion Effects 0.000 title claims abstract description 57
- 206010006187 Breast cancer Diseases 0.000 title claims abstract description 20
- 208000026310 Breast neoplasm Diseases 0.000 title claims abstract description 20
- 238000012549 training Methods 0.000 title abstract description 10
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 71
- 238000012805 post-processing Methods 0.000 claims description 39
- 206010028980 Neoplasm Diseases 0.000 claims description 29
- 201000011510 cancer Diseases 0.000 claims description 29
- 201000009030 Carcinoma Diseases 0.000 claims description 23
- 238000010606 normalization Methods 0.000 claims description 16
- 238000004590 computer program Methods 0.000 claims description 12
- 230000007246 mechanism Effects 0.000 claims description 10
- 230000003044 adaptive effect Effects 0.000 claims description 8
- 238000012545 processing Methods 0.000 claims description 7
- 210000001519 tissue Anatomy 0.000 description 70
- 238000010586 diagram Methods 0.000 description 13
- 230000006870 function Effects 0.000 description 10
- 201000010099 disease Diseases 0.000 description 9
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 9
- 238000003860 storage Methods 0.000 description 6
- 238000003745 diagnosis Methods 0.000 description 5
- 230000001575 pathological effect Effects 0.000 description 5
- 230000008569 process Effects 0.000 description 4
- 208000037396 Intraductal Noninfiltrating Carcinoma Diseases 0.000 description 3
- 230000004913 activation Effects 0.000 description 3
- 210000003855 cell nucleus Anatomy 0.000 description 3
- 208000028715 ductal breast carcinoma in situ Diseases 0.000 description 3
- 230000003511 endothelial effect Effects 0.000 description 3
- 230000007170 pathology Effects 0.000 description 3
- 238000011176 pooling Methods 0.000 description 3
- 238000004393 prognosis Methods 0.000 description 3
- 238000002271 resection Methods 0.000 description 3
- 206010073094 Intraductal proliferative breast lesion Diseases 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 210000004027 cell Anatomy 0.000 description 2
- 238000004891 communication Methods 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 201000007273 ductal carcinoma in situ Diseases 0.000 description 2
- 238000010801 machine learning Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 208000006402 Ductal Carcinoma Diseases 0.000 description 1
- 238000001574 biopsy Methods 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 238000002512 chemotherapy Methods 0.000 description 1
- 238000013145 classification model Methods 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 239000013067 intermediate product Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 210000000056 organ Anatomy 0.000 description 1
- 230000002980 postoperative effect Effects 0.000 description 1
- 239000000047 product Substances 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000012552 review Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 238000001356 surgical procedure Methods 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 230000008685 targeting Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B10/00—Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
- A61B10/0041—Detection of breast cancer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30068—Mammography; Breast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
Definitions
- The present invention relates to a learning method for training an artificial neural network to determine a breast cancer lesion region, and a computing system for performing the same. More particularly, it relates to a method for training an artificial neural network that can discriminate breast cancer lesion regions by considering both the microscopic and macroscopic features of biological tissue, and a computing system for performing the same.
- One of the major tasks performed in pathology is to read a patient's biological tissue image and diagnose the condition, sign, or lesion of a specific disease.
- Such diagnosis has long depended on the experience and knowledge of skilled medical personnel.
- A great advantage of diagnosing biological tissue images with an artificial neural network trained on a large amount of data is that it does not simply automate the experience and knowledge of skilled medical personnel; by finding characteristic elements on its own and deriving the desired answer, it can discover features of disease factors in the image that even experienced medical personnel were not aware of.
- Mastectomy is widely performed to excise regions containing cancer lesions.
- After surgery, pathological examination of the resected breast cancer tissue is required, in which the invasive cancer lesion regions and the ductal carcinoma in situ (DCIS) regions in the resected breast cancer tissue must be identified.
- In general, the macroscopic tissue shape is very important when determining, through an artificial neural network such as a convolutional neural network, whether a breast cancer lesion region is invasive cancer or ductal carcinoma in situ.
- However, conventional single-resolution convolutional neural networks have limitations in capturing these features well enough to classify the lesion region accurately.
- Accordingly, the technical task of the present invention is to propose a structure of an artificial neural network capable of detecting a lesion region caused by a specific disease from a biological tissue image, and to provide a method for learning it.
- In particular, it proposes a structure of an artificial neural network that can determine the lesion region by considering both macroscopic features of the tissue region, such as tissue shape, and microscopic features, such as cell shape and cell nucleus size, and provides a method for learning it.
- In addition, both the macroscopic and microscopic characteristics of the tissue need to be considered in determining whether a breast cancer lesion region is invasive cancer or ductal carcinoma in situ.
- This is to propose a deep learning model for biological image analysis that detects the lesion region and classifies the detected lesion region as invasive cancer or ductal carcinoma in situ in consideration of the shape and size of the cell nucleus.
- According to one aspect of the present invention, there is provided an artificial neural network learning method comprising the step of learning the artificial neural network by inputting the i-th high-resolution patch and the i-th low-resolution patch, wherein the artificial neural network includes a first encoding convolutional neural network, a second encoding convolutional neural network, and a decoding convolutional neural network.
- The first encoding convolutional neural network is a convolutional neural network that receives the i-th high-resolution patch and outputs a first feature map corresponding to the i-th high-resolution patch; the second encoding convolutional neural network is a convolutional neural network that receives the i-th low-resolution patch and outputs context information corresponding to the i-th low-resolution patch; and the decoding neural network is a convolutional neural network that reflects the context information corresponding to the i-th low-resolution patch in the first feature map corresponding to the i-th high-resolution patch, and generates predetermined prediction information for determining a lesion region in the i-th high-resolution patch based on the result value reflecting the context information.
- In one embodiment, the living tissue slide is a breast cancer resection tissue slide, and an invasive cancer region, which is a lesion region due to invasive cancer, and a ductal carcinoma in situ (DCIS) region, which is a lesion region due to ductal carcinoma in situ, may be annotated on the slide image.
- In one embodiment, the decoding neural network may include a first convolutional layer that performs a convolution operation on the first feature map, and a first post-processing layer that determines a normalization parameter using the context information output from the second encoding convolutional neural network and reflects the context information in the first feature map by performing adaptive normalization, with the determined normalization parameter, on the result value output from the first convolutional layer.
- In another embodiment, the decoding neural network may include a first convolutional layer that performs a convolution operation on the first feature map, and a first post-processing layer that reflects the context information in the first feature map by performing an attention mechanism, based on the context information output from the second encoding convolutional neural network, on the result value output from the first convolutional layer.
- In one embodiment, the first encoding convolutional neural network may further output a second feature map corresponding to the i-th high-resolution patch, the second feature map being a feature map of a lower level than the first feature map. In this case, the decoding convolutional neural network may further include a non-local block layer that performs a non-local block operation on the second feature map, a concatenation layer that concatenates the result transmitted from the first post-processing layer and the result transmitted from the non-local block layer, a second convolutional layer that performs a convolution operation on the result transmitted from the concatenation layer, and a second post-processing layer that reflects the context information corresponding to the i-th low-resolution patch in the result output from the second convolutional layer, and may output the prediction information based on the result output from the second post-processing layer.
- According to another aspect of the present invention, there is provided a method of providing a determination result for a predetermined judgment target biological tissue slide through an artificial neural network learned by the artificial neural network learning method, the method comprising: obtaining, by a computing system, a judgment target slide image; generating, by the computing system, a first judgment target high-resolution patch to an N-th judgment target high-resolution patch from the judgment target slide image; generating, by the computing system, a j-th judgment target low-resolution patch corresponding to the j-th judgment target high-resolution patch (where j is any integer with 1<=j<=N), wherein the j-th judgment target high-resolution patch and the corresponding j-th judgment target low-resolution patch have the same size, and the center points of the j-th judgment target high-resolution patch and the j-th judgment target low-resolution patch point to the same position on the judgment target biological tissue slide; and determining, by the computing system, a lesion region included in the j-th judgment target high-resolution patch based on a prediction result output by the artificial neural network that receives the j-th judgment target high-resolution patch and the j-th judgment target low-resolution patch.
- According to another aspect, there is provided a computer program installed in a data processing apparatus and recorded on a medium for performing the above-described method.
- According to another aspect, there is provided a computer-readable recording medium on which a computer program for performing the above-described method is recorded.
- According to another aspect, there is provided an artificial neural network learning system including a processor and a memory storing a computer program, wherein the computer program, when executed by the processor, causes the artificial neural network learning system to perform the above learning method.
- Also provided is a system for providing a determination result for a judgment target biological tissue slide, including a processor and a memory storing a computer program, wherein the computer program, when executed by the processor, causes the system to perform the above method of providing the determination result.
- According to the technical idea of the present invention, it is possible to determine the lesion region by considering both macroscopic features of the tissue region, such as tissue shape, and microscopic features, such as cell shape or cell nucleus size. That is, an artificial neural network that judges the lesion region by considering the microscopic features of the tissue through a high-resolution image and the macroscopic features of the tissue through a low-resolution image, and a method for learning it, can be provided.
- In particular, according to the technical idea of the present invention, the lesion region can be detected very effectively, and the detected lesion region can be classified as invasive cancer or ductal carcinoma in situ.
- FIG. 1 is a diagram schematically illustrating an environment in which a method for learning an artificial neural network and a method for providing a judgment result for a biological tissue slide according to the technical idea of the present invention are performed.
- FIG. 2 is a flowchart illustrating an artificial neural network learning method according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating an example of a biological tissue slide image in which an invasive cancer region and a ductal carcinoma in situ (DCIS) region are annotated.
- FIG. 4A is a diagram for explaining a high-resolution patch.
- FIG. 4B is a diagram illustrating an example of the region covered by a high-resolution patch and the region covered by the corresponding low-resolution patch, whose center points are the same position on a biological tissue slide image.
- FIG. 5 is a diagram for explaining the structure of an artificial neural network according to an embodiment of the present invention.
- FIG. 6 is a flowchart illustrating an example of a method for providing a determination result for a living tissue slide according to an embodiment of the present invention.
- FIG. 7 is a diagram illustrating a schematic configuration of an artificial neural network learning system according to an embodiment of the present invention.
- FIG. 8 is a diagram illustrating a schematic configuration of a system for providing judgment results according to an embodiment of the present invention.
- In this specification, when any one component 'transmits' data to another component, it means that the component may transmit the data to the other component directly or through at least one further component. Conversely, when one component 'directly transmits' data to another component, it means that the data is transmitted from the component to the other component without passing through any further component.
- FIG. 1 is a diagram schematically illustrating an environment in which a method for learning an artificial neural network and a method for providing a judgment result for a biological tissue slide according to the technical idea of the present invention are performed.
- Referring to FIG. 1, the artificial neural network learning method according to an embodiment of the present invention may be performed by the artificial neural network learning system 100, and the method of providing a determination result for a predetermined judgment target biological tissue slide according to an embodiment of the present invention may be performed by the judgment result providing system 200 (hereinafter referred to as the 'judgment result providing system').
- the artificial neural network learning system 100 may learn the artificial neural network 300 .
- the artificial neural network learning system 100 may learn the artificial neural network 300 based on learning data generated from a plurality of pathological specimens.
- the pathological specimen may be a biopsy taken from various organs of the human body or a living tissue excised by surgery.
- The artificial neural network learning system 100 can learn the artificial neural network 300 by generating individual learning data using digital pathology slide images of pathological specimens and inputting them to the input layer of the artificial neural network 300.
- The artificial neural network 300 may be an artificial neural network that can be learned to output a probability value indicating whether or not a predetermined disease is present.
- In other words, the artificial neural network 300 may output a numerical value, that is, a probability value, indicating a determination result for a target sample (e.g., whether a lesion is caused by a specific disease, in particular breast cancer) based on data input through its input layer.
- An artificial neural network is a neural network artificially constructed based on the operating principle of human neurons; it includes multi-layer perceptron models and may refer to a set of information expressing a series of design items that define an artificial neural network.
- the artificial neural network 300 may be a convolutional neural network or may include a convolutional neural network.
- the convolutional neural network may include an input layer, a plurality of hidden layers, and an output layer.
- Each of the plurality of hidden layers may include a convolution layer and a pooling layer (or a sub-sampling layer).
- the convolutional neural network may be defined by a function, filter, stride, weight factor, etc. for defining each of these layers.
- The output layer may be defined as a fully connected feed-forward layer. The design details of each layer constituting the convolutional neural network are widely known.
- For example, well-known functions may be used for the number of layers, the convolution functions, the pooling functions, and the activation functions that define the plurality of layers, or separately defined functions may be used to implement the technical idea of the present invention.
- The learned artificial neural network 300 may be stored in the determination result providing system 200, and the determination result providing system 200 may use the learned artificial neural network 300 to make a determination on a slide image of a predetermined diagnosis target sample or specimen.
- The artificial neural network 300 may be a neural network that receives a biological tissue slide image or a patch (also referred to as a 'tile'), which is a part of a biological tissue slide image, and provides diagnostic information, prognosis information, and/or treatment-response information for the biological tissue.
- the artificial neural network 300 may receive a biological tissue slide image or a patch that is a part of the biological tissue slide image.
- the slide image may be a scanned image of a biological tissue slide
- the patch may be a portion of a biological tissue slide image divided in a grid form.
- The living tissue slide may be a breast cancer excision tissue slide, and in this case, the lesion region may include an invasive cancer region, which is a lesion region due to invasive cancer, and a DCIS region, which is a lesion region due to ductal carcinoma in situ.
- Meanwhile, the artificial neural network 300 may be a region-division model (i.e., a pixel-wise classification model) for an input image. That is, the output value of the artificial neural network 300 may be a region-division result (i.e., a pixel-wise classification result). In other words, the artificial neural network 300 may output predetermined prediction information for determining the lesion region in the corresponding image. The artificial neural network 300 may thus be a pixel-level classification neural network that classifies an input image in units of pixels. For example, the artificial neural network 300 may be a neural network that outputs, for each pixel constituting an image, the probability of invasive cancer, the probability of ductal carcinoma in situ, and the probability of not being cancer.
- the artificial neural network learning system 100 may learn the artificial neural network 300 by inputting a plurality of biological tissue patches.
- The learning data (i.e., the biological tissue patches) may be annotated with an invasive cancer region, which is a lesion region due to invasive cancer, and a DCIS region, which is a lesion region due to ductal carcinoma in situ.
- The determination result providing system 200 can use the learned artificial neural network 300 to make various determinations on a target sample (e.g., determination of the lesion region, the presence or absence of disease expression, prognosis, determination of a treatment method, etc.).
- The artificial neural network learning system 100 and/or the judgment result providing system 200 may be a computing system, that is, a data processing device having the computational power to implement the technical idea of the present invention, and may generally include not only a server, which is a data processing device accessible by clients through a network, but also computing devices such as personal computers and portable terminals.
- The artificial neural network learning system 100 and/or the determination result providing system 200 may each be implemented as a single physical device; however, if necessary, they may be implemented by organically combining a plurality of physical devices, as an average expert in the technical field of the present invention can easily infer.
- the artificial neural network learning system 100 and/or the determination result providing system 200 may be implemented in the form of a subsystem of a predetermined parent system 10 .
- the parent system 10 may be a server.
- The server 10 means a data processing device having the computational capability to implement the technical idea of the present invention. In general, not only a data processing device that a client can access through a network but also any device capable of providing a specific service, such as a personal computer or a mobile terminal, can be defined as a server, as an average expert in the art of the present invention can easily infer.
- the artificial neural network learning system 100 and the determination result providing system 200 may be implemented in a form separated from each other.
- FIG. 2 is a flowchart illustrating an artificial neural network learning method according to an embodiment of the present invention.
- According to an embodiment of the present invention, the artificial neural network learning system 100 may acquire a biological tissue slide image (S100).
- a lesion region may be annotated on the biological tissue slide image.
- For example, the living tissue slide image may be annotated with an invasive cancer region, which is a lesion region due to invasive cancer, and a DCIS region, which is a lesion region due to ductal carcinoma in situ.
- Invasive cancer and ductal carcinoma in situ may be present simultaneously in one living tissue slide image, and the type of lesion may be annotated for each lesion region.
- FIG. 3 is a diagram illustrating an example of a biological tissue slide image in which an invasive cancer region and a DCIS region are annotated.
- In FIG. 3, regions marked in red (e.g., 1) may be invasive cancer regions, and regions marked in yellow (e.g., 2) may be DCIS regions.
- The artificial neural network learning system 100 may obtain N high-resolution patches (where N is an integer of 2 or more) from the biological tissue slide image (S110).
- Here, 'high resolution' does not mean a specific magnification or a resolution above a specific level, but a resolution that is relatively high compared to the low-resolution patch described later.
- In one embodiment, the artificial neural network learning system 100 may obtain the N high-resolution patches by dividing the biological tissue slide image into patches of a predetermined size. For example, as illustrated in FIG. 4A, the artificial neural network learning system 100 may generate high-resolution patches (e.g., 11) by dividing the original biological tissue slide image 10 in a grid shape.
- the N high-resolution patches may be mutually exclusive, but the present invention is not limited thereto, and at least some of the N high-resolution patches may have regions overlapping other high-resolution patches.
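- As an illustration of the grid-style patch extraction described above, a minimal Python sketch is given below; the array layout, patch size, and function name are assumptions for illustration and are not taken from the patent itself.

```python
import numpy as np

def tile_slide(slide_img: np.ndarray, patch_size: int = 256, stride: int = 256):
    """Split a slide image (H x W x 3 array) into a grid of patches.

    With stride == patch_size the patches are mutually exclusive; a smaller
    stride yields partially overlapping patches, as the embodiment allows.
    """
    patches, centers = [], []
    h, w = slide_img.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(slide_img[y:y + patch_size, x:x + patch_size])
            centers.append((x + patch_size // 2, y + patch_size // 2))
    return patches, centers
```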
- the low-resolution patch may be a patch having a relatively lower resolution than a high-resolution patch.
- For example, the high-resolution patch may be a 50x image and the low-resolution patch may be a 12.5x image. Hereinafter, it is assumed for the purpose of explanation that the high-resolution patch is a 50x image and the low-resolution patch is a 12.5x image.
- the central point of the ith high-resolution patch and the central point of the ith low-resolution patch may point to the same position on the biological tissue slide.
- the ith high-resolution patch and the ith low-resolution patch may have the same size.
- For example, if the size of the high-resolution patch is 256×256, the size of the low-resolution patch may also be 256×256.
- Meanwhile, since the high-resolution patch is a 50x image and the low-resolution patch is a 12.5x image (a magnification ratio of 1/4), the ratio of the area on the slide image covered by the high-resolution patch to the area covered by the corresponding low-resolution patch may be 1:16.
- In one embodiment, the artificial neural network learning system 100 may obtain a low-resolution patch by extracting a wide region around the center point of the high-resolution patch (strictly, the region to be covered by the low-resolution patch) and then reducing it. For example, to obtain a low-resolution patch corresponding to a high-resolution patch having a size of 256×256 and center-point coordinates of (2048, 2560), the artificial neural network learning system 100 may generate the low-resolution patch by extracting a 1024×1024 region centered on the coordinates (2048, 2560) and then reducing it to a size of 256×256.
- As shown in FIG. 4B, the high-resolution patch 11 and the corresponding low-resolution patch may have the same center point 13 on the biological tissue slide 10, and the area of the region 12 covered by the low-resolution patch may be 16 times the area of the region 11 covered by the high-resolution patch.
- Meanwhile, the biological tissue slide image may include only an image of a single magnification, but according to an embodiment, it may include a plurality of images ranging from high magnification to low magnification in a pyramid format.
- For example, the biological tissue image may include a high-resolution slide image at 50x magnification and a low-resolution slide image at 12.5x magnification.
- the artificial neural network learning system 100 may obtain a plurality of high-resolution patches by dividing the high-resolution slide image, and extract a corresponding low-resolution patch from the low-resolution slide image for each high-resolution patch.
- For example, to obtain a low-resolution patch corresponding to a high-resolution patch having a size of 256×256 and center-point coordinates of (2048, 2560), the artificial neural network learning system 100 may obtain the low-resolution patch by extracting a 256×256 region centered on the coordinates (512, 640) of the low-resolution slide image, as in the sketch below.
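- The two ways of obtaining a corresponding low-resolution patch described above (shrinking a 4x-larger crop of the high-resolution image, or cropping the matching pyramid level directly) could be sketched roughly as follows; the use of OpenCV and the exact coordinate handling are assumptions for illustration.

```python
import cv2
import numpy as np

def lowres_patch_by_downscaling(slide_50x: np.ndarray, cx: int, cy: int,
                                size: int = 256, ratio: int = 4) -> np.ndarray:
    """Crop a (size*ratio) x (size*ratio) region centered at (cx, cy) on the
    50x slide, then shrink it to size x size to emulate a 12.5x patch.
    E.g., center (2048, 2560): crop 1024x1024, resize to 256x256."""
    half = size * ratio // 2
    region = slide_50x[cy - half:cy + half, cx - half:cx + half]
    return cv2.resize(region, (size, size), interpolation=cv2.INTER_AREA)

def lowres_patch_from_pyramid(slide_12_5x: np.ndarray, cx: int, cy: int,
                              size: int = 256, ratio: int = 4) -> np.ndarray:
    """When the slide stores a 12.5x pyramid level, crop directly around the
    scaled center coordinates, e.g., (2048, 2560) -> (512, 640)."""
    cx, cy = cx // ratio, cy // ratio
    half = size // 2
    return slide_12_5x[cy - half:cy + half, cx - half:cx + half]
```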
- The artificial neural network learning system 100 may learn the artificial neural network 300 by inputting the i-th high-resolution patch and the corresponding i-th low-resolution patch to the artificial neural network 300 (S140).
- Steps S100 to S140 may be performed as described above.
- FIG. 5 is a diagram for explaining the structure of the artificial neural network 300 according to an embodiment of the present invention.
- Referring to FIG. 5, the artificial neural network 300 may include a first encoding convolutional neural network 310, a second encoding convolutional neural network 320, and a decoding convolutional neural network 330.
- In one embodiment, the first encoding convolutional neural network 310 and the second encoding convolutional neural network 320 may be implemented as MnasNet, which is a type of convolutional neural network, but are not limited thereto and may be implemented with other convolutional neural networks such as ResNet.
- The first encoding convolutional neural network 310 may receive a high-resolution patch 301, and the second encoding convolutional neural network 320 may receive a low-resolution patch 302 corresponding to the high-resolution patch 301. In the example of FIG. 5, the first encoding convolutional neural network 310 receives a 50x high-resolution patch with a size of 512×512, and the second encoding convolutional neural network 320 receives a 12.5x low-resolution patch with a size of 512×512.
- The first encoding convolutional neural network 310 may generate two or more feature maps corresponding to the input high-resolution patch 301; FIG. 5 shows an example in which a first feature map 311, which is a low-level feature map, and a second feature map 312, which is a high-level feature map, are generated.
- In the example of FIG. 5, the first feature map 311 may be a low-level feature map of dimension 32×128×128 (size 128×128, 32 channels), and the second feature map 312 may be a high-level feature map of dimension 128×32×32 (size 32×32, 128 channels).
- Here, a 'low-level' feature map means a feature map that is generated in a hidden layer relatively close to the input layer, is relatively less abstracted than the high-level feature map, and carries a larger amount of information.
- Meanwhile, in this specification, when the dimension of a value output from a specific neural network or layer is expressed as c×a×b, this indicates a value of c channels each having a size of a×b.
- The reason the high-level feature map has more channels than the low-level feature map is that, in the middle of the convolutional neural network, the image size is halved horizontally and vertically through max pooling, and the number of channels is increased to compensate for the reduced amount of information. That is, while data flows from the convolutional layers close to the input toward the convolutional layers close to the output, the size of the feature map is reduced to perform abstraction, and the number of channels is instead increased to enlarge the amount of abstract information.
- the second encoding convolutional neural network 320 may receive the low-resolution patch 302 and output context information 321 corresponding to the low-resolution patch 302 .
- Here, the context information 321 output by the second encoding convolutional neural network 320 may not be the final output value of the second encoding convolutional neural network 320; rather, the output of the layer immediately preceding the fully connected layer that produces the final output may be the context information 321.
- Although FIG. 5 shows an example of outputting 1280-dimensional context information 321, it goes without saying that the size of the context information 321 may vary depending on the structure of the neural network.
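- A minimal sketch of the two encoders' roles follows; these toy modules are stand-ins written for illustration (the patent uses MnasNet-like encoders whose internals are not reproduced here), with layer counts chosen only so the output dimensions match the example of FIG. 5.

```python
import torch
import torch.nn as nn

class HighResEncoder(nn.Module):
    """Stand-in for the first encoding CNN 310: returns a low-level feature
    map (32 x 128 x 128) and a high-level one (128 x 32 x 32) for a
    3 x 512 x 512 input, matching the dimensions in FIG. 5."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(  # 512 -> 128 spatially
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(  # 128 -> 32 spatially, more channels
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        low = self.stage1(x)     # corresponds to the first feature map 311
        high = self.stage2(low)  # corresponds to the second feature map 312
        return low, high

class LowResEncoder(nn.Module):
    """Stand-in for the second encoding CNN 320: global-pools its last
    feature map into a context vector (1280-dimensional in FIG. 5), i.e.,
    the output of the layer just before a final fully connected layer."""
    def __init__(self, ctx_dim: int = 1280):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, ctx_dim, 3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        return self.pool(self.features(x)).flatten(1)  # (batch, ctx_dim)
```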
- The decoding neural network 330 reflects the context information 321 corresponding to the low-resolution patch 302 in the feature map 311 and/or 312 output by the first encoding convolutional neural network 310, and may generate predetermined prediction information 337 for determining a lesion region in the high-resolution patch 301 based on the result value reflecting the context information 321.
- For example, the decoding neural network 330 may output, for each pixel constituting the high-resolution patch 301, the probability that the pixel corresponds to a normal region, an invasive cancer lesion region, or a DCIS lesion region.
- the decoding neural network 330 may output prediction information of 3 ⁇ 512 ⁇ 512 dimensions, but is not limited thereto.
- Alternatively, the decoding neural network 330 may output, for each pixel group constituting the high-resolution patch 301 (e.g., a square pixel group consisting of four pixels), the probability that the group corresponds to a normal region, an invasive cancer lesion region, or a DCIS lesion region. In this case, the decoding neural network 330 may output prediction information of dimension 3×128×128.
- In any case, the sum of the probability that each pixel or pixel group is normal, the probability that it is an invasive cancer lesion region, and the probability that it is a DCIS lesion region may be 1, as illustrated below.
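- As a small illustration of this output format (an assumption about how the probabilities are normalized, consistent with the passage above), a softmax across the three class channels makes the per-pixel probabilities sum to 1:

```python
import torch

logits = torch.randn(1, 3, 512, 512)   # (batch, class, H, W) raw scores
probs = torch.softmax(logits, dim=1)   # normalize over the 3 classes
assert torch.allclose(probs.sum(dim=1), torch.ones(1, 512, 512))
lesion_map = probs.argmax(dim=1)       # 0=normal, 1=invasive cancer, 2=DCIS
```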
- the decoding neural network 330 may include a first convolutional layer 331 and a first post-processing layer 332 .
- In addition, the decoding neural network 330 may further include a non-local block layer 333, a concatenation layer 334, a second convolutional layer 335, and a second post-processing layer 336.
- Depending on the embodiment, the decoding neural network 330 may include only some of the layers shown in FIG. 5, or may further include layers other than those shown in FIG. 5.
- For example, the decoding neural network 330 may include only the first convolutional layer 331 and the first post-processing layer 332 among the layers shown in FIG. 5, and in some cases may further include one or more additional convolutional layers and post-processing layers.
- the first convolution layer 331 may perform a convolution operation on the first feature map 311 .
- the first convolution layer 331 may output a 32 ⁇ 128 ⁇ 128 dimension result by performing a convolution operation through a 3 ⁇ 3 convolution filter.
- the first post-processing layer 332 may reflect the context information 321 output from the second encoding convolutional neural network 320 in the result generated by the first convolutional layer 331 .
- the first post-processing layer 332 may reflect the context information 321 through an adaptive normalization technique.
- Adaptive normalization refers to a technique in which given information (in this case, the context information 321) is fed into a fully connected layer that outputs normalization parameters, namely a mean and a standard deviation (or variance), and normalization is then performed using the mean and standard deviation output from that layer.
- That is, the first post-processing layer 332 may determine normalization parameters (e.g., a mean and a standard deviation) using the context information 321 and, with the determined normalization parameters, perform adaptive normalization on the result value output from the first convolutional layer 331, thereby reflecting the context information 321 in the first feature map 311 (see the illustrative sketch below).
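- A minimal sketch of such an adaptive normalization layer is given below, assuming the context vector is mapped by a fully connected layer to a per-channel mean and standard deviation; the exact parameterization in the patent may differ.

```python
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    """Normalize a feature map, then rescale and shift it with a mean and
    standard deviation predicted from the context vector."""
    def __init__(self, channels: int, ctx_dim: int = 1280):
        super().__init__()
        self.to_params = nn.Linear(ctx_dim, 2 * channels)  # -> (std, mean)

    def forward(self, feat: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # Instance-normalize the convolution output per channel.
        mu = feat.mean(dim=(2, 3), keepdim=True)
        sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-5
        normed = (feat - mu) / sigma
        # Apply the context-derived normalization parameters.
        std, mean = self.to_params(ctx).chunk(2, dim=1)    # each (B, C)
        std = std.unsqueeze(-1).unsqueeze(-1)
        mean = mean.unsqueeze(-1).unsqueeze(-1)
        return normed * std + mean
```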
- Alternatively, the first post-processing layer 332 may reflect the context information 321 through an attention mechanism. That is, the first post-processing layer 332 may reflect the context information 321 in the first feature map 311 by performing an attention mechanism, based on the context information 321 output from the second encoding convolutional neural network 320, on the result value output from the first convolutional layer 331.
- In this case, the context information is fed as input to a fully connected layer that outputs parameters to be used in the attention mechanism, as in the sketch below.
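- A channel-attention variant could be sketched as below; the sigmoid gating and per-channel weighting are assumptions, since the patent does not fix the exact form of the attention mechanism.

```python
import torch
import torch.nn as nn

class ContextAttention(nn.Module):
    """Reweight feature-map channels with weights predicted from the
    context vector by a fully connected layer."""
    def __init__(self, channels: int, ctx_dim: int = 1280):
        super().__init__()
        self.to_weights = nn.Linear(ctx_dim, channels)

    def forward(self, feat: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.to_weights(ctx))      # (B, C) weights in [0, 1]
        return feat * w.unsqueeze(-1).unsqueeze(-1)  # broadcast over H and W
```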
- Meanwhile, the value output from each post-processing layer including the first post-processing layer 332 (e.g., the second post-processing layer 336) may pass through a predetermined activation function (e.g., a ReLU or sigmoid activation function) and then be transferred to the next layer.
- The non-local block layer 333 may perform a non-local block operation on the second feature map 312.
- A non-local block operation refers to an operation used to calculate the non-local correlation of an input feature map; a detailed description can be found in the paper "Non-local Neural Networks" by Xiaolong Wang et al. (https://arxiv.org/pdf/1711.07971.pdf).
- Meanwhile, according to an embodiment, upscaling may be performed on the value output from the non-local block layer 333, generating a result with the same size (e.g., 128×128) as the result output from the first post-processing layer 332, which is then transmitted to the next layer (i.e., the concatenation layer 334). Interpolation or transposed convolution may be used as the upscaling technique.
- the concatenation layer 334 may concatenate the result transmitted from the first post-processing layer 332 and the result transmitted from the non-local block layer 333 .
- The concatenation layer 334 may perform the combining operation through channel stacking. For example, when a 32×128×128 result is delivered from the first post-processing layer 332 and a 128×128×128 result is delivered from the non-local block layer 333, the concatenation layer 334 may output a combined result of dimension 160×128×128 through channel stacking, as illustrated in the sketch below.
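- The upscaling and channel-stacking steps might look like the following sketch (bilinear interpolation is chosen here; a transposed convolution would also fit the embodiment):

```python
import torch
import torch.nn.functional as F

post1_out = torch.randn(1, 32, 128, 128)    # from the first post-processing layer
nonlocal_out = torch.randn(1, 128, 32, 32)  # from the non-local block layer

# Upscale the non-local branch to the same spatial size (32x32 -> 128x128).
upscaled = F.interpolate(nonlocal_out, size=(128, 128), mode='bilinear',
                         align_corners=False)

# Concatenate along the channel axis: 32 + 128 = 160 channels.
combined = torch.cat([post1_out, upscaled], dim=1)
print(combined.shape)  # torch.Size([1, 160, 128, 128])
```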
- the second convolution layer 335 may perform a convolution operation on the result output from the combining layer 334 .
- For example, the second convolutional layer 335 may output a 128×128×128 result by performing a convolution operation with a 3×3 convolution filter.
- The second post-processing layer 336 may reflect the context information 321 corresponding to the low-resolution patch in the result output from the second convolutional layer 335. Like the first post-processing layer 332, the second post-processing layer 336 may also perform the adaptive normalization technique or the attention mechanism to which the context information 321 is applied.
- the decoding convolutional neural network 330 may output the prediction information 340 based on a result output from the second post-processing layer 336 .
- one or more additional convolutional layers and an additional post-processing layer connected thereto may be further included between the second post-processing layer 336 and the layer that finally outputs the prediction information 340 .
- the decoding convolutional neural network 330 may further include a third convolutional layer 337 and a third post-processing layer 338 .
- The additional convolutional layers perform a convolution operation on the result value output from the preceding layer, like the other convolutional layers described above, and the additional post-processing layers may likewise perform the adaptive normalization technique or the attention mechanism to which the context information 321 is applied.
- Meanwhile, according to an embodiment, the decoding convolutional neural network 330 may output the prediction information 337 through additional layers (e.g., an additional convolutional layer and/or a fully connected layer, an output layer, etc.).
- the decoding convolutional neural network 330 may further include a fourth convolutional layer 339.
- As described above, the artificial neural network 300 has a structure in which the value output from an encoding convolutional neural network that receives a low-resolution image covering a large area of the biological tissue is reflected in the output value of an encoding convolutional neural network that receives a high-resolution image as input.
- Therefore, the artificial neural network 300 has the advantage of reflecting not only the macroscopic features of the biological tissue appearing in the low-resolution image covering a wide area, but also the microscopic features extracted from the high-resolution image, which covers a narrow area but shows the detailed characteristics of the tissue well.
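- Assembling the pieces sketched earlier, the overall forward pass of the two-branch architecture could look roughly as follows. This reuses the hypothetical HighResEncoder, LowResEncoder, AdaptiveNorm, and ContextAttention classes from the previous snippets; the layer sizes follow FIG. 5, but the wiring details (including replacing the non-local block with a plain upscaling step) are simplifying assumptions, not the patent's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchLesionNet(nn.Module):
    def __init__(self, num_classes: int = 3, ctx_dim: int = 1280):
        super().__init__()
        self.enc_high = HighResEncoder()          # feature maps 311 / 312
        self.enc_low = LowResEncoder(ctx_dim)     # context information 321
        self.conv1 = nn.Conv2d(32, 32, 3, padding=1)    # first conv layer 331
        self.post1 = AdaptiveNorm(32, ctx_dim)          # first post-processing 332
        self.conv2 = nn.Conv2d(160, 128, 3, padding=1)  # second conv layer 335
        self.post2 = ContextAttention(128, ctx_dim)     # second post-processing 336
        self.head = nn.Conv2d(128, num_classes, 1)      # per-pixel prediction

    def forward(self, hi_patch, lo_patch):
        low_feat, high_feat = self.enc_high(hi_patch)
        ctx = self.enc_low(lo_patch)
        a = F.relu(self.post1(self.conv1(low_feat), ctx))
        # Simplified stand-in for the non-local block branch: upscale the
        # high-level feature map to the same spatial size.
        b = F.interpolate(high_feat, size=a.shape[2:], mode='bilinear',
                          align_corners=False)
        x = torch.cat([a, b], dim=1)               # channel stacking (160 ch)
        x = F.relu(self.post2(self.conv2(x), ctx))
        return torch.softmax(self.head(x), dim=1)  # 3-class probability map
```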
- FIG. 6 is a flowchart illustrating an example of a method for providing a determination result for a living tissue slide according to an embodiment of the present invention.
- The method of providing a determination result for a biological tissue slide according to FIG. 6 may be performed by the determination result providing system 200, and the determination result providing system 200 may store the artificial neural network 300 previously learned by the artificial neural network learning system 100.
- The determination result providing system 200 may acquire a judgment target slide image, which is a slide image of a predetermined judgment target biological tissue slide (S200).
- The determination result providing system 200 may obtain a first judgment target high-resolution patch to an N-th judgment target high-resolution patch from the judgment target slide image (S220).
- In addition, the determination result providing system 200 may generate a low-resolution patch corresponding to each of the first judgment target high-resolution patch to the N-th judgment target high-resolution patch (S220 and S230).
- The process of obtaining the first to N-th judgment target high-resolution patches and the low-resolution patches corresponding to each of them from the judgment target slide image is very similar to the process described above with reference to FIGS. 4A and 4B, so a separate description will be omitted.
- The determination result providing system 200 may determine the lesion region included in the j-th judgment target high-resolution patch based on the prediction result output by the artificial neural network 300 that receives the j-th judgment target high-resolution patch and the j-th judgment target low-resolution patch (S240).
- FIG. 7 is a diagram illustrating a schematic configuration of the artificial neural network learning system 100 according to an embodiment of the present invention, and FIG. 8 is a diagram illustrating a schematic configuration of the judgment result providing system 200 according to an embodiment of the present invention.
- The artificial neural network learning system 100 and the determination result providing system 200 may mean logical configurations having the hardware resources and/or software necessary to implement the technical idea of the present invention, and do not necessarily mean a single physical component or a single device. That is, they may mean logical combinations of hardware and/or software provided to implement the technical idea of the present invention; if necessary, they may be implemented as a set of logical configurations installed in devices spaced apart from each other, each performing its own function. In addition, the artificial neural network learning system 100 and the determination result providing system 200 may refer to sets of components implemented separately for each function or role for implementing the technical idea of the present invention.
- Each component of the artificial neural network learning system 100 and the determination result providing system 200 may be located in different physical devices or may be located in the same physical device.
- Also, the combinations of software and/or hardware constituting each component of the artificial neural network learning system 100 and the determination result providing system 200 may be located in different physical devices, and the components located in different physical devices may be organically combined with each other to implement the respective modules.
- a module may mean a functional and structural combination of hardware for carrying out the technical idea of the present invention and software for driving the hardware.
- In addition, the module may mean a logical unit of predetermined code and the hardware resources for executing that code, and does not necessarily mean physically connected code or a single type of hardware, as an average expert in the technical field of the present invention can easily infer.
- the artificial neural network learning system 100 may include a storage module 110 , an acquisition module 120 , a generation module 130 , and a learning module 140 .
- Some of the above-described components may not necessarily correspond to components essential for the implementation of the present invention, and, depending on the embodiment, the artificial neural network learning system 100 may of course include more components than these.
- For example, the artificial neural network learning system 100 may further include a communication module (not shown) for communicating with an external device, and a control module (not shown) for controlling the components and resources of the artificial neural network learning system 100.
- The storage module 110 may store the artificial neural network 300 to be learned.
- In addition, the storage module 110 may further store data to be used for learning the artificial neural network 300 (e.g., biological tissue slide images in which lesion regions are annotated).
- the acquisition module 120 may acquire a slide image of a biological tissue slide.
- The generation module 130 may obtain a first high-resolution patch to an N-th high-resolution patch (where N is an integer of 2 or more) from the slide image, and may obtain an i-th low-resolution patch corresponding to the i-th high-resolution patch (where i is any integer with 1<=i<=N).
- the i-th high-resolution patch and the i-th low-resolution patch corresponding thereto have the same size, and the center point of the i-th high-resolution patch and the center point of the i-th low-resolution patch may point to the same position on the living tissue slide.
- the learning module 140 may learn the artificial neural network 300 by inputting the ith high-resolution patch and the ith low-resolution patch.
- the determination result providing system 200 may include a storage module 210 , an acquisition module 220 , a generation module 230 , and a determination module 240 .
- Some of the above-described components may not necessarily correspond to components essential for the implementation of the present invention, and, depending on the embodiment, the determination result providing system 200 may of course include more components than these.
- For example, the determination result providing system 200 may further include a communication module (not shown) for communicating with an external device, and a control module (not shown) for controlling the components and resources of the determination result providing system 200.
- the storage module 210 may store the previously learned artificial neural network 300 .
- The acquisition module 220 may acquire the judgment target slide image, which is a slide image of the judgment target biological tissue slide.
- Here, the j-th judgment target high-resolution patch and the j-th judgment target low-resolution patch corresponding thereto have the same size, and the center point of the j-th judgment target high-resolution patch and the center point of the j-th judgment target low-resolution patch may point to the same position on the judgment target biological tissue slide.
- the artificial neural network learning system 100 and the determination result providing system 200 may include a processor and a memory for storing a program executed by the processor.
- the processor may include a single-core CPU or a multi-core CPU.
- the memory may include high-speed random access memory and may include non-volatile memory such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory by the processor and other components may be controlled by a memory controller.
- The method according to an embodiment of the present invention may be implemented in the form of computer-readable program instructions and stored in a computer-readable recording medium, and a control program and a target program according to an embodiment of the present invention may also be stored in a computer-readable recording medium.
- the computer-readable recording medium includes all types of recording devices in which data readable by a computer system is stored.
- the program instructions recorded on the recording medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the software field.
- Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory.
- The computer-readable recording medium may also be distributed over computer systems connected through a network so that the computer-readable code is stored and executed in a distributed manner.
- Examples of the program instructions include not only machine code such as that produced by a compiler, but also high-level language code that can be executed by a device that processes information electronically using an interpreter or the like, for example, a computer.
- the hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
- the present invention may be used in a learning method for learning an artificial neural network for determining a breast cancer lesion region, and a computing system for performing the same.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Theoretical Computer Science (AREA)
- Public Health (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Pathology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radiology & Medical Imaging (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Multimedia (AREA)
- Oncology (AREA)
- Heart & Thoracic Surgery (AREA)
- Surgery (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
Description
Claims (14)
- An artificial neural network learning method comprising: acquiring, by an artificial neural network learning system, a slide image of a biological tissue slide; acquiring, by the artificial neural network learning system, a first high-resolution patch to an N-th high-resolution patch (where N is an integer of 2 or more) from the slide image; acquiring, by the artificial neural network learning system, an i-th low-resolution patch corresponding to an i-th high-resolution patch (where i is an arbitrary integer satisfying 1<=i<=N), wherein the i-th high-resolution patch and the corresponding i-th low-resolution patch have the same size, and the center point of the i-th high-resolution patch and the center point of the i-th low-resolution patch point to the same location on the biological tissue slide; and learning, by the artificial neural network learning system, the artificial neural network by inputting the i-th high-resolution patch and the i-th low-resolution patch, wherein the artificial neural network includes a first encoding convolutional neural network, a second encoding convolutional neural network, and a decoding convolutional neural network, the first encoding convolutional neural network is a convolutional neural network that receives the i-th high-resolution patch and outputs a first feature map corresponding to the i-th high-resolution patch, the second encoding convolutional neural network is a convolutional neural network that receives the i-th low-resolution patch and outputs context information corresponding to the i-th low-resolution patch, and the decoding neural network is a convolutional neural network that reflects the context information corresponding to the i-th low-resolution patch in the first feature map corresponding to the i-th high-resolution patch and generates predetermined prediction information for judging a lesion region in the i-th high-resolution patch based on a result value in which the context information is reflected.
- The artificial neural network learning method of claim 1, wherein the biological tissue slide is a breast cancer resection tissue slide, and the slide image is annotated with an invasive carcinoma region, which is a lesion region due to invasive carcinoma, and a ductal carcinoma in situ region, which is a lesion region due to ductal carcinoma in situ.
- The artificial neural network learning method of claim 1, wherein the decoding neural network includes: a first convolution layer that performs a convolution operation on the first feature map; and a first post-processing layer that determines normalization parameters using the context information output from the second encoding convolutional neural network, and reflects the context information in the first feature map by performing adaptive normalization on the result value output from the first convolution layer with the determined normalization parameters.
- The artificial neural network learning method of claim 1, wherein the decoding neural network includes: a first convolution layer that performs a convolution operation on the first feature map; and a first post-processing layer that reflects the context information in the first feature map by performing an attention mechanism based on the context information output from the second encoding convolutional neural network on the result value output from the first convolution layer.
- The artificial neural network learning method of claim 3 or 4, wherein the first encoding convolutional neural network further outputs a second feature map corresponding to the i-th high-resolution patch, the second feature map being a feature map of a lower level than the first feature map, and the decoding convolutional neural network further includes: a non-local block layer that performs a non-local block operation on the second feature map; a concatenation layer that concatenates the result passed from the first post-processing layer and the result passed from the non-local block layer; a second convolution layer that performs a convolution operation on the result passed from the concatenation layer; and a second post-processing layer that reflects the context information corresponding to the i-th low-resolution patch in the result output from the second convolution layer, and the decoding convolutional neural network outputs the prediction information based on the result output from the second post-processing layer.
- A method of providing a judgment result for a predetermined judgment target biological tissue slide through an artificial neural network trained by the artificial neural network learning method of claim 1, the method comprising: acquiring, by a computing system, a judgment target slide image of the judgment target biological tissue slide; generating, by the computing system, a first judgment target high-resolution patch to an N-th judgment target high-resolution patch from the judgment target slide image; generating, by the computing system, a j-th judgment target low-resolution patch corresponding to a j-th judgment target high-resolution patch (where j is an arbitrary integer satisfying 1<=j<=N), wherein the j-th judgment target high-resolution patch and the corresponding j-th judgment target low-resolution patch have the same size, and the center point of the j-th judgment target high-resolution patch and the center point of the j-th judgment target low-resolution patch point to the same location on the judgment target biological tissue slide; and judging, by the computing system, a lesion region included in the j-th judgment target high-resolution patch based on a prediction result output by the artificial neural network that receives the j-th judgment target high-resolution patch and the j-th judgment target low-resolution patch as input.
- A computer program installed in a data processing device and recorded on a medium for performing the method of claim 1 or claim 6.
- A computer-readable recording medium on which a computer program for performing the method of claim 1 or claim 6 is recorded.
- An artificial neural network learning system comprising: a processor; and a memory storing a computer program, wherein the computer program, when executed by the processor, causes the artificial neural network learning system to perform an artificial neural network learning method, the artificial neural network learning method comprising: acquiring, by the artificial neural network learning system, a slide image of a biological tissue slide; acquiring, by the artificial neural network learning system, a first high-resolution patch to an N-th high-resolution patch (where N is an integer of 2 or more) from the slide image; acquiring, by the artificial neural network learning system, an i-th low-resolution patch corresponding to an i-th high-resolution patch (where i is an arbitrary integer satisfying 1<=i<=N), wherein the i-th high-resolution patch and the corresponding i-th low-resolution patch have the same size, and the center point of the i-th high-resolution patch and the center point of the i-th low-resolution patch point to the same location on the biological tissue slide; and learning, by the artificial neural network learning system, the artificial neural network by inputting the i-th high-resolution patch and the i-th low-resolution patch, wherein the artificial neural network includes a first encoding convolutional neural network, a second encoding convolutional neural network, and a decoding convolutional neural network, the first encoding convolutional neural network is a convolutional neural network that receives the i-th high-resolution patch and outputs a first feature map corresponding to the i-th high-resolution patch, the second encoding convolutional neural network is a convolutional neural network that receives the i-th low-resolution patch and outputs context information corresponding to the i-th low-resolution patch, and the decoding neural network is a convolutional neural network that reflects the context information corresponding to the i-th low-resolution patch in the first feature map corresponding to the i-th high-resolution patch and generates predetermined prediction information for judging a lesion region in the i-th high-resolution patch based on a result value in which the context information is reflected.
- The artificial neural network learning system of claim 9, wherein the biological tissue slide is a breast cancer resection tissue slide, and the slide image is annotated with an invasive carcinoma region, which is a lesion region due to invasive carcinoma, and a ductal carcinoma in situ region, which is a lesion region due to ductal carcinoma in situ.
- The artificial neural network learning system of claim 9, wherein the decoding neural network includes: a first convolution layer that performs a convolution operation on the first feature map; and a first post-processing layer that determines normalization parameters using the context information output from the second encoding convolutional neural network, and reflects the context information in the first feature map by performing adaptive normalization on the result value output from the first convolution layer with the determined normalization parameters.
- The artificial neural network learning system of claim 9, wherein the decoding neural network includes: a first convolution layer that performs a convolution operation on the first feature map; and a first post-processing layer that reflects the context information in the first feature map by performing an attention mechanism based on the context information output from the second encoding convolutional neural network on the result value output from the first convolution layer.
- The artificial neural network learning system of claim 11 or 12, wherein the first encoding convolutional neural network further outputs a second feature map corresponding to the i-th high-resolution patch, the second feature map being a feature map of a lower level than the first feature map, and the decoding convolutional neural network further includes: a non-local block layer that performs a non-local block operation on the second feature map; a concatenation layer that concatenates the result passed from the first post-processing layer and the result passed from the non-local block layer; a second convolution layer that performs a convolution operation on the result passed from the concatenation layer; and a second post-processing layer that reflects the context information corresponding to the i-th low-resolution patch in the result output from the second convolution layer, and the decoding convolutional neural network outputs the prediction information based on the result output from the second post-processing layer.
- A judgment result providing system for a predetermined judgment target biological tissue slide, the system comprising: a processor; and a memory storing a computer program, wherein the computer program, when executed by the processor, causes the judgment result providing system to perform a method of providing a judgment result for the judgment target biological tissue slide through an artificial neural network trained by the artificial neural network learning method of claim 1, the method comprising: acquiring, by the judgment result providing system, a judgment target slide image of the judgment target biological tissue slide; generating, by the judgment result providing system, a first judgment target high-resolution patch to an N-th judgment target high-resolution patch from the judgment target slide image; generating, by the judgment result providing system, a j-th judgment target low-resolution patch corresponding to a j-th judgment target high-resolution patch (where j is an arbitrary integer satisfying 1<=j<=N), wherein the j-th judgment target high-resolution patch and the corresponding j-th judgment target low-resolution patch have the same size, and the center point of the j-th judgment target high-resolution patch and the center point of the j-th judgment target low-resolution patch point to the same location on the judgment target biological tissue slide; and judging, by the judgment result providing system, a lesion region included in the j-th judgment target high-resolution patch based on a prediction result output by the artificial neural network that receives the j-th judgment target high-resolution patch and the j-th judgment target low-resolution patch as input.
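To show how the claimed topology fits together, the following is a minimal, hypothetical PyTorch sketch of claims 1, 3, and 5 above: a first encoding CNN that yields a high-level first feature map and a lower-level second feature map from the high-resolution patch, a second encoding CNN that compresses the low-resolution patch into context information, and a decoding CNN whose post-processing layers reflect that context via adaptive normalization, with a non-local block (in the spirit of the cited Non-Local Neural Networks paper) applied to the second feature map. All channel counts, layer depths, the instance-normalization choice, and the three-class head (e.g. background / ductal carcinoma in situ / invasive carcinoma) are assumptions; the claims fix none of them.

```python
# Minimal, hypothetical rendering of the architecture of claims 1, 3 and 5.
# Channel counts, depths, normalization, and the 3-class head are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Simplified non-local block: global dot-product attention plus residual."""
    def __init__(self, c):
        super().__init__()
        self.theta, self.phi, self.g = (nn.Conv2d(c, c // 2, 1) for _ in range(3))
        self.out = nn.Conv2d(c // 2, c, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)          # B x HW x C/2
        k = self.phi(x).flatten(2)                            # B x C/2 x HW
        v = self.g(x).flatten(2).transpose(1, 2)              # B x HW x C/2
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)

class AdaptiveNorm(nn.Module):
    """Post-processing layer of claim 3: the context vector determines the
    normalization parameters (gamma, beta) applied to the feature map."""
    def __init__(self, c, ctx_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(c, affine=False)
        self.to_gamma = nn.Linear(ctx_dim, c)
        self.to_beta = nn.Linear(ctx_dim, c)

    def forward(self, x, ctx):
        g = self.to_gamma(ctx).unsqueeze(-1).unsqueeze(-1)    # B x C x 1 x 1
        b = self.to_beta(ctx).unsqueeze(-1).unsqueeze(-1)
        return self.norm(x) * (1 + g) + b

class TwoBranchLesionNet(nn.Module):
    def __init__(self, n_classes=3, ctx_dim=128):
        super().__init__()
        # first encoding CNN: high-resolution patch -> second (low-level)
        # and first (high-level) feature maps
        self.enc1_low = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.enc1_high = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        # second encoding CNN: low-resolution patch -> context information
        self.enc2 = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, ctx_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # decoding CNN of claim 5
        self.conv1 = nn.Conv2d(128, 128, 3, padding=1)        # first convolution layer
        self.post1 = AdaptiveNorm(128, ctx_dim)               # first post-processing layer
        self.nonlocal_block = NonLocalBlock(64)               # non-local block layer
        self.conv2 = nn.Conv2d(128 + 64, 128, 3, padding=1)   # second convolution layer
        self.post2 = AdaptiveNorm(128, ctx_dim)               # second post-processing layer
        self.head = nn.Conv2d(128, n_classes, 1)

    def forward(self, hi_patch, lo_patch):
        f2 = self.enc1_low(hi_patch)                # second feature map (lower level)
        f1 = self.enc1_high(f2)                     # first feature map
        ctx = self.enc2(lo_patch)                   # context information
        x = self.post1(self.conv1(f1), ctx)         # reflect context in first feature map
        x = F.interpolate(x, size=f2.shape[-2:])    # match the second feature map's size
        x = torch.cat([x, self.nonlocal_block(f2)], dim=1)    # concatenation layer
        x = self.post2(self.conv2(x), ctx)          # reflect context again
        return self.head(x)                         # prediction info for the lesion region

# usage: one paired patch through the network
net = TwoBranchLesionNet()
pred = net(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))
print(pred.shape)  # torch.Size([1, 3, 64, 64]): per-pixel class scores
```

Note that in this sketch the class scores come out at reduced spatial resolution; the specification leaves the output resolution of the prediction information to the implementation.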
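Claim 4 swaps the adaptive-normalization post-processing of claim 3 for an attention mechanism driven by the same context information. One hedged way to realize this is channel-wise gating in the style of squeeze-and-excitation; the claim only requires "an attention mechanism based on the context information", so the gating design below is an assumption.

```python
# Hypothetical realization of claim 4's post-processing layer: an attention
# mechanism driven by the context information. The squeeze-and-excitation-style
# channel gating is an assumed design choice, not mandated by the claim.
import torch.nn as nn

class ContextAttention(nn.Module):
    def __init__(self, c, ctx_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(ctx_dim, c), nn.Sigmoid())

    def forward(self, x, ctx):
        # per-channel attention weights derived from the low-resolution context
        w = self.gate(ctx).unsqueeze(-1).unsqueeze(-1)   # B x C x 1 x 1
        return x * w                                     # reweight the feature map
```

In the sketch above, `ContextAttention(128, ctx_dim)` could stand in for `AdaptiveNorm(128, ctx_dim)` as `post1` and `post2` with no other changes to the decoder.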
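For the judgment-result method of claims 6 and 14, the patch pairing and the trained network combine into a tiling loop over the judgment target slide image. The sketch below reuses the hypothetical `paired_patches` helper and `TwoBranchLesionNet` from the earlier sketches; the grid stride, margin handling, and per-pixel argmax labeling are assumptions, since the claims only require judging each patch's lesion region from the network's prediction.

```python
# Hypothetical inference loop for claims 6 and 14, reusing paired_patches and
# TwoBranchLesionNet from the sketches above. Grid stride, margins, and the
# per-pixel argmax labeling are assumptions. The small patch size keeps the
# simplified non-local block cheap.
import numpy as np
import torch

@torch.no_grad()
def judge_slide(slide: np.ndarray, net: torch.nn.Module, size: int = 128, k: int = 4):
    """Tile the judgment target slide image and collect a lesion label map per patch."""
    def to_tensor(a):
        a = np.ascontiguousarray(a)
        return torch.from_numpy(a).permute(2, 0, 1).float().unsqueeze(0) / 255.0

    H, W = slide.shape[:2]
    margin = (k * size) // 2   # keep the wide low-resolution field inside the slide
    results = []
    for cy in range(margin, H - margin + 1, size):
        for cx in range(margin, W - margin + 1, size):
            hi, lo = paired_patches(slide, cy, cx, size, k)
            pred = net(to_tensor(hi), to_tensor(lo))        # class scores for this patch
            results.append(((cy, cx), pred.argmax(dim=1)))  # per-pixel lesion labels
    return results
```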
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023564222A JP2024519459A (ja) | 2021-04-28 | 2022-04-20 | Learning method of artificial neural network for discriminating breast cancer lesion region, and computer system for performing the same |
EP22796033.3A EP4318497A1 (en) | 2021-04-28 | 2022-04-20 | Training method for training artificial neural network for determining breast cancer lesion area, and computing system performing same |
CN202280031793.6A CN117256033A (zh) | 2021-04-28 | 2022-04-20 | Learning method for learning artificial neural network for determining breast cancer lesion region, and computing system for executing same |
US18/288,380 US20240221373A1 (en) | 2021-04-28 | 2022-04-20 | Training method for training artificial neural network for determining breast cancer lesion area, and computing system performing same |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2021-0055207 | 2021-04-28 | ||
KR1020210055207A KR102446638B1 (ko) | 2021-04-28 | 2022-09-26 | Training method for training artificial neural network for determining breast cancer lesion area, and computing system performing same |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022231200A1 (ko) | 2022-11-03 |
Family
ID=83452651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/005634 WO2022231200A1 (ko) | Training method for training artificial neural network for determining breast cancer lesion area, and computing system performing same |
Country Status (6)
Country | Link |
---|---|
US (1) | US20240221373A1 (ko) |
EP (1) | EP4318497A1 (ko) |
JP (1) | JP2024519459A (ko) |
KR (1) | KR102446638B1 (ko) |
CN (1) | CN117256033A (ko) |
WO (1) | WO2022231200A1 (ko) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20230095805A | 2021-12-22 | Efficient annotation inspection method and system based on clustering algorithm |
CN118366002B * | 2024-06-19 | 2024-09-06 | 江汉大学 | Acupoint reflex zone determination method and apparatus, acupuncture robot, and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190060606A (ko) * | 2017-11-24 | 2019-06-03 | 삼성전자주식회사 | Medical image diagnosis apparatus and method |
KR102097741B1 (ko) * | 2019-07-25 | 2020-04-06 | 주식회사 딥노이드 | Medical image data refinement system for artificial intelligence training, and driving method thereof |
KR102205430B1 (ko) * | 2019-08-05 | 2021-01-20 | 에스케이텔레콤 주식회사 | Learning method using artificial neural network |
2021
- 2021-04-28 KR KR1020210055207A patent/KR102446638B1/ko active IP Right Grant
2022
- 2022-04-20 EP EP22796033.3A patent/EP4318497A1/en active Pending
- 2022-04-20 WO PCT/KR2022/005634 patent/WO2022231200A1/ko active Application Filing
- 2022-04-20 US US18/288,380 patent/US20240221373A1/en active Pending
- 2022-04-20 JP JP2023564222A patent/JP2024519459A/ja active Pending
- 2022-04-20 CN CN202280031793.6A patent/CN117256033A/zh active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6710373B2 (ja) * | 2017-03-30 | 2020-06-17 | 国立研究開発法人産業技術総合研究所 | Ultrasonic image diagnosis support method and system |
KR20200017261A (ko) * | 2018-08-08 | 2020-02-18 | 주식회사 딥바이오 | Bio-image diagnosis system, bio-image diagnosis method, and terminal for performing the same |
WO2020198380A1 (en) * | 2019-03-26 | 2020-10-01 | Tempus Labs, Inc. | Determining biomarkers from histopathology slide images |
WO2020243556A1 (en) * | 2019-05-29 | 2020-12-03 | Leica Biosystems Imaging, Inc. | Neural network based identification of areas of interest in digital pathology images |
WO2021061947A1 (en) * | 2019-09-24 | 2021-04-01 | Carnegie Mellon University | System and method for analyzing medical images based on spatio-temporal data |
Non-Patent Citations (1)
Title |
---|
KAIMING HE ET AL., NON-LOCAL NEURAL NETWORKS, Retrieved from the Internet <URL:https://arxiv.org/pdf/1711.07971.pdf> |
Also Published As
Publication number | Publication date |
---|---|
EP4318497A1 (en) | 2024-02-07 |
CN117256033A (zh) | 2023-12-19 |
KR102446638B1 (ko) | 2022-09-26 |
US20240221373A1 (en) | 2024-07-04 |
JP2024519459A (ja) | 2024-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022231200A1 (ko) | Training method for training artificial neural network for determining breast cancer lesion area, and computing system performing same | |
CN111402228B (zh) | Image detection method and apparatus, and computer-readable storage medium | |
WO2021049729A1 (ko) | Method for predicting likelihood of developing lung cancer by using artificial intelligence model, and analysis device | |
WO2022149894A1 (ko) | Method for training artificial neural network providing determination result of pathological specimen, and computing system performing same | |
WO2021093451A1 (zh) | Pathological section image processing method, apparatus, system, and storage medium | |
WO2020032559A2 (ko) | Disease diagnosis system and method using neural network | |
WO2019235828A1 (ko) | Two-phase disease diagnosis system and method thereof | |
WO2021137454A1 (ko) | Artificial intelligence-based method and system for analyzing user medical information | |
WO2023234730A1 (ko) | Patch-level severity determination method, slide-level severity determination method, and computing system for performing same | |
WO2021010671A2 (ko) | Disease diagnosis system and method for performing segmentation by using neural network and non-local block | |
WO2022158843A1 (ko) | Tissue specimen image refinement method, and computing system performing same | |
WO2019189972A1 (ko) | Method for analyzing iris image with artificial intelligence to diagnose dementia | |
CN113792807B (zh) | Skin disease classification model training method, system, medium, and electronic device | |
JP2024530388A (ja) | Digital synthesis of histological stains using multiplexed immunofluorescence imaging | |
WO2021125671A1 (ko) | Method, program, and computer device for anonymizing medical image data | |
WO2020246676A1 (ko) | Automatic cervical cancer diagnosis system | |
CN114092427B (zh) | Crohn's disease and intestinal tuberculosis classification method based on multi-sequence MRI images | |
WO2022019355A1 (ko) | Disease diagnosis method using neural network trained by using multi-phase biometric image, and disease diagnosis system performing same | |
WO2022103122A1 (ko) | Deep learning-based method for converting pathology slide image to high resolution, and computing system performing same | |
KR20240052193A (ko) | Method and apparatus for analyzing digital pathology images based on multi-magnification vision transformer | |
CN114974522A (zh) | Medical image processing method and apparatus, electronic device, and storage medium | |
CN113706449B (zh) | Pathological image-based cell analysis method, apparatus, device, and storage medium | |
WO2022197045A1 (ko) | Prognosis prediction method using result of disease diagnosis through neural network, and system therefor | |
WO2022181879A1 (ko) | Deep learning model-based tumor-stroma ratio prediction method and analysis device | |
WO2023113414A1 (ko) | Method for training artificial neural network providing determination result of pathological specimen, and computing system performing same | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22796033; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2023564222; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 2022796033; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 18288380; Country of ref document: US |
| WWE | Wipo information: entry into national phase | Ref document number: 202280031793.6; Country of ref document: CN |
| ENP | Entry into the national phase | Ref document number: 2022796033; Country of ref document: EP; Effective date: 20231023 |
| NENP | Non-entry into the national phase | Ref country code: DE |