WO2018221625A1 - System and method for diagnostic support using pathological image of skin tissue


Info

Publication number
WO2018221625A1
Authority
WO
WIPO (PCT)
Prior art keywords
distribution
cells
image representing
image
skin tissue
Application number
PCT/JP2018/020858
Other languages
French (fr)
Japanese (ja)
Inventor
悠自 太田
光介 志藤
要 小島
正朗 長﨑
研志 山﨑
節也 相場
Original Assignee
国立大学法人東北大学 (Tohoku University)
Application filed by 国立大学法人東北大学 (Tohoku University)
Priority to JP2019521286A (patent JP6757054B2)
Publication of WO2018221625A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00: Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/48: Biological material, e.g. blood, urine; Haemocytometers
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • the present invention relates to a technique for supporting diagnosis using a pathological image of a skin tissue.
  • One paper describes a two-stage convolutional neural network that detects mitotic cells in pathological images of breast cancer tissue. There, however, it suffices to detect a specific type of cell; making a diagnosis that takes the distribution of that cell type into account is not addressed.
  • An object of the present invention is, according to one aspect, to provide a novel technique for providing diagnosis support for a skin disease.
  • The first diagnosis support system of the present invention includes (A) a learned machine-learning model that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, and an image representing the distribution of atypical cells in the skin tissue, and (B) an output data generation unit that generates output data related to the skin disease based on the output obtained from the learned model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in that tissue, and a third image representing the distribution of atypical cells in that tissue.
  • The second diagnosis support system of the present invention includes (A) a learned machine-learning model that receives as input an image representing the distribution of dermal cells in the skin tissue included in a pathological image, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of lumen-forming cells in the skin tissue, and produces an output representing the differentiation of a plurality of non-neoplastic skin diseases, and (B) an output data generation unit that generates output data including the output obtained from the learned model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in that tissue, a third image representing the distribution of immune cells in that tissue, and a fourth image representing the distribution of lumen-forming cells in that tissue.
  • FIG. 1A is a diagram showing an example of a pathological image of a skin tissue containing malignant melanoma.
  • FIG. 1B is a diagram illustrating an example of a pathological image of a skin tissue including pigmented nevus.
  • FIG. 2 is a diagram illustrating an example of a convolutional neural network.
  • FIG. 3A is a diagram showing an example of an image representing the distribution of dermal cells in skin tissue known to be malignant melanoma.
  • FIG. 3B is a diagram showing an example of an image representing the distribution of epidermal cells in skin tissue known to be malignant melanoma.
  • FIG. 3C is a diagram showing an example of an image representing the distribution of atypical cells in skin tissue known to be malignant melanoma.
  • FIG. 4A is a diagram illustrating an example of an image representing a distribution of dermal cells in a skin tissue known to be pigmented nevus.
  • FIG. 4B is a diagram showing an example of an image representing the distribution of epidermal cells in skin tissue known to be pigmented nevus.
  • FIG. 4C is a diagram illustrating an example of an image representing the distribution of atypical cells in skin tissue known to be pigmented nevus.
  • FIG. 5 is a diagram illustrating a functional block configuration of the information processing apparatus according to the first embodiment.
  • FIG. 6 is a diagram illustrating a processing flow executed by the information processing apparatus according to the first embodiment.
  • FIG. 7 is a diagram for explaining divided images.
  • FIG. 8A is a diagram for explaining an initial image of the distribution of dermal cells.
  • FIG. 8B is a diagram for explaining an initial image of the distribution of epidermal cells.
  • FIG. 8C is a diagram for explaining an initial image of atypical cell distribution.
  • FIG. 9 is a diagram showing an example of an image representing the distribution of immune cells in skin tissue known to be malignant melanoma.
  • FIG. 10 is a diagram showing an example of an image representing the distribution of immune cells in skin tissue known as pigmented nevus.
  • FIG. 11 is a diagram showing an example of an image representing a distribution of cells that form a lumen in skin tissue known as malignant melanoma.
  • FIG. 12 is a diagram illustrating an example of an image representing a distribution of cells that form a lumen in skin tissue known as pigmented nevus.
  • FIG. 13 is a diagram illustrating a functional block configuration of the information processing apparatus according to the second embodiment.
  • FIG. 14 is a diagram illustrating a processing flow executed by the information processing apparatus according to the second embodiment.
  • FIG. 15A is a diagram illustrating the relationship between the disease names to be differentiated, the cells from which the tumor-related atypical cells are derived, and the cell types used for processing.
  • FIG. 15B is a diagram illustrating the relationship between the disease names to be differentiated, the cells from which the tumor-related atypical cells are derived, and the cell types used for processing.
  • FIG. 15C is a diagram illustrating the relationship between the disease names to be differentiated, the cells from which the tumor-related atypical cells are derived, and the cell types used for processing.
  • FIG. 16 is a diagram showing the relationship between the names of non-neoplastic diseases and the types of cells used for processing.
  • Embodiment 1: The purpose of this embodiment is to distinguish malignant melanoma from pigmented nevus.
  • Malignant melanoma is a cancer of pigment cells present in the epidermis.
  • Pigmented nevus is a benign tumor of pigment cells, and the two are often difficult to distinguish.
  • FIG. 1A shows an example of a pathological image of skin tissue containing malignant melanoma (reproduced here as a grayscale image).
  • FIG. 1B shows an example of a pathological image of skin tissue containing pigmented nevus (reproduced here as a grayscale image).
  • a pathological image is a digital image obtained by scanning a preparation of a pathological tissue.
  • the distribution of epidermal cells, the distribution of dermal cells, and the distribution of atypical cells in the skin tissue are specified in such a pathological image.
  • Here, the atypical cells are pigment cells that are either malignant melanoma or the benign tumor pigmented nevus.
  • An atypical cell means a cell that has become tumorous or is otherwise not normal.
  • In this embodiment, an image representing the distribution of epidermal cells (an image representing a distribution is also referred to as a heat map), an image representing the distribution of dermal cells, and an image representing the distribution of atypical cells are used to distinguish among malignant melanoma, the benign tumor pigmented nevus, and others. Alternatively, the system may be made to distinguish only malignant melanoma from everything else.
  • The intent is to estimate the malignancy of the pathological tissue from the distribution of atypical cells relative to the distributions of epidermal cells and dermal cells, that is, from where within the epidermis or dermis the atypical cells are present and in what pattern. In other words, the system evaluates how a population of atypical cells invades the dermal or epidermal cell populations that express the normal structure of the skin. Such prediction and evaluation are realized by training a convolutional neural network (CNN).
  • The convolutional neural network has the structure shown in FIG. 2. It resembles the well-known convolutional neural network AlexNet: between the input layer and the output layer there are multiple pairs (in the figure, two) of a convolutional layer and a subsampling layer (pooling layer), followed by two fully connected layers, as sketched below.
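  • As an illustration only, a minimal PyTorch sketch of such a network follows. The patent does not give channel counts, kernel sizes, or layer widths, so all numeric values here are assumptions chosen to match the described shape (two convolution/pooling pairs, then two fully connected layers; three input distribution images; three output likelihoods).

```python
import torch.nn as nn

class SkinCNN(nn.Module):
    """AlexNet-like sketch: two convolution/subsampling (pooling) pairs
    between input and output, followed by two fully connected layers.
    Input: the three distribution images stacked as channels (assumed
    200x200); output: likelihoods for malignant melanoma, pigmented
    nevus, and other. All sizes are illustrative assumptions."""
    def __init__(self, in_channels=3, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, padding=2),  # convolutional layer 1
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # subsampling layer 1
            nn.Conv2d(32, 64, kernel_size=5, padding=2),           # convolutional layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # subsampling layer 2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 50 * 50, 256),  # fully connected layer 1 (200 -> 100 -> 50 after pooling)
            nn.ReLU(),
            nn.Linear(256, num_classes),   # fully connected layer 2
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```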
  • an image representing the distribution of epidermal cells, an image representing the distribution of dermal cells, and an image representing the distribution of atypical cells are input to the input layer.
  • From the output layer, for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of other cases are output.
  • The convolutional layers have parameters for their filters and activation functions, the subsampling layers have parameters for subsampling, and the fully connected layers have parameters for their activation functions; appropriate values are set for all of these by learning.
  • A known algorithm is used for the learning; a sketch of a standard training loop follows.
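  • For concreteness, a standard supervised training loop over such a model might look like the following sketch; cross-entropy loss and the Adam optimizer are our assumptions, since the patent only says a known learning algorithm is used.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, images, labels, epochs=10, lr=1e-3):
    """images: float tensor (N, 3, 200, 200) of stacked distribution images;
    labels: long tensor (N,) with 0 = malignant melanoma, 1 = pigmented
    nevus, 2 = other. Training pushes the melanoma likelihood toward 1
    for melanoma cases, as described in the text."""
    loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```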
  • FIG. 3A shows an image representing the distribution of dermal cells in skin tissue known to be malignant melanoma.
  • FIG. 3B shows an image representing the distribution of epidermal cells in skin tissue known to be malignant melanoma.
  • FIG. 3C shows an image representing the distribution of atypical cells in skin tissue known to be malignant melanoma. Learning is performed so that when such images are input to the input layer, the output layer produces a likelihood of 1 for malignant melanoma.
  • FIG. 4A shows an image representing the distribution of dermal cells in the skin tissue known to be pigmented nevus.
  • FIG. 4B shows an image representing the distribution of epidermal cells in skin tissue known to be pigmented nevus.
  • FIG. 4C shows an image representing the distribution of atypical cells in skin tissue known to be pigmented nevus.
  • A convolutional neural network may also be used in the first-stage processing. In that case, a partial image of the pathological image containing skin tissue (a partial image roughly the size of one cell, or a partial image containing one extracted cell) is input to a trained convolutional neural network that outputs the likelihoods of dermal cells, epidermal cells, atypical cells, and others (glass regions of the slide, etc.).
  • the distribution of epidermal cells, the distribution of dermal cells, and the distribution of atypical cells may be extracted from the pathological image using other techniques such as region extraction.
  • FIG. 5 shows a functional configuration example of the information processing apparatus according to the present embodiment.
  • The information processing apparatus includes an input unit 101, a pathological image storage unit 103, a first preprocessing unit 105, a first image storage unit 107, a first CNN 109, a first post-processing unit 111, a second image storage unit 113, a second preprocessing unit 115, a third image storage unit 117, a second CNN 119, a second post-processing unit 121, a result storage unit 123, a first learning processing unit 131, and a second learning processing unit 133.
  • the input unit 101 receives an input of a pathological image to be processed from a user and stores it in the pathological image storage unit 103.
  • the pathological image is a digital image obtained by scanning a preparation of a pathological tissue.
  • For example, it is an image scanned at 20x objective magnification, e.g., an RGB image of 1M × 1M pixels in height and width.
  • The first preprocessing unit 105 divides the pathological image stored in the pathological image storage unit 103 into pieces of about the size of one cell for calculation in the first CNN 109, and stores the resulting pieces (hereinafter referred to as divided images) in the first image storage unit 107. For example, square divided images of 32 × 32 pixels are generated, as in the sketch below.
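  • A minimal sketch of this division step, assuming a NumPy RGB array and non-overlapping patches (the patent also allows cell-centered patches, which this sketch omits):

```python
import numpy as np

def split_into_patches(image: np.ndarray, n: int = 32):
    """Divide an RGB pathological image (H x W x 3) into non-overlapping
    n x n divided images, returning each patch with its grid position."""
    h, w = image.shape[:2]
    patches = []
    for i in range(0, h - h % n, n):
        for j in range(0, w - w % n, n):
            patches.append(((i // n, j // n), image[i:i + n, j:j + n]))
    return patches
```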
  • The first CNN 109 is a convolutional neural network trained to determine whether each divided image stored in the first image storage unit 107 corresponds to a dermal cell, an epidermal cell, an atypical cell, or another cell. Accordingly, the first CNN 109 outputs, for example, the respective likelihoods for each divided image being processed.
  • Based on the output from the first CNN 109, the first post-processing unit 111 generates an initial image representing the distribution of dermal cells, an initial image representing the distribution of epidermal cells, and an initial image representing the distribution of atypical cells, and stores them in the second image storage unit 113. For example, when a divided image is determined to be a dermal cell, all pixels in the region at that divided image's position are set to "1" in the initial image representing the distribution of dermal cells, and to "0" otherwise. These initial images are therefore monochrome images and, in this case, the same size as the pathological image; a sketch of this mask construction follows.
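  • A sketch of that mask construction, assuming each divided image has already been assigned its most likely class; function and variable names are illustrative, not from the patent.

```python
import numpy as np

CLASSES = ("dermal", "epidermal", "atypical")  # "other" gets no mask

def build_initial_images(patch_labels, image_shape, n=32):
    """patch_labels: iterable of ((row, col), label) per divided image.
    Returns one monochrome initial image per cell type: pixels inside a
    divided image classified as that type are 1, all others 0, at the
    same size as the pathological image."""
    masks = {c: np.zeros(image_shape[:2], dtype=np.uint8) for c in CLASSES}
    for (row, col), label in patch_labels:
        if label in masks:
            masks[label][row * n:(row + 1) * n, col * n:(col + 1) * n] = 1
    return masks
```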
  • The second preprocessing unit 115 generates, from the initial images stored in the second image storage unit 113, an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, and an image representing the distribution of atypical cells, and stores them in the third image storage unit 117. Specifically, each initial image is reduced to an image size that the second CNN 119 can process (for example, 200 × 200 pixels), and each pixel is converted to a predetermined number of gradations (256 gradations). The images representing the distributions of dermal cells, epidermal cells, and atypical cells are therefore grayscale images.
  • The second CNN 119 outputs, from the images stored in the third image storage unit 117, whether or not the tissue is malignant melanoma; for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of others are output. Alternatively, only the likelihood of malignant melanoma and the likelihood of others may be output.
  • the second post-processing unit 121 stores the discrimination result in the result storage unit 123 based on the output from the second CNN 119, and outputs the discrimination result to the display device or other output device.
  • the distribution of epidermal cells, the distribution of dermal cells, and the distribution of atypical cells are represented as separate images, so that the relationship between the distributions can be easily evaluated. This makes it possible to perform appropriate discrimination using a convolutional neural network.
  • the first learning processing unit 131 executes learning processing for the first CNN 109 using training data including a large number of sets of cell images and their types. Since the learning process method is the same as the conventional method, a detailed description is omitted.
  • The second learning processing unit 133 executes learning processing for the second CNN 119 using training data consisting of many sets of an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of atypical cells, and the corresponding discrimination result. Since the learning method is the same as the conventional method, a detailed description is omitted.
  • The input unit 101 receives an input of a pathological image to be processed from a user and stores it in the pathological image storage unit 103 (step S1). Although FIGS. 1A and 1B are reproduced in grayscale for presentation reasons, the input pathological image is a color image such as shown there.
  • The input pathological image may instead be stored in a storage unit of the information processing apparatus or of another apparatus connected via the network; in that case, the input unit 101 reads it from that storage unit and stores it in the pathological image storage unit 103.
  • The first preprocessing unit 105 performs the first preprocessing on the pathological image to be processed stored in the pathological image storage unit 103, and stores the processing result in the first image storage unit 107 (step S3).
  • the pathological image 1001 is divided into a large number of divided images 1002 of N ⁇ N pixels in the vertical and horizontal directions.
  • N is, for example, 32 or 64.
  • a divided image may be generated by performing processing for recognizing a cell nucleus to identify a cell and specifying an image region including the cell.
  • the first CNN 109 reads out each divided image stored in the first image storage unit 107, and executes a classification process on them (step S5).
  • the classification process is a process for determining whether the divided image to be processed corresponds to a dermal cell, an epidermal cell, an atypical cell, or another cell. For example, the likelihood for each is output.
  • the first post-processing unit 111 executes the first post-processing based on the output from the first CNN 109, and stores the processing result in the second image storage unit 113 (step S7).
  • In the first post-processing, as schematically shown in FIG. 8A, in the initial image representing the distribution of dermal cells, the pixel values of all pixels in the region 1002a at the position of a divided image determined to be a dermal cell are set to "1"; for the other pixels, the pixel value is set to "0".
  • Similarly, as schematically shown in FIG. 8B, in the initial image representing the distribution of epidermal cells, the pixel values of all pixels in the region 1002b at the position of a divided image determined to be an epidermal cell are set to "1"; for the other pixels, the pixel value is set to "0".
  • Further, as schematically shown in FIG. 8C, in the initial image representing the distribution of atypical cells, the pixel values of all pixels in the region 1002c at the position of a divided image determined to be an atypical cell are set to "1"; for the other pixels, the pixel value is set to "0".
  • This first post-processing is merely an example; for instance, the initial images representing the distributions may be generated at a reduced size.
  • For example, one pixel may be associated with each divided image, and in the initial image representing the distribution, the pixel value of the single pixel at the position of the divided image may be set to "1". In that case, the initial image is 1/N the size of the pathological image both vertically and horizontally.
  • The second preprocessing unit 115 performs the second preprocessing on the images stored in the second image storage unit 113, and stores the processing results in the third image storage unit 117 (step S9).
  • image reduction processing is executed.
  • The pixel value of each pixel after reduction is calculated as the average of the pixel values of the corresponding neighboring pixels. Since each pixel value in an initial image representing a distribution is "0" or "1", this average is a real number from "0" to "1"; the real value is then mapped linearly to an integer from, for example, "0" to "255". In this way the initial image (a monochrome image) representing a distribution is converted into an image (a grayscale image) representing that distribution, as in the sketch below. A method other than averaging may be employed for the reduction, but since it is the same as the prior art it is not described in detail here.
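  • A sketch of this reduction by block averaging, assuming for simplicity that the initial image's side lengths are integer multiples of the target size:

```python
import numpy as np

def reduce_to_grayscale(mask: np.ndarray, out_size: int = 200):
    """Reduce a binary initial image to out_size x out_size by averaging
    each block of neighboring pixels; the averages lie in [0, 1] and are
    mapped linearly to integers in [0, 255] (256 gradations)."""
    h, w = mask.shape
    bh, bw = h // out_size, w // out_size
    blocks = mask[:bh * out_size, :bw * out_size].astype(np.float64)
    blocks = blocks.reshape(out_size, bh, out_size, bw)
    return np.round(blocks.mean(axis=(1, 3)) * 255).astype(np.uint8)
```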
  • As a result, an image representing the distribution of dermal cells as shown in FIGS. 3A and 4A, an image representing the distribution of epidermal cells as shown in FIGS. 3B and 4B, and an image representing the distribution of atypical cells as shown in FIGS. 3C and 4C are obtained.
  • The reduction is performed on the initial images because of considerations such as the computational load of the convolutional neural network; if such load is not a problem, the reduction may be omitted.
  • the second CNN 119 reads out the images stored in the third image storage unit 117, and executes a discrimination process on them (step S11).
  • A group of grayscale images as shown in FIGS. 3A to 3C or 4A to 4C is processed as if each image were the grayscale image of one channel of a color image, as in the sketch below.
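  • In other words, the three grayscale distribution images play the role of the three planes of a color image. A sketch of assembling them into one input tensor for the second CNN (names are illustrative assumptions):

```python
import numpy as np
import torch

def stack_as_channels(dermal, epidermal, atypical):
    """Stack the three grayscale distribution images as the channels of a
    single input, scaled to [0, 1]; resulting shape is (1, 3, H, W)."""
    planes = np.stack([dermal, epidermal, atypical]).astype("float32") / 255.0
    return torch.from_numpy(planes).unsqueeze(0)

# Usage sketch: likelihoods over (melanoma, nevus, other)
# probs = torch.softmax(second_cnn(stack_as_channels(d, e, a)), dim=1)
```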
  • The discrimination here is whether or not the tissue is malignant melanoma; for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of others are output.
  • the second post-processing unit 121 stores the discrimination result in the result storage unit 123 based on the output from the second CNN 119, and outputs it to the display device or other output device (step S13).
  • the output device may be another device connected to the network.
  • In a Modification A of Embodiment 1, auxiliary information beyond the images shown in FIGS. 3A to 3C and FIGS. 4A to 4C, such as the contour of the skin tissue and the distribution of its other portions, may be useful.
  • the pathological image may be reduced to a gray scale image so as to match the size of the image representing the distribution.
  • the second preprocessing unit 115 reads out the pathological image from the pathological image storage unit 103, reduces it to a predetermined size, and stores it in the third image storage unit 117.
  • In this modification, the second CNN 119 is additionally trained using a reduced grayscale image of the pathological image together with the image representing the distribution of dermal cells, the image representing the distribution of epidermal cells, and the image representing the distribution of atypical cells. When the second CNN 119 performs the discrimination, the reduced grayscale image of the pathological image is then input along with the three distribution images to determine whether or not the tissue is malignant melanoma; a channel-stacking sketch follows.
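  • A sketch of Modification A under the same assumptions as the earlier snippets: the reduced grayscale pathological image simply becomes a fourth input channel (so the second CNN would be built with in_channels=4).

```python
import numpy as np
import torch

def stack_with_gray(dermal, epidermal, atypical, reduced_gray):
    """Modification A: append the reduced grayscale pathological image as
    a fourth channel alongside the three distribution images."""
    planes = np.stack([dermal, epidermal, atypical, reduced_gray])
    return torch.from_numpy(planes.astype("float32") / 255.0).unsqueeze(0)
```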
  • In a Modification B of Embodiment 1, the first CNN 109 is configured to determine whether each divided image is a dermal cell, an epidermal cell, an immune cell, an atypical cell, or other, and is trained using images of dermal cells, epidermal cells, immune cells, and atypical cells. The first CNN 109 then outputs the likelihood for each of dermal cell, epidermal cell, immune cell, atypical cell, and other.
  • The first post-processing unit 111 generates an initial image representing the distribution of dermal cells, an initial image representing the distribution of epidermal cells, an initial image representing the distribution of immune cells, and an initial image representing the distribution of atypical cells. The initial image representing the distribution of immune cells is generated in the same manner as the initial images representing the other distributions.
  • the second preprocessing unit 115 generates an image representing the distribution of immune cells from the initial image representing the distribution of immune cells in the same manner as an image representing other distributions.
  • An image representing the distribution of immune cells in the skin tissue known as malignant melanoma is, for example, an image as shown in FIG.
  • an image representing the distribution of immune cells in the skin tissue known to be pigmented nevus is an image as shown in FIG. 10, for example.
  • In FIGS. 9 and 10, the white portions indicate where the cells of interest exist. In FIG. 9, there is overlap with the distribution of atypical cells.
  • The second CNN 119 is additionally trained using an image representing the distribution of immune cells together with the image representing the distribution of dermal cells, the image representing the distribution of epidermal cells, and the image representing the distribution of atypical cells. When the second CNN 119 performs the discrimination, the image representing the distribution of immune cells is then input along with the other three to discriminate among malignant melanoma, pigmented nevus, and others.
  • In a Modification C of Embodiment 1, cells forming a lumen are used instead of the immune cells of Modification B. The first CNN 109 is configured to determine whether each divided image is a dermal cell, an epidermal cell, a lumen-forming cell, an atypical cell, or other, and is trained using images of dermal cells, epidermal cells, lumen-forming cells, and atypical cells. The first CNN 109 then outputs the likelihood for each of dermal cell, epidermal cell, lumen-forming cell, atypical cell, and other.
  • Based on the output from the first CNN 109, the first post-processing unit 111 generates an initial image representing the distribution of dermal cells, an initial image representing the distribution of epidermal cells, an initial image representing the distribution of lumen-forming cells, and an initial image representing the distribution of atypical cells.
  • the initial image representing the distribution of the cells forming the lumen is generated in the same manner as the initial image representing the other distribution.
  • the second preprocessing unit 115 generates an image representing the distribution of the cells forming the lumen from the initial image representing the distribution of the cells forming the lumen, similarly to the image representing the other distribution.
  • An image representing the distribution of cells forming a lumen in skin tissue known as malignant melanoma is, for example, an image as shown in FIG.
  • the image showing the distribution of the cells forming the lumen in the skin tissue known as pigmented nevus is an image as shown in FIG. 12, for example.
  • In FIGS. 11 and 12, the white portions indicate where the cells of interest exist. In FIG. 11, many lumen-forming cells appear adjacent to regions where the distribution of atypical cells is dense.
  • The second CNN 119 is additionally trained using an image representing the distribution of lumen-forming cells together with the image representing the distribution of dermal cells, the image representing the distribution of epidermal cells, and the image representing the distribution of atypical cells. When the second CNN 119 performs the discrimination, the image representing the distribution of lumen-forming cells is then input along with the other three to identify whether the tissue is malignant melanoma, pigmented nevus, or other.
  • In a Modification D of Embodiment 1, Modifications A to C can be combined arbitrarily: in addition to the three basic distribution images, at least one of a reduced grayscale image of the pathological image, an image representing the distribution of immune cells, and an image representing the distribution of lumen-forming cells may be used, in any combination.
  • Embodiment 2: In addition to differentiating malignant melanoma as in Embodiment 1, the system may be modified to perform prognosis prediction (life expectancy), staging prediction indicating the degree of progression, prediction of the effect of a specific drug, and so on.
  • For prognosis prediction, a real number (for example, 5 years) may be predicted, or a life-expectancy range such as less than 1 year, 1 to 3 years, 3 to 5 years, or 5 years or more; the survival probability over time, as represented by a survival rate curve, may also be predicted.
  • Staging prediction estimates how advanced the malignant melanoma is among a predetermined number of stages and outputs that stage.
  • Drug-effect prediction predicts whether administering a specific drug (for example, the expensive nivolumab) to the malignant melanoma will have an effect.
  • For these predictions too, the distribution of epidermal cells, the distribution of dermal cells, and the distribution of atypical cells, which are the main information in Embodiment 1, are effective. The distribution of atypical cells relative to the distributions of dermal and epidermal cells, that is, in what part of the tissue and in what pattern the atypical cells are present, indicates how malignant the pathological tissue is, and if the malignancy is high, the prognosis is predicted to be poor.
  • Immune cells, especially neutrophils and lymphocytes, may be involved in the effects of certain drugs.
  • Accordingly, this embodiment introduces a convolutional neural network for predicting drug effects, a convolutional neural network for predicting prognosis, and a convolutional neural network for staging prediction.
  • the first CNN 109 is also modified so that an image used in the discrimination process and the prediction process related to the medical condition can be generated.
  • dermal cells, epidermal cells, immune cells, cells that form lumens, and the like are discriminated.
  • The type of immune cell (neutrophil, lymphocyte, macrophage) can be further discriminated.
  • FIG. 13 shows a configuration example of the information processing apparatus according to this embodiment.
  • Elements that are the same as in the first embodiment are given the same reference numbers.
  • The information processing apparatus includes an input unit 201, a pathological image storage unit 103, a first preprocessing unit 105, a first image storage unit 107, first CNNs 209a and 209b, a first post-processing unit 211, the second image storage unit 113, the second preprocessing unit 115, the third image storage unit 117, second CNNs 219a to 219d, a second post-processing unit 221, the result storage unit 123, a first learning processing unit 231, and a second learning processing unit 233.
  • the input unit 201 receives an input of a pathological image to be processed from a user and stores it in the pathological image storage unit 103.
  • the input unit 201 also accepts instructions for processing details. That is, it accepts designation of one or a plurality of processes among differentiation of malignant melanoma, prognosis prediction (life expectancy), staging classification prediction indicating the degree of progression, and prediction of medication effect for a specific drug.
  • the processing content of the first preprocessing unit 105 is the same as that of the first embodiment.
  • The first CNN 209a is a convolutional neural network trained to determine whether each divided image stored in the first image storage unit 107 corresponds to a dermal cell, an epidermal cell, an immune cell, a lumen-forming cell, an atypical cell, or another cell. Accordingly, the first CNN 209a outputs, for example, the respective likelihoods for each divided image being processed.
  • The first CNN 209b is a convolutional neural network trained to determine whether each divided image stored in the first image storage unit 107 corresponds to a neutrophil, a lymphocyte, a macrophage, or another cell. Accordingly, the first CNN 209b outputs, for example, the respective likelihoods for each divided image being processed.
  • The first CNN 209a and the first CNN 209b operate in response to the instruction from the input unit 201. For example, when the effect of a specific drug is to be predicted, both the first CNN 209a and the first CNN 209b are used; in other cases, only the first CNN 209a is used, as in the sketch below.
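  • A sketch of that dispatch; the task name and variable names are illustrative assumptions, not identifiers from the patent.

```python
def classify_patches(tasks, patches, first_cnn_a, first_cnn_b):
    """first_cnn_a (209a) always runs; first_cnn_b (209b, which subtypes
    immune cells into neutrophil/lymphocyte/macrophage) runs only when
    drug-effect prediction is among the requested tasks."""
    outputs = {"209a": [first_cnn_a(p) for p in patches]}
    if "drug_effect" in tasks:
        outputs["209b"] = [first_cnn_b(p) for p in patches]
    return outputs
```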
  • Based on the output from the first CNN 209a, the first post-processing unit 211 generates an initial image representing the distribution of dermal cells, an initial image representing the distribution of epidermal cells, an initial image representing the distribution of immune cells, an initial image representing the distribution of lumen-forming cells, and an initial image representing the distribution of atypical cells, and stores them in the second image storage unit 113.
  • For example, when a divided image is determined to be an epidermal cell, all pixels in the region at the position of that divided image are set to "1" in the corresponding initial image, and to "0" otherwise. These initial images are monochrome images and, in this case, the same size as the pathological image.
  • Based on the output from the first CNN 209b, the first post-processing unit 211 also generates an initial image representing the distribution of neutrophils, an initial image representing the distribution of lymphocytes, and an initial image representing the distribution of macrophages, and stores them in the second image storage unit 113. The specific processing is the same as for the output from the first CNN 209a.
  • The second preprocessing unit 115 generates an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells, and stores them in the third image storage unit 117.
  • the processing contents are the same as in the first embodiment.
  • Similarly, from the initial image representing the distribution of neutrophils, the initial image representing the distribution of macrophages, and the initial image representing the distribution of lymphocytes stored in the second image storage unit 113, the second preprocessing unit 115 generates an image representing the distribution of neutrophils, an image representing the distribution of macrophages, and an image representing the distribution of lymphocytes, and stores them in the third image storage unit 117.
  • The second CNN 219a is a convolutional neural network trained to take an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells, and to output whether the tissue is malignant melanoma, pigmented nevus, or other.
  • the second CNN 219a outputs, for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and other likelihoods from the image stored in the third image storage unit 117.
  • discrimination may be performed without using an image representing the distribution of immune cells or an image representing the distribution of cells forming a lumen.
  • The second CNN 219b is a convolutional neural network trained to output a prognosis prediction (life expectancy) from an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells; the prediction may be a real number of years, a likelihood for each life-expectancy range, or the survival probability over time as represented by a survival rate curve.
  • the second CNN 219b outputs a predicted value of prognosis prediction from the image stored in the third image storage unit 117.
  • The second CNN 219c is a convolutional neural network trained to output a staging prediction (for example, the likelihood of each stage) from an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells. The second CNN 219c outputs the staging prediction for the images stored in the third image storage unit 117.
  • The second CNN 219d is a convolutional neural network trained to output the effect of a specific drug (for example, the likelihood of the drug being effective and the likelihood of it not being effective) from an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of neutrophils, an image representing the distribution of macrophages, an image representing the distribution of lymphocytes, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells.
  • the second CNN 219d outputs the medication effect of the specific medicine from the image stored in the third image storage unit 117.
  • the second post-processing unit 221 stores the discrimination result in the result storage unit 123 based on the output from the second CNN 219a, and outputs the discrimination result to the display device or other output device. Furthermore, when the prediction regarding the medical condition is performed, the second post-processing unit 221 stores the prediction result regarding the medical condition in the result storage unit 123 based on the output from at least one of the second CNNs 219b to 219d. The prediction result is output to a display device or other output device.
  • The distributions of epidermal cells, dermal cells, atypical cells, immune cells, and lumen-forming cells are represented as separate images, which makes the relationships between the distributions easy to evaluate. This makes it possible to perform appropriate discrimination and prediction regarding the medical condition using convolutional neural networks.
  • the first learning processing unit 231 executes learning processing for the first CNNs 209a and 209b with training data including a large number of sets of cell images and their types. Since the learning process method is the same as the conventional method, a detailed description is omitted.
  • The second learning processing unit 233 executes learning processing for the second CNN 219a using training data consisting of many sets of an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of atypical cells, an image representing the distribution of lumen-forming cells, an image representing the distribution of immune cells, and the corresponding discrimination result.
  • Likewise, the second learning processing unit 233 executes learning processing for the second CNN 219b or 219c using training data consisting of many sets of the same five distribution images together with a prognosis prediction (life expectancy) or a staging prediction (stage number, etc.).
  • For the second CNN 219d, the second learning processing unit 233 uses training data consisting of many sets of an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of atypical cells, an image representing the distribution of lumen-forming cells, an image representing the distribution of neutrophils, an image representing the distribution of macrophages, an image representing the distribution of lymphocytes, and the effect of the specific drug (such as the presence or absence of an effect). Since the learning method is the same as the conventional method, a detailed description is omitted.
  • the input unit 201 receives an input of a pathological image to be processed from a user and stores it in the pathological image storage unit 103 (step S31). This is the same as step S1 in FIG.
  • the input unit 201 receives an input of a prediction process instruction regarding a medical condition to be executed in addition to the discrimination process from the user (step S33).
  • The user issues the instruction by designating at least one of prognosis prediction, staging prediction, and prediction of the effect of a specific drug. Prediction regarding the medical condition may also be skipped entirely, and when the discrimination result is not malignant melanoma, no prediction regarding the medical condition is performed.
  • The first preprocessing unit 105 performs the first preprocessing on the pathological image to be processed stored in the pathological image storage unit 103, and stores the processing result in the first image storage unit 107 (step S35). This first preprocessing is the same as step S3 in FIG. 6.
  • the first CNN 209a reads out each divided image stored in the first image storage unit 107, and executes a classification process on them (step S37).
  • This classification process determines whether the divided image being processed corresponds to a dermal cell, an epidermal cell, an immune cell, a lumen-forming cell, an atypical cell, or another cell; for example, the likelihood of each is output.
  • When prediction of the effect of a specific drug has been requested, the input unit 201 instructs the first CNN 209b to execute its processing, and the first CNN 209b also executes the classification process on each divided image stored in the first image storage unit 107.
  • This classification process is a process for determining whether the divided image to be processed corresponds to a neutrophil, a macrophage, a lymphocyte, or another cell. For example, the likelihood for each is output.
  • the first post-processing unit 211 executes the first post-processing based on the output from the first CNN 209a and / or the first CNN 209b, and stores the processing result in the second image storage unit 113 (step S39).
  • the first post-process according to the present embodiment is the same as the first post-process according to the first embodiment. That is, in the initial image representing the distribution of dermal cells, the pixel values of all the pixels in the region at the position of the divided image determined as dermal cells are set to “1”. For the other pixels, the pixel value is set to “0”. Similarly, in the initial image representing the distribution of epidermal cells, the pixel values of all the pixels in the region at the position of the divided image determined to be epidermal cells are set to “1”. For the other pixels, the pixel value is set to “0”. Further, in the initial image representing the distribution of atypical cells, the pixel values of all the pixels in the region at the position of the divided image determined as the atypical cell are set to “1”. For the other pixels, the pixel value is set to “0”.
  • the pixel values of all the pixels in the region at the position of the divided image determined as immune cells are set to “1”. For the other pixels, the pixel value is set to “0”. In the initial image representing the distribution of the cells forming the lumen, the pixel values of all the pixels in the region at the position of the divided image determined as the cell forming the lumen are set to “1”. For the other pixels, the pixel value is set to “0”.
  • The second preprocessing unit 115 performs the second preprocessing on the images stored in the second image storage unit 113, and stores the processing results in the third image storage unit 117 (step S41).
  • the second preprocessing is the same as in the first embodiment.
  • the second CNN 219a reads out the images stored in the third image storage unit 117, and executes a discrimination process on them (step S43).
  • the discrimination process according to the present embodiment is the same as step S11 in FIG. 6 except that an image representing the distribution of immune cells and an image representing the distribution of cells forming a lumen are further used. That is, in the present embodiment, an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of cells forming a lumen, and an image representing the distribution of atypical cells Are processed as a gray scale image of each channel of the color image.
  • the discrimination process is here whether or not it is malignant melanoma. For example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and other likelihoods are output. As mentioned above, it is also possible to differentiate malignant melanoma from others.
  • The second post-processing unit 221 determines the discrimination result based on the output from the second CNN 219a, and decides whether or not to predict the medical condition according to the instruction from the input unit 201 (step S45). If the user has instructed only the discrimination process, or if the discrimination result is not malignant melanoma, the process proceeds to step S49.
  • Otherwise, at least one of the second CNNs 219b to 219d, according to the instruction, performs the corresponding prediction process on the images stored in the third image storage unit 117 (step S47).
  • The second CNN 219b outputs a prognosis prediction (life expectancy), namely a real number of years, a likelihood for each life-expectancy range, or the survival probability over time as represented by a survival rate curve, from the image representing the distribution of dermal cells, the image representing the distribution of epidermal cells, the image representing the distribution of immune cells, the image representing the distribution of lumen-forming cells, and the image representing the distribution of atypical cells.
  • The second CNN 219c outputs the staging prediction (for example, the likelihood of each stage) from the image representing the distribution of dermal cells, the image representing the distribution of epidermal cells, the image representing the distribution of immune cells, the image representing the distribution of lumen-forming cells, and the image representing the distribution of atypical cells.
  • The second CNN 219d outputs the effect of the specific drug (for example, the likelihood of it being effective and the likelihood of it not being effective) from the image representing the distribution of dermal cells, the image representing the distribution of epidermal cells, the image representing the distribution of neutrophils, the image representing the distribution of macrophages, the image representing the distribution of lymphocytes, the image representing the distribution of lumen-forming cells, and the image representing the distribution of atypical cells.
  • The second post-processing unit 221 determines the prediction result regarding the medical condition based on the output from the second CNNs 219b to 219d, stores the prediction result together with the discrimination result in the result storage unit 123, and outputs them to the display device or another output device (step S49).
  • the output device may be another device connected to the network.
  • In this way, in addition to discriminating whether the disease is malignant melanoma, pigmented nevus, or other, a designated prediction regarding the disease state can be executed.
  • At least one of the second CNNs 219b to 219d may be executed in parallel with the process of the second CNN 219a.
  • At least one of the second CNNs 219b to 219d may be executed without performing the process of the second CNN 219a. For example, when it is known by other means that it is malignant melanoma, only the prediction process related to the disease state may be performed.
  • Although an example was shown in which divided images of immune cells are classified as a single class, they may, for example, be classified by immune cell type, and the discrimination results of the immune cells for each type may then be integrated.
  • the cells that make up the lumen are discriminated, and the image showing the distribution of the cells that make up the lumen is used.
  • Prediction of the effect of a specific drug and staging prediction may also be left unused. Rather than always performing the three types of prediction regarding the medical condition simultaneously, the system may be implemented so that at least one of them can be performed.
  • Embodiment 3: The convolutional neural network may be trained so that malignant melanoma can be differentiated from a predetermined number of tumor-related diseases (the predetermined number being relatively small, such as 10) that frequently undergo pathological examination in dermatology.
  • In that case, the convolutional neural network that performs the discrimination takes as input an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells, and is trained to output which of the plurality of predetermined diseases to be differentiated, or other, the tissue corresponds to.
  • Embodiment 4: The system may be modified so as to differentiate various tumor-related skin diseases other than malignant melanoma.
  • In FIGS. 15A to 15C, the disease name column lists the disease names to be differentiated, the atypical cell origin column indicates from which cell the atypical cells are derived, and the remaining columns (epidermal cells, etc.) indicate which cell distributions are necessary to differentiate each disease (marked O or X).
  • In this embodiment, the convolutional neural network that classifies the divided images is trained to distinguish epidermal cells, dermal cells, lumen-forming cells, and immune cells, and additionally to distinguish and output atypical cells according to the cell of origin listed in the atypical cell column: atypical cells derived from pigment cells, atypical cells derived from spinous cells, atypical cells derived from apocrine gland cells, atypical cells derived from Merkel cells, atypical cells derived from basal cells, atypical cells derived from hair follicle cells, and so on.
  • the convolutional neural network that performs classification outputs the cell type or others (for example, their likelihood).
  • An image representing the distribution of atypical cells is then generated for each cell of origin: an image representing the distribution of atypical cells derived from pigment cells, one for atypical cells derived from spinous cells, one for atypical cells derived from apocrine gland cells, one for atypical cells derived from Merkel cells, one for atypical cells derived from basal cells, one for atypical cells derived from hair follicle cells, and so on.
  • The convolutional neural network that performs the differentiation takes as input an image representing the distribution of epidermal cells, an image representing the distribution of dermal cells, an image representing the distribution of lumen-forming cells, an image representing the distribution of immune cells, and the images representing the distributions of atypical cells for each cell of origin, and is trained to output one of the disease names or other. Accordingly, for the input distribution images it outputs one of the disease names or other (for example, the likelihood of each).
  • For example, using atypical cells derived from pigment cells, atypical cells derived from apocrine gland cells, atypical cells derived from sebaceous cells, atypical cells derived from eccrine gland cells, epidermal cells, dermal cells, lumen-forming cells, and immune cells, the system can be configured to differentiate malignant melanoma, pigmented nevus, dermal melanocytic nevus, Paget's disease of the breast, apocrine adenocarcinoma, extramammary Paget's disease, eccrine nevus, eccrine hidrocystoma, syringoma, eccrine poroma, mixed tumor of the skin (eccrine type), eccrine spiradenoma, apocrine nevus, apocrine hidrocystoma, hidradenoma papilliferum, syringocystadenoma papilliferum, tubular apocrine adenoma, papillar…, and the like.
  • Depending on the diseases to be differentiated, atypical cells may thus be classified according to their cell of origin, and the convolutional neural network may be trained so that differentiation can be performed using the images representing the distributions of those atypical cells.
  • For the non-neoplastic skin diseases, the divided images are classified into epidermal cells, dermal cells, lumen-forming cells, and immune cells, and images representing their distributions are generated. Then, using these distribution images as input, the disease name is identified by a convolutional neural network trained to output one of the disease names listed in FIG. 16 or other.
  • the functional block configuration of the information processing apparatus described above is an example, and may not match the program module configuration.
  • As for the processing flows, as long as the processing result does not change, the order of steps may be changed and multiple steps may be executed in parallel. Further, when a CNN outputs likelihoods, the likelihoods may also be output together with the result.
  • The first CNN 109 and the second CNN 119 have been described as CNNs, but a CNN is merely one example of a learned model produced by supervised machine learning. That is, the first CNN 109 may be such a learned model 1000, and similarly the second CNN 119 may be such a learned model 2000.
  • For the supervised machine learning, a support vector machine (SVM), a perceptron, or another type of neural network can be employed, as in the sketch below.
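  • For example, the first-stage patch classifier could be replaced by a support vector machine over flattened divided images; a hedged scikit-learn sketch follows, where SVC with probability estimates stands in for the CNN's per-class likelihoods.

```python
from sklearn.svm import SVC

def train_patch_svm(patches, labels):
    """patches: list of 32x32(x3) NumPy arrays; labels: cell-type strings.
    probability=True lets predict_proba return per-class likelihoods
    comparable to the CNN's output."""
    X = [p.reshape(-1) for p in patches]
    clf = SVC(probability=True)
    clf.fit(X, labels)
    return clf
```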
  • In that case, the first learning processing unit 131 and the second learning processing unit 133 become learning processing units for whichever machine learning method is employed.
  • The first CNNs 209a and 209b in FIG. 13 are likewise examples of learned models produced by supervised machine learning, as in FIG. 5. That is, the first CNNs 209a and 209b may be such a learned model or a set 3000 of learned models.
  • the second CNNs 219a to 219d may also be such a learned model or a set 4000 of learned models.
  • Here too, a support vector machine, a perceptron, or another neural network can be employed for the supervised machine learning, and the first learning processing unit 231 and the second learning processing unit 233 serve as learning processing units for whichever machine learning method is employed.
  • The information processing apparatus described above is a computer apparatus in which a memory, a CPU (Central Processing Unit), a hard disk drive (HDD), a display control unit connected to a display device, a drive device for a removable disk, and a communication control unit for connecting to a network are connected by a bus.
  • An operating system (OS) and an application program for performing the processing in this embodiment are stored in the HDD and are read from the HDD into the memory when executed by the CPU.
  • The CPU controls the display control unit, the communication control unit, and the drive device in accordance with the processing content of the application program to perform predetermined operations. Intermediate processing data are mainly stored in the memory but may be stored in the HDD.
  • The application program for performing the above-described processing may be stored and distributed on a computer-readable removable disk and installed from the drive device into the HDD.
  • Alternatively, it may be installed into the HDD via a network such as the Internet and the communication control unit.
  • Such a computer apparatus realizes the various functions described above through the organic cooperation of hardware such as the CPU and memory with programs such as the OS and the application program.
  • The convolutional neural network may be implemented as a program, or high-speed processing may be achieved by incorporating a dedicated arithmetic device (for example, a Graphics Processing Unit) into the computer.
  • The disease discrimination processing method includes (A) executing, by a trained convolutional neural network that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, and an image representing the distribution of atypical cells in the skin tissue, and that outputs a discrimination of a specific skin disease related to a tumor, a predetermined calculation on a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, and a third image representing the distribution of atypical cells in the skin tissue to be processed, the first to third images being read from a storage device; and (B) outputting the discrimination result of the specific skin disease based on the output from the trained convolutional neural network.
  • The trained convolutional neural network may further take as input at least one of a grayscale image of the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of cells that form a lumen in the skin tissue.
  • In that case, the storage device described above stores at least one of a fourth image that is a grayscale image of the skin tissue to be processed, a fifth image representing the distribution of immune cells in the skin tissue to be processed, and a sixth image representing the distribution of cells that form a lumen in the skin tissue to be processed.
  • At least one of the fourth, fifth, and sixth images may then also be read from the storage device and included in the predetermined calculation executed by the trained convolutional neural network. This improves the accuracy of discrimination.
  • The disease discrimination processing method may further include (C) generating the first to third images, and at least one of the fourth to sixth images, from the image of the skin tissue to be processed and storing them in the storage device.
  • The disease state prediction processing method includes (A) executing, by a trained convolutional neural network that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of atypical cells in the skin tissue, and an image representing the distribution of immune cells in the skin tissue, and that outputs a prediction regarding the pathology of a specific skin disease related to a tumor, a predetermined calculation on a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of atypical cells in the skin tissue to be processed, and a fourth image representing the distribution of immune cells in the skin tissue to be processed, the first to fourth images being read from a storage device; and (B) outputting the prediction result regarding the pathology of the specific skin disease based on the output from the trained convolutional neural network.
  • A plurality of images representing the distribution of immune cells in the skin tissue, and a plurality of fourth images, may be used, one for each type of immune cell.
  • The prediction regarding the pathology of a specific skin disease may be any one of a prediction of life expectancy, a prediction of staging, and a prediction of the effect of administering a specific drug.
  • The trained convolutional neural network described above may be trained further using, as input, an image representing the distribution of cells that form a lumen in the skin tissue, and the storage device described above may store a fifth image representing the distribution of cells that form a lumen in the skin tissue to be processed.
  • The fifth image may then also be read from the storage device and included in the predetermined calculation executed by the trained convolutional neural network. This improves the accuracy of prediction.
  • The disease discrimination processing method may include (A) executing, by a trained convolutional neural network that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of atypical cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of cells that form a lumen in the skin tissue, and that outputs a discrimination of a plurality of types of skin diseases related to tumors, a predetermined calculation on a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of atypical cells in the skin tissue to be processed, and corresponding images for immune cells and lumen-forming cells, these images being read from a storage device; and (B) outputting the discrimination results of the plurality of types of skin diseases based on the output from the trained convolutional neural network.
  • The image representing the distribution of atypical cells, and the third image, may be prepared for each cell type from which the atypical cells are derived. This can improve the accuracy.
  • The disease discrimination processing method may include (A) executing, by a trained convolutional neural network that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of cells that form a lumen in the skin tissue, and that outputs a discrimination of a plurality of types of non-neoplastic skin diseases, a predetermined calculation on a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of immune cells in the skin tissue to be processed, and a fourth image representing the distribution of cells that form a lumen in the skin tissue to be processed, the first to fourth images being read from a storage device; and (B) outputting the discrimination results of the plurality of types of non-neoplastic skin diseases based on the output from the trained convolutional neural network.
  • The diagnosis support system includes (A) a learned model of machine learning that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, and an image representing the distribution of atypical cells in the skin tissue, and (B) an output data generation unit that generates output data related to a skin disease based on the output from the learned model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, and a third image representing the distribution of atypical cells in the skin tissue to be processed.
  • The output of the learned model described above may be a discrimination of a specific skin disease related to a tumor, and the output data related to the skin disease described above may include the discrimination result of that specific skin disease. This is useful for identifying specific skin diseases related to tumors.
  • The input of the learned model described above may further include at least one of a grayscale image of the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of cells that form a lumen in the skin tissue.
  • In that case, the output data generation unit described above may generate the discrimination result of the specific skin disease based on the output from the learned model for the first to third images and at least one of a fourth image that is a grayscale image of the skin tissue to be processed, a fifth image representing the distribution of immune cells in the skin tissue to be processed, and a sixth image representing the distribution of cells that form a lumen in the skin tissue to be processed.
  • The diagnosis support system may further include (C) an image generation unit that generates the first to third images from the image of the skin tissue to be processed.
  • The image generation unit may also generate at least one of the fourth to sixth images.
  • The input of the learned model described above may further include an image representing the distribution of immune cells in the skin tissue.
  • The output of the learned model described above may be a prediction regarding the pathology of a specific skin disease related to a tumor.
  • In that case, the output data generation unit described above may generate output data including a prediction result regarding the pathology of the specific skin disease based on the output of the learned model for the first to third images and a fourth image representing the distribution of immune cells in the skin tissue to be processed. For example, the same effect as the disease state prediction processing method according to the second aspect can be obtained.
  • A plurality of images representing the distribution of immune cells in the skin tissue, and a plurality of fourth images, may be used, one for each type of immune cell.
  • The prediction regarding the pathology of a specific skin disease may be any one of a prediction of life expectancy, a prediction of staging, and a prediction of the effect of administering a specific drug.
  • The input of the learned model described above may further include an image representing the distribution of cells that form a lumen in the skin tissue.
  • In that case, the output data generation unit described above may generate output data including a prediction result regarding the pathology of the specific skin disease based on the output of the learned model for the first to fourth images and a fifth image representing the distribution of cells that form a lumen in the skin tissue to be processed. This improves the accuracy of prediction.
  • The input of the learned model described above may further include an image representing the distribution of immune cells in the skin tissue and an image representing the distribution of cells that form a lumen in the skin tissue.
  • The output of the learned model may be a discrimination of a plurality of types of skin diseases related to tumors.
  • In that case, the output data generation unit described above may generate output data including the discrimination results of the plurality of types of skin diseases based on the output of the learned model for the first to third images, a fourth image representing the distribution of immune cells in the skin tissue to be processed, and a fifth image representing the distribution of cells that form a lumen in the skin tissue to be processed.
  • The image representing the distribution of atypical cells, and the third image, may be prepared for each cell type from which the atypical cells are derived. This can improve the accuracy.
  • The diagnosis support system includes (A) a learned model of machine learning that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of cells that form a lumen in the skin tissue, and that outputs a discrimination of a plurality of types of non-neoplastic skin diseases, and (B) an output data generation unit that generates output data including the discrimination results of the plurality of types of non-neoplastic skin diseases based on the output of the learned model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of immune cells in the skin tissue to be processed, and a fourth image representing the distribution of cells that form a lumen in the skin tissue to be processed.
  • The learned model of machine learning in the diagnosis support system may be a neural network (particularly a trained convolutional neural network) or a support vector machine; a minimal sketch of the support vector machine alternative appears after this list.
  • The system includes one or more information processing apparatuses. That is, it covers both the case where a plurality of information processing apparatuses connected via a network cooperate to operate as one system and the case where the system runs on a single information processing apparatus.
  • A program for executing the above processing can be created and stored in a computer-readable storage medium or storage device such as a flexible disk, an optical disc (CD-ROM, DVD-ROM, etc.), a magneto-optical disk, a semiconductor memory, or a hard disk.
  • Intermediate processing results are temporarily stored in a storage device such as a main memory.
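As a concrete illustration of the support vector machine alternative mentioned at the end of the list above, the following is a minimal sketch using scikit-learn, with the three distribution images flattened into one feature vector per case. The feature representation, image size, class labels, and random training data are assumptions for illustration only, not details specified in this document.

```python
# Minimal sketch of an SVM as the learned model (assumed features and labels).
import numpy as np
from sklearn.svm import SVC

def to_feature_vector(dermal: np.ndarray, epidermal: np.ndarray,
                      atypical: np.ndarray) -> np.ndarray:
    """Flatten and concatenate three 200 x 200 distribution maps into one vector."""
    return np.concatenate([m.ravel() for m in (dermal, epidermal, atypical)])

rng = np.random.default_rng(0)
X = rng.random((20, 3 * 200 * 200))      # stand-in training vectors
y = rng.integers(0, 3, size=20)          # assumed labels: 0=melanoma, 1=nevus, 2=other

model = SVC(probability=True).fit(X, y)  # probability=True yields likelihood-like outputs
likelihoods = model.predict_proba(X[:1]) # per-class probabilities for one case
```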

Abstract

A diagnostic support system for performing dermatosis diagnostic support comprises an output data generation unit that generates output data related to dermatosis on the basis of the outputs from: (A) a learned model of machine learning in which the inputs are an image representing the distribution of dermal cells in the skin tissue included in a pathological image, an image representing the distribution of epidermal cells in skin tissue, and an image representing the distribution of atypical cells in the skin tissue; and (B) a learned model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, and a third image representing the distribution of atypical cells in the skin tissue to be processed.

Description

System and method for diagnosis support using pathological image of skin tissue
The present invention relates to a technique for supporting diagnosis using pathological images of skin tissue.
One paper describes the use of a two-stage convolutional neural network to detect mitotic cells in pathological images of breast cancer tissue. There, detecting a specific type of cell is sufficient, and making a diagnosis that further considers the distribution of that cell type is not at issue.
Other papers describe the use of a convolutional neural network to locally detect tumor regions in pathological images of prostate cancer, and of a convolutional neural network to locally detect breast cancer metastases in sentinel lymph nodes. In those works, whether prostate tissue is cancerous is judged from the shape of a histogram of tumor likelihoods computed over the whole pathological image, and the presence or absence of breast cancer metastasis is judged by scoring the maximum of the locally obtained metastasis likelihoods; making a diagnosis that considers the distribution of specific types of cells is not at issue.
Furthermore, one document discloses that, since changes in cell nuclei and their surrounding tissue are important for identifying the nature of a tumor, sub-images centered on cell nuclei, vacuoles, cytoplasm, stroma, and the like are extracted from a pathological image and input as learning patterns and input patterns, so that the presence or absence of a tumor, and whether it is benign or malignant, can be determined with high accuracy from the sub-images. Considering that tissue collected in a pathological examination is stained (with hematoxylin, eosin, and the like) so that cell nuclei and their surrounding tissue each take on characteristic colors, the document also discloses extracting such sub-images while simultaneously extracting the color information of the cell nuclei and storing both as feature candidates, thereby determining the presence or absence of a tumor, and whether it is benign or malignant, with still higher accuracy. However, elements characteristic of pathological images of skin tissue are not considered.
JP 2006-153742 A
An object of the present invention, according to one aspect, is to provide a novel technique for supporting the diagnosis of skin diseases.
The first diagnosis support system of the present invention includes (A) a learned model of machine learning that takes as input an image representing the distribution of dermal cells in skin tissue included in a pathological image, an image representing the distribution of epidermal cells in the skin tissue, and an image representing the distribution of atypical cells in the skin tissue, and (B) an output data generation unit that generates output data related to a skin disease based on the output from the learned model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, and a third image representing the distribution of atypical cells in the skin tissue to be processed.
The second diagnosis support system of the present invention includes (A) a learned model of machine learning that takes as input an image representing the distribution of dermal cells in skin tissue included in a pathological image, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of cells that form a lumen in the skin tissue, and that outputs a discrimination of a plurality of types of non-neoplastic skin diseases, and (B) an output data generation unit that generates output data including the discrimination results of the plurality of types of non-neoplastic skin diseases based on the output of the learned model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of immune cells in the skin tissue to be processed, and a fourth image representing the distribution of cells that form a lumen in the skin tissue to be processed.
FIG. 1A is a diagram showing an example of a pathological image of skin tissue containing malignant melanoma.
FIG. 1B is a diagram showing an example of a pathological image of skin tissue containing a pigmented nevus.
FIG. 2 is a diagram showing an example of a convolutional neural network.
FIG. 3A is a diagram showing an example of an image representing the distribution of dermal cells in skin tissue known to be malignant melanoma.
FIG. 3B is a diagram showing an example of an image representing the distribution of epidermal cells in skin tissue known to be malignant melanoma.
FIG. 3C is a diagram showing an example of an image representing the distribution of atypical cells in skin tissue known to be malignant melanoma.
FIG. 4A is a diagram showing an example of an image representing the distribution of dermal cells in skin tissue known to be a pigmented nevus.
FIG. 4B is a diagram showing an example of an image representing the distribution of epidermal cells in skin tissue known to be a pigmented nevus.
FIG. 4C is a diagram showing an example of an image representing the distribution of atypical cells in skin tissue known to be a pigmented nevus.
FIG. 5 is a diagram showing the functional block configuration of the information processing apparatus according to the first embodiment.
FIG. 6 is a diagram showing a processing flow executed by the information processing apparatus according to the first embodiment.
FIG. 7 is a diagram for explaining divided images.
FIG. 8A is a diagram for explaining an initial image of the distribution of dermal cells.
FIG. 8B is a diagram for explaining an initial image of the distribution of epidermal cells.
FIG. 8C is a diagram for explaining an initial image of the distribution of atypical cells.
FIG. 9 is a diagram showing an example of an image representing the distribution of immune cells in skin tissue known to be malignant melanoma.
FIG. 10 is a diagram showing an example of an image representing the distribution of immune cells in skin tissue known to be a pigmented nevus.
FIG. 11 is a diagram showing an example of an image representing the distribution of lumen-forming cells in skin tissue known to be malignant melanoma.
FIG. 12 is a diagram showing an example of an image representing the distribution of lumen-forming cells in skin tissue known to be a pigmented nevus.
FIG. 13 is a diagram showing the functional block configuration of the information processing apparatus according to the second embodiment.
FIG. 14 is a diagram showing a processing flow executed by the information processing apparatus according to the second embodiment.
FIG. 15A is a diagram showing the relationship between the names of diseases to be differentiated, the names of the cells from which the tumor-related atypical cells are derived, and the types of cells used for processing.
FIG. 15B is a diagram showing the relationship between the names of diseases to be differentiated, the names of the cells from which the tumor-related atypical cells are derived, and the types of cells used for processing.
FIG. 15C is a diagram showing the relationship between the names of diseases to be differentiated, the names of the cells from which the tumor-related atypical cells are derived, and the types of cells used for processing.
FIG. 16 is a diagram showing the relationship between the names of non-neoplastic diseases and the types of cells used for processing.
[Embodiment 1]
This embodiment aims to differentiate malignant melanoma from pigmented nevus. Malignant melanoma is a cancer of the pigment cells present in the epidermis, whereas a pigmented nevus is a benign tumor of pigment cells, and the two are often difficult to distinguish.
FIG. 1A shows an example of a pathological image of skin tissue containing malignant melanoma (rendered here as a grayscale image), and FIG. 1B shows an example of a pathological image of skin tissue containing a pigmented nevus (likewise grayscale). As these figures suggest, the two are difficult to distinguish. Such a pathological image is a digital image obtained by scanning a prepared slide of the pathological tissue.
In this embodiment, as a first-stage process, the distribution of epidermal cells, the distribution of dermal cells, and the distribution of atypical cells in the skin tissue are identified in such a pathological image. The atypical cells here are either the neoplastic pigment cells of malignant melanoma or the neoplastic pigment cells of a pigmented nevus, which is a benign tumor. Hereinafter, "atypical cells" refers to cells that have become neoplastic or are otherwise abnormal.
Next, as a second-stage process, malignant melanoma, the benign pigmented nevus, and other conditions are differentiated from an image representing the distribution of epidermal cells (an image representing a distribution is also called a heat map), an image representing the distribution of dermal cells, and an image representing the distribution of atypical cells. Alternatively, the differentiation may be simply between malignant melanoma and everything else.
This embodiment is intended to estimate the malignancy of the pathological tissue from the distribution of atypical cells relative to the distributions of epidermal and dermal cells, that is, to predict the malignancy based on where within the epidermis and dermis, and in what distribution, the atypical cells are present. In other words, it evaluates how a population of atypical cells has infiltrated the dermal and epidermal cell populations that represent the structure of the skin and are present under normal conditions. Such prediction and evaluation are realized by training a convolutional neural network (CNN).
The convolutional neural network has, for example, the structure shown in FIG. 2. It is similar in structure to the well-known convolutional neural network called AlexNet: between the input layer and the output layer, multiple sets (two in the figure) of a convolution layer followed by a subsampling layer (pooling layer) are provided, with two fully connected layers added after them. There are, however, many variations of convolutional neural network structure, and any of them may be used; for example, the number of convolution/subsampling sets may be increased.
As described above, an image representing the distribution of epidermal cells, an image representing the distribution of dermal cells, and an image representing the distribution of atypical cells are fed to the input layer. The output layer then outputs, for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of neither.
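As an illustration only, the following is a minimal PyTorch sketch of a second-stage network of the kind just described: two convolution/pooling sets followed by two fully connected layers, taking the three distribution images as input channels and outputting three likelihoods. The kernel sizes, channel counts, and the 200 × 200 input size (which anticipates the reduction described later) are assumptions for illustration, not values fixed by this document.

```python
# Minimal sketch of the second-stage CNN (hyperparameters are assumptions).
import torch
import torch.nn as nn

class DiscriminationCNN(nn.Module):
    """Two convolution/pooling sets followed by two fully connected layers."""
    def __init__(self, in_channels: int = 3, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, padding=2),  # convolution layer 1
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # subsampling layer 1
            nn.Conv2d(32, 64, kernel_size=5, padding=2),           # convolution layer 2
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # subsampling layer 2
        )
        # For 200 x 200 inputs, two 2 x 2 poolings leave 50 x 50 feature maps.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 50 * 50, 256),  # fully connected layer 1
            nn.ReLU(),
            nn.Linear(256, num_classes),   # fully connected layer 2: melanoma / nevus / other
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)  # per-class likelihoods

# Dermal, epidermal, and atypical distribution maps stacked as three channels.
x = torch.rand(1, 3, 200, 200)
likelihoods = DiscriminationCNN()(x)
```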
As is well known, the convolution layers have parameters for the filters and activation functions, the subsampling layers have parameters for the subsampling, and the fully connected layers have parameters for their activation functions; appropriate values are set for all of these through learning. Known algorithms are used for the learning.
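The learning itself can follow any standard supervised recipe. Purely as a hedged sketch, one gradient step with cross-entropy loss and the Adam optimizer (both assumed here, not prescribed by this document) might look as follows, reusing the DiscriminationCNN sketch above:

```python
# Minimal sketch of one supervised training step (loss and optimizer are assumptions).
import torch
import torch.nn as nn

model = DiscriminationCNN()  # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()  # operates on raw logits

images = torch.rand(8, 3, 200, 200)              # a batch of distribution-map stacks
labels = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1])  # assumed: 0=melanoma, 1=nevus, 2=other

logits = model.classifier(model.features(images))  # bypass the softmax in forward()
loss = loss_fn(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```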
FIG. 3A shows an image representing the distribution of dermal cells in skin tissue known to be malignant melanoma, FIG. 3B the distribution of epidermal cells, and FIG. 3C the distribution of atypical cells. Training is performed so that, when such images are fed to the input layer, the output layer outputs a likelihood of 1 for malignant melanoma. In these figures, the white portions indicate where the cells of interest are present.
FIG. 4A, by contrast, shows an image representing the distribution of dermal cells in skin tissue known to be a pigmented nevus, FIG. 4B the distribution of epidermal cells, and FIG. 4C the distribution of atypical cells. Training is performed so that, when such images are fed to the input layer, the output layer outputs a likelihood of 1 for pigmented nevus. Again, the white portions indicate where the cells of interest are present.
A convolutional neural network may also be used for the first-stage process. In that case, a trained convolutional neural network is used that takes as input a partial image of the pathological image containing skin tissue (a partial image roughly the size of one cell, or a partial image containing one extracted cell) and outputs likelihoods for dermal cells, epidermal cells, atypical cells, and others (glass portions and the like). Alternatively, the first-stage process may use other techniques, such as region extraction, to obtain the distributions of epidermal cells, dermal cells, and atypical cells from the pathological image.
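The first-stage classifier can follow the same pattern at the scale of a single-cell patch. The sketch below (layer sizes again assumed; nucleus detection and region extraction not shown) classifies one 32 × 32 RGB divided image into four classes:

```python
# Minimal sketch of the first-stage patch classifier (hyperparameters are assumptions).
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Classifies one cell-sized RGB patch as dermal / epidermal / atypical / other."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_classes),  # 32 x 32 input -> 8 x 8 after two poolings
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(x), dim=1)  # per-class likelihoods

patch = torch.rand(1, 3, 32, 32)  # one divided image
likelihoods = PatchCNN()(patch)
```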
The configuration and processing of an information processing apparatus that performs such processing are described in detail below.
FIG. 5 shows an example of the functional configuration of the information processing apparatus according to this embodiment.
The information processing apparatus according to this embodiment includes an input unit 101, a pathological image storage unit 103, a first preprocessing unit 105, a first image storage unit 107, a first CNN 109, a first post-processing unit 111, a second image storage unit 113, a second preprocessing unit 115, a third image storage unit 117, a second CNN 119, a second post-processing unit 121, a result storage unit 123, a first learning processing unit 131, and a second learning processing unit 133.
The input unit 101 receives input of a pathological image to be processed from a user and stores it in the pathological image storage unit 103.
As noted above, the pathological image is a digital image obtained by scanning a prepared slide of the pathological tissue. Here it is, for example, an image scanned at 20x objective magnification, for example an RGB image of 1M × 1M pixels.
For the computation in the first CNN 109, the first preprocessing unit 105 divides the pathological image stored in the pathological image storage unit 103 into pieces roughly the size of one cell and stores the divided pathological images (hereinafter, divided images) in the first image storage unit 107. For example, square divided images of 32 × 32 pixels are generated.
The first CNN 109 is a convolutional neural network trained to determine whether each divided image stored in the first image storage unit 107 corresponds to a dermal cell, an epidermal cell, an atypical cell, or another cell. It therefore outputs, for example, a likelihood for each of these classes for the divided image being processed.
Based on the output from the first CNN 109, the first post-processing unit 111 generates an initial image representing the distribution of dermal cells, an initial image representing the distribution of epidermal cells, and an initial image representing the distribution of atypical cells, and stores them in the second image storage unit 113. For example, when a divided image is determined to be a dermal cell, all pixels within the region at that divided image's position in the initial image representing the distribution of dermal cells are set to "1", and all other pixels to "0". These initial images are thus binary images, and in this case their size is the same as that of the pathological image.
From the images stored in the second image storage unit 113, the second preprocessing unit 115 generates an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, and an image representing the distribution of atypical cells, and stores them in the third image storage unit 117. In this embodiment, each initial image is reduced to an image size the second CNN 119 can process (for example, 200 × 200 pixels), and in this process each pixel is also converted to a predetermined number of gradations (256 gradations). The images representing the distributions of dermal cells, epidermal cells, and atypical cells are therefore grayscale images.
From the images stored in the third image storage unit 117, the second CNN 119 outputs whether the tissue is malignant melanoma; for example, it outputs the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of other conditions. It may instead output only the likelihood of malignant melanoma and the likelihood of other conditions.
Based on the output from the second CNN 119, the second post-processing unit 121 stores the discrimination result in the result storage unit 123 and outputs it to a display device or another output device.
In this way, once a pathological image is input, a result discriminating whether the tissue is malignant melanoma can be obtained automatically.
In this embodiment, representing the distribution of epidermal cells, the distribution of dermal cells, and the distribution of atypical cells as separate images makes the relationships among the distributions easier to evaluate, which enables appropriate discrimination by the convolutional neural network.
The first learning processing unit 131 executes the learning process for the first CNN 109 using training data containing many pairs of a cell image and its cell type. Since the learning method itself is conventional, a detailed description is omitted.
The second learning processing unit 133 executes the learning process for the second CNN 119 using training data containing many sets of an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of atypical cells, and the corresponding discrimination result. Since the learning method itself is conventional, a detailed description is omitted.
Next, the processing performed by the information processing apparatus according to this embodiment is described with reference to FIGS. 6 to 8C.
First, the input unit 101 receives input of a pathological image to be processed from the user and stores it in the pathological image storage unit 103 (step S1). Although shown as grayscale here for presentation reasons, images (color images) such as those in FIGS. 1A and 1B are input as pathological images. The input pathological image may already be stored in a storage unit of this information processing apparatus or of another apparatus connected over a network; in that case, the input unit 101 reads it from that storage unit and stores it in the pathological image storage unit 103.
Then, the first preprocessing unit 105 executes the first preprocessing on the pathological image to be processed stored in the pathological image storage unit 103 and stores the result in the first image storage unit 107 (step S3). In this first preprocessing, as schematically shown in FIG. 7, the pathological image 1001 is divided into many divided images 1002 of N × N pixels, where N is, for example, 32 or 64. Alternatively, the divided images may be generated by recognizing cell nuclei to identify cells and then identifying the image region containing each cell.
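A minimal sketch of this tiling step, using NumPy and assuming the image dimensions are exact multiples of N (a real implementation would pad or crop the borders):

```python
# Minimal sketch of dividing a pathological image into N x N divided images.
import numpy as np

def split_into_tiles(image: np.ndarray, n: int = 32) -> list:
    """Return (row, col, tile) triples for every n x n block of an H x W x 3 RGB image."""
    h, w = image.shape[:2]
    tiles = []
    for row in range(0, h - n + 1, n):
        for col in range(0, w - n + 1, n):
            tiles.append((row, col, image[row:row + n, col:col + n]))
    return tiles

pathological_image = np.zeros((1024, 1024, 3), dtype=np.uint8)  # stand-in for a scanned slide
tiles = split_into_tiles(pathological_image, n=32)              # 32 x 32 = 1024 tiles here
```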
Thereafter, the first CNN 109 reads each divided image stored in the first image storage unit 107 and executes the classification process on it (step S5). The classification process determines whether the divided image being processed corresponds to a dermal cell, an epidermal cell, an atypical cell, or another cell, and outputs, for example, a likelihood for each class.
Then, the first post-processing unit 111 executes the first post-processing based on the output from the first CNN 109 and stores the result in the second image storage unit 113 (step S7). In the first post-processing, as schematically shown in FIG. 8A, in the initial image representing the distribution of dermal cells, the pixel values of all pixels within the region 1002a at the position of a divided image determined to be a dermal cell are set to "1", and the pixel values of all other pixels are set to "0". Similarly, as schematically shown in FIG. 8B, in the initial image representing the distribution of epidermal cells, the pixel values of all pixels within the region 1002b at the position of a divided image determined to be an epidermal cell are set to "1", and all others to "0". Further, as schematically shown in FIG. 8C, in the initial image representing the distribution of atypical cells, the pixel values of all pixels within the region 1002c at the position of a divided image determined to be an atypical cell are set to "1", and all others to "0".
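Continuing the sketches above, the per-class initial images can be built from per-tile classification labels roughly like this (the class order and the use of hard labels rather than likelihoods are assumptions for illustration):

```python
# Minimal sketch of building binary distribution maps from per-tile class labels.
import numpy as np

CLASSES = ("dermal", "epidermal", "atypical", "other")  # assumed class order

def build_initial_images(h: int, w: int, n: int, labeled_tiles: list) -> dict:
    """labeled_tiles holds (row, col, class_index); returns one binary map per cell class."""
    maps = {name: np.zeros((h, w), dtype=np.uint8) for name in CLASSES[:3]}
    for row, col, cls in labeled_tiles:
        name = CLASSES[cls]
        if name in maps:  # tiles classified as "other" leave every map at 0
            maps[name][row:row + n, col:col + n] = 1
    return maps

# e.g. a dermal tile at (0, 0) and an atypical tile at (32, 64):
maps = build_initial_images(1024, 1024, 32, [(0, 0, 0), (32, 64, 2)])
```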
This first post-processing is only an example; the initial distribution images may be made smaller. For example, one pixel may be assigned per divided image, so that in the initial distribution image the single pixel at the divided image's position is set to "1". In that case, the initial image is 1/N the size of the pathological image in both height and width.
Next, the second preprocessing unit 115 executes the second preprocessing on the images stored in the second image storage unit 113 and stores the results in the third image storage unit 117 (step S9). The second preprocessing is an image reduction process: the pixel value of each reduced pixel is computed as the average of the pixel values of the neighboring pixels. Since the pixel values in an initial distribution image are "0" or "1", the average is a real number between "0" and "1"; this real value is mapped linearly to an integer, for example between "0" and "255". The initial distribution image (a binary image) is thereby converted into a distribution image (a grayscale image). Reduction methods other than averaging may also be used, but since they are the same as in the prior art they are not described in detail here.
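A minimal sketch of this reduction using block averaging (the block-mean rule and the 256-level mapping follow the description above; exact divisibility of the image sizes is assumed):

```python
# Minimal sketch of reducing a binary distribution map to a 200 x 200 grayscale image.
import numpy as np

def reduce_to_grayscale(binary_map: np.ndarray, out_size: int = 200) -> np.ndarray:
    """Average non-overlapping blocks, then map [0, 1] linearly to [0, 255]."""
    h, w = binary_map.shape
    bh, bw = h // out_size, w // out_size  # block size; assumes exact divisibility
    blocks = binary_map[:bh * out_size, :bw * out_size].reshape(out_size, bh, out_size, bw)
    means = blocks.mean(axis=(1, 3))       # per-block average in [0, 1]
    return np.round(means * 255).astype(np.uint8)

gray = reduce_to_grayscale(maps["dermal"])  # from the previous sketch; shape (200, 200)
```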
Executing such processing yields images representing the distribution of dermal cells as in FIGS. 3A and 4A, images representing the distribution of epidermal cells as in FIGS. 3B and 4B, and images representing the distribution of atypical cells as in FIGS. 3C and 4C.
In this embodiment, the reduction is applied to the initial distribution images because of the computational load on the convolutional neural network and similar concerns; if such load is not a problem, the reduction may be omitted.
Then, the second CNN 119 reads the images stored in the third image storage unit 117 and executes the discrimination process on them (step S11). In this embodiment, a group of grayscale images such as those in FIGS. 3A to 3C or FIGS. 4A to 4C is processed in the way the grayscale images of the individual channels of a color image would be. The discrimination here is whether the tissue is malignant melanoma; for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of other conditions are output. As noted above, the discrimination may instead be between malignant melanoma and everything else.
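In terms of the earlier sketches, treating the three grayscale distribution maps as the channels of a single input might look like this (a usage sketch; the names carry over from the previous examples):

```python
# Minimal sketch of stacking the three distribution maps as channels and running inference.
import numpy as np
import torch

dermal = reduce_to_grayscale(maps["dermal"])
epidermal = reduce_to_grayscale(maps["epidermal"])
atypical = reduce_to_grayscale(maps["atypical"])

stacked = np.stack([dermal, epidermal, atypical]).astype(np.float32) / 255.0  # (3, 200, 200)
x = torch.from_numpy(stacked).unsqueeze(0)  # add a batch dimension -> (1, 3, 200, 200)

model = DiscriminationCNN()  # from the earlier sketch
with torch.no_grad():
    likelihoods = model(x)   # e.g. [melanoma, nevus, other]
```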
Based on the output from the second CNN 119, the second post-processing unit 121 stores the discrimination result in the result storage unit 123 and outputs it to a display device or another output device (step S13). The output device may be another apparatus connected over the network.
As described above, the malignancy of the pathological tissue is learned from the distribution of atypical cells relative to the distributions of epidermal and dermal cells in the skin tissue being processed, so that an appropriate discrimination result is obtained.
Simplification of examinations and a reduced burden on patients can also be expected.
[Modification A of Embodiment 1]
Auxiliary information such as the contour of the skin tissue and the distribution of other portions within the skin tissue, as seen in images like FIGS. 3A to 3C and FIGS. 4A to 4C, can sometimes be useful. In such cases, the pathological image itself, reduced to the size of the distribution images and converted to grayscale, may also be used.
That is, the second preprocessing unit 115 reads the pathological image from the pathological image storage unit 103, reduces it to the predetermined size, and stores it in the third image storage unit 117.
The second CNN 119 is trained using the reduced grayscale pathological image in addition to the images representing the distributions of dermal cells, epidermal cells, and atypical cells.
When the second CNN 119 then performs the discrimination process, the reduced grayscale pathological image is input together with the images representing the distributions of dermal cells, epidermal cells, and atypical cells, and the network discriminates whether the tissue is malignant melanoma.
This improves the discrimination accuracy.
[Modification B of Embodiment 1]
Furthermore, given the phenomenon that immune cells increase as malignant melanoma progresses, an image representing the distribution of immune cells may additionally be used as input to the second CNN 119.
That is, the first CNN 109 is configured to determine, for each divided image, whether it is a dermal cell, an epidermal cell, an immune cell, an atypical cell, or other, and is trained using images of dermal cells, epidermal cells, immune cells, and atypical cells. The first CNN 109 then outputs a likelihood for each of dermal cells, epidermal cells, immune cells, atypical cells, and others.
Based on the output from the first CNN 109, the first post-processing unit 111 generates initial images representing the distributions of dermal cells, epidermal cells, immune cells, and atypical cells. The initial image representing the distribution of immune cells is generated in the same way as the other initial distribution images.
The second preprocessing unit 115 likewise generates an image representing the distribution of immune cells from its initial image, in the same way as the other distribution images. An image representing the distribution of immune cells in skin tissue known to be malignant melanoma is shown, for example, in FIG. 9, and one for skin tissue known to be a pigmented nevus in FIG. 10. Again, the white portions indicate where the cells of interest are present; in FIG. 9, greater overlap with the distribution of atypical cells can be seen.
Further, the second CNN 119 is trained using the image representing the distribution of immune cells in addition to the images representing the distributions of dermal cells, epidermal cells, and atypical cells.
When the second CNN 119 then performs the discrimination process, the image representing the distribution of immune cells is input together with the images representing the distributions of dermal cells, epidermal cells, and atypical cells, and the network discriminates among malignant melanoma, pigmented nevus, and other conditions.
This improves the discrimination accuracy.
[Modification C of Embodiment 1]
Furthermore, since the disease state, including the degree of progression of malignant melanoma, is sometimes presumed to be related to the distribution of cells that form lumens, an image representing the distribution of lumen-forming cells may additionally be used as input to the second CNN 119.
In this case, Modification B is altered to use lumen-forming cells in place of immune cells.
That is, the first CNN 109 is configured to determine, for each divided image, whether it is a dermal cell, an epidermal cell, a lumen-forming cell, an atypical cell, or other, and is trained using images of dermal cells, epidermal cells, lumen-forming cells, and atypical cells. The first CNN 109 then outputs a likelihood for each of dermal cells, epidermal cells, lumen-forming cells, atypical cells, and others.
Based on the output from the first CNN 109, the first post-processing unit 111 generates initial images representing the distributions of dermal cells, epidermal cells, lumen-forming cells, and atypical cells. The initial image representing the distribution of lumen-forming cells is generated in the same way as the other initial distribution images.
The second preprocessing unit 115 likewise generates an image representing the distribution of lumen-forming cells from its initial image. An image representing this distribution in skin tissue known to be malignant melanoma is shown, for example, in FIG. 11, and one for skin tissue known to be a pigmented nevus in FIG. 12. Again, the white portions indicate where the cells of interest are present; in FIG. 11, many lumen-forming cells are present adjacent to regions where atypical cells are densely distributed.
Furthermore, the second CNN 119 is trained beforehand using images representing the distribution of lumen-forming cells in addition to the images representing the distributions of dermal cells, epidermal cells, and atypical cells.
When the second CNN 119 performs the discrimination process, an image representing the distribution of lumen-forming cells is then input in addition to the images representing the distributions of dermal cells, epidermal cells, and atypical cells, and the network is made to discriminate among malignant melanoma, pigmented nevus, and others.
This improves the discrimination accuracy.
[Modification D of Embodiment 1]
Modifications A to C can be combined arbitrarily. That is, at least one of the reduced grayscale image of the pathological image, an image representing the distribution of immune cells, and an image representing the distribution of lumen-forming cells may be used.
Since immune cells come in several types, such as neutrophils, lymphocytes, and macrophages, an image representing the distribution of any one of these types may also be used.
[Embodiment 2]
The system may be modified not only to differentiate malignant melanoma as in the first embodiment, but also to perform prognosis prediction (life expectancy), staging prediction representing the degree of progression, prediction of the effect of administering a specific drug, and the like.
For prognosis prediction (life expectancy), the network may predict a real number (for example, 5 years), likelihoods over predetermined life-expectancy ranges such as under 1 year, 1 to 3 years, 3 to 5 years, and 5 years or more, or the survival probability over time as represented by a survival-rate curve.
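As a concrete illustration of these three output formats, the following is a minimal sketch of alternative output heads (assuming PyTorch; the feature size, layer shapes, and the ten-point survival curve are illustrative assumptions, not taken from this specification):

```python
import torch
import torch.nn as nn

FEATURES = 256  # hypothetical size of the feature vector from the CNN body

# (a) Life expectancy as a single real number (regression, in years)
regression_head = nn.Linear(FEATURES, 1)

# (b) Likelihoods over the predefined life-expectancy ranges:
#     under 1 year, 1 to 3 years, 3 to 5 years, 5 years or more
range_head = nn.Sequential(nn.Linear(FEATURES, 4), nn.Softmax(dim=1))

# (c) Survival probability over time, approximating a survival-rate curve
#     with one value per follow-up year (10 yearly points assumed)
curve_head = nn.Sequential(nn.Linear(FEATURES, 10), nn.Sigmoid())

features = torch.randn(1, FEATURES)  # dummy feature vector for illustration
print(range_head(features))          # four range likelihoods summing to 1
```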
Staging prediction expresses how advanced the malignant melanoma is using a predetermined number of stages and predicts the applicable stage.
Medication effect prediction estimates the effect of administering a specific drug (for example, the expensive drug nivolumab) to a malignant melanoma, predicting whether or not the drug will be effective.
For such predictions about the disease state, the distributions of epidermal cells, dermal cells, and atypical cells, which are the main information in the first embodiment, are effective. This is because the distribution of atypical cells relative to the distributions of dermal and epidermal cells, that is, where and in what pattern the atypical cells are present, indicates how malignant the pathological tissue is, and a high degree of malignancy predicts a poor prognosis.
In addition, noting that immune cells increase as malignant melanoma progresses, and that there is a risk of metastasis when atypical cells are present among the lumen-forming cells, the distributions of immune cells and lumen-forming cells are also related to prognosis.
Furthermore, immune cells, particularly neutrophils and lymphocytes, may be involved in the effect of administering a specific drug.
Accordingly, although the basic configuration is almost the same as in the first embodiment, a convolutional neural network for medication effect prediction, a convolutional neural network for prognosis prediction, and a convolutional neural network for staging prediction are introduced in addition to the second CNN 119, which performs the discrimination process.
The first CNN 109 is also modified so that it can generate the images used in the discrimination process and in the prediction processes regarding the disease state. In this embodiment, dermal cells, epidermal cells, immune cells, lumen-forming cells, atypical cells, and other cells are discriminated. When predicting the effect of a specific drug, however, the type of immune cell (neutrophil, lymphocyte, or macrophage) is additionally discriminated.
FIG. 13 shows a configuration example of the information processing apparatus according to this embodiment. Components having the same functions as those of the information processing apparatus according to the first embodiment are given the same reference numerals.
The information processing apparatus according to this embodiment includes an input unit 201, a pathological image storage unit 103, a first preprocessing unit 105, a first image storage unit 107, first CNNs 209a and 209b, a first post-processing unit 211, a second image storage unit 113, a second preprocessing unit 115, a third image storage unit 117, second CNNs 219a to 219d, a second post-processing unit 221, a result storage unit 123, a first learning processing unit 231, and a second learning processing unit 233.
The input unit 201 receives an input of a pathological image to be processed from the user and stores it in the pathological image storage unit 103. The input unit 201 also accepts instructions regarding the processing content; that is, it accepts designation of one or more of the following processes: differentiation of malignant melanoma, prognosis prediction (life expectancy), staging prediction representing the degree of progression, and prediction of the effect of a specific drug.
The processing content of the first preprocessing unit 105 is the same as in the first embodiment.
The first CNN 209a is a convolutional neural network trained to determine whether each divided image stored in the first image storage unit 107 shows a dermal cell, an epidermal cell, an immune cell, a lumen-forming cell, an atypical cell, or another cell. Accordingly, the first CNN 209a outputs, for example, a likelihood for each class for the divided image being processed.
The first CNN 209b is a convolutional neural network trained to determine whether each divided image stored in the first image storage unit 107 shows a neutrophil, a lymphocyte, a macrophage, or another cell. Accordingly, the first CNN 209b outputs, for example, a likelihood for each class for the divided image being processed.
The first CNN 209a and the first CNN 209b operate, for example, in response to instructions from the input unit 201. For example, when the effect of a specific drug is to be predicted, both the first CNN 209a and the first CNN 209b are used; in other cases, only the first CNN 209a is used.
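To make the division of labor between the two first-stage networks concrete, the following is a hedged sketch (assuming PyTorch; the backbone architecture and patch size are placeholders, and only the class lists follow the text above):

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Small CNN that classifies one divided image into num_classes cell types."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.body(x).flatten(1)
        return torch.softmax(self.head(feats), dim=1)  # per-class likelihoods

# CNN 209a: dermal, epidermal, immune, lumen-forming, atypical, other
cnn_209a = PatchClassifier(num_classes=6)
# CNN 209b: neutrophil, lymphocyte, macrophage, other
cnn_209b = PatchClassifier(num_classes=4)

patch = torch.randn(1, 3, 64, 64)  # one divided image (the size is an assumption)
print(cnn_209a(patch))             # likelihoods over the six classes
```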
Based on the output from the first CNN 209a, the first post-processing unit 211 generates an initial image representing the distribution of dermal cells, an initial image representing the distribution of epidermal cells, an initial image representing the distribution of immune cells, an initial image representing the distribution of lumen-forming cells, and an initial image representing the distribution of atypical cells, and stores them in the second image storage unit 113. For example, when a divided image is determined to show an epidermal cell, all pixels within the region at the position of that divided image are set to "1" in the initial image representing the distribution of epidermal cells, and all other pixels are set to "0". That is, these initial images are monochrome images, and in such a case their size is the same as that of the pathological image.
Similarly, based on the output from the first CNN 209b, the first post-processing unit 211 generates an initial image representing the distribution of neutrophils, an initial image representing the distribution of lymphocytes, and an initial image representing the distribution of macrophages, and stores them in the second image storage unit 113. The specific processing is the same as for the output from the first CNN 209a.
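As a minimal sketch of this mask construction (NumPy; the patch size and image size are assumptions, since the specification does not fix them), the initial images can be built as follows:

```python
import numpy as np

PATCH = 64          # assumed side length of a divided image, in pixels
H, W = 1024, 1024   # assumed size of the pathological image

def distribution_mask(labels: np.ndarray, target: int) -> np.ndarray:
    """Build a binary initial image: 1 inside every patch classified as
    `target`, 0 elsewhere. `labels` holds one class index per patch."""
    mask = np.zeros((H, W), dtype=np.uint8)
    rows, cols = labels.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] == target:
                mask[r*PATCH:(r+1)*PATCH, c*PATCH:(c+1)*PATCH] = 1
    return mask

labels = np.random.randint(0, 6, size=(H // PATCH, W // PATCH))
epidermal_initial = distribution_mask(labels, target=1)  # e.g. class 1 = epidermal
```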
From the images stored in the second image storage unit 113, the second preprocessing unit 115 generates an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells, and stores them in the third image storage unit 117. The processing is the same as in the first embodiment. When there is output from the first CNN 209b, the second preprocessing unit 115 also generates, from the initial images representing the distributions of neutrophils, macrophages, and lymphocytes stored in the second image storage unit 113, an image representing the distribution of neutrophils, an image representing the distribution of macrophages, and an image representing the distribution of lymphocytes, and stores them in the third image storage unit 117.
The second CNN 219a is a convolutional neural network trained to output one of malignant melanoma, pigmented nevus, and other from an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells. From the images stored in the third image storage unit 117, the second CNN 219a outputs, for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of other. However, as in the first embodiment, the discrimination may also be performed without using the image representing the distribution of immune cells or the image representing the distribution of lumen-forming cells.
The second CNN 219b is a convolutional neural network trained to output a prognosis prediction (life expectancy) value (a real number in years, a likelihood for each life-expectancy range, or the survival probability over time as represented by a survival-rate curve) from the same five distribution images. From the images stored in the third image storage unit 117, the second CNN 219b outputs a prognosis prediction value.
The second CNN 219c is a convolutional neural network trained to output a staging prediction (for example, a likelihood for each stage) from the same five distribution images. The second CNN 219c outputs a staging prediction (for example, a likelihood for each stage) for the images stored in the third image storage unit 117.
The second CNN 219d is a convolutional neural network trained to output the effect of administering a specific drug (for example, the likelihood that it is effective and the likelihood that it is not) from an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of neutrophils, an image representing the distribution of macrophages, an image representing the distribution of lymphocytes, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells. From the images stored in the third image storage unit 117, the second CNN 219d outputs the predicted effect of the specific drug.
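The four second-stage networks differ mainly in their input channel counts and output dimensions. The following hedged sketch (PyTorch; the body architecture, stage count, and range count are placeholders) reflects only what the text above states about their inputs and outputs:

```python
import torch.nn as nn

def second_stage_cnn(in_channels: int, out_features: int) -> nn.Module:
    """Placeholder body for a second-stage CNN over stacked distribution images."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, out_features),
    )

# 219a: dermal, epidermal, immune, lumen, atypical -> melanoma / nevus / other
cnn_219a = second_stage_cnn(in_channels=5, out_features=3)
# 219b: same five inputs -> prognosis (here: four life-expectancy ranges)
cnn_219b = second_stage_cnn(in_channels=5, out_features=4)
# 219c: same five inputs -> stage likelihoods (four stages assumed)
cnn_219c = second_stage_cnn(in_channels=5, out_features=4)
# 219d: dermal, epidermal, neutrophil, macrophage, lymphocyte, lumen, atypical
#       -> medication effect (likelihoods of effective / not effective)
cnn_219d = second_stage_cnn(in_channels=7, out_features=2)
```

Only the medication-effect network 219d takes the per-type immune cell channels; the other three share the same five-channel input.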
Based on the output from the second CNN 219a, the second post-processing unit 221 stores the discrimination result in the result storage unit 123 and outputs it to a display device or other output device. Furthermore, when predictions regarding the disease state are to be performed, the second post-processing unit 221 stores the prediction results in the result storage unit 123 based on the output from at least one of the second CNNs 219b to 219d and outputs them to a display device or other output device.
In this way, once a pathological image is input, a discrimination result indicating whether the tissue is malignant melanoma or a pigmented nevus, and further prediction results regarding the disease state, can be obtained automatically.
In this embodiment, the distributions of epidermal cells, dermal cells, atypical cells, immune cells, and lumen-forming cells are each represented as a separate image, which makes the relationships between the distributions easier to evaluate. This enables appropriate discrimination and predictions regarding the disease state by the convolutional neural networks.
In particular, additionally evaluating the distributions of immune cells and lumen-forming cells makes it possible to assess, in the case of malignant melanoma, the degree of malignancy of the tumor and the possibility of metastasis. Furthermore, additionally evaluating the distribution of immune cells presumed to be related to a specific drug makes it possible to predict whether that drug will be effective, and thus, for example, to judge whether administering an expensive drug is appropriate.
The first learning processing unit 231 trains the first CNNs 209a and 209b with training data containing many sets of cell images and their types. Since the training method is conventional, a detailed description is omitted.
The second learning processing unit 233 trains the second CNN 219a with training data containing many sets, each consisting of an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of atypical cells, an image representing the distribution of lumen-forming cells, an image representing the distribution of immune cells, and a discrimination result. Similarly, the second learning processing unit 233 trains the second CNN 219b or 219c with training data containing many sets of the same five distribution images together with a prognosis prediction (life expectancy) or a staging prediction (such as a stage number). Furthermore, the second learning processing unit 233 trains the second CNN 219d with training data containing many sets, each consisting of an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of atypical cells, an image representing the distribution of lumen-forming cells, an image representing the distribution of neutrophils, an image representing the distribution of macrophages, an image representing the distribution of lymphocytes, and the effect of administering a specific drug (such as whether or not it was effective). Since the training method is conventional, a detailed description is omitted.
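Since the training itself is conventional supervised learning, a minimal sketch of one such loop (assuming PyTorch with cross-entropy loss for the discrimination network; the optimizer and hyperparameters are arbitrary choices, not prescribed by the specification) might look like:

```python
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 10) -> None:
    """Conventional supervised training: `loader` yields (stacked distribution
    images, label) pairs, e.g. five-channel tensors with discrimination labels."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
```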
Next, the processing content of the information processing apparatus according to this embodiment will be described with reference to FIG. 14.
First, the input unit 201 receives an input of a pathological image to be processed from the user and stores it in the pathological image storage unit 103 (step S31). This is the same as step S1 in FIG. 6.
The input unit 201 also receives from the user an instruction specifying which prediction processes regarding the disease state are to be executed in addition to the discrimination process (step S33). For example, the user designates at least one of prognosis prediction, staging prediction, and prediction of the effect of a specific drug, and the input unit accepts that designation. However, predictions regarding the disease state may be omitted, and if the discrimination result is not malignant melanoma, no prediction regarding the disease state is performed.
The first preprocessing unit 105 then performs the first preprocessing on the pathological image to be processed stored in the pathological image storage unit 103 and stores the processing result in the first image storage unit 107 (step S35). This first preprocessing is the same as step S3 in FIG. 6.
After that, the first CNN 209a reads each divided image stored in the first image storage unit 107 and performs classification processing on it (step S37). This classification processing determines whether the divided image being processed shows a dermal cell, an epidermal cell, an immune cell, a lumen-forming cell, an atypical cell, or another cell, and outputs, for example, a likelihood for each class.
When prediction of the effect of a specific drug has been instructed, the input unit 201 instructs the first CNN 209b to execute its processing. The first CNN 209b then also performs classification processing on each divided image stored in the first image storage unit 107. This classification processing determines whether the divided image being processed shows a neutrophil, a macrophage, a lymphocyte, or another cell, and outputs, for example, a likelihood for each class.
The first post-processing unit 211 then performs the first post-processing based on the output from the first CNN 209a and/or the first CNN 209b, and stores the processing result in the second image storage unit 113 (step S39).
The first post-processing according to this embodiment is basically the same as the first post-processing according to the first embodiment. That is, in the initial image representing the distribution of dermal cells, the pixel values of all pixels within the region at the position of each divided image determined to show a dermal cell are set to "1", and the pixel values of all other pixels are set to "0". Similarly, in the initial image representing the distribution of epidermal cells, the pixel values of all pixels within the region at the position of each divided image determined to show an epidermal cell are set to "1", and all other pixels are set to "0". Furthermore, in the initial image representing the distribution of atypical cells, the pixel values of all pixels within the region at the position of each divided image determined to show an atypical cell are set to "1", and all other pixels are set to "0".
Likewise, in the initial image representing the distribution of immune cells, the pixel values of all pixels within the region at the position of each divided image determined to show an immune cell are set to "1", and all other pixels are set to "0". In the initial image representing the distribution of lumen-forming cells, the pixel values of all pixels within the region at the position of each divided image determined to show a lumen-forming cell are set to "1", and all other pixels are set to "0".
When there are divided images determined to show neutrophils, macrophages, or lymphocytes, the same processing is performed for each of them to also generate an initial image representing the distribution of neutrophils, an initial image representing the distribution of macrophages, and an initial image representing the distribution of lymphocytes.
Next, the second preprocessing unit 115 performs the second preprocessing on the images stored in the second image storage unit 113 and stores the processing results in the third image storage unit 117 (step S41). The second preprocessing is the same as in the first embodiment.
The second CNN 219a then reads the images stored in the third image storage unit 117 and performs discrimination processing on them (step S43). The discrimination processing according to this embodiment is the same as step S11 in FIG. 6, except that an image representing the distribution of immune cells and an image representing the distribution of lumen-forming cells are additionally used. That is, in this embodiment, the group of grayscale images consisting of the images representing the distributions of dermal cells, epidermal cells, immune cells, lumen-forming cells, and atypical cells is processed in the same way as the grayscale images of the individual channels of a color image. The discrimination here concerns whether the tissue is malignant melanoma; for example, the likelihood of malignant melanoma, the likelihood of pigmented nevus, and the likelihood of other are output. As noted above, the discrimination may also be between malignant melanoma and everything else.
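Treating the grayscale distribution images as the channels of a single multi-channel input can be sketched as follows (NumPy/PyTorch; the image size and variable names are hypothetical):

```python
import numpy as np
import torch

# Five H x W grayscale distribution images produced by the second preprocessing
dermal, epidermal, immune, lumen, atypical = (
    np.random.rand(5, 256, 256).astype(np.float32)
)

# Stack them as channels, exactly as the channels of a color image would be,
# giving a (1, 5, H, W) batch for the discrimination CNN 219a
x = torch.from_numpy(
    np.stack([dermal, epidermal, immune, lumen, atypical], axis=0)
).unsqueeze(0)
# likelihoods = cnn_219a(x)   # melanoma / pigmented nevus / other
```

The commented-out call assumes the cnn_219a definition from the earlier sketch.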
The second post-processing unit 221 identifies the discrimination result based on the output from the second CNN 219a, and determines whether predictions regarding the disease state are to be performed according to the instruction from the input unit 201 (step S45). If the user has instructed that only the discrimination process be performed, or if the discrimination result is not malignant melanoma, the process proceeds to step S49.
When predictions regarding the disease state are to be performed, at least one of the second CNNs 219b to 219d, as specified by the instruction, executes the corresponding prediction processing on the images stored in the third image storage unit 117 (step S47).
Specifically, from the images representing the distributions of dermal cells, epidermal cells, immune cells, lumen-forming cells, and atypical cells, the second CNN 219b outputs a prognosis prediction (life expectancy) value (a real number in years, a likelihood for each life-expectancy range, or the survival probability over time as represented by a survival-rate curve).
From the same five distribution images, the second CNN 219c outputs a staging prediction (for example, a likelihood for each stage).
From the images representing the distributions of dermal cells, epidermal cells, neutrophils, macrophages, lymphocytes, lumen-forming cells, and atypical cells, the second CNN 219d outputs the effect of administering a specific drug (for example, the likelihood that it is effective and the likelihood that it is not).
When at least one of the second CNNs 219b to 219d has been used, the second post-processing unit 221 identifies the prediction results regarding the disease state based on their outputs, stores them in the result storage unit 123 together with the discrimination result, and outputs them to a display device or other output device (step S49). The output device may also be another device connected via a network.
As described above, in addition to the discrimination result indicating malignant melanoma, pigmented nevus, or other, predetermined predictions regarding the disease state can also be performed.
At least one of the second CNNs 219b to 219d may be executed in parallel with the processing of the second CNN 219a.
Alternatively, at least one of the second CNNs 219b to 219d may be executed without performing the processing of the second CNN 219a. For example, when it is already known by other means that the tissue is malignant melanoma, only the prediction processing regarding the disease state may be performed.
In this way, in addition to simplifying examinations and reducing the burden on patients, it becomes easier to explain the disease state to patients and to formulate treatment guidelines.
Although an example using two CNNs for classifying the divided images has been shown, a single CNN may instead classify each divided image as, for example, a dermal cell, an epidermal cell, a lumen-forming cell, an atypical cell, or an immune cell of a particular type, and, in the post-processing, the per-type immune cell results may be integrated when per-type immune cell distributions are not needed.
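When the per-type distributions are not needed, integrating the per-type immune cell results amounts to taking the union of the binary masks; a small NumPy sketch (the names and sizes are illustrative) is:

```python
import numpy as np

# Binary distribution images for each immune cell type (from the single CNN)
neutrophil = np.random.randint(0, 2, (256, 256))
lymphocyte = np.random.randint(0, 2, (256, 256))
macrophage = np.random.randint(0, 2, (256, 256))

# Integrated immune cell distribution: a pixel is 1 if any subtype is present
immune = (neutrophil | lymphocyte | macrophage).astype(np.uint8)
```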
Furthermore, although lumen-forming cells were discriminated above and an image representing their distribution was used, there may be cases where that image need not be used for the predictions regarding the disease state; for example, it may be omitted for the prediction of the effect of a specific drug or for staging prediction. Also, rather than making all three kinds of prediction regarding the disease state available simultaneously, the system may be implemented so that at least one of them can be performed.
[Embodiment 3]
The discrimination processing described above focused on malignant melanoma. However, using images representing the distributions described above, skin diseases associated with other tumors can also be differentiated.
For example, the convolutional neural network may be trained to differentiate malignant melanoma from diseases associated with a predetermined number of tumor types that frequently become the subject of pathological examination in dermatology (the predetermined number being relatively small, for example about 10).
Specifically, the convolutional neural network performing the discrimination processing receives as inputs an image representing the distribution of dermal cells, an image representing the distribution of epidermal cells, an image representing the distribution of immune cells, an image representing the distribution of lumen-forming cells, and an image representing the distribution of atypical cells, and is trained to output which of a plurality of predetermined target diseases, or other, applies.
Then, if such a trained convolutional neural network is used in place of the second CNN 119 of the first embodiment, an output indicating the applicable disease or other can be obtained.
This further simplifies examinations and reduces the burden on patients.
[Embodiment 4]
The system may also be modified to differentiate various skin diseases associated with tumors other than malignant melanoma.
Specifically, it can be modified to handle the tumor diseases listed in FIGS. 15A to 15C.
In FIGS. 15A to 15C, the disease name column lists the diseases to be differentiated, the atypical cell origin column indicates the cell type from which the atypical cells derive, and the columns from epidermal cells to immune cells indicate whether the distribution of each cell type is required (○) or not (×) to differentiate that disease.
For example, when all the diseases listed in FIGS. 15A to 15C are to be differentiated, the convolutional neural network that classifies the divided images is trained to distinguish, in addition to epidermal cells, dermal cells, lumen-forming cells, and immune cells, the atypical cells for each origin cell type listed in the atypical cell origin column, such as atypical cells derived from pigment cells, atypical cells derived from prickle cells, atypical cells derived from apocrine gland cells, atypical cells derived from Merkel cells, atypical cells derived from basal cells, and atypical cells derived from hair follicle cells. The classification network therefore outputs the cell type or other (for example, their likelihoods).
Accordingly, images representing the distribution of atypical cells are generated for each origin cell type, such as an image representing the distribution of atypical cells derived from pigment cells, an image representing the distribution of atypical cells derived from prickle cells, an image representing the distribution of atypical cells derived from apocrine gland cells, an image representing the distribution of atypical cells derived from Merkel cells, an image representing the distribution of atypical cells derived from basal cells, and an image representing the distribution of atypical cells derived from hair follicle cells.
The convolutional neural network performing the discrimination processing is then a network trained to receive as inputs, in addition to images representing the distributions of epidermal cells, dermal cells, lumen-forming cells, and immune cells, the images representing the distribution of atypical cells for each origin cell type, and to output one of the disease names or other. For the distribution images provided, it therefore outputs one of the disease names or other (for example, their likelihoods).
The system may also be configured so that only some of the diseases listed in FIGS. 15A to 15C can be differentiated. For example, the divided images may be classified into atypical cells derived from pigment cells, atypical cells derived from apocrine gland cells, atypical cells derived from sebaceous gland cells, atypical cells derived from eccrine gland cells, epidermal cells, dermal cells, lumen-forming cells, and immune cells, so as to differentiate malignant melanoma, pigmented nevus, dermal melanocytic nevus, mammary Paget's disease, apocrine adenocarcinoma, extramammary Paget's disease, eccrine nevus, eccrine hidrocystoma, syringoma, eccrine poroma, mixed tumor of the skin (eccrine type), eccrine spiradenoma, apocrine nevus, apocrine hidrocystoma, papillary hidradenoma, syringocystadenoma papilliferum, tubular apocrine adenoma, papillary adenoma, mixed tumor of the skin (apocrine type), sebaceous carcinoma, sebaceous adenoma, sebaceous hyperplasia, and sebaceoma.
That is, for the diseases to be differentiated, the networks are trained so that atypical cells can also be classified according to their origin cell type and so that discrimination can be performed using the images representing the distributions of those atypical cells.
When only benign and malignant lymphoma are to be differentiated, lumen-forming cells need not be classified.
In this way, diseases associated with tumors can be differentiated automatically.
Although discrimination was performed here, the system may be further modified to perform predictions regarding the disease state as in the second embodiment.
[Embodiment 5]
The embodiments described above concerned the differentiation of skin diseases associated with tumors, but the technique is also applicable to non-neoplastic diseases.
For example, for the diseases listed in FIG. 16, the divided images are classified into epidermal cells, dermal cells, lumen-forming cells, and immune cells, and images representing their distributions are generated. Then, using these distribution images as inputs, a convolutional neural network trained to output one of the disease names listed in FIG. 16, or other, is used to identify the disease.
In this way, non-neoplastic diseases can also be differentiated automatically.
In this embodiment as well, in addition to discrimination, the system may be further modified to perform predictions regarding the disease state as in the second embodiment.
Although embodiments of the present invention have been described above, the present invention is not limited to them. For example, depending on the purpose, any technical feature of the embodiments described above may be omitted, or the embodiments may be combined and then any technical feature omitted.
The functional block configuration of the information processing apparatus described above is an example and may not match the program module configuration. As for the processing flow, the order of steps may be changed, or multiple steps may be executed in parallel, as long as the processing result does not change. When the output of a CNN is a likelihood, that likelihood may also be output together with the result.
Although FIG. 5 was described on the premise that the first CNN 109 and the second CNN 119 are CNNs, a CNN is one example of a trained model produced by supervised machine learning. That is, the first CNN 109 may be such a trained model 1000, and likewise the second CNN 119 may be such a trained model 2000. As the supervised machine learning, a support vector machine, a perceptron, or another neural network may be adopted, in which case the first learning processing unit 131 and the second learning processing unit 133 become the learning processing units for the adopted machine learning method.
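For example, swapping the second-stage model for a support vector machine might be sketched as follows (using scikit-learn; flattening the distribution images into feature vectors is one possible encoding and is an assumption, not something this specification prescribes):

```python
import numpy as np
from sklearn.svm import SVC

# Each training sample: five H x W distribution images flattened into one vector
X_train = np.random.rand(100, 5 * 64 * 64)   # 100 hypothetical samples
y_train = np.random.randint(0, 3, 100)       # melanoma / nevus / other labels

model = SVC(probability=True)                # enables likelihood-like outputs
model.fit(X_train, y_train)
likelihoods = model.predict_proba(X_train[:1])
```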
Similarly to FIG. 5, the first CNNs 209a and 209b in FIG. 13 are also examples of trained models produced by supervised machine learning. That is, the first CNNs 209a and 209b may be such a trained model or a set of trained models 3000, and the second CNNs 219a to 219d may be such a trained model or a set of trained models 4000. In this case as well, a support vector machine, a perceptron, or another neural network may be adopted as the supervised machine learning, in which case the first learning processing unit 231 and the second learning processing unit 233 become the learning processing units for the adopted machine learning method.
The information processing apparatus described above is a computer apparatus in which a memory, a CPU (Central Processing Unit), a hard disk drive (HDD), a display control unit connected to a display device, a drive device for a removable disk, an input device, and a communication control unit for connecting to a network are connected by a bus. An operating system (OS) and an application program for carrying out the processing in these embodiments are stored in the HDD and are read from the HDD into the memory when executed by the CPU. The CPU controls the display control unit, the communication control unit, and the drive device according to the processing content of the application program to perform predetermined operations. Data being processed is mainly stored in the memory, but may also be stored in the HDD. In embodiments of the present invention, the application program for carrying out the processing described above is stored and distributed on a computer-readable removable disk and installed on the HDD via the drive device; it may also be installed on the HDD via a network such as the Internet and the communication control unit. Such a computer apparatus realizes the various functions described above through the organic cooperation of the hardware, such as the CPU and the memory, with programs such as the OS and the application program.
The convolutional neural networks may also be implemented as programs, or high-speed processing may be enabled by incorporating a dedicated arithmetic device (for example, a Graphics Processing Unit) into the computer.
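In terms of the PyTorch-style sketches above, for example, GPU acceleration amounts to moving the model and its inputs to the device (a hedged aside, not part of this specification):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(10, 3).to(device)   # stand-in for one of the CNNs above
x = torch.randn(1, 10, device=device)
print(model(x))                             # runs on the GPU when available
```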
The embodiments described above can be summarized as follows.
A disease discrimination processing method according to a first aspect includes: (A) an execution step of performing, using a trained convolutional neural network that receives as inputs an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, and an image representing the distribution of atypical cells in the skin tissue and that outputs a discrimination of a specific tumor-associated skin disease, a predetermined operation on a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, and a third image representing the distribution of atypical cells in the skin tissue to be processed, read from a storage device storing the first, second, and third images; and (B) a step of outputting a discrimination result for the specific skin disease based on the output from the trained convolutional neural network.
If the correspondence between, on the one hand, the relationship of the distribution of atypical cells to the distributions of dermal and epidermal cells and, on the other, the presence or absence of a specific skin disease is learned in advance in this way, then inputting and evaluating the image representing the distribution of dermal cells, the image representing the distribution of epidermal cells, and the image representing the distribution of atypical cells as separate images makes it possible to differentiate a specific tumor-associated skin disease (for example, malignant melanoma).
The trained convolutional neural network may further be trained using as inputs at least one of a grayscale image of the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of lumen-forming cells in the skin tissue, and the storage device described above may store at least one of a fourth image, which is a grayscale image of the skin tissue to be processed, a fifth image representing the distribution of immune cells in the skin tissue to be processed, and a sixth image representing the distribution of lumen-forming cells in the skin tissue to be processed. In the execution step described above, at least one of the fourth, fifth, and sixth images may then be further read from the storage device and the predetermined operation executed by the trained convolutional neural network. This improves the accuracy of the discrimination.
The disease discrimination processing method according to the first aspect may further include (C) a step of generating, from the image of the skin tissue to be processed, the first to third images and at least one of the fourth to sixth images and storing them in the storage device. For this processing, another convolutional neural network may be used, or another cell type discrimination technique may be used.
A disease state prediction processing method according to a second aspect includes: (A) an execution step of performing, using a trained convolutional neural network that receives as inputs an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of atypical cells in the skin tissue, and an image representing the distribution of immune cells in the skin tissue and that outputs a prediction regarding the disease state of a specific tumor-associated skin disease, a predetermined operation on a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of atypical cells in the skin tissue to be processed, and a fourth image representing the distribution of immune cells in the skin tissue to be processed, read from a storage device storing the first to fourth images; and (B) a step of outputting a prediction result regarding the disease state of the specific skin disease based on the output from the trained convolutional neural network.
If the correspondence between, on the one hand, the relationship of the distributions of atypical cells and immune cells to the distributions of dermal and epidermal cells and, on the other, determination results regarding the disease state of a specific tumor-associated skin disease is learned in advance in this way, then inputting and evaluating the images representing the distributions of dermal cells, epidermal cells, atypical cells, and immune cells as separate images makes it possible to make predictions regarding the disease state of a specific tumor-associated skin disease (for example, malignant melanoma).
A plurality of images representing the distribution of immune cells in the skin tissue, and a plurality of fourth images, may be used, one for each type of immune cell. The prediction regarding the disease state of the specific skin disease may be any of a prediction of life expectancy, a prediction of staging, and a prediction of the effect of administering a specific drug.
The trained convolutional neural network described above may further be trained using as an input an image representing the distribution of lumen-forming cells in the skin tissue, and the storage device described above may store a fifth image representing the distribution of lumen-forming cells in the skin tissue to be processed. In that case, in the execution step described above, the fifth image may be further read from the storage device and the predetermined operation executed by the trained convolutional neural network. This improves the accuracy of the prediction.
A disease discrimination processing method according to a third aspect includes: (A) a step of performing, using a trained convolutional neural network that receives as inputs an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of atypical cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of lumen-forming cells in the skin tissue and that outputs a discrimination among a plurality of types of tumor-associated skin diseases, a predetermined operation on first to fifth images read from a storage device storing a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of atypical cells in the skin tissue to be processed, a fourth image representing the distribution of immune cells in the skin tissue to be processed, and a fifth image representing the distribution of lumen-forming cells in the skin tissue to be processed; and (B) a step of outputting a discrimination result for the plurality of types of skin diseases based on the output from the trained convolutional neural network.
This makes it possible to discriminate among plural types of tumor-related skin diseases.
Note that the image representing the distribution of atypical cells in the skin tissue and the third image may each be prepared per cell type from which the atypical cells are derived. This can improve accuracy.
The disease discrimination processing method according to a fourth aspect includes: (A) a step of executing, by a trained convolutional neural network that takes as input an image representing the distribution of dermal cells in skin tissue, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of lumen-forming cells in the skin tissue, and that outputs a discrimination among plural types of non-neoplastic skin diseases, a predetermined computation on first to fourth images read from a storage device that stores a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of immune cells in the skin tissue to be processed, and a fourth image representing the distribution of lumen-forming cells in the skin tissue to be processed; and (B) a step of outputting a discrimination result among the plural types of non-neoplastic skin diseases based on the output from the trained convolutional neural network.
The diagnosis support system according to a fifth aspect includes: (A) a trained machine-learning model that takes as input an image representing the distribution of dermal cells in skin tissue contained in a pathological image, an image representing the distribution of epidermal cells in the skin tissue, and an image representing the distribution of atypical cells in the skin tissue; and (B) an output data generation unit that generates output data concerning a skin disease based on output from the trained machine-learning model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, and a third image representing the distribution of atypical cells in the skin tissue to be processed.
Preparing a trained machine-learning model that takes as input a plurality of images, each representing the distribution of a specific type of cell in the skin tissue contained in a pathological image, makes it possible to generate various kinds of output data concerning skin diseases and thus to support highly accurate diagnosis.
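A minimal sketch of such an output data generation unit follows, assuming the model returns raw class scores as a PyTorch tensor; the label names are hypothetical.

import torch.nn.functional as F

def make_output_data(logits, labels):
    # Convert raw model scores into a labeled probability table for the diagnosis report.
    probs = F.softmax(logits, dim=1).squeeze(0)
    return {name: float(p) for name, p in zip(labels, probs)}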
The output of the trained machine-learning model described above may be a discrimination of a specific tumor-related skin disease, and the output data concerning the skin disease may include a discrimination result for that specific skin disease. This is useful for discriminating specific tumor-related skin diseases.
The input of the trained machine-learning model described above may further include at least one of a grayscale image of the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of lumen-forming cells in the skin tissue. In this case, the output data generation unit described above may generate output data including a discrimination result for the specific skin disease based on the output from the trained machine-learning model for the first to third images together with at least one of a fourth image that is a grayscale image of the skin tissue to be processed, a fifth image representing the distribution of immune cells in the skin tissue to be processed, and a sixth image representing the distribution of lumen-forming cells in the skin tissue to be processed. This improves discrimination accuracy.
Furthermore, the diagnosis support system according to the fifth aspect may further include (C) an image generation unit that generates the first to third images from an image of the skin tissue to be processed. The image generation unit may also generate at least one of the fourth to sixth images.
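One way such an image generation unit could work is sketched below, under the assumption that a per-pixel cell-type segmentation of the pathological image is already available (for example, from a separate segmentation model); the integer codes are hypothetical.

import numpy as np

CELL_CODES = {"dermal": 1, "epidermal": 2, "atypical": 3}  # hypothetical label codes

def split_distribution_maps(seg: np.ndarray) -> dict:
    # seg: (H, W) array of integer cell-type codes per pixel.
    # Returns one binary distribution map per cell type (the first to third images).
    return {name: (seg == code).astype(np.uint8) for name, code in CELL_CODES.items()}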
The input of the trained machine-learning model described above may further include an image representing the distribution of immune cells in the skin tissue, and its output may be a prediction concerning the pathology of a specific tumor-related skin disease. In such a case, the output data generation unit described above may generate output data including a prediction result concerning the pathology of the specific skin disease based on the output of the trained machine-learning model for the first to third images and a fourth image representing the distribution of immune cells in the skin tissue to be processed. For example, effects similar to those of the pathology prediction processing method according to the second aspect are obtained.
Note that a plurality of images representing the distribution of immune cells in the skin tissue, and a plurality of fourth images, may be used, one for each type of immune cell. The prediction concerning the pathology of the specific skin disease may be a prediction of life expectancy, a prediction of disease staging, or a prediction of the effect of administering a specific drug.
Furthermore, the input of the trained machine-learning model described above may further include an image representing the distribution of lumen-forming cells in the skin tissue. In this case, the output data generation unit described above may generate output data including a prediction result concerning the pathology of the specific skin disease based on the output of the trained machine-learning model for the first to fourth images and a fifth image representing the distribution of lumen-forming cells in the skin tissue to be processed. This improves prediction accuracy.
The input of the trained machine-learning model described above may further include an image representing the distribution of immune cells in the skin tissue and an image representing the distribution of lumen-forming cells in the skin tissue, and its output may be a discrimination among plural types of tumor-related skin diseases. In such a case, the output data generation unit described above may generate output data including a discrimination result among the plural types of skin diseases based on the output of the trained machine-learning model for the first to third images, a fourth image representing the distribution of immune cells in the skin tissue to be processed, and a fifth image representing the distribution of lumen-forming cells in the skin tissue to be processed.
Note that the image representing the distribution of atypical cells and the third image may each be prepared per cell type from which the atypical cells are derived. This can improve accuracy.
The diagnosis support system according to a sixth aspect includes: (A) a trained machine-learning model that takes as input an image representing the distribution of dermal cells in skin tissue contained in a pathological image, an image representing the distribution of epidermal cells in the skin tissue, an image representing the distribution of immune cells in the skin tissue, and an image representing the distribution of lumen-forming cells in the skin tissue, and that outputs a discrimination among plural types of non-neoplastic skin diseases; and (B) an output data generation unit that generates output data including a discrimination result among the plural types of non-neoplastic skin diseases based on the output of the trained machine-learning model for a first image representing the distribution of dermal cells in the skin tissue to be processed, a second image representing the distribution of epidermal cells in the skin tissue to be processed, a third image representing the distribution of immune cells in the skin tissue to be processed, and a fourth image representing the distribution of lumen-forming cells in the skin tissue to be processed.
The trained machine-learning model in the diagnosis support systems according to the fifth and sixth aspects may be a neural network (in particular, a trained convolutional neural network) or a support vector machine.
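To illustrate the support-vector-machine variant, here is a sketch in which the stacked distribution maps are flattened into one feature vector per case; the shapes and scikit-learn usage are an assumption for illustration, not the patent's prescribed pipeline.

import numpy as np
from sklearn.svm import SVC

def train_svm(images: np.ndarray, labels: np.ndarray) -> SVC:
    # images: (N, C, H, W) stacked distribution maps; labels: (N,) disease classes.
    X = images.reshape(len(images), -1)   # flatten each case into one feature vector
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)
    return clf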
In the present application, the term "system" encompasses one or more information processing apparatuses; that is, it covers both the case where a plurality of information processing apparatuses connected via a network cooperate to operate as a single system and the case where a single information processing apparatus operates alone.
A program for executing the above processing can be created, and the program is stored in a computer-readable storage medium or storage device such as a flexible disk, an optical disc (CD-ROM, DVD-ROM, etc.), a magneto-optical disk, a semiconductor memory, or a hard disk. Intermediate processing results are temporarily stored in a storage device such as a main memory.

Claims (17)

1.  A diagnosis support system comprising:
     a trained machine-learning model that takes as input an image representing a distribution of dermal cells in skin tissue contained in a pathological image, an image representing a distribution of epidermal cells in the skin tissue, and an image representing a distribution of atypical cells in the skin tissue; and
     an output data generation unit that generates output data concerning a skin disease based on output from the trained machine-learning model for a first image representing a distribution of dermal cells in skin tissue to be processed, a second image representing a distribution of epidermal cells in the skin tissue to be processed, and a third image representing a distribution of atypical cells in the skin tissue to be processed.
2.  The diagnosis support system according to claim 1, wherein the output of the trained machine-learning model is a discrimination of a specific tumor-related skin disease, and
     the output data concerning the skin disease includes a discrimination result for the specific skin disease.
3.  The diagnosis support system according to claim 2, wherein the input of the trained machine-learning model further includes at least one of a grayscale image of the skin tissue, an image representing a distribution of immune cells in the skin tissue, and an image representing a distribution of lumen-forming cells in the skin tissue, and
     the output data generation unit generates output data including the discrimination result for the specific skin disease based on output from the trained machine-learning model for the first to third images together with at least one of a fourth image that is a grayscale image of the skin tissue to be processed, a fifth image representing a distribution of immune cells in the skin tissue to be processed, and a sixth image representing a distribution of lumen-forming cells in the skin tissue to be processed.
4.  The diagnosis support system according to claim 1 or 2, further comprising an image generation unit that generates the first to third images from an image of the skin tissue to be processed.
5.  The diagnosis support system according to claim 1, wherein the input of the trained machine-learning model further includes an image representing a distribution of immune cells in the skin tissue,
     the output of the trained machine-learning model is a prediction concerning a pathology of a specific tumor-related skin disease, and
     the output data generation unit generates output data including a prediction result concerning the pathology of the specific skin disease based on output of the trained machine-learning model for the first to third images and a fourth image representing a distribution of immune cells in the skin tissue to be processed.
6.  The diagnosis support system according to claim 5, wherein a plurality of images representing the distribution of immune cells in the skin tissue and a plurality of the fourth images are used, one for each type of immune cell.
7.  The diagnosis support system according to claim 5 or 6, wherein the input of the trained machine-learning model further includes an image representing a distribution of lumen-forming cells in the skin tissue, and
     the output data generation unit generates output data including the prediction result concerning the pathology of the specific skin disease based on output of the trained machine-learning model for the first to fourth images and a fifth image representing a distribution of lumen-forming cells in the skin tissue to be processed.
8.  The diagnosis support system according to any one of claims 5 to 7, wherein the prediction concerning the pathology of the specific skin disease is one of a prediction of life expectancy, a prediction of disease staging, and a prediction of an effect of administering a specific drug.
9.  The diagnosis support system according to claim 1, wherein the input of the trained machine-learning model further includes an image representing a distribution of immune cells in the skin tissue and an image representing a distribution of lumen-forming cells in the skin tissue,
     the output of the trained machine-learning model is a discrimination among plural types of tumor-related skin diseases, and
     the output data generation unit generates output data including a discrimination result among the plural types of skin diseases based on output of the trained machine-learning model for the first to third images, a fourth image representing a distribution of immune cells in the skin tissue to be processed, and a fifth image representing a distribution of lumen-forming cells in the skin tissue to be processed.
10.  The diagnosis support system according to claim 9, wherein the image representing the distribution of atypical cells in the skin tissue and the third image are each prepared per cell type from which the atypical cells are derived.
11.  A diagnosis support system comprising:
     a trained machine-learning model that takes as input an image representing a distribution of dermal cells in skin tissue contained in a pathological image, an image representing a distribution of epidermal cells in the skin tissue, an image representing a distribution of immune cells in the skin tissue, and an image representing a distribution of lumen-forming cells in the skin tissue, and that outputs a discrimination among plural types of non-neoplastic skin diseases; and
     an output data generation unit that generates output data including a discrimination result among the plural types of non-neoplastic skin diseases based on output of the trained machine-learning model for a first image representing a distribution of dermal cells in skin tissue to be processed, a second image representing a distribution of epidermal cells in the skin tissue to be processed, a third image representing a distribution of immune cells in the skin tissue to be processed, and a fourth image representing a distribution of lumen-forming cells in the skin tissue to be processed.
12.  A diagnosis support method executed by a computer, comprising:
     obtaining, from a trained machine-learning model that takes as input an image representing a distribution of dermal cells in skin tissue contained in a pathological image, an image representing a distribution of epidermal cells in the skin tissue, and an image representing a distribution of atypical cells in the skin tissue, output for a first image representing a distribution of dermal cells in skin tissue to be processed, a second image representing a distribution of epidermal cells in the skin tissue to be processed, and a third image representing a distribution of atypical cells in the skin tissue to be processed; and
     generating output data concerning a skin disease based on the output from the trained machine-learning model.
13.  The diagnosis support method according to claim 12, wherein the output of the trained machine-learning model is a discrimination of a specific tumor-related skin disease, and
     the output data concerning the skin disease includes a discrimination result for the specific skin disease.
14.  The diagnosis support method according to claim 12, wherein the input of the trained machine-learning model further includes an image representing a distribution of immune cells in the skin tissue,
     the output of the trained machine-learning model is a prediction concerning a pathology of a specific tumor-related skin disease, and
     in the generating, output data including a prediction result concerning the pathology of the specific skin disease is generated based on output of the trained machine-learning model for the first to third images and a fourth image representing a distribution of immune cells in the skin tissue to be processed.
15.  The diagnosis support method according to claim 12, wherein the input of the trained machine-learning model further includes an image representing a distribution of immune cells in the skin tissue and an image representing a distribution of lumen-forming cells in the skin tissue,
     the output of the trained machine-learning model is a discrimination among plural types of tumor-related skin diseases, and
     in the generating, output data including a discrimination result among the plural types of skin diseases is generated based on output of the trained machine-learning model for the first to third images, a fourth image representing a distribution of immune cells in the skin tissue to be processed, and a fifth image representing a distribution of lumen-forming cells in the skin tissue to be processed.
16.  A diagnosis support method executed by a computer, comprising:
     obtaining, from a trained machine-learning model that takes as input an image representing a distribution of dermal cells in skin tissue contained in a pathological image, an image representing a distribution of epidermal cells in the skin tissue, an image representing a distribution of immune cells in the skin tissue, and an image representing a distribution of lumen-forming cells in the skin tissue, and that outputs a discrimination among plural types of non-neoplastic skin diseases, output for a first image representing a distribution of dermal cells in skin tissue to be processed, a second image representing a distribution of epidermal cells in the skin tissue to be processed, a third image representing a distribution of immune cells in the skin tissue to be processed, and a fourth image representing a distribution of lumen-forming cells in the skin tissue to be processed; and
     generating output data including a discrimination result among the plural types of non-neoplastic skin diseases based on the output of the trained machine-learning model.
17.  A program for causing one or more processors to execute the diagnosis support method according to any one of claims 12 to 16.
PCT/JP2018/020858 2017-05-30 2018-05-30 System and method for diagnostic support using pathological image of skin tissue WO2018221625A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2019521286A JP6757054B2 (en) 2017-05-30 2018-05-30 Systems and methods for diagnostic support using pathological images of skin tissue

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017-106205 2017-05-30
JP2017106205 2017-05-30

Publications (1)

Publication Number Publication Date
WO2018221625A1 true WO2018221625A1 (en) 2018-12-06

Family

ID=64454665

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/020858 WO2018221625A1 (en) 2017-05-30 2018-05-30 System and method for diagnostic support using pathological image of skin tissue

Country Status (2)

Country Link
JP (1) JP6757054B2 (en)
WO (1) WO2018221625A1 (en)



Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6471559B2 (en) * 2015-03-20 2019-02-20 カシオ計算機株式会社 Diagnostic device, image processing method, image processing system, and program for the diagnostic device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011511272A (en) * 2008-01-24 2011-04-07 ボールター インク Method for identifying malignant and benign tissue lesions

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KOJIMA, KANAE: "Present status of analyzing bio-information such as genome and medical images using deep learning (non-official translation)", JOURNAL OF CLINICAL AND EXPERIMENTAL MEDICINE, vol. 263, no. 8, 25 November 2017 (2017-11-25), pages 641 - 645 *
KOJIMA, KANAE: "Present status of analyzing bio-information such as genome and medical images using deep learning (non-official translation)", JOURNAL OF JAPANESE DERMATOLOGICAL ASSOCIATION, vol. 127, no. 5, 15 May 2017 (2017-05-15), pages 837 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020183267A1 (en) * 2019-03-08 2020-09-17 株式会社半導体エネルギー研究所 Image search method and image search system
JP7387340B2 (en) 2019-08-30 2023-11-28 株式会社 資生堂 Biological structure identification device, biological structure identification method, and computer program for biological structure identification
CN113192077A (en) * 2021-04-15 2021-07-30 华中科技大学 Automatic classification method and system for pathological graphs of cells and regional levels
CN113192077B (en) * 2021-04-15 2022-08-02 华中科技大学 Automatic classification method and system for pathological graphs of cells and regional levels

Also Published As

Publication number Publication date
JP6757054B2 (en) 2020-09-16
JPWO2018221625A1 (en) 2020-04-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18808948

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019521286

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18808948

Country of ref document: EP

Kind code of ref document: A1