WO2023145917A1 - Learning model generation method, image processing method, image processing device, computer program, and learning model - Google Patents


Info

Publication number
WO2023145917A1
WO2023145917A1 (PCT/JP2023/002769)
Authority
WO
WIPO (PCT)
Prior art keywords
image
cell
learning model
training data
generation method
Prior art date
Application number
PCT/JP2023/002769
Other languages
French (fr)
Japanese (ja)
Inventor
星児 鈴木
祐香 嶋津
匡記 松村
紘一郎 頼
Original Assignee
テルモ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社
Publication of WO2023145917A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Definitions

  • The present invention relates to a learning model generation method, an image processing method, an image processing device, a computer program, and a learning model.
  • Non-Patent Document 1 discloses a method of using a neural network to convert an electron microscope image into an image showing nuclei.
  • The present invention has been made in view of such circumstances, and aims to provide a learning model generation method, an image processing method, an image processing apparatus, a computer program, and a learning model capable of recognizing cells with high accuracy even when the imaging environment fluctuates.
  • The learning model generation method acquires first training data including a cell image obtained by imaging a cell and a nuclear staining image obtained by staining the cell, and generates, based on the acquired first training data, a learning model that outputs a nuclear staining image when a cell image is input.
  • According to the present invention, cells can be recognized with high accuracy even when the imaging environment fluctuates.
  • FIG. 1 is a diagram showing an example of the configuration of the image processing apparatus of this embodiment. FIG. 2 is a diagram showing an example of normalization processing. FIG. 3 is a diagram showing the relationship between normalized images and small images. FIG. 4 is a diagram showing an example of the sliding window method. FIG. 5 is a diagram showing an example of the configuration of a learning model. FIG. 6 is a diagram showing an example of the counting result screen of the image processing apparatus.
  • FIG. 7 is a diagram showing a cell small image and the corresponding nuclear staining small image. FIG. 8 is a diagram showing an example of cell small images in which brightness, contrast, and blur amount are varied based on a normalized cell small image. FIG. 9 is a diagram showing an example of homogenization processing of nuclear staining small images.
  • FIG. 10 is a diagram showing a medium small image and the corresponding nuclear staining small image. FIG. 11 is a diagram showing an example of medium small images in which brightness, contrast, and blur amount are varied based on a normalized medium small image.
  • FIGS. 12A and 12B are diagrams showing the first-stage generation method of the learning model. FIGS. 13A and 13B are diagrams showing the second-stage generation method. FIGS. 14A and 14B are diagrams showing the third-stage generation method.
  • FIG. 15 is a diagram showing a comparison between training data that includes medium images containing no cells and training data that does not. FIG. 16 is a diagram showing the procedure of cell counting processing by the image processing apparatus. FIG. 17 is a diagram showing the procedure for generating the first training data. FIG. 18 is a diagram showing the procedure for generating the second training data. FIG. 19 is a diagram showing the procedure of learning model generation processing by the image processing apparatus.
  • FIG. 1 is a diagram showing an example of the configuration of an image processing apparatus 50 of this embodiment.
  • The image processing device 50 includes a control unit 51 that controls the entire device, a communication unit 52, a memory 53, a normalization unit 54, an image processing unit 55, a display unit 56, an operation unit 57, a counting unit 58, a storage unit 59, and a learning processing unit 62.
  • The storage unit 59 stores a computer program 60, a learning model 61, and required information.
  • The normalization unit 54, the image processing unit 55, the counting unit 58, and the learning processing unit 62 may be configured by hardware, may be realized by a computer program (software) 60, or may be realized by a combination of hardware and software. The learning processing unit 62 may be incorporated in an external device, and the learning model 61 generated by that device may be stored in the storage unit 59. The image processing device 50 may also be configured by a plurality of devices.
  • The control unit 51 can be configured with a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), a GPU (Graphics Processing Unit), or the like.
  • The control unit 51 can execute the processing defined by the computer program 60; that is, processing by the control unit 51 is also processing by the computer program 60.
  • The communication unit 52 includes, for example, a communication module and can transmit and receive information to and from an external device using wireless or wired communication.
  • The communication unit 52 functions as an acquisition unit and can acquire, from an external device, a cell image obtained by imaging cells, a medium image containing no cells, and a nuclear staining image obtained by staining cells.
  • The cells are not particularly limited and include, for example, adherent cells and suspension (floating) cells.
  • Adherent cells include, for example, adherent somatic cells. Examples of somatic cells include myoblasts (e.g., skeletal myoblasts), muscle satellite cells, mesenchymal stem cells (e.g., those derived from bone marrow, adipose tissue, peripheral blood, skin, hair root, muscle tissue, endometrium, placenta, or umbilical cord blood), cardiomyocytes, fibroblasts, tissue stem cells such as cardiac stem cells, embryonic stem cells, pluripotent stem cells such as iPS (induced pluripotent stem) cells, synovial cells, chondrocytes, epithelial cells (e.g., oral mucosal epithelial cells, retinal pigment epithelial cells, nasal mucosal epithelial cells), endothelial cells (e.g., vascular endothelial cells), hepatocytes (e.g., hepatic parenchymal cells), pancreatic cells (e.g., pancreatic islet cells), kidney cells, adrenal cells, periodontal ligament cells, gingival cells, periosteal cells, and skin cells.
  • Somatic cells may be cells differentiated from iPS cells (iPS cell-derived cells), such as iPS cell-derived cardiomyocytes, fibroblasts, myoblasts, epithelial cells, endothelial cells, hepatocytes, pancreatic cells, kidney cells, adrenal cells, periodontal ligament cells, gingival cells, periosteal cells, skin cells, synovial cells, and chondrocytes.
  • Suspension cells include lymphocytes (T lymphocytes, B lymphocytes) and the like.
  • As the cells, nucleated cells having a nucleus inside the cell are preferable.
  • The culture medium in the early stage of culture contains foreign substances, such as tissue fragments, that are easily mistaken for cells.
  • A medium image is an image that does not contain cells but does contain such foreign substances.
  • The memory 53 can be composed of semiconductor memory such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory.
  • The computer program 60 can be loaded into the memory 53 and executed by the control unit 51.
  • The normalization unit 54 converts the cell image and the medium image acquired via the communication unit 52 into grayscale, and normalizes the grayscale-converted images (cell image and culture medium image).
  • A normalized grayscale-converted image is also referred to as a normalized image.
  • FIG. 2 is a diagram showing an example of normalization processing.
  • The horizontal axis of the graph in FIG. 2 indicates the pixel value (luminance value: 0 to 255), and the vertical axis indicates the abundance ratio of each pixel value relative to the most frequent pixel value.
  • The grayscale-converted images Gi and Gj are images captured in different imaging environments (for example, at different times or locations) by an imaging device such as an optical phase-contrast microscope.
  • The vertical and horizontal resolution (m × n) of an image differs depending on the imaging device.
  • The resolution m or n can be, for example, about 1000, 1200, or 1400, but is not limited to these.
  • A histogram of the pixel values (for example, 0 to 255) in a specific region Ri of the grayscale-converted image Gi is created, and the distribution of the histogram is approximated by a predetermined probability distribution (for example, a normal distribution); let ai be the mean (brightness) and σi the variance (contrast) of this approximation.
  • Similarly, a histogram of the pixel values (for example, 0 to 255) in a specific region Rj of the grayscale-converted image Gj is created and its distribution is approximated by the same predetermined probability distribution; let aj be the mean (brightness) and σj the variance (contrast).
  • The specific regions Ri and Rj may be any regions in the respective grayscale-converted images, but the two regions must have the same size.
  • Let a be the mean of the predetermined probability distribution (for example, a normal distribution) and σ its variance.
  • The mean a may be, for example, 128, and the variance σ may be, for example, 15.
  • The mean (brightness) a and variance (contrast) σ can be set based on values obtained when images are captured in an imaging environment with good lighting conditions and no condensation on the flask or the like.
  • The normalization unit 54 transforms each grayscale-converted image so that its pixel value distribution approximates the predetermined probability distribution. More specifically, the normalization unit 54 transforms the grayscale-converted image so that the mean of its pixel value distribution becomes the mean of the probability distribution and the variance of its pixel value distribution becomes the variance of the probability distribution.
  • In the example of FIG. 2, the pixel values of all pixels of the grayscale-converted image Gi are transformed so that the mean ai of the distribution of Gi becomes the mean a of the normal distribution and the variance σi becomes the variance σ of the normal distribution.
  • Likewise, the pixel values of all pixels of the grayscale-converted image Gj are transformed so that the mean aj becomes the mean a and the variance σj becomes the variance σ of the normal distribution.
  • By normalizing the grayscale-converted images (cell images and culture medium images) in this way, the impact of changes in the imaging environment on image quality is absorbed, an image whose quality follows the predetermined probability distribution is obtained, and cells can be recognized with high accuracy by the processing described later.
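  • Concretely, this normalization amounts to a linear remapping of pixel values. The following is a minimal sketch in Python/NumPy, assuming σ is used as the spread (standard deviation) of the fitted distribution; the function name, the use of the whole image as the reference region, and the clipping to 0–255 are illustrative assumptions, not details given in the patent.

```python
import numpy as np

def normalize_image(gray, target_mean=128.0, target_sigma=15.0):
    """Linearly remap pixel values so the image's brightness (mean) and
    contrast (spread) match the target distribution parameters."""
    mean, sigma = float(gray.mean()), float(gray.std())
    out = (gray.astype(np.float64) - mean) / max(sigma, 1e-6) * target_sigma
    out += target_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```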
  • The image processing unit 55 divides the normalized image (cell image or culture medium image) into small images of a predetermined size.
  • A small image obtained by dividing a normalized cell image is called a cell small image (recognition target image).
  • The predetermined size can be, for example, a resolution of 256 × 256, but is not limited to this.
  • The image processing unit 55 uses a sliding window method when dividing the normalized image into small images.
  • FIG. 3 is a diagram showing the relationship between normalized images and small images.
  • Here, the normalized image is described as a cell image, but it may equally be a medium image.
  • For a cell small image Dm extracted from an area away from the periphery of the normalized image, the information in the peripheral portion of Dm can also be recognized in the central portion of an adjacent cell small image such as D(m+1) obtained by the sliding window method, so there is no problem even though peripheral information is input to the learning model 61.
  • However, when the edge E1 of a 256 × 256 area is adjacent to the boundary of the normalized image, information (hints) from outside the normalized image cannot be obtained, so the accuracy of cell recognition decreases.
  • Similarly, no information (hints) from outside the normalized image can be obtained at the edge E2 of a 256 × 256 area.
  • Therefore, the image processing unit 55 uses the sliding window method when dividing the normalized image into small images so that the cell recognition accuracy is not lowered.
  • FIG. 4 is a diagram showing an example of the sliding window method.
  • Here too, the normalized image is described as a cell image, but it may be a medium image.
  • The image processing unit 55 inserts a frame-shaped area W having a required pixel value around the normalized image (a non-normalized cell image may also be used).
  • The required pixel value includes a statistical value (e.g., the mean, mode, or median) of the pixel values in the outer peripheral region of the normalized image adjacent to the frame-shaped area W.
  • When the edge of a 256 × 256 area is adjacent to the frame-shaped area W, deterioration in cell recognition accuracy can be suppressed compared with the case where the pixel values of the frame-shaped area W are all 0 or all 255.
  • Even when the end of a 256 × 256 area is adjacent to the frame-shaped area W, the decrease in cell recognition accuracy (detection accuracy) can likewise be suppressed.
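  • A minimal sketch of this division, assuming a 256 × 256 tile: the stride, the border width, and the choice of the median as the frame statistic are illustrative assumptions (the patent only requires a statistical value such as the mean, mode, or median of the adjacent outer region).

```python
import numpy as np

def pad_and_tile(img, tile=256, stride=128, border=64):
    """Insert a frame-shaped area whose pixel value is a statistic of the
    image's outer pixels, then cut overlapping tiles (sliding window)."""
    edge = np.concatenate([img[0], img[-1], img[:, 0], img[:, -1]])
    frame_value = int(np.median(edge))  # statistic of the adjacent outer region
    padded = np.pad(img, border, mode="constant", constant_values=frame_value)
    tiles, positions = [], []
    for y in range(0, padded.shape[0] - tile + 1, stride):
        for x in range(0, padded.shape[1] - tile + 1, stride):
            tiles.append(padded[y:y + tile, x:x + tile])
            positions.append((y, x))
    return tiles, positions
```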
  • The display unit 56 can be configured with a liquid crystal display, an organic EL display, or the like. An external display device may be connected to the image processing device 50 instead of the display unit 56.
  • The operation unit 57 is composed of, for example, a keyboard, a mouse, a touch pad, or a touch panel, and can accept operations on the information displayed on the display unit 56.
  • The image processing device 50 divides the acquired normalized image into cell small images of a predetermined size, generates a virtual nuclear staining small image of a predetermined size from each divided cell small image, and stitches the virtual nuclear staining small images together to generate a virtual nuclear staining image of the same size as the original normalized image.
  • A nuclear staining image is an image of a cell in which only the nucleus inside the cell has actually been stained using a reagent that binds to the nucleic acids abundant in the cell nucleus.
  • A virtual nuclear staining image resembles a nuclear staining image, but it represents the staining virtually rather than being an image of actually stained cells.
  • A learning model 61 is used to generate the virtual nuclear staining images (virtual nuclear staining small images). The learning model 61 is described below.
  • FIG. 5 is a diagram showing an example of the configuration of the learning model 61.
  • The learning model 61 comprises an input layer 611, a first network layer 612, a second network layer 613, and an output layer 614.
  • The learning model 61 can be, for example, an autoencoder, but is not limited to this.
  • A 256 × 256 cell small image (cell image) is input to the input layer 611; the cell small image contains cells.
  • A 256 × 256 virtual nuclear staining small image (virtual nuclear staining image) is output from the output layer 614; the virtual nuclear staining small image contains cell nuclei.
  • The first network layer 612 comprises convolution layers that weight the information in the input cell small image, select highly important information, and remove the rest, thereby compressing the input data and extracting the features of the cell small image.
  • The second network layer 613 comprises deconvolution layers and, based on the features extracted in the first network layer 612, generates a virtual nuclear staining small image as an output image from the compressed data.
  • The first network layer 612 includes, for example, an encoder, and the second network layer 613 includes a decoder.
  • The learning model 61 is not limited to an autoencoder and may be any model capable of image generation, such as U-Net.
  • The cell small image and the virtual nuclear staining small image are shown schematically; the shape and number of cells in the cell small image and the shape and number of cell nuclei in the virtual nuclear staining small image may differ from the actual ones.
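  • As one concrete but hypothetical realization of this encoder-decoder structure, the sketch below uses PyTorch. The channel counts, kernel sizes, and activations are illustrative assumptions; the patent specifies only convolution layers for the first network layer and deconvolution layers for the second.

```python
import torch
from torch import nn

class VirtualStainingNet(nn.Module):
    """Encoder-decoder mapping a 1x256x256 cell small image to a
    1x256x256 virtual nuclear staining small image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(   # corresponds to the first network layer 612
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(   # corresponds to the second network layer 613
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```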
  • The counting unit 58 identifies the positions of cell nuclei based on the virtual nuclear staining image of the same size as the original normalized image (for example, 1200 × 1200), and counts the number of cells (the number of cell nuclei) per cell image.
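  • The patent does not specify how nucleus positions are extracted from the virtual nuclear staining image; one plausible sketch is connected-component labeling of bright spots, as below. The threshold and minimum-area values are assumptions.

```python
import numpy as np
from scipy import ndimage

def count_nuclei(virtual_stain, threshold=64, min_area=4):
    """Treat bright blobs as nuclei: threshold, label connected
    components, drop tiny specks, and return centroid positions."""
    mask = virtual_stain > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0, []
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = np.flatnonzero(sizes >= min_area) + 1
    centroids = ndimage.center_of_mass(mask, labels, index=keep)
    return len(centroids), centroids
```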
  • FIG. 6 is a diagram showing an example of the counting result screen 100 displayed by the image processing device 50.
  • The counting result screen 100 is displayed, for example, on the display unit 56.
  • The counting result screen 100 displays an uploader area 101, an image selection area 102, a counting result display area 104, an "evaluation" icon 103, and the like.
  • The image processing apparatus 50 displays, in the counting result display area 104, the counting result based on the cell image.
  • An image list can be displayed in the image selection area 102.
  • The image selection area 102 displays columns for a selection box, the cell image creation date, the cell image file name, the cell image source, and the cell count.
  • The cell count column displays the number of cells when the cell image has been evaluated, and displays "not evaluated" otherwise.
  • The counting result is displayed in the counting result display area 104: a cell image on which the positions of the cell nuclei are superimposed is shown there.
  • The cell nuclei are indicated by black circles, but they may be indicated in other ways, for example with an identification mark such as a cross.
  • The cell image is shown schematically, and the number of cell nuclei in it may differ from the actual number. The file name of the original cell image and the number of detected cells may also be displayed in the counting result display area 104.
  • The number of cells is important information for managing the state of a cell culture, for example for determining whether the culture is complete or whether additional culture is required.
  • The number of cells in one cell image may reach several hundred (for example, about 400); counting them by visual inspection places a heavy burden on humans, and the judgment varies from person to person.
  • In addition, foreign substances such as tissue fragments may be misidentified as cells.
  • In the present embodiment, not only is the number of cells automatically counted using deep learning without relying on visual observation, but the normalization processing also absorbs variations in the imaging environment, so a cell image whose quality follows a predetermined probability distribution is obtained and cells can be recognized (detected) with high accuracy.
  • Sample cleaning is not required, and cells can be recognized with high accuracy even when the sample condition fluctuates.
  • The sliding window method suppresses the decrease in cell recognition accuracy near the periphery of each cell small image (recognition target image), and adding a frame-shaped area to the recognition target image suppresses the decrease in accuracy of recognizing cells near the periphery of the normalized image.
  • FIG. 7 is a diagram showing a cell small image Di and the corresponding nuclear staining small image Si.
  • The cell small image Di contains cells, and the nuclear staining small image Si is the nuclear staining image corresponding to Di and contains stained cell nuclei.
  • The staining state of the cell nuclei is not uniform, and the degree of staining varies.
  • The cell small images and nuclear staining small images are schematic representations, and the shape and number of cells in the cell small images and the shape and number of cell nuclei in the nuclear staining small images may differ from the actual ones.
  • The cell small image Di is obtained by dividing the original cell image; that is, it is obtained by performing the normalization process shown in FIG. 2 on the original grayscale-converted image and dividing the normalized cell image using the sliding window method shown in FIG. 4.
  • The nuclear staining small image Si is obtained by dividing the nuclear staining image of the stained cells into images of the same size as the cell small image Di.
  • The index i is an image index indicating that the two images correspond to each other.
  • FIG. 8 is a diagram showing an example of cell small images in which the brightness, contrast, and blur amount are varied based on a normalized cell small image.
  • The image processing unit 55 independently varies the brightness, contrast, and blur amount of the normalized cell small image Di over at least three stages.
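  • A minimal sketch of such augmentation: three stages each for brightness, contrast, and blur give 27 variants per tile. The step values below are illustrative assumptions; the patent says only that the three properties are varied independently over at least three stages.

```python
import numpy as np
from scipy import ndimage

def augment(tile, brightness=(-10, 0, 10), contrast=(0.9, 1.0, 1.1),
            blur_sigma=(0.0, 0.5, 1.0)):
    """Vary brightness, contrast, and blur amount independently,
    three stages each, yielding 27 variants of one normalized tile."""
    variants = []
    for b in brightness:
        for c in contrast:
            for s in blur_sigma:
                # scale contrast about mid-gray, then shift brightness
                img = (tile.astype(np.float64) - 128.0) * c + 128.0 + b
                if s > 0:
                    img = ndimage.gaussian_filter(img, sigma=s)
                variants.append(np.clip(img, 0, 255).astype(np.uint8))
    return variants
```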
  • FIG. 9 is a diagram showing an example of homogenization processing of nuclear-stained small images.
  • The image processing unit 55 converts each stained cell nucleus into a predetermined shape and pixel value to generate the nuclear staining small image Si. Specifically, as shown in FIG. 9, the image processing unit 55 binarizes the nuclear staining small image Si, converting it into a binarized image containing point-like (one-dot) pixels. Each one-dot pixel in the binarized image represents the position of a cell nucleus.
  • The image processing unit 55 then applies Gaussian convolution (blurring) to the binarized image, setting the surroundings of each point-like pixel (for example, a diameter of 5 to 6 dots) to the required pixel value, and thereby generates the homogenized nuclear staining small image Si.
  • Convolution moves a pixel block (the convolution operator) of a predetermined size over the nuclear staining small image and, at each position, uses as the new central pixel value the sum of the products of each element of the convolution operator and the corresponding pixel value of the image.
  • Gaussian convolution uses a Gaussian filter whose convolution operator is given by a Gaussian function.
  • The required pixel value may be constant, or may have a gradation in which the pixel value changes outward from the point-like pixel.
  • As a result, the pixels around every cell nucleus become uniform, forming spots with a diameter of 5 to 6 dots.
  • Before this homogenization, the brightness of the cell nuclei is not uniform because it depends on the degree of staining.
  • The flow of the homogenization process is as follows: (1) to remove noise, set the pixel values of pixels below a certain threshold to 0; (2) detect each bright spot and convert it into one dot; (3) blur each one-dot pixel to convert it into a spot with a diameter of 5 to 6 dots.
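  • A minimal sketch of this three-step homogenization, assuming the noise threshold and the Gaussian width are free parameters (σ of about 1.5 gives a visible spot roughly 5 to 6 pixels across):

```python
import numpy as np
from scipy import ndimage

def homogenize(stain, noise_threshold=32, sigma=1.5):
    """(1) zero out sub-threshold noise, (2) reduce each bright spot to a
    one-dot pixel at its centroid, (3) blur the dots into uniform spots."""
    img = np.where(stain < noise_threshold, 0, stain).astype(np.float64)
    labels, n = ndimage.label(img > 0)
    dots = np.zeros_like(img)
    if n > 0:
        for y, x in ndimage.center_of_mass(img, labels, index=range(1, n + 1)):
            dots[int(round(y)), int(round(x))] = 255.0
    out = ndimage.gaussian_filter(dots, sigma=sigma)  # Gaussian convolution
    if out.max() > 0:
        out = out / out.max() * 255.0
    return out.astype(np.uint8)
```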
  • The control unit 51 acquires first training data including cell small images obtained by imaging cells and nuclear staining small images obtained by staining the cells. That is, the control unit 51 can acquire, as the first training data, the augmented cell small images illustrated in FIG. 8 and the corresponding homogenized nuclear staining small images Si. This allows a large amount of training data to be collected.
  • FIG. 10 is a diagram showing a medium small image Ci and the corresponding nuclear staining small image Si.
  • The medium small image Ci does not contain cells.
  • Since the corresponding nuclear staining small image Si contains no stained cell nuclei, it is, for example, a black image.
  • The medium small image Ci is shown schematically, and the shape and number of foreign substances such as tissue fragments and scratches may differ from the actual ones.
  • The medium small image Ci is obtained by dividing the original medium image; that is, it is obtained by performing the normalization process shown in FIG. 2 on the original grayscale-converted image and dividing the normalized medium image using the sliding window method shown in FIG. 4.
  • The nuclear staining small image Si is an image (for example, a black image) of the same size as the medium small image Ci.
  • The index i is an image index indicating that the two images correspond to each other.
  • FIG. 11 is a diagram showing an example of medium small images in which the brightness, contrast, and blur amount are varied based on a normalized medium small image.
  • The image processing unit 55 independently varies the brightness, contrast, and blur amount of the normalized medium small image Ci over at least three stages.
  • The control unit 51 acquires second training data including medium small images containing no cells and the nuclear staining small images corresponding to the medium small images. That is, the control unit 51 can acquire, as the second training data, the augmented medium small images illustrated in FIG. 11 and the corresponding nuclear staining small images Si (black images). This allows a large amount of training data to be collected.
  • The first training data and the second training data are also collectively referred to as third training data.
  • The learning model 61 may be generated in three stages.
  • Based on the acquired first training data, the learning processing unit 62 generates the learning model 61 so that, when a cell small image is input, a nuclear staining small image is output.
  • The first training data may include cell small images whose pixel values have been transformed so that their pixel value distribution approximates a predetermined probability distribution (e.g., a normal distribution), cell small images whose pixel values are transformed so that the mean or variance of their pixel value distribution becomes the mean or variance of the predetermined probability distribution, and cell small images obtained by varying at least one of the mean and variance of the pixel value distribution and the blur amount.
  • The cell small images may include cell small images in which a frame-shaped area having a required pixel value is inserted in the periphery, as illustrated in FIG. 4, and the nuclear staining small images may likewise include nuclear staining small images with such a frame-shaped area inserted in the periphery.
  • Similarly, the second training data may include medium small images whose pixel values have been transformed so that their pixel value distribution approximates a predetermined probability distribution (e.g., a normal distribution), medium small images whose pixel values are transformed so that the mean or variance of their pixel value distribution becomes the mean or variance of the predetermined probability distribution, and medium small images obtained by varying at least one of the mean and variance of the pixel value distribution and the blur amount.
  • The medium small images may include medium small images in which a frame-shaped area having a required pixel value is inserted in the periphery, as illustrated in FIG. 4.
  • FIGS. 12A and 12B are diagrams showing the method of generating the learning model 61 in the first stage.
  • In the first stage, the learning processing unit 62 trains the learning model 61 (the first network layer 612) so that, when a cell small image or a medium small image is input, the same cell small image or medium small image is output.
  • As shown in FIG. 12A, the learning processing unit 62 compares the cell small image Di received at the input layer with the cell small image output from the output layer, and adjusts the parameters so that the input Di is reproduced as faithfully as possible.
  • As shown in FIG. 12B, the learning processing unit 62 likewise compares the medium small image Ci received at the input layer with the medium small image output from the output layer, and adjusts the parameters so that the input Ci is reproduced as faithfully as possible. Error backpropagation, for example, can be used to adjust the parameters by comparing the input data and the output data. Because this stage learns to match the input and output of the learning model 61, its training data can be obtained at low cost.
  • FIGS. 13A and B are diagrams showing a method of generating the learning model 61 in the second stage.
  • In the second stage, the learning processing unit 62 fixes the parameters of the first network layer 612 learned in the first stage, initializes the parameters of the second network layer 613, and trains the learning model 61 (the second network layer 613) so that a nuclear staining small image is output when a cell small image or a medium small image is input.
  • As shown in FIG. 13A, the learning processing unit 62 adjusts the parameters of the second network layer 613 so that, when the cell small image Di is input, the corresponding nuclear staining small image Si is output from the output layer.
  • As shown in FIG. 13B, the learning processing unit 62 adjusts the parameters of the second network layer 613 so that, when the medium small image Ci is input, the corresponding nuclear staining small image Si (for example, a black image) is output from the output layer.
  • FIGS. 14A and 14B are diagrams showing the method of generating the learning model 61 in the third stage.
  • In the third stage, the learning processing unit 62 releases the fixed parameters of the first network layer 612 and generates the learning model 61 by adjusting the parameters of both the first network layer 612 and the second network layer 613 so that a nuclear staining small image is output when a cell small image or a medium small image is input.
  • As shown in FIG. 14A, the learning processing unit 62 adjusts the parameters of the first network layer 612 and the second network layer 613 so that, when the cell small image Di is input, the corresponding nuclear staining small image Si is output.
  • As shown in FIG. 14B, the learning processing unit 62 likewise adjusts the parameters of the first network layer 612 and the second network layer 613 so that, when the medium small image Ci is input, the corresponding nuclear staining small image Si is output.
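  • Combined with the encoder-decoder sketch shown earlier, the three-stage schedule could look like the following. The loaders are assumed to yield (input, target) pairs, with targets equal to the inputs in stage 1; the loss, optimizer, learning rate, and epoch count are illustrative assumptions, since the patent specifies only input/output comparison (e.g., by error backpropagation) and the fix/initialize/release order.

```python
import torch
from torch import nn

def train_three_stages(model, recon_loader, stain_loader, epochs=10):
    """Stage 1: reproduce inputs; stage 2: decoder only, encoder frozen;
    stage 3: joint fine-tuning toward nuclear staining targets."""
    loss_fn = nn.MSELoss()

    def run(loader, params):
        opt = torch.optim.Adam(params, lr=1e-3)
        for _ in range(epochs):
            for x, target in loader:
                opt.zero_grad()
                loss_fn(model(x), target).backward()
                opt.step()

    # Stage 1: targets equal the inputs (cell and medium small images).
    run(recon_loader, model.parameters())
    # Stage 2: fix the first network layer, reinitialize and train the second.
    for p in model.encoder.parameters():
        p.requires_grad = False
    for m in model.decoder.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
            m.reset_parameters()
    run(stain_loader, model.decoder.parameters())
    # Stage 3: release the first layer and adjust both layers together.
    for p in model.encoder.parameters():
        p.requires_grad = True
    run(stain_loader, model.parameters())
```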
  • FIG. 15 is a diagram showing a comparison between training data that includes medium images containing no cells and training data that does not.
  • The horizontal axis indicates the true cell count in one captured image, and the vertical axis indicates the ratio of the cell count obtained with the learning model 61 to the true value.
  • The solid line shows the cell count obtained with the learning model 61 trained using medium images (medium small images) containing no cells, and the broken line shows the cell count obtained with the learning model 61 trained without such medium images.
  • As described above, the culture medium in the early stage of culture contains foreign substances such as tissue fragments that are easily mistaken for cells.
  • When the number of cells in one captured image is approximately 100 or less, many tissue fragments are present; with the learning model 61 trained without medium images, these tissue fragments are frequently misidentified as cells, so the ratio exceeds 100%.
  • With the learning model 61 trained using medium images, there are few errors in which tissue fragments are misidentified as cells.
  • Thus, by generating the learning model 61 with training data that includes medium images and the corresponding nuclear staining images (for example, black images), the cell recognition accuracy in the early stage of culture can be improved.
  • FIG. 16 is a diagram showing the procedure of cell counting processing by the image processing device 50.
  • The control unit 51 acquires a cell image (S11).
  • The cell image is an image with vertical and horizontal resolutions (m × n) of, for example, about 1000, 1200, or 1400.
  • The control unit 51 converts the acquired cell image into grayscale (S12) and normalizes the grayscale-converted image (S13). For the normalization, the method illustrated in FIG. 2 can be used.
  • The control unit 51 divides the normalized image into cell small images using the sliding window method (S14).
  • The resolution (size) of each divided cell small image can be, for example, 256 × 256.
  • The control unit 51 inputs each cell small image to the learning model 61 and acquires the virtual nuclear staining small image output by the learning model 61 (S15).
  • The control unit 51 stitches the acquired virtual nuclear staining small images into a virtual nuclear staining image of the same size as the cell image (S16).
  • The control unit 51 identifies the positions of cell nuclei based on the virtual nuclear staining image (S17) and counts the number of cells (the number of cell nucleus positions) (S18).
  • The control unit 51 superimposes the positions of the cell nuclei on the cell image, outputs the counting result (S19), and ends the process.
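  • Tying the earlier sketches together, steps S12 to S18 might look as follows. This reuses the hypothetical helpers normalize_image, pad_and_tile, and count_nuclei defined above; the maximum-based stitching of overlapping tiles and the stride assumptions are illustrative, not taken from the patent.

```python
import numpy as np
import torch

def count_cells(cell_image, model, tile=256, stride=128, border=64):
    """S12-S13: normalize; S14: tile; S15: infer per tile;
    S16: stitch; S17-S18: locate and count nuclei."""
    norm = normalize_image(cell_image)
    tiles, positions = pad_and_tile(norm, tile, stride, border)
    h, w = norm.shape[0] + 2 * border, norm.shape[1] + 2 * border
    stitched = np.zeros((h, w))  # assumes padded size is stride-compatible
    model.eval()
    with torch.no_grad():
        for t, (y, x) in zip(tiles, positions):
            inp = torch.from_numpy(t / 255.0).float()[None, None]
            out = model(inp)[0, 0].numpy() * 255.0
            patch = stitched[y:y + tile, x:x + tile]
            stitched[y:y + tile, x:x + tile] = np.maximum(patch, out)
    stitched = stitched[border:-border, border:-border]  # drop the frame
    return count_nuclei(stitched.astype(np.uint8))
```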
  • FIG. 17 is a diagram showing the procedure for generating the first training data by the image processing device 50.
  • The control unit 51 acquires a cell image and the corresponding nuclear staining image (S31).
  • The cell image and the corresponding nuclear staining image are images with vertical and horizontal resolutions (m × n) of, for example, about 1000, 1200, or 1400.
  • The control unit 51 converts the acquired cell image into grayscale (S32) and normalizes the converted cell image (S33). For the normalization, the method illustrated in FIG. 2 can be used.
  • Based on the normalized cell image, the control unit 51 generates cell images with varied brightness, contrast, and blur amount (S34). The method illustrated in FIG. 8 can be used for this generation.
  • The control unit 51 binarizes the acquired nuclear staining image to generate a binarized image (S35), and applies Gaussian convolution to the generated binarized image to generate a homogenized nuclear staining image (S36). Steps S35 and S36 can use the method illustrated in FIG. 9.
  • The control unit 51 generates the training data (first training data) by dividing the generated cell images and nuclear staining images into mutually corresponding small images (cell small images and nuclear staining small images) using the sliding window method (S37), and ends the process.
  • The resolution (size) of the divided cell small images and nuclear staining small images can be, for example, 256 × 256.
  • FIG. 18 is a diagram showing the procedure for generating the second training data by the image processing device 50.
  • The control unit 51 acquires a medium image containing no cells (S41).
  • The medium image is an image with vertical and horizontal resolutions (m × n) of, for example, about 1000, 1200, or 1400.
  • The control unit 51 converts the acquired medium image into grayscale (S42) and normalizes the converted medium image (S43). For the normalization, the method illustrated in FIG. 2 can be used.
  • Based on the normalized medium image, the control unit 51 generates medium images with varied brightness, contrast, and blur amount (S44). The method illustrated in FIG. 11 can be used for this generation.
  • The control unit 51 acquires, as the nuclear staining small image, an image in which all pixel values have a predetermined value (S45).
  • The nuclear staining small image can be, for example, a black image (all pixel values 0).
  • The control unit 51 divides the generated medium images into small images (medium small images) using the sliding window method, associates them with the acquired nuclear staining small images, generates the training data (second training data) (S46), and ends the process.
  • The resolution (size) of the divided medium small images and the corresponding nuclear staining small images can be, for example, 256 × 256.
  • FIG. 19 is a diagram showing the procedure for generating the learning model 61 by the image processing device 50.
  • The learning model 61 is generated using the first training data and the second training data (collectively, the third training data) generated by the processing shown in FIGS. 17 and 18.
  • The control unit 51 acquires cell small images and medium small images (S51).
  • The size of the cell small images and medium small images can be, for example, 256 × 256.
  • The control unit 51 initializes the parameters of the learning model 61 and adjusts the parameters so that, when a cell small image is input to the learning model 61, the same cell small image is output (S52).
  • Similarly, the control unit 51 adjusts the parameters so that, when a medium small image is input, the same medium small image is output (S53).
  • The control unit 51 acquires the nuclear staining small images corresponding to the cell small images and the nuclear staining small images corresponding to the medium small images (S54), fixes the parameters of the first network layer 612 of the learning model 61, and initializes the parameters of the second network layer 613 (S55).
  • The control unit 51 adjusts the parameters so that the nuclear staining small image corresponding to an input cell small image is output (S56).
  • The control unit 51 adjusts the parameters so that the nuclear staining small image corresponding to an input medium small image is output (S57).
  • The control unit 51 then releases the fixed parameters of the first network layer 612 of the learning model 61 (S58).
  • The control unit 51 adjusts the parameters so that the nuclear staining small image corresponding to an input cell small image is output (S59).
  • The control unit 51 adjusts the parameters so that the nuclear staining small image corresponding to an input medium small image is output (S60).
  • The control unit 51 stores the generated learning model 61 in the storage unit 59 (S61) and ends the process.
  • As described above, the image processing method of the present embodiment acquires a cell image obtained by imaging cells, transforms the cell image so that its pixel value distribution approximates a predetermined probability distribution, generates a virtual nuclear staining image based on the transformed cell image, and counts the number of cells based on the generated virtual nuclear staining image.
  • The image processing method of this embodiment transforms the cell image so that the mean of the pixel value distribution of the acquired cell image becomes the mean of the probability distribution and the variance becomes the variance of the probability distribution.
  • The image processing method of the present embodiment inserts a frame-shaped region having a required pixel value around the acquired or transformed cell image, and generates the virtual nuclear staining image based on the cell image with the inserted frame-shaped region.
  • The required pixel value includes a statistical value of the pixel values in the peripheral area of the cell image adjacent to the frame-shaped region.
  • The image processing method of this embodiment identifies the positions of cell nuclei based on the generated virtual nuclear staining image and counts the number of identified nucleus positions as the number of cells.
  • The image processing method of this embodiment superimposes the identified nucleus positions on the acquired cell image and displays the result.
  • The image processing apparatus of the present embodiment includes an acquisition unit that acquires a cell image obtained by imaging cells, a conversion unit that transforms the cell image so that its pixel value distribution approximates a predetermined probability distribution, a generation unit that generates a virtual nuclear staining image based on the transformed cell image, and a counting unit that counts the number of cells based on the generated virtual nuclear staining image.
  • The computer program of the present embodiment causes a computer to execute processing of acquiring a cell image obtained by imaging cells, transforming the cell image so that its pixel value distribution approximates a predetermined probability distribution, generating a virtual nuclear staining image based on the transformed cell image, and counting the number of cells based on the generated virtual nuclear staining image.
  • The learning model generation method of the present embodiment acquires first training data including a cell image obtained by imaging cells and a nuclear staining image obtained by staining the cells, and generates a learning model based on the acquired first training data so that a nuclear staining image is output when a cell image is input.
  • The first training data includes cell images whose pixel values have been transformed so that their pixel value distribution approximates a predetermined probability distribution, cell images whose pixel values are transformed so that the mean or variance of the pixel value distribution becomes the mean or variance of the predetermined probability distribution, and cell images obtained by varying at least one of the mean and variance of the pixel value distribution and the blur amount of the cell images.
  • The learning model generation method of the present embodiment acquires second training data including a medium image containing no cells and a nuclear staining image corresponding to the medium image, and generates the learning model based on the acquired second training data so that a nuclear staining image is output when a medium image containing no cells is input.
  • The second training data includes medium images whose pixel values have been transformed so that their pixel value distribution approximates a predetermined probability distribution, medium images whose pixel values are transformed so that the mean or variance of the pixel value distribution becomes the mean or variance of the predetermined probability distribution, and medium images obtained by varying at least one of the mean and variance of the pixel value distribution and the blur amount of the medium images.
  • The medium images include medium images in which a frame-shaped area having a required pixel value is inserted in the periphery.
  • The nuclear staining images include nuclear staining images obtained by converting the stained cell nuclei into a predetermined shape and pixel value.
  • The cell images or nuclear staining images include cell images or nuclear staining images in which a frame-shaped area having a required pixel value is inserted in the periphery.
  • The learning model includes a first network layer that extracts the features of an input image and a second network layer that outputs an output image based on the extracted features.
  • Third training data including captured cell images, medium images containing no cells, and the nuclear staining images corresponding to the cell images or medium images are acquired; based on the acquired third training data, the first network layer is trained so that the cell images and medium images are reproduced at the output when they are input; the learned parameters of the first network layer are then fixed, and the second network layer is trained using the acquired third training data so that a nuclear staining image is output; finally, the learning model is generated by training the first network layer and the second network layer so that a nuclear staining image is output when a cell image or medium image is input.
  • The image processing method of the present embodiment acquires a cell image obtained by imaging cells, inputs the acquired cell image into the learning model generated by the learning model generation method described above to acquire a nuclear staining image, and outputs the acquired nuclear staining image.
  • The image processing apparatus of the present embodiment includes the learning model generated by the learning model generation method described above, a first acquisition unit that acquires a cell image obtained by imaging cells, a second acquisition unit that inputs the acquired cell image into the learning model and acquires a nuclear staining image, and an output unit that outputs the acquired nuclear staining image.
  • The computer program of the present embodiment causes a computer to execute processing of acquiring a cell image obtained by imaging cells, inputting the acquired cell image into the learning model generated by the learning model generation method described above to acquire a nuclear staining image, and outputting the acquired nuclear staining image.
  • The learning model of this embodiment is generated by the learning model generation method described above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Provided are a learning model generation method, an image processing method, an image processing device, a computer program, and a learning model with which it is possible to recognize a cell with high accuracy even when there is a change in the imaging environment. The learning model generation method includes: acquiring first training data that includes a cell image in which a cell is imaged and a nuclear stained image in which the aforementioned cell is stained; and generating a learning model, on the basis of the acquired first training data, so as to output the nuclear stained image when the cell image is inputted.

Description

LEARNING MODEL GENERATION METHOD, IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, COMPUTER PROGRAM, AND LEARNING MODEL
 The present invention relates to a learning model generation method, an image processing method, an image processing device, a computer program, and a learning model.
 Recognizing and counting cell nuclei, or making pathological diagnoses from the shape of nuclei, has traditionally been done visually by specialists; in recent years, with the development of computers, part of this work has gradually been automated.
 Non-Patent Document 1 discloses a method of using a neural network to convert an electron microscope image into an image showing nuclei.
 However, since images taken with an optical microscope vary depending on the actual imaging environment, high accuracy cannot be obtained simply by using a learning model obtained by deep learning.
 The present invention has been made in view of such circumstances, and aims to provide a learning model generation method, an image processing method, an image processing apparatus, a computer program, and a learning model capable of recognizing cells with high accuracy even when the imaging environment fluctuates.
 The present application includes a plurality of means for solving the above problems. As one example, the learning model generation method acquires first training data including a cell image obtained by imaging a cell and a nuclear staining image obtained by staining the cell, and generates, based on the acquired first training data, a learning model that outputs a nuclear staining image when a cell image is input.
 According to the present invention, cells can be recognized with high accuracy even when the imaging environment fluctuates.
 FIG. 1 is a diagram showing an example of the configuration of the image processing apparatus of this embodiment. FIG. 2 is a diagram showing an example of normalization processing. FIG. 3 is a diagram showing the relationship between normalized images and small images. FIG. 4 is a diagram showing an example of the sliding window method. FIG. 5 is a diagram showing an example of the configuration of a learning model. FIG. 6 is a diagram showing an example of the counting result screen of the image processing apparatus. FIG. 7 is a diagram showing a cell small image and the corresponding nuclear staining small image. FIG. 8 is a diagram showing an example of cell small images in which brightness, contrast, and blur amount are varied based on a normalized cell small image. FIG. 9 is a diagram showing an example of homogenization processing of nuclear staining small images. FIG. 10 is a diagram showing a medium small image and the corresponding nuclear staining small image. FIG. 11 is a diagram showing an example of medium small images in which brightness, contrast, and blur amount are varied based on a normalized medium small image. FIGS. 12A and 12B are diagrams showing the first-stage generation method of the learning model. FIGS. 13A and 13B are diagrams showing the second-stage generation method. FIGS. 14A and 14B are diagrams showing the third-stage generation method. FIG. 15 is a diagram showing a comparison between training data that includes medium images containing no cells and training data that does not. FIG. 16 is a diagram showing the procedure of cell counting processing by the image processing apparatus. FIG. 17 is a diagram showing the procedure for generating the first training data. FIG. 18 is a diagram showing the procedure for generating the second training data. FIG. 19 is a diagram showing the procedure of learning model generation processing by the image processing apparatus.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a diagram showing an example of the configuration of the image processing apparatus 50 of this embodiment. The image processing device 50 includes a control unit 51 that controls the entire device, a communication unit 52, a memory 53, a normalization unit 54, an image processing unit 55, a display unit 56, an operation unit 57, a counting unit 58, a storage unit 59, and a learning processing unit 62. The storage unit 59 stores a computer program 60, a learning model 61, and required information.
 The normalization unit 54, the image processing unit 55, the counting unit 58, and the learning processing unit 62 may be configured by hardware, may be realized by a computer program (software) 60, or may be realized by a combination of hardware and software. The learning processing unit 62 may be incorporated in an external device, and the learning model 61 generated by that device may be stored in the storage unit 59. The image processing device 50 may also be configured by a plurality of devices.
 The control unit 51 can be configured with a CPU (Central Processing Unit), an MPU (Micro-Processing Unit), a GPU (Graphics Processing Unit), or the like. The control unit 51 can execute the processing defined by the computer program 60; that is, processing by the control unit 51 is also processing by the computer program 60.
 The communication unit 52 includes, for example, a communication module and can transmit and receive information to and from an external device using wireless or wired communication. The communication unit 52 functions as an acquisition unit and can acquire, from an external device, a cell image obtained by imaging cells, a medium image containing no cells, and a nuclear staining image obtained by staining cells.
 As used herein, the cells are not particularly limited and include, for example, adherent cells and suspension (floating) cells. Adherent cells include, for example, adherent somatic cells. Examples of somatic cells include myoblasts (e.g., skeletal myoblasts), muscle satellite cells, mesenchymal stem cells (e.g., those derived from bone marrow, adipose tissue, peripheral blood, skin, hair root, muscle tissue, endometrium, placenta, or umbilical cord blood), cardiomyocytes, fibroblasts, tissue stem cells such as cardiac stem cells, embryonic stem cells, pluripotent stem cells such as iPS (induced pluripotent stem) cells, synovial cells, chondrocytes, epithelial cells (e.g., oral mucosal epithelial cells, retinal pigment epithelial cells, nasal mucosal epithelial cells), endothelial cells (e.g., vascular endothelial cells), hepatocytes (e.g., hepatic parenchymal cells), pancreatic cells (e.g., pancreatic islet cells), kidney cells, adrenal cells, periodontal ligament cells, gingival cells, periosteal cells, and skin cells. Somatic cells may be cells differentiated from iPS cells (iPS cell-derived cells), such as iPS cell-derived cardiomyocytes, fibroblasts, myoblasts, epithelial cells, endothelial cells, hepatocytes, pancreatic cells, kidney cells, adrenal cells, periodontal ligament cells, gingival cells, periosteal cells, skin cells, synovial cells, and chondrocytes. Suspension cells include lymphocytes (T lymphocytes, B lymphocytes) and the like. As the cells, nucleated cells having a nucleus inside the cell are preferable.
 The culture medium in the early stages of culture contains foreign matter, such as tissue fragments, that is easily mistaken for cells. A culture medium image is an image that contains no cells but does contain such foreign matter.
 The memory 53 can be composed of semiconductor memory such as SRAM (Static Random Access Memory), DRAM (Dynamic Random Access Memory), or flash memory. The computer program 60 can be loaded into the memory 53 and executed by the control unit 51.
 The normalization unit 54 converts the cell image and the culture medium image acquired via the communication unit 52 to grayscale. The normalization unit 54 also normalizes the grayscale-converted images (cell image and culture medium image). A normalized grayscale-converted image is also referred to as a normalized image.
 FIG. 2 shows an example of the normalization process. The horizontal axis of the graph in FIG. 2 indicates the pixel value (luminance value: 0 to 255), and the vertical axis indicates the ratio of the frequency of each pixel value to that of the most frequent pixel value. As shown in FIG. 2, consider the grayscale-converted images Gi and Gj. These are images captured by an imaging device (e.g., an optical phase-contrast microscope) in different imaging environments, for example at different times or in different places. The vertical and horizontal resolution (m × n) of an image depends on the imaging device; the resolution m or n can be, for example, about 1000, 1200, or 1400, but is not limited to these values.
 A histogram of the pixel values (e.g., 0 to 255) in a specific region Ri of the grayscale-converted image Gi is created, and when the distribution of this histogram is approximated by a predetermined probability distribution (e.g., a normal distribution), the mean (brightness) is denoted ai and the variance (contrast) σi. Similarly, a histogram of the pixel values (e.g., 0 to 255) in a specific region Rj of the grayscale-converted image Gj is created and approximated by the predetermined probability distribution, giving a mean (brightness) aj and a variance (contrast) σj. The specific regions Ri and Rj may be any regions within the grayscale-converted images, but the regions must be of the same size.
 Let a be the mean and σ the variance of the predetermined probability distribution (e.g., a normal distribution). The mean a may be, for example, 128, and the variance σ may be, for example, 15. The mean (brightness) a and variance (contrast) σ can be set based on values obtained when images are captured under good lighting conditions and in a situation where no condensation forms on the flask or the like.
 The normalization unit 54 transforms the acquired grayscale-converted image (cell image) so that the distribution of its pixel values approximates the predetermined probability distribution. More specifically, the normalization unit 54 transforms the grayscale-converted image so that the mean of its pixel value distribution matches the mean of the probability distribution, and so that the variance of its pixel value distribution matches the variance of the probability distribution. In the example of FIG. 2, the pixel values of all pixels of the grayscale-converted image Gi are transformed so that the mean ai of its distribution becomes the mean a of the normal distribution, and so that the variance σi of its distribution becomes the variance σ of the normal distribution. Likewise, the pixel values of all pixels of the grayscale-converted image Gj are transformed so that the mean aj of its distribution becomes the mean a of the normal distribution, and so that the variance σj of its distribution becomes the variance σ of the normal distribution.
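 As one way to realize this transformation, the following is a minimal Python/NumPy sketch (the function name and region coordinates are assumptions for illustration, the target values a = 128 and σ = 15 follow the example above, and σ is treated here as the spread used in the linear rescaling):

```python
import numpy as np

def normalize_image(gray, region, a=128.0, sigma=15.0):
    """Shift and rescale pixel values so that the statistics of the
    reference region match the target mean a and spread sigma."""
    y0, y1, x0, x1 = region                       # reference region Ri
    patch = gray[y0:y1, x0:x1].astype(np.float64)
    ai = patch.mean()                             # mean (brightness) of Ri
    si = max(patch.std(), 1e-6)                   # spread (contrast) of Ri
    out = (gray.astype(np.float64) - ai) / si * sigma + a
    return np.clip(out, 0, 255).astype(np.uint8)
```

 Applying this to two images Gi and Gj captured in different environments maps both pixel value distributions onto the same target distribution.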
 By normalizing the grayscale-converted images (cell images and culture medium images), the influence of variations in the imaging environment on image quality is reduced; even when the imaging environment varies, images whose quality follows the predetermined probability distribution are obtained, and the processing described below can recognize cells with high accuracy.
 The image processing unit 55 divides the normalized images (cell image and culture medium image) into small images of a predetermined size. A small grayscale-converted cell image is called a small cell image (recognition target image); small cell images are the images input to the learning model 61 described below and are sized so that the learning model 61 can recognize them easily. The predetermined size can be, for example, a resolution of 256 × 256, but is not limited to this. When dividing a normalized image into small images, the image processing unit 55 uses a sliding window method.
 FIG. 3 shows the relationship between a normalized image and small images. In FIG. 3, the normalized image is described as a cell image; it may instead be a culture medium image. Small images (small cell images) can be extracted by scanning the normalized image with a region of the predetermined size so that adjacent regions partially overlap. In the example of FIG. 3, small cell images D1, D2, ..., Dm, D(m+1), ... are extracted; that is, one normalized image is divided into a plurality of small cell images.
 In the example of FIG. 3, the information in the peripheral part of a small cell image Dm extracted from a region away from the periphery of the normalized image can also be recognized in the central part of the adjacent small cell image D(m+1) and others produced by the sliding window method, so the peripheral information is fed to the learning model 61 and poses no problem. In contrast, for the small cell image D1 extracted near the periphery of the normalized image, the edge E1 of the 256 × 256 region is adjacent to the boundary of the normalized image, and no information (hints) can be obtained from outside the normalized image, so cell recognition accuracy decreases. Similarly, for the small cell image D2 extracted near the periphery, the edge E2 of the 256 × 256 region obtains no information (hints) from outside the normalized image, so recognition accuracy decreases there as well. To prevent this degradation of cell recognition accuracy, the image processing unit 55 uses the sliding window method shown in FIG. 4 when dividing the normalized image into small images.
 FIG. 4 shows an example of the sliding window method. In FIG. 4, the normalized image is described as a cell image; it may instead be a culture medium image. As shown in FIG. 4, the image processing unit 55 inserts a frame-shaped region W having required pixel values around the periphery of the normalized image (or of a non-normalized cell image). The required pixel values include a statistic (for example, the mean, the mode, or the median) of the pixel values of the outer peripheral region of the normalized image adjacent to the frame-shaped region W.
 For the small cell image D1 extracted near the periphery of the normalized image, the edge of the 256 × 256 region is then adjacent to the frame-shaped region W, and the degradation of cell recognition accuracy is suppressed compared with, for example, the case where the pixel values of the frame-shaped region W are all 0 or all 255. Similarly, for the small cell image D2 extracted near the periphery, the edge of the 256 × 256 region is adjacent to the frame-shaped region W, and the degradation of cell recognition (detection) accuracy is likewise suppressed.
 The display unit 56 can be a liquid crystal display, an organic EL display, or the like. An external display device may be connected to the image processing device 50 instead of the display unit 56.
 The operation unit 57 is composed of, for example, a keyboard, mouse, touchpad, or touch panel, and can accept operations on the information displayed on the display unit 56.
 The image processing device 50 divides the acquired normalized image into small cell images of a predetermined size, generates a virtual nuclear staining small image of the predetermined size from each small cell image, and stitches the generated virtual nuclear staining small images together to produce a virtual nuclear staining image of the same size as the original normalized image. In this specification, a nuclear staining image is an image obtained by imaging cells in which only the nuclei have actually been stained, using a reagent that binds to the nucleic acids abundant in cell nuclei. A virtual nuclear staining image is an image similar to a nuclear staining image, but it is not obtained by actual staining; it virtually represents a nuclear staining image. The learning model 61 is used to generate the virtual nuclear staining images (virtual nuclear staining small images). The learning model 61 is described below.
 FIG. 5 shows an example of the configuration of the learning model 61. The learning model 61 includes an input layer 611, a first network layer 612, a second network layer 613, and an output layer 614. The learning model 61 can be, for example, an autoencoder, but is not limited to this. A 256 × 256 small cell image (cell image) is input to the input layer 611; the small cell image contains cells. A 256 × 256 virtual nuclear staining small image (virtual nuclear staining image) is output from the output layer 614; it contains cell nuclei. The first network layer 612 includes convolution layers; it weights the information in the input small cell image, selects the most important information and discards the rest, thereby compressing the input data and extracting the features of the small cell image. The second network layer 613 includes deconvolution layers and, based on the features extracted by the first network layer 612, generates a virtual nuclear staining small image as the output image from the compressed data. The first network layer 612 includes, for example, an encoder, and the second network layer 613 includes a decoder. The learning model 61 is not limited to an autoencoder; any model capable of image generation may be used, for example U-Net.
 Note that in FIG. 5 the small cell image and the virtual nuclear staining small image are shown schematically; the shape and number of cells in the small cell image and the shape and number of cell nuclei in the virtual nuclear staining small image may differ from the actual ones.
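 A minimal PyTorch sketch of such an encoder-decoder is given below (the number of layers, channel widths, and activations are illustrative assumptions; the patent does not fix a specific architecture):

```python
import torch
import torch.nn as nn

class NucleusAutoencoder(nn.Module):
    """Encoder-decoder sketch: a 256x256 grayscale cell tile in,
    a 256x256 virtual nuclear-staining tile out."""
    def __init__(self):
        super().__init__()
        # First network layer 612: convolutional encoder (feature extraction)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 256 -> 128
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 128 -> 64
            nn.ReLU(),
        )
        # Second network layer 613: deconvolutional decoder (image generation)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 64 -> 128
            nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 128 -> 256
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```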
 The counting unit 58 identifies the positions of the cell nuclei based on a virtual nuclear staining image of the same size as the original normalized image (for example, 1200 × 1200), and counts the number of cells (the number of cell nuclei) per cell image.
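 One plausible realization of this counting step, sketched with SciPy (the threshold value and the use of connected-component labeling are assumptions; the patent states only that nucleus positions are identified and counted):

```python
import numpy as np
from scipy import ndimage

def count_nuclei(virtual_stain, threshold=64):
    """Threshold the virtual nuclear-staining image, label each bright
    blob as one nucleus, and return the count and centroid positions."""
    mask = virtual_stain > threshold
    labels, n = ndimage.label(mask)               # one label per nucleus blob
    centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return n, centers
```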
 FIG. 6 shows an example of a counting result screen 100 produced by the image processing device 50. The counting result screen 100 is displayed, for example, on the display unit 56. The counting result screen 100 displays an uploader area 101, an image selection area 102, a counting result display area 104, an "Evaluate" icon 103, and the like. By dragging and dropping a cell image file into the uploader area 101, the image processing device 50 displays the counting result for that cell image in the counting result display area 104. Clicking the uploader area 101 displays an image list in the image selection area 102. The image selection area 102 shows columns for a selection box, the creation date of the cell image, the file name of the cell image, the source of the cell image, and the cell count. The cell count column shows the number of cells if the cell image has been evaluated, and "Not evaluated" if it has not. Selecting a desired cell image and operating the "Evaluate" icon 103 displays the counting result in the counting result display area 104.
 The counting result display area 104 displays the cell image with the positions of the cell nuclei superimposed. In the figure, the cell nuclei are indicated by black dots, but other display modes, for example identification marks such as cross marks, may be used. Note that in FIG. 6 the cell image is shown schematically, and the number of cell nuclei in the cell image may differ from the actual number. The counting result display area 104 may also display the file name of the original cell image and the number of detected cells.
 The cell count is important information for managing the state of cell culture, for example for judging whether culturing is complete or whether additional culturing is needed. A single cell image may contain several hundred cells (for example, about 400); counting by eye places a heavy burden on personnel, and judgments of whether an object is a cell vary from person to person even for the same image. Foreign matter such as tissue fragments may also be mistaken for cells. As described above, however, the present embodiment not only counts cells automatically using deep learning, without relying on visual inspection, but also performs normalization to absorb variations in the imaging environment, so that cell images whose quality follows the predetermined probability distribution are obtained and cells can be recognized (detected) with high accuracy. Sample cleaning and the like also become unnecessary, and cells can be recognized with high accuracy even when the state of the sample varies. Furthermore, according to the present embodiment, the sliding window method suppresses the degradation of cell recognition accuracy near the periphery of the small cell images (recognition target images), and adding a frame-shaped region to the recognition target image suppresses the degradation of recognition accuracy near the periphery of the normalized image.
 Next, a method for generating the learning model 61 is described. First, a method for generating the training data used to generate the learning model 61 is described, followed by the learning method of the learning model 61.
 FIG. 7 shows a small cell image Di and the corresponding nuclear staining small image Si. The small cell image Di contains cells; the nuclear staining small image Si is a small-image version of the nuclear staining image and contains stained cell nuclei. The staining state of the cell nuclei is not uniform, and the degree of staining varies. The small cell image and the nuclear staining small image are shown schematically; the shape and number of the cells in the small cell image and of the cell nuclei in the nuclear staining small image may differ from the actual ones.
 The small cell image Di is an image obtained by dividing the original cell image. That is, the small cell image Di is a normalized cell image obtained by applying the normalization process shown in FIG. 2 to the original grayscale-converted image and dividing the result using the sliding window method shown in FIG. 4. The nuclear staining small image Si is obtained by dividing the nuclear staining image of the stained cells into images of the same size as the small cell image Di. The index i is an image index and indicates that the two images correspond.
 FIG. 8 shows an example of small cell images whose brightness, contrast, and blur amount have been varied based on a normalized small cell image. As shown in FIG. 8, the image processing unit 55 varies the brightness, contrast, and blur amount of the normalized small cell image Di independently, over at least three levels each. For example, the image processing unit 55 processes all combinations of these three parameters (brightness, contrast, and blur amount), producing images that are dark, low-contrast, and blurred; bright, normal-contrast, and unblurred; normal-brightness, high-contrast, and blurred; and so on, thereby generating a plurality of small cell images (3 × 3 × 3 = 27 variations in the illustrated example) from one normalized small cell image. This augments the training data.
 As a further augmentation technique, the image processing unit 55 may apply rotations or flips in addition to the variations in brightness, contrast, and blur amount. For example, it may generate the four rotations in 90-degree steps, and eight variants in total when horizontal, vertical, and diagonal flips are included (overlaps such as a horizontal flip being equivalent to a vertical flip followed by a 180-degree rotation reduce the combinations to eight). Together these yield 27 × 8 = 216 small cell images from one normalized small cell image, as sketched below.
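 A minimal sketch of this 27 × 8 augmentation in Python (the specific brightness offsets, contrast factors, and blur sigmas are illustrative assumptions; only the 3 × 3 × 3 grid and the eight geometric variants are taken from the description above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment(tile):
    """Generate 27 photometric x 8 geometric = 216 variants of one tile."""
    variants = []
    for brightness in (-20.0, 0.0, 20.0):          # 3 brightness levels
        for contrast in (0.8, 1.0, 1.2):           # 3 contrast levels
            for blur in (0.0, 1.0, 2.0):           # 3 blur levels (sigma)
                img = tile.astype(np.float64)
                img = (img - img.mean()) * contrast + img.mean() + brightness
                if blur > 0:
                    img = gaussian_filter(img, sigma=blur)
                img = np.clip(img, 0, 255).astype(np.uint8)
                # 8 geometric variants: 4 rotations x {as-is, mirrored}
                for k in range(4):
                    rot = np.rot90(img, k)
                    variants.append(rot)
                    variants.append(np.fliplr(rot))
    return variants  # 27 * 8 = 216 images
```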
 FIG. 9 shows an example of the homogenization process for nuclear staining small images. The image processing unit 55 converts the stained cell nuclei into a predetermined shape and pixel value to generate the nuclear staining small image Si. Specifically, as shown in FIG. 9, the image processing unit 55 binarizes the nuclear staining small image Si, converting it into a binarized image containing point-like pixels. Each one-dot point in the binarized image represents the position of one cell nucleus. The image processing unit 55 then applies Gaussian convolution (blurring) to the binarized image, producing a nuclear staining small image Si in which the surroundings of each point-like pixel (for example, 5 to 6 dots) are set to the required pixel values.
 Convolution is a process of moving a pixel block (convolution operator) of a predetermined size over the nuclear staining small image, multiplying the value of each element of the operator by the corresponding pixel value of the image, and taking the sum as the new value of the central pixel. Gaussian convolution uses a Gaussian filter, that is, a convolution operator based on a Gaussian function.
 The required pixel value may be constant, or the pixel values may change outward from the central point-like pixel to give a gradation. In the generated nuclear staining small image Si, the pixels surrounding each cell nucleus are uniform for every nucleus, forming spots 5 to 6 dots in diameter. In the nuclear staining small image before homogenization, the brightness of the cell nuclei is not uniform because of variations in staining. The homogenization process flows as follows: (1) set the values of pixels at or below a fixed threshold to 0 to remove noise; (2) detect each light spot and convert it to one dot; (3) blur the one-dot image so that each dot becomes a spot 5 to 6 dots in diameter.
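 A minimal SciPy sketch of this three-step homogenization (the threshold value and blur sigma are illustrative assumptions; the 5 to 6 dot spot diameter would be tuned via sigma):

```python
import numpy as np
from scipy import ndimage

def homogenize(stain, threshold=50, sigma=1.5):
    """(1) Remove noise below a threshold, (2) reduce each light spot to a
    single dot at its centroid, (3) blur the dots into uniform small spots."""
    denoised = np.where(stain > threshold, stain, 0)           # step (1)
    labels, n = ndimage.label(denoised > 0)
    dots = np.zeros_like(stain, dtype=np.float64)
    for y, x in ndimage.center_of_mass(denoised, labels, range(1, n + 1)):
        dots[int(round(y)), int(round(x))] = 255.0             # step (2)
    if n == 0:
        return dots.astype(np.uint8)
    blurred = ndimage.gaussian_filter(dots, sigma=sigma)       # step (3)
    return np.clip(blurred * (255.0 / blurred.max()), 0, 255).astype(np.uint8)
```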
 If nuclear staining small images before homogenization, with their non-uniform cell nuclei, are used as training data, then, as a characteristic of deep learning, the model sometimes loses track of the cell nuclei in the nuclear staining small images during learning, and the cell recognition accuracy of the learning model decreases. By using the homogenized nuclear staining small images Si, as in the present embodiment, learning that loses track of cell nuclei can be prevented, misrecognition caused by variations in the staining state of cell nuclei can also be prevented, and cell recognition accuracy can be improved.
 The control unit 51 acquires first training data including small cell images of imaged cells and nuclear staining small images of the stained cells. That is, the control unit 51 can acquire the augmented small cell images illustrated in FIG. 8 and the corresponding (homogenized) nuclear staining small images Si as the first training data. This makes it possible to collect a large amount of training data.
 FIG. 10 shows a culture medium small image Ci and the corresponding nuclear staining small image Si. The culture medium small image Ci contains no cells. The corresponding nuclear staining small image Si contains no stained cell nuclei and is therefore, for example, a black image. The culture medium small image Ci is shown schematically; the shape and number of foreign objects such as tissue fragments and scratches in the culture medium may differ from the actual ones.
 The culture medium small image Ci is an image obtained by dividing the original culture medium image. That is, the culture medium small image Ci is a normalized culture medium image obtained by applying the normalization process shown in FIG. 2 to the original grayscale-converted image and dividing the result using the sliding window method shown in FIG. 4. The nuclear staining small image Si is an image of the same size as the culture medium small image Ci (for example, a black image). The index i is an image index and indicates that the two images correspond.
 FIG. 11 shows an example of culture medium small images whose brightness, contrast, and blur amount have been varied based on a normalized culture medium small image. As shown in FIG. 11, the image processing unit 55 varies the brightness, contrast, and blur amount of the normalized culture medium small image Ci independently, over at least three levels each. For example, processing all combinations of the three parameters (brightness, contrast, and blur amount), producing images that are dark, low-contrast, and blurred; bright, normal-contrast, and unblurred; normal-brightness, high-contrast, and blurred; and so on, generates a plurality of culture medium small images (3 × 3 × 3 = 27 variations in the illustrated example) from one normalized culture medium small image. This augments the training data.
 As a further augmentation technique, the image processing unit 55 may apply rotations or flips to the culture medium small images in addition to the variations in brightness, contrast, and blur amount, in the same way as for the small cell images (four rotations in 90-degree steps, and eight variants in total including horizontal, vertical, and diagonal flips after removing overlaps). Together these yield 27 × 8 = 216 culture medium small images from one normalized culture medium small image.
 The control unit 51 acquires second training data including culture medium small images containing no cells and the nuclear staining small images corresponding to those culture medium small images. That is, the control unit 51 can acquire the augmented culture medium small images illustrated in FIG. 11 and the corresponding nuclear staining small images Si (black images) as the second training data. This makes it possible to collect a large amount of training data.
 Next, a method for generating the learning model 61 using the first training data and the second training data is described. The first training data and the second training data are also collectively referred to as third training data. The learning model 61 may be generated through three stages.
 Based on the acquired first training data, the learning processing unit 62 generates the learning model 61 so that it outputs a nuclear staining small image when a small cell image is input. The first training data may include small cell images whose pixel values have been transformed so that their distribution approximates a predetermined probability distribution (for example, a normal distribution). The first training data may also include small cell images whose pixel values have been transformed so that the mean or variance of their pixel value distribution matches the mean or variance of the predetermined probability distribution. The first training data may also include small cell images in which at least one of the mean of the pixel value distribution, the variance, and the blur amount of the image has been varied. The small cell images may include, as illustrated in FIG. 4, small cell images with a frame-shaped region of required pixel values inserted around the periphery. Likewise, the nuclear staining small images may include, as illustrated in FIG. 4, nuclear staining small images with a frame-shaped region of required pixel values inserted around the periphery.
 Based on the acquired second training data, the learning processing unit 62 generates the learning model 61 so that it outputs a nuclear staining small image when a culture medium small image containing no cells is input. The second training data may include culture medium small images whose pixel values have been transformed so that their distribution approximates a predetermined probability distribution (for example, a normal distribution). The second training data may also include culture medium small images whose pixel values have been transformed so that the mean or variance of their pixel value distribution matches the mean or variance of the predetermined probability distribution. The second training data may also include culture medium small images in which at least one of the mean of the pixel value distribution, the variance, and the blur amount of the image has been varied. The culture medium small images may include, as illustrated in FIG. 4, culture medium small images with a frame-shaped region of required pixel values inserted around the periphery.
 FIGS. 12A and 12B show the first stage of the method for generating the learning model 61. As shown in FIGS. 12A and 12B, the learning processing unit 62 trains the learning model 61 (first network layer 612) so that when a small cell image or a culture medium small image is input, the same small cell image or culture medium small image is output. Specifically, as shown in FIG. 12A, the learning processing unit 62 compares the small cell image Di received at the input layer with the small cell image output from the output layer and adjusts the parameters so that the input small cell image Di is reproduced as faithfully as possible. Likewise, as shown in FIG. 12B, the learning processing unit 62 compares the culture medium small image Ci received at the input layer with the culture medium small image output from the output layer and adjusts the parameters so that the input culture medium small image Ci is reproduced as faithfully as possible. The parameters can be adjusted by comparing the input data with the output data using, for example, the error backpropagation method. Because this stage trains the learning model 61 to match its output to its own input, training data for it can be obtained at low cost.
 FIGS. 13A and 13B show the second stage of the method for generating the learning model 61. As shown in FIGS. 13A and 13B, the learning processing unit 62 fixes the parameters of the first network layer 612 learned in the first stage, initializes the parameters of the second network layer 613, and trains the learning model 61 (second network layer 613) so that when a small cell image or a culture medium small image is input, a nuclear staining small image is output. Specifically, as shown in FIG. 13A, the learning processing unit 62 adjusts the parameters of the second network layer 613 so that when the small cell image Di is input, the virtual staining small image Si is output from the output layer. As shown in FIG. 13B, it likewise adjusts the parameters of the second network layer 613 so that when the culture medium small image Ci is input, the virtual staining small image Si (for example, a black image) is output from the output layer.
 FIGS. 14A and 14B show the third stage of the method for generating the learning model 61. As shown in FIGS. 14A and 14B, the learning processing unit 62 releases the fixing of the parameters of the first network layer 612 and generates the learning model 61 by adjusting the parameters of both the first network layer 612 and the second network layer 613 so that when a small cell image or a culture medium small image is input, a nuclear staining small image is output. Specifically, as shown in FIG. 14A, the learning processing unit 62 adjusts the parameters of the first network layer 612 and the second network layer 613 so that when the small cell image Di is input, the virtual staining small image Si is output from the output layer. As shown in FIG. 14B, it likewise adjusts the parameters of both layers so that when the culture medium small image Ci is input, the virtual staining small image Si (for example, a black image) is output from the output layer.
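 A minimal PyTorch sketch of the three stages, reusing the hypothetical NucleusAutoencoder above (the loss, optimizer, epoch count, and re-initialization via reset_parameters are assumptions; tiles stands for the small cell and culture medium images of the third training data, and stains for their homogenized or black nuclear staining targets):

```python
import torch
import torch.nn as nn

def train_three_stages(model, tiles, stains, epochs=10, lr=1e-3):
    """tiles: input small images (cells and medium), shape (N, 1, 256, 256);
    stains: the corresponding nuclear-staining targets, same shape."""
    loss_fn = nn.MSELoss()

    def run(params, inputs, targets):
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            opt.step()

    # Stage 1: train the whole model to reproduce its input (autoencoding).
    run(model.parameters(), tiles, tiles)

    # Stage 2: freeze the encoder (first network layer), re-initialize and
    # train only the decoder (second network layer) to emit stain targets.
    for p in model.encoder.parameters():
        p.requires_grad = False
    for m in model.decoder.modules():
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
            m.reset_parameters()
    run(model.decoder.parameters(), tiles, stains)

    # Stage 3: unfreeze the encoder and fine-tune both layers together.
    for p in model.encoder.parameters():
        p.requires_grad = True
    run(model.parameters(), tiles, stains)
```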
 FIG. 15 shows a comparison between training with and without culture medium images containing no cells in the training data. In FIG. 15, the horizontal axis indicates the true cell count in one captured image, and the vertical axis indicates the ratio of the cell count obtained with the learning model 61 to the true value. The solid line shows the cell counts from a learning model 61 trained using culture medium images (culture medium small images) containing no cells, and the broken line shows the cell counts from a learning model 61 trained without them. The culture medium in the early stages of culture contains foreign matter, such as tissue fragments, that is easily mistaken for cells. When a captured image contains roughly 100 cells or fewer, many tissue fragments are present; with the learning model 61 trained without culture medium images, these fragments are frequently misrecognized as cells, so the ratio exceeds 100%. With the learning model 61 trained using culture medium images, in contrast, such misrecognition errors are few. Thus, according to the present embodiment, the learning model 61 is generated with culture medium images and the corresponding nuclear staining images (for example, black images) included in the training data, improving cell recognition accuracy in the early stages of culture.
 FIG. 16 shows the procedure of the cell counting process performed by the image processing device 50. For convenience, the control unit 51 is described as performing the processing. The control unit 51 acquires a cell image (S11). The cell image has a vertical and horizontal resolution (m × n) of, for example, about 1000, 1200, or 1400. The control unit 51 converts the acquired cell image to grayscale (S12) and normalizes the grayscale-converted image (S13). The normalization can use the method illustrated in FIG. 2.
 The control unit 51 divides the normalized image into small cell images using the sliding window method (S14). The resolution (size) of each small cell image can be, for example, 256 × 256. The control unit 51 inputs the small cell images to the learning model 61 and acquires the virtual nuclear staining small images output by the learning model 61 (S15).
 The control unit 51 stitches the acquired virtual nuclear staining small images together into a virtual nuclear staining image of the same size as the cell image (S16). The control unit 51 identifies the positions of the cell nuclei based on the virtual nuclear staining image (S17) and counts the number of cells (the number of cell nucleus positions) (S18). The control unit 51 superimposes the positions of the cell nuclei on the cell image, outputs the counting result (S19), and ends the process.
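 Steps S14 to S16 can be sketched as follows, reusing the hypothetical pad_with_frame helper above (the stride, margin, and the averaging of overlapping tile outputs are assumptions; the patent does not specify how overlaps are merged, and the image dimensions are assumed to align with the stride):

```python
import numpy as np
import torch

def infer_virtual_stain(model, norm_img, size=256, stride=128, margin=64):
    """S14-S16: pad, tile, run the model on each tile, and stitch the
    outputs back to the original image size, averaging overlaps."""
    padded = pad_with_frame(norm_img, margin)
    h, w = padded.shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    model.eval()
    with torch.no_grad():
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                tile = padded[y:y + size, x:x + size].astype(np.float32) / 255.0
                pred = model(torch.from_numpy(tile)[None, None])[0, 0].numpy()
                out[y:y + size, x:x + size] += pred
                weight[y:y + size, x:x + size] += 1.0
    stitched = out / np.maximum(weight, 1.0)
    return (stitched[margin:-margin, margin:-margin] * 255).astype(np.uint8)
```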
 FIG. 17 shows the procedure for generating the first training data in the image processing device 50. The control unit 51 acquires a cell image and the corresponding nuclear staining image (S31). The cell image and the corresponding nuclear staining image have a vertical and horizontal resolution (m × n) of, for example, about 1000, 1200, or 1400. The control unit 51 converts the acquired cell image to grayscale (S32) and normalizes the converted cell image (S33). The normalization can use the method illustrated in FIG. 2.
 Based on the normalized cell image, the control unit 51 generates cell images with varied brightness, contrast, and blur amount (S34). The cell images can be generated using the method illustrated in FIG. 8. The control unit 51 binarizes the acquired nuclear staining image to generate a binarized image (S35) and applies Gaussian convolution to the generated binarized image to generate the nuclear staining image (S36). Steps S35 and S36 can use the method illustrated in FIG. 9.
 The control unit 51 divides the generated cell images and nuclear staining images into mutually corresponding small images (small cell images and nuclear staining small images) using the sliding window method to generate the training data (first training data) (S37), and ends the process. The resolution (size) of the divided small cell images and nuclear staining small images can be, for example, 256 × 256.
 FIG. 18 shows the procedure for generating the second training data in the image processing device 50. The control unit 51 acquires a culture medium image containing no cells (S41). The culture medium image has a vertical and horizontal resolution (m × n) of, for example, about 1000, 1200, or 1400. The control unit 51 converts the acquired culture medium image to grayscale (S42) and normalizes the converted culture medium image (S43). The normalization can use the method illustrated in FIG. 2.
 Based on the normalized culture medium image, the control unit 51 generates culture medium images with varied brightness, contrast, and blur amount (S44). The culture medium images can be generated using the method illustrated in FIG. 11. The control unit 51 acquires, as the nuclear staining small image, an image in which all pixel values are a predetermined value (S45). The nuclear staining small image can be, for example, a black image (pixel value 0).
 The control unit 51 divides the generated culture medium images into small images (culture medium small images) using the sliding window method, associates the acquired nuclear staining small images with them to generate the training data (second training data) (S46), and ends the process. The resolution (size) of the divided culture medium small images and the corresponding nuclear staining small images can be, for example, 256 × 256.
 FIG. 19 shows the procedure for generating the learning model 61 in the image processing device 50. The learning model 61 is generated using the first training data and second training data (collectively referred to as third training data) generated by the processes shown in FIGS. 17 and 18. The control unit 51 acquires small cell images and culture medium small images (S51). The size of the small cell images and culture medium small images can be, for example, 256 × 256. The control unit 51 initializes the parameters of the learning model 61 and generates the learning model 61 by adjusting the parameters so that when a small cell image is input to the learning model 61, the same small cell image is output (S52).
 The control unit 51 generates the learning model 61 by adjusting the parameters so that when a culture medium small image is input to the learning model 61, the same culture medium small image is output (S53). The control unit 51 acquires the nuclear staining small images corresponding to the small cell images and the nuclear staining small images corresponding to the culture medium small images (S54), fixes the parameters of the first network layer 612 of the learning model 61, and initializes the parameters of the second network layer 613 (S55).
 The control unit 51 generates the learning model 61 by adjusting the parameters so that when a small cell image is input to the learning model 61, the nuclear staining small image corresponding to the input small cell image is output (S56). The control unit 51 generates the learning model 61 by adjusting the parameters so that when a culture medium small image is input to the learning model 61, the nuclear staining small image corresponding to the input culture medium small image is output (S57). The control unit 51 then releases the fixing of the parameters of the first network layer 612 of the learning model 61 (S58).
 The control unit 51 generates the learning model 61 by adjusting the parameters so that when a small cell image is input to the learning model 61, the nuclear staining small image corresponding to the input small cell image is output (S59). The control unit 51 generates the learning model 61 by adjusting the parameters so that when a culture medium small image is input to the learning model 61, the nuclear staining small image corresponding to the input culture medium small image is output (S60). The control unit 51 stores the generated learning model 61 in the storage unit 59 (S61) and ends the process.
 The image processing method of the present embodiment acquires a cell image obtained by imaging cells, transforms the cell image so that the distribution of pixel values of the acquired cell image approximates a predetermined probability distribution, generates a virtual nuclear staining image based on the transformed cell image, and counts the number of cells based on the generated virtual nuclear staining image.
 The image processing method of the present embodiment transforms the cell image so that the mean of the pixel value distribution of the acquired cell image becomes the mean of the probability distribution.
 The image processing method of the present embodiment transforms the cell image so that the variance of the pixel value distribution of the acquired cell image becomes the variance of the probability distribution.
 The image processing method of the present embodiment inserts a frame-shaped region having required pixel values around the periphery of the acquired cell image or the transformed cell image, and generates the virtual nuclear staining image based on the cell image with the inserted frame-shaped region.
 In the image processing method of the present embodiment, the required pixel values include a statistic of the pixel values of the outer peripheral region of the cell image adjacent to the frame-shaped region.
 The image processing method of the present embodiment identifies the nucleus positions of the cells based on the generated virtual nuclear staining image and counts the number of identified nucleus positions as the cell count.
 The image processing method of the present embodiment displays the identified nucleus positions superimposed on the acquired cell image.
 The image processing device of the present embodiment includes an acquisition unit that acquires a cell image obtained by imaging cells, a conversion unit that transforms the cell image so that the distribution of pixel values of the acquired cell image approximates a predetermined probability distribution, a generation unit that generates a virtual nuclear staining image based on the transformed cell image, and a counting unit that counts the number of cells based on the generated virtual nuclear staining image.
 The computer program of the present embodiment causes a computer to execute processing of acquiring a cell image obtained by imaging cells, transforming the cell image so that the distribution of pixel values of the acquired cell image approximates a predetermined probability distribution, generating a virtual nuclear staining image based on the transformed cell image, and counting the number of cells based on the generated virtual nuclear staining image.
 本実施形態の学習モデル生成方法は、細胞を撮像した細胞画像、及び前記細胞を染色した核染色画像を含む第1訓練データを取得し、取得した第1訓練データに基づいて、細胞画像を入力した場合に、核染色画像を出力するように学習モデルを生成する。 The learning model generation method of the present embodiment acquires first training data including a cell image obtained by imaging a cell and a nuclear staining image obtained by staining the cell, and inputs a cell image based on the acquired first training data. A learning model is generated so as to output a stained nuclear image when
 本実施形態の学習モデル生成方法において、前記第1訓練データは、前記細胞画像の画素値の分布を所定の確率分布に近似すべく前記画素値が変換された細胞画像を含む。 In the learning model generation method of the present embodiment, the first training data includes cell images in which the pixel values have been converted so as to approximate the pixel value distribution of the cell images to a predetermined probability distribution.
 本実施形態の学習モデル生成方法において、前記第1訓練データは、前記細胞画像の画素値の分布の平均値又は分散が所定の確率分布の平均値又は分散になるように前記画素値が変換された細胞画像を含む。 In the learning model generation method of the present embodiment, the pixel values of the first training data are converted such that the average value or variance of the pixel value distribution of the cell image becomes the average value or variance of a predetermined probability distribution. including cell images.
 本実施形態の学習モデル生成方法において、前記第1訓練データは、前記細胞画像の画素値の分布の平均値、分散及び前記細胞画像のぼかし量の少なくとも一つを変動させた細胞画像を含む。 In the learning model generation method of the present embodiment, the first training data includes cell images obtained by varying at least one of the average value, the variance, and the amount of blurring of the cell images of pixel value distribution of the cell images.
 本実施形態の学習モデル生成方法は、細胞を含まない培地画像、及び前記培地画像に対応する核染色画像を含む第2訓練データを取得し、取得した第2訓練データに基づいて、細胞を含まない培地画像を入力した場合に核染色画像を出力するように前記学習モデルを生成する。 The learning model generation method of the present embodiment acquires second training data including a medium image that does not contain cells and a nuclear staining image corresponding to the medium image, and based on the acquired second training data, The learning model is generated so as to output a stained nuclear image when an image of a culture medium without a medium is input.
 本実施形態の学習モデル生成方法において、前記第2訓練データは、前記培地画像の画素値の分布を所定の確率分布に近似すべく前記画素値が変換された培地画像を含む。 In the learning model generation method of this embodiment, the second training data includes a culture medium image in which the pixel values have been converted to approximate the pixel value distribution of the culture medium image to a predetermined probability distribution.
In the learning model generation method of the present embodiment, the second training data includes a medium image whose pixel values have been converted so that the mean or variance of the pixel value distribution of the medium image equals the mean or variance of a predetermined probability distribution.
In the learning model generation method of the present embodiment, the second training data includes medium images in which at least one of the mean of the pixel value distribution of the medium image, the variance of that distribution, and the amount of blurring of the medium image has been varied.
In the learning model generation method of the present embodiment, the medium image includes a medium image into which a frame-shaped region having a required pixel value has been inserted along the outer periphery.
In the learning model generation method of the present embodiment, the nuclear staining image includes a nuclear staining image in which stained cell nuclei have been converted into a predetermined shape and pixel value.
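One way to realize this conversion is to detect each stained nucleus in the ground-truth image and redraw it as a disk of fixed radius and fixed pixel value at its centroid; the sketch below assumes that disk-based standardization, with the threshold, radius, and pixel value as hypothetical parameters.

```python
import numpy as np
from scipy import ndimage

def standardize_nuclei(stain: np.ndarray, threshold: float = 0.5,
                       radius: int = 5, value: float = 1.0) -> np.ndarray:
    """Redraw every stained nucleus as a disk of predetermined
    radius and pixel value centered on the nucleus centroid."""
    labels, n = ndimage.label(stain > threshold)
    out = np.zeros_like(stain)
    yy, xx = np.mgrid[:stain.shape[0], :stain.shape[1]]
    for cy, cx in ndimage.center_of_mass(stain, labels, range(1, n + 1)):
        out[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = value
    return out
```

Standardizing the targets this way frees the model from reproducing the irregular appearance of real staining and makes the resulting output easy to count.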
In the learning model generation method of the present embodiment, the cell image or nuclear staining image includes a cell image or nuclear staining image into which a frame-shaped region having a required pixel value has been inserted along the outer periphery.
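Inserting a frame-shaped region along the outer periphery amounts to constant-value padding of the image. A minimal sketch, with the frame width and pixel value as assumptions:

```python
import numpy as np

def add_frame(image: np.ndarray, width: int = 16,
              value: float = 0.0) -> np.ndarray:
    """Insert a frame-shaped region of a required pixel value
    around the outer periphery of the image."""
    return np.pad(image, pad_width=width, mode="constant",
                  constant_values=value)
```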
In the learning model generation method of the present embodiment, the learning model includes a first network layer that extracts feature values from an input image and a second network layer that outputs an output image based on the extracted feature values. The method acquires third training data including a cell image obtained by imaging cells, a medium image containing no cells, and a nuclear staining image corresponding to the cell image or the medium image; trains the first network layer, based on the acquired third training data, so as to output the cell image and the medium image when the cell image and the medium image are input; fixes the parameters of the trained first network layer and trains the second network layer, based on the acquired third training data, so as to output a nuclear staining image when a cell image and a medium image are input; and then releases the fixing of the parameters of the first network layer and trains the first network layer and the second network layer, based on the acquired third training data, so as to output a nuclear staining image when a cell image and a medium image are input, thereby generating the learning model.
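The three stages can be sketched in PyTorch as follows, with two small convolutional blocks standing in for the first and second network layers; the architecture, loss, optimizer, and data here are assumptions for illustration, not the embodiment's actual design.

```python
import torch
from torch import nn

# Hypothetical stand-ins for the two network layers.
first_layer = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
second_layer = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
model = nn.Sequential(first_layer, second_layer)
loss_fn = nn.MSELoss()

def train(params, inputs, targets, epochs=10):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(inputs), targets).backward()
        opt.step()

# Dummy batches standing in for the third training data.
images = torch.rand(8, 1, 64, 64)  # cell images and medium images
stains = torch.rand(8, 1, 64, 64)  # corresponding nuclear staining images

# Stage 1: train both layers to reproduce the input (autoencoder-style).
train(model.parameters(), images, images)

# Stage 2: fix the first layer's parameters; train only the second layer
# to map input images to nuclear staining images.
for p in first_layer.parameters():
    p.requires_grad = False
train(second_layer.parameters(), images, stains)

# Stage 3: release the fixing and fine-tune both layers end to end.
for p in first_layer.parameters():
    p.requires_grad = True
train(model.parameters(), images, stains)
```

The reconstruction stage lets the first network layer learn generic image features before the harder staining task is introduced, which is why its parameters are fixed during the second stage.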
The image processing method of the present embodiment acquires a cell image obtained by imaging cells, inputs the acquired cell image into a learning model generated by the learning model generation method described above to acquire a nuclear staining image, and outputs the acquired nuclear staining image.
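At inference time, applying the generated learning model might look like the minimal sketch below, assuming a PyTorch model such as the one above and a single-channel image tensor:

```python
import torch

def virtual_stain(model: torch.nn.Module,
                  cell_image: torch.Tensor) -> torch.Tensor:
    """Run a trained model on one cell image (shape C x H x W)
    and return the virtual nuclear staining image."""
    model.eval()
    with torch.no_grad():
        return model(cell_image.unsqueeze(0)).squeeze(0)
```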
The image processing device of the present embodiment includes a learning model generated by the learning model generation method described above, a first acquisition unit that acquires a cell image obtained by imaging cells, a second acquisition unit that inputs the acquired cell image into the learning model to acquire a nuclear staining image, and an output unit that outputs the acquired nuclear staining image.
The computer program of the present embodiment causes a computer to execute processing of acquiring a cell image obtained by imaging cells, inputting the acquired cell image into a learning model generated by the learning model generation method described above to acquire a nuclear staining image, and outputting the acquired nuclear staining image.
The learning model of the present embodiment is generated by the learning model generation method described above.
50 image processing device
51 control unit
52 communication unit
53 memory
54 normalization unit
55 image processing unit
56 display unit
57 operation unit
58 counting unit
59 storage unit
60 computer program
61 learning model
611 input layer
612 first network layer
613 second network layer
614 output layer
62 learning processing unit

Claims (16)

  1.  A learning model generation method, comprising:
     acquiring first training data including a cell image obtained by imaging cells and a nuclear staining image obtained by staining the cells; and
     generating, based on the acquired first training data, a learning model that outputs a nuclear staining image when a cell image is input.
  2.  The learning model generation method according to claim 1, wherein
     the first training data includes a cell image whose pixel values have been converted so that the distribution of the pixel values approximates a predetermined probability distribution.
  3.  The learning model generation method according to claim 1 or claim 2, wherein
     the first training data includes a cell image whose pixel values have been converted so that the mean or variance of the pixel value distribution of the cell image equals the mean or variance of a predetermined probability distribution.
  4.  The learning model generation method according to any one of claims 1 to 3, wherein
     the first training data includes a cell image in which at least one of the mean of the pixel value distribution of the cell image, the variance of that distribution, and the amount of blurring of the cell image has been varied.
  5.  The learning model generation method according to any one of claims 1 to 4, further comprising:
     acquiring second training data including a medium image containing no cells and a nuclear staining image corresponding to the medium image; and
     generating the learning model, based on the acquired second training data, so as to output a nuclear staining image when a medium image containing no cells is input.
  6.  The learning model generation method according to claim 5, wherein
     the second training data includes a medium image whose pixel values have been converted so that the distribution of the pixel values approximates a predetermined probability distribution.
  7.  The learning model generation method according to claim 5 or claim 6, wherein
     the second training data includes a medium image whose pixel values have been converted so that the mean or variance of the pixel value distribution of the medium image equals the mean or variance of a predetermined probability distribution.
  8.  The learning model generation method according to any one of claims 5 to 7, wherein
     the second training data includes a medium image in which at least one of the mean of the pixel value distribution of the medium image, the variance of that distribution, and the amount of blurring of the medium image has been varied.
  9.  The learning model generation method according to any one of claims 5 to 8, wherein
     the medium image includes a medium image into which a frame-shaped region having a required pixel value has been inserted along the outer periphery.
  10.  The learning model generation method according to any one of claims 1 to 9, wherein
     the nuclear staining image includes a nuclear staining image in which stained cell nuclei have been converted into a predetermined shape and pixel value.
  11.  The learning model generation method according to any one of claims 1 to 10, wherein
     the cell image or the nuclear staining image includes a cell image or nuclear staining image into which a frame-shaped region having a required pixel value has been inserted along the outer periphery.
  12.  The learning model generation method according to any one of claims 1 to 11, wherein
     the learning model comprises a first network layer that extracts feature values from an input image, and a second network layer that outputs an output image based on the extracted feature values,
     the method comprising:
     acquiring third training data including a cell image obtained by imaging cells, a medium image containing no cells, and a nuclear staining image corresponding to the cell image or the medium image;
     training the first network layer, based on the acquired third training data, so as to output the cell image and the medium image when the cell image and the medium image are input;
     fixing the parameters of the trained first network layer and training the second network layer, based on the acquired third training data, so as to output a nuclear staining image when a cell image and a medium image are input; and
     releasing the fixing of the parameters of the first network layer and training the first network layer and the second network layer, based on the acquired third training data, so as to output a nuclear staining image when a cell image and a medium image are input, thereby generating the learning model.
  13.  An image processing method, comprising:
     acquiring a cell image obtained by imaging cells;
     inputting the acquired cell image into a learning model generated by the learning model generation method according to any one of claims 1 to 12 to acquire a nuclear staining image; and
     outputting the acquired nuclear staining image.
  14.  An image processing device, comprising:
     a learning model generated by the learning model generation method according to any one of claims 1 to 12;
     a first acquisition unit that acquires a cell image obtained by imaging cells;
     a second acquisition unit that inputs the acquired cell image into the learning model to acquire a nuclear staining image; and
     an output unit that outputs the acquired nuclear staining image.
  15.  A computer program causing a computer to execute processing of:
     acquiring a cell image obtained by imaging cells;
     inputting the acquired cell image into a learning model generated by the learning model generation method according to any one of claims 1 to 12 to acquire a nuclear staining image; and
     outputting the acquired nuclear staining image.
  16.  A learning model generated by the learning model generation method according to any one of claims 1 to 12.
PCT/JP2023/002769 2022-01-31 2023-01-30 Learning model generation method, image processing method, image processing device, computer program, and learning model WO2023145917A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022013091 2022-01-31
JP2022-013091 2022-01-31

Publications (1)

Publication Number Publication Date
WO2023145917A1

Family

ID=87471677

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/002769 WO2023145917A1 (en) 2022-01-31 2023-01-30 Learning model generation method, image processing method, image processing device, computer program, and learning model

Country Status (1)

Country Link
WO (1) WO2023145917A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014095667A (en) * 2012-11-12 2014-05-22 Mitsubishi Electric Corp Target type discrimination device and target type discrimination method
US20140270457A1 (en) * 2013-03-15 2014-09-18 The Board Of Trustees Of The University Of Illinois Stain-free histopathology by chemical imaging
JP2019135939A (en) * 2018-02-07 2019-08-22 澁谷工業株式会社 Cell count method, cell count device and cell culture method


Similar Documents

Publication Publication Date Title
JP5584289B2 (en) System and method for detecting low quality in 3D reconstruction
Campanella et al. Towards machine learned quality control: A benchmark for sharpness quantification in digital pathology
JP6791245B2 (en) Image processing device, image processing method and image processing program
WO2021139258A1 (en) Image recognition based cell recognition and counting method and apparatus, and computer device
US5790692A (en) Method and means of least squares designed filters for image segmentation in scanning cytometry
US20020186874A1 (en) Method and means for image segmentation in fluorescence scanning cytometry
JP2021166062A (en) Focal point weighting machine learning classifier error prediction for microscope slide image
US20110274336A1 (en) Optimizing the initialization and convergence of active contours for segmentation of cell nuclei in histological sections
JP5804220B1 (en) Image processing apparatus and image processing program
CN115760826B (en) Bearing wear condition diagnosis method based on image processing
JP4383352B2 (en) Histological evaluation of nuclear polymorphism
JP3946590B2 (en) Image processing method, image processing program, and image processing apparatus
JP5088329B2 (en) Cell feature amount calculating apparatus and cell feature amount calculating method
CN114764189A (en) Microscope system and method for evaluating image processing results
WO2023145917A1 (en) Learning model generation method, image processing method, image processing device, computer program, and learning model
WO2023145918A1 (en) Image processing method, image processing device, and computer program
US20060171589A1 (en) Grayscale character dictionary generation apparatus
WO2021009804A1 (en) Method for learning threshold value
CN111507977A (en) Method for extracting barium agent information in image
JP2023111299A (en) Information processing method, information processing device and computer program
CN112329497A (en) Target identification method, device and equipment
JP4772312B2 (en) Particle image analysis system, particle image display computer program, and recording medium
Jatti Segmentation of microscopic bone images
EP4379676A1 (en) Detection system, detection apparatus, learning apparatus, detection method, learning method and program
Demilew et al. Historical Ethiopic handwritten document recognition using deep learning

Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 23747127; country of ref document: EP; kind code of ref document: A1)
ENP: entry into the national phase (ref document number: 2023577058; country of ref document: JP; kind code of ref document: A)