WO2021070371A1 - Cell image analysis method and cell analysis device - Google Patents

Cell image analysis method and cell analysis device

Info

Publication number
WO2021070371A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
cell
region
nuclear
learning model
Prior art date
Application number
PCT/JP2019/040275
Other languages
French (fr)
Japanese (ja)
Inventor
秋絵 外口
Original Assignee
Shimadzu Corporation (株式会社島津製作所)
Priority date
Filing date
Publication date
Application filed by Shimadzu Corporation
Priority to JP2021551081A (patent JP7248139B2)
Priority to PCT/JP2019/040275
Publication of WO2021070371A1

Classifications

    • C: CHEMISTRY; METALLURGY
    • C12: BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12M: APPARATUS FOR ENZYMOLOGY OR MICROBIOLOGY; APPARATUS FOR CULTURING MICROORGANISMS FOR PRODUCING BIOMASS, FOR GROWING CELLS OR FOR OBTAINING FERMENTATION OR METABOLIC PRODUCTS, i.e. BIOREACTORS OR FERMENTERS
    • C12M1/00Apparatus for enzymology or microbiology
    • C12M1/34Measuring or testing with condition measuring or sensing means, e.g. colony counters

Definitions

  • The present invention relates to a cell image analysis method for analyzing and processing observation images of cells, and to a cell analysis device using that method.
  • The cell nucleus (hereinafter sometimes simply referred to as the "nucleus", according to convention) contains the DNA that carries the genetic information of the cell, and is one of the most important organelles in the cell, being responsible for cell division and the control of gene expression. Observation of nuclei in microscopic images of cells is therefore very important for various cell-related research and product development.
  • Patent Document 1 discloses a technique for staining a nucleus with a simple operation and then observing its morphology. Observation of cell nuclei is also important in the field of pathological image analysis.
  • Non-Patent Document 1 discloses a technique in which cell nuclei are extracted from images (pathological images) of cells whose nuclei have been stained, and the protein content per cell is quantified.
  • Patent Documents 2 and 3 disclose techniques for detecting the region of a cell nucleus by applying a machine-learning model to a microscope image of a cell or cell nucleus that has been stained.
  • Patent Document 4 discloses a technique for measuring the number of cell nuclei using a cell image (pathological image) obtained by staining cell nuclei.
  • However, staining cells and cell nuclei is an invasive method: the cells used for observation cannot be cultured further, nor can they be used as they are for another purpose, for example administered to patients as a product for regenerative medicine.
  • In particular, for pluripotent stem cells such as iPS cells and ES cells, the invasive observation methods described above cannot be used. There is therefore a strong demand for the establishment of a non-invasive, non-destructive method of observing cell nuclei.
  • Patent Document 5 discloses a technique for determining, non-invasively (that is, without staining), whether a cell is alive or dead, using a microscope that employs Raman scattered light.
  • Although Raman imaging can obtain spectral information for each pixel of the image, it requires complicated preliminary work, such as identifying which spectral features contribute to the live/dead determination.
  • Even though cell nuclei can be observed without staining with such a Raman-scattering microscope, complicated preliminary studies such as spectrum identification are required, and the area that can be imaged in a short time is small. The technique is therefore unsuitable for observing the nuclei of living cells across an entire well in which cells are cultured, and the total number of cell nuclei in such a well cannot be measured.
  • The present invention has been made to solve the above problems. Its main purpose is to provide a cell image analysis method and a cell analysis apparatus capable of non-invasively observing and analyzing the nuclei of cells present over a wide area of a container such as a well.
  • In one aspect of the cell image analysis method according to the present invention, a first learning model creation step creates a nuclear region learning model by machine learning on training data in which a phase image of cells, created from hologram data acquired by a holographic microscope, is the input image and a corresponding pseudo nuclear-region image, based on a stained image of the cell nuclei, is the correct image.
  • A second learning model creation step likewise creates a cell region learning model, using pseudo cell-region images based on cytoskeleton-stained images as the correct images. The method further comprises: a nuclear region estimation step of taking the phase image of the cells to be analyzed as the input image of the nuclear region learning model and acquiring a nuclear region estimation image showing the regions of the cell nuclei as the output image; a cell region estimation step of taking the same phase image as the input image of the cell region learning model and acquiring a cell region estimation image showing the cell regions as the output image; and a nuclear region extraction step of using the nuclear region estimation image and the cell region estimation image to extract only those cell nuclei lying within the range estimated to be cell regions.
  • One aspect of the cell analyzer according to the present invention, made to solve the above problems, comprises: a holographic microscope;
  • an image creation unit that creates a phase image of cells based on hologram data obtained by observing the cells with the holographic microscope;
  • a first learning model storage unit that stores a nuclear region learning model created by machine learning on training data in which the phase image created from the hologram data is the input image and the corresponding pseudo nuclear-region image, based on a stained image of the cell nuclei, is the correct image;
  • a second learning model storage unit that stores a cell region learning model created by machine learning on training data in which the phase image of the cells is the input image and the pseudo cell-region image, based on a stained image of the corresponding cytoskeleton, is the correct image;
  • a nuclear region estimation unit that, using the nuclear region learning model stored in the first learning model storage unit, takes the phase image created by the image creation unit for the cells to be analyzed as the input image and acquires a nuclear region estimation image showing the regions of the cell nuclei as the output image; and
  • a cell region estimation unit that, using the cell region learning model stored in the second learning model storage unit, takes the same phase image as the input image and acquires a cell region estimation image showing the cell regions as the output image.
  • The holographic microscope is typically a digital holographic microscope, and the phase image is, for example, an image reconstructed from phase information obtained by computational processing of the hologram data acquired by the digital holographic microscope.
  • To create the nuclear region learning model, phase images of suitable cells are paired with nuclear-stained images of the same cells, obtained by staining the cell nuclei and observing them with a fluorescence microscope or the like.
  • From such training data, a nuclear region learning model for identifying the regions of the cell nuclei is created by machine learning such as deep learning.
  • Interference fringes may appear in the phase image reconstructed from the hologram data. When nuclear regions are detected with the nuclear region learning model, these fringes can cause parts of the image that are not actually cell nuclei to be falsely detected as nuclei. Similarly, foreign matter such as dust on the medium in the culture vessel may be falsely detected as a cell nucleus.
  • Therefore, in addition to the nuclear region learning model, a cell region learning model for identifying the region over which the cytoskeleton is distributed is created by machine learning, using stained images of the cytoskeleton, such as actin filaments.
  • The cytoskeleton is a structural element that exists in fibrous or reticular form throughout the cytoplasm and determines the shape and morphology of the cell, so its distribution region can be regarded as roughly indicating the cell outline. That is, the cell region estimation image obtained with the cell region learning model in the cell region estimation step shows the division between the cell regions and the extracellular (background) region.
  • Since cell nuclei exist only within cells, the nuclear region estimation image and the cell region estimation image for the same imaging range can be compared, and any nucleus found in the background region outside the cells can be judged to be a false detection. By making this judgment, the nuclear region extraction step eliminates falsely detected nuclei and extracts only genuine cell nuclei.
  • With the cell image analysis method and the cell analyzer according to the present invention, cell nuclei can thus be extracted accurately from the phase image without invasive treatment such as staining of the cells to be analyzed.
  • Even when interference fringes or images of foreign matter appear in the phase image, their influence can be eliminated or reduced, and the cell nuclei can be observed accurately.
  • In addition, the true shape and morphology of each cell nucleus, and the state of its interior, can be observed.
  • Since the cells to be analyzed are imaged (measured) non-invasively, the cells can afterwards be cultured further or used for analysis or observation for another purpose.
  • The drawings include a schematic block diagram of one embodiment of the cell analysis apparatus using the cell image analysis method according to the present invention, and a conceptual diagram of the structure of the fully convolutional neural network used in the cell analysis apparatus of this embodiment.
  • A flowchart shows the flow of processing when creating a learning model in the cell analysis apparatus of this embodiment.
  • Another flowchart shows the flow of processing from the imaging of the cells to be analyzed to the output of the cell counting result in the cell analysis apparatus of this embodiment.
  • FIG. 7 is a diagram showing an example of a nuclear region estimation image output by estimation using a learning model with the IHM phase image shown in FIG. 6 as the input image.
  • FIG. 8 is a diagram showing the correct image for the nuclear region estimation image shown in FIG. 7, and FIG. 9 is a diagram showing an image in which nuclear position candidate points are superimposed on the IHM phase image.
  • FIG. 12 is a diagram showing a cell region estimation image (binary image) output by estimation using a learning model with the IHM phase image shown in FIG. 6 as the input image.
  • FIG. 1 is a block configuration diagram of a main part of a cell analysis apparatus according to an embodiment for carrying out the cell image analysis method according to the present invention.
  • the cell analysis device 1 of the present embodiment includes a microscopic observation unit 10, a control / processing unit 20, an input unit 30 and a display unit 40 which are user interfaces. Further, the cell analysis device 1 is provided with an FCN model creation unit 50.
  • The microscopic observation unit 10 is an in-line holographic microscope (IHM: In-line Holographic Microscopy). It includes a light source unit 11 containing a laser diode, and an image sensor 12; a culture plate 13 containing the cells 14 is placed between the light source unit 11 and the image sensor 12.
  • The control / processing unit 20 controls the operation of the microscopic observation unit 10 and processes the data it acquires. As functional blocks it includes an imaging control unit 21, a hologram data storage unit 22, a phase information calculation unit 23, an image creation unit 24, a nuclear region estimation unit 25, a cell region estimation unit 26, a mask processing unit 27, a cell counting unit 28, and a display processing unit 29.
  • The nuclear region estimation unit 25 includes a nuclear region learning model storage unit 251, and the cell region estimation unit 26 includes a cell region learning model storage unit 261.
  • The FCN model creation unit 50 includes, as functional blocks, a learning image data input unit 51, an image alignment processing unit 52, a stained image preprocessing unit 53, a stained image binarization unit 54, a learning execution unit 55, and a model construction unit 56.
  • The trained learning models created by the FCN model creation unit 50 are stored in the storage unit of the control / processing unit 20, where they function as the nuclear region learning model storage unit 251 and the cell region learning model storage unit 261.
  • In practice, the control / processing unit 20 is a personal computer with predetermined software installed, a higher-performance workstation, or a computer system including a high-performance computer connected to such a computer via a communication line. That is, the function of each block of the control / processing unit 20 can be embodied by processing, using the various kinds of stored data, executed by software installed on a system comprising a single computer or a plurality of computers.
  • Likewise, the FCN model creation unit 50 is in practice a personal computer with predetermined software installed, or a higher-performance workstation. Normally this is a computer separate from the control / processing unit 20, but it may be the same one; that is, the control / processing unit 20 can also take on the function of the FCN model creation unit 50.
  • When an analysis is performed, the imaging control unit 21 controls the microscopic observation unit 10 so as to acquire hologram data by the following procedure.
  • First, the light source unit 11 irradiates a predetermined area of the culture plate 13 with coherent light having an angular spread of about 10°.
  • the coherent light (object light 16) transmitted through the culture plate 13 and the cells 14 reaches the image sensor 12 while interfering with the light (reference light 15) transmitted through the region close to the cells 14 on the culture plate 13.
  • the object light 16 is light whose phase changes when it passes through the cell 14, while the reference light 15 is light that does not pass through the cell 14 and therefore does not undergo the phase change caused by the cell 14. Therefore, on the detection surface (image surface) of the image sensor 12, an image, that is, a hologram is formed by interference fringes between the object light 16 whose phase has been changed by the cells 14 and the reference light 15 whose phase has not changed.
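The interference described above can be sketched numerically: the intensity recorded by the image sensor is the squared magnitude of the sum of the object wave and the reference wave, so a phase delay picked up inside a cell changes the local intensity. The plane-wave geometry, the circular "cell", and the phase value below are toy assumptions for illustration, not the actual optics of the embodiment.

```python
import numpy as np

# Toy hologram: a reference wave (no phase shift) interferes with an
# object wave that acquires an extra phase where it passed through a "cell".
n = 64
y, x = np.mgrid[0:n, 0:n]
cell = ((x - n // 2) ** 2 + (y - n // 2) ** 2) < 10 ** 2  # circular "cell"
phase_shift = np.where(cell, 1.0, 0.0)                    # radians, toy value

reference = np.ones((n, n), dtype=complex)   # unit-amplitude plane wave
obj = np.exp(1j * phase_shift)               # same wave, phase-delayed by cell

hologram = np.abs(reference + obj) ** 2      # intensity recorded by the sensor

# Outside the cell the waves are in phase: intensity = |1 + 1|^2 = 4.
# Inside the cell the phase delay lowers the interference intensity.
```

In this sketch the phase information is encoded in the intensity pattern; the phase retrieval described below works backwards from such recorded intensities.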
  • To cover a wider area, the light source unit 11 and the image sensor 12 are moved together stepwise in the X-axis and Y-axis directions by a movement mechanism (not shown).
  • This moves the irradiation region (observation region) of the coherent light across the culture plate 13, so that hologram data (the two-dimensional light-intensity distribution of the hologram formed on the detection surface of the image sensor 12) can be acquired over a wide two-dimensional area.
  • the hologram data obtained by the microscopic observation unit 10 is sequentially sent to the control / processing unit 20 and stored in the hologram data storage unit 22.
  • The phase information calculation unit 23 reads the hologram data from the hologram data storage unit 22 and executes predetermined arithmetic processing for phase retrieval, calculating the phase information for the entire observation (imaging) area.
  • the image creation unit 24 creates an IHM phase image based on the calculated phase information.
  • A well-known algorithm, such as that disclosed in Patent Document 6, can be used to calculate the phase information and create the IHM phase image.
  • FIG. 6 shows an example of an IHM phase image. Transparent cells are difficult to see with a general optical microscope, but individual cells can be observed fairly clearly in the IHM phase image. It is, however, difficult to visually recognize the nucleus of each cell in this image. Therefore, in the cell analysis device 1 of the present embodiment, a fully convolutional neural network (FCN), one of the machine learning methods, is used to obtain a nuclear region estimation image showing, for each cell, the region in which a nucleus is presumed to exist.
  • FIG. 2 is a conceptual diagram of the structure of the FCN. The structure and processing of FCNs are explained in detail in many documents, and an FCN can be implemented using commercial or free software such as MATLAB from MathWorks (United States); only a schematic description is therefore given here.
  • The FCN includes, for example, a multi-layer network 60 in which convolutional layers and pooling layers are stacked in repetition, and a convolutional layer 61 corresponding to the fully connected layer of an ordinary convolutional neural network.
  • In the multi-layer network 60, convolution with a filter (kernel) of a predetermined size and pooling, which two-dimensionally reduces the convolution result while extracting its effective values, are repeated. The multi-layer network 60 may, however, consist only of convolutional layers, without pooling layers. In the final convolutional layer 61, local convolution and deconvolution are performed while a filter of a predetermined size slides over the input image.
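One convolution-plus-pooling pass of the multi-layer network can be illustrated in plain NumPy. The 3×3 identity kernel and 2×2 max-pooling below are illustrative choices, not the network's actual configuration, and real CNN layers use many learned kernels in parallel.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """2-D 'valid' convolution (strictly, cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2x2(img):
    """2x2 max pooling: keep the largest value in each 2x2 block."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.zeros((3, 3))
kernel[1, 1] = 1.0                  # identity kernel, chosen for clarity
feat = conv2d_valid(img, kernel)    # feature map, shape (4, 4)
pooled = max_pool2x2(feat)          # spatially reduced map, shape (2, 2)
```

The pooling step is what "reduces the convolution result two-dimensionally while extracting effective values"; an FCN's deconvolution layers then upsample back to the input resolution so that every pixel receives a label.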
  • By performing semantic segmentation on an input image 63 such as an IHM phase image, this FCN can output a segmentation image 64 in which the cell-nucleus regions are labeled pixel by pixel. In this embodiment, the multi-layer network 60 and the convolutional layer 61 are designed to label the input IHM phase image pixel by pixel; the smallest unit of a labeled region in the output segmentation image 64 is thus one pixel of the IHM phase image. Even a cell nucleus only about one pixel in size in the IHM phase image is therefore detected as a region in the segmentation image 64.
  • Before use, the coefficients (weights) of the filters in the convolutional layers of the multi-layer network 60 and in the final convolutional layer 61 must be trained in advance with a large number of training images to build a learning model. This training can be performed with the stochastic gradient descent method often used in machine learning (particularly deep learning).
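The idea of stochastic gradient descent is simple: repeatedly nudge the weights against the gradient of the loss evaluated on a single (or a few) randomly chosen training examples. The tiny one-weight least-squares problem below stands in for the FCN's many filter weights; the data, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "training data": inputs x, targets y = 3*x (the model must learn w = 3).
x = rng.normal(size=200)
y = 3.0 * x

w = 0.0      # a single weight, standing in for the FCN's filter coefficients
lr = 0.05    # learning rate (illustrative)

for epoch in range(30):
    for i in rng.permutation(len(x)):      # one randomly ordered sample per step
        pred = w * x[i]
        grad = 2.0 * (pred - y[i]) * x[i]  # d/dw of the squared error
        w -= lr * grad                     # stochastic gradient-descent update
```

In the FCN case the squared error is replaced by a pixel-wise segmentation loss against the correct image, but the update rule is the same in spirit.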
  • At learning-model creation, the learning image data input unit 51 first reads in a large number of sets of training data (also called teacher data), each set consisting of an IHM phase image created by the image creation unit 24 and the corresponding correct image (step S11).
  • The IHM phase images are created from data obtained by actually imaging cells with the cell analysis device 1 as described above; however, they are not necessarily limited to this particular device and may be obtained with another cell analysis device of the same configuration.
  • The correct image is a fluorescence image (nuclear-stained fluorescence image) obtained by staining only the nuclei of the cells from which the IHM phase image was created and imaging them with an appropriate microscope.
  • The staining method is not particularly limited as long as it stains the cell nuclei; for example, DAPI (4′,6-diamidino-2-phenylindole), propidium iodide, SYTOX (registered trademark), or TO-PRO (registered trademark)-3 can be used.
  • Next, the image alignment processing unit 52 aligns the two images by applying image processing such as translation, rotation, and enlargement/reduction to one of them (step S12). In general, it is advisable to align the nuclear-stained fluorescence image to the IHM phase image, in which the cells are more clearly visible.
  • This alignment work may be performed manually by the operator with reference to, for example, the edge of the well or the mark added to the culture plate, or may be automatically performed by a predetermined algorithm.
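When the misalignment is a pure translation, an automatic algorithm can recover it by cross-correlation: the correlation of the two images peaks at the offset between them. The NumPy sketch below handles only integer-pixel, circular shifts and ignores the rotation and scaling mentioned above; it is one possible algorithm, not necessarily the one used in the embodiment.

```python
import numpy as np

def find_shift(ref, moving):
    """Integer-pixel translation of `moving` relative to `ref`, found as the
    peak of the FFT-based circular cross-correlation (mean-subtracted)."""
    a = ref - ref.mean()
    b = moving - moving.mean()
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.random((32, 32))                         # stands in for the IHM phase image
moving = np.roll(ref, shift=(5, 9), axis=(0, 1))   # "stained image", shifted copy
dy, dx = find_shift(ref, moving)
aligned = np.roll(moving, shift=(-dy, -dx), axis=(0, 1))  # undo the shift
```

A mark on the culture plate or the well edge would serve the same role as the shared image content here: a feature visible in both images that pins down the offset.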
  • Next, the stained image preprocessing unit 53 performs noise removal and background removal so that the cell-nucleus regions become clearer in the nuclear-stained fluorescence image (steps S13 and S14).
  • The noise removal process aims to remove various types of noise; for example, filters such as linear filters and median filters can be used. The background removal process is mainly intended to remove intensity unevenness in the background outside the cell nuclei; background subtraction using an averaging filter or the like is a known method, and various methods used in conventional image processing can be employed. Since the noise situation depends on the characteristics of the microscope and of the target sample, the noise removal process can be omitted in some cases.
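A minimal version of averaging-filter background subtraction: estimate the background as a local mean over a window much larger than a nucleus, then subtract that estimate. The window size and the synthetic ramp-plus-nucleus image below are illustrative assumptions.

```python
import numpy as np

def mean_filter(img, k):
    """Local mean over a (2k+1) x (2k+1) window, using edge padding."""
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 2 * k + 1, j:j + 2 * k + 1].mean()
    return out

# Synthetic fluorescence image: smooth uneven background + one bright nucleus.
n = 40
y, x = np.mgrid[0:n, 0:n]
background = 0.5 * x / n              # illumination ramp across the field
img = background.copy()
img[18:22, 18:22] += 5.0              # small bright "nucleus"

est_bg = mean_filter(img, k=8)        # window much larger than the nucleus
corrected = img - est_bg              # background-subtracted image
```

Because the averaging window is large, the small bright nucleus barely affects the background estimate, while the slow ramp is removed almost completely.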
  • Next, the stained image binarization unit 54 binarizes the preprocessed image to create a binary image that clearly separates the nuclear regions from all other regions (step S15). The stained image binarization unit 54 then applies to the binarized image a closing process, a morphology operation combining dilation and erosion (step S16).
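The closing of step S16 can be sketched in pure NumPy: dilation followed by erosion with a 3×3 structuring element (an illustrative choice) fills small black holes inside a white region, such as a gap left by uneven staining, without changing the region's outline.

```python
import numpy as np

def dilate(img, it=1):
    """Binary dilation with a 3x3 structuring element."""
    out = img.astype(bool)
    for _ in range(it):
        p = np.pad(out, 1)  # out-of-bounds treated as background
        out = (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2]
               | p[1:-1, 2:] | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:])
    return out

def erode(img, it=1):
    """Binary erosion, expressed as the complement of dilating the complement."""
    return ~dilate(~img.astype(bool), it)

def closing(img, it=1):
    """Closing: dilation followed by erosion; fills holes smaller than the kernel."""
    return erode(dilate(img, it), it)

# Synthetic binarized nucleus with a one-pixel hole in its interior.
binary = np.zeros((11, 11), dtype=bool)
binary[3:8, 3:8] = True
binary[5, 5] = False          # hole caused by uneven staining
closed = closing(binary)      # hole is filled; outline is unchanged
```

The same operation, applied later to the cytoskeleton image, is what turns a fibrous stained pattern into a filled cell region.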
  • FIG. 5(a) is an original nuclear-stained fluorescence image, FIG. 5(b) is the image of FIG. 5(a) after background removal, and FIG. 5(c) is an example of the binary image obtained by binarizing the background-removed image. Here, as in the images described below, mesenchymal stem cells (MSCs) were the target, and DAPI was used for nuclear staining.
  • In the original image, the intensity unevenness of the background outside the nuclear regions is large, so binarization cannot extract the nuclear regions unless the background removal is performed. By performing the background removal in advance, a binary image in which the nuclear regions are accurately extracted can be obtained, as shown in FIG. 5(c).
  • This binary image is an image in which the cell-nucleus regions and the other regions are semantically segmented pixel by pixel with respect to the corresponding IHM phase image.
  • The closing process after binarization mainly removes small noise in the fluorescence image, such as bright-spot noise; it may be omitted depending on the noise situation and on the performance of the noise removal in step S13.
  • The learning execution unit 55 then executes FCN training using the large number of training data (step S17). That is, the filter coefficients of the convolutional layers of the FCN are learned so that the result of semantic segmentation by the FCN is as close as possible to the correct image.
  • The model building unit 56 builds the model as the learning iterates and, when the predetermined learning is completed, saves the learning model based on the learning result (step S18).
  • the data forming the learning model indicating the nuclear region created in this way is stored in the nuclear region learning model storage unit 251.
  • Interference fringes inevitably appear to some extent in the hologram data obtained by the microscopic observation unit 10, and they may also appear in the IHM phase image.
  • In the IHM phase image shown in FIG. 6, a substantially concentric pattern derived from interference fringes appears fairly clearly.
  • When the interference fringes are distinct in the IHM phase image, parts of them may be falsely detected as cell nuclei when the nuclear region is estimated using the nuclear region learning model.
  • FIG. 7 is a diagram showing an example of a nuclear region estimation image obtained by estimation using a nuclear region learning model using the IHM phase image shown in FIG. 6 as an input image.
  • FIG. 8 is a diagram showing a correct image of the nuclear region estimation image shown in FIG. 7, that is, an accurate nuclear region.
  • FIG. 9 is a diagram showing an image in which the points that are the nuclear region estimation results (nuclear position candidate points) are superimposed on the IHM phase image.
  • FIG. 10 is a diagram showing an image in which the correct points of the nuclear regions are superimposed on the IHM phase image.
  • The points indicating nuclear positions in FIGS. 9 and 10 are rectangular; they were obtained by applying the maximum-value region extraction and binarization processes described later to the grayscale nuclear region image.
  • Comparing FIGS. 9 and 10, it can be seen that in FIG. 9 a large number of false nuclear regions are detected on the interference fringes, where no cell nuclei should exist. This means that simply estimating nuclear regions with the nuclear region learning model yields a large number of false detections. In the cell analysis apparatus of the present embodiment, therefore, a learning model for recognizing cell regions is created separately from the learning model for recognizing nuclear regions, in order to exclude such falsely detected nuclear regions.
  • For this purpose, the cytoskeleton, which exists in fibrous or reticular form throughout the interior of the cell, is used.
  • FIGS. 11(a) and 11(b) are, respectively, a fluorescence-stained image of actin filaments and a bright-field image of the same observation region obtained with a normal microscope.
  • Actin filaments are one type of cytoskeleton and are present as fibers throughout the interior of cells.
  • As shown in FIG. 11(b), the cell regions are difficult to recognize visually in the bright-field image; but as FIG. 11(a) shows, the actin filaments are present in almost the entire cell, so the range over which they are distributed can be regarded as the cell region. In the cell analyzer of the present embodiment, therefore, a cell region learning model for extracting cell regions is created using fluorescence images of the stained cytoskeleton (here, actin filaments) as the correct images.
  • The cell region learning model can be created by the same procedure as the nuclear region learning model described above.
  • The cytoskeleton stretches in fibers through the interior of the cell but is not necessarily distributed evenly; where it is absent or sparse within a cell, some pixels become black after binarization. The closing process performed after binarization, however, converts black pixels whose surroundings are white into white pixels. As a result, an image is obtained that shows not only the parts where the cytoskeleton actually exists but the entire range over which it is distributed, that is, the entire cell region. The image after closing is thus one in which the cell regions and the other regions are separated pixel by pixel with respect to the corresponding IHM phase image.
  • The learning execution unit 55 then executes FCN training using this large set of training data, and the model building unit 56 builds the model as the learning iterates, saving the cell region learning model based on the learning result when the predetermined learning is completed.
  • The data constituting the cell region learning model created in this way are stored in the cell region learning model storage unit 261 of the cell analysis device 1.
  • the operator sets the culture plate 13 containing the cells 14 to be analyzed at a predetermined position of the microscopic observation unit 10, and performs a predetermined operation on the input unit 30.
  • the microscopic observation unit 10 photographs the sample (cells 14 in the culture plate 13) (step S21).
  • the phase information calculation unit 23 and the image creation unit 24 perform phase calculation based on the hologram data obtained by the photographing, and form an IHM phase image (step S22).
  • Next, the nuclear region estimation unit 25 reads the IHM phase image obtained in step S22 as the input image, performs FCN processing using the nuclear region learning model stored in the nuclear region learning model storage unit 251, and acquires the segmentation image corresponding to the input image as the output image (step S23).
  • This segmentation image is a nuclear region estimation image that shows the cell-nucleus regions separately from all other regions, over the same observation range as the input IHM phase image.
  • Here, the nuclear region estimation unit 25 outputs a grayscale image whose gradation from white to black varies with the probability value of each pixel. That is, in the nuclear region estimation image, portions estimated with high confidence to be nuclear regions are displayed in white or near-white gradations, and portions estimated with low confidence in gradations closer to black.
  • For the IHM phase image shown in FIG. 6, the nuclear region estimation image shown in FIG. 7 is obtained. As described above, when interference fringes appear fairly clearly in the input IHM phase image, or when an image of foreign matter appears, false nuclear regions appear at sites where no cell nucleus actually exists.
  • The cell region estimation unit 26 likewise reads the IHM phase image obtained in step S22 as the input image and performs FCN processing using the cell region learning model stored in the cell region learning model storage unit 261, acquiring the segmentation image corresponding to the input image as the output image (step S24).
  • This segmentation image is a cell region estimation image that shows the cell regions separately from the other regions, over the same observation range as the input IHM phase image.
  • FIG. 12 is an example of a cell region estimation image obtained by using the IHM phase image shown in FIG. 6 as an input image.
  • the cell region estimation unit 26 compares the probability value of each pixel with a predetermined threshold value, and outputs a binary segmentation image in which pixels whose probability value is equal to or higher than the threshold are white and all other pixels are black.
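A minimal sketch of this binarization step, assuming the cell region estimation unit's probability output is available as a NumPy array; the threshold value of 0.5 is an illustrative assumption, since the actual predetermined threshold is not stated here.

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Return a binary segmentation image: pixels whose probability is at
    or above the threshold become white (255), all others black (0)."""
    return np.where(prob_map >= threshold, 255, 0).astype(np.uint8)

cell_prob = np.array([[0.2, 0.6], [0.5, 0.9]])  # hypothetical FCN output
print(binarize(cell_prob))  # [[  0 255] [255 255]]
```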
  • the mask processing unit 27 performs mask processing on the nuclear region estimation image obtained in step S23, using the binarized cell region estimation image obtained in step S24 (step S25). That is, only the nuclear regions lying within the white part of the binary image shown in FIG. 12, i.e. within the cell regions, are retained, and all other nuclear regions are excluded.
  • the nuclear region excluded at this time is a nuclear region erroneously detected due to the influence of interference fringes or an image of a foreign substance.
  • FIG. 13 is a nuclear region estimation image after mask processing, and as can be seen by comparison with FIG. 7, the number of nuclear regions is significantly reduced.
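The mask processing of step S25 amounts to a per-pixel gating of the grayscale nuclear region estimation image by the binary cell region image. A minimal NumPy sketch follows; the array names and sample values are illustrative assumptions.

```python
import numpy as np

def mask_nuclear_image(nuclear_gray, cell_binary):
    """Keep the grayscale nuclear estimation only where the binary cell
    region image is white (non-zero); zero out everything else."""
    return np.where(cell_binary > 0, nuclear_gray, 0).astype(nuclear_gray.dtype)

nuclear = np.array([[200, 180], [150, 90]], dtype=np.uint8)  # grayscale estimate
cells = np.array([[255, 0], [255, 0]], dtype=np.uint8)       # binary cell mask
print(mask_nuclear_image(nuclear, cells))  # [[200   0] [150   0]]
```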
  • the nuclear region estimation image after this mask processing is a grayscale image, and the peak signal value differs from nucleus to nucleus. A determination process using a single threshold value therefore cannot detect all nuclei. Furthermore, since multiple adjacent nuclei may appear as a single region, it is preferable to perform processing that separates them. The mask processing unit 27 therefore performs maximum value region extraction processing on the masked nuclear region estimation image.
  • first, the regions other than the black regions are spatially dilated.
  • next, the overall brightness is lowered by subtracting, from the signal value (luminance value) of each pixel, an offset value determined in advance in consideration of the noise tolerance.
  • then, a pixel-by-pixel subtraction of luminance values is performed between the image before the dilation process and the image after the brightness reduction.
  • as a result, the luminance value becomes non-zero only in a narrow range near each luminance peak of the original image, regardless of the peak value, and becomes zero in all other regions. That is, this processing extracts maximum value regions whose luminance is higher than that of their surroundings in the original image.
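The three-step extraction above (dilation, offset subtraction, pixel-wise difference) can be sketched without any image processing library. The 3×3 dilation kernel and the offset value of 10 are assumptions for illustration; the patent does not specify either.

```python
import numpy as np

def dilate3x3(img):
    """Grayscale dilation with a 3x3 square kernel: each pixel takes the
    maximum value in its 3x3 neighborhood (edges padded with 0)."""
    h, w = img.shape
    p = np.pad(img.astype(int), 1, constant_values=0)
    shifts = [p[r:r + h, c:c + w] for r in range(3) for c in range(3)]
    return np.max(shifts, axis=0)

def extract_maxima(img, offset=10):
    """Maximum value region extraction: dilate, lower brightness by a
    noise-tolerance offset, then subtract from the original image.
    The result is non-zero only near local luminance peaks."""
    lowered = np.clip(dilate3x3(img) - offset, 0, None)
    return np.clip(img.astype(int) - lowered, 0, None)

# A single synthetic "nucleus" with its peak at the center.
img = np.array([[0,  0,   0,  0, 0],
                [0, 10,  30, 10, 0],
                [0, 30, 100, 30, 0],
                [0, 10,  30, 10, 0],
                [0,  0,   0,  0, 0]])
out = extract_maxima(img, offset=10)
print(np.argwhere(out > 0))  # only the peak location [[2 2]]
```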
  • the centroid position may also be obtained for each cell nucleus and used as its nuclear position candidate point.
  • the mask processing unit 27 creates an image by extracting the maximum value regions from the masked nuclear region estimation image, and then binarizes it to obtain a nuclear position candidate point image showing the candidate points of the nuclear regions (step S26).
  • FIG. 14 shows the nuclear position candidate points superimposed on the IHM phase image shown in FIG. 6. Comparing FIG. 14 with FIG. 10, it can be seen that the nuclear position candidate points substantially coincide with the correct nuclear positions. That is, by performing mask processing using the cell region estimation image, false nuclear regions can be accurately eliminated and only the regions that are truly cell nuclei extracted.
  • the cell counting unit 28 counts the nuclear position candidate points in the nuclear position candidate point image obtained in step S26 (step S27). Except in special cases, one cell has one cell nucleus, so the number of cell nuclei can be regarded as representing the number of cells; the display processing unit 29 therefore displays the counting result as the number of cells on the display unit 40 (step S28).
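The counting of step S27 amounts to counting the connected groups of white pixels in the binary candidate point image. A self-contained sketch follows; the choice of 4-connectivity is an assumption, and the sample array is illustrative.

```python
import numpy as np
from collections import deque

def count_components(binary):
    """Count 4-connected white (non-zero) regions in a binary image.
    Each region corresponds to one nuclear position candidate point, and
    hence (barring special cases) to one cell."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                count += 1                      # new region found
                seen[y, x] = True
                q = deque([(y, x)])
                while q:                        # flood-fill the region
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

pts = np.array([[1, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 1, 0, 1]])
print(count_components(pts))  # 3
```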
  • one or more of the IHM phase image, the nuclear region estimation image, the nuclear position candidate point image, the cell region estimation image, and the like may be displayed together.
  • as described above, the cell analysis device of the present embodiment can accurately calculate the number of cells present in the observation range and present it to the user. Further, if an image in which the nuclear position candidate points are superimposed on the IHM phase image, as shown in FIG. 14, is displayed on the display unit 40, the user can easily grasp the position of the cell nucleus within each cell. Likewise, if a nuclear region estimation image such as that shown in FIG. 13 is displayed on the display unit 40, the user can easily grasp the shape and morphology of the cell nuclei. Since the counting of living cells and the observation of cell nuclei described above are performed non-invasively, the cells used for the analysis and observation can be cultured further or used for another purpose.
  • in the above embodiment, the grayscale nuclear region estimation image is masked, and the maximum value region extraction and binarization processes are then performed to obtain the nuclear position candidate points for each nuclear region; however, the mask processing using the cell region estimation image may be performed at a different stage.
  • for example, the mask processing may be applied after binarizing the grayscale nuclear region estimation image: for each nuclear region, when that nuclear region overlaps no part of the cell regions on the cell region estimation image, the nuclear region may be regarded as a false positive and excluded.
  • various methods can be adopted as the mask processing method.
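The per-region variant just described can be sketched as follows: each connected nuclear region in the binarized nuclear region estimation image is kept only if it shares at least one pixel with the cell regions. The 4-connectivity and the array representation are assumptions for illustration.

```python
import numpy as np
from collections import deque

def reject_nonoverlapping_nuclei(nuclear_binary, cell_binary):
    """Per-region masking variant: remove every 4-connected nuclear
    region that has no pixel in common with the binary cell region image."""
    h, w = nuclear_binary.shape
    seen = np.zeros((h, w), dtype=bool)
    out = np.zeros_like(nuclear_binary)
    for y in range(h):
        for x in range(w):
            if nuclear_binary[y, x] and not seen[y, x]:
                # Collect one connected nuclear region by flood fill.
                region = [(y, x)]
                seen[y, x] = True
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and nuclear_binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            region.append((ny, nx))
                            q.append((ny, nx))
                # Keep the region only if it overlaps a cell region.
                if any(cell_binary[ry, rx] for ry, rx in region):
                    for ry, rx in region:
                        out[ry, rx] = nuclear_binary[ry, rx]
    return out

nuclei = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 1]], dtype=np.uint8)
cells = np.array([[1, 0, 0, 0],
                  [0, 0, 0, 0]], dtype=np.uint8)
# The two-pixel region touching a cell survives; the isolated one is dropped.
print(reject_nonoverlapping_nuclei(nuclei, cells))
```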
  • in the above embodiment, an FCN is used as the machine learning method for semantic segmentation of cell nuclei, but a normal convolutional neural network (CNN) may of course be used instead. Moreover, the present invention can be applied effectively not only to machine learning methods using neural networks but to any machine learning method capable of semantic segmentation of images.
  • machine learning methods include, for example, Support Vector Machine (SVM), Random Forest, and AdaBoost.
  • an FCN can output the estimated segmentation probability when producing a segmentation image for the input image; with other methods such as CNN or SVM, the entire image can instead be scanned with patch images (small sub-region images into which the whole image is finely divided), outputting the probability of cell-nucleus-likeness at the center point of each patch.
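The patch-scanning approach mentioned here can be sketched as follows. Here `classify_patch` is a hypothetical stand-in for a trained classifier (CNN, SVM, etc.) that returns the nucleus-likeness probability for the center of a patch, and the patch size of 9 is an arbitrary illustrative choice.

```python
import numpy as np

def scan_probability_map(image, classify_patch, patch=9):
    """Scan the image with fixed-size patches and build a probability map
    whose value at each pixel is the classifier's output for the patch
    centered on that pixel (borders handled by zero padding)."""
    r = patch // 2
    padded = np.pad(image, r, constant_values=0)
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = classify_patch(padded[y:y + patch, x:x + patch])
    return out

# Toy classifier: mean patch intensity as a stand-in probability.
demo = scan_probability_map(np.ones((4, 4)), lambda p: float(p.mean()), patch=3)
```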
  • in the above embodiment, an in-line holographic microscope is used as the microscopic observation unit 10, but any microscope capable of acquiring holograms may be used; it can naturally be replaced with a holographic microscope of another type, such as an off-axis type or a phase-shift type.
  • one aspect of the cell image analysis method according to the present invention includes: a first learning model creation step of creating a nuclear region learning model by machine learning, using learning data in which a phase image of cells, created based on hologram data acquired by a holographic microscope, is the input image and a corresponding pseudo nuclear region image based on a stained image obtained by staining the cell nuclei is the correct image; a second learning model creation step of creating a cell region learning model by machine learning, using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image based on a stained image obtained by staining the cytoskeleton is the correct image; a nuclear region estimation step of acquiring, using the nuclear region learning model, a nuclear region estimation image showing the regions of cell nuclei as the output image, with a phase image of the cells to be analyzed as the input image; a cell region estimation step of acquiring, using the cell region learning model, a cell region estimation image showing the cell regions as the output image, with the phase image of the cells to be analyzed as the input image; and a nuclear region extraction step of extracting, using the nuclear region estimation image and the cell region estimation image, cell nuclei present in the range estimated to be cell regions.
  • one aspect of the cell analyzer according to the present invention includes: a holographic microscope; an image creation unit that creates a phase image of cells based on hologram data obtained by observing the cells with the holographic microscope; a first learning model storage unit that stores a nuclear region learning model created by machine learning using learning data in which the phase image of the cells created based on the hologram data is the input image and a corresponding pseudo nuclear region image based on a stained image obtained by staining the cell nuclei is the correct image; a second learning model storage unit that stores a cell region learning model created by machine learning using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image based on a stained image obtained by staining the cytoskeleton is the correct image; a nuclear region estimation unit that, using the nuclear region learning model stored in the first learning model storage unit, takes the phase image created by the image creation unit for the cells to be analyzed as the input image and acquires a nuclear region estimation image showing the regions of cell nuclei as the output image; and a cell region estimation unit that, using the cell region learning model stored in the second learning model storage unit, takes the phase image of the cells to be analyzed as the input image and acquires a cell region estimation image showing the cell regions as the output image.
  • according to these aspects, cell nuclei can be accurately extracted from the phase image of the cells to be analyzed, created based on hologram data, without performing invasive processing such as staining. In particular, even when interference fringes or images of foreign matter appear in the phase image, their influence can be eliminated or reduced, and the cell nuclei can be observed accurately. Then, for example, by counting the cell nuclei, information on the number of cells can be obtained. The true shape and morphology of a cell nucleus, or the state inside it, can also be observed. Moreover, since the cells to be analyzed are imaged (measured) non-invasively, the cells can be cultured further after imaging or used for analysis or observation for another purpose.
  • the cell image analysis method according to paragraph 1 may further include a cell counting step for counting the number of cell nuclei extracted in the nuclear region extraction step.
  • the cell analyzer according to item 7 can further include a cell counting unit that counts the number of cell nuclei extracted by the nuclear region extraction unit.
  • one cell has one cell nucleus, so the number of cell nuclei can be considered to represent the number of cells.
  • according to these configurations, the number of accurately extracted cell nuclei can be counted, so the number of cells present in the observation range can be calculated accurately.
  • the cell image analysis method can further include a display step of displaying a nuclear region estimation image in which the cell nuclei extracted in the nuclear region extraction step are shown.
  • the cell analyzer can further include a display processing unit that displays a nuclear region estimation image in which the cell nuclei extracted by the nuclear region extraction unit are shown.
  • according to these configurations, the user can observe the shape and morphology of the accurately extracted cell nuclei based on the displayed image.
  • in the cell image analysis method, the cell region estimation image can be a binary image in which the cell regions are separated from the other regions.
  • likewise, in the cell analyzer, the cell region estimation image can be a binary image in which the cell regions are separated from the other regions.
  • according to these configurations, the distinction between the cell regions and the other regions is made clear, so regions likely to be cell nuclei can be extracted by accurate and simple processing.
  • the machine learning can be performed using a convolutional neural network.
  • the convolutional neural network can be a fully convolutional neural network.
  • the machine learning can be performed using a convolutional neural network.
  • the convolutional neural network can be a fully convolutional neural network.
  • according to these configurations, the regions likely to be cell nuclei and the cell regions can each be estimated accurately.
  • the cell analyzer can further include a learning model creation unit that creates a learning model by performing machine learning using learning data in which a cell phase image is the input image and a pseudo region image based on the corresponding stained image is the correct image; the nuclear region learning model and the cell region learning model can then be created using this learning model creation unit.
  • that is, the cell analyzer according to the thirteenth paragraph itself has the function of creating the learning models used to obtain the region estimation images from a phase image of cells. Therefore, according to the cell analyzer of the thirteenth paragraph, the learning models can easily be improved, for example by adding the nuclear region estimation image obtained in the nuclear region estimation unit for the phase image of the cells under analysis to the training data and retraining the learning model.


Abstract

One aspect of a cell analysis device according to the present invention is provided with: a holographic microscope (10); an image creation unit (23, 24) for creating a phase image of a cell on the basis of hologram data; a first learning model storage unit (251) for storing a nuclear region learning model that is created by performing machine learning through setting the phase image of the cell as an input image and using learning data in which a pseudo nuclear region image corresponding to the input image and based on a stain image obtained by staining a cell nucleus is defined as a correct image; a second learning model storage unit (261) for storing a cell region learning model that is created by performing machine learning through setting the phase image of the cell as an input image and using learning data in which a pseudo cell region image corresponding to the input image and based on the stain image obtained by staining cytoskeletons is defined as a correct image; a nuclear region inference unit (25) for obtaining, as an output image, a nuclear region inference image indicating the region of the cell nucleus by using the nuclear region learning model and setting, as an input image, the phase image created for the cell being analyzed; a cell region inference unit (26) for obtaining, as an output image, a cell region inference image indicating a cell region by using the cell region learning model and setting, as an input image, the phase image for the cell being analyzed; and a nuclear region extraction unit (27) for extracting a cell nucleus present in a range inferred to be the cell region by using the nuclear region inference image and the cell region inference image.

Description

Cell image analysis method and cell analysis device
 The present invention relates to a cell image analysis method for analyzing and processing an observation image obtained for a cell, and a cell analysis device using the method.
 The cell nucleus (hereinafter sometimes referred to simply as the "nucleus," following convention) contains the DNA that carries the cell's genetic information, and is one of the most important organelles in the cell, being responsible for cell division and the control of gene expression. Observation of nuclei in microscopic images of cells is therefore very important for various kinds of cell-related research and product development.
 It is generally considered difficult to observe the nucleus in a living cell. For example, Patent Document 1 discloses a technique for staining the nucleus with a simple operation and then observing its morphology. Observation of cell nuclei is also important in the field of pathological image analysis; for example, Non-Patent Document 1 discloses a technique for extracting cell nuclei from images (pathological images) obtained of cells with stained nuclei and quantifying the protein per cell. Further, Patent Documents 2 and 3 disclose techniques for detecting cell nucleus regions using machine learning models trained on microscope images of stained cells or cell nuclei. Furthermore, Patent Document 4 discloses a technique for counting the number of cell nuclei using cell images (pathological images) in which the cell nuclei are stained.
 However, as described above, staining cells or cell nuclei is an invasive method: the cells used for observation cannot be cultured further, nor can they be used as they are for another purpose, for example administered to a patient as a regenerative medicine product. In research and development of regenerative medicine using pluripotent stem cells such as iPS cells and ES cells, it is necessary to culture large quantities of undifferentiated cells while maintaining their pluripotency. This requires frequently checking the state of the cells during culture, but the invasive observation methods described above cannot be used in such fields. There is therefore a strong demand for a non-invasive, non-destructive method for observing cell nuclei.
 Meanwhile, Patent Document 5 discloses a technique for determining whether cells are alive or dead without staining, that is, non-invasively, using a microscope based on Raman scattered light. However, although Raman imaging can obtain spectral information for each corresponding pixel in an image, it requires complicated processing, such as identifying the spectra that contribute to the determination in living cells. Moreover, while a Raman-scattering microscope can observe cell nuclei without staining, it not only requires laborious preliminary work such as spectrum identification, but is also constrained to a small observation area that can be imaged in a short time. It is therefore unsuitable for observing the nuclei of the living cells present throughout a well in which cells are cultured, and it is likewise impossible to count the cell nuclei in the entire well.
International Patent Publication No. 2012/095935
Japanese Unexamined Patent Publication No. 2019-95853
Japanese Unexamined Patent Publication No. 2019-91307
Japanese Patent No. 5924406
International Patent Publication No. 2014/162744
International Patent Publication No. 2016/162945
 The present invention has been made to solve the above problems, and its main object is to provide a cell image analysis method and a cell analysis device capable of satisfactorily observing and analyzing, in a non-invasive manner, the nuclei of cells present over a wide area in a container such as a well.
 One aspect of the cell image analysis method according to the present invention, made to solve the above problems, includes:
a first learning model creation step of creating a nuclear region learning model by performing machine learning using learning data in which a phase image of cells, created based on hologram data acquired by a holographic microscope, is the input image and a corresponding pseudo nuclear region image based on a stained image obtained by staining the cell nuclei is the correct image;
a second learning model creation step of creating a cell region learning model by performing machine learning using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image based on a stained image obtained by staining the cytoskeleton is the correct image;
a nuclear region estimation step of acquiring, using the nuclear region learning model, a nuclear region estimation image showing the regions of cell nuclei as the output image, with a phase image of the cells to be analyzed as the input image;
a cell region estimation step of acquiring, using the cell region learning model, a cell region estimation image showing the cell regions as the output image, with the phase image of the cells to be analyzed as the input image; and
a nuclear region extraction step of extracting, using the nuclear region estimation image and the cell region estimation image, cell nuclei present in the range estimated to be cell regions.
 Further, one aspect of the cell analysis device according to the present invention, made to solve the above problems, includes:
a holographic microscope;
an image creation unit that creates a phase image of cells based on hologram data obtained by observing the cells with the holographic microscope;
a first learning model storage unit that stores a nuclear region learning model created by performing machine learning using learning data in which the phase image of the cells created based on the hologram data is the input image and a corresponding pseudo nuclear region image based on a stained image obtained by staining the cell nuclei is the correct image;
a second learning model storage unit that stores a cell region learning model created by performing machine learning using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image based on a stained image obtained by staining the cytoskeleton is the correct image;
a nuclear region estimation unit that, using the nuclear region learning model stored in the first learning model storage unit, takes the phase image created by the image creation unit for the cells to be analyzed as the input image and acquires a nuclear region estimation image showing the regions of cell nuclei as the output image;
a cell region estimation unit that, using the cell region learning model stored in the second learning model storage unit, takes the phase image of the cells to be analyzed as the input image and acquires a cell region estimation image showing the cell regions as the output image; and
a nuclear region extraction unit that, using the nuclear region estimation image and the cell region estimation image, extracts cell nuclei present in the range estimated to be cell regions.
 The holographic microscope is typically a digital holographic microscope, and the phase image is, for example, an image reconstructed based on phase information obtained through computational processing of the hologram data acquired by the digital holographic microscope.
 In the cell image analysis method and cell analysis device according to the above aspects of the present invention, a nuclear region learning model for identifying cell nucleus regions is created by machine learning, such as deep learning, using phase images of suitable cells together with nuclear-stained images obtained by staining the nuclei of the same cells and observing them with a fluorescence microscope or the like. However, owing to the measurement principle, interference fringes may appear in a phase image reconstructed from hologram data; when nuclear regions are detected using the nuclear region learning model alone, the influence of such interference fringes can cause portions that are not actually cell nuclei to be erroneously detected as nuclei. Likewise, if foreign matter such as dust is present on the culture medium in the culture vessel, it may be falsely detected as a cell nucleus.
 In contrast, in the cell image analysis method and cell analysis device according to the above aspects of the present invention, separately from the nuclear region learning model, a cell region learning model for identifying the regions where the cytoskeleton is distributed is created by machine learning using stained images in which cytoskeletal components such as actin filaments are stained. The cytoskeleton is a structural element present throughout the cytoplasm in fibrous or reticular form, and it determines the shape and morphology of the cell. The distribution region of the cytoskeleton can therefore be regarded as roughly indicating the cell shape. That is, the cell region estimation image obtained using the cell region learning model in the cell region estimation step shows the division between cell regions and extracellular (background) regions. Since cell nuclei naturally exist within cell regions, the nuclear region estimation image and the cell region estimation image for the same imaging range can be compared, and any cell nucleus found in the background region outside the cells can be judged to be a false detection. By making this judgment, the nuclear region extraction step can eliminate falsely detected cell nuclei and extract only cell nuclei with higher certainty.
 In this way, according to the cell image analysis method and cell analysis device of the above aspects of the present invention, cell nuclei can be accurately extracted from the phase image without performing invasive treatment, such as staining, on the cells to be analyzed. In particular, even when interference fringes or images caused by foreign matter appear in the phase image, their influence can be eliminated or reduced and the cell nuclei can be observed accurately. Then, for example, by counting the cell nuclei, information on the number of cells can be obtained. The true shape and morphology of a cell nucleus, or its internal state, can also be observed. Moreover, since imaging (measurement) of the cells to be analyzed is non-invasive, the cells can be cultured further after imaging or used for analysis or observation for another purpose.
FIG. 1 is a schematic configuration diagram of one embodiment of a cell analysis device using the cell image analysis method according to the present invention.
FIG. 2 is a conceptual diagram of the structure of the fully convolutional neural network used in the cell analysis device of this embodiment.
FIG. 3 is a flowchart showing the flow of processing when creating a learning model in the cell analysis device of this embodiment.
FIG. 4 is a flowchart showing the flow of processing in the cell analysis device of this embodiment, from imaging of the cells to be analyzed to output of the cell counting result.
FIG. 5 shows the processing of a correct image used when creating a learning model in the cell analysis device of this embodiment: the original nuclear-stained fluorescence image (a), the image after background removal (b), and the image after binarization (c).
FIG. 6 shows an example of an IHM phase image obtained with the cell analysis device of this embodiment.
FIG. 7 shows an example of a nuclear region estimation image output by estimation with a learning model using the IHM phase image of FIG. 6 as the input image.
FIG. 8 shows the correct image for the nuclear region estimation image of FIG. 7.
FIG. 9 shows an image in which nuclear position candidate points are superimposed on the IHM phase image of FIG. 6.
FIG. 10 shows an image in which the correct nuclear region points are superimposed on the IHM phase image of FIG. 6.
FIG. 11 shows an example of an actin filament stained image and a bright-field image.
FIG. 12 shows the cell region estimation image (binary image) output by estimation with a learning model using the IHM phase image of FIG. 6 as the input image.
FIG. 13 shows the nuclear region estimation image after mask processing using the cell region estimation image of FIG. 12.
FIG. 14 shows an image in which the nuclear position candidate points after mask processing are superimposed on the IHM phase image of FIG. 6.
Hereinafter, an embodiment of the cell image analysis method and the cell analysis device according to the present invention will be described with reference to the accompanying drawings.
FIG. 1 is a block configuration diagram of the main components of a cell analysis device which is one embodiment for carrying out the cell image analysis method according to the present invention.
The cell analysis device 1 of the present embodiment includes a microscopic observation unit 10, a control/processing unit 20, and an input unit 30 and a display unit 40 serving as the user interface. An FCN model creation unit 50 is also attached to the cell analysis device 1.
The microscopic observation unit 10 is an in-line holographic microscope (IHM). It includes a light source unit 11 containing a laser diode or the like and an image sensor 12, and a culture plate 13 containing cells 14 is placed between the light source unit 11 and the image sensor 12.
The control/processing unit 20 controls the operation of the microscopic observation unit 10 and processes the data acquired by it. As functional blocks, it includes an imaging control unit 21, a hologram data storage unit 22, a phase information calculation unit 23, an image creation unit 24, a nuclear region estimation unit 25, a cell region estimation unit 26, a mask processing unit 27, a cell counting unit 28, and a display processing unit 29. The nuclear region estimation unit 25 includes a nuclear region learning model storage unit 251, and the cell region estimation unit 26 includes a cell region learning model storage unit 261.
The FCN model creation unit 50 includes, as functional blocks, a learning image data input unit 51, an image alignment processing unit 52, a stained image preprocessing unit 53, a stained image binarization unit 54, a learning execution unit 55, and a model construction unit 56. A trained learning model created by the FCN model creation unit 50 is stored in the storage unit of the control/processing unit 20 and functions as the nuclear region learning model storage unit 251 and the cell region learning model storage unit 261.
In practice, the control/processing unit 20 is typically a personal computer on which predetermined software is installed, a higher-performance workstation, or a computer system including a high-performance computer connected to such a computer via a communication line. That is, the function of each block included in the control/processing unit 20 can be embodied by processing that uses various data stored in that computer or computer system and that is carried out by executing software installed on a single computer or on a computer system including a plurality of computers.
The FCN model creation unit 50 is likewise, in practice, a personal computer on which predetermined software is installed or a higher-performance workstation. This computer is usually separate from the control/processing unit 20, but it may be the same computer; that is, the control/processing unit 20 can also be given the function of the FCN model creation unit 50.
The operations and processing performed in the cell analysis device 1 of the present embodiment up to the creation of an IHM phase image, which is an observation image of cells, will now be described.
When the operator sets the culture plate 13 containing the cells 14 at a predetermined position and performs a predetermined operation on the input unit 30, the imaging control unit 21 controls the microscopic observation unit 10 so as to acquire hologram data by the following procedure.
Specifically, the light source unit 11 irradiates a predetermined area of the culture plate 13 with coherent light having an angular spread of about 10°. The coherent light transmitted through the culture plate 13 and the cells 14 (object light 16) reaches the image sensor 12 while interfering with light transmitted through a region on the culture plate 13 adjacent to the cells 14 (reference light 15). The object light 16 is light whose phase has changed in passing through the cells 14, whereas the reference light 15 does not pass through the cells 14 and therefore undergoes no phase change caused by them. Consequently, an image formed by the interference fringes between the object light 16, whose phase has been changed by the cells 14, and the phase-unchanged reference light 15, that is, a hologram, is formed on the detection surface (image plane) of the image sensor 12.
The light source unit 11 and the image sensor 12 are moved together sequentially in the X-axis and Y-axis directions by a moving mechanism (not shown). This moves the irradiation area (observation area) of the coherent light emitted from the light source unit 11 across the culture plate 13, so that hologram data (the two-dimensional light intensity distribution data of the hologram formed on the detection surface of the image sensor 12) can be acquired over a wide two-dimensional region.
The hologram data obtained by the microscopic observation unit 10 as described above are sequentially sent to the control/processing unit 20 and stored in the hologram data storage unit 22. In the control/processing unit 20, the phase information calculation unit 23 reads the hologram data from the hologram data storage unit 22 and executes predetermined arithmetic processing for phase retrieval to calculate the phase information for the entire observation area (imaging area). The image creation unit 24 then creates an IHM phase image based on the calculated phase information. Well-known algorithms, such as those disclosed in Patent Literature 6, can be used for calculating the phase information and creating the IHM phase image.
FIG. 6 shows an example of an IHM phase image. Although transparent cells are difficult to see with an ordinary optical microscope, individual cells can clearly be observed in the IHM phase image. However, it is difficult to identify the nucleus of each cell in this IHM phase image. Therefore, the cell analysis device 1 of the present embodiment uses a fully convolutional neural network (FCN), one of the machine learning methods, to obtain a nuclear region estimation image that shows, for each cell, the region where a nucleus is presumed to exist.
FIG. 2 is a conceptual diagram of the structure of the FCN. The structure and processing of FCNs are described in detail in many publications, and implementations using commercial or free software such as "MATLAB" provided by MathWorks (USA) are also possible. They are therefore described only briefly here.
As shown in FIG. 2, the FCN includes, for example, a multi-layer network 60 in which convolutional layers and pooling layers are repeated, and a convolutional layer 61 that corresponds to the fully connected layer of a conventional convolutional neural network. In this case, the multi-layer network 60 repeats convolution processing using filters (kernels) of a predetermined size and pooling processing that reduces the convolution result two-dimensionally and extracts effective values. The multi-layer network 60 may, however, consist of convolutional layers only, without pooling layers. The final convolutional layer 61 performs local convolution and deconvolution while sliding a filter of a predetermined size across the input image. This FCN can perform semantic segmentation on an input image 63 such as an IHM phase image and output a segmentation image 64 in which the cell nucleus regions are labeled pixel by pixel.
Here, the multi-layer network 60 and the convolutional layer 61 are designed to perform pixel-wise labeling of the input IHM phase image. That is, the smallest unit of a labeled region in the output segmentation image 64 is one pixel of the IHM phase image. Therefore, even if a cell nucleus is only about one pixel in size in the IHM phase image, it is detected as one region in the segmentation image 64.
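The pixel-wise labeling described above can be illustrated with a minimal sketch. The following is not the trained model of the present embodiment but a toy forward pass with random weights, showing how a network built only from 'same'-size convolutions produces exactly one label probability per input pixel:

```python
import numpy as np
from scipy.ndimage import convolve

def fcn_forward(img, kernels, out_kernel):
    """Toy fully convolutional forward pass: every layer is a 'same'-size
    convolution followed by ReLU, so the output map keeps the input resolution."""
    x = img.astype(float)
    for k in kernels:
        x = convolve(x, k, mode="nearest")  # 'same' spatial size
        x = np.maximum(x, 0.0)              # ReLU
    logits = convolve(x, out_kernel, mode="nearest")
    return 1.0 / (1.0 + np.exp(-logits))    # per-pixel probability in [0, 1]

rng = np.random.default_rng(0)
phase = rng.random((64, 64))                # stand-in for an IHM phase image
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(2)]
prob = fcn_forward(phase, kernels, rng.standard_normal((3, 3)) * 0.1)
assert prob.shape == phase.shape            # one label probability per pixel
```

Because the spatial size is preserved at every layer, even a single-pixel structure in the input keeps its own output probability, matching the one-pixel minimum region size described above.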
In order to perform the semantic segmentation described above with an FCN, the filter coefficients (weights) of the convolutional layers included in the multi-layer network 60 and of the final convolutional layer 61 must be learned in advance using a large amount of training image data, so that a learning model is constructed.
Next, the operation of the learning process in the FCN model creation unit 50 will be described with reference to the flowchart shown in FIG. 3. The learning can be performed using, for example, stochastic gradient descent, which is commonly used in machine learning (particularly deep learning).
In the FCN model creation unit 50, the learning image data input unit 51 reads in advance a large number of sets of training data (also called teacher data or supervised data; referred to here as training data), each set consisting of an IHM phase image created by the image creation unit 24 and the corresponding correct image (step S11). The IHM phase images are created from data obtained by actually imaging cells with the cell analysis device 1 as described above, but they need not come from one specific device; images obtained with other cell analysis devices of the same configuration may be used. The correct image, on the other hand, is a fluorescence image (nuclear-stained fluorescence image) obtained by staining only the nuclei of the cells from which the IHM phase image was created and photographing them with an appropriate microscope. Any staining method capable of staining cell nuclei may be used; for example, DAPI (4',6-diamidino-2-phenylindole), propidium iodide, SYTOX (registered trademark), or TO-PRO (registered trademark)-3 can be used.
Ideally, the positions, orientations, and sizes of the cells would be exactly the same in the paired IHM phase image and nuclear-stained fluorescence image. In general, however, a stained fluorescence image cannot be acquired in parallel with imaging by the digital holographic microscope, so differences in cell position, orientation, and size between the two images are unavoidable. The image alignment processing unit 52 therefore aligns the two images by applying image processing such as translation, rotation, and scaling to one of them (step S12). In general, it is advisable to process the nuclear-stained fluorescence image so that it is aligned with the IHM phase image, in which the cells are more clearly visible. This alignment may be performed manually by the operator with reference to, for example, the edge of a well or marks added to the culture plate, or it may be performed automatically by a predetermined algorithm.
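As one illustration of automatic alignment, the translation component between the two images can be estimated by FFT-based phase correlation. This is only a sketch covering translation on synthetic data (the patent does not specify an algorithm, and the rotation and scaling mentioned above would need additional steps):

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (dy, dx) translation aligning `moving` to `ref`
    by phase correlation: the normalized cross-power spectrum's inverse FFT
    peaks at the relative shift."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# synthetic check: circularly shift an image and recover the offset
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
moving = np.roll(ref, shift=(-5, 3), axis=(0, 1))
shift = estimate_shift(ref, moving)   # recovers (5, -3)
```

Applying `np.roll(moving, shift, axis=(0, 1))` with the recovered offset restores alignment with `ref`; in practice subpixel refinement and rotation/scale handling would be layered on top.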
Next, the stained image preprocessing unit 53 performs noise removal processing and background removal processing so that the cell nucleus regions become clearer in the nuclear-stained fluorescence image (steps S13 and S14). The noise removal processing aims to remove various kinds of noise, and various filters such as linear filters and median filters can be used. The background removal processing mainly aims to remove intensity unevenness in the background outside the cell nuclei; as background subtraction, methods using an averaging filter, among others, are known. For both the noise removal and the background removal, various techniques used in conventional image processing can be applied. Since the noise conditions depend on the characteristics of the microscope and of the target sample, the noise removal processing can be omitted in some cases.
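A minimal sketch of these two preprocessing steps, assuming a median filter for noise removal and a large averaging filter for background subtraction (the filter sizes and the synthetic image are illustrative, not values from the embodiment):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def preprocess_stained(img, noise_size=3, bg_size=51):
    """Denoise a nuclear-stain fluorescence image with a median filter, then
    remove slowly varying background with a large mean (averaging) filter."""
    denoised = median_filter(img, size=noise_size)
    background = uniform_filter(denoised.astype(float), size=bg_size)
    corrected = denoised - background
    return np.clip(corrected, 0, None)  # negative residuals -> 0

# synthetic example: a bright "nucleus" on an uneven background ramp
rng = np.random.default_rng(2)
h, w = 100, 100
img = np.linspace(0, 50, w)[None, :].repeat(h, axis=0)  # intensity ramp
img = img + rng.normal(0, 2, (h, w))                    # sensor noise
img[40:45, 40:45] += 200                                # a nucleus
flat = preprocess_stained(img)
```

After subtraction, the background ramp is flattened to near zero while the nucleus keeps a strong positive residual, which is what makes the subsequent binarization feasible.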
The stained image binarization unit 54 binarizes the preprocessed image to create a binary image in which the nuclear regions are clearly distinguished from the other regions (step S15). The stained image binarization unit 54 then applies a closing process, a morphological transformation combining dilation and erosion, to the binarized image (step S16).
FIG. 5(a) shows the original nuclear-stained fluorescence image, FIG. 5(b) the image after background removal has been applied to the image of FIG. 5(a), and FIG. 5(c) an example of the binary image after binarization of the image of FIG. 5(a). As with the images described below, mesenchymal stem cells (MSCs) were used here, and DAPI was used for nuclear staining.
In general, in a nuclear-stained fluorescence image, the intensity unevenness in the background outside the nuclear regions is large, so without background removal the image cannot be binarized in a way that extracts the nuclear regions. By performing background removal in advance, a binary image in which the nuclear regions are accurately extracted can be obtained, as shown in FIG. 5(c). This binary image is a semantic segmentation of each pixel of the corresponding IHM phase image into cell nucleus regions and other regions. Performing the closing process after binarization, as described above, removes small noise in the fluorescence image, chiefly bright-spot noise. Depending on the noise conditions and the performance of the noise removal in step S13, the closing process may be omitted.
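The binarization and closing steps might be sketched as follows. Otsu's method is used here as one possible way to choose the threshold (the source does not specify one), and a small synthetic image stands in for a preprocessed fluorescence image:

```python
import numpy as np
from scipy.ndimage import binary_closing

def otsu_threshold(img, nbins=256):
    """Otsu's method: the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # class-0 weight up to each bin
    mu = np.cumsum(p * centers)  # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0
    return centers[np.argmax(between)]

def binarize_and_close(img, structure_size=3):
    """Binarize with Otsu's threshold, then apply morphological closing
    (dilation followed by erosion) to fill small dark gaps."""
    binary = img > otsu_threshold(img)
    return binary_closing(binary, structure=np.ones((structure_size,) * 2))

img = np.full((60, 60), 10.0)
img[20:30, 20:30] = 200.0   # bright nucleus
img[24, 24] = 10.0          # small dark gap inside the nucleus
mask = binarize_and_close(img)
```

The one-pixel gap inside the nucleus is white in `mask` after closing, illustrating how the step removes small dark holes (and, dually, how closing suppresses small bright-spot artifacts when applied to the noise).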
When the processing of steps S12 to S15 has been performed on all the training data and a large number of training data sets, each pairing an IHM phase image with the binary image of a nuclear-stained fluorescence image, have been prepared, the learning execution unit 55 executes FCN training using these training data (step S17). That is, the filter coefficients of the convolutional layers of the FCN network are learned so that the result of semantic segmentation by the FCN is as close as possible to the correct image. The model construction unit 56 builds a model through the iterations of this training, and when the predetermined training is completed, it saves the learning model based on the training result (step S18).
The data constituting the learning model for nuclear regions created in this way are stored in the nuclear region learning model storage unit 251.
However, owing to the measurement principle of the holographic microscope, interference fringes inevitably appear to some extent in the hologram data obtained by the microscopic observation unit 10, and they may also appear in the IHM phase image. In the IHM phase image shown in FIG. 6, roughly concentric patterns derived from interference fringes appear quite clearly. When interference fringes are clear in the IHM phase image, part of a fringe may be erroneously detected as a cell nucleus when nuclear regions are estimated using the nuclear region learning model.
FIG. 7 shows an example of a nuclear region estimation image obtained by estimation using the nuclear region learning model with the IHM phase image shown in FIG. 6 as the input image. FIG. 8 shows the correct image for the nuclear region estimation image of FIG. 7, that is, the accurate nuclear regions. FIG. 9 shows an image in which the points resulting from the nuclear region estimation (nuclear position candidate points) are superimposed on the IHM phase image of FIG. 6, and FIG. 10 shows an image in which the correct nuclear points are superimposed on the same IHM phase image. The points indicating nuclear positions in FIGS. 9 and 10 are rectangular; they were obtained by applying the maximal value region extraction processing and binarization processing described later to the grayscale nuclear regions.
Comparing FIGS. 9 and 10 shows that in FIG. 9 a large number of false nuclear regions are detected on the interference fringes, where no cell nuclei should exist. This means that simply estimating nuclear regions with the nuclear region learning model produces many false detections.
Therefore, in the cell analysis device of the present embodiment, in order to exclude such falsely detected nuclear regions, a learning model that recognizes cell regions is created separately from the learning model that recognizes nuclear regions.
Here, in order to obtain a correct image showing the cell regions, the cytoskeleton, which extends in fibrous or mesh form throughout the interior of the cell, is used. FIGS. 11(a) and 11(b) are a fluorescence-stained image of actin filaments and a bright-field image taken with an ordinary microscope of the same observation area. Actin filaments are one type of cytoskeletal element and are present as fibers throughout the interior of the cell. As FIG. 11(b) shows, it is difficult to discern the cell regions in the bright-field image; but as FIG. 11(a) shows, because actin filaments are present throughout almost the whole cell, the area over which they are distributed can be regarded as the cell region. Therefore, in the cell analysis device of the present embodiment, a cell region learning model for extracting cell regions is created using, as the correct image, a fluorescence image in which the cytoskeleton (here, actin filaments) is stained.
The cell region learning model can be created by the same procedure as the nuclear region learning model, that is, the procedure shown in FIG. 3.
Although the cytoskeleton extends as fibers throughout the interior of the cell, it is not necessarily distributed evenly; even inside a cell, binarization produces partially black pixels where the cytoskeleton is absent or sparse. By applying the closing process after binarization, however, pixels surrounded by white are converted to white even if they were black. This yields an image showing not only the parts where the cytoskeleton is actually present but the whole area over which it is distributed, that is, the entire cell region. In other words, the image after the closing process is an image in which each pixel of the corresponding IHM phase image is classified into cell regions and other regions.
When a large number of training data sets, each pairing an IHM phase image with a cytoskeleton-stained fluorescence image after the closing process, have been prepared, the learning execution unit 55 executes FCN training using these training data. The model construction unit 56 builds a model through the iterations of this training, and when the predetermined training is completed, it saves the cell region learning model based on the training result. The data constituting the cell region learning model created in this way are stored in the cell region learning model storage unit 261 of the cell analysis device 1.
Next, the processing performed by the cell analysis device 1 to create a nuclear region estimation image for the cells to be analyzed, and the cell counting processing based on it, will be described with reference to the flowchart shown in FIG. 4.
The operator sets the culture plate 13 containing the cells 14 to be analyzed at a predetermined position in the microscopic observation unit 10 and performs a predetermined operation on the input unit 30. Under the control of the imaging control unit 21, the microscopic observation unit 10 then images the sample (the cells 14 in the culture plate 13) (step S21). The phase information calculation unit 23 and the image creation unit 24 perform the phase calculation based on the hologram data obtained by this imaging and form an IHM phase image (step S22).
The nuclear region estimation unit 25 reads the IHM phase image obtained in step S22 as the input image, performs FCN processing using the nuclear region learning model stored in the nuclear region learning model storage unit 251, and obtains the segmentation image corresponding to the input image as the output image (step S23). This segmentation image is a nuclear region estimation image that distinguishes cell nucleus regions from all other regions over the same observation area as the input IHM phase image.
As is well known, FCN processing using a learning model constructed by machine learning as described above yields, for each pixel, a numerical value indicating the probability that the semantic segmentation is correct; here, the probability that the pixel belongs to a nuclear region is obtained for each pixel. The nuclear region estimation unit 25 therefore outputs a grayscale image whose gradation from white to black varies with the probability value of each pixel. In the nuclear region estimation image, portions estimated with high confidence to be nuclear regions are displayed in white or near-white tones, while portions estimated with low confidence are displayed in tones relatively closer to black.
As already explained, the nuclear region estimation image of FIG. 7 is obtained for the IHM phase image of FIG. 6. As described above, when interference fringes appear fairly clearly in the input IHM phase image or when images of foreign matter appear, false nuclear regions appear at sites where no cell nucleus actually exists.
Next, the cell region estimation unit 26 reads the IHM phase image obtained in step S22 as the input image, performs FCN processing using the cell region learning model stored in the cell region learning model storage unit 261, and obtains the segmentation image corresponding to the input image as the output image (step S24). This segmentation image is a cell region estimation image that distinguishes cell regions from all other regions over the same observation area as the input IHM phase image. FIG. 12 shows an example of the cell region estimation image obtained with the IHM phase image of FIG. 6 as the input image.
As with the nuclear region estimation image, the FCN processing again yields, for each pixel, a numerical value indicating the probability that the segmentation is correct. The cell region estimation unit 26 therefore compares the probability value of each pixel with a predetermined threshold and outputs a binary segmentation image in which pixels with probability values at or above the threshold are white and all others are black.
 Next, the mask processing unit 27 applies mask processing to the nuclear region estimation image obtained in step S23, using the binarized cell region estimation image obtained in step S24 (step S25). That is, only the nuclear regions lying within the white portion of the binary image shown in FIG. 12, i.e. within the cell region, are extracted, and all other nuclear regions are discarded. The nuclear regions discarded here are ones falsely detected under the influence of interference fringes, images of foreign substances, and the like. FIG. 13 shows the nuclear region estimation image after mask processing; as a comparison with FIG. 7 shows, the number of nuclear regions is greatly reduced.
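Under the assumption that this mask processing amounts to keeping the grayscale nuclear estimate only where the binary cell-region mask is white, it can be sketched in a few lines (the array and function names are illustrative):

```python
import numpy as np

def mask_nuclear_image(nuclear_est, cell_mask):
    """Zero out every pixel of the grayscale nuclear region
    estimation image that lies outside the white (non-zero) part of
    the binary cell region mask, so that nuclear regions falsely
    detected outside any cell are discarded."""
    return np.where(cell_mask > 0, nuclear_est, 0)

nuclear = np.array([[50, 80],
                    [ 0, 90]], dtype=np.uint8)
cells = np.array([[255,   0],
                  [255, 255]], dtype=np.uint8)
masked = mask_nuclear_image(nuclear, cells)
# the falsely detected nucleus at (0, 1) is removed:
# masked == [[50, 0], [0, 90]]
```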
 The nuclear region estimation image after this mask processing is a grayscale image, and the peak signal value differs from nucleus to nucleus. Consequently, a decision using a single threshold cannot necessarily detect every nucleus. In addition, several adjacent nuclei may appear as a single region, so it is preferable to perform processing that separates adjacent nuclei from one another. The mask processing unit 27 therefore applies local-maximum region extraction processing to the masked nuclear region estimation image.
 Specifically, in the grayscale nuclear region estimation image, the regions other than the black (background) region are spatially dilated. Next, the brightness of the dilated image is lowered overall by subtracting, from the signal value (luminance value) of each pixel, an offset value determined in advance in view of the noise tolerance. Then, a pixel-by-pixel subtraction is performed between the image before dilation and the brightness-lowered image. With this subtraction, the luminance value becomes non-zero in a narrow range around each luminance peak of the original image, regardless of the peak's actual value, and zero everywhere else. In other words, this processing extracts the local-maximum regions of the original image, i.e. the regions whose luminance is higher than their surroundings.
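Assuming the dilation is an ordinary 3x3 grayscale morphological dilation, the three operations described above (dilate, lower by an offset, subtract from the original) can be sketched as follows; the offset value and neighbourhood size are illustrative:

```python
import numpy as np

def extract_local_maxima(img, offset=10):
    """Dilate the grayscale image over a 3x3 neighbourhood (each
    pixel takes the neighbourhood maximum), lower the dilated image
    by a noise-tolerance offset, subtract it from the original, and
    keep the non-negative part.  Only pixels within `offset` of
    their neighbourhood peak stay non-zero; background (zero)
    pixels are forced to zero."""
    h, w = img.shape
    p = np.pad(img, 1).astype(int)
    # 3x3 grayscale dilation: per-pixel maximum over the nine shifts
    dilated = np.max([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)], axis=0)
    peaks = np.clip(img.astype(int) - (dilated - offset), 0, None)
    return np.where(img > 0, peaks, 0).astype(img.dtype)

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 100   # bright peak of one nucleus
img[2, 3] = 60    # dimmer pixel of the same nucleus
out = extract_local_maxima(img)
# only the peak survives: out[2, 2] == 10 and out[2, 3] == 0
```

The result is non-zero near each peak regardless of that peak's absolute height, which is exactly why a single global threshold is unnecessary here.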
 Although this local-maximum extraction algorithm is a common one in image processing, it is not the only option; any appropriate known method can be used for the local-maximum region extraction. For example, the nuclear region estimation image may first be binarized, and a nuclear position candidate point may then be obtained by computing the centroid of each cell nucleus.
 After creating an image in which the local-maximum regions are extracted from the masked nuclear region estimation image, the mask processing unit 27 binarizes that image to obtain a nuclear position candidate point image indicating the candidate points of the nuclear regions (step S26). FIG. 14 superimposes the nuclear position candidate points on the IHM image shown in FIG. 6. Comparing FIG. 14 with FIG. 10 shows that the candidate points almost coincide with the ground-truth nuclear positions. In other words, mask processing using the cell region estimation image accurately eliminates false nuclear regions and extracts the regions that are truly cell nuclei.
 The cell counting unit 28 counts the nuclear position candidate points on the nuclear position candidate point image obtained in step S26 (step S27). Except in special cases, one cell has one cell nucleus, so the number of cell nuclei can be taken to represent the number of cells, and the display processing unit 29 displays the count on the display unit 40 as the cell count (step S28). Of course, one or more of the IHM phase image, the nuclear region estimation image, the nuclear position candidate point image, the cell region estimation image, and the like may be displayed as well.
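The counting in this step reduces to counting the connected white regions of the binary candidate-point image. A dependency-light sketch, assuming 4-neighbour connectivity (a detail the embodiment does not specify):

```python
import numpy as np

def count_cells(candidate_img):
    """Count connected white regions in the binary nuclear position
    candidate point image; under the one-nucleus-per-cell rule this
    count is reported as the number of cells."""
    h, w = candidate_img.shape
    visited = np.zeros((h, w), dtype=bool)
    n = 0
    for r in range(h):
        for c in range(w):
            if candidate_img[r, c] and not visited[r, c]:
                n += 1                      # new candidate point found
                visited[r, c] = True
                stack = [(r, c)]
                while stack:                # absorb the whole region
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                           and candidate_img[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return n

img = np.zeros((4, 6), dtype=np.uint8)
img[1, 1] = 255          # first candidate point
img[2, 4] = 255          # second candidate point
# count_cells(img) returns 2
```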
 As described above, the cell analysis device of the present embodiment can accurately calculate the number of cells present in the observation range and present it to the user.
 Further, if an image in which the nuclear position candidate points are superimposed on the IHM phase image, as in FIG. 14, is displayed on the display unit 40, the user can easily grasp the position of each cell nucleus within its cell. Likewise, displaying the nuclear region estimation image of FIG. 13 on the display unit 40 lets the user easily grasp the shape and morphology of the cell nuclei.
 Since the counting of living cells and the observation of cell nuclei described above are performed non-invasively, the cells used for the analysis or observation can subsequently be cultured further or used for another purpose.
 In the cell analysis device of the above embodiment, the grayscale nuclear region estimation image is masked first, and the local-maximum extraction and binarization are then applied to obtain the nuclear position candidate points for each nuclear region; alternatively, the candidate point image may be obtained from the grayscale nuclear region estimation image first, with the mask processing using the cell region estimation image applied afterwards. As another masking variant, the grayscale nuclear region estimation image may be binarized and each nuclear region then checked against the cell region estimation image: if a nuclear region does not overlap, in part or in whole, with any cell region in the cell region estimation image, it is regarded as a false detection and discarded. Various masking methods can thus be adopted.
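The overlap-based masking variant can be sketched with binary reconstruction: start from the pixels where the binarized nuclear image and the cell mask overlap, and grow within the nuclear image until stable, so that any nuclear region with no overlap at all is erased. The 4-neighbour growth rule and all names below are assumptions of this sketch:

```python
import numpy as np

def drop_nonoverlapping_nuclei(nuclear_bin, cell_mask):
    """Keep only the nuclear regions of the binarized nuclear image
    that overlap (in at least one pixel) the binary cell mask.
    Implemented as binary reconstruction: seed with the overlapping
    pixels, then repeatedly dilate the seed within the nuclear
    foreground until nothing more is added."""
    nuc = nuclear_bin > 0
    keep = nuc & (cell_mask > 0)          # seed: overlapping pixels
    while True:
        p = np.pad(keep, 1)
        grown = (keep
                 | p[:-2, 1:-1] | p[2:, 1:-1]      # up/down neighbours
                 | p[1:-1, :-2] | p[1:-1, 2:])     # left/right neighbours
        grown &= nuc                      # never grow outside nuclei
        if np.array_equal(grown, keep):
            return np.where(keep, nuclear_bin, 0)
        keep = grown

nuclear = np.zeros((4, 6), dtype=np.uint8)
nuclear[1, 1:3] = 200     # nuclear region A (two pixels)
nuclear[2, 4] = 180       # nuclear region B, a false detection
cells = np.zeros((4, 6), dtype=np.uint8)
cells[1, 1] = 255         # the cell region touches only region A
out = drop_nonoverlapping_nuclei(nuclear, cells)
# region A is kept whole, region B is erased:
# out[1, 1] == 200, out[1, 2] == 200, out[2, 4] == 0
```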
 In the above embodiment, an FCN was used as the machine learning method for semantic segmentation of cell nuclei, but an ordinary convolutional neural network (CNN) could clearly be used instead. Nor is the method limited to neural networks: the present invention can usefully be applied with any machine learning method capable of semantic segmentation of images, such as a support vector machine (SVM), random forest, or AdaBoost. As described above, an FCN can output the estimated segmentation probability when it outputs the segmentation image for an input image, but other methods such as a CNN or SVM can likewise scan patch images (small sub-regions into which the whole image is divided) and output, for each patch, the probability that its center point is a cell nucleus.
 Further, while the cell analysis device of the above embodiment used an in-line holographic microscope as the microscopic observation unit 10, it can of course be replaced with a holographic microscope of another type, such as an off-axis type or a phase-shift type, as long as a hologram can be obtained.
 The above embodiment and the various modifications are merely examples of the present invention, and it is evident that further changes, corrections, and additions made as appropriate within the spirit of the present invention fall within the scope of the claims of this application.
 [Various Aspects]
 It will be apparent to those skilled in the art that the exemplary embodiments described above are specific examples of the following aspects.
 (Clause 1) One aspect of the cell image analysis method according to the present invention comprises:
 a first learning model creation step of creating a nuclear region learning model by machine learning, using learning data in which a phase image of cells created based on hologram data acquired with a holographic microscope is the input image and a corresponding pseudo nuclear region image, based on a stained image obtained by staining cell nuclei, is the correct image;
 a second learning model creation step of creating a cell region learning model by machine learning, using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image, based on a stained image obtained by staining the cytoskeleton, is the correct image;
 a nuclear region estimation step of using the nuclear region learning model to take a phase image of the cells under analysis as an input image and acquire, as an output image, a nuclear region estimation image indicating the regions of cell nuclei;
 a cell region estimation step of using the cell region learning model to take the phase image of the cells under analysis as an input image and acquire, as an output image, a cell region estimation image indicating the cell regions; and
 a nuclear region extraction step of using the nuclear region estimation image and the cell region estimation image to extract the cell nuclei present within the range estimated to be cell regions.
 (Clause 7) One aspect of the cell analysis device according to the present invention comprises:
 a holographic microscope;
 an image creation unit that creates a phase image of cells based on hologram data acquired by observing the cells with the holographic microscope;
 a first learning model storage unit storing a nuclear region learning model created by machine learning using learning data in which a phase image of cells created based on hologram data is the input image and a corresponding pseudo nuclear region image, based on a stained image obtained by staining cell nuclei, is the correct image;
 a second learning model storage unit storing a cell region learning model created by machine learning using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image, based on a stained image obtained by staining the cytoskeleton, is the correct image;
 a nuclear region estimation unit that uses the nuclear region learning model stored in the first learning model storage unit to take, as an input image, the phase image created by the image creation unit for the cells under analysis, and acquires, as an output image, a nuclear region estimation image indicating the regions of cell nuclei;
 a cell region estimation unit that uses the cell region learning model stored in the second learning model storage unit to take the phase image of the cells under analysis as an input image and acquires, as an output image, a cell region estimation image indicating the cell regions; and
 a nuclear region extraction unit that uses the nuclear region estimation image and the cell region estimation image to extract the cell nuclei present within the range estimated to be cell regions.
 According to the cell image analysis method of Clause 1 and the cell analysis device of Clause 7, cell nuclei can be accurately extracted from a phase image of cells created based on hologram data, without invasive treatment such as staining of the cells under analysis. In particular, even when interference fringes or images of foreign substances appear in the phase image, their influence can be eliminated or reduced and the cell nuclei observed accurately. Then, for example, information on the number of cells can be obtained by counting the cell nuclei, and the true shape and morphology of the cell nuclei, or the state inside them, can be observed. Moreover, since the imaging (measurement) of the cells under analysis is non-invasive, the imaged cells can subsequently be cultured further or subjected to analysis or observation for another purpose.
 (Clause 2) The cell image analysis method of Clause 1 may further comprise a cell counting step of counting the number of cell nuclei extracted in the nuclear region extraction step.
 (Clause 8) The cell analysis device of Clause 7 may further comprise a cell counting unit that counts the number of cell nuclei extracted by the nuclear region extraction unit.
 Except in special cases, one cell has one cell nucleus, so the number of cell nuclei can be taken to represent the number of cells. According to the cell image analysis method of Clause 2 and the cell analysis device of Clause 8, the accurately extracted cell nuclei can be counted, so the number of cells present in the observation range can be determined precisely.
 (Clause 3) The cell image analysis method of Clause 1 or 2 may further comprise a display step of displaying a nuclear region estimation image in which the cell nuclei extracted in the nuclear region extraction step are shown.
 (Clause 9) The cell analysis device of Clause 7 or 8 may further comprise a display processing unit that displays a nuclear region estimation image in which the cell nuclei extracted by the nuclear region extraction unit are shown.
 According to the cell image analysis method of Clause 3 and the cell analysis device of Clause 9, the user can observe the shape and morphology of the accurately extracted cell nuclei from the displayed image.
 (Clause 4) In the cell image analysis method of any one of Clauses 1 to 3, the cell region estimation image may be a binary image that separates the cell regions from the other regions.
 (Clause 10) In the cell analysis device of any one of Clauses 7 to 9, the cell region estimation image may be a binary image that separates the cell regions from the other regions.
 According to the cell image analysis method of Clause 4 and the cell analysis device of Clause 10, the cell regions and the other regions are clearly delineated, so the regions likely to be cell nuclei can be extracted accurately and with simple processing.
 (Clause 5) In the cell image analysis method of any one of Clauses 1 to 4, the machine learning may use a convolutional neural network.
 (Clause 6) In the cell image analysis method of Clause 5, the convolutional neural network may be a fully convolutional neural network.
 (Clause 11) In the cell analysis device of any one of Clauses 7 to 10, the machine learning may use a convolutional neural network.
 (Clause 12) In the cell analysis device of Clause 11, the convolutional neural network may be a fully convolutional neural network.
 According to the cell image analysis methods of Clauses 5 and 6 and the cell analysis devices of Clauses 11 and 12, the regions likely to be cell nuclei and the regions likely to be cells can each be estimated accurately.
 (Clause 13) The cell analysis device of any one of Clauses 7 to 12 may further comprise a learning model creation unit that creates a learning model by machine learning using learning data in which a phase image of cells is the input image and a pseudo region image based on a corresponding stained image is the correct image, the nuclear region learning model and the cell region learning model being created using this learning model creation unit.
 It is not essential that the cell analysis device of Clause 7 itself have the function of creating the learning models used to obtain the region estimation images from phase images of cells, but the cell analysis device of Clause 13 does have that function. Accordingly, with the device of Clause 13 the learning models can easily be improved, for example by adding to the learning data the nuclear region estimation image obtained by the nuclear region estimation unit for the phase image of the cells under analysis and rebuilding the learning model.
1…Cell analysis device
10…Microscopic observation unit
 11…Light source unit
 12…Image sensor
 13…Culture plate
 14…Cell
 15…Reference light
 16…Object light
20…Control/processing unit
 21…Imaging control unit
 22…Hologram data storage unit
 23…Phase information calculation unit
 24…Image creation unit
 25…Nuclear region estimation unit
  251…Nuclear region learning model storage unit
 26…Cell region estimation unit
  261…Cell region learning model storage unit
 27…Mask processing unit
 28…Cell counting unit
 29…Display processing unit
30…Input unit
40…Display unit
50…FCN model creation unit
 51…Learning image data input unit
 52…Image alignment processing unit
 53…Stained image preprocessing unit
 54…Stained image binarization unit
 55…Learning execution unit
 56…Model construction unit

Claims (13)

  1.  A cell image analysis method comprising:
     a first learning model creation step of creating a nuclear region learning model by machine learning, using learning data in which a phase image of cells created based on hologram data acquired with a holographic microscope is an input image and a corresponding pseudo nuclear region image, based on a stained image obtained by staining cell nuclei, is a correct image;
     a second learning model creation step of creating a cell region learning model by machine learning, using learning data in which the phase image of the cells is an input image and a corresponding pseudo cell region image, based on a stained image obtained by staining the cytoskeleton, is a correct image;
     a nuclear region estimation step of using the nuclear region learning model to take a phase image of cells under analysis as an input image and acquire, as an output image, a nuclear region estimation image indicating regions of cell nuclei;
     a cell region estimation step of using the cell region learning model to take the phase image of the cells under analysis as an input image and acquire, as an output image, a cell region estimation image indicating cell regions; and
     a nuclear region extraction step of using the nuclear region estimation image and the cell region estimation image to extract cell nuclei present within a range estimated to be a cell region.
  2.  The cell image analysis method according to claim 1, further comprising a cell counting step of counting the number of cell nuclei extracted in the nuclear region extraction step.
  3.  The cell image analysis method according to claim 1, further comprising a display step of displaying a nuclear region estimation image in which the cell nuclei extracted in the nuclear region extraction step are shown.
  4.  The cell image analysis method according to claim 1, wherein the cell region estimation image is a binary image that separates the cell regions from the other regions.
  5.  The cell image analysis method according to claim 1, wherein the machine learning uses a convolutional neural network.
  6.  The cell image analysis method according to claim 5, wherein the convolutional neural network is a fully convolutional neural network.
  7.  A cell analysis device comprising:
     a holographic microscope;
     an image creation unit that creates a phase image of cells based on hologram data acquired by observing the cells with the holographic microscope;
     a first learning model storage unit that stores a nuclear region learning model created by machine learning using learning data in which a phase image of cells created based on hologram data is an input image and a corresponding pseudo nuclear region image, based on a stained image obtained by staining cell nuclei, is a correct image;
     a second learning model storage unit that stores a cell region learning model created by machine learning using learning data in which the phase image of the cells is an input image and a corresponding pseudo cell region image, based on a stained image obtained by staining the cytoskeleton, is a correct image;
     a nuclear region estimation unit that uses the nuclear region learning model stored in the first learning model storage unit to take, as an input image, the phase image created by the image creation unit for cells under analysis, and acquires, as an output image, a nuclear region estimation image indicating regions of cell nuclei;
     a cell region estimation unit that uses the cell region learning model stored in the second learning model storage unit to take the phase image of the cells under analysis as an input image and acquires, as an output image, a cell region estimation image indicating cell regions; and
     a nuclear region extraction unit that uses the nuclear region estimation image and the cell region estimation image to extract cell nuclei present within a range estimated to be a cell region.
  8.  The cell analysis device according to claim 7, further comprising a cell counting unit that counts the number of cell nuclei extracted by the nuclear region extraction unit.
  9.  The cell analysis device according to claim 7, further comprising a display processing unit that displays a nuclear region estimation image in which the cell nuclei extracted by the nuclear region extraction unit are shown.
  10.  The cell analysis device according to claim 7, wherein the cell region estimation image is a binary image that separates the cell regions from the other regions.
  11.  The cell analysis device according to claim 7, wherein the machine learning uses a convolutional neural network.
  12.  The cell analysis device according to claim 11, wherein the convolutional neural network is a fully convolutional neural network.
  13.  The cell analysis device according to claim 7, further comprising a learning model creation unit that creates a learning model by machine learning using learning data in which a phase image of cells is an input image and a pseudo region image based on a corresponding stained image is a correct image, wherein the nuclear region learning model and the cell region learning model are created using the learning model creation unit.
PCT/JP2019/040275 2019-10-11 2019-10-11 Cell image analysis method and cell analysis device WO2021070371A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021551081A JP7248139B2 (en) 2019-10-11 2019-10-11 Cell image analysis method and cell analysis device
PCT/JP2019/040275 WO2021070371A1 (en) 2019-10-11 2019-10-11 Cell image analysis method and cell analysis device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/040275 WO2021070371A1 (en) 2019-10-11 2019-10-11 Cell image analysis method and cell analysis device

Publications (1)

Publication Number Publication Date
WO2021070371A1 (en) 2021-04-15

Family

ID=75438159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/040275 WO2021070371A1 (en) 2019-10-11 2019-10-11 Cell image analysis method and cell analysis device

Country Status (2)

Country Link
JP (1) JP7248139B2 (en)
WO (1) WO2021070371A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016093090A1 (en) * 2014-12-09 2016-06-16 コニカミノルタ株式会社 Image processing apparatus and image processing program
WO2019171453A1 (en) * 2018-03-06 2019-09-12 株式会社島津製作所 Cell image analysis method, cell image analysis device, and learning model creation method
WO2019180833A1 (en) * 2018-03-20 2019-09-26 株式会社島津製作所 Cell observation device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BIANCO, VITTORIO ET AL.: "Strategies for reducing speckle noise in digital holography", LIGHT: SCIENCE & APPLICATIONS, vol. 7, no. 48, 2018, pages 1 - 16, XP055816501 *
CHRISTIANSEN, ERIC M. ET AL.: "In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images", CELL, vol. 173, 19 April 2018 (2018-04-19), pages 792 - 803, XP002788720 *
LEE, JIMIN ET AL.: "Deep-Learning-Based Label-Free Segmentation of Cell Nuclei in Time-Lapse Refractive Index Tomograms", IEEE ACCESS, vol. 7, 21 June 2019 (2019-06-21), pages 83449 - 83460, XP011733981, DOI: 10.1109/ACCESS.2019.2924255 *
OUNKOMOL, CHAWIN ET AL.: "Label-free prediction of three-dimensional fluorescence images from transmitted light microscopy", NAT. METHODS, vol. 15, no. 11, November 2018 (2018-11-01), pages 917 - 920, XP036624647, DOI: 10.1038/s41592-018-0111-2 *

Also Published As

Publication number Publication date
JPWO2021070371A1 (en) 2021-04-15
JP7248139B2 (en) 2023-03-29

Similar Documents

Publication Publication Date Title
JP7344568B2 (en) Method and system for digitally staining label-free fluorescent images using deep learning
Weng et al. Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer
EP3486836B1 (en) Image analysis method, apparatus, program, and learned deep learning algorithm
US11978211B2 (en) Cellular image analysis method, cellular image analysis device, and learning model creation method
US20200388033A1 (en) System and method for automatic labeling of pathology images
JP2023508284A (en) Method and system for digital staining of microscopic images using deep learning
JP2018119969A (en) Analyzing digital holographic microscopy data for hematology applications
US20210110536A1 (en) Cell image analysis method and cell image analysis device
WO2014087689A1 (en) Image processing device, image processing system, and program
Goceri et al. Quantitative validation of anti‐PTBP1 antibody for diagnostic neuropathology use: Image analysis approach
JP2022506135A (en) Segmentation of 3D intercellular structures in microscopic images using iterative deep learning flows that incorporate human contributions
US20210133981A1 (en) Biology driven approach to image segmentation using supervised deep learning-based segmentation
Delpiano et al. Automated detection of fluorescent cells in in‐resin fluorescence sections for integrated light and electron microscopy
Shaw et al. Optical mesoscopy, machine learning, and computational microscopy enable high information content diagnostic imaging of blood films
dos Santos et al. Automated nuclei segmentation on dysplastic oral tissues using cnn
JP2021078356A (en) Cell analysis apparatus
Hodneland et al. Automated detection of tunneling nanotubes in 3D images
Niederlein et al. Image analysis in high content screening
WO2021070371A1 (en) Cell image analysis method and cell analysis device
Serin et al. A novel overlapped nuclei splitting algorithm for histopathological images
WO2021070372A1 (en) Cell image analysis method and cell analysis device
Kotyk et al. Detection of dead stained microscopic cells based on color intensity and contrast
Ramarolahy et al. Classification and generation of microscopy images with plasmodium falciparum via artificial neural networks using low cost settings
Mannam et al. Improving fluorescence lifetime imaging microscopy phasor accuracy using convolutional neural networks
Gil et al. Automatic analysis system for abnormal red blood cells in peripheral blood smears

Legal Events

Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19948861; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021551081; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 19948861; Country of ref document: EP; Kind code of ref document: A1)