WO2021070371A1 - Cell image analysis method and cell image analysis device - Google Patents

Cell image analysis method and cell image analysis device

Info

Publication number
WO2021070371A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
cell
region
nuclear
learning model
Prior art date
Application number
PCT/JP2019/040275
Other languages
English (en)
Japanese (ja)
Inventor
秋絵 外口
Original Assignee
株式会社島津製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社島津製作所 filed Critical 株式会社島津製作所
Priority to PCT/JP2019/040275 priority Critical patent/WO2021070371A1/fr
Priority to JP2021551081A priority patent/JP7248139B2/ja
Publication of WO2021070371A1 publication Critical patent/WO2021070371A1/fr

Links

Images

Classifications

    • CCHEMISTRY; METALLURGY
    • C12BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12MAPPARATUS FOR ENZYMOLOGY OR MICROBIOLOGY; APPARATUS FOR CULTURING MICROORGANISMS FOR PRODUCING BIOMASS, FOR GROWING CELLS OR FOR OBTAINING FERMENTATION OR METABOLIC PRODUCTS, i.e. BIOREACTORS OR FERMENTERS
    • C12M1/00Apparatus for enzymology or microbiology
    • C12M1/34Measuring or testing with condition measuring or sensing means, e.g. colony counters

Definitions

  • The present invention relates to a cell image analysis method for analyzing an observation image of cells, and to a cell analysis device using that method.
  • The cell nucleus (hereinafter sometimes simply called the "nucleus", as is conventional) contains the DNA carrying the cell's genetic information and is one of the most important organelles, governing cell division and the control of gene expression. Observation of nuclei in microscopic images of cells is therefore very important for a wide range of cell-related research and product development.
  • Patent Document 1 discloses a technique for staining a nucleus with a simple operation and then observing its morphology. Observation of cell nuclei is also important in the field of pathological image analysis.
  • Non-Patent Document 1 discloses a technique in which cell nuclei are extracted from images (pathological images) of cells whose nuclei have been stained, and the amount of protein per cell is quantified.
  • Patent Documents 2 and 3 disclose techniques for detecting the region of a cell nucleus by applying a machine-learning model to a microscope image of cells or cell nuclei that have been stained.
  • Patent Document 4 discloses a technique for counting the number of cell nuclei using a cell image (pathological image) in which the nuclei have been stained.
  • However, staining cells or cell nuclei is an invasive method: the cells used for observation cannot be cultured further, nor can they be used as they are for another purpose, for example administered to a patient as a regenerative-medicine product.
  • For cells such as pluripotent stem cells (iPS cells, ES cells) intended for such uses, the invasive observation methods described above cannot be applied. There is therefore a strong demand for the establishment of a non-invasive, non-destructive method for observing cell nuclei.
  • Patent Document 5 discloses a technique for determining, without staining and thus non-invasively, whether a cell is alive or dead, using a microscope based on Raman scattered light.
  • Although Raman imaging can obtain spectral information for each pixel in the image, it requires complicated preparatory work, such as identifying the spectral features that contribute to the live/dead determination.
  • Although cell nuclei can be observed without staining using a Raman-scattered-light microscope, this not only requires complicated preliminary studies such as spectrum identification, but is also limited to a small observation area if images are to be taken in a short time. It is therefore not suited to observing the nuclei of living cells across an entire well in which cells are cultured, and the total number of cell nuclei in the well cannot be measured.
  • The present invention was made to solve the above problems, and its main object is to provide a cell image analysis method and a cell analysis apparatus capable of non-invasively observing and analyzing the nuclei of cells present over a wide area of a container such as a well.
  • One aspect of the cell image analysis method according to the present invention, made to solve the above problems, includes: a first learning model creation step of creating a nuclear region learning model by machine learning, using learning data in which a phase image of cells created from hologram data acquired by a holographic microscope is the input image and a corresponding pseudo nuclear region image, based on a stained image in which the cell nuclei are stained, is the correct image; a second learning model creation step of creating a cell region learning model by machine learning, using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image, based on a stained image in which the cytoskeleton is stained, is the correct image; a nuclear region estimation step of taking a phase image of the cells to be analyzed as the input image and acquiring, using the nuclear region learning model, a nuclear region estimation image showing the regions of the cell nuclei as the output image; a cell region estimation step of taking the phase image of the cells to be analyzed as the input image and acquiring, using the cell region learning model, a cell region estimation image showing the cell regions as the output image; and a nuclear region extraction step of extracting, using the nuclear region estimation image and the cell region estimation image, the cell nuclei present within the range estimated to be cell regions.
  • One aspect of the cell analyzer according to the present invention, made to solve the above problems, includes: a holographic microscope; an image creation unit that creates a phase image of cells based on hologram data obtained by observing the cells with the holographic microscope; a first learning model storage unit that stores a nuclear region learning model created by machine learning using learning data in which the phase image of cells created from the hologram data is the input image and a corresponding pseudo nuclear region image, based on a stained image in which the cell nuclei are stained, is the correct image; a second learning model storage unit that stores a cell region learning model created by machine learning using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image, based on a stained image in which the cytoskeleton is stained, is the correct image; a nuclear region estimation unit that takes the phase image created by the image creation unit for the cells to be analyzed as the input image and acquires, using the nuclear region learning model stored in the first learning model storage unit, a nuclear region estimation image showing the regions of the cell nuclei as the output image; a cell region estimation unit that takes the phase image of the cells to be analyzed as the input image and acquires, using the cell region learning model stored in the second learning model storage unit, a cell region estimation image showing the cell regions as the output image; and a nuclear region extraction unit that extracts, using the nuclear region estimation image and the cell region estimation image, the cell nuclei present within the range estimated to be cell regions.
  • The holographic microscope is typically a digital holographic microscope, and the phase image is, for example, an image reconstructed from phase information obtained by computational processing of the hologram data acquired by the digital holographic microscope.
  • In the present invention, a phase image of suitable cells and a nuclear-stained image, obtained by staining the nuclei of the same cells and observing them with a fluorescence microscope or the like, are used as learning data, and a nuclear region learning model for identifying the regions of cell nuclei is created by machine learning such as deep learning.
  • However, interference fringes may appear in a phase image reconstructed from hologram data, and when nuclear regions are detected using the nuclear region learning model, parts that are not actually cell nuclei may, under the influence of those fringes, be falsely detected as nuclei. Likewise, a foreign substance such as dust on the medium in the culture vessel may be falsely detected as a cell nucleus.
  • Therefore, in the present invention, in addition to the nuclear region learning model, a cell region learning model for identifying the region over which the cytoskeleton is distributed is created by machine learning, using stained images in which a cytoskeletal component such as actin filaments has been stained.
  • The cytoskeleton is a structural element present in fibrous or reticular form throughout the cytoplasm, and it determines the shape and morphology of the cell. The distribution region of the cytoskeleton can therefore be regarded as roughly indicating the cell shape. In other words, the cell region estimation image obtained with the cell region learning model in the cell region estimation step shows the division between the cell regions and the extracellular (background) region.
  • Since cell nuclei exist only within cell regions, the nuclear region estimation image and the cell region estimation image for the same imaging range can be compared, and any nucleus found in the background region outside the cells can be judged to be a false detection. By making this judgment, the nuclear region extraction step can eliminate falsely detected nuclei and extract cell nuclei more accurately.
  • In this way, cell nuclei can be accurately extracted from the phase image without subjecting the cells under analysis to invasive treatment such as staining.
  • Even when interference fringes or images of foreign matter appear in the phase image, their influence can be eliminated or reduced, and the cell nuclei can be observed accurately.
  • In addition, the true shape and morphology of the cell nucleus, or the state of its interior, can be observed.
  • Since the cells to be analyzed are imaged (measured) non-invasively, they can continue to be cultured after imaging or be used for analysis or observation for another purpose.
  • FIG. 1 is a schematic block diagram of one embodiment of a cell analysis apparatus using the cell image analysis method according to the present invention.
  • FIG. 2 is a conceptual diagram of the structure of the fully convolutional network (FCN) used in the cell analysis apparatus of this embodiment.
  • FIG. 3 is a flowchart showing the flow of processing when creating a learning model in the cell analysis apparatus of this embodiment.
  • FIG. 4 is a flowchart showing the flow of processing, in the cell analysis apparatus of this embodiment, from imaging of the cells to be analyzed to output of the cell counting result.
  • FIG. 7 is a diagram showing an example of a nuclear region estimation image output by estimation with a learning model, using the IHM phase image shown in FIG. 6 as the input image.
  • FIG. 8 is a diagram showing the correct image corresponding to the nuclear region estimation image shown in FIG. 7.
  • FIG. 9 is a diagram showing an image in which the nuclear position candidate points are superimposed on the IHM phase image shown in FIG. 6.
  • FIG. 12 is a diagram showing a cell region estimation image (binary image) output by estimation with a learning model, using the IHM phase image shown in FIG. 6 as the input image.
  • FIG. 1 is a block configuration diagram of the main part of a cell analysis apparatus according to an embodiment for carrying out the cell image analysis method according to the present invention.
  • The cell analysis device 1 of the present embodiment includes a microscopic observation unit 10, a control/processing unit 20, and an input unit 30 and display unit 40 serving as user interfaces. The cell analysis device 1 is further provided with an FCN model creation unit 50.
  • The microscopic observation unit 10 is an in-line holographic microscope (IHM); it includes a light source unit 11 containing a laser diode and an image sensor 12, and a culture plate 13 containing the cells 14 is placed between the light source unit 11 and the image sensor 12.
  • The control/processing unit 20 controls the operation of the microscopic observation unit 10 and processes the data it acquires. As functional blocks, it includes an imaging control unit 21, a hologram data storage unit 22, a phase information calculation unit 23, an image creation unit 24, a nuclear region estimation unit 25, a cell region estimation unit 26, a mask processing unit 27, a cell counting unit 28, and a display processing unit 29.
  • The nuclear region estimation unit 25 includes a nuclear region learning model storage unit 251, and the cell region estimation unit 26 includes a cell region learning model storage unit 261.
  • The FCN model creation unit 50 includes, as functional blocks, a learning image data input unit 51, an image alignment processing unit 52, a stained image preprocessing unit 53, a stained image binarization unit 54, a learning execution unit 55, and a model construction unit 56.
  • Each trained learning model created by the FCN model creation unit 50 is stored in the storage unit of the control/processing unit 20, where it functions as the nuclear region learning model storage unit 251 or the cell region learning model storage unit 261.
  • The control/processing unit 20 is in practice a personal computer on which predetermined software is installed, a higher-performance workstation, or a computer system including a high-performance computer connected to such a computer via a communication line. That is, the function of each block of the control/processing unit 20 can be embodied by processing that uses various data stored in the computer or computer system and is executed by software installed on that single computer or on a system of several computers.
  • The FCN model creation unit 50 is likewise a personal computer on which predetermined software is installed, or a higher-performance workstation. Normally this is a different computer from the control/processing unit 20, but it may be the same one; that is, the control/processing unit 20 can also take on the function of the FCN model creation unit 50.
  • First, the imaging control unit 21 controls the microscopic observation unit 10 to acquire hologram data by the following procedure.
  • The light source unit 11 irradiates a predetermined area of the culture plate 13 with coherent light having an angular spread of about 10°.
  • The coherent light transmitted through the culture plate 13 and the cells 14 (object light 16) reaches the image sensor 12 while interfering with the light transmitted through regions of the culture plate 13 close to the cells 14 (reference light 15).
  • The object light 16 undergoes a phase change as it passes through the cells 14, whereas the reference light 15 does not pass through the cells and therefore undergoes no such change. Consequently, on the detection surface (image plane) of the image sensor 12, an image formed by the interference fringes between the phase-shifted object light 16 and the unshifted reference light 15, that is, a hologram, is formed.
  • The light source unit 11 and the image sensor 12 are moved step by step in the X-axis and Y-axis directions, in concert with each other, by a movement mechanism (not shown). In this way the irradiation region (observation region) of the coherent light from the light source unit 11 is scanned over the culture plate 13, and hologram data (two-dimensional light intensity distribution data of the hologram formed on the detection surface of the image sensor 12) covering a wide two-dimensional area can be obtained.
  • The hologram data obtained by the microscopic observation unit 10 are sequentially sent to the control/processing unit 20 and stored in the hologram data storage unit 22.
  • The phase information calculation unit 23 reads the hologram data from the hologram data storage unit 22 and executes predetermined phase-recovery computations to calculate the phase information for the entire observation (imaging) area.
  • The image creation unit 24 then creates an IHM phase image based on the calculated phase information.
  • Well-known algorithms, such as that disclosed in Patent Document 6, can be used for calculating the phase information and creating the IHM phase image.
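  • As an illustration only (the patent relies on the algorithm of Patent Document 6, which is not reproduced here), phase recovery from an in-line hologram can be sketched with the standard angular-spectrum method in Python/NumPy as follows; the wavelength, pixel pitch, and propagation distance are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def reconstruct_phase(hologram, wavelength=405e-9, pitch=1.0e-6, z=1.0e-3):
    """Back-propagate an in-line hologram (intensity image) by the angular
    spectrum method and return the wrapped phase at the object plane.
    wavelength, pitch, and z are illustrative placeholders."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Angular-spectrum transfer function for back-propagation (-z);
    # evanescent components (arg <= 0) are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(-2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))
                        / wavelength),
                 0)
    field = np.fft.ifft2(np.fft.fft2(np.sqrt(hologram)) * H)
    return np.angle(field)  # wrapped phase; unwrapping would follow in practice
```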
  • FIG. 6 shows an example of an IHM phase image. Transparent cells are difficult to see with an ordinary optical microscope, yet individual cells can be observed fairly clearly in the IHM phase image. However, it is difficult to visually identify the nucleus of each cell in this image. Therefore, in the cell analysis device 1 of the present embodiment, a nuclear region estimation image, showing for each cell the region in which a nucleus is presumed to exist, is obtained using a fully convolutional network (FCN), one of the machine learning methods.
  • FIG. 2 is a conceptual diagram of the structure of the FCN. The structure and processing of FCNs are explained in detail in many publications, and they can be implemented using commercial or free software such as "MATLAB" from MathWorks (USA); only a schematic description is therefore given here.
  • The FCN includes, for example, a multi-layer network 60 in which convolutional layers and pooling layers are stacked repeatedly, and a convolutional layer 61 that corresponds to the fully connected layer of an ordinary convolutional neural network.
  • In the multi-layer network 60, convolution with a filter (kernel) of a predetermined size and pooling, which reduces the convolution result two-dimensionally and extracts salient values, are repeated.
  • However, the multi-layer network 60 may consist of convolutional layers only, without pooling layers.
  • In the final convolutional layer 61, local convolution and deconvolution are performed while a filter of predetermined size slides across the input image.
  • By performing semantic segmentation on an input image 63 such as an IHM phase image, this FCN can output a segmentation image 64 in which the cell nucleus regions are labeled pixel by pixel.
  • Here, the multi-layer network 60 and the convolutional layer 61 are designed to label the input IHM phase image pixel by pixel; that is, the smallest unit of a labeled region in the output segmentation image 64 is one pixel of the IHM phase image. Even a cell nucleus only about one pixel in size on the IHM phase image is therefore detected as a region in the segmentation image 64.
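  • The following is a minimal PyTorch sketch of an FCN of the kind described: stacked convolution/pooling stages, a 1×1 convolution standing in for the fully connected layer, and a deconvolution (transposed convolution) restoring pixel-wise labels. The layer counts and channel sizes are illustrative assumptions, not the network of the embodiment.

```python
import torch
import torch.nn as nn

class MiniFCN(nn.Module):
    """Minimal fully convolutional network for 2-class (nucleus/background)
    pixel-wise segmentation. Sizes are illustrative only."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/4 resolution
        )
        # A 1x1 convolution plays the role of the fully connected layer
        self.classifier = nn.Conv2d(32, n_classes, 1)
        # Transposed convolution (deconvolution) restores full resolution
        self.upsample = nn.ConvTranspose2d(n_classes, n_classes,
                                           kernel_size=4, stride=4)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))

# One grayscale IHM phase image batch: (batch, channel, H, W)
logits = MiniFCN()(torch.randn(1, 1, 256, 256))  # -> (1, 2, 256, 256)
```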
  • The coefficients (weights) of the filters in the convolutional layers of the multi-layer network 60 and in the final convolutional layer 61 must be trained in advance using a large number of learning images, thereby building a learning model.
  • Training can be performed with stochastic gradient descent, which is widely used in machine learning (particularly deep learning).
  • First, the learning image data input unit 51 reads in a large number of data sets, each consisting of an IHM phase image created by the image creation unit 24 and the corresponding correct image (such sets are also called teacher data or training data; here they are referred to as learning data) (step S11).
  • The IHM phase images are created from data obtained by actually imaging cells with the cell analysis device 1 as described above, but they need not come from one specific device; they may equally be obtained with another cell analysis device of the same configuration.
  • The correct image is a fluorescence image (nuclear-stained fluorescence image) obtained by staining only the nuclei of the cells from which the IHM phase image was created and photographing them with an appropriate microscope.
  • The staining method is not particularly limited as long as it stains the cell nucleus; for example, DAPI (4',6-diamidino-2-phenylindole), propidium iodide, SYTOX (registered trademark), TO-PRO (registered trademark)-3, and the like can be used.
  • Next, the image alignment processing unit 52 aligns the two images by applying image processing such as translation, rotation, and scaling to one of them (step S12). In general, it is advisable to align the nuclear-stained fluorescence image to the IHM phase image, in which the cells are seen more clearly.
  • This alignment may be performed manually by the operator, with reference to, for example, the edge of a well or a mark on the culture plate, or automatically by a predetermined algorithm.
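  • As one possible automatic realization (an assumption, not the patent's algorithm), OpenCV's ECC registration could be used; the sketch below handles translation and rotation, and the function and parameter names are illustrative.

```python
import cv2
import numpy as np

def align_to_phase(phase_img, stained_img, iters=200, eps=1e-6):
    """Register the nuclear-stained fluorescence image to the IHM phase
    image with OpenCV's ECC algorithm (Euclidean motion: translation and
    rotation; cv2.MOTION_AFFINE would also handle scaling). Both inputs
    are assumed to be single-channel arrays of the same size."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, iters, eps)
    _, warp = cv2.findTransformECC(phase_img.astype(np.float32),
                                   stained_img.astype(np.float32),
                                   warp, cv2.MOTION_EUCLIDEAN, criteria)
    h, w = phase_img.shape
    # WARP_INVERSE_MAP warps the stained image back onto the phase image grid
    return cv2.warpAffine(stained_img, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```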
  • Then, the stained image preprocessing unit 53 applies noise removal and background removal so that the cell nucleus regions stand out more clearly in the nuclear-stained fluorescence image (steps S13 and S14).
  • The noise removal is intended to remove noise of various kinds; filters such as linear filters and median filters can be used, for example.
  • The background removal is mainly intended to remove intensity unevenness in the background outside the cell nuclei; methods using an average-value filter and the like are known as background subtraction.
  • For both, various methods used in conventional image processing can be employed. Since the noise situation depends on the characteristics of the microscope and of the target sample, the noise removal can be omitted in some cases.
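  • One possible realization of steps S13 and S14, assuming a median filter for the noise removal and an average-value (box) filter estimate of the background, is sketched below; the kernel sizes are illustrative assumptions.

```python
import cv2

def preprocess_stained(img, median_ksize=3, bg_ksize=101):
    """Sketch of steps S13-S14 for an 8-bit stained fluorescence image:
    median-filter denoising, then subtraction of a large-kernel box-filter
    estimate of the uneven background. Kernel sizes are illustrative."""
    denoised = cv2.medianBlur(img, median_ksize)           # noise removal
    background = cv2.blur(denoised, (bg_ksize, bg_ksize))  # average-value filter
    return cv2.subtract(denoised, background)              # background subtraction
```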
  • The stained image binarization unit 54 then binarizes the preprocessed image to create a binary image in which the nuclear regions are clearly separated from the rest (step S15). It further applies, as a morphological transformation, a closing process that combines dilation and erosion to the binarized image (step S16).
  • FIG. 5(a) is the original nuclear-stained fluorescence image, FIG. 5(b) is the image of FIG. 5(a) after background removal, and FIG. 5(c) is an example of the binary image obtained by binarizing the image of FIG. 5(b). Here, as in the images described below, mesenchymal stem cells (MSCs) were the target, and DAPI was used for nuclear staining.
  • In the original image, the intensity unevenness of the background outside the nuclear regions is large, so a binarization that extracts the nuclear regions is not possible without the background removal.
  • By performing the background removal in advance, a binary image in which the nuclear regions are accurately extracted can be obtained, as shown in FIG. 5(c).
  • This binary image is one in which the cell nucleus regions and the other regions are semantically segmented for each pixel of the corresponding IHM phase image.
  • The closing process after binarization removes mainly small noise on the fluorescence image, such as bright-spot noise.
  • The closing process may be omitted depending on the noise situation and on the performance of the noise removal in step S13.
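  • A sketch of steps S15 and S16, assuming Otsu thresholding for the binarization (the patent does not fix a threshold method) and a square structuring element for the closing:

```python
import cv2
import numpy as np

def make_correct_image(preprocessed, close_ksize=5):
    """Sketch of steps S15-S16 for an 8-bit single-channel preprocessed
    image: Otsu binarization, then morphological closing (dilation
    followed by erosion). Threshold method and kernel size are
    illustrative choices, not fixed by the patent."""
    _, binary = cv2.threshold(preprocessed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((close_ksize, close_ksize), np.uint8)
    # Closing merges fragmented bright pixels and fills small dark gaps
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
```

  • For the cell region correct images described later, the same procedure with a larger closing kernel would fill the gaps between sparse cytoskeletal fibers.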
  • Using a large number of such learning data, the learning execution unit 55 executes FCN training (step S17). That is, the filter coefficients of the convolutional layers of the FCN are learned so that the result of semantic segmentation by the FCN comes as close as possible to the correct image.
  • The model construction unit 56 builds a model as the training proceeds, and when the predetermined training is completed, it saves the learning model based on the training result (step S18).
  • The data constituting the nuclear region learning model created in this way are stored in the nuclear region learning model storage unit 251.
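  • A minimal training-loop sketch for steps S17-S18 using stochastic gradient descent, assuming the MiniFCN sketched earlier and a `loader` yielding (phase image, correct mask) tensor pairs; all hyperparameters and the file name are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_fcn(model, loader, epochs=50, lr=1e-3):
    """Sketch of step S17: fit the FCN so its segmentation approaches the
    correct (binarized stained) images. `loader` is assumed to yield
    (phase, mask) pairs, with mask a (B, H, W) long tensor of {0, 1}."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()        # pixel-wise 2-class loss
    for _ in range(epochs):
        for phase, mask in loader:
            optimizer.zero_grad()
            logits = model(phase)            # (B, 2, H, W)
            loss = criterion(logits, mask)
            loss.backward()
            optimizer.step()
    # Step S18: save the trained model (illustrative file name)
    torch.save(model.state_dict(), "nuclear_region_model.pt")
```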
  • Interference fringes appear to a non-negligible extent in the hologram data obtained by the microscopic observation unit 10, and they can also appear in the IHM phase image.
  • In the IHM phase image shown in FIG. 6, a roughly concentric pattern derived from the interference fringes appears quite clearly.
  • When interference fringes show up clearly on the IHM phase image, parts of them may be falsely detected as cell nuclei during nuclear region estimation with the nuclear region learning model.
  • FIG. 7 is a diagram showing an example of a nuclear region estimation image obtained by estimation with the nuclear region learning model, using the IHM phase image shown in FIG. 6 as the input image.
  • FIG. 8 is a diagram showing a correct image of the nuclear region estimation image shown in FIG. 7, that is, an accurate nuclear region.
  • FIG. 9 is a diagram showing an image in which the points that are the nuclear region estimation results (nuclear position candidate points) are superimposed on the IHM phase image shown in FIG. 6.
  • FIG. 10 is a diagram showing an image in which the correct nuclear position points are superimposed on the IHM phase image shown in FIG. 6.
  • The points indicating nuclear positions in FIGS. 9 and 10 are rectangular; they were obtained by applying the maximum-value region extraction and binarization processes described later to the grayscale nuclear region images.
  • Comparing FIGS. 9 and 10, it can be seen that in FIG. 9 a large number of false nuclear regions are detected on the interference fringes, where no cell nuclei should exist. Simply estimating nuclear regions with the nuclear region learning model thus yields many false positives. Therefore, in the cell analysis apparatus of the present embodiment, a learning model for recognizing cell regions is created, separately from the model for recognizing nuclear regions, in order to exclude such falsely detected nuclear regions.
  • For this purpose, the cytoskeleton, which extends in fibrous or reticular form throughout the interior of the cell, is used.
  • FIGS. 11(a) and 11(b) are, respectively, a fluorescence image of stained actin filaments in a given observation region and a bright-field image of the same region obtained with an ordinary microscope.
  • Actin filaments are a type of cytoskeleton and are present in the form of fibers throughout the inside of cells.
  • As shown in FIG. 11(b), it is difficult to visually recognize the cell regions in the bright-field image, but as FIG. 11(a) shows, the actin filaments are present throughout almost the entire cell, so the range over which they are distributed can be regarded as the cell region. Therefore, in the cell analyzer of the present embodiment, a cell region learning model for extracting cell regions is created using fluorescence images in which the cytoskeleton (here, actin filaments) is stained as the correct images.
  • The cell region learning model can be created by the same procedure as the nuclear region learning model, that is, the procedure shown in FIG. 3.
  • The cytoskeleton stretches in fibrous form through the cell but is not necessarily distributed evenly, and where it is absent or sparse within a cell, some pixels come out black after binarization. The closing process applied after binarization, however, converts black pixels surrounded by white ones to white. As a result, an image is obtained that shows not just where the cytoskeleton itself lies but the whole range over which it is distributed, that is, the entire cell region. The image after closing thus separates, for each pixel of the corresponding IHM phase image, the cell regions from the other regions.
  • The learning execution unit 55 then executes FCN training using this large number of learning data.
  • The model construction unit 56 builds a model as the training proceeds, and when the predetermined training is completed, it saves the cell region learning model based on the training result.
  • The data constituting the cell region learning model created in this way are stored in the cell region learning model storage unit 261 of the cell analysis device 1.
  • To analyze cells, the operator sets the culture plate 13 containing the cells 14 to be analyzed at the predetermined position in the microscopic observation unit 10 and performs a predetermined operation on the input unit 30.
  • In response, the microscopic observation unit 10 images the sample (the cells 14 in the culture plate 13) (step S21).
  • The phase information calculation unit 23 and the image creation unit 24 then perform the phase calculations on the hologram data thus obtained and create an IHM phase image (step S22).
  • The nuclear region estimation unit 25 reads the IHM phase image obtained in step S22 as the input image, performs FCN processing using the nuclear region learning model stored in the nuclear region learning model storage unit 251, and acquires the segmentation image corresponding to the input image as the output image (step S23).
  • This segmentation image is a nuclear region estimation image that distinguishes the cell nucleus regions from the rest over the same observation range as the input IHM phase image.
  • Here, the nuclear region estimation unit 25 outputs a grayscale image whose gradation from white to black varies with the probability value of each pixel. That is, in the nuclear region estimation image, portions estimated with high confidence to be nuclear regions are shown in white or near-white tones, while portions estimated with low confidence are shown in tones closer to black.
  • For the IHM phase image shown in FIG. 6, the nuclear region estimation image shown in FIG. 7 is obtained. As described above, when interference fringes appear fairly clearly on the input IHM phase image, or when the image of a foreign substance appears, false nuclear regions show up at sites where no cell nucleus actually exists.
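  • Inference as in step S23 can be sketched as follows, assuming a trained two-class model such as the MiniFCN above; the softmax of the nucleus class gives the grayscale probability map just described.

```python
import torch

def estimate_nuclear_region(model, phase_image):
    """Step S23 sketch: run the trained FCN on one IHM phase image,
    given as a (H, W) float tensor, and return the per-pixel nucleus
    probability map in [0, 1] (white ~ high confidence, cf. FIG. 7)."""
    model.eval()
    with torch.no_grad():
        logits = model(phase_image.unsqueeze(0).unsqueeze(0))  # (1, 2, H, W)
        prob = torch.softmax(logits, dim=1)[0, 1]  # probability of "nucleus"
    return prob
```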
  • Similarly, the cell region estimation unit 26 reads the IHM phase image obtained in step S22 as the input image, performs FCN processing using the cell region learning model stored in the cell region learning model storage unit 261, and acquires the segmentation image corresponding to the input image as the output image (step S24).
  • This segmentation image is a cell region estimation image that distinguishes the cell regions from the rest over the same observation range as the input IHM phase image.
  • FIG. 12 is an example of a cell region estimation image obtained by using the IHM phase image shown in FIG. 6 as an input image.
  • Here, the cell region estimation unit 26 compares each pixel's probability value with a predetermined threshold and outputs a binary segmentation image in which pixels at or above the threshold are white and the others are black.
  • The mask processing unit 27 then masks the nuclear region estimation image obtained in step S23 with the binarized cell region estimation image obtained in step S24 (step S25). That is, only the nuclear regions lying within the white portions of the binary image of FIG. 12, i.e., within the cell regions, are retained, and all other nuclear regions are excluded.
  • The nuclear regions excluded at this point are those falsely detected under the influence of interference fringes or images of foreign matter.
  • FIG. 13 shows the nuclear region estimation image after masking; comparison with FIG. 7 shows that the number of nuclear regions is greatly reduced.
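  • A sketch of steps S24-S25, assuming both estimation results are probability maps; the threshold value is an illustrative assumption.

```python
import numpy as np

def mask_nuclear_estimate(nuclear_prob, cell_prob, threshold=0.5):
    """Steps S24-S25 sketch: binarize the cell region probability map with
    a fixed threshold (illustrative value), then zero the grayscale nuclear
    estimate outside the cell regions, removing false nuclei caused by
    interference fringes or foreign matter (cf. FIG. 13)."""
    cell_binary = cell_prob >= threshold        # white = cell region
    return np.where(cell_binary, nuclear_prob, 0.0)
```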
  • The nuclear region estimation image after masking is a grayscale image in which the peak signal value differs from nucleus to nucleus, so a decision based on a single threshold cannot detect all nuclei. Moreover, several adjacent nuclei may appear as a single region, so it is preferable to separate them. The mask processing unit 27 therefore applies a maximum-value region extraction process to the masked nuclear region estimation image.
  • In this process, first, the regions other than black are spatially expanded (grayscale dilation). Next, the brightness of the expanded image is lowered overall by subtracting a predetermined offset, chosen in view of the noise tolerance, from the signal (luminance) value of each pixel. Then the luminance values of the image before expansion and the brightness-lowered image are subtracted pixel by pixel. As a result, the luminance becomes non-zero only in a narrow zone around each luminance peak of the original image, regardless of the peak value, and zero elsewhere. This processing thus extracts the local-maximum regions whose luminance is higher than their surroundings in the original image.
  • The center-of-gravity position of each cell nucleus may also be computed to obtain its nuclear position candidate point.
  • In this way, the mask processing unit 27 extracts the maximum-value regions from the masked nuclear region estimation image and binarizes the result to obtain a nuclear position candidate point image showing the candidate points of the nuclear regions (step S26).
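  • The maximum-value region extraction just described can be sketched with grayscale dilation as follows; the structuring-element size and the offset value are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_maxima(img, size=15, offset=0.05):
    """Sketch of the maximum-value region extraction: grey-dilate the
    masked nuclear estimate, lower the dilated image by a noise-tolerance
    offset, and subtract it from the original. Only narrow zones around
    each local brightness peak remain positive, regardless of peak height."""
    dilated = ndimage.grey_dilation(img, size=(size, size))  # spatial expansion
    lowered = dilated - offset                               # brightness decrease
    diff = img - lowered                                     # pixel-wise subtraction
    maxima = np.where(diff > 0, diff, 0.0)  # non-zero only near peaks
    return maxima > 0                       # binarize -> candidate point image
```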
  • FIG. 14 shows the nuclear position candidate points superimposed on the IHM phase image shown in FIG. 6. Comparing FIG. 14 with FIG. 10, it can be seen that the candidate points substantially coincide with the correct nuclear positions. In other words, masking with the cell region estimation image accurately eliminates false nuclear regions and extracts the regions that are truly cell nuclei.
  • Finally, the cell counting unit 28 counts the nuclear position candidate points in the nuclear position candidate point image obtained in step S26 (step S27). Except in special cases, one cell has one nucleus, so the number of cell nuclei can be taken to represent the number of cells, and the display processing unit 29 displays the count as the cell number on the display unit 40 (step S28).
  • At this time, one or more of the IHM phase image, nuclear region estimation image, nuclear position candidate point image, cell region estimation image, and the like may be displayed together.
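  • Counting as in step S27 can be sketched with connected-component labeling; `candidate_points` is assumed to be the binary nuclear position candidate point image produced above.

```python
from scipy import ndimage

def count_cells(candidate_points):
    """Step S27 sketch: label connected candidate-point regions and take
    the count as the cell number (one nucleus per cell, barring special
    cases). Centroids can serve as the nuclear positions of FIG. 14."""
    labels, n_nuclei = ndimage.label(candidate_points)
    centroids = ndimage.center_of_mass(candidate_points, labels,
                                       range(1, n_nuclei + 1))
    return n_nuclei, centroids
```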
  • As described above, the cell analyzer of the present embodiment can accurately calculate the number of cells present in the observation range and provide it to the user. Moreover, if an image in which the nuclear position candidate points are superimposed on the IHM phase image, as in FIG. 14, is displayed on the display unit 40, the user can easily grasp the positions of the nuclei within the cells. Likewise, if a nuclear region estimation image such as that of FIG. 13 is displayed on the display unit 40, the user can easily grasp the shape and morphology of the cell nuclei. Since this counting of living cells and observation of cell nuclei are performed non-invasively, the cells used can be cultured further or used for another purpose.
  • In the embodiment above, the grayscale nuclear region estimation image is masked first, and the maximum-value region extraction and binarization are then performed to obtain the nuclear position candidate points for each nuclear region; however, the masking with the cell region estimation image may instead be performed at a later stage. For example, the grayscale nuclear region estimation image may be binarized first, and any nuclear region that does not overlap, in part or in whole, with a cell region of the cell region estimation image may then be regarded as a false positive and excluded. Various masking methods can thus be adopted.
  • In the above embodiment, an FCN is used as the machine learning method for the semantic segmentation of cell nuclei, but an ordinary convolutional neural network (CNN) may of course be used instead. Furthermore, the present invention can usefully be applied not only to neural-network-based methods but to any machine learning method capable of semantic segmentation of images.
  • Such machine learning methods include, for example, support vector machines (SVM), random forests, and AdaBoost.
  • An FCN can output estimated segmentation probabilities along with the segmentation image for an input image; with other methods such as CNN or SVM, the image can instead be scanned with patch images (small sub-images into which the whole image is finely divided), outputting the probability of nucleus-likeness at the center point of each patch.
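  • A sketch of such patch scanning, where `classifier` is an assumed callable (e.g., a trained CNN or SVM wrapper) returning the nucleus probability for a patch's centre pixel; patch and stride sizes are illustrative.

```python
import numpy as np

def scan_patches(classifier, image, patch=32, stride=8):
    """Slide a small window over the phase image and let `classifier`
    fill a per-pixel nucleus-likeness probability map at the visited
    centre points (coarser than FCN output unless stride == 1)."""
    h, w = image.shape
    prob = np.zeros((h, w), dtype=np.float32)
    half = patch // 2
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            window = image[y - half:y + half, x - half:x + half]
            prob[y, x] = classifier(window)  # centre-pixel probability
    return prob
```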
  • In addition, although an in-line holographic microscope is used as the microscopic observation unit 10 in the above embodiment, it can of course be replaced with any holographic microscope capable of acquiring a hologram, such as an off-axis or phase-shift type.
  • One aspect of the cell image analysis method according to the present invention includes: a first learning model creation step of creating a nuclear region learning model by machine learning, using learning data in which a phase image of cells created from hologram data acquired by a holographic microscope is the input image and a corresponding pseudo nuclear region image, based on a stained image in which the cell nuclei are stained, is the correct image; a second learning model creation step of creating a cell region learning model by machine learning, using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image, based on a stained image in which the cytoskeleton is stained, is the correct image; a nuclear region estimation step of taking a phase image of the cells to be analyzed as the input image and acquiring, using the nuclear region learning model, a nuclear region estimation image showing the regions of the cell nuclei as the output image; a cell region estimation step of taking the phase image of the cells to be analyzed as the input image and acquiring, using the cell region learning model, a cell region estimation image showing the cell regions as the output image; and a nuclear region extraction step of extracting, using the nuclear region estimation image and the cell region estimation image, the cell nuclei present within the range estimated to be cell regions.
  • One aspect of the cell analyzer according to the present invention includes: a holographic microscope; an image creation unit that creates a phase image of cells based on hologram data obtained by observing the cells with the holographic microscope; a first learning model storage unit that stores a nuclear region learning model created by machine learning using learning data in which the phase image of cells created from the hologram data is the input image and a corresponding pseudo nuclear region image, based on a stained image in which the cell nuclei are stained, is the correct image; a second learning model storage unit that stores a cell region learning model created by machine learning using learning data in which the phase image of the cells is the input image and a corresponding pseudo cell region image, based on a stained image in which the cytoskeleton is stained, is the correct image; a nuclear region estimation unit that takes the phase image created by the image creation unit for the cells to be analyzed as the input image and acquires, using the nuclear region learning model stored in the first learning model storage unit, a nuclear region estimation image showing the regions of the cell nuclei as the output image; a cell region estimation unit that takes the phase image of the cells to be analyzed as the input image and acquires, using the cell region learning model stored in the second learning model storage unit, a cell region estimation image showing the cell regions as the output image; and a nuclear region extraction unit that extracts, using the nuclear region estimation image and the cell region estimation image, the cell nuclei present within the range estimated to be cell regions.
  • According to the cell image analysis method of paragraph 1 and the cell analyzer of paragraph 7, cell nuclei can be accurately extracted from the phase image created from hologram data of the cells to be analyzed, without invasive processing such as staining. In particular, even when interference fringes or images of foreign matter appear in the phase image, their influence can be eliminated or reduced and the cell nuclei can be observed accurately. Then, for example, by counting the cell nuclei, information on the number of cells can be obtained. The true shape and morphology of the cell nucleus, or the state of its interior, can also be observed. Moreover, since the cells to be analyzed are imaged (measured) non-invasively, they can continue to be cultured after imaging or be used for analysis or observation for another purpose.
  • The cell image analysis method according to paragraph 1 may further include a cell counting step of counting the number of cell nuclei extracted in the nuclear region extraction step.
  • Similarly, the cell analyzer according to paragraph 7 may further include a cell counting unit that counts the number of cell nuclei extracted by the nuclear region extraction unit.
  • As described above, one cell has one nucleus, so the number of cell nuclei can be taken to represent the number of cells. With these configurations, the accurately extracted cell nuclei can be counted, so the number of cells present in the observation range can be calculated accurately.
  • The cell image analysis method may further include a display step of displaying a nuclear region estimation image showing the cell nuclei extracted in the nuclear region extraction step.
  • Likewise, the cell analysis apparatus may further include a display processing unit that displays a nuclear region estimation image showing the cell nuclei extracted by the nuclear region extraction unit.
  • With these configurations, the user can observe, from the displayed image, the shape and morphology of the accurately extracted cell nuclei.
  • In the cell image analysis method and the cell analyzer described above, the cell region estimation image can be a binary image in which the cell regions are separated from the other regions. With this configuration, the division between the cell regions and the rest is unambiguous, so the regions likely to be cell nuclei can be extracted by accurate yet simple processing.
  • In the cell image analysis method, the machine learning can be performed using a convolutional neural network, and that convolutional neural network can be a fully convolutional network.
  • Likewise, in the cell analyzer, the machine learning can be performed using a convolutional neural network, and that convolutional neural network can be a fully convolutional network.
  • With these configurations, the regions likely to be cell nuclei and the cell regions can each be estimated accurately.
  • The cell analyzer can further be provided with a learning model creation unit that creates a learning model by machine learning using learning data in which a phase image of cells is the input image and a pseudo region image based on the corresponding stained image is the correct image, and the nuclear region learning model and the cell region learning model can be created using this learning model creation unit.
  • That is, the cell analysis apparatus itself has the function of creating the learning models used to obtain the nuclear region estimation image and the cell region estimation image from a phase image of cells. Accordingly, with the cell analyzer of paragraph 13, the learning models can easily be improved, for example by adding a nuclear region estimation image obtained by the nuclear region estimation unit for a phase image of the cells under analysis to the training data and retraining the model.

Landscapes

  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Organic Chemistry (AREA)
  • Biotechnology (AREA)
  • Zoology (AREA)
  • Wood Science & Technology (AREA)
  • Sustainable Development (AREA)
  • Microbiology (AREA)
  • Biomedical Technology (AREA)
  • Medicinal Chemistry (AREA)
  • Biochemistry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Genetics & Genomics (AREA)
  • Analytical Chemistry (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

One aspect of the cell analysis device according to the present invention comprises: a holographic microscope (10); an image creation unit (23, 24) for creating a phase image of a cell on the basis of hologram data; a first learning model storage unit (251) for storing a nuclear region learning model created by performing machine learning using learning data in which the phase image of the cell is set as the input image and a pseudo nuclear region image, corresponding to the input image and based on a stained image obtained by staining a cell nucleus, is set as the correct image; a second learning model storage unit (261) for storing a cell region learning model created by performing machine learning using learning data in which the phase image of the cell is set as the input image and a pseudo cell region image, corresponding to the input image and based on a stained image obtained by staining the cytoskeleton, is set as the correct image; a nuclear region inference unit (25) for obtaining, as the output image, a nuclear region inference image indicating the region of the cell nucleus using the nuclear region learning model, with the phase image created for the cell under analysis set as the input image; a cell region inference unit (26) for obtaining, as the output image, a cell region inference image indicating the cell region using the cell region learning model, with the phase image of the cell under analysis set as the input image; and a nuclear region extraction unit (27) for extracting a cell nucleus present within the range inferred to be the cell region, using the nuclear region inference image and the cell region inference image.
PCT/JP2019/040275 2019-10-11 2019-10-11 Cell image analysis method and cell image analysis device WO2021070371A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2019/040275 WO2021070371A1 (fr) 2019-10-11 2019-10-11 Cell image analysis method and cell image analysis device
JP2021551081A JP7248139B2 (ja) 2019-10-11 2019-10-11 細胞画像解析方法及び細胞解析装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/040275 WO2021070371A1 (fr) 2019-10-11 2019-10-11 Cell image analysis method and cell image analysis device

Publications (1)

Publication Number Publication Date
WO2021070371A1 true WO2021070371A1 (fr) 2021-04-15

Family

ID=75438159

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/040275 WO2021070371A1 (fr) 2019-10-11 2019-10-11 Cell image analysis method and cell image analysis device

Country Status (2)

Country Link
JP (1) JP7248139B2 (fr)
WO (1) WO2021070371A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016093090A1 (fr) * 2014-12-09 2016-06-16 コニカミノルタ株式会社 Image processing apparatus and image processing program
WO2019171453A1 (fr) * 2018-03-06 2019-09-12 株式会社島津製作所 Cell image analysis method and device, and learning model creation method
WO2019180833A1 (fr) * 2018-03-20 2019-09-26 株式会社島津製作所 Cell observation device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BIANCO, VITTORIO ET AL.: "Strategies for reducing speckle noise in digital holography", LIGHT: SCIENCE & APPLICATIONS, vol. 7, no. 48, 2018, pages 1-16, XP055816501 *
CHRISTIANSEN, ERIC M. ET AL.: "In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images", CELL, vol. 173, 19 April 2018 (2018-04-19), pages 792 - 803, XP002788720 *
LEE, JIMIN ET AL.: "Deep-Learning-Based Label-Free Segmentation of Cell Nuclei in Time-Lapse Refractive Index Tomograms", IEEE ACCESS, vol. 7, 21 June 2019 (2019-06-21), pages 83449 - 83460, XP011733981, DOI: 10.1109/ACCESS.2019.2924255 *
OUNKOMOL, CHAWIN ET AL.: "Label-free prediction of three-dimensional fluorescence images from transmitted light microscopy", NAT. METHODS, vol. 15, no. 11, November 2018 (2018-11-01), pages 917 - 920, XP036624647, DOI: 10.1038/s41592-018-0111-2 *

Also Published As

Publication number Publication date
JP7248139B2 (ja) 2023-03-29
JPWO2021070371A1 (fr) 2021-04-15

Similar Documents

Publication Publication Date Title
JP7344568B2 (ja) Method and system for digital staining of label-free fluorescence images using deep learning
JP7496389B2 (ja) Image analysis method, apparatus, program, and method of producing a trained deep learning algorithm
Weng et al. Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer
US20200388033A1 (en) System and method for automatic labeling of pathology images
US11978211B2 (en) Cellular image analysis method, cellular image analysis device, and learning model creation method
JP2023508284A (ja) Method and system for digital staining of microscopy images using deep learning
US20210110536A1 (en) Cell image analysis method and cell image analysis device
JP2017519985A (ja) Digital holographic microscopy data analysis for hematology
WO2014087689A1 (fr) Image processing device, image processing system, and program
Goceri et al. Quantitative validation of anti‐PTBP1 antibody for diagnostic neuropathology use: Image analysis approach
Wang et al. Detection of dendritic spines using wavelet‐based conditional symmetric analysis and regularized morphological shared‐weight neural networks
Son et al. Morphological change tracking of dendritic spines based on structural features
JP2022506135A (ja) Segmentation of 3D intercellular structures in microscopy images using an iterative deep learning workflow incorporating human contributions
US20210133981A1 (en) Biology driven approach to image segmentation using supervised deep learning-based segmentation
Delpiano et al. Automated detection of fluorescent cells in in‐resin fluorescence sections for integrated light and electron microscopy
Shaw et al. Optical mesoscopy, machine learning, and computational microscopy enable high information content diagnostic imaging of blood films
JP2021078356A (ja) Cell analysis device
Niederlein et al. Image analysis in high content screening
Hodneland et al. Automated detection of tunneling nanotubes in 3D images
WO2021070371A1 (fr) Cell image analysis method and cell image analysis device
Serin et al. A novel overlapped nuclei splitting algorithm for histopathological images
Mannam et al. Improving fluorescence lifetime imaging microscopy phasor accuracy using convolutional neural networks
WO2021070372A1 (fr) Cell image analysis method and cell image analysis device
Dai et al. Exceeding the limit for microscopic image translation with a deep learning-based unified framework
Kotyk et al. Detection of dead stained microscopic cells based on color intensity and contrast

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19948861

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021551081

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19948861

Country of ref document: EP

Kind code of ref document: A1