CN116343205A - Automatic labeling method for fluorescence-bright field microscopic image of planktonic algae cells - Google Patents

Automatic labeling method for fluorescence-bright field microscopic image of planktonic algae cells

Info

Publication number
CN116343205A
Authority
CN
China
Prior art keywords
image
bright field
mask
images
fluorescence
Prior art date
Legal status
Pending
Application number
CN202310217988.6A
Other languages
Chinese (zh)
Inventor
赵南京
贾仁庆
殷高方
徐敏
胡翔
黄朋
梁天泓
Current Assignee
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN202310217988.6A priority Critical patent/CN116343205A/en
Publication of CN116343205A publication Critical patent/CN116343205A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/693Acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695Preprocessing, e.g. image segmentation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Investigating, Analyzing Materials By Fluorescence Or Luminescence (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an automatic labeling method for fluorescence-bright field microscopic images of planktonic algae cells. Fluorescence and bright field microscopic images of the planktonic algae cells are first measured synchronously; the fluorescence images are then processed with digital image morphology, the outlines of the algae cells in the bright field microscopic images are drawn automatically by image processing, and the fluorescence images are converted into the instance mask images required for training a Mask RCNN instance segmentation network; finally, the Mask RCNN network is trained with the automatically generated labeled mask images, providing an effective means for planktonic algae cell identification. The automatic labeling method greatly reduces, and can even replace, the workload of manual labeling.

Description

Automatic labeling method for fluorescence-bright field microscopic image of planktonic algae cells
Technical Field
The invention belongs to the field of resources and environment, in particular to phytoplankton image recognition, and specifically relates to an automatic labeling method for fluorescence-bright field microscopic images of planktonic algae cells.
Background
Monitoring plankton diversity is an important component of biological water quality assessment and is of great significance for protecting the aquatic ecological environment. Traditional microscopic identification of algal communities requires specialized personnel and is time-consuming and labor-intensive. Because images of different planktonic algae cells show obvious morphological differences, monitoring methods based on image recognition can classify algae by extracting morphological, color and texture features from algae cell images, and thus play a role in planktonic algae cell monitoring.
Image segmentation is an important step in identifying planktonic algae cell images. Traditional segmentation methods such as thresholding and edge detection give unstable results: a complete algae cell is easily broken into fragments, and background impurities are easily segmented as cells. In recent years, image recognition methods based on convolutional neural networks, such as Mask RCNN, have been applied successfully to the segmentation of planktonic algae cell images. However, training a convolutional neural network requires a large amount of labeled data, and labeling planktonic algae cell images still relies on manually drawing the outline of each algae cell with a mouse in labeling tools such as LabelMe. The labeling workload for planktonic algae microscopic images is therefore enormous and has become a bottleneck limiting the development of planktonic algae cell monitoring methods based on image recognition.
Disclosure of Invention
The invention discloses a method for automatically labeling bright field microscopic images by using fluorescence microscopic images of planktonic algae cells, replacing the traditional workload of manually drawing algae cell outlines in labeling tools such as LabelMe.
The technical scheme of the invention is as follows: an automatic labeling method for fluorescence-bright field microscopic images of planktonic algae cells comprises the following steps:
step (1), synchronously collecting microscopic fluorescence images and microscopic bright field images of planktonic algae cells in a fluorescence-bright field dual-channel microscopic imaging instrument;
step (2), converting the fluorescence image into a grayscale image, and converting the grayscale fluorescence image into a binary image using a discriminant thresholding method;
step (3), performing a morphological opening operation on the binary image with a cross-shaped structuring element M to eliminate isolated points caused by noise;
step (4), performing a morphological dilation operation on the opened binary image with the cross-shaped structuring element M;
step (5), determining the connected regions in the binary image, thereby automatically drawing the outlines of the planktonic algae cells and generating the instance mask images required for training a Mask RCNN network;
step (6), labeling the algae species corresponding to each instance mask image, and constructing a planktonic algae cell image dataset;
step (7), first pre-training the Mask RCNN network with the COCO dataset, and then training the Mask RCNN network with the planktonic algae cell image dataset on the basis of the pre-trained model;
step (8), inputting acquired planktonic algae cell images into the trained Mask RCNN model, which outputs the bounding box, mask image and species of each planktonic algae cell.
The beneficial effects are that:
aiming at the problems that the data labeling is seriously dependent on manual labeling and the manual labeling cost is high in a planktonic algae cell segmentation network, the invention provides a method for automatically labeling an image of a microscopic field by utilizing a fluorescence microscopic image of planktonic algae cells. According to the method, through synchronously measuring the bright field image and the fluorescent image of the planktonic algae cells in the fluorescent-bright field dual-channel imaging system, after the digital image form processing is carried out on the fluorescent image, the fluorescent image is converted into a Mask image required by a Mask RCNN model for dividing training examples, so that the Mask image can be effectively and quickly automatically generated, the workload of manual marking is replaced, and finally, the Mask RCNN network is trained by using the automatically generated marking Mask image, so that an effective means is provided for planktonic algae cell identification.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the Mask RCNN network architecture;
FIG. 3 shows bright field and fluorescence images of Nostoc: a. bright field image; b. fluorescence image; c. additive fusion of the bright field and fluorescence images; d. automatically drawn mask image;
FIG. 4 shows bright field and fluorescence images of Peridinium: a. bright field image; b. fluorescence image; c. additive fusion of the bright field and fluorescence images; d. automatically drawn mask image;
FIG. 5 shows bright field and fluorescence images of Oocystis lacustris: a. bright field image; b. fluorescence image; c. additive fusion of the bright field and fluorescence images; d. automatically drawn mask image;
FIG. 6 shows the effect of the automatic labeling method of the present invention integrated into the LabelMe labeling tool: a. Nostoc; b. Peridinium; c. Oocystis lacustris;
FIG. 7 is a Mask RCNN segmentation effect diagram: a. Nostoc; b. Peridinium; c. Oocystis lacustris.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art without inventive effort on the basis of these embodiments fall within the scope of protection of the present invention.
The invention discloses a method for automatically drawing the outlines of planktonic algae cells in bright field microscopic images by using their fluorescence images, which, as shown in FIG. 1, comprises the following steps:
step (1), synchronously collecting microscopic fluorescence images and microscopic bright field images of planktonic algae cells in a fluorescence-bright field dual-channel microscopic imaging instrument;
step (2), converting the fluorescence image into a grayscale image, and converting the grayscale fluorescence image into a binary image B using a discriminant thresholding method. In the binary image B obtained in this step, the bright regions are the regions where the planktonic algae cells are located, and the black regions are the background;
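A minimal sketch of this step in Python-OpenCV (the library the description names later for contour extraction); Otsu's method is used here as a stand-in for the discriminant method, and the file name is a placeholder — both are assumptions rather than specifics from the patent:

```python
import cv2

# Load the fluorescence image and convert it to grayscale.
fluo = cv2.imread("fluorescence.png")          # hypothetical file name
gray = cv2.cvtColor(fluo, cv2.COLOR_BGR2GRAY)

# Binarize the grayscale image; Otsu's thresholding is assumed here.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# In the resulting image B, bright (255) pixels mark algae cells,
# black (0) pixels mark the background.
```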
step (3), performing a morphological opening operation on the binary image B with a cross-shaped structuring element M to eliminate isolated points caused by noise:

B₁ = B ∘ M = (B ⊖ M) ⊕ M

where ⊖ denotes morphological erosion and ⊕ denotes morphological dilation;
step (4), to ensure that the automatically drawn outline can contain the whole algal cell, performing a morphological dilation operation on B₁ with the cross-shaped structuring element M:

B₂ = B₁ ⊕ M
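Steps (3) and (4) can be sketched together with OpenCV as follows; the 5×5 kernel size is an assumption (the patent does not specify it), and `binary` is the binary image B from step (2):

```python
import cv2

# Cross-shaped structuring element M; the 5x5 size is an assumed value.
M = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 5))

# Step (3): morphological opening B1 = B o M removes isolated noise points.
B1 = cv2.morphologyEx(binary, cv2.MORPH_OPEN, M)

# Step (4): morphological dilation B2 = B1 (+) M enlarges the foreground
# so the automatically drawn outline can contain the whole algal cell.
B2 = cv2.dilate(B1, M)
```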
step (5), determining the connected regions in the binary image B₂ with the cv2.findContours function of the Python-OpenCV image processing library, thereby automatically drawing the outlines of the planktonic algae cells and generating the instance mask images required for training the Mask RCNN network. The network structure is shown in FIG. 2: Mask RCNN is an instance segmentation model developed on the basis of Faster RCNN, and its output end uses a fully convolutional network (FCN) to output the bounding boxes, segmentation mask images and classification results of the planktonic algae cells simultaneously;
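A possible sketch of the contour extraction and instance-mask generation with cv2.findContours; the bright field file path and the overlay drawing are illustrative assumptions:

```python
import cv2
import numpy as np

bright = cv2.imread("bright_field.png")   # hypothetical path to the paired bright field image

# Step (5): find the connected regions (outer contours) in B2.
contours, _ = cv2.findContours(B2, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

instance_masks = []              # one binary mask per algae cell instance
overlay = bright.copy()          # bright field image with the outlines drawn on it
for contour in contours:
    mask = np.zeros(B2.shape, dtype=np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
    instance_masks.append(mask)
    cv2.drawContours(overlay, [contour], -1, (0, 255, 0), thickness=2)
```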
step (6), labeling the algae species of each instance mask image with the LabelMe software, and constructing a planktonic algae cell image dataset;
step (7), the COCO dataset is a large-scale object detection and segmentation image dataset. To accelerate the convergence of the model, the Mask RCNN network is first pre-trained with the COCO dataset, and the network is then trained with the planktonic algae cell image dataset on the basis of the pre-trained model;
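The patent does not name a training framework. As one possible realization, the COCO-pretrained weights and the head replacement can be sketched with torchvision; the class count and all identifiers below are assumptions:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 1 + 3   # background + three algae species (assumed count)

# Mask RCNN with a ResNet-50 FPN backbone, pre-trained on the COCO dataset.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box and mask heads so they predict the algae classes,
# then fine-tune the network on the planktonic algae cell image dataset.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, NUM_CLASSES)
```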
step (8), inputting acquired planktonic algae cell images into the trained Mask RCNN model, which outputs the bounding box, mask image and species of each planktonic algae cell.
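Inference with the fine-tuned model might look as follows (a sketch continuing the torchvision assumption above; `model` is the network from the previous sketch and the image path is a placeholder):

```python
import cv2
import torch
import torchvision.transforms.functional as TF

bgr = cv2.imread("bright_field.png")          # hypothetical newly acquired bright field image
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

model.eval()
with torch.no_grad():
    pred = model([TF.to_tensor(rgb)])[0]

boxes  = pred["boxes"]    # bounding boxes of the detected algae cells
masks  = pred["masks"]    # per-instance segmentation masks (soft, 0-1)
labels = pred["labels"]   # predicted algae species indices
scores = pred["scores"]   # detection confidences
```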
Referring to FIG. 3, the effect of automatic labeling on Nostoc is shown: a is the bright field image, b is the fluorescence image, c is the additive fusion of the bright field and fluorescence images, and d is the automatically drawn mask image.
In the invention, the fluorescence image is processed digitally to obtain the mask image of the algae cells automatically. To label the species in the bright field images conveniently, the source code of the LabelMe labeling tool is modified, as shown in FIG. 6a: when a bright field image is opened, the corresponding fluorescence image is automatically loaded according to its path and the mask contour is drawn; the user then right-clicks the region where an algae cell is located and clicks the Edit Label button to annotate the species of the algae cell.
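One possible way to hand the automatically drawn contours to LabelMe is to write them out as a LabelMe-style polygon annotation file; the JSON field names follow typical LabelMe files and the default label is a placeholder, both assumptions:

```python
import json

def contours_to_labelme(contours, image_path, height, width, label="algae"):
    """Write automatically drawn contours as a LabelMe-style polygon
    annotation next to the bright field image, so only the species label
    needs to be edited in the tool afterwards."""
    shapes = [{
        "label": label,                                   # placeholder species name
        "points": c.squeeze(1).astype(float).tolist(),    # Nx2 polygon vertices (x, y)
        "group_id": None,
        "shape_type": "polygon",
        "flags": {},
    } for c in contours]
    annotation = {
        "version": "5.0.1",       # assumed LabelMe version string
        "flags": {},
        "shapes": shapes,
        "imagePath": image_path,
        "imageData": None,
        "imageHeight": height,
        "imageWidth": width,
    }
    with open(image_path.rsplit(".", 1)[0] + ".json", "w") as f:
        json.dump(annotation, f, indent=2)
```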
Referring to FIG. 4, the bright field and fluorescence images of Peridinium are shown: a is the bright field image, b is the fluorescence image, c is the additive fusion of the bright field and fluorescence images, and d is the automatically drawn mask image.
Referring to FIG. 5, the bright field and fluorescence images of Oocystis lacustris are shown: a is the bright field image, b is the fluorescence image, c is the additive fusion of the bright field and fluorescence images, and d is the automatically drawn mask image.
Referring to FIG. 6, the automatic outlining method of the present invention is integrated into the LabelMe labeling tool: when a bright field image is opened, the fluorescence image is automatically loaded according to its path and the cell contours are drawn, and the user can modify the plankton species through the pop-up box opened by right-clicking. In FIG. 6, a is Nostoc, b is Peridinium, and c is Oocystis lacustris.
Referring to FIG. 7, the Mask RCNN segmentation results are shown. After the Mask RCNN model is trained, inputting a bright field image of planktonic algae cells into the model yields an instance segmentation result containing three pieces of information: the algae species, the mask image and the bounding box. In FIG. 7, a is Nostoc, b is Peridinium, and c is Oocystis lacustris.
While the foregoing describes illustrative embodiments of the invention to facilitate understanding by those skilled in the art, it should be understood that the invention is not limited to the scope of those embodiments; various changes apparent to those skilled in the art, as long as they remain within the spirit and scope of the invention as defined by the appended claims, fall within the scope of protection of the invention.

Claims (3)

1. An automatic labeling method for fluorescence-bright field microscopic images of planktonic algae cells, characterized by comprising the following steps:
step (1), synchronously collecting microscopic fluorescence images and microscopic bright field images of planktonic algae cells in a fluorescence-bright field dual-channel microscopic imaging instrument;
step (2), converting the fluorescence image into a grayscale image, and converting the grayscale fluorescence image into a binary image using a discriminant thresholding method;
step (3), performing a morphological opening operation on the binary image with a cross-shaped structuring element M to eliminate isolated points caused by noise;
step (4), performing a morphological dilation operation on the opened binary image with the cross-shaped structuring element M;
step (5), determining the connected regions in the binary image, thereby automatically drawing the outlines of the planktonic algae cells and generating the instance mask images required for training a Mask RCNN network;
step (6), labeling the algae species corresponding to each instance mask image, and constructing a planktonic algae cell image dataset;
step (7), first pre-training the Mask RCNN network with the COCO dataset, and then training the Mask RCNN network with the planktonic algae cell image dataset on the basis of the pre-trained model;
step (8), inputting acquired planktonic algae cell images into the trained Mask RCNN model, which outputs the bounding box, mask image and species of each planktonic algae cell.
2. The automatic labeling method for fluorescence-bright field microscopic images of planktonic algae cells according to claim 1, wherein in step (2):
in the binary image B obtained in this step, the bright regions are the regions where the planktonic algae cells are located, and the black regions are the background.
3. The automatic labeling method for fluorescence-bright field microscopic images of planktonic algae cells according to claim 1, wherein in step (4) the morphological dilation operation performed on the binary image with the cross-shaped structuring element M ensures that the automatically drawn outline contains the whole algae cell.
CN202310217988.6A 2023-03-08 2023-03-08 Automatic labeling method for fluorescence-bright field microscopic image of planktonic algae cells Pending CN116343205A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310217988.6A CN116343205A (en) 2023-03-08 2023-03-08 Automatic labeling method for fluorescence-bright field microscopic image of planktonic algae cells

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310217988.6A CN116343205A (en) 2023-03-08 2023-03-08 Automatic labeling method for fluorescence-bright field microscopic image of planktonic algae cells

Publications (1)

Publication Number Publication Date
CN116343205A true CN116343205A (en) 2023-06-27

Family

ID=86890836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310217988.6A Pending CN116343205A (en) 2023-03-08 2023-03-08 Automatic labeling method for fluorescence-bright field microscopic image of planktonic algae cells

Country Status (1)

Country Link
CN (1) CN116343205A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645390A (en) * 2023-07-27 2023-08-25 吉林省星博医疗器械有限公司 Fluorescent image cell rapid segmentation method and system
CN116645390B (en) * 2023-07-27 2023-10-03 吉林省星博医疗器械有限公司 Fluorescent image cell rapid segmentation method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination