CN116402818A - Full-automatic fluorescence scanner and method thereof - Google Patents

Full-automatic fluorescence scanner and method thereof

Info

Publication number
CN116402818A
Authority
CN
China
Prior art keywords
training
full
image
feature
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310671643.8A
Other languages
Chinese (zh)
Inventor
张开山
黄城
刘艳省
赵丹
郭志敏
李超
饶浪晴
吴思凡
吴乐中
鲍珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HANGZHOU WATSON BIOTECH Inc
Original Assignee
HANGZHOU WATSON BIOTECH Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HANGZHOU WATSON BIOTECH Inc
Priority to CN202310671643.8A
Publication of CN116402818A
Legal status: Pending

Classifications

    (All classifications fall under G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING.)
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL → G06T 7/00 Image analysis → G06T 7/0002 Inspection of images, e.g. flaw detection → G06T 7/0012 Biomedical image inspection
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS → G06N 3/00 Computing arrangements based on biological models → G06N 3/02 Neural networks → G06N 3/04 Architecture, e.g. interconnection topology → G06N 3/045 Combinations of networks → G06N 3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N 3/04 Architecture, e.g. interconnection topology → G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/04 Architecture, e.g. interconnection topology → G06N 3/048 Activation functions
    • G06N 3/02 Neural networks → G06N 3/08 Learning methods → G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06T 5/00 Image enhancement or restoration → G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis → G06T 7/10 Segmentation; Edge detection → G06T 7/11 Region-based segmentation
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING → G06V 10/00 Arrangements for image or video recognition or understanding → G06V 10/70 Arrangements using pattern recognition or machine learning → G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation → G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level → G06V 10/806 Fusion of extracted features
    • G06V 10/70 Arrangements using pattern recognition or machine learning → G06V 10/82 Arrangements using neural networks
    • G06V 20/00 Scenes; Scene-specific elements → G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement → G06T 2207/10 Image acquisition modality → G06T 2207/10056 Microscopic image → G06T 2207/10061 Microscopic image from scanning electron microscope
    • G06T 2207/20 Special algorithmic details → G06T 2207/20081 Training; Learning
    • G06T 2207/20 Special algorithmic details → G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing → G06T 2207/30004 Biomedical image processing → G06T 2207/30024 Cell structures in vitro; Tissue sections in vitro
    • G06T 2207/30 Subject of image; Context of image processing → G06T 2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)

Abstract

A full-automatic fluorescence scanner and a method thereof acquire a detection image captured by a fluorescence microscope, mine implicit feature information about CTC cells in the detection image using a deep-learning-based artificial intelligence technique, and, based on the implicit feature information, perform noise reduction of the detection image and full expression of the CTC cell features, so as to improve the accuracy of CTC cell number detection.

Description

Full-automatic fluorescence scanner and method thereof
Technical Field
The present application relates to the field of intelligent scanning technology, and more particularly, to a fully automatic fluorescence scanner and method thereof.
Background
Circulating tumor cells (CTCs) are cancer cells shed from a primary tumor or from metastases into the blood circulation. They are important markers of tumor metastasis and recurrence, and the detection and analysis of circulating tumor cells in blood is of great importance for the early diagnosis and treatment of cancer.
Currently, fluorescence microscope-based CTC cell recognition is a common method for detecting CTCs: a fluorescently labeled specific antibody is used to distinguish CTCs from normal blood cells, and the CTCs are then imaged and counted under a fluorescence microscope. In the prior art, however, the detection and analysis of CTCs rely mainly on manual operation of the fluorescence microscope, which suffers from low detection efficiency, large error, and human interference, affecting the efficiency and accuracy of CTC cell number detection.
Accordingly, an optimized fully automated fluorescence scanner is desired.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. Embodiments of the present application provide a full-automatic fluorescence scanner and a method thereof that acquire a detection image captured by a fluorescence microscope, mine implicit feature information about CTC cells in the detection image using a deep-learning-based artificial intelligence technique, and, based on the implicit feature information, perform noise reduction of the detection image and full expression of the CTC cell features, so as to improve the accuracy of CTC cell number detection.
In a first aspect, there is provided a fully automated fluorescence scanner comprising:
the detection image acquisition module is used for acquiring detection images acquired by the fluorescence microscope;
the image noise reduction module is used for enabling the detection image to pass through the image filtering module based on the encoder-decoder structure to obtain a noise-reduced detection image and a shallow feature map output by a first convolution layer of the encoder;
the image semantic segmentation module is used for carrying out image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result image;
the first full-connection coding module is used for expanding the CTC cell prediction result graph into one-dimensional pixel vectors and then passing through the first full-connection layer to obtain full-pixel associated feature vectors of the CTC cell prediction result graph;
The second full-connection coding module is used for expanding the shallow feature map into shallow feature vectors and then obtaining shallow feature full-element associated feature vectors through a second full-connection layer;
the feature fusion module is used for cascading the all-pixel associated feature vector of the CTC cell prediction result graph and the shallow feature all-element associated feature vector to obtain a decoded feature vector; and
the quantity counting module is used for passing the decoded feature vector through a decoder to obtain a decoded value, wherein the decoded value is used for representing the quantity value of the CTC cells.
In the above-described fully automatic fluorescence scanner, the encoder and the decoder have a symmetrical structure: the encoder includes five convolution layers, and the decoder includes five deconvolution layers.
In the above-mentioned full-automatic fluorescence scanner, the image noise reduction module includes: the image noise reduction coding unit is used for enabling the detection image to pass through an encoder of the image filtering module to obtain a plurality of detection feature images, wherein the detection feature images output by a first convolution layer of the encoder are shallow feature images; the multi-scale feature fusion unit is used for fusing the detection feature images to obtain a multi-scale detection feature image; and the image noise reduction decoding unit is used for inputting the multi-scale detection feature images into a decoder of the image filtering module based on the layer jump connection of the detection feature images so as to obtain the noise reduced detection image.
In the above-mentioned full-automatic fluorescence scanner, the image noise reduction encoding unit is configured to: use the respective convolution layers of the encoder of the image filtering module to perform convolution processing, pooling processing, and nonlinear activation processing on input data, respectively, so that the detection feature maps are output by the respective convolution layers of the encoder.
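As a rough sketch of such an encoder (the PyTorch framing, channel widths, and kernel sizes below are illustrative assumptions, not parameters taken from the filing), each stage applies convolution, pooling, and a nonlinear activation, and the output of the first stage doubles as the shallow feature map:

```python
import torch
import torch.nn as nn

class DenoiseEncoder(nn.Module):
    """Five-stage convolutional encoder; the stage outputs form the multi-scale detection feature maps."""
    def __init__(self, in_channels: int = 1, widths=(16, 32, 64, 128, 256)):
        super().__init__()
        self.stages = nn.ModuleList()
        prev = in_channels
        for w in widths:
            # each stage: convolution -> pooling -> nonlinear activation
            self.stages.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size=3, padding=1),
                nn.MaxPool2d(kernel_size=2),
                nn.ReLU(inplace=True),
            ))
            prev = w

    def forward(self, x):
        feature_maps = []
        for stage in self.stages:
            x = stage(x)
            feature_maps.append(x)
        # feature_maps[0] plays the role of the shallow feature map output by the first convolution layer
        return feature_maps
```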
In the above-mentioned fully automatic fluorescence scanner, the first fully connected encoding module includes: the image unfolding unit is used for unfolding the CTC cell prediction result image into a CTC cell prediction one-dimensional pixel characteristic vector; and the first full-connection unit is used for carrying out full-connection coding on the CTC cell prediction one-dimensional pixel feature vector by using the first full-connection layer so as to obtain the CTC cell prediction result image full-pixel association feature vector.
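For intuition, the unfolding-plus-fully-connected step can be sketched in a few lines; the map size (64×64) and the 256-dimensional output are assumptions for illustration only:

```python
import torch
import torch.nn as nn

# assumed sizes: a 64x64 single-channel CTC cell prediction result map, 256-dimensional output
prediction_map = torch.rand(1, 1, 64, 64)        # CTC cell prediction result map
pixel_vector = torch.flatten(prediction_map, 1)  # unfold into a one-dimensional pixel vector
first_fc = nn.Linear(64 * 64, 256)               # first fully connected layer
full_pixel_feature = first_fc(pixel_vector)      # full-pixel associated feature vector
```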
In the above-mentioned full-automatic fluorescence scanner, the number counting module is configured to: perform a decoding regression on the decoded feature vector using the decoder according to the following formula to obtain the decoded value; wherein the formula is:

$\hat{Y} = W \otimes X + B$

where $X$ represents the decoded feature vector, $\hat{Y}$ represents the decoded value, $W$ represents a weight matrix, $B$ represents the bias vector, and $\otimes$ represents matrix multiplication.
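Read as a neural-network layer, this decoding regression is simply an affine map onto the count output. The sketch below is an illustrative assumption in PyTorch (the feature dimension of 512 and the variable names are not taken from the filing):

```python
import torch
import torch.nn as nn

# decoding regression: decoded_value = W (x) decoded_feature_vector + B
decoder_head = nn.Linear(in_features=512, out_features=1)  # W and B of the formula (sizes assumed)
decoded_feature_vector = torch.rand(1, 512)                # cascaded feature vector
decoded_value = decoder_head(decoded_feature_vector)       # scalar estimate of the CTC cell count
```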
The fully automatic fluorescence scanner further comprises a training module for training the image filtering module based on the encoder-decoder structure, the first fully connecting layer, the second fully connecting layer and the decoder; wherein, training module includes: the training data acquisition unit is used for acquiring training detection images and the true value of the quantity value of the CTC cells; the training image noise reduction unit is used for enabling the training detection image to pass through the image filtering module based on the encoder-decoder structure so as to obtain a training noise reduction detection image and a training shallow feature map output by a first convolution layer of the encoder; the training image semantic segmentation unit is used for carrying out image semantic segmentation on the training noise-reduced detection image so as to obtain a training CTC cell prediction result image; the training first full-connection coding unit is used for expanding the training CTC cell prediction result graph into training one-dimensional pixel vectors and then passing through the first full-connection layer to obtain training CTC cell prediction result graph full-pixel associated feature vectors; the training second full-connection coding unit is used for expanding the training shallow feature map into training shallow feature vectors and then obtaining training shallow feature full-element associated feature vectors through the second full-connection layer; the training feature fusion unit is used for cascading the training CTC cell prediction result graph full-pixel association feature vector and the training shallow feature full-element association feature vector to obtain a training decoding feature vector;
a decoding loss unit, configured to pass the training decoding feature vector through the decoder to obtain a decoding loss function value; a stream refinement loss unit, configured to calculate a stream refinement loss function value of the training CTC cell prediction result graph full-pixel associated feature vector and the training shallow feature full-element associated feature vector; and a model training unit, configured to train the image filtering module based on the encoder-decoder structure, the first fully connected layer, the second fully connected layer, and the decoder with a weighted sum of the decoding loss function value and the stream refinement loss function value as the loss function value, through back propagation of gradient descent.
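A minimal training-step sketch of this scheme is given below. It assumes a PyTorch model object that returns the count estimate together with the two associated feature vectors, and a hypothetical stream_refinement_loss helper standing in for the filing's optimization formula; the loss weights alpha and beta are likewise assumptions:

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, image, true_count, alpha=1.0, beta=0.1):
    """One step: weighted sum of the decoding loss and the stream refinement loss,
    optimized by back propagation of gradient descent (weights are illustrative)."""
    optimizer.zero_grad()
    decoded_value, v1, v2 = model(image)                    # count estimate + the two feature vectors
    decoding_loss = F.mse_loss(decoded_value, true_count)   # variance between prediction and ground truth
    refinement_loss = model.stream_refinement_loss(v1, v2)  # hypothetical helper for the filing's loss
    loss = alpha * decoding_loss + beta * refinement_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```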
In the above-described fully automatic fluorescence scanner, the decoding loss unit includes: a training decoding subunit, configured to perform decoding regression on the training decoding feature vector using the decoder according to the following training decoding formula to obtain a training decoded value, wherein the training decoding formula is:

$\hat{Y} = W \otimes X$

where $X$ is the training decoding feature vector, $\hat{Y}$ is the training decoded value, $W$ is a weight matrix, and $\otimes$ represents matrix multiplication; and a loss function value calculation subunit, configured to calculate, as the decoding loss function value, the variance between the training decoded value and the true value of the number of CTC cells in the training data.
In the above-mentioned fully automatic fluorescence scanner, the stream refinement loss unit is configured to: calculate the stream refinement loss function value of the training CTC cell prediction result graph full-pixel associated feature vector and the training shallow feature full-element associated feature vector according to an optimization formula in which $V_1$ represents the training CTC cell prediction result graph full-pixel associated feature vector, $V_2$ represents the training shallow feature full-element associated feature vector, $\|\cdot\|_2^2$ represents the square of the two-norm of a vector, $\ominus$ and $\otimes$ represent position-wise subtraction and multiplication of vectors respectively, $\exp(\cdot)$ represents the exponential operation of a vector, i.e. raising the natural exponential to the power of the feature value at each position of the vector, and $\mathcal{L}$ represents the stream refinement loss function value.
In a second aspect, a fully automatic fluorescence scanning method is provided, comprising:
acquiring a detection image acquired by a fluorescence microscope;
passing the detected image through an image filtering module based on an encoder-decoder structure to obtain a noise-reduced detected image and a shallow feature map output by a first convolution layer of the encoder;
Performing image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result image;
expanding the CTC cell prediction result graph into a one-dimensional pixel vector, and then passing through a first full-connection layer to obtain a full-pixel associated feature vector of the CTC cell prediction result graph;
expanding the shallow feature map into a shallow feature vector, and then passing through a second full-connection layer to obtain a shallow feature full-element associated feature vector;
cascading the all-pixel associated feature vector of the CTC cell prediction result graph and the all-element associated feature vector of the shallow feature to obtain a decoded feature vector; and
the decoded feature vector is passed through a decoder to obtain a decoded value, which is used to represent the number value of CTC cells.
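Assembled end to end, the method steps above correspond to a single forward pass. The following PyTorch sketch is an assumption-laden illustration (the submodule constructors, feature sizes, and names such as FluorescenceCounter are not taken from the filing):

```python
import torch
import torch.nn as nn

class FluorescenceCounter(nn.Module):
    """Forward pass mirroring the claimed method steps; all submodules are injected
    and their interfaces are illustrative assumptions."""
    def __init__(self, filtering_module, segmentation_head, fc1, fc2, decoder_head):
        super().__init__()
        self.filtering_module = filtering_module    # encoder-decoder image filter
        self.segmentation_head = segmentation_head  # semantic segmentation of the denoised image
        self.fc1, self.fc2 = fc1, fc2               # first / second fully connected layers
        self.decoder_head = decoder_head            # regression decoder

    def forward(self, detection_image):
        denoised, shallow_map = self.filtering_module(detection_image)
        prediction_map = self.segmentation_head(denoised)            # CTC cell prediction result map
        v1 = self.fc1(torch.flatten(prediction_map, 1))              # full-pixel associated feature vector
        v2 = self.fc2(torch.flatten(shallow_map, 1))                 # shallow feature full-element vector
        decoded_feature_vector = torch.cat([v1, v2], dim=1)          # cascade (concatenate)
        return self.decoder_head(decoded_feature_vector)             # decoded value: CTC cell count
```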
Compared with the prior art, the full-automatic fluorescence scanner and the method thereof provided by the present application acquire a detection image captured by a fluorescence microscope, mine implicit feature information about CTC cells in the detection image using a deep-learning-based artificial intelligence technique, and, based on the implicit feature information, perform noise reduction of the detection image and full expression of the CTC cell features, so as to improve the accuracy of CTC cell number detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario diagram of a fully automatic fluorescence scanner according to an embodiment of the present application.
Fig. 2 is a block diagram of a fully automated fluorescence scanner in accordance with an embodiment of the present application.
Fig. 3 is a block diagram of the image noise reduction module in a fully automatic fluorescence scanner according to an embodiment of the present application.
Fig. 4 is a block diagram of the first fully connected encoding module in a fully automated fluorescence scanner according to an embodiment of the present application.
Fig. 5 is a block diagram of the training module in a fully automated fluorescence scanner according to an embodiment of the present application.
Fig. 6 is a block diagram of the decode loss unit in a fully automated fluorescence scanner according to an embodiment of the present application.
Fig. 7 is a flowchart of a fully automatic fluorescence scanning method according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a system architecture of a fully automatic fluorescence scanning method according to an embodiment of the application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Unless defined otherwise, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In the description of the embodiments of the present application, unless otherwise indicated and defined, the term "connected" should be construed broadly, and for example, may be an electrical connection, may be a communication between two elements, may be a direct connection, or may be an indirect connection via an intermediary, and it will be understood by those skilled in the art that the specific meaning of the term may be understood according to the specific circumstances.
It should be noted that, the term "first\second\third" in the embodiments of the present application is merely to distinguish similar objects, and does not represent a specific order for the objects, it is to be understood that "first\second\third" may interchange a specific order or sequence where allowed. It is to be understood that the "first\second\third" distinguishing objects may be interchanged where appropriate such that the embodiments of the present application described herein may be implemented in sequences other than those illustrated or described herein.
As described above, currently, a CTC cell recognition method based on a fluorescence microscope is a common method for detecting CTCs, which is capable of distinguishing CTCs from normal blood cells using a fluorescent-labeled specific antibody, and then imaging and counting the CTCs using a fluorescence microscope. However, in the prior art, the detection and analysis of CTCs mainly rely on manual operations such as fluorescence microscope, and the problems of low detection efficiency, large error, artificial interference and the like exist, so that the efficiency and accuracy of CTC cell number detection are affected. Accordingly, an optimized fully automated fluorescence scanner is desired.
Accordingly, it is considered that in the process of detecting the number of CTC cells using a fluorescence microscope, the number detection of CTC cells can be achieved by analyzing a detection image acquired by the fluorescence microscope. However, it is considered that there is background noise interference of other fluorescent signals than CTCs, such as proteins in plasma, platelets, erythrocytes, etc., when CTCs are actually imaged and counted using a fluorescent microscope. Background noise can increase the detection difficulty and error of CTC, reduce the signal-to-noise ratio and contrast of CTC, and further influence the accuracy of CTC cell number detection. Therefore, in this process, it is difficult to perform noise reduction of the detection image and sufficient expression of the CTC cell characteristics to improve the accuracy of the number detection of CTC cells.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. The development of deep learning and neural networks provides new solutions and schemes for image noise reduction and mining of implicit characteristic information about CTC cells in the detected images.
Based on this, in the technical scheme of the application, a full-automatic fluorescence scanner is provided to detect tasks aiming at the number of CTC cells, and a multi-task learning mechanism can be adopted to jointly learn a counting task and a segmentation task, wherein the counting task is realized by means of point segmentation and image shallow features as guidance.
Specifically, in the segmentation task, the whole network structure is composed of an encoder-decoder pair, wherein the encoder adopts ResNet-50, whose 5 layers respectively characterize the shallow features and the deep features in the original detection image; in the decoder, the original image resolution is gradually restored by progressively fusing the corresponding features from the encoder. Then, the CTC cell prediction result graph and the image shallow features extracted by the encoder are jointly input into a counting module to obtain the count statistic. Specifically, in the counting module, full-connection feature extraction is performed on the CTC cell prediction result graph, the result is spliced and fused with the shallow feature map of the image to obtain the image splicing feature, and a fully connected layer followed by a ReLU activation function ensures that the predicted number is greater than 0.
More specifically, in the technical solution of the present application, first, a detection image acquired by a fluorescence microscope is acquired. It should be appreciated that it is contemplated that there may be background noise interference of other fluorescent signals besides CTCs, such as proteins in plasma, platelets, erythrocytes, etc., when CTCs are imaged and counted using a fluorescence microscope in practice. Background noise can increase the detection difficulty and error of CTC, reduce the signal-to-noise ratio and contrast of CTC, and further influence the accuracy of CTC cell number detection. Therefore, image noise reduction is required before image feature extraction.
Based on the above, in the technical solution of the present application, the detected image is passed through an image filtering module based on an encoder-decoder structure to obtain a noise-reduced detected image and a shallow feature map output by a first convolution layer of the encoder. Specifically, in the encoding stage, not only deep implicit semantic features related to CTC cells in the detected image but also shallow basic detail features of CTC cells are considered when feature extraction of the detected image is performed, so that subsequent statistics of the number of CTC cells are facilitated. Therefore, in the technical scheme of the application, the detection image is passed through an encoder comprising a plurality of convolution layers of the image filtering module to obtain a plurality of detection feature images, and the detection feature images are fused to obtain a multi-scale detection feature image, wherein the detection feature image output by the first convolution layer of the encoder is a shallow feature image. In particular, here, the encoder includes five convolution layers, and the first to fifth convolution layers of the encoder are used to extract implicit characteristic information about CTC cells at different depths in the detection image, respectively. That is, when the feature mining of the detection image is performed, the deep rich semantic features of CTC cells in the detection image are extracted, and meanwhile, the basic detail features of the deep rich semantic features in the shallow layer are reserved, so that the number statistics of CTC cells can be performed later.
Then, in a decoding stage, the multi-scale detection feature map is input to a decoder of the image filtering module comprising a plurality of deconvolution layers based on the layer jump connection of the plurality of detection feature maps to obtain the noise reduced detection image, in particular, the encoder and the decoder have a symmetrical structure here. That is, a symmetric U-shape form is adopted in the decoder, which includes five deconvolution layers, gradually restoring the original image resolution by gradually fusing corresponding features in the encoder. In this way, the first task, i.e. the noise-reduced detected image, can be output at the end of the decoder.
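A schematic PyTorch rendering of such a symmetric, skip-connected deconvolution decoder is given below; the channel widths are assumptions chosen to pair with the five-stage encoder sketch given earlier, and the module name and fusion scheme are illustrative rather than the filing's exact design:

```python
import torch
import torch.nn as nn

class DenoiseDecoder(nn.Module):
    """Five deconvolution stages, symmetric to the encoder; each stage fuses the
    matching encoder feature map through a skip (layer-jump) connection."""
    def __init__(self, widths=(256, 128, 64, 32, 16), out_channels: int = 1):
        super().__init__()
        self.stages = nn.ModuleList()
        for i, w in enumerate(widths):
            nxt = widths[i + 1] if i + 1 < len(widths) else out_channels
            # input is the previous output concatenated with the skipped encoder map
            self.stages.append(nn.Sequential(
                nn.ConvTranspose2d(w * 2, nxt, kernel_size=2, stride=2),
                nn.ReLU(inplace=True),
            ))

    def forward(self, encoder_maps):
        # encoder_maps: list ordered shallow -> deep; decode from the deepest map back up
        x = encoder_maps[-1]
        for stage, skip in zip(self.stages, reversed(encoder_maps)):
            x = stage(torch.cat([x, skip], dim=1))  # skip connection, then upsample
        return x  # noise-reduced detection image
```

Each stage concatenates the current feature map with the matching encoder output before upsampling, which plays the role of the layer-jump connection described above.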
Further, after the noise-reduced detection image is obtained, image semantic segmentation is carried out on the noise-reduced detection image so as to carry out corresponding masking operation of CTC cells in the noise-reduced detection image, and therefore a CTC cell prediction result graph is obtained.
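As a toy illustration of this masking step (assuming, for the sketch only, that the segmentation head emits a per-pixel CTC probability map), the prediction graph can be obtained by thresholding:

```python
import torch

# hypothetical per-pixel CTC probability map from the segmentation head (size assumed)
ctc_probability = torch.rand(1, 1, 64, 64)
ctc_mask = (ctc_probability > 0.5).float()    # corresponding masking operation for CTC cells
prediction_map = ctc_mask * ctc_probability   # CTC cell prediction result map
```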
And then, after the CTC cell prediction result graph is obtained, the CTC cell prediction result graph and the image shallow features extracted by the encoder are input into a counting module in a combined mode, and a counting statistic value of the number of CTC cells is obtained.
Specifically, when the image features in the CTC cell prediction result graph are feature-fused with the shallow feature map, it is considered that there is associated feature information about the implicit features of the CTC cells between the pixels of the CTC cell prediction result graph, which is of great significance for the subsequent detection of the number of CTC cells. Therefore, in the technical scheme of the present application, after the CTC cell prediction result graph is expanded into a one-dimensional pixel vector, it is encoded by the first fully connected layer to extract the global associated feature distribution information among the pixel values in the CTC cell prediction result graph, thereby obtaining the full-pixel associated feature vector. Then, for the shallow feature map, which contains shallow feature information about the details, edges, positions, and the like of the CTC cells, in order to facilitate subsequent feature fusion and improve the accuracy of CTC cell detection, the shallow feature map is further expanded into a shallow feature vector and then processed by the second fully connected layer to extract the associated feature information between the feature values in the shallow feature map, thereby obtaining the shallow feature full-element associated feature vector.
And then, cascading the full-pixel association feature vector and the shallow feature full-element association feature vector of the CTC cell prediction result graph to fuse global association feature information of each pixel in the CTC cell prediction result graph and shallow feature information of the CTC cell to obtain a decoding feature vector, and enabling the decoding feature vector to pass through a decoder to obtain a decoding value for representing the number value of the CTC cell. Specifically, the CTC cell prediction result map is subjected to full-connection feature extraction, then is spliced and fused with shallow features related to CTC cells in the detection image to obtain spliced features, and a layer of full-connection and ReLU activation function is used for ensuring that the prediction number is greater than 0. In this way, the accuracy of the detection of CTC cell number can be improved.
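In code, this cascade-then-decode step can be pictured as concatenation followed by a single fully connected layer with a ReLU, which clamps the predicted count at zero from below; the dimensions in this minimal sketch are assumptions:

```python
import torch
import torch.nn as nn

# counting head: concatenate the two feature vectors, then one fully connected
# layer with a ReLU so the predicted CTC count cannot fall below zero
count_head = nn.Sequential(nn.Linear(256 + 256, 1), nn.ReLU())

v1 = torch.rand(1, 256)  # CTC cell prediction result graph full-pixel associated feature vector
v2 = torch.rand(1, 256)  # shallow feature full-element associated feature vector
decoded_feature_vector = torch.cat([v1, v2], dim=1)
ctc_count = count_head(decoded_feature_vector)
```

Because the ReLU output is non-negative, the predicted count satisfies the greater-than-zero constraint stated above.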
In particular, in the technical solution of the present application, when the CTC cell predictor full-pixel associated feature vector and the shallow feature full-element associated feature vector are cascaded to obtain the decoded feature vector, because both the CTC cell predictor full-pixel associated feature vector and the shallow feature full-element associated feature vector are serialized representations of image semantic features of the detected image, when the CTC cell predictor full-pixel associated feature vector and the shallow feature full-element associated feature vector are cascaded to obtain the decoded feature vector, it is desirable to promote correlation of respective image semantic serialization expressions of the CTC cell predictor full-pixel associated feature vector and the shallow feature full-element associated feature vector in a merged high-dimensional feature space.
Based on this, in addition to the error loss function for the decoder, the applicant of the present application further introduces a stream refinement loss function for the CTC cell prediction result graph full-pixel associated feature vector and the shallow feature full-element associated feature vector, defined in terms of the square of the two-norm of a vector.
Here, the stream refinement loss function converts the serialized streaming distribution of the image semantic features of the CTC cell prediction result graph full-pixel associated feature vector and the shallow feature full-element associated feature vector into a spatial distribution in the merged high-dimensional feature space, and achieves super-resolution promotion of that spatial distribution by interpolation under the sequential distribution of the two vectors. In this way, it provides a finer alignment of the distribution differences in the high-dimensional feature space through the balanced mutual probabilistic relationships under the sequences, so as to jointly present the cross-dimensional contextual correlation in the serialized image semantic feature dimension and in the spatial dimension of the high-dimensional feature space. This promotes the correlation of the respective image semantic serialization expressions of the CTC cell prediction result graph full-pixel associated feature vector and the shallow feature full-element associated feature vector in the merged high-dimensional feature space, improves the expression effect of the cascaded decoding feature vector on these two feature vectors, and thereby improves the accuracy of the decoded value obtained by passing the decoding feature vector through the decoder. Thus, background noise interference can be reduced, and the accuracy of CTC cell number detection can be further improved.
Fig. 1 is an application scenario diagram of a fully automatic fluorescence scanner according to an embodiment of the present application. As shown in fig. 1, in this application scenario, first, a detection image (e.g., D as illustrated in fig. 1) acquired by a fluorescence microscope (e.g., N as illustrated in fig. 1) is acquired; the acquired detection image is then input into a server (e.g., S as illustrated in fig. 1) deployed with a full-automatic fluorescence scanning algorithm, wherein the server is capable of processing the detection image based on the full-automatic fluorescence scanning algorithm to generate a decoded value representing a number value of CTC cells.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
In one embodiment of the present application, FIG. 2 is a block diagram of a fully automated fluorescence scanner in accordance with an embodiment of the present application. As shown in fig. 2, a fully automatic fluorescence scanner 100 according to an embodiment of the present application includes: a detection image acquisition module 110 for acquiring a detection image acquired by a fluorescence microscope; an image noise reduction module 120, configured to pass the detected image through an image filtering module based on an encoder-decoder structure to obtain a noise reduced detected image and a shallow feature map output by a first convolution layer of the encoder; the image semantic segmentation module 130 is configured to perform image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result map; the first full-connection encoding module 140 is configured to expand the CTC cell prediction result map into a one-dimensional pixel vector, and then pass through the first full-connection layer to obtain a full-pixel associated feature vector of the CTC cell prediction result map; the second full-connection encoding module 150 is configured to expand the shallow feature map into a shallow feature vector, and then pass through a second full-connection layer to obtain a shallow feature full-element associated feature vector; the feature fusion module 160 is configured to concatenate the CTC cell prediction result map full-pixel associated feature vector and the shallow feature full-element associated feature vector to obtain a decoded feature vector; and a number counting module 170 for passing the decoded feature vector through a decoder to obtain a decoded value representing a number value of CTC cells.
Specifically, in the embodiment of the present application, the detection image acquisition module 110 is configured to acquire a detection image acquired by a fluorescence microscope. As described above, currently, a CTC cell recognition method based on a fluorescence microscope is a common method for detecting CTCs, which is capable of distinguishing CTCs from normal blood cells using a fluorescent-labeled specific antibody, and then imaging and counting the CTCs using a fluorescence microscope. However, in the prior art, the detection and analysis of CTCs mainly rely on manual operations such as fluorescence microscope, and the problems of low detection efficiency, large error, artificial interference and the like exist, so that the efficiency and accuracy of CTC cell number detection are affected. Accordingly, an optimized fully automated fluorescence scanner is desired.
Accordingly, it is considered that in the process of detecting the number of CTC cells using a fluorescence microscope, the number detection of CTC cells can be achieved by analyzing a detection image acquired by the fluorescence microscope. However, it is considered that there is background noise interference of other fluorescent signals than CTCs, such as proteins in plasma, platelets, erythrocytes, etc., when CTCs are actually imaged and counted using a fluorescent microscope. Background noise can increase the detection difficulty and error of CTC, reduce the signal-to-noise ratio and contrast of CTC, and further influence the accuracy of CTC cell number detection. Therefore, in this process, it is difficult to perform noise reduction of the detection image and sufficient expression of the CTC cell characteristics to improve the accuracy of the number detection of CTC cells.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. The development of deep learning and neural networks provides new solutions and schemes for image noise reduction and mining of implicit characteristic information about CTC cells in the detected images.
Based on this, in the technical scheme of the application, a full-automatic fluorescence scanner is provided to detect tasks aiming at the number of CTC cells, and a multi-task learning mechanism can be adopted to jointly learn a counting task and a segmentation task, wherein the counting task is realized by means of point segmentation and image shallow features as guidance.
Specifically, in the segmentation task, the whole network structure is composed of an encoder-decoder pair, wherein the encoder adopts ResNet-50, whose 5 layers respectively characterize the shallow features and the deep features in the original detection image; in the decoder, the original image resolution is gradually restored by progressively fusing the corresponding features from the encoder. Then, the CTC cell prediction result graph and the image shallow features extracted by the encoder are jointly input into a counting module to obtain the count statistic. Specifically, in the counting module, full-connection feature extraction is performed on the CTC cell prediction result graph, the result is spliced and fused with the shallow feature map of the image to obtain the image splicing feature, and a fully connected layer followed by a ReLU activation function ensures that the predicted number is greater than 0.
More specifically, in the technical solution of the present application, first, a detection image acquired by a fluorescence microscope is acquired.
Specifically, in the embodiment of the present application, the image noise reduction module 120 is configured to pass the detected image through an image filtering module based on an encoder-decoder structure to obtain a noise reduced detected image and a shallow feature map output by a first convolution layer of the encoder. It should be appreciated that it is contemplated that there may be background noise interference of other fluorescent signals besides CTCs, such as proteins in plasma, platelets, erythrocytes, etc., when CTCs are imaged and counted using a fluorescence microscope in practice. Background noise can increase the detection difficulty and error of CTC, reduce the signal-to-noise ratio and contrast of CTC, and further influence the accuracy of CTC cell number detection. Therefore, image noise reduction is required before image feature extraction.
Based on the above, in the technical solution of the present application, the detected image is passed through an image filtering module based on an encoder-decoder structure to obtain a noise-reduced detected image and a shallow feature map output by a first convolution layer of the encoder. Specifically, in the encoding stage, not only deep implicit semantic features related to CTC cells in the detected image but also shallow basic detail features of CTC cells are considered when feature extraction of the detected image is performed, so that subsequent statistics of the number of CTC cells are facilitated.
Therefore, in the technical scheme of the application, the detection image is passed through an encoder comprising a plurality of convolution layers of the image filtering module to obtain a plurality of detection feature images, and the detection feature images are fused to obtain a multi-scale detection feature image, wherein the detection feature image output by the first convolution layer of the encoder is a shallow feature image. In particular, here, the encoder includes five convolution layers, and the first to fifth convolution layers of the encoder are used to extract implicit characteristic information about CTC cells at different depths in the detection image, respectively. That is, when the feature mining of the detection image is performed, the deep rich semantic features of CTC cells in the detection image are extracted, and meanwhile, the basic detail features of the deep rich semantic features in the shallow layer are reserved, so that the number statistics of CTC cells can be performed later.
Then, in a decoding stage, the multi-scale detection feature map is input to a decoder of the image filtering module comprising a plurality of deconvolution layers based on the layer jump connection of the plurality of detection feature maps to obtain the noise reduced detection image, in particular, the encoder and the decoder have a symmetrical structure here. That is, a symmetric U-shape form is adopted in the decoder, which includes five deconvolution layers, gradually restoring the original image resolution by gradually fusing corresponding features in the encoder. In this way, the first task, i.e. the noise-reduced detected image, can be output at the end of the decoder.
In a specific example of the present application, the encoder and the decoder have a symmetrical structure, and the encoder includes five convolution layers, and the decoder includes five deconvolution layers.
Fig. 3 is a block diagram of the image noise reduction module in the full-automatic fluorescence scanner according to an embodiment of the present application, as shown in fig. 3, the image noise reduction module 120 includes: an image noise reduction encoding unit 121, configured to pass the detected image through an encoder of the image filtering module to obtain a plurality of detection feature maps, where the detection feature map output by a first convolution layer of the encoder is a shallow feature map; a multi-scale feature fusion unit 122, configured to fuse the plurality of detection feature maps to obtain a multi-scale detection feature map; and an image noise reduction decoding unit 123, configured to input the multi-scale detection feature map to a decoder of the image filtering module based on the layer jump connection of the plurality of detection feature maps, so as to obtain the noise reduced detection image.
Further, the image noise reduction encoding unit 121 is configured to: use the respective convolution layers of the encoder of the image filtering module to perform convolution processing, pooling processing, and nonlinear activation processing on input data, respectively, so that the detection feature maps are output by the respective convolution layers of the encoder.
The automatic codec includes an encoder and a decoder, the encoder having two convolution layers in one example. The first convolution layer has 1 input channel, 2 output channels, a convolution kernel size of 10, a stride of 10, and a zero-padding width of 1, followed by a normalization layer and a ReLU nonlinear activation layer; the second convolution layer has 25 input channels, 50 output channels, a convolution kernel size of 3, a stride of 3, and a zero-padding width of 0, followed by a normalization layer and a ReLU nonlinear activation layer; the tail end of the encoder is a fully connected layer with 10 neurons. The head end of the decoder is a fully connected layer with 850 neurons, followed by two deconvolution layers: the first deconvolution layer has 50 input channels, 25 output channels, a convolution kernel size of 4, a stride of 3, and a zero-padding width of 1, followed by a normalization layer and a ReLU nonlinear activation layer; the second deconvolution layer has 25 input channels, 1 output channel, a convolution kernel size of 10, a stride of 10, and a zero-padding width of 1, followed by a Sigmoid nonlinear activation layer.
Specifically, in the embodiment of the present application, the image semantic segmentation module 130 is configured to perform image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result map. Further, after the noise-reduced detection image is obtained, image semantic segmentation is carried out on the noise-reduced detection image so as to carry out corresponding masking operation of CTC cells in the noise-reduced detection image, and therefore a CTC cell prediction result graph is obtained.
Specifically, in the embodiment of the present application, the first full-connection encoding module 140 and the second full-connection encoding module 150 are configured to spread the CTC cell prediction result map into a one-dimensional pixel vector, and then pass through a first full-connection layer to obtain a full-pixel associated feature vector of the CTC cell prediction result map; and the shallow feature map is used for expanding the shallow feature map into shallow feature vectors and then passing through a second full-connection layer to obtain shallow feature full-element associated feature vectors.
And then, after the CTC cell prediction result graph is obtained, the CTC cell prediction result graph and the image shallow features extracted by the encoder are input into a counting module in a combined mode, and a counting statistic value of the number of CTC cells is obtained.
Specifically, when the image features in the CTC cell prediction result map are feature fused with the shallow feature map, it is considered that the correlation feature information about the implicit features of the CTC cells is provided between each pixel in the CTC cell prediction result map, which is of great significance for the subsequent detection of the number of CTC cells.
Therefore, in the technical scheme of the present application, after the CTC cell prediction result graph is expanded into a one-dimensional pixel vector, it is encoded by the first fully connected layer to extract the global associated feature distribution information among the pixel values in the CTC cell prediction result graph, thereby obtaining the full-pixel associated feature vector. Then, for the shallow feature map, which contains shallow feature information about the details, edges, positions, and the like of the CTC cells, in order to facilitate subsequent feature fusion and improve the accuracy of CTC cell detection, the shallow feature map is further expanded into a shallow feature vector and then processed by the second fully connected layer to extract the associated feature information between the feature values in the shallow feature map, thereby obtaining the shallow feature full-element associated feature vector.
Fig. 4 is a block diagram of the first full-connection encoding module in the full-automatic fluorescence scanner according to an embodiment of the present application, and as shown in fig. 4, the first full-connection encoding module 140 includes: an image unfolding unit 141, configured to unfold the CTC cell prediction result map into a CTC cell prediction one-dimensional pixel feature vector; and a first full-connection unit 142, configured to perform full-connection encoding on the CTC cell prediction one-dimensional pixel feature vector by using the first full-connection layer to obtain the CTC cell prediction result map full-pixel associated feature vector.
Specifically, in the embodiment of the present application, the feature fusion module 160 and the number counting module 170 are configured to concatenate the CTC cell prediction result map full-pixel associated feature vector and the shallow feature full-element associated feature vector to obtain a decoded feature vector; and, means for passing the decoded feature vector through a decoder to obtain a decoded value, the decoded value being indicative of a number value of CTC cells.
And then, cascading the full-pixel association feature vector and the shallow feature full-element association feature vector of the CTC cell prediction result graph to fuse global association feature information of each pixel in the CTC cell prediction result graph and shallow feature information of the CTC cell to obtain a decoding feature vector, and enabling the decoding feature vector to pass through a decoder to obtain a decoding value for representing the number value of the CTC cell. Specifically, the CTC cell prediction result map is subjected to full-connection feature extraction, then is spliced and fused with shallow features related to CTC cells in the detection image to obtain spliced features, and a layer of full-connection and ReLU activation function is used for ensuring that the prediction number is greater than 0. In this way, the accuracy of the detection of CTC cell number can be improved.
Wherein the number counting module 170 is configured to: perform a decoding regression on the decoded feature vector using the decoder according to the following formula to obtain the decoded value; wherein the formula is:

$\hat{Y} = W \otimes X + B$

where $X$ represents the decoded feature vector, $\hat{Y}$ represents the decoded value, $W$ represents a weight matrix, $B$ represents the bias vector, and $\otimes$ represents matrix multiplication.
Further, the fully automatic fluorescence scanner further comprises a training module for training the encoder-decoder structure based image filtering module, the first fully connected layer, the second fully connected layer, and the decoder. Fig. 5 is a block diagram of the training module in the full-automatic fluorescence scanner according to an embodiment of the present application, and as shown in fig. 5, the training module 180 includes: a training data acquisition unit 181, configured to acquire a training detection image and a true value of the number value of CTC cells; a training image noise reduction unit 182, configured to pass the training detection image through the image filtering module based on the encoder-decoder structure to obtain a training noise-reduced detection image and a training shallow feature map output by the first convolution layer of the encoder; a training image semantic segmentation unit 183, configured to perform image semantic segmentation on the training noise-reduced detection image to obtain a training CTC cell prediction result graph; a training first full-connection coding unit 184, configured to expand the training CTC cell prediction result graph into a training one-dimensional pixel vector, and then pass it through the first full-connection layer to obtain the training CTC cell prediction result graph full-pixel associated feature vector; a training second full-connection coding unit 185, configured to expand the training shallow feature map into a training shallow feature vector, and then pass it through the second full-connection layer to obtain the training shallow feature full-element associated feature vector; a training feature fusion unit 186, configured to concatenate the training CTC cell prediction result graph full-pixel associated feature vector and the training shallow feature full-element associated feature vector to obtain a training decoding feature vector; a decoding loss unit 187, configured to pass the training decoding feature vector through the decoder to obtain a decoding loss function value; a stream refinement loss unit 188, configured to calculate a stream refinement loss function value of the training CTC cell prediction result graph full-pixel associated feature vector and the training shallow feature full-element associated feature vector; and a model training unit 189, configured to train the encoder-decoder structure based image filtering module, the first fully connected layer, the second fully connected layer, and the decoder with a weighted sum of the decoding loss function value and the stream refinement loss function value as the loss function value, through back propagation of gradient descent.
Fig. 6 is a block diagram of the decoding loss unit in the full-automatic fluorescence scanner according to an embodiment of the present application. As shown in fig. 6, the decoding loss unit 187 includes: a training decoding subunit 1871 configured to perform decoding regression on the training decoding feature vector using the decoder according to the following training decoding formula to obtain a training decoded value; wherein the training decoding formula is:

Ŷ = W · X

where X is the training decoding feature vector, Ŷ is the training decoded value, W is the weight matrix, and · represents matrix multiplication; and a loss function value calculation subunit 1872 for calculating the variance between the training decoded value and the true value of the number of CTC cells in the training data as the decoding loss function value.
In particular, in the technical solution of the present application, when the CTC cell prediction result map full-pixel associated feature vector and the shallow feature full-element associated feature vector are cascaded to obtain the decoded feature vector, both vectors are serialized representations of the image semantic features of the detection image. It is therefore desirable to promote the correlation between their respective image-semantic serialized expressions within the merged high-dimensional feature space.
Based on this, in addition to the error loss function for the decoder, the applicant of the present application further introduces a stream refinement loss function of the CTC cell prediction result map full-pixel associated feature vector V1 and the shallow feature full-element associated feature vector V2, expressed as: calculating the stream refinement loss function value of the training CTC cell prediction result map full-pixel associated feature vector and the training shallow feature full-element associated feature vector according to the following optimization formula:

(optimization formula, provided as an image in the original publication)

wherein V1 represents the training CTC cell prediction result map full-pixel associated feature vector, V2 represents the training shallow feature full-element associated feature vector, ||·||₂² represents the square of the two-norm of a vector, ⊖ and ⊗ represent position-wise subtraction and multiplication of vectors, respectively, exp(·) represents the exponential operation of a vector, which computes the natural exponential function value raised to the power of the feature value at each position of the vector, and L represents the stream refinement loss function value.
Here, the stream refinement loss function converts the serialized streaming distribution of the image semantic features of the CTC cell prediction result map full-pixel associated feature vector V1 and the shallow feature full-element associated feature vector V2 into a spatial distribution within the merged high-dimensional feature space, and performs super-resolution enhancement of that spatial distribution by interpolation under the sequential distribution of the two vectors. By balancing the inter-class probabilistic relationships along the sequence, it provides finer alignment of the distribution differences within the high-dimensional feature space, so that the cross-dimensional context correlation is jointly presented in the serialized image-semantic feature dimension and in the spatial dimension of the high-dimensional feature space. This promotes the correlation of the image-semantic serialized expressions of the CTC cell prediction result map full-pixel associated feature vector and the shallow feature full-element associated feature vector within the merged high-dimensional feature space, improves the expression effect of the cascaded decoded feature vector on the two vectors, and improves the accuracy of the decoded value obtained from the decoded feature vector through the decoder. In this way, background noise interference can be reduced, and the accuracy of detecting the number of CTC cells can be further improved.
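Because the optimization formula itself is given only as an image in the original publication, the sketch below merely combines the operators the text names (position-wise subtraction and multiplication, the element-wise exponential, and the squared two-norm) into one plausible symmetric form. It is a hypothetical illustration, not the patented formula, and every choice in it is an assumption.

```python
import torch

def stream_refinement_loss(v1: torch.Tensor, v2: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a stream refinement loss.

    ASSUMPTION: the exact published formula is not reproducible from the text,
    so this function only demonstrates the named operators (position-wise
    subtraction/multiplication, exp, squared two-norm) in one possible
    symmetric combination.
    """
    # Exponentially re-weight each vector by its position-wise difference
    # from the other, then penalize the squared two-norm of the gap.
    a = torch.exp(v1 - v2) * v1
    b = torch.exp(v2 - v1) * v2
    return torch.sum((a - b) ** 2)
```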
In summary, a fully automated fluorescence scanner 100 according to an embodiment of the present application has been illustrated, which acquires a detection image collected by a fluorescence microscope, mines implicit feature information about CTC cells in the detection image by means of an artificial intelligence technique based on deep learning, and performs noise reduction of the detection image and full expression of the CTC cell features based on that implicit feature information, so as to improve the accuracy of detecting the number of CTC cells.
In one embodiment of the present application, fig. 7 is a flowchart of a fully automated fluorescence scanning method according to an embodiment of the present application. As shown in fig. 7, the fully automatic fluorescence scanning method according to an embodiment of the present application includes: 210, acquiring a detection image collected by a fluorescence microscope; 220, passing the detection image through an image filtering module based on an encoder-decoder structure to obtain a noise-reduced detection image and a shallow feature map output by a first convolution layer of the encoder; 230, performing image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result map; 240, expanding the CTC cell prediction result map into a one-dimensional pixel vector and then passing it through a first fully connected layer to obtain a CTC cell prediction result map full-pixel associated feature vector; 250, expanding the shallow feature map into a shallow feature vector and then passing it through a second fully connected layer to obtain a shallow feature full-element associated feature vector; 260, cascading the CTC cell prediction result map full-pixel associated feature vector and the shallow feature full-element associated feature vector to obtain a decoded feature vector; and 270, passing the decoded feature vector through a decoder to obtain a decoded value, the decoded value being used to represent the number value of CTC cells.
Fig. 8 is a schematic diagram of a system architecture of a fully automatic fluorescence scanning method according to an embodiment of the application. As shown in fig. 8, in the system architecture of the fully automatic fluorescence scanning method, first, a detection image acquired by a fluorescence microscope is acquired; then, the detected image is passed through an image filtering module based on an encoder-decoder structure to obtain a noise-reduced detected image and a shallow feature map output by a first convolution layer of the encoder; then, performing image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result image; then, expanding the CTC cell prediction result graph into a one-dimensional pixel vector, and then passing through a first full-connection layer to obtain a full-pixel associated feature vector of the CTC cell prediction result graph; then, expanding the shallow feature map into a shallow feature vector, and then passing through a second full-connection layer to obtain a shallow feature full-element associated feature vector; then, cascading the all-pixel associated feature vector of the CTC cell prediction result graph and the shallow feature all-element associated feature vector to obtain a decoded feature vector; and finally, passing the decoded feature vector through a decoder to obtain a decoded value, wherein the decoded value is used for representing the number value of the CTC cells.
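A condensed sketch of how these steps might be chained at inference time is given below; the function name, the module interfaces, and the use of PyTorch are illustrative assumptions rather than details fixed by the present application, and the modules passed in are hypothetical stand-ins for the units described above.

```python
import torch
import torch.nn as nn

def count_ctc_cells(detection_image: torch.Tensor,
                    filter_module: nn.Module,
                    segmentation: nn.Module,
                    fc1: nn.Linear,
                    fc2: nn.Linear,
                    decoder: nn.Module) -> torch.Tensor:
    """Illustrative end-to-end pass: denoise, segment, encode, fuse, decode."""
    # Steps 210-220: denoise and keep the shallow feature map from the encoder's first conv layer.
    denoised, shallow_map = filter_module(detection_image)
    # Step 230: semantic segmentation of the denoised image into a CTC cell prediction result map.
    prediction_map = segmentation(denoised)
    # Steps 240-250: flatten each map and project it through its fully connected layer.
    pred_vec = fc1(prediction_map.flatten(start_dim=1))
    shallow_vec = fc2(shallow_map.flatten(start_dim=1))
    # Step 260: cascade (concatenate) the two associated feature vectors.
    decoded_feature_vector = torch.cat([pred_vec, shallow_vec], dim=-1)
    # Step 270: decode the fused vector into the CTC cell count.
    return decoder(decoded_feature_vector)
```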
In a specific example, in the above-described fully automatic fluorescence scanning method, the encoder and the decoder have a symmetrical structure, and the encoder includes five convolution layers, and the decoder includes five deconvolution layers.
In a specific example, in the above-mentioned fully automatic fluorescence scanning method, passing the detection image through an image filtering module based on an encoder-decoder structure to obtain a noise-reduced detection image and a shallow feature map output by a first convolution layer of the encoder includes: passing the detected image through an encoder of the image filtering module to obtain a plurality of detected feature images, wherein the detected feature images output by a first convolution layer of the encoder are shallow feature images; fusing the detection feature images to obtain a multi-scale detection feature image; and inputting the multi-scale detection feature map into a decoder of the image filtering module based on the layer jump connection of the detection feature maps to obtain the noise-reduced detection image.
In a specific example, in the above full-automatic fluorescence scanning method, passing the detection image through the encoder of the image filtering module to obtain a plurality of detection feature maps, where the detection feature map output by the first convolution layer of the encoder is the shallow feature map, includes: using each convolution layer of the encoder of the image filtering module to perform convolution processing, pooling processing, and nonlinear activation processing on its input data, so that each convolution layer of the encoder outputs a detection feature map.
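A compact sketch of such a symmetric encoder-decoder image filtering module is shown below; the channel widths, kernel sizes, and the use of strided convolutions in place of explicit pooling are assumptions made purely for illustration, while the five encoder stages, five decoder stages, layer-jump (skip) connections, and the shallow feature map taken from the first convolution layer follow the description above.

```python
import torch
import torch.nn as nn

class ImageFilterModule(nn.Module):
    """Sketch of the encoder-decoder image filtering (denoising) module.

    Illustrative assumptions: base channel width, 3x3 kernels, strided
    convolutions instead of separate pooling layers, and additive skip
    (layer-jump) connections between matching scales.
    """

    def __init__(self, in_channels: int = 3, base_channels: int = 32, depth: int = 5):
        super().__init__()
        enc, dec = [], []
        channels = [base_channels * 2 ** i for i in range(depth)]
        ch_in = in_channels
        for ch_out in channels:  # five convolution layers in the encoder
            enc.append(nn.Sequential(
                nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True)))
            ch_in = ch_out
        for ch_out in reversed([in_channels] + channels[:-1]):  # five deconvolution layers
            dec.append(nn.Sequential(
                nn.ConvTranspose2d(ch_in, ch_out, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True)))
            ch_in = ch_out
        self.encoder = nn.ModuleList(enc)
        self.decoder = nn.ModuleList(dec)

    def forward(self, x: torch.Tensor):
        feats = []
        for layer in self.encoder:
            x = layer(x)
            feats.append(x)               # keep every scale for layer-jump connections
        shallow_feature_map = feats[0]    # output of the first convolution layer
        for i, layer in enumerate(self.decoder):
            x = layer(x)
            if i + 2 <= len(feats):
                skip = feats[-(i + 2)]
                if skip.shape == x.shape:
                    x = x + skip          # fuse encoder and decoder features via skip connection
        return x, shallow_feature_map


# Usage sketch: input height/width should be divisible by 2**depth (here 32).
if __name__ == "__main__":
    module = ImageFilterModule()
    denoised, shallow = module(torch.rand(1, 3, 256, 256))
    print(denoised.shape, shallow.shape)
```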
In a specific example, in the above-mentioned fully automatic fluorescence scanning method, expanding the CTC cell prediction result map into a one-dimensional pixel vector and then passing it through the first fully connected layer to obtain the CTC cell prediction result map full-pixel associated feature vector includes: expanding the CTC cell prediction result map into a CTC cell prediction one-dimensional pixel feature vector; and performing full-connection encoding on the CTC cell prediction one-dimensional pixel feature vector using the first fully connected layer to obtain the CTC cell prediction result map full-pixel associated feature vector.
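The flattening and full-connection encoding step could look like the short sketch below; the input resolution and output dimension are assumed values chosen only to make the example concrete.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a 1-channel 128x128 CTC cell prediction result map is
# flattened into a 16384-element one-dimensional pixel vector, then encoded
# into a 1024-dimensional full-pixel associated feature vector.
first_fc_layer = nn.Sequential(nn.Linear(128 * 128, 1024), nn.ReLU())

prediction_map = torch.rand(1, 1, 128, 128)            # batch of one prediction map
one_dim_pixel_vector = prediction_map.flatten(start_dim=1)
full_pixel_associated_vector = first_fc_layer(one_dim_pixel_vector)
print(full_pixel_associated_vector.shape)               # torch.Size([1, 1024])
```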
In a specific example, in the above-mentioned fully automatic fluorescence scanning method, passing the decoded feature vector through a decoder to obtain a decoded value, the decoded value being used to represent the number value of CTC cells, includes: performing decoding regression on the decoded feature vector using the decoder according to the following formula to obtain the decoded value; wherein the formula is:

Ŷ = W · X + B

where X represents the decoded feature vector, Ŷ represents the decoded value, W represents the weight matrix, B represents the bias vector, and · represents matrix multiplication.
In a specific example, the above-mentioned fully automatic fluorescence scanning method further includes training the encoder-decoder structure based image filtering module, the first fully connected layer, the second fully connected layer, and the decoder;
wherein training the encoder-decoder structure based image filtering module, the first fully connected layer, the second fully connected layer, and the decoder includes: acquiring a training detection image and a true value of the number of CTC cells; passing the training detection image through the image filtering module based on the encoder-decoder structure to obtain a training noise-reduced detection image and a training shallow feature map output by the first convolution layer of the encoder; performing image semantic segmentation on the training noise-reduced detection image to obtain a training CTC cell prediction result map; expanding the training CTC cell prediction result map into a training one-dimensional pixel vector and then passing it through the first fully connected layer to obtain a training CTC cell prediction result map full-pixel associated feature vector; expanding the training shallow feature map into a training shallow feature vector and then passing it through the second fully connected layer to obtain a training shallow feature full-element associated feature vector; cascading the training CTC cell prediction result map full-pixel associated feature vector and the training shallow feature full-element associated feature vector to obtain a training decoding feature vector; passing the training decoding feature vector through the decoder to obtain a decoding loss function value; calculating a stream refinement loss function value of the training CTC cell prediction result map full-pixel associated feature vector and the training shallow feature full-element associated feature vector; and training the encoder-decoder structure based image filtering module, the first fully connected layer, the second fully connected layer, and the decoder with a weighted sum of the decoding loss function value and the stream refinement loss function value as the loss function value and by back propagation of gradient descent.
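For orientation only, one training step could be organized roughly as follows; the optimizer, the loss weights, and the helper functions (filter_module, segmentation, decoding_loss, stream_refinement_loss) are assumptions carried over from the earlier sketches, not elements defined by the present application.

```python
import torch

def training_step(batch, filter_module, segmentation, fc1, fc2, decoder,
                  optimizer, w_decode: float = 1.0, w_stream: float = 0.1):
    """Sketch of one gradient-descent step using a weighted sum of the
    decoding loss and the (assumed) stream refinement loss."""
    image, true_count = batch                      # training detection image and CTC count
    denoised, shallow_map = filter_module(image)
    prediction_map = segmentation(denoised)
    pred_vec = fc1(prediction_map.flatten(start_dim=1))
    shallow_vec = fc2(shallow_map.flatten(start_dim=1))
    decoded_feature_vector = torch.cat([pred_vec, shallow_vec], dim=-1)
    decoded_value = decoder(decoded_feature_vector)

    # Weighted sum of the two loss terms; the weights are illustrative.
    loss = (w_decode * decoding_loss(decoded_value, true_count)
            + w_stream * stream_refinement_loss(pred_vec, shallow_vec))

    optimizer.zero_grad()
    loss.backward()                                # back propagation of gradient descent
    optimizer.step()
    return loss.detach()
```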
In a specific example, in the above-mentioned fully automatic fluorescence scanning method, passing the training decoding feature vector through the decoder to obtain a decoding loss function value includes: performing decoding regression on the training decoding feature vector using the decoder according to the following training decoding formula to obtain a training decoded value; wherein the training decoding formula is:

Ŷ = W · X

where X is the training decoding feature vector, Ŷ is the training decoded value, W is the weight matrix, and · represents matrix multiplication; and calculating the variance between the training decoded value and the true value of the number of CTC cells in the training data as the decoding loss function value.
In a specific example, in the above-mentioned fully automatic fluorescence scanning method, calculating the stream refinement loss function value of the training CTC cell prediction result map full-pixel associated feature vector and the training shallow feature full-element associated feature vector includes: calculating the stream refinement loss function value of the training CTC cell prediction result map full-pixel associated feature vector and the training shallow feature full-element associated feature vector according to the following optimization formula:

(optimization formula, provided as an image in the original publication)

wherein V1 represents the training CTC cell prediction result map full-pixel associated feature vector, V2 represents the training shallow feature full-element associated feature vector, ||·||₂² represents the square of the two-norm of a vector, ⊖ and ⊗ represent position-wise subtraction and multiplication of vectors, respectively, exp(·) represents the exponential operation of a vector, which computes the natural exponential function value raised to the power of the feature value at each position of the vector, and L represents the stream refinement loss function value.
It will be appreciated by those skilled in the art that the specific operation of the respective steps in the above-described full-automatic fluorescence scanning method has been described in detail in the above description of the full-automatic fluorescence scanner with reference to fig. 1 to 6, and thus, repetitive description thereof will be omitted.
The present application also provides a computer program product comprising instructions which, when executed, cause an apparatus to perform operations corresponding to the above-described methods.
In one embodiment of the present application, there is also provided a computer readable storage medium storing a computer program for executing the above-described method.
It should be appreciated that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Methods, systems, and computer program products of embodiments of the present application are described in terms of flow diagrams and/or block diagrams. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one skilled in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and may be used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices, and methods of the present application, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or terminal device comprising the element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. A fully automatic fluorescence scanner, comprising:
the detection image acquisition module is used for acquiring detection images acquired by the fluorescence microscope;
the image noise reduction module is used for enabling the detection image to pass through the image filtering module based on the encoder-decoder structure to obtain a noise-reduced detection image and a shallow feature map output by a first convolution layer of the encoder;
the image semantic segmentation module is used for carrying out image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result image;
the first full-connection coding module is used for expanding the CTC cell prediction result graph into one-dimensional pixel vectors and then passing through the first full-connection layer to obtain full-pixel associated feature vectors of the CTC cell prediction result graph;
the second full-connection coding module is used for expanding the shallow feature map into shallow feature vectors and then obtaining shallow feature full-element associated feature vectors through a second full-connection layer;
The feature fusion module is used for cascading the all-pixel associated feature vector of the CTC cell prediction result graph and the shallow feature all-element associated feature vector to obtain a decoded feature vector; and
and the quantity counting module is used for enabling the decoding characteristic vector to pass through a decoder to obtain a decoding value, wherein the decoding value is used for representing the quantity value of the CTC cells.
2. The fully automatic fluorescence scanner of claim 1, wherein the encoder and the decoder have a symmetrical structure and the encoder comprises five convolutional layers and the decoder comprises five deconvolution layers.
3. The fully automatic fluorescence scanner of claim 2, wherein the image noise reduction module comprises:
the image noise reduction coding unit is used for enabling the detection image to pass through an encoder of the image filtering module to obtain a plurality of detection feature images, wherein the detection feature images output by a first convolution layer of the encoder are shallow feature images;
the multi-scale feature fusion unit is used for fusing the detection feature images to obtain a multi-scale detection feature image; and
and the image noise reduction decoding unit is used for inputting the multi-scale detection feature images into a decoder of the image filtering module based on the layer jump connection of the detection feature images so as to obtain the noise reduced detection image.
4. A fully automatic fluorescence scanner according to claim 3, wherein the image noise reduction encoding unit is configured to: the respective convolution layers of the encoder using the image filtering module perform convolution processing, pooling processing, and nonlinear activation processing on input data, respectively, to output the detection feature map by the respective convolution layers of the encoder.
5. The fully automatic fluorescence scanner of claim 4, wherein the first fully connected encoding module comprises:
the image unfolding unit is used for unfolding the CTC cell prediction result image into a CTC cell prediction one-dimensional pixel characteristic vector; and
and the first full-connection unit is used for carrying out full-connection coding on the CTC cell prediction one-dimensional pixel feature vector by using the first full-connection layer so as to obtain the CTC cell prediction result graph full-pixel association feature vector.
6. The fully automatic fluorescence scanner of claim 5, wherein the number counting module is configured to: perform decoding regression on the decoded feature vector using the decoder according to the following formula to obtain the decoded value;
wherein the formula is:

Ŷ = W · X + B

where X represents the decoded feature vector, Ŷ represents the decoded value, W represents the weight matrix, B represents the bias vector, and · represents matrix multiplication.
7. The fully automatic fluorescence scanner of claim 6, further comprising a training module for training the encoder-decoder structure based image filtering module, the first fully connected layer, the second fully connected layer, and the decoder;
wherein, training module includes:
the training data acquisition unit is used for acquiring training detection images and the true value of the quantity value of the CTC cells;
the training image noise reduction unit is used for enabling the training detection image to pass through the image filtering module based on the encoder-decoder structure so as to obtain a training noise reduction detection image and a training shallow feature map output by a first convolution layer of the encoder;
the training image semantic segmentation unit is used for carrying out image semantic segmentation on the training noise-reduced detection image so as to obtain a training CTC cell prediction result image;
the training first full-connection coding unit is used for expanding the training CTC cell prediction result graph into training one-dimensional pixel vectors and then passing through the first full-connection layer to obtain training CTC cell prediction result graph full-pixel associated feature vectors;
The training second full-connection coding unit is used for expanding the training shallow feature map into training shallow feature vectors and then obtaining training shallow feature full-element associated feature vectors through the second full-connection layer;
the training feature fusion unit is used for cascading the training CTC cell prediction result graph full-pixel association feature vector and the training shallow feature full-element association feature vector to obtain a training decoding feature vector;
a decoding loss unit, configured to pass the training decoding feature vector through the decoder to obtain a decoding loss function value;
the stream refinement loss unit is used for calculating stream refinement loss function values of the training CTC cell prediction result graph full-pixel association feature vector and the training shallow feature full-element association feature vector; and
a model training unit for training the encoder-decoder structure based image filtering module, the first fully connected layer, the second fully connected layer, and the decoder with a weighted sum of the decoding loss function value and the stream refinement loss function value as the loss function value and by back propagation of gradient descent.
8. The fully automatic fluorescence scanner of claim 7, wherein the decode loss unit comprises:
a training decoding subunit for performing decoding regression on the training decoding feature vector using the decoder according to a training decoding formula to obtain a training decoded value; wherein the training decoding formula is:

Ŷ = W · X

where X is the training decoding feature vector, Ŷ is the training decoded value, W is the weight matrix, and · represents matrix multiplication; and
a loss function value calculation subunit for calculating the variance between the training decoded value and the true value of the number of CTC cells in the training data as the decoding loss function value.
9. The fully automatic fluorescence scanner of claim 8, wherein the flow refinement loss unit is configured to: calculate the stream refinement loss function value of the training CTC cell prediction result map full-pixel associated feature vector and the training shallow feature full-element associated feature vector according to the following optimization formula;
wherein the optimization formula is:

(optimization formula, provided as an image in the original publication)

wherein V1 represents the training CTC cell prediction result map full-pixel associated feature vector, V2 represents the training shallow feature full-element associated feature vector, ||·||₂² represents the square of the two-norm of a vector, ⊖ and ⊗ represent position-wise subtraction and multiplication of vectors, respectively, exp(·) represents the exponential operation of a vector, which computes the natural exponential function value raised to the power of the feature value at each position of the vector, and L represents the stream refinement loss function value.
10. A fully automatic fluorescence scanning method, comprising:
acquiring a detection image acquired by a fluorescence microscope;
passing the detected image through an image filtering module based on an encoder-decoder structure to obtain a noise-reduced detected image and a shallow feature map output by a first convolution layer of the encoder;
performing image semantic segmentation on the noise-reduced detection image to obtain a CTC cell prediction result image;
expanding the CTC cell prediction result graph into a one-dimensional pixel vector, and then passing through a first full-connection layer to obtain a full-pixel associated feature vector of the CTC cell prediction result graph;
expanding the shallow feature map into a shallow feature vector, and then passing through a second full-connection layer to obtain a shallow feature full-element associated feature vector;
cascading the all-pixel associated feature vector of the CTC cell prediction result graph and the all-element associated feature vector of the shallow feature to obtain a decoded feature vector; and
The decoded feature vector is passed through a decoder to obtain a decoded value, which is used to represent the number value of CTC cells.
CN202310671643.8A 2023-06-08 2023-06-08 Full-automatic fluorescence scanner and method thereof Pending CN116402818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310671643.8A CN116402818A (en) 2023-06-08 2023-06-08 Full-automatic fluorescence scanner and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310671643.8A CN116402818A (en) 2023-06-08 2023-06-08 Full-automatic fluorescence scanner and method thereof

Publications (1)

Publication Number Publication Date
CN116402818A true CN116402818A (en) 2023-07-07

Family

ID=87007992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310671643.8A Pending CN116402818A (en) 2023-06-08 2023-06-08 Full-automatic fluorescence scanner and method thereof

Country Status (1)

Country Link
CN (1) CN116402818A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115410050A (en) * 2022-11-02 2022-11-29 杭州华得森生物技术有限公司 Tumor cell detection equipment based on machine vision and method thereof
CN116046810A (en) * 2023-04-03 2023-05-02 云南通衢工程检测有限公司 Nondestructive testing method based on RPC cover plate damage load
CN116110081A (en) * 2023-04-12 2023-05-12 齐鲁工业大学(山东省科学院) Detection method and system for wearing safety helmet based on deep learning
CN116150566A (en) * 2023-04-20 2023-05-23 浙江浙能迈领环境科技有限公司 Ship fuel supply safety monitoring system and method thereof
CN116189179A (en) * 2023-04-28 2023-05-30 北京航空航天大学杭州创新研究院 Circulating tumor cell scanning analysis equipment
CN116201316A (en) * 2023-04-27 2023-06-02 佛山市佳密特防水材料有限公司 Close joint paving method and system for large-size ceramic tiles


Similar Documents

Publication Publication Date Title
CN108665441B (en) A kind of Near-duplicate image detection method and device, electronic equipment
CN115410050B (en) Tumor cell detection equipment based on machine vision and method thereof
CN110263666B (en) Action detection method based on asymmetric multi-stream
Sun et al. Feature pyramid reconfiguration with consistent loss for object detection
CN112507990A (en) Video time-space feature learning and extracting method, device, equipment and storage medium
CN116403213A (en) Circulating tumor cell detector based on artificial intelligence and method thereof
CN116434226B (en) Circulating tumor cell analyzer
Pang et al. Towards balanced learning for instance recognition
CN113221680B (en) Text pedestrian retrieval method based on text dynamic guiding visual feature extraction
CN116956929B (en) Multi-feature fusion named entity recognition method and device for bridge management text data
CN114495129A (en) Character detection model pre-training method and device
Weng et al. Multimodal multitask representation learning for pathology biobank metadata prediction
CN116416248A (en) Intelligent analysis system and method based on fluorescence microscope
CN112529862A (en) Significance image detection method for interactive cycle characteristic remodeling
CN116287138A (en) FISH-based cell detection system and method thereof
CN110210523B (en) Method and device for generating image of clothes worn by model based on shape graph constraint
CN116402818A (en) Full-automatic fluorescence scanner and method thereof
Qiu et al. Revisiting multi-level feature fusion: A simple yet effective network for salient object detection
CN116844242A (en) Face fake detection method and system based on double-branch convolution inhibition texture network
Ward et al. A practical guide to graph neural networks
CN114529794A (en) Infrared and visible light image fusion method, system and medium
Song et al. Towards End-to-End Unsupervised Saliency Detection with Self-Supervised Top-Down Context
Do Hong et al. Medical image segmentation using deep learning and blending loss
Zhang et al. Ctnet: rethinking convolutional neural networks and vision transformer for medical image segmentation
CN116309543B (en) Image-based circulating tumor cell detection equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230707