CN116403213A - Circulating tumor cell detector based on artificial intelligence and method thereof - Google Patents
- Publication number
- CN116403213A (application number CN202310671408.0A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- shallow
- deep
- classification
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G01N21/6486: Measuring fluorescence of biological material, e.g. DNA, RNA, cells
- G01N33/56966: Immunoassay; biospecific binding assay for animal cells
- G01N33/574: Immunoassay; biospecific binding assay for cancer
- G01N33/582: Testing involving labelled substances with a fluorescent label
- G06N3/0464: Convolutional networks [CNN, ConvNet]
- G06N3/0475: Generative networks
- G06N3/094: Adversarial learning
- G06V10/764: Image or video recognition using classification, e.g. of video objects
- G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82: Image or video recognition using neural networks
- G06V20/693: Microscopic objects, e.g. biological cells: acquisition
- G06V20/695: Microscopic objects, e.g. biological cells: preprocessing, e.g. image segmentation
- G06V20/698: Microscopic objects, e.g. biological cells: matching; classification
- Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change
Abstract
An artificial-intelligence-based circulating tumor cell detector and method acquire a fluorescence image of a blood sample under test, extract implicit features from that image with deep-learning image processing, and apply classification to automatically determine the CTC cell type. Type labels can thus be assigned to CTC cells intelligently, effectively improving the efficiency of CTC identification.
Description
Technical Field
The application relates to the technical field of intelligent detection, and more particularly to an artificial-intelligence-based circulating tumor cell detector and a method thereof.
Background
Circulating Tumor Cells (CTCs) are tumor cells shed from a primary tumor into the blood, where they can migrate and form metastases in distant organs. The detection and analysis of CTCs is of great importance for early diagnosis, prognosis evaluation and personalized treatment of cancer.
Generally, a circulating tumor cell detector works as follows: first, CTCs in a blood sample are separated from other blood cells and fixed on a glass slide; next, the CTCs are subjected to multiplex immunofluorescent labeling, including epithelial cell adhesion molecule (EpCAM), cytokeratin (CK), leukocyte common antigen (CD45), and nuclear staining (DAPI); then, high-resolution fluorescence images of all cells on the slide are acquired; finally, CTC detection and identification is performed on the fluorescence images.
However, because CTCs are extremely rare, morphologically diverse, and biologically complex, conventional CTC detection methods suffer from low sensitivity, poor specificity, and complicated operation. An optimized solution is therefore desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide an artificial-intelligence-based circulating tumor cell detector and a method thereof: the detector acquires a fluorescence image of a blood sample under test, extracts implicit features from that image with deep-learning image processing, and applies classification to automatically determine the CTC cell type. Type labels can thus be assigned to CTC cells intelligently, effectively improving the efficiency of CTC identification.
In a first aspect, there is provided an artificial intelligence based circulating tumor cell detector comprising:
a data acquisition module, for acquiring a fluorescence image of the blood sample under test;
a resolution enhancement module, for passing the fluorescence image of the blood sample under test through a resolution enhancer based on a generative adversarial network to obtain a sharpened fluorescence image;
a shallow feature extraction module, for passing the sharpened fluorescence image through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence-presentation shallow feature map;
a spatial enhancement module, for passing the fluorescence-presentation shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence-presentation shallow feature map;
a deep feature extraction module, for passing the spatially enhanced fluorescence-presentation shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence-presentation deep feature map;
a channel enhancement module, for passing the fluorescence-presentation deep feature map through a channel attention module to obtain a channel-enhanced fluorescence-presentation deep feature map;
a feature fusion module, for fusing the spatially enhanced fluorescence-presentation shallow feature map and the channel-enhanced fluorescence-presentation deep feature map to obtain a classification feature map; and
a cell type division module, for passing the classification feature map through a classifier to obtain a classification result, the classification result representing the type label of the CTC cells.
In the above artificial-intelligence-based circulating tumor cell detector, the resolution enhancement module is configured to input the fluorescence image of the blood sample under test into the resolution enhancer based on the generative adversarial network, which generates the sharpened fluorescence image through deconvolution coding.
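The patent does not disclose the generator's architecture; as a hedged illustration, the core deconvolution (transposed-convolution) step a GAN generator might use to upsample a low-resolution fluorescence patch can be sketched in NumPy. The kernel values, stride, and single-channel input are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def deconv2d(x, kernel, stride=2):
    """Transposed 2-D convolution: scatter each input pixel, scaled by
    the kernel, onto a stride-spaced output grid (no padding trim)."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((stride * (h - 1) + kh, stride * (w - 1) + kw))
    for i in range(h):
        for j in range(w):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * kernel
    return out

# Toy 4x4 "low-resolution" patch upsampled with a bilinear-like 3x3 kernel.
low = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[0.25, 0.5, 0.25],
                   [0.5,  1.0, 0.5 ],
                   [0.25, 0.5, 0.25]])
high = deconv2d(low, kernel, stride=2)
print(high.shape)  # (9, 9): roughly double the spatial resolution
```

In a trained generator the kernel would be learned adversarially; here it is fixed only to show how the scatter-add enlarges the spatial grid.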
In the above artificial-intelligence-based circulating tumor cell detector, the shallow feature extraction module is configured to use each layer of the first convolutional neural network model serving as the shallow feature extractor to perform, in forward propagation, convolution, pooling, and nonlinear activation on the sharpened fluorescence image, and to extract the fluorescence-presentation shallow feature map from a shallow layer of the model.
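The three per-layer operations named above can be sketched for a single layer. The 3x3 edge kernel, 2x2 max-pooling, and ReLU activation are illustrative assumptions; the patent does not fix the layer hyperparameters:

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D convolution (cross-correlation, as in CNN layers)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max-pooling."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

relu = lambda x: np.maximum(x, 0)  # nonlinear activation

img = np.random.default_rng(0).normal(size=(8, 8))  # stand-in fluorescence patch
edge_k = np.array([[1., 0., -1.]] * 3)              # illustrative edge-detecting kernel
feat = relu(max_pool(conv2d_valid(img, edge_k)))    # one shallow layer's forward pass
print(feat.shape)  # (3, 3)
```

Stacking several such layers and reading the activations out early is what "extracting from a shallow layer" amounts to.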
In the above artificial-intelligence-based circulating tumor cell detector, the spatial enhancement module includes: a shallow convolutional coding unit, for convolutionally encoding the fluorescence-presentation shallow feature map with the convolutional coding part of the spatial attention module to obtain a shallow convolutional feature map; a shallow spatial attention unit, for inputting the shallow convolutional feature map into the spatial attention part of the spatial attention module to obtain a shallow spatial attention map; a shallow activation unit, for passing the shallow spatial attention map through a Softmax activation function to obtain a shallow spatial attention feature map; and a shallow feature map computation unit, for computing the position-wise point multiplication of the shallow spatial attention feature map and the shallow convolutional feature map to obtain the spatially enhanced fluorescence-presentation shallow feature map.
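The softmax-then-multiply core of this spatial attention can be sketched as follows. The channel-mean used as the attention score is a stand-in for the module's convolutional encoding, an assumption made only to keep the example self-contained:

```python
import numpy as np

def spatial_attention(feat):
    """feat: (C, H, W). Score each spatial position, softmax over all
    H*W positions, then reweight every channel position-wise."""
    score = feat.mean(axis=0)            # stand-in for the conv-encoded attention map
    a = np.exp(score - score.max())
    a /= a.sum()                         # Softmax over spatial positions
    return a, feat * a[None, :, :]       # position-wise point multiplication

rng = np.random.default_rng(1)
feat = rng.normal(size=(4, 5, 5))        # toy shallow convolutional feature map
attn, enhanced = spatial_attention(feat)
print(attn.sum())  # 1.0: the attention weights form a spatial distribution
```

Positions with higher scores keep more of their feature response, which is how the module emphasizes the image regions likely to contain cells.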
In the above artificial-intelligence-based circulating tumor cell detector, the deep feature extraction module is configured to use each layer of the second convolutional neural network model serving as the deep feature extractor to perform, in forward propagation, convolution, pooling, and nonlinear activation on the spatially enhanced fluorescence-presentation shallow feature map, and to extract the fluorescence-presentation deep feature map from a deep layer of the model.
In the above artificial-intelligence-based circulating tumor cell detector, the channel enhancement module includes: a deep convolution unit, for inputting the fluorescence-presentation deep feature map into the multi-layer convolution layers of the channel attention module to obtain a deep convolutional feature map; a deep global mean unit, for computing the global mean of each feature matrix of the deep convolutional feature map along the channel dimension to obtain a deep feature vector; a deep activation unit, for inputting the deep feature vector into a Sigmoid activation function to obtain a deep attention weight vector; and a deep weighting unit, for weighting each feature matrix of the deep convolutional feature map along the channel dimension by the value at the corresponding position of the deep attention weight vector, to obtain the channel-enhanced fluorescence-presentation deep feature map.
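The global-mean, sigmoid-gate, channel-reweight sequence described above can be sketched directly (the toy tensor shapes are assumptions; the initial multi-layer convolution is omitted for brevity):

```python
import numpy as np

def channel_attention(feat):
    """feat: (C, H, W). Global average of each channel's feature matrix
    -> Sigmoid gate -> weight each channel by its gate value."""
    gap = feat.mean(axis=(1, 2))             # deep feature vector, one value per channel
    gate = 1.0 / (1.0 + np.exp(-gap))        # Sigmoid: deep attention weight vector
    return gate, feat * gate[:, None, None]  # channel-wise weighting

rng = np.random.default_rng(2)
feat = rng.normal(size=(4, 5, 5))            # toy fluorescence-presentation deep map
weights, enhanced = channel_attention(feat)
print(enhanced.shape)  # (4, 5, 5)
```

Channels whose responses are globally stronger receive gates nearer 1 and are emphasized; weakly responding channels are suppressed.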
In the above artificial-intelligence-based circulating tumor cell detector, the feature fusion module is configured to fuse the spatially enhanced fluorescence-presentation shallow feature map and the channel-enhanced fluorescence-presentation deep feature map with the following fusion formula to obtain the classification feature map:

F_c = α·F_s ⊕ β·F_d

where F_c denotes the classification feature map, F_s denotes the spatially enhanced fluorescence-presentation shallow feature map, F_d denotes the channel-enhanced fluorescence-presentation deep feature map, "⊕" denotes the addition of the elements at corresponding positions of the two feature maps, and α and β are weighting parameters that control the balance between the two feature maps.
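The weighted position-wise fusion amounts to one broadcasted expression. The choice α = β = 0.5 below is illustrative; the patent treats both as tunable weighting parameters:

```python
import numpy as np

def fuse(shallow, deep, alpha=0.5, beta=0.5):
    """Classification feature map = alpha*shallow (+) beta*deep, position-wise."""
    assert shallow.shape == deep.shape, "feature maps must share a shape"
    return alpha * shallow + beta * deep

s = np.ones((2, 3, 3))          # toy spatially enhanced shallow map
d = np.full((2, 3, 3), 3.0)     # toy channel-enhanced deep map
fused = fuse(s, d)
print(fused[0, 0, 0])  # 2.0
```

Tuning α and β trades off fine spatial detail from the shallow branch against semantic detail from the deep branch.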
In the above artificial-intelligence-based circulating tumor cell detector, the cell type division module includes: a matrix unfolding unit, for unfolding the classification feature map into a classification feature vector by row vectors or column vectors; a fully connected coding unit, for performing fully connected coding on the classification feature vector with several fully connected layers of the classifier to obtain a coded classification feature vector; and a classification unit, for passing the coded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
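The unfold, fully-connected, Softmax chain can be sketched with a single fully connected layer (the patent uses several; the random weights and toy dimensions are assumptions):

```python
import numpy as np

def classify(feat_map, W, b):
    """Row-major unfold -> one fully connected layer -> Softmax."""
    v = feat_map.reshape(-1)          # classification feature vector
    logits = W @ v + b                # fully connected coding
    e = np.exp(logits - logits.max())
    return e / e.sum()                # Softmax class probabilities

rng = np.random.default_rng(3)
feat = rng.normal(size=(2, 4, 4))     # toy classification feature map
n_classes = 3                         # assumed number of CTC type labels
W = rng.normal(size=(n_classes, feat.size))
b = np.zeros(n_classes)
probs = classify(feat, W, b)
print(probs.sum())  # 1.0: a probability distribution over CTC type labels
```

The argmax of `probs` would be reported as the CTC type label in the classification result.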
The artificial-intelligence-based circulating tumor cell detector further comprises a training module for training the resolution enhancer based on the generative adversarial network, the first convolutional neural network model serving as a shallow feature extractor, the spatial attention module, the second convolutional neural network model serving as a deep feature extractor, the channel attention module, and the classifier. The training module includes: a training data acquisition unit, for acquiring training data comprising training fluorescence images of the blood sample under test and the true type labels of the CTC cells; a training resolution enhancement unit, for passing a training fluorescence image through the resolution enhancer based on the generative adversarial network to obtain a training sharpened fluorescence image; a training shallow feature extraction unit, for passing the training sharpened fluorescence image through the first convolutional neural network model serving as a shallow feature extractor to obtain a training fluorescence-presentation shallow feature map; a training spatial enhancement unit, for passing the training fluorescence-presentation shallow feature map through the spatial attention module to obtain a training spatially enhanced fluorescence-presentation shallow feature map; a training deep feature extraction unit, for passing the training spatially enhanced fluorescence-presentation shallow feature map through the second convolutional neural network model serving as a deep feature extractor to obtain a training fluorescence-presentation deep feature map; a training channel enhancement unit, for passing the training fluorescence-presentation deep feature map through the channel attention module to obtain a training channel-enhanced fluorescence-presentation deep feature map; a training feature fusion unit, for fusing the training spatially enhanced fluorescence-presentation shallow feature map and the training channel-enhanced fluorescence-presentation deep feature map to obtain a training classification feature map; a feature redundancy optimization unit, for performing feature redundancy optimization on the training classification feature map to obtain an optimized classification feature map; a classification loss unit, for passing the optimized classification feature map through the classifier to obtain a classification loss function value; and a training unit, for training the resolution enhancer, the first convolutional neural network model, the spatial attention module, the second convolutional neural network model, the channel attention module, and the classifier based on the classification loss function value, with back-propagation along the direction of gradient descent.
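The gradient-descent update the training unit applies can be sketched for a single linear softmax layer standing in for the full network. The learning rate, toy dimensions, and single-example loop are all assumptions, not the patent's training configuration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(W, x, y, lr=0.1):
    """One gradient-descent step on the cross-entropy classification loss
    for a linear softmax classifier (stand-in for the full network)."""
    p = softmax(W @ x)
    loss = -np.log(p[y])                        # classification loss function value
    grad = np.outer(p - np.eye(len(p))[y], x)   # d(loss)/dW
    return W - lr * grad, loss

rng = np.random.default_rng(4)
x = rng.normal(size=8)     # toy optimized classification feature vector
y = 1                      # true CTC type label
W = np.zeros((3, 8))
losses = []
for _ in range(20):
    W, loss = train_step(W, x, y)
    losses.append(loss)
print(losses[0] > losses[-1])  # True: the loss falls along the gradient direction
```

In the full system this same loss value is back-propagated through every trainable module, from the classifier down to the resolution enhancer.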
In the above artificial-intelligence-based circulating tumor cell detector, the feature redundancy optimization unit is configured to perform feature redundancy optimization on the training classification feature map with the following optimization formula to obtain the optimized classification feature map:

[optimization formula rendered as an image in the original]

where F denotes the training classification feature map, F′ denotes the optimized classification feature map, Conv(·) denotes a single-layer convolution operation, ⊕, ⊖ and ⊗ denote the position-wise addition, subtraction and multiplication of feature maps, and B₁ and B₂ are bias feature maps whose initial values differ.
In a second aspect, there is provided an artificial intelligence based method for detecting circulating tumor cells, comprising:
acquiring a fluorescence image of a detected blood sample;
passing the fluorescence image of the blood sample under test through a resolution enhancer based on a generative adversarial network to obtain a sharpened fluorescence image;
passing the sharpened fluorescence image through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence-presentation shallow feature map;
passing the fluorescence-presentation shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence-presentation shallow feature map;
passing the spatially enhanced fluorescence-presentation shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence-presentation deep feature map;
passing the fluorescence-presentation deep feature map through a channel attention module to obtain a channel-enhanced fluorescence-presentation deep feature map;
fusing the spatially enhanced fluorescence-presentation shallow feature map and the channel-enhanced fluorescence-presentation deep feature map to obtain a classification feature map; and
passing the classification feature map through a classifier to obtain a classification result, the classification result representing the type label of the CTC cells.
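The full method can be traced end to end in one short sketch. Every stage is reduced to a minimal stand-in (no trained weights, toy shapes, a ReLU in place of the second CNN), so this shows only the data flow claimed above, not the claimed implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

def detect_ctc_type(image, n_classes=3):
    """Hedged end-to-end sketch of the claimed method, each stage a stub."""
    sharp = image                                         # resolution enhancement stub
    shallow = np.stack([sharp, -sharp])                   # shallow features (2 channels)
    score = shallow.mean(axis=0)
    a = np.exp(score - score.max()); a /= a.sum()
    shallow = shallow * a                                 # spatial attention
    deep = np.maximum(shallow, 0)                         # deep features (ReLU stub)
    g = 1 / (1 + np.exp(-deep.mean(axis=(1, 2))))
    deep = deep * g[:, None, None]                        # channel attention
    fused = 0.5 * shallow + 0.5 * deep                    # feature fusion
    W = rng.normal(size=(n_classes, fused.size))          # classifier (random weights)
    z = W @ fused.reshape(-1)
    e = np.exp(z - z.max())
    return int(np.argmax(e / e.sum()))                    # predicted CTC type label

label = detect_ctc_type(rng.normal(size=(6, 6)))
print(0 <= label < 3)  # True
```

Swapping each stub for its trained counterpart recovers the pipeline of the first aspect.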
Compared with the prior art, the artificial-intelligence-based circulating tumor cell detector and method provided by the present application acquire a fluorescence image of the blood sample under test, extract implicit features from that image with deep-learning image processing, and apply classification to automatically determine the CTC cell type. Type labels can thus be assigned to CTC cells intelligently, effectively improving the efficiency of CTC identification.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is an application scenario diagram of an artificial intelligence-based circulating tumor cell detector according to an embodiment of the present application.
FIG. 2 is a block diagram of an artificial intelligence based circulating tumor cell detector according to an embodiment of the present application.
FIG. 3 is a block diagram of the spatial enhancement module in an artificial intelligence based circulating tumor cell detector according to an embodiment of the present application.
FIG. 4 is a block diagram of the channel enhancement module in an artificial intelligence based circulating tumor cell detector according to an embodiment of the present application.
FIG. 5 is a block diagram of the cell type partitioning module in an artificial intelligence based circulating tumor cell detector according to an embodiment of the present application.
FIG. 6 is a block diagram of the training module in an artificial intelligence based circulating tumor cell detector according to an embodiment of the present application.
FIG. 7 is a flow chart of an artificial intelligence based method for detecting circulating tumor cells according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a system architecture of an artificial intelligence-based method for detecting circulating tumor cells according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without undue burden fall within the scope of this disclosure.
Unless defined otherwise, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application.
In the description of the embodiments of the present application, unless otherwise indicated and defined, the term "connected" should be construed broadly: it may be, for example, an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediary. Those skilled in the art will understand the specific meaning of the term according to the specific circumstances.
It should be noted that the terms "first", "second" and "third" in the embodiments of the present application merely distinguish similar objects and do not imply a specific order for those objects. Where permitted, "first", "second" and "third" may be interchanged, so that the embodiments of the present application described herein can be implemented in sequences other than those illustrated or described herein.
In view of the above technical problems, the technical conception of the present application is as follows: implicit features are extracted from the fluorescence image of the detected blood sample using a deep-learning-based image processing technique, and automatic classification of CTC cell types is realized through classification processing.
Specifically, in the technical solution of the present application, first, a fluorescence image of a blood sample to be detected is acquired. Here, the fluorescence image is obtained by multiplex immunofluorescence labeling of cells in a blood sample, and then photographing with a fluorescence microscope. The fluorescent image may show the morphology, size, location and fluorescent signal of the cell, which information is important for the detection and identification of CTC cells.
Given that CTC cells are very rare in blood samples, the acquired fluorescence images are typically low in resolution, making the morphology of CTC cells difficult to distinguish from that of other cells and biomarkers. In the technical scheme of the present application, the fluorescence image of the detected blood sample is therefore subjected to resolution enhancement based on an antagonism generation network (i.e., a generative adversarial network) to improve the quality of the fluorescence image, thereby obtaining a sharpened fluorescence image. The resolution enhancer based on the antagonism generation network utilizes the adversarial learning of a generator and a discriminator to reconstruct a high-resolution, sharp fluorescence image from the low-resolution fluorescence image. In this way, the detailed information of CTC cells can be enhanced, providing more accurate input for subsequent feature extraction and classification.
The clarified fluorescence image is then passed through a first convolutional neural network model as a shallow feature extractor to obtain a fluorescence-rendered shallow feature map. Here, the shallow feature extractor is a model for extracting shallow features, such as edges, corner points, textures, etc., from an image, which may reflect the basic structure and shape of cells in the image, and may be used to distinguish cell types. In particular, the first convolutional neural network model is composed of a plurality of convolutional layers and a pooling layer, local features can be extracted from an image, and the dimension and spatial complexity of the features can be reduced through the pooling layer.
In the technical scheme of the application, the fluorescence presentation shallow feature map is used for enhancing the extraction and analysis of the spatial feature information of the CTC through a spatial attention module so as to obtain the spatially enhanced fluorescence presentation shallow feature map. Wherein the spatial attention module may help the network better understand and process spatial structure information of the image. By using the spatial attention module, the network can adaptively adjust the weight of the feature map according to the importance of different areas, so that the attention and the processing of a specific area are enhanced, and the spatial distribution and morphological detail information of CTC can be better captured.
As described above, the spatially enhanced fluorescence presentation shallow feature map is weighted in the spatial dimension by the spatial attention module, thereby highlighting the position and shape information of the CTC cells. However, these shallow feature maps contain only local detail information, and do not reflect the semantic information and deep implicit feature information of CTC cells. In the technical scheme of the application, the space-enhanced fluorescence presentation shallow feature map is extracted and represented by a second convolutional neural network model serving as a deep feature extractor, so that a fluorescence presentation deep feature map is obtained. That is, more complex and deep processing and extraction of features is possible through the second convolutional neural network model.
And then, the fluorescence presentation deep characteristic map passes through a channel attention module to obtain a channel enhanced fluorescence presentation deep characteristic map. Here, the channel attention module may increase the convolutional neural network feature expression capability. In particular, the channel attention module may assign different weights according to the importance of different channels, thereby enhancing useful features and suppressing useless features. In the technical scheme of the application, the channel attention module can be used for enhancing the characteristic characterization capability of the fluorescent representation deep characteristic map, namely, the salient characteristics of CTC cells on different channels can be effectively extracted.
In the technical scheme of the present application, the shallow feature map contains spatial information such as the shape, size and position of cells, while the deep feature map contains semantic information such as the type, state and function of the cells. The spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map are therefore fused, so as to obtain a richer and more complete feature expression, namely the classification feature map.
The classification feature map is then passed through a classifier to obtain classification results, which are used to represent the type tags of CTC cells. Among other things, CTC cell type tags can be determined according to research needs and clinical applications, e.g., can be categorized according to tumor in situ, tumor invasiveness, cell subtype, or cancer treatment response. Here, the classifier may automatically recognize the target class according to the input classification feature map. The classifier can improve the automation degree of the CTC cell detector, reduce human intervention and errors and improve the efficiency and accuracy of the CTC cell detector.
Here, when the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map are fused to obtain the classification feature map, it is considered that the two maps respectively express the spatial-dimension-enhanced distribution of the shallow image semantic features of the fluorescence image and the channel-dimension-enhanced distribution of its deep image semantic features. To make full use of the image semantic features at different depths and their enhanced characterizations in different dimensions, the classification feature map is preferably obtained by directly cascading the two maps along the channel dimension. In this way, however, more redundant features exist in the classification feature map, which affects the classification regression convergence of the classification feature map through the classifier and reduces the accuracy of the classification result obtained by the classifier.
Thus, the applicant of the present application, during the training process, performs feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the classification feature map, denoted F, to obtain an optimized classification feature map, denoted F′, which is specifically expressed as:

F′ = [Conv(F) ⊕ B₁] ⊗ [Conv(F) ⊖ B₂] ⊕ F

wherein Conv(·) represents a single-layer convolution operation, ⊕, ⊖ and ⊗ respectively represent the position-by-position addition, subtraction and multiplication of feature maps, and B₁ and B₂ are bias feature maps, which may initially be provided, for example, as a global-mean feature map or a unit feature map of the classification feature map, wherein the initial bias feature maps B₁ and B₂ are different.
Here, the feature redundancy optimization based on low-cost bottleneck-mechanism stacking uses a low-cost bottleneck mechanism, i.e., the multiply-add stacking of two low-cost transformation features, to perform feature expansion, and matches the residual path by biasing the stacked channels with uniform values. Through low-cost operation transformations similar to a basic residual module, the hidden distribution information underlying the intrinsic features within the redundant features is revealed, so that a more intrinsic expression of the features is obtained with a simple and effective convolution-operation architecture. This optimizes the redundant feature expression of the classification feature map, improves the classification regression convergence of the classification feature map through the classifier, and thereby improves the accuracy of the classification result obtained by the classifier.
The application has the following technical effects: 1. an intelligent circulating tumor cell detection scheme is provided. 2. The scheme can intelligently divide the type labels of the CTC cells, and effectively improves the identification efficiency of the CTC cells.
Fig. 1 is an application scenario diagram of an artificial intelligence-based circulating tumor cell detector according to an embodiment of the present application. As shown in fig. 1, in this application scenario, first, a fluorescence image (e.g., C as illustrated in fig. 1) of a blood sample to be detected (e.g., M as illustrated in fig. 1) is acquired; the acquired fluorescence image is then input into a server (e.g., S as illustrated in fig. 1) deployed with an artificial intelligence based circulating tumor cell detection algorithm, wherein the server is capable of processing the fluorescence image based on the artificial intelligence circulating tumor cell detection algorithm to generate a classification result for representing a type tag of CTC cells.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
In one embodiment of the present application, FIG. 2 is a block diagram of an artificial intelligence based circulating tumor cell detector according to an embodiment of the present application. As shown in fig. 2, an artificial intelligence based circulating tumor cell detector 100 according to an embodiment of the present application includes: a data acquisition module 110 for acquiring a fluorescence image of the blood sample being tested; a resolution enhancement module 120 for passing the fluorescence image of the detected blood sample through a resolution enhancer based on an antagonism generation network to obtain a sharpened fluorescence image; the shallow feature extraction module 130 is configured to pass the sharpened fluorescence image through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence representation shallow feature map; the space enhancement module 140 is configured to pass the fluorescence presentation shallow feature map through the space attention module to obtain a space enhanced fluorescence presentation shallow feature map;
The deep feature extraction module 150 is configured to pass the spatially enhanced fluorescence presentation shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence presentation deep feature map; a channel enhancement module 160, configured to pass the fluorescence presentation deep feature map through a channel attention module to obtain a channel enhanced fluorescence presentation deep feature map; the feature fusion module 170 is configured to fuse the spatial enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map to obtain a classification feature map; and a cell type classification module 180 for passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is used for representing a type tag of the CTC cell.
Specifically, in the embodiment of the present application, the data acquisition module 110 is configured to acquire a fluorescence image of the blood sample to be tested. In view of the above technical problems, the technical conception of the present application is as follows: implicit features are extracted from the fluorescence image of the detected blood sample using a deep-learning-based image processing technique, and automatic classification of CTC cell types is realized through classification processing.
Specifically, in the technical solution of the present application, first, a fluorescence image of a blood sample to be detected is acquired. Here, the fluorescence image is obtained by multiplex immunofluorescence labeling of cells in a blood sample, and then photographing with a fluorescence microscope. The fluorescent image may show the morphology, size, location and fluorescent signal of the cell, which information is important for the detection and identification of CTC cells.
Specifically, in the embodiment of the present application, the resolution enhancement module 120 is configured to pass the fluorescence image of the detected blood sample through a resolution enhancer based on an antagonism generation network (i.e., a generative adversarial network) to obtain a sharpened fluorescence image. Given that CTC cells are very rare in blood samples, the acquired fluorescence images are typically low in resolution, making the morphology of CTC cells difficult to distinguish from that of other cells and biomarkers. In the technical scheme of the present application, the fluorescence image of the detected blood sample is therefore subjected to resolution enhancement based on the antagonism generation network to improve the quality of the fluorescence image, thereby obtaining a sharpened fluorescence image.
Wherein, based on the resolution enhancer of the antagonism generation network, the antagonism learning thought of the generator and the discriminator is utilized to reconstruct a high-resolution clear fluorescent image from the fluorescent image with low resolution. Thus, the detailed information of CTC cells can be enhanced, and more accurate input is provided for subsequent feature extraction and classification.
Wherein, the resolution enhancement module 120 is configured to: input the fluorescence image of the detected blood sample into the resolution enhancer based on the antagonism generation network, so as to generate the sharpened fluorescence image through deconvolution coding by the resolution enhancer based on the antagonism generation network.
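By way of a non-limiting illustration, the stride-2 transposed convolution (deconvolution) at the heart of such a generator's upsampling path may be sketched as follows; the 2×2 spreading kernel and the all-ones input are hypothetical stand-ins, not the actual trained network of this application:

```python
import numpy as np

def transposed_conv2d(x, kernel, stride=2):
    """Stride-2 transposed convolution (deconvolution) of a single-channel
    image: each input pixel 'stamps' a scaled copy of the kernel onto the
    upsampled output grid."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((h * stride + kh - stride, w * stride + kw - stride))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * kernel
    return out

low_res = np.ones((4, 4))        # stand-in for a low-resolution fluorescence image
kernel = np.full((2, 2), 0.25)   # hypothetical bilinear-like spreading kernel
high_res = transposed_conv2d(low_res, kernel, stride=2)
print(high_res.shape)            # (8, 8): spatial resolution doubled
```

With stride 2 and a 2×2 kernel, the stamped copies do not overlap, so each output pixel simply receives one quarter of the corresponding input pixel's value; a trained generator would instead learn kernels that reconstruct high-frequency detail from the low-resolution input.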
Specifically, in the embodiment of the present application, the shallow feature extraction module 130 is configured to pass the sharpened fluorescence image through a first convolutional neural network model that is a shallow feature extractor to obtain a fluorescence representation shallow feature map. The clarified fluorescence image is then passed through a first convolutional neural network model as a shallow feature extractor to obtain a fluorescence-rendered shallow feature map. Here, the shallow feature extractor is a model for extracting shallow features, such as edges, corner points, textures, etc., from an image, which may reflect the basic structure and shape of cells in the image, and may be used to distinguish cell types. In particular, the first convolutional neural network model is composed of a plurality of convolutional layers and a pooling layer, local features can be extracted from an image, and the dimension and spatial complexity of the features can be reduced through the pooling layer.
Wherein, the shallow feature extraction module 130 is configured to: and respectively carrying out convolution processing, pooling processing and nonlinear activation processing on the clear fluorescent image in forward transmission of layers by using each layer of the first convolution neural network model serving as the shallow feature extractor to extract the fluorescent display shallow feature map from the shallow layer of the first convolution neural network model serving as the shallow feature extractor.
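The per-layer processing described above (convolution, nonlinear activation, pooling) can be sketched for a single channel as follows; the input image and the vertical-gradient kernel are illustrative assumptions, not parameters of the actual shallow feature extractor:

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2-D cross-correlation of a single-channel image x with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Nonlinear activation: keep positive responses, zero out the rest."""
    return np.maximum(x, 0.0)

def max_pool2(x):
    """2x2 max pooling with stride 2 (halves each spatial dimension)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# One shallow layer: vertical-gradient (edge) filter -> ReLU -> pooling
image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[-1.0, -1.0], [1.0, 1.0]])
feat = max_pool2(relu(conv2d_valid(image, edge_kernel)))
print(feat.shape)   # (2, 2): dimensionality reduced by pooling
```

Stacking several such layers yields shallow feature maps of edges, corners and textures, while the pooling reduces the spatial complexity, as the module description states.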
The convolutional neural network (Convolutional Neural Network, CNN) is an artificial neural network and has wide application in the fields of image recognition and the like. The convolutional neural network may include an input layer, a hidden layer, and an output layer, where the hidden layer may include a convolutional layer, a pooling layer, an activation layer, a full connection layer, etc., where the previous layer performs a corresponding operation according to input data, outputs an operation result to the next layer, and obtains a final result after the input initial data is subjected to a multi-layer operation.
The convolutional neural network model has excellent performance in the aspect of image local feature extraction by taking a convolutional kernel as a feature filtering factor, and has stronger feature extraction generalization capability and fitting capability compared with the traditional image feature extraction algorithm based on statistics or feature engineering.
Specifically, in the embodiment of the present application, the spatial enhancement module 140 is configured to pass the fluorescence presentation shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence presentation shallow feature map. In the technical scheme of the application, the fluorescence presentation shallow feature map is used for enhancing the extraction and analysis of the spatial feature information of the CTC through a spatial attention module so as to obtain the spatially enhanced fluorescence presentation shallow feature map. Wherein the spatial attention module may help the network better understand and process spatial structure information of the image. By using the spatial attention module, the network can adaptively adjust the weight of the feature map according to the importance of different areas, so that the attention and the processing of a specific area are enhanced, and the spatial distribution and morphological detail information of CTC can be better captured.
FIG. 3 is a block diagram of the spatial enhancement module in the artificial intelligence based circulating tumor cell detector according to an embodiment of the present application, and as shown in FIG. 3, the spatial enhancement module 140 includes: a shallow convolutional encoding unit 141, configured to convolutionally encode the fluorescence-represented shallow feature map by using a convolutional encoding portion of the spatial attention module to obtain a shallow convolutional feature map; a shallow spatial attention unit 142 for inputting the shallow convolution feature map into a spatial attention portion of the spatial attention module to obtain a shallow spatial attention map; a shallow activating unit 143, configured to obtain a shallow spatial attention profile by using a Softmax activating function in the shallow spatial attention map; and a shallow feature map calculating unit 144, configured to calculate a position-wise point multiplication of the shallow spatial attention feature map and the shallow convolution feature map to obtain the spatially enhanced fluorescence presentation shallow feature map.
The attention mechanism is a data processing method in machine learning, widely applied to machine learning tasks such as natural language processing, image recognition and speech recognition. On the one hand, the attention mechanism lets the network automatically learn which places in a picture or text sequence need attention; on the other hand, it generates a mask through the operations of the neural network, where the values on the mask weight the features. In general, the spatial attention mechanism calculates the average value over the different channels of the same pixel, then obtains spatial features through some convolution and up-sampling operations, and the pixels of each layer of the spatial features are given different weights.
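A minimal sketch of this channel-mean-then-weight form of spatial attention is given below; it is a simplified stand-in in plain NumPy that omits the convolutional encoding and learned parameters of the module in this embodiment, keeping only the channel averaging, the Softmax activation, and the position-wise multiplication:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(feat):
    """feat: (C, H, W) feature map. Channel-wise mean -> softmax over all
    H*W positions -> per-position weights broadcast over the channels."""
    c, h, w = feat.shape
    score = feat.mean(axis=0)                     # (H, W) mean over channels
    attn = softmax(score.ravel()).reshape(h, w)   # spatial weights sum to 1
    return feat * attn[None, :, :], attn

feat = np.zeros((2, 2, 2))
feat[:, 0, 0] = 4.0   # one 'bright' spatial position, e.g. a CTC candidate
enhanced, attn = spatial_attention(feat)
print(attn.argmax())   # 0: position (0, 0) receives the largest weight
```

The bright position is up-weighted relative to the background, mirroring how the module highlights the spatial distribution and morphological detail of CTCs.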
Specifically, in the embodiment of the present application, the deep feature extraction module 150 is configured to pass the spatially enhanced fluorescence presentation shallow feature map through a second convolutional neural network model that is a deep feature extractor to obtain a fluorescence presentation deep feature map. As described above, the spatially enhanced fluorescence presentation shallow feature map is weighted in the spatial dimension by the spatial attention module, thereby highlighting the position and shape information of the CTC cells. However, these shallow feature maps contain only local detail information, and do not reflect the semantic information and deep implicit feature information of CTC cells. In the technical scheme of the application, the space-enhanced fluorescence presentation shallow feature map is extracted and represented by a second convolutional neural network model serving as a deep feature extractor, so that a fluorescence presentation deep feature map is obtained. That is, more complex and deep processing and extraction of features is possible through the second convolutional neural network model.
Wherein, the deep feature extraction module 150 is configured to: and respectively carrying out convolution processing, pooling processing and nonlinear activation processing on the space enhanced fluorescence presentation shallow layer characteristic map in forward transfer of layers by using each layer of the second convolution neural network model serving as the deep layer characteristic extractor so as to extract the fluorescence presentation deep layer characteristic map from the deep layer of the second convolution neural network model serving as the deep layer characteristic extractor.
Specifically, in the embodiment of the present application, the channel enhancement module 160 is configured to pass the fluorescence presentation deep feature map through a channel attention module to obtain a channel enhanced fluorescence presentation deep feature map. And then, the fluorescence presentation deep characteristic map passes through a channel attention module to obtain a channel enhanced fluorescence presentation deep characteristic map. Here, the channel attention module may increase the convolutional neural network feature expression capability. In particular, the channel attention module may assign different weights according to the importance of different channels, thereby enhancing useful features and suppressing useless features. In the technical scheme of the application, the channel attention module can be used for enhancing the characteristic characterization capability of the fluorescent representation deep characteristic map, namely, the salient characteristics of CTC cells on different channels can be effectively extracted.
FIG. 4 is a block diagram of the channel enhancement module in the artificial intelligence based circulating tumor cell detector according to an embodiment of the present application, as shown in FIG. 4, the channel enhancement module 160 includes: a deep convolution unit 161, configured to input the fluorescence presentation deep feature map into a multi-layer convolution layer of the channel attention module to obtain a deep convolution feature map; a deep global mean unit 162, configured to calculate a global mean of feature matrices of the deep convolutional feature map along a channel dimension to obtain a deep feature vector; a deep level activation unit 163, configured to input the deep level feature vector into the Sigmoid activation function to obtain a deep level attention weight vector; and a deep weighting unit 164, configured to weight each feature matrix of the deep convolution feature map along a channel dimension with feature values of each position in the deep attention weight vector as weights, so as to obtain the channel enhanced fluorescence presentation deep feature map.
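The squeeze-and-gate pattern of units 161–164 can be sketched as follows; this simplified stand-in applies the per-channel global mean and Sigmoid gate directly, omitting the learned multi-layer convolution of the actual channel attention module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """feat: (C, H, W). Global average per channel -> Sigmoid gate ->
    reweight each channel's feature matrix with its attention weight."""
    gap = feat.mean(axis=(1, 2))    # (C,) per-channel global mean
    weights = sigmoid(gap)          # (C,) attention weights in (0, 1)
    return feat * weights[:, None, None], weights

feat = np.stack([np.full((2, 2), 3.0),     # hypothetical informative channel
                 np.full((2, 2), -3.0)])   # hypothetical uninformative channel
out, w = channel_attention(feat)
print(w[0] > w[1])   # True: the informative channel is up-weighted
```

The channel with the stronger response is assigned a weight near 1 and the weak channel a weight near 0, i.e., useful features are enhanced and useless ones suppressed.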
It should be understood that the image features extracted by channel attention reflect the correlation and importance among feature channels. Therefore, using the convolutional neural network model with the channel attention mechanism for feature mining of the fluorescence presentation deep feature map enables the salient feature information of CTC cells on different channels to be effectively extracted, providing a more discriminative feature representation for the subsequent fusion and classification.
Specifically, in the embodiment of the present application, the feature fusion module 170 is configured to fuse the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map to obtain a classification feature map. In the technical scheme of the present application, the shallow feature map contains spatial information such as the shape, size and position of cells, while the deep feature map contains semantic information such as the type, state and function of the cells. The two maps are therefore fused, so as to obtain a richer and more complete feature expression, namely the classification feature map.
Wherein, the feature fusion module 170 is configured to: fusing the space enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map by the following fusion formula to obtain a classification feature map; wherein, the fusion formula is:
F = α·F₁ ⊕ β·F₂

wherein F represents the classification feature map, F₁ represents the spatially enhanced fluorescence presentation shallow feature map, F₂ represents the channel enhanced fluorescence presentation deep feature map, "⊕" represents the addition of the elements at the corresponding positions of the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map, and α and β represent weighting parameters for controlling the balance between the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map.
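As the fusion formula describes a weighted position-wise addition of the two feature maps, a minimal numeric sketch is (the weight values here are illustrative, not trained parameters):

```python
import numpy as np

def fuse(shallow, deep, alpha=0.5, beta=0.5):
    """Weighted position-wise addition of the spatially enhanced shallow map
    and the channel-enhanced deep map (assumed to share the same shape)."""
    assert shallow.shape == deep.shape
    return alpha * shallow + beta * deep

shallow = np.full((2, 4, 4), 2.0)   # stand-in shallow (spatial) features
deep = np.full((2, 4, 4), 6.0)      # stand-in deep (semantic) features
fused = fuse(shallow, deep, alpha=0.25, beta=0.75)
print(fused[0, 0, 0])   # 0.25*2 + 0.75*6 = 5.0
```

Adjusting α and β trades off how much the classification feature map relies on spatial detail versus deep semantics.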
Specifically, in the embodiment of the present application, the cell type classification module 180 is configured to pass the classification feature map through a classifier to obtain a classification result, where the classification result is used to represent a type tag of CTC cells. The classification feature map is then passed through a classifier to obtain classification results, which are used to represent the type tags of CTC cells. Among other things, CTC cell type tags can be determined according to research needs and clinical applications, e.g., can be categorized according to tumor in situ, tumor invasiveness, cell subtype, or cancer treatment response. Here, the classifier may automatically recognize the target class according to the input classification feature map. The classifier can improve the automation degree of the CTC cell detector, reduce human intervention and errors and improve the efficiency and accuracy of the CTC cell detector.
FIG. 5 is a block diagram of the cell type classification module in the artificial intelligence based circulating tumor cell detector according to an embodiment of the present application, as shown in FIG. 5, the cell type classification module 180 includes: a matrix developing unit 181, configured to develop the classification feature map into classification feature vectors according to row vectors or column vectors; a full-connection encoding unit 182, configured to perform full-connection encoding on the classification feature vector by using multiple full-connection layers of the classifier to obtain an encoded classification feature vector; and a classification unit 183, configured to pass the encoded classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
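The unroll-encode-classify pipeline of units 181–183 can be sketched as follows; the single fully connected layer, the random weights, and the three CTC type tags are illustrative assumptions (the actual classifier uses multiple trained fully connected layers):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(feature_map, W, b, labels):
    """Flatten the classification feature map into a vector (row-wise),
    apply one fully connected layer, and pick the softmax argmax label."""
    v = feature_map.ravel()            # row-vector unrolling of the map
    probs = softmax(W @ v + b)
    return labels[int(probs.argmax())], probs

rng = np.random.default_rng(0)
fmap = rng.standard_normal((2, 3, 3))          # stand-in classification map
num_classes, dim = 3, fmap.size
W = rng.standard_normal((num_classes, dim))    # untrained illustrative weights
b = np.zeros(num_classes)
labels = ["in-situ", "invasive", "non-CTC"]    # hypothetical CTC type tags
label, probs = classify(fmap, W, b, labels)
print(abs(probs.sum() - 1.0) < 1e-9)           # True: probabilities sum to 1
```

In training, the same softmax output would be compared with the true type tag via a cross-entropy loss, which is the classification loss function value mentioned below.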
Further, the artificial intelligence based circulating tumor cell detector further comprises a training module for training the resolution enhancer based on the countermeasure generation network, the first convolutional neural network model as a shallow feature extractor, the spatial attention module, the second convolutional neural network model as a deep feature extractor, the channel attention module and the classifier; FIG. 6 is a block diagram of the training module in the artificial intelligence based circulating tumor cell detector according to an embodiment of the present application, as shown in FIG. 6, the training module 190 includes: a training data acquisition unit 1901 for acquiring training data including a training fluorescence image of a blood sample to be detected and a true value of a CTC cell type tag; a training resolution enhancement unit 1902 for passing the training fluorescence image of the detected blood sample through the resolution enhancer based on the countermeasure generation network to obtain a training sharpened fluorescence image; a training shallow feature extraction unit 1903, configured to pass the training sharpened fluorescence image through the first convolutional neural network model as a shallow feature extractor to obtain a training fluorescence presentation shallow feature map; a training space enhancement unit 1904, configured to pass the training fluorescence presentation shallow feature map through the spatial attention module to obtain a training space enhanced fluorescence presentation shallow feature map; a training deep feature extraction unit 1905, configured to pass the training space enhanced fluorescence presentation shallow feature map through the second convolutional neural network model serving as a deep feature extractor to obtain a training fluorescence presentation deep feature map; a training channel enhancement unit 1906, configured to pass the training fluorescence presentation deep feature map through 
the channel attention module to obtain a training channel enhanced fluorescence presentation deep feature map; a training feature fusion unit 1907, configured to fuse the training space enhanced fluorescence presentation shallow feature map and the training channel enhanced fluorescence presentation deep feature map to obtain a training classification feature map; a feature redundancy optimization unit 1908, configured to perform feature redundancy optimization on the training classification feature map to obtain an optimized classification feature map; a classification loss unit 1909, configured to pass the optimized classification feature map through a classifier to obtain a classification loss function value; and a training unit 1910 for training the resolution enhancer based on the countermeasure generation network, the first convolutional neural network model as a shallow feature extractor, the spatial attention module, the second convolutional neural network model as a deep feature extractor, the channel attention module, and the classifier based on the classification loss function value and based on a propagation direction of gradient descent.
Here, when the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map are fused to obtain the classification feature map, it is considered that the former expresses the shallow image semantic features of the fluorescence image with an enhanced distribution along the spatial dimension, while the latter expresses the deep image semantic features with an enhanced distribution along the channel dimension. In order to fully utilize the image semantic features at different depths and the enhanced characterizations in different dimensions, the classification feature map is preferably obtained by directly cascading the two feature maps along the channel dimension. In this way, however, the classification feature map contains considerable redundant features, which impair the classification regression convergence of the classification feature map through the classifier and reduce the accuracy of the classification result obtained by the classifier.
Thus, during the training process, the applicant of the present application performs feature redundancy optimization based on low-cost bottleneck-mechanism stacking on the classification feature map, for example denoted F, to obtain an optimized classification feature map, for example denoted F′. This is specifically expressed as follows: feature redundancy optimization is performed on the training classification feature map by using the following optimization formula to obtain the optimized classification feature map; wherein, the optimization formula is:
wherein F represents the training classification feature map, F′ represents the optimized classification feature map, Conv(·) represents a single-layer convolution operation, ⊕, ⊖ and ⊗ respectively represent the position-wise addition, subtraction and multiplication of feature maps, and B1 and B2 are bias feature maps, wherein the initial bias feature maps B1 and B2 are different.
Here, the feature redundancy optimization based on low-cost bottleneck-mechanism stacking may use a low-cost bottleneck mechanism that stacks the multiply-add combinations of two low-cost transformation features to perform feature expansion, and may match the residual path by biasing the stacking channels with uniform values. In this way, hidden distribution information underlying the intrinsic features within the redundant features is revealed through low-cost operation transformations similar to those of basic residual modules, so that a more intrinsic expression of the features is obtained through a simple and effective convolution operation architecture. This optimizes the redundant feature expression of the classification feature map and improves the classification regression convergence of the classification feature map through the classifier, thereby improving the accuracy of the classification result obtained by the classifier.
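As a rough illustration only (the patent gives its optimization formula symbolically, and the concrete composition of operations below is an assumption), the following NumPy sketch combines two cheap single-layer transforms of the classification feature map by position-wise addition, subtraction and multiplication with distinct bias feature maps B1 and B2, stacked onto a residual path:

```python
import numpy as np

def conv1x1(x, w):
    # Single-layer 1x1 convolution across channels: x is (C, H, W), w is (C, C).
    return np.einsum('oc,chw->ohw', w, x)

def redundancy_optimize(F, w1, w2, B1, B2):
    # Hypothetical low-cost bottleneck stacking: two cheap transforms of F,
    # one biased by position-wise addition and one by position-wise
    # subtraction, multiplied together and added back to the residual path.
    t1 = conv1x1(F, w1) + B1   # position-wise addition of bias map B1
    t2 = conv1x1(F, w2) - B2   # position-wise subtraction of bias map B2
    return t1 * t2 + F         # position-wise multiplication, residual match

rng = np.random.default_rng(0)
C, H, W = 4, 3, 3
F = rng.standard_normal((C, H, W))
w1 = 0.1 * rng.standard_normal((C, C))
w2 = 0.1 * rng.standard_normal((C, C))
B1 = np.full((C, H, W), 0.5)    # uniform-valued bias channels,
B2 = np.full((C, H, W), -0.5)   # with different initial values
F_opt = redundancy_optimize(F, w1, w2, B1, B2)
print(F_opt.shape)  # (4, 3, 3)
```

The optimized map keeps the shape of the input classification feature map, so it can be passed to the classifier unchanged.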
In summary, the artificial intelligence based circulating tumor cell detector 100 according to embodiments of the present application has been illustrated, which acquires a fluorescence image of the detected blood sample, extracts implicit features from the fluorescence image based on a deep-learning image processing technique, and adopts classification processing to realize automatic classification of the CTC cell types. In this way, the type labels of the CTC cells can be intelligently divided, and the identification efficiency of the CTC cells is effectively improved.
In one embodiment of the present application, fig. 7 is a flow chart of an artificial intelligence based method for detecting circulating tumor cells according to an embodiment of the present application. As shown in fig. 7, the artificial intelligence based method for detecting circulating tumor cells according to an embodiment of the present application includes: 210, acquiring a fluorescence image of a detected blood sample; 220, passing the fluorescence image of the detected blood sample through a resolution enhancer based on a generative adversarial network to obtain a sharpened fluorescence image; 230, passing the sharpened fluorescence image through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence presentation shallow feature map; 240, passing the fluorescence presentation shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence presentation shallow feature map; 250, passing the spatially enhanced fluorescence presentation shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence presentation deep feature map; 260, passing the fluorescence presentation deep feature map through a channel attention module to obtain a channel enhanced fluorescence presentation deep feature map; 270, fusing the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map to obtain a classification feature map; and, 280, passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is used for representing the type label of the CTC cells.
Fig. 8 is a schematic diagram of a system architecture of the artificial intelligence based method for detecting circulating tumor cells according to an embodiment of the present application. As shown in fig. 8, in this system architecture, first, a fluorescence image of the detected blood sample is acquired; the fluorescence image of the detected blood sample is then passed through a resolution enhancer based on a generative adversarial network to obtain a sharpened fluorescence image; the sharpened fluorescence image is passed through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence presentation shallow feature map; the fluorescence presentation shallow feature map is passed through a spatial attention module to obtain a spatially enhanced fluorescence presentation shallow feature map; the spatially enhanced fluorescence presentation shallow feature map is passed through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence presentation deep feature map; the fluorescence presentation deep feature map is passed through a channel attention module to obtain a channel enhanced fluorescence presentation deep feature map; the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map are then fused to obtain a classification feature map; and finally, the classification feature map is passed through a classifier to obtain a classification result, wherein the classification result is used for representing the type label of the CTC cells.
In a specific example, in the above-described artificial intelligence based circulating tumor cell detection method, passing the fluorescence image of the detected blood sample through a resolution enhancer based on a generative adversarial network to obtain a sharpened fluorescence image includes: inputting the fluorescence image of the detected blood sample into the resolution enhancer based on the generative adversarial network, so as to generate the sharpened fluorescence image through deconvolution encoding performed by the resolution enhancer.
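The deconvolution ("transposed convolution") at the heart of such upsampling can be sketched for a single channel in NumPy. The 2×2 kernel, stride 2 and kernel values below are illustrative assumptions; the patent's trained GAN generator is not specified at this level of detail:

```python
import numpy as np

def transposed_conv2d(x, k, stride=2):
    # Minimal single-channel transposed convolution: each input pixel
    # scatters a weighted copy of the kernel into the (larger) output,
    # which upsamples the image by `stride`.
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H * stride + kh - stride, W * stride + kw - stride))
    for i in range(H):
        for j in range(W):
            out[i*stride:i*stride+kh, j*stride:j*stride+kw] += x[i, j] * k
    return out

img = np.arange(4.0).reshape(2, 2)   # tiny stand-in "fluorescence image"
kernel = np.ones((2, 2))             # learned weights in a real enhancer
up = transposed_conv2d(img, kernel)  # 2x upsampled output
print(up.shape)  # (4, 4)
```

In an actual GAN-based enhancer these deconvolution layers form the generator, trained jointly against a discriminator; here only the spatial up-scaling mechanism is shown.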
In a specific example, in the above artificial intelligence based method for detecting circulating tumor cells, passing the sharpened fluorescence image through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence presentation shallow feature map includes: performing convolution processing, pooling processing and nonlinear activation processing on the sharpened fluorescence image in the forward pass through the layers of the first convolutional neural network model serving as the shallow feature extractor, so as to extract the fluorescence presentation shallow feature map from a shallow layer of the first convolutional neural network model serving as the shallow feature extractor.
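One such layer of the forward pass, convolution followed by pooling and a nonlinear activation, can be illustrated with a minimal single-channel NumPy sketch (kernel size and values are illustrative, not those of the trained model):

```python
import numpy as np

def conv2d_valid(x, k):
    # Naive valid-mode 2D convolution (single channel), for illustration.
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def max_pool2(x):
    # 2x2 max pooling with stride 2.
    H, W = x.shape
    return x[:H//2*2, :W//2*2].reshape(H//2, 2, W//2, 2).max(axis=(1, 3))

def relu(x):
    # Nonlinear activation processing.
    return np.maximum(x, 0.0)

# One "layer": convolution -> pooling -> nonlinear activation.
img = np.random.default_rng(1).standard_normal((8, 8))
feat = relu(max_pool2(conv2d_valid(img, np.ones((3, 3)) / 9.0)))
print(feat.shape)  # (3, 3)
```

A shallow feature map is simply the output taken after only a few such layers, before the representation becomes highly abstract.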
In a specific example, in the above artificial intelligence based method for detecting circulating tumor cells, passing the fluorescence presentation shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence presentation shallow feature map includes: convolutionally encoding the fluorescence presentation shallow feature map by using the convolutional encoding part of the spatial attention module to obtain a shallow convolution feature map; inputting the shallow convolution feature map into the spatial attention part of the spatial attention module to obtain a shallow spatial attention map; passing the shallow spatial attention map through a Softmax activation function to obtain a shallow spatial attention feature map; and calculating the position-wise point multiplication of the shallow spatial attention feature map and the shallow convolution feature map to obtain the spatially enhanced fluorescence presentation shallow feature map.
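A minimal NumPy sketch of this spatial attention idea follows; it assumes the convolutional encoding has already produced the (C, H, W) map and, for brevity, collapses channels by a simple mean to form the spatial score map (the patent does not fix that reduction):

```python
import numpy as np

def spatial_attention(feat):
    # feat: (C, H, W) shallow convolution feature map.
    # Reduce channels to one spatial score map, normalise it with a
    # Softmax over all H*W positions, then re-weight every channel by
    # position-wise (point) multiplication.
    scores = feat.mean(axis=0)          # (H, W) spatial attention map
    e = np.exp(scores - scores.max())
    attn = e / e.sum()                  # Softmax over spatial positions
    return feat * attn[None, :, :]      # broadcast point-wise product

feat = np.random.default_rng(2).standard_normal((4, 5, 5))
out = spatial_attention(feat)
print(out.shape)  # (4, 5, 5)
```

Positions with high attention scores are amplified across all channels, which is exactly the spatial-dimension enhancement the module is meant to provide.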
In a specific example, in the above artificial intelligence based method for detecting circulating tumor cells, passing the spatially enhanced fluorescence presentation shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence presentation deep feature map includes: performing convolution processing, pooling processing and nonlinear activation processing on the spatially enhanced fluorescence presentation shallow feature map in the forward pass through the layers of the second convolutional neural network model serving as the deep feature extractor, so as to extract the fluorescence presentation deep feature map from a deep layer of the second convolutional neural network model serving as the deep feature extractor.
In a specific example, in the above artificial intelligence based method for detecting circulating tumor cells, passing the fluorescence presentation deep feature map through a channel attention module to obtain a channel enhanced fluorescence presentation deep feature map includes: inputting the fluorescence presentation deep feature map into the multi-layer convolution layers of the channel attention module to obtain a deep convolution feature map; calculating the global mean value of each feature matrix of the deep convolution feature map along the channel dimension to obtain a deep feature vector; inputting the deep feature vector into a Sigmoid activation function to obtain a deep attention weight vector; and weighting each feature matrix of the deep convolution feature map along the channel dimension by taking the feature value of each position in the deep attention weight vector as a weight, so as to obtain the channel enhanced fluorescence presentation deep feature map.
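These steps, global channel means, Sigmoid weights, and channel-wise re-weighting, can be sketched in a few lines of NumPy (a squeeze-and-excitation-style sketch; the module's multi-layer convolution stage is assumed already applied):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W) deep convolution feature map.
    # Global mean of each channel's feature matrix -> deep feature vector;
    # Sigmoid turns it into an attention weight vector; each channel's
    # feature matrix is then scaled by its weight.
    v = feat.mean(axis=(1, 2))       # (C,) deep feature vector
    w = sigmoid(v)                   # (C,) deep attention weight vector
    return feat * w[:, None, None]   # weight each feature matrix

feat = np.random.default_rng(3).standard_normal((6, 4, 4))
out = channel_attention(feat)
print(out.shape)  # (6, 4, 4)
```

Since each Sigmoid weight lies in (0, 1), channels with small global responses are attenuated while strongly responding channels are preserved, the channel-dimension enhancement described above.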
In a specific example, in the above artificial intelligence-based method for detecting circulating tumor cells, fusing the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map to obtain the classification feature map includes: fusing the space enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map by the following fusion formula to obtain a classification feature map; wherein, the fusion formula is:
F = αF1 ⊕ βF2
wherein F represents the classification feature map, F1 represents the spatially enhanced fluorescence presentation shallow feature map, F2 represents the channel enhanced fluorescence presentation deep feature map, "⊕" represents the addition of the elements at the corresponding positions of the two feature maps, and α and β represent weighting parameters for controlling the balance between the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map.
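Assuming the weighting parameters are scalars (the patent leaves their values open), the weighted position-wise fusion can be written directly:

```python
import numpy as np

# Weighted position-wise fusion of the two enhanced feature maps:
# F = alpha * F_shallow (+) beta * F_deep, with illustrative alpha/beta.
alpha, beta = 0.6, 0.4
F_shallow = np.ones((2, 3, 3))       # spatially enhanced shallow map
F_deep = 2.0 * np.ones((2, 3, 3))    # channel enhanced deep map
F = alpha * F_shallow + beta * F_deep
print(F[0, 0, 0])  # 1.4
```

Both inputs must share a shape for element-wise addition; in practice alpha and beta would be tuned (or learned) to balance shallow detail against deep semantics.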
In a specific example, in the above artificial intelligence based method for detecting circulating tumor cells, passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is used for representing the type label of the CTC cells, includes: expanding the classification feature map into a classification feature vector along a row vector or a column vector; performing fully-connected encoding on the classification feature vector by using the plurality of fully-connected layers of the classifier to obtain an encoded classification feature vector; and passing the encoded classification feature vector through the Softmax classification function of the classifier to obtain the classification result.
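The classifier head can be sketched as: unfold the map into a vector, apply a fully-connected layer (the patent uses several; one suffices to illustrate), then Softmax. The weights and class count below are illustrative:

```python
import numpy as np

def classify(feat_map, W_fc, b_fc):
    # Unfold the classification feature map row-wise into a vector,
    # apply one fully-connected encoding layer, then Softmax to get
    # a probability over the CTC type labels.
    v = feat_map.reshape(-1)        # expand along row vectors
    logits = W_fc @ v + b_fc        # fully-connected encoding
    e = np.exp(logits - logits.max())
    return e / e.sum()              # Softmax classification function

rng = np.random.default_rng(4)
feat_map = rng.standard_normal((2, 3, 3))   # (C, H, W) classification map
n_classes = 3                               # illustrative label count
W_fc = 0.1 * rng.standard_normal((n_classes, feat_map.size))
probs = classify(feat_map, W_fc, np.zeros(n_classes))
print(probs.sum())  # ~1.0
```

The predicted type label is then simply the index of the largest probability, e.g. `int(np.argmax(probs))`.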
In a specific example, the above artificial intelligence based circulating tumor cell detection method further includes training the resolution enhancer based on the generative adversarial network, the first convolutional neural network model serving as the shallow feature extractor, the spatial attention module, the second convolutional neural network model serving as the deep feature extractor, the channel attention module and the classifier; wherein the training includes: acquiring training data, wherein the training data includes training fluorescence images of the detected blood sample and the true values of the type labels of the CTC cells; passing the training fluorescence image of the detected blood sample through the resolution enhancer based on the generative adversarial network to obtain a training sharpened fluorescence image; passing the training sharpened fluorescence image through the first convolutional neural network model serving as the shallow feature extractor to obtain a training fluorescence presentation shallow feature map; passing the training fluorescence presentation shallow feature map through the spatial attention module to obtain a training spatially enhanced fluorescence presentation shallow feature map; passing the training spatially enhanced fluorescence presentation shallow feature map through the second convolutional neural network model serving as the deep feature extractor to obtain a training fluorescence presentation deep feature map; passing the training fluorescence presentation deep feature map through the channel attention module to obtain a training channel enhanced fluorescence presentation deep feature map; fusing the training spatially enhanced fluorescence presentation shallow feature map and the training channel enhanced fluorescence presentation deep feature map to obtain a training classification feature map; performing feature redundancy optimization on the training classification feature map to obtain an optimized classification feature map; passing the optimized classification feature map through the classifier to obtain a classification loss function value; and training the resolution enhancer based on the generative adversarial network, the first convolutional neural network model serving as the shallow feature extractor, the spatial attention module, the second convolutional neural network model serving as the deep feature extractor, the channel attention module and the classifier based on the classification loss function value, with backpropagation following the direction of gradient descent.
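The core of this training step, computing a classification (cross-entropy) loss and updating parameters along the gradient-descent direction, can be sketched with a bare linear classifier standing in for the full pipeline (all shapes and the learning rate are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(W, x, y_true, lr=0.1):
    # One gradient-descent update against the classification loss:
    # forward pass -> cross-entropy loss -> analytic gradient -> update.
    p = softmax(W @ x)
    loss = -np.log(p[y_true])                        # classification loss value
    grad = np.outer(p - np.eye(len(p))[y_true], x)   # dL/dW for softmax + CE
    return W - lr * grad, loss

rng = np.random.default_rng(5)
W = 0.1 * rng.standard_normal((3, 8))   # trainable parameters
x = rng.standard_normal(8)              # stand-in classification feature vector
losses = []
for _ in range(20):
    W, loss = train_step(W, x, y_true=1)
    losses.append(loss)
print(losses[-1] < losses[0])  # True: the loss decreases over iterations
```

In the full detector the same loss gradient is backpropagated through the classifier, attention modules, both convolutional extractors and the resolution enhancer jointly.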
In a specific example, in the above artificial intelligence based method for detecting circulating tumor cells, performing feature redundancy optimization on the training classification feature map to obtain an optimized classification feature map, including: performing feature redundancy optimization on the training classification feature map by using the following optimization formula to obtain an optimized classification feature map; wherein, the optimization formula is:
wherein F represents the training classification feature map, F′ represents the optimized classification feature map, Conv(·) represents a single-layer convolution operation, ⊕, ⊖ and ⊗ respectively represent the position-wise addition, subtraction and multiplication of feature maps, and B1 and B2 are bias feature maps, wherein the initial bias feature maps B1 and B2 are different.
It will be appreciated by those skilled in the art that the specific operation of the respective steps in the above-described artificial intelligence-based circulating tumor cell detection method has been described in detail in the above description of the artificial intelligence-based circulating tumor cell detector with reference to fig. 1 to 6, and thus, repetitive descriptions thereof will be omitted.
The present application also provides a computer program product comprising instructions which, when executed, cause an apparatus to perform operations corresponding to the above-described methods.
In one embodiment of the present application, there is also provided a computer readable storage medium storing a computer program for executing the above-described method.
It should be appreciated that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Methods, systems, and computer program products of embodiments of the present application are described in terms of flow diagrams and/or block diagrams. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including but not limited to," and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly dictates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to."
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent to the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it is further noted that relational terms such as first and second are used herein solely to distinguish one entity or operation from another entity or operation, without necessarily requiring or implying any actual such relationship or order between such entities or operations. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal device comprising the element.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.
Claims (10)
1. An artificial intelligence based circulating tumor cell detector, comprising:
the data acquisition module is used for acquiring a fluorescence image of the detected blood sample;
a resolution enhancement module for passing the fluorescence image of the detected blood sample through a resolution enhancer based on a generative adversarial network to obtain a sharpened fluorescence image;
a shallow feature extraction module for passing the sharpened fluorescence image through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence presentation shallow feature map;
a spatial enhancement module for passing the fluorescence presentation shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence presentation shallow feature map;
a deep feature extraction module for passing the spatially enhanced fluorescence presentation shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence presentation deep feature map;
a channel enhancement module for passing the fluorescence presentation deep feature map through a channel attention module to obtain a channel enhanced fluorescence presentation deep feature map;
a feature fusion module for fusing the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map to obtain a classification feature map; and
a cell type division module for passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is used for representing the type label of the CTC cells.
2. The artificial intelligence based circulating tumor cell detector of claim 1, wherein the shallow feature extraction module is configured to: perform convolution processing, pooling processing and nonlinear activation processing on the sharpened fluorescence image in the forward pass through the layers of the first convolutional neural network model serving as the shallow feature extractor, so as to extract the fluorescence presentation shallow feature map from a shallow layer of the first convolutional neural network model serving as the shallow feature extractor.
3. The artificial intelligence based circulating tumor cell detector of claim 2, wherein the spatial enhancement module comprises:
the shallow convolutional coding unit is used for performing convolutional coding on the fluorescence presentation shallow characteristic map by using a convolutional coding part of the spatial attention module so as to obtain a shallow convolutional characteristic map;
a shallow spatial attention unit for inputting the shallow convolution feature map into a spatial attention portion of the spatial attention module to obtain a shallow spatial attention map;
a shallow activation unit, configured to pass the shallow spatial attention map through a Softmax activation function to obtain a shallow spatial attention feature map; and
and the shallow feature map calculation unit is used for calculating the position-based point multiplication of the shallow space attention feature map and the shallow convolution feature map to obtain the space enhanced fluorescence display shallow feature map.
4. The artificial intelligence based circulating tumor cell detector of claim 3, wherein the deep feature extraction module is configured to: perform convolution processing, pooling processing and nonlinear activation processing on the spatially enhanced fluorescence presentation shallow feature map in the forward pass through the layers of the second convolutional neural network model serving as the deep feature extractor, so as to extract the fluorescence presentation deep feature map from a deep layer of the second convolutional neural network model serving as the deep feature extractor.
5. The artificial intelligence based circulating tumor cell detector of claim 4, wherein the channel enhancement module comprises:
the deep convolution unit is used for inputting the fluorescence presentation deep feature map into the multi-layer convolution layers of the channel attention module to obtain a deep convolution feature map;
The deep global mean unit is used for calculating global mean values of all feature matrixes of the deep convolution feature graphs along the channel dimension to obtain deep feature vectors;
a deep activation unit, configured to input the deep feature vector into a Sigmoid activation function to obtain a deep attention weight vector; and
and the deep weighting unit is used for respectively weighting each characteristic matrix of the deep convolution characteristic map along the channel dimension by taking the characteristic value of each position in the deep attention weight vector as a weight so as to obtain the channel enhanced fluorescence presentation deep characteristic map.
6. The artificial intelligence based circulating tumor cell detector of claim 5, wherein the feature fusion module is configured to: fuse the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map by the following fusion formula to obtain the classification feature map;
wherein, the fusion formula is:
F = αF1 ⊕ βF2
wherein F represents the classification feature map, F1 represents the spatially enhanced fluorescence presentation shallow feature map, F2 represents the channel enhanced fluorescence presentation deep feature map, "⊕" represents the addition of the elements at the corresponding positions of the two feature maps, and α and β represent weighting parameters for controlling the balance between the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map.
7. The artificial intelligence based circulating tumor cell detector of claim 6, wherein the cell type classification module comprises:
the matrix unfolding unit is used for unfolding the classification characteristic graph into a classification characteristic vector according to a row vector or a column vector;
the full-connection coding unit is used for carrying out full-connection coding on the classification characteristic vectors by using a plurality of full-connection layers of the classifier so as to obtain coded classification characteristic vectors; and
and the classification unit is used for passing the coding classification feature vector through a Softmax classification function of the classifier to obtain the classification result.
8. The artificial intelligence based circulating tumor cell detector of claim 7, further comprising a training module for training the resolution enhancer based on the generative adversarial network, the first convolutional neural network model serving as the shallow feature extractor, the spatial attention module, the second convolutional neural network model serving as the deep feature extractor, the channel attention module and the classifier;
Wherein, training module includes:
the training data acquisition unit is used for acquiring training data, wherein the training data comprises training fluorescence images of the detected blood sample and a true value of a type label of the CTC cell;
a training resolution enhancement unit for passing the training fluorescence image of the detected blood sample through the resolution enhancer based on the generative adversarial network to obtain a training sharpened fluorescence image;
the training shallow feature extraction unit is used for passing the training sharpened fluorescence image through the first convolutional neural network model serving as the shallow feature extractor to obtain a training fluorescence presentation shallow feature map;
the training spatial enhancement unit is used for passing the training fluorescence presentation shallow feature map through the spatial attention module to obtain a training spatially enhanced fluorescence presentation shallow feature map;
the training deep feature extraction unit is used for passing the training spatially enhanced fluorescence presentation shallow feature map through the second convolutional neural network model serving as the deep feature extractor to obtain a training fluorescence presentation deep feature map;
the training channel enhancement unit is used for passing the training fluorescence presentation deep feature map through the channel attention module to obtain a training channel enhanced fluorescence presentation deep feature map;
the training feature fusion unit is used for fusing the training spatially enhanced fluorescence presentation shallow feature map and the training channel enhanced fluorescence presentation deep feature map to obtain a training classification feature map;
the feature redundancy optimization unit is used for performing feature redundancy optimization on the training classification feature map to obtain an optimized classification feature map;
the classification loss unit is used for passing the optimized classification feature map through the classifier to obtain a classification loss function value; and
the training unit is used for training the resolution enhancer based on the generative adversarial network, the first convolutional neural network model serving as the shallow feature extractor, the spatial attention module, the second convolutional neural network model serving as the deep feature extractor, the channel attention module, and the classifier based on the classification loss function value, with the parameters updated by back propagation of gradient descent.
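As an illustration of how a classification loss function value drives the gradient-descent training that claim 8 names, the sketch below updates only a linear classifier head with softmax cross-entropy; the GAN-based enhancer, the two CNNs, and the attention modules are deliberately omitted, and in the full system they would receive gradients through the same back propagation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(w, feat_vec, label, lr=0.1):
    """One gradient-descent update of a linear classifier head.
    The loss is softmax cross-entropy, standing in for the claim's
    'classification loss function value'."""
    probs = softmax(w @ feat_vec)
    loss = -np.log(probs[label])
    grad_logits = probs.copy()
    grad_logits[label] -= 1.0            # d(loss)/d(logits) for cross-entropy
    w -= lr * np.outer(grad_logits, feat_vec)   # in-place gradient-descent step
    return loss

rng = np.random.default_rng(1)
w = rng.standard_normal((2, 16)) * 0.1   # toy 2-class head on a 16-dim feature vector
x = rng.standard_normal(16)
losses = [train_step(w, x, label=1) for _ in range(20)]
```

Because the single-sample problem is convex and the step size is small, the recorded loss values decrease over the 20 updates.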
9. The artificial intelligence based circulating tumor cell detector of claim 8, wherein the feature redundancy optimization unit is configured to: performing feature redundancy optimization on the training classification feature map by using the following optimization formula to obtain an optimized classification feature map;
Wherein, the optimization formula is:
wherein F denotes the training classification feature map, F' denotes the optimized classification feature map, Conv(·) denotes a single-layer convolution operation, "⊕", "⊖" and "⊗" respectively denote the position-by-position addition, subtraction and multiplication of the feature maps, and B1 and B2 are bias feature maps, wherein the initial bias feature maps B1 and B2 are different.
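The optimization formula itself appears only as an image in the original publication and is not reproduced in this text; the sketch below therefore shows only the primitives claim 9 names (a single-layer convolution and the position-by-position ⊕, ⊖, ⊗ operations, plus two differently initialized bias feature maps) in NumPy, without asserting how the patent composes them:

```python
import numpy as np

def single_layer_conv(fmap, kernel):
    """Single-layer 2D convolution with 'same' zero padding, applied
    channel-wise. The 3x3 averaging kernel used below is illustrative,
    not the patent's learned kernel."""
    c, h, w = fmap.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(fmap, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(fmap)
    for i in range(h):
        for j in range(w):
            out[:, i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel, axis=(1, 2))
    return out

# position-by-position primitives named in the claim
add = lambda a, b: a + b        # ⊕
sub = lambda a, b: a - b        # ⊖
mul = lambda a, b: a * b        # ⊗

f = np.ones((1, 4, 4))          # toy training classification feature map
b1 = np.zeros_like(f)           # bias feature maps with different initial values
b2 = np.full_like(f, 0.5)
conv_f = single_layer_conv(f, np.full((3, 3), 1.0 / 9.0))
```

On the all-ones map, interior positions of `conv_f` stay at 1.0 while corner positions drop to 4/9, since zero padding contributes to the averaging window at the border.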
10. An artificial intelligence-based circulating tumor cell detection method, characterized by comprising the following steps:
acquiring a fluorescence image of a detected blood sample;
passing the fluorescence image of the detected blood sample through a resolution enhancer based on a generative adversarial network to obtain a sharpened fluorescence image;
passing the sharpened fluorescence image through a first convolutional neural network model serving as a shallow feature extractor to obtain a fluorescence presentation shallow feature map;
passing the fluorescence presentation shallow feature map through a spatial attention module to obtain a spatially enhanced fluorescence presentation shallow feature map;
passing the spatially enhanced fluorescence presentation shallow feature map through a second convolutional neural network model serving as a deep feature extractor to obtain a fluorescence presentation deep feature map;
passing the fluorescence presentation deep feature map through a channel attention module to obtain a channel enhanced fluorescence presentation deep feature map;
fusing the spatially enhanced fluorescence presentation shallow feature map and the channel enhanced fluorescence presentation deep feature map to obtain a classification feature map; and
passing the classification feature map through a classifier to obtain a classification result, wherein the classification result is used for representing the type label of the CTC cells.
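The steps of claim 10 can be sketched as a function composition; every module below is a stub (identity or a toy transform) assumed purely to illustrate the data flow, not the patent's trained networks:

```python
import numpy as np

rng = np.random.default_rng(2)

# stub modules: each stands in for a trained network in the claimed pipeline
resolution_enhancer = lambda img: img                          # GAN-based enhancer (stub)
shallow_extractor   = lambda img: np.stack([img, img * 0.5])   # 1st CNN -> (C, H, W) map
spatial_attention   = lambda f: f * 1.1                        # spatial attention (stub)
deep_extractor      = lambda f: f.mean(axis=0, keepdims=True)  # 2nd CNN (stub)
channel_attention   = lambda f: f * 0.9                        # channel attention (stub)

def detect(fluorescence_image, alpha=0.5, beta=0.5):
    sharpened = resolution_enhancer(fluorescence_image)
    shallow = spatial_attention(shallow_extractor(sharpened))
    deep = channel_attention(deep_extractor(shallow))
    # fuse: weighted position-wise addition (deep map broadcast over channels)
    cls_map = alpha * shallow + beta * deep
    # classifier stub: mean activation thresholded into one of two type labels
    return int(cls_map.mean() > 0.0)

image = np.abs(rng.standard_normal((8, 8)))   # toy fluorescence image
label = detect(image)
```

The toy all-positive image yields label 1 and its negation yields label 0; only the ordering of the stages mirrors the claim, not the behavior of the real modules.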
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310671408.0A CN116403213A (en) | 2023-06-08 | 2023-06-08 | Circulating tumor cell detector based on artificial intelligence and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116403213A true CN116403213A (en) | 2023-07-07 |
Family
ID=87016511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310671408.0A Pending CN116403213A (en) | 2023-06-08 | 2023-06-08 | Circulating tumor cell detector based on artificial intelligence and method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116403213A (en) |
2023-06-08: CN CN202310671408.0A patent/CN116403213A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116071229A (en) * | 2022-08-24 | 2023-05-05 | 中国矿业大学 | Image super-resolution reconstruction method for wearable helmet |
CN115791640A (en) * | 2023-02-06 | 2023-03-14 | 杭州华得森生物技术有限公司 | Tumor cell detection device and method based on spectroscopic spectrum |
CN116188584A (en) * | 2023-04-23 | 2023-05-30 | 成都睿瞳科技有限责任公司 | Method and system for identifying object polishing position based on image |
CN116189179A (en) * | 2023-04-28 | 2023-05-30 | 北京航空航天大学杭州创新研究院 | Circulating tumor cell scanning analysis equipment |
Non-Patent Citations (1)
Title |
---|
TONG Xiaozhong et al.: "Detection of typical small water-surface targets by fusing attention and multi-scale features", Chinese Journal of Scientific Instrument, pages 1 - 12 *
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116612472A (en) * | 2023-07-21 | 2023-08-18 | 北京航空航天大学杭州创新研究院 | Single-molecule immune array analyzer based on image and method thereof |
CN116630313A (en) * | 2023-07-21 | 2023-08-22 | 北京航空航天大学杭州创新研究院 | Fluorescence imaging detection system and method thereof |
CN116612472B (en) * | 2023-07-21 | 2023-09-19 | 北京航空航天大学杭州创新研究院 | Single-molecule immune array analyzer based on image and method thereof |
CN116630313B (en) * | 2023-07-21 | 2023-09-26 | 北京航空航天大学杭州创新研究院 | Fluorescence imaging detection system and method thereof |
CN116664961A (en) * | 2023-07-31 | 2023-08-29 | 东莞市将为防伪科技有限公司 | Intelligent identification method and system for anti-counterfeit label based on signal code |
CN116664961B (en) * | 2023-07-31 | 2023-12-05 | 东莞市将为防伪科技有限公司 | Intelligent identification method and system for anti-counterfeit label based on signal code |
CN116872233A (en) * | 2023-09-07 | 2023-10-13 | 泉州师范学院 | Campus inspection robot and control method thereof |
CN117522861A (en) * | 2023-12-26 | 2024-02-06 | 吉林大学 | Intelligent monitoring system and method for animal rotator cuff injury |
CN117522861B (en) * | 2023-12-26 | 2024-04-19 | 吉林大学 | Intelligent monitoring system and method for animal rotator cuff injury |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116403213A (en) | Circulating tumor cell detector based on artificial intelligence and method thereof | |
Isa et al. | Optimizing the hyperparameter tuning of YOLOv5 for underwater detection | |
CN113505792B (en) | Multi-scale semantic segmentation method and model for unbalanced remote sensing image | |
CN113780296A (en) | Remote sensing image semantic segmentation method and system based on multi-scale information fusion | |
CN112862774B (en) | Accurate segmentation method for remote sensing image building | |
CN110781744A (en) | Small-scale pedestrian detection method based on multi-level feature fusion | |
CN112149547A (en) | Remote sensing image water body identification based on image pyramid guidance and pixel pair matching | |
CN110826609B (en) | Double-current feature fusion image identification method based on reinforcement learning | |
CN115830471B (en) | Multi-scale feature fusion and alignment domain self-adaptive cloud detection method | |
CN112861931B (en) | Multi-level change detection method, system, medium and electronic device based on difference attention neural network | |
CN116612472B (en) | Single-molecule immune array analyzer based on image and method thereof | |
CN112084859A (en) | Building segmentation method based on dense boundary block and attention mechanism | |
CN115908772A (en) | Target detection method and system based on Transformer and fusion attention mechanism | |
CN116665176A (en) | Multi-task network road target detection method for vehicle automatic driving | |
CN115273154A (en) | Thermal infrared pedestrian detection method and system based on edge reconstruction and storage medium | |
CN116861262B (en) | Perception model training method and device, electronic equipment and storage medium | |
CN116287138B (en) | FISH-based cell detection system and method thereof | |
CN116977747A (en) | Small sample hyperspectral classification method based on multipath multi-scale feature twin network | |
CN110751061B (en) | SAR image recognition method, device, equipment and storage medium based on SAR network | |
CN116740362A (en) | Attention-based lightweight asymmetric scene semantic segmentation method and system | |
EP4235492A1 (en) | A computer-implemented method, data processing apparatus and computer program for object detection | |
CN113920311A (en) | Remote sensing image segmentation method and system based on edge auxiliary information | |
CN116821699B (en) | Perception model training method and device, electronic equipment and storage medium | |
Sasirekha et al. | Review on Deep Learning Algorithms for Object Detection | |
Ju et al. | Multiscale feature fusion network for automatic port segmentation from remote sensing images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||