CN116416248A - Intelligent analysis system and method based on fluorescence microscope - Google Patents

Intelligent analysis system and method based on fluorescence microscope

Info

Publication number
CN116416248A
CN116416248A (application CN202310671435.8A)
Authority
CN
China
Prior art keywords
detection
feature map
training
image
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310671435.8A
Other languages
Chinese (zh)
Inventor
张开山
赵丹
周韵斓
李超
郭志敏
饶浪晴
孔令武
田华
吴乐中
刘艳省
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HANGZHOU WATSON BIOTECH Inc
Original Assignee
HANGZHOU WATSON BIOTECH Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HANGZHOU WATSON BIOTECH Inc filed Critical HANGZHOU WATSON BIOTECH Inc
Priority to CN202310671435.8A
Publication of CN116416248A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • G06T2207/10061Microscopic image from scanning electron microscope
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application relates to the field of image analysis, and particularly discloses an intelligent analysis system and a method based on a fluorescence microscope.

Description

Intelligent analysis system and method based on fluorescence microscope
Technical Field
The present application relates to the field of image analysis, and more particularly, to an intelligent analysis system based on fluorescence microscopy and a method thereof.
Background
Circulating tumor cells (CTCs) are cancer cells shed from primary tumors or metastases into the blood circulation. They are important markers of tumor metastasis and recurrence, and the detection and analysis of CTCs in blood is of great importance for the early diagnosis and treatment of cancer.
CTC recognition based on fluorescence microscopy is a common approach to detecting CTCs: CTCs are distinguished from normal blood cells using fluorescently labeled specific antibodies, and the CTCs are then imaged and counted with a fluorescence microscope. In the prior art, however, background noise, such as interference from proteins, platelets and erythrocytes in plasma, reduces the signal-to-noise ratio and contrast of CTCs and makes their detection difficult and error-prone.
Therefore, an optimized fluorescence microscope-based intelligent analysis system is desired to perform background noise reduction, thereby improving the accuracy of the detection results.
Disclosure of Invention
The present application has been made in order to solve the above technical problems. Embodiments of the application provide a fluorescence microscope-based intelligent analysis system and method that use a deep-learning-based image detection technique to mine a full expression of the implicit CTC features in a detection image and then decode correspondingly on the basis of those features, thereby effectively reducing background noise in the image and improving the accuracy of the detection result.
According to one aspect of the present application, there is provided a fluorescence microscope-based intelligent analysis system, comprising:
the detection image acquisition module is used for acquiring detection images acquired by the fluorescence microscope;
the image feature extraction module is used for enabling the detection image to pass through a pyramid network-based encoder to obtain first to fifth detection feature graphs;
the feature fusion module is used for fusing the first detection feature map to the fifth detection feature map to obtain a multi-scale detection feature map; and
and the noise reduction image generation module is used for passing the multi-scale detection feature map through a decoder comprising a plurality of deconvolution layers, based on skip-level connections of the first to fifth detection feature maps, to obtain a noise-reduced detection image.
In the above intelligent analysis system based on fluorescence microscope, the image feature extraction module includes: a first encoding unit, configured to input the detection image into a first convolution module of the encoder to obtain the first detection feature map; a second encoding unit, configured to input the first detection feature map into a second convolution module of the encoder to obtain the second detection feature map; a third encoding unit, configured to input the second detection feature map into a third convolution module of the encoder to obtain the third detection feature map; a fourth encoding unit, configured to input the third detection feature map into a fourth convolution module of the encoder to obtain the fourth detection feature map; and a fifth encoding unit, configured to input the fourth detection feature map into a fifth convolution module of the encoder to obtain the fifth detection feature map.
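As a concrete illustration of this five-module chain, the following minimal PyTorch sketch wires five convolution modules in series, each output serving as one detection feature map. The kernel sizes, channel widths, and the conv-pool-activation composition of each module are assumptions for exposition, not the patented configuration.

```python
import torch
import torch.nn as nn

class ConvModule(nn.Module):
    """One encoder stage: convolution layer -> pooling layer -> activation layer."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)          # halves the spatial resolution
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.pool(self.conv(x)))

class PyramidEncoder(nn.Module):
    """Five chained modules; each module's output is one detection feature map."""
    def __init__(self, channels=(3, 64, 128, 256, 512, 1024)):
        super().__init__()
        self.stages = nn.ModuleList(
            ConvModule(channels[i], channels[i + 1]) for i in range(5)
        )

    def forward(self, image: torch.Tensor):
        feats, x = [], image
        for stage in self.stages:            # F1 feeds the second module, and so on
            x = stage(x)
            feats.append(x)
        return feats                          # [F1, F2, F3, F4, F5], shallow to deep

# Usage: a 512x512 detection image yields maps at 256, 128, 64, 32 and 16 pixels.
feature_maps = PyramidEncoder()(torch.randn(1, 3, 512, 512))
```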
In the above intelligent analysis system based on a fluorescence microscope, the noise reduction image generation module includes: a first deconvolution unit for inputting the multi-scale detection feature map into a first deconvolution layer of the decoder to obtain a first decoding feature map; and a first decoding fusion unit, configured to fuse the first decoding feature map and the fifth detection feature map to obtain a first fused decoding feature map as an input of a second deconvolution layer of the decoder.
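A minimal sketch of this first decoding step is given below. The shapes are illustrative only (chosen to match the toy encoder sketched above), and element-wise addition is assumed as the fusion operation, since the patent does not fix the fusion operator here.

```python
import torch
import torch.nn as nn

# Illustrative shapes only: a single spatial size is kept so that the first
# decoding feature map and F5 align for element-wise fusion.
deconv1 = nn.ConvTranspose2d(256, 1024, kernel_size=3, stride=1, padding=1)

multi_scale = torch.randn(1, 256, 16, 16)   # multi-scale detection feature map (assumed shape)
f5 = torch.randn(1, 1024, 16, 16)           # fifth detection feature map from the encoder

d1 = deconv1(multi_scale)                   # first decoding feature map: (1, 1024, 16, 16)
fused1 = d1 + f5                            # first fused decoding feature map (additive fusion assumed)
# fused1 then serves as the input of the second deconvolution layer of the decoder.
```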
The intelligent analysis system based on the fluorescence microscope further comprises a training module for training the pyramid-network-based encoder and the decoder comprising a plurality of deconvolution layers.
In the above-mentioned intelligent analysis system based on fluorescence microscope, the training module includes: a training data acquisition module, configured to acquire training data, wherein the training data includes a training detection image and a real image of the noise-reduced detection image; a training image feature extraction module, configured to pass the training detection image through the pyramid-network-based encoder to obtain training first to fifth detection feature maps; a training feature fusion module, configured to fuse the training first to fifth detection feature maps to obtain a training multi-scale detection feature map; a first deconvolution decoding module, configured to input the training multi-scale detection feature map into the first deconvolution layer of the decoder comprising a plurality of deconvolution layers to obtain a training first decoding feature map; a first fusion module, configured to fuse the training first decoding feature map and the training fifth detection feature map to obtain a training first fusion decoding feature map; a feature optimization module, configured to perform feature redundancy optimization based on low-cost bottleneck mechanism stacking on the training first fusion decoding feature map to obtain an optimized training first fusion decoding feature map as the input of the second deconvolution layer of the decoder; a training noise reduction image generation module, configured to pass the optimized training first fusion decoding feature map through the decoder comprising a plurality of deconvolution layers, based on skip-level connections of the training first to fourth detection feature maps, to obtain a training noise-reduced detection image; a mean square error calculation module, configured to calculate a mean square error value between the training noise-reduced detection image and the real image; and a model training module, configured to train the pyramid-network-based encoder and the decoder comprising a plurality of deconvolution layers, with the mean square error value as the loss function value, through backpropagation with gradient descent.
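In effect, this training module implements a standard supervised denoising loop: forward pass, mean square error against the real image, and gradient-descent backpropagation. A minimal sketch follows, assuming the encoder, fusion and decoder are wrapped in a single nn.Module and assuming Adam as the optimizer (the patent does not name one).

```python
import torch
import torch.nn as nn

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               noisy: torch.Tensor, clean: torch.Tensor) -> float:
    """One optimization step: MSE between the denoised output and the real image."""
    optimizer.zero_grad()
    denoised = model(noisy)                         # encoder -> fusion -> skip-connected decoder
    loss = nn.functional.mse_loss(denoised, clean)  # mean square error loss value
    loss.backward()                                 # backpropagation of the gradient
    optimizer.step()                                # gradient-descent parameter update
    return loss.item()

# Usage (model is any denoiser mapping an image to an image of the same shape):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = train_step(model, optimizer, noisy_batch, clean_batch)
```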
In the above intelligent analysis system based on fluorescence microscope, the feature optimization module is configured to perform feature redundancy optimization based on low-cost bottleneck mechanism stacking on the training first fusion decoding feature map, using an optimization formula of the following form, to obtain the optimized training first fusion decoding feature map:

[the three optimization equations appear only as images in the original publication]

wherein F is the training first fusion decoding feature map, Conv(·) represents a single-layer convolution operation, ⊕, ⊖ and ⊗ respectively represent the position-by-position addition, subtraction and multiplication of feature maps, B1 and B2 are bias feature maps, and F' is the optimized training first fusion decoding feature map.
According to another aspect of the present application, there is provided a fluorescence microscope-based intelligent analysis method, including:
acquiring a detection image acquired by a fluorescence microscope;
passing the detected image through a pyramid network-based encoder to obtain first through fifth detected feature maps;
fusing the first detection feature map to the fifth detection feature map to obtain a multi-scale detection feature map; and
and based on the skip-level connection of the first detection feature map to the fifth detection feature map, the multi-scale detection feature map is passed through a decoder comprising a plurality of deconvolution layers to obtain a noise-reduced detection image.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory having stored therein computer program instructions that, when executed by the processor, cause the processor to perform the fluorescence microscope-based intelligent analysis method as described above.
According to yet another aspect of the present application, there is provided a computer readable medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the fluorescence microscope based intelligent analysis method as described above.
Compared with the prior art, the fluorescence microscope-based intelligent analysis system and method provided by the application mine a full expression of the implicit CTC features in the detection image using a deep-learning-based image detection technique and decode correspondingly on the basis of those features, thereby effectively reducing the background noise of the image and improving the accuracy of the detection result.
Drawings
The foregoing and other objects, features and advantages of the present application will become more apparent from the following more particular description of embodiments of the present application, as illustrated in the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and do not constitute a limitation of it. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a schematic view of a fluorescent microscope-based intelligent analysis system according to an embodiment of the present application;
FIG. 2 is a block diagram of a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application;
FIG. 3 is a block diagram of a training module in a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application;
FIG. 4 is a system architecture diagram of a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application;
FIG. 5 is a system architecture diagram of a training module in a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application;
FIG. 6 is a block diagram of an image feature extraction module in a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application;
FIG. 7 is a block diagram of a noise reduction image generation module in a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application;
FIG. 8 is a flow chart of a fluorescence microscope-based intelligent analysis method according to an embodiment of the present application;
fig. 9 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As described above, the CTC cell recognition method based on a fluorescence microscope is a common method for detecting CTCs, which is capable of distinguishing CTCs from normal blood cells using a fluorescent-labeled specific antibody, and then imaging and counting the CTCs using a fluorescence microscope.
Accordingly, in actually detecting CTCs using a fluorescence microscope there is interference from background noise, where background noise refers to fluorescent signals other than those of CTCs, such as from proteins, platelets and erythrocytes in plasma. Background noise increases the difficulty and error of CTC detection and reduces the signal-to-noise ratio and contrast of CTCs. Therefore, in the technical scheme of the present application, when imaging CTCs with a fluorescence microscope, it is desirable to denoise the acquired detection image so as to reduce the interference of background noise and improve the accuracy of the subsequent detection of circulating tumor cells in blood. Moreover, when performing image noise reduction, the feature information about CTCs in the detection image needs to be captured so as to improve the expressive power of the features and thereby reduce the interfering effect of the noise. However, because the detection image acquired by the fluorescence microscope contains a large amount of information, and the feature information about CTCs is small-scale implicit feature information, it is difficult to capture and extract by conventional means; the resulting low extraction capability degrades the noise-reduction effect. The difficulty in this process therefore lies in expressing the implicit CTC features in the detection image sufficiently, and in decoding correspondingly on the basis of that CTC feature information, so that background noise reduction of the image is performed effectively and the accuracy of the detection result is improved.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. Deep learning and development of neural networks provide new solutions and schemes for mining implicit characteristic information about CTCs in the detected images and for decoding based on the implicit characteristic information of CTCs in the detected images accordingly.
Specifically, in the technical scheme of the application, a multi-scale pyramid encoder-decoder in FPN (feature pyramid network) form is adopted. In the encoder part, five levels of features from deep to shallow are extracted using a ResNet-50 convolutional neural network, together containing rich feature information such as the appearance, detail and position content of the image. The decoder adopts a symmetrical design and, in cooperation with skip (layer-jump) connections, gradually recovers the resolution of the noise-reduced detection image.
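The five deep-to-shallow feature levels can, for instance, be read off a standard torchvision ResNet-50 by tapping its stem and its four residual stages. The sketch below does exactly that; treating these particular activations as the five feature levels is an assumption, since the patent does not specify which ResNet-50 activations are used.

```python
import torch
import torchvision.models as models

resnet = models.resnet50(weights=None)  # backbone only; no pretrained weights assumed

def resnet50_pyramid(x: torch.Tensor):
    """Return five feature maps, shallow to deep, from a ResNet-50 backbone."""
    c1 = resnet.relu(resnet.bn1(resnet.conv1(x)))  # 1/2 resolution, 64 channels
    c2 = resnet.layer1(resnet.maxpool(c1))         # 1/4 resolution, 256 channels
    c3 = resnet.layer2(c2)                         # 1/8 resolution, 512 channels
    c4 = resnet.layer3(c3)                         # 1/16 resolution, 1024 channels
    c5 = resnet.layer4(c4)                         # 1/32 resolution, 2048 channels
    return [c1, c2, c3, c4, c5]

feats = resnet50_pyramid(torch.randn(1, 3, 512, 512))
```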
More specifically, in the technical solution of the present application, a detection image is first acquired by a fluorescence microscope. Feature mining is then performed on the detection image using a convolutional neural network model, which performs well at implicit feature extraction. In particular, feature extraction must attend not only to the deep implicit semantic features of circulating tumor cells in the detection image but also to shallow feature information such as the details, edges and positions of CTCs. A pyramid network chiefly addresses the multi-scale problem in object detection: by fusing features of different levels, it can exploit the high resolution of low-level features and the rich semantics of high-level features at the same time to good effect. Therefore, in the technical solution of the present application, the detection image is passed through a pyramid-network-based encoder to obtain first to fifth detection feature maps. In particular, the pyramid-network-based encoder applies first to fifth convolution modules of different depths to the detection image, so that while the deep implicit semantic features of CTCs in the detection image are extracted, rich shallow information such as the edges, details and positions of CTCs is retained, which improves accuracy when CTCs are subsequently imaged and counted. It should be understood that, by simply changing network connections and with essentially no increase in the computation of the original model, the pyramid network can detect independently on different feature levels, greatly improving small-target detection performance.
Further, the first to fifth detection feature maps are fused, so that the shallow feature information about the edges, details and positions of CTCs in the detection image is combined with the deep implicit semantic feature information of CTCs, yielding a multi-scale detection feature map carrying multi-scale fused CTC features.
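Fusing five maps of different resolutions and channel widths requires first bringing them to a common shape. The sketch below assumes 1x1 projections to a shared width, bilinear upsampling to the shallowest (highest) resolution, and element-wise summation; the patent leaves the fusion operator unspecified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Project each detection feature map to a shared width, resize, and sum."""
    def __init__(self, in_channels=(64, 256, 512, 1024, 2048), width: int = 256):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, width, kernel_size=1) for c in in_channels)

    def forward(self, feats):
        target = feats[0].shape[-2:]            # fuse at the shallowest (highest) resolution
        fused = None
        for f, proj in zip(feats, self.proj):
            p = proj(f)                          # unify channel width
            if p.shape[-2:] != target:
                p = F.interpolate(p, size=target, mode="bilinear", align_corners=False)
            fused = p if fused is None else fused + p   # element-wise aggregation across scales
        return fused                             # multi-scale detection feature map
```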
Then, in the decoding stage, based on skip-level connections of the first to fifth detection feature maps, the multi-scale detection feature map is passed through a decoder comprising a plurality of deconvolution layers to obtain a noise-reduced detection image; in particular, the decoder and the encoder have symmetrical network structures. That is, the decoder mirrors the encoder and, in cooperation with skip-level additive connections, gradually recovers the resolution of the noise-reduced detection image, so that both the deep implicit features and the shallow edge features of CTCs are retained, which facilitates subsequent CTC detection and counting.
More specifically, in the technical solution of the present application, the multi-scale detection feature map is input into the first deconvolution layer of the decoder, which is symmetrical to the encoder, to obtain a first decoding feature map. The resolution of the image is then gradually restored through skip connections. Specifically, the first decoding feature map and the fifth detection feature map are fused, combining the deep feature information about CTCs in the detection image with the first-stage decoded information, and the result serves as the input of the second deconvolution layer of the decoder; the noise-reduced detection image is obtained by decoding in this cyclic fashion, removing the background interference of other fluorescent signals and improving the accuracy of the subsequent detection of circulating tumor cells in blood.
In particular, in the technical solution of the present application, when the first decoding feature map and the fifth detection feature map are fused, it should be noted that the fifth detection feature map is the last-stage output of the pyramid-network-based encoder, whereas the first decoding feature map is the first-stage output of the decoder and therefore has a low feature-decoding abstraction level. When the first decoding feature map is obtained from the multi-scale detection feature map, its low decoding abstraction level, set against the high encoding abstraction level of the fifth detection feature map, can produce considerable feature redundancy between the two maps. This reduces the decoding efficiency of the first fused decoding feature map as the input of the second deconvolution layer of the decoder; that is, it slows model training and lowers the accuracy of the decoding result.
Accordingly, the applicant of the present application performs feature redundancy optimization based on low-cost bottleneck mechanism stacking on the fused feature map obtained by fusing the first decoding feature map and the fifth detection feature map, denoted for example as F, to obtain an optimized fused feature map F', expressed as follows:

[the optimization equations appear only as images in the original publication]

wherein Conv(·) represents a single-layer convolution operation; ⊕, ⊖ and ⊗ respectively represent the position-by-position addition, subtraction and multiplication of feature maps; and B1 and B2 are bias feature maps, which can initially be provided, for example, as a global mean feature map or a unit feature map of the fused feature map, the initial bias feature maps B1 and B2 being different.
Here, the feature redundancy optimization based on low-cost bottleneck-mechanism stacking performs feature expansion with a low-cost bottleneck mechanism that multiply-add-stacks two low-cost transformed features, and matches the residual path by offsetting the stacked channels by a uniform value. Through low-cost operation transformations similar to a basic residual module, it reveals the implicit distribution information underlying the intrinsic features within the redundant features, obtaining a more intrinsic feature expression with a simple and efficient convolution architecture. This optimizes the redundant feature expression of the fused feature map, improves the decoding efficiency of the second deconvolution layer of the decoder, and thereby improves both the training speed of the model and the accuracy of the decoding result. In this way, the acquired detection image can be effectively denoised, reducing the background-noise interference of other fluorescent signals and improving the accuracy of the subsequent detection of circulating tumor cells in blood.
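The exact equations are published only as images, but the description above, a multiply-add stack of two low-cost transformed features with bias feature maps matched to the residual path, is reminiscent of Ghost-style cheap-operation modules. The sketch below is a speculative reading along those lines; every operator choice in it is an assumption, not the patented formula.

```python
import torch
import torch.nn as nn

class LowCostBottleneckOptimization(nn.Module):
    """Speculative reading of the described optimization: two single-layer (cheap)
    convolutions combined by position-wise multiplication and addition, with bias
    feature maps B1/B2 offsetting the stack along the residual path."""
    def __init__(self, channels: int):
        super().__init__()
        self.t1 = nn.Conv2d(channels, channels, kernel_size=1)        # cheap transform 1
        self.t2 = nn.Conv2d(channels, channels, kernel_size=3,
                            padding=1, groups=channels)               # cheap transform 2 (depthwise)
        # Bias feature maps; initialized differently, as the description requires.
        self.b1 = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.b2 = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        a = self.t1(f) - self.b1     # position-wise subtraction of a bias map
        b = self.t2(f) + self.b2     # position-wise addition of a bias map
        return f + a * b             # multiply-add stack matched to the residual path
```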
Based on this, the application proposes an intelligent analysis system based on fluorescence microscopy, comprising: the detection image acquisition module is used for acquiring detection images acquired by the fluorescence microscope; the image feature extraction module is used for enabling the detection image to pass through a pyramid network-based encoder to obtain first to fifth detection feature graphs; the feature fusion module is used for fusing the first detection feature map to the fifth detection feature map to obtain a multi-scale detection feature map; and a noise reduction image generation module, configured to pass the multi-scale detection feature map through a decoder including a plurality of deconvolution layers to obtain a noise reduction detection image based on the skip level connection of the first to fifth detection feature maps.
Fig. 1 is a schematic view of a scenario of a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application. As shown in fig. 1, in this application scenario, a detection image is acquired by a fluorescence microscope (e.g., M as illustrated in fig. 1). The image is then input to a server (e.g., S in fig. 1) deployed with a fluorescence microscope-based intelligent analysis algorithm, where the server is capable of processing the input image with the fluorescence microscope-based intelligent analysis algorithm to generate a noise-reduced detection image.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary System
FIG. 2 is a block diagram of the fluorescence microscope-based intelligent analysis system according to an embodiment of the present application. As shown in fig. 2, a fluorescence microscope-based intelligent analysis system 300 according to an embodiment of the present application includes an inference module comprising: a detection image acquisition module 310; an image feature extraction module 320; a feature fusion module 330; and a noise reduction image generation module 340.
Wherein, the detection image acquisition module 310 is configured to acquire a detection image acquired by a fluorescence microscope; the image feature extraction module 320 is configured to pass the detected image through a pyramid network-based encoder to obtain first to fifth detection feature maps; the feature fusion module 330 is configured to fuse the first to fifth detection feature maps to obtain a multi-scale detection feature map; and the noise-reduced image generating module 340 is configured to pass the multi-scale detection feature map through a decoder including a plurality of deconvolution layers to obtain a noise-reduced detection image based on the skip-level connection of the first to fifth detection feature maps.
Fig. 4 is a system architecture diagram of a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application. As shown in fig. 4, in the inference module, a detection image acquired by a fluorescence microscope is first acquired by the detection image acquisition module 310; next, the image feature extraction module 320 passes the detected image acquired by the detected image acquisition module 310 through a pyramid network-based encoder to obtain first to fifth detected feature maps; the feature fusion module 330 fuses the first to fifth detection feature maps obtained by the image feature extraction module 320 to obtain a multi-scale detection feature map; further, the noise reduction image generation module 340 passes the multi-scale detection feature map obtained by the feature fusion module 330 through a decoder including a plurality of deconvolution layers based on the skip-level connection of the first to fifth detection feature maps to obtain a noise reduction detection image.
Specifically, during operation of the fluorescence microscope-based intelligent analysis system 300, the detection image acquisition module 310 and the image feature extraction module 320 are configured to acquire a detection image acquired by a fluorescence microscope, and then to pass the detection image through a pyramid-network-based encoder to obtain first to fifth detection feature maps. It should be understood that, during CTC detection with a fluorescence microscope, there may be interference from background noise, which refers to fluorescent signals other than those of CTCs; background noise increases the difficulty and error of CTC detection and reduces the signal-to-noise ratio and contrast of CTCs. Therefore, in the technical scheme of the present application, the acquired detection image is denoised when imaging CTCs with the fluorescence microscope so as to reduce this interference. First, the detection image is acquired by the fluorescence microscope. Next, feature mining is performed on the detection image using a convolutional neural network model, which performs well at implicit feature extraction; in particular, feature extraction must attend not only to the deep implicit semantic features of circulating tumor cells in the detection image but also to shallow feature information such as the details, edges and positions of CTCs. The pyramid network chiefly addresses the multi-scale problem in object detection and, by fusing features of different levels, can exploit the high resolution of low-level features and the rich semantics of high-level features at the same time to good effect. Therefore, the detection image is passed through the pyramid-network-based encoder to obtain the first to fifth detection feature maps. In particular, the encoder applies first to fifth convolution modules of different depths to the detection image, so that while the deep implicit semantic features of CTCs are extracted, rich shallow information such as their edges, details and positions is retained, improving accuracy when CTCs are subsequently imaged and counted. It should be appreciated that, by simply changing network connections and with essentially no increase in the computation of the original model, the pyramid network can detect independently on different feature levels, greatly improving small-target detection performance. In one example, during encoding by the pyramid network, the input data is convolutionally encoded by five convolution modules of different depths, each convolution module comprising a convolution layer, a pooling layer and an activation layer.
During encoding, each convolution module, in the forward pass of its layers, applies kernel-based convolution to the input data with its convolution layer, pools the convolution feature map output by the convolution layer with its pooling layer, and activates the pooled feature map output by the pooling layer with its activation layer.
Fig. 6 is a block diagram of an image feature extraction module in a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application. As shown in fig. 6, the image feature extraction module 320 includes: a first encoding unit 321, configured to input the detected image into a first convolution module of the encoder to obtain the first detection feature map; a second encoding unit 322, configured to input the first detected feature map into a second convolution module of the encoder to obtain the second detected feature map; a third encoding unit 323, configured to input the second detection feature map to a third convolution module of the encoder to obtain the third detection feature map; a fourth encoding unit 324, configured to input the third detection feature map into a fourth convolution module of the encoder to obtain the fourth detection feature map; and a fifth encoding unit 325, configured to input the fourth detection feature map into a fifth convolution module of the encoder to obtain the fifth detection feature map.
Specifically, during operation of the fluorescence microscope-based intelligent analysis system 300, the feature fusion module 330 is configured to fuse the first to fifth detection feature maps to obtain a multi-scale detection feature map. That is, after the first to fifth detection feature images are obtained, feature fusion is further performed on the first to fifth detection feature images, so that shallow feature information about edges, details, positions and the like of CTCs in the detection images and deep implicit semantic feature information of CTCs are fused, and a multi-scale detection feature image with multi-scale fusion features of CTCs is obtained.
Fig. 7 is a block diagram of the noise reduction image generation module in the fluorescence microscope-based intelligent analysis system according to an embodiment of the present application. As shown in fig. 7, the noise reduction image generation module 340 includes: a first deconvolution unit 331, configured to input the multi-scale detection feature map into the first deconvolution layer of the decoder to obtain a first decoding feature map; and a first decoding fusion unit 332, configured to fuse the first decoding feature map and the fifth detection feature map to obtain a first fused decoding feature map as the input of the second deconvolution layer of the decoder.
Specifically, during operation of the fluorescence microscope-based intelligent analysis system 300, the noise reduction image generation module 340 is configured to pass the multi-scale detection feature map through a decoder comprising a plurality of deconvolution layers, based on skip-level connections of the first to fifth detection feature maps, to obtain a noise-reduced detection image. In the technical scheme of the present application, the decoder and the encoder are symmetrically designed and, in cooperation with skip-level additive connections, gradually recover the resolution of the noise-reduced detection image, so that both the deep implicit features and the shallow features such as edges of CTCs are retained, facilitating subsequent CTC detection and counting. More specifically, the multi-scale detection feature map is input into the first deconvolution layer of the decoder, which is symmetrical to the encoder, to obtain a first decoding feature map. The resolution of the image is then gradually restored through skip connections: the first decoding feature map and the fifth detection feature map are fused, combining the deep feature information about CTCs in the detection image with the first-stage decoded information, and the result serves as the input of the second deconvolution layer of the decoder; the noise-reduced detection image is obtained by decoding in this cyclic fashion, removing the background interference of other fluorescent signals and improving the accuracy of the subsequent detection of circulating tumor cells in blood.
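Taken as a whole, the symmetric decode-and-fuse cycle can be sketched as a loop over deconvolution stages, each stage adding the encoder map of the matching level before the next stage. The channel widths below mirror the ResNet-50 sketch given earlier and, like the additive fusion and the assumed input width, are assumptions.

```python
import torch
import torch.nn as nn

class SkipDecoder(nn.Module):
    """Symmetric decoder: each deconvolution output is fused with the encoder map
    of the matching level (F5 first, then F4 ... F1) before the next layer."""
    def __init__(self, in_ch: int = 256, widths=(2048, 1024, 512, 256, 64)):
        super().__init__()
        layers = [nn.ConvTranspose2d(in_ch, widths[0], kernel_size=3, stride=1, padding=1)]
        layers += [
            nn.ConvTranspose2d(widths[i], widths[i + 1], kernel_size=2, stride=2)
            for i in range(4)
        ]
        self.deconvs = nn.ModuleList(layers)
        self.head = nn.ConvTranspose2d(widths[-1], 3, kernel_size=2, stride=2)

    def forward(self, multi_scale: torch.Tensor, enc_feats) -> torch.Tensor:
        # enc_feats = [F1..F5]; skips are consumed deepest-first: F5, F4, F3, F2, F1.
        x = multi_scale
        for deconv, skip in zip(self.deconvs, reversed(enc_feats)):
            x = deconv(x) + skip       # skip-level (layer-jump) additive connection
        return self.head(x)            # noise-reduced detection image
```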
It will be appreciated that training of the pyramid network-based encoder and the decoder comprising a plurality of deconvolution layers is required before the inference can be made using the neural network model described above. That is, in the fluorescence microscope-based intelligent analysis system of the present application, a training module is further included for training the pyramid network-based encoder and the decoder including the plurality of deconvolution layers.
FIG. 3 is a block diagram of a training module in a fluorescence microscope-based intelligent analysis system according to an embodiment of the present application. As shown in fig. 3, the fluorescence microscope-based intelligent analysis system 300 according to an embodiment of the present application further includes a training module 400, including: a training data acquisition module 410; training the image feature extraction module 420; training a feature fusion module 430; a first deconvolution decoding module 440; a first fusion module 450; a feature optimization module 460; training the noise reduction image generation module 470; a mean square error calculation module 480; and, a model training module 490.
The training data acquisition module 410 is configured to acquire training data, where the training data includes a training detection image and a real image of the noise-reduced detection image; the training image feature extraction module 420 is configured to pass the training detection image through the pyramid network-based encoder to obtain training first to fifth detection feature maps; the training feature fusion module 430 is configured to fuse the training first to fifth detection feature maps to obtain a training multi-scale detection feature map;
The first deconvolution decoding module 440 is configured to input the training multi-scale detection feature map into the first deconvolution layer of the decoder comprising a plurality of deconvolution layers to obtain a training first decoding feature map; the first fusion module 450 is configured to fuse the training first decoding feature map and the training fifth detection feature map to obtain a training first fusion decoding feature map; the feature optimization module 460 is configured to perform feature redundancy optimization based on low-cost bottleneck mechanism stacking on the training first fusion decoding feature map to obtain an optimized training first fusion decoding feature map as the input of the second deconvolution layer of the decoder; the training noise reduction image generation module 470 is configured to pass the optimized training first fusion decoding feature map through the decoder comprising a plurality of deconvolution layers, based on skip-level connections of the training first to fourth detection feature maps, to obtain a training noise-reduced detection image; the mean square error calculation module 480 is configured to calculate a mean square error value between the training noise-reduced detection image and the real image; and the model training module 490 is configured to train the pyramid-network-based encoder and the decoder comprising a plurality of deconvolution layers, with the mean square error value as the loss function value, through backpropagation with gradient descent.
Fig. 5 is a system architecture diagram of the training module in the fluorescence microscope-based intelligent analysis system according to an embodiment of the present application. As shown in fig. 5, in the training module, training data is first acquired by the training data acquisition module 410, where the training data includes a training detection image and a real image of the noise-reduced detection image; next, the training image feature extraction module 420 passes the training detection image acquired by the training data acquisition module 410 through the pyramid-network-based encoder to obtain training first to fifth detection feature maps; the training feature fusion module 430 fuses the training first to fifth detection feature maps obtained by the training image feature extraction module 420 to obtain a training multi-scale detection feature map; then, the first deconvolution decoding module 440 inputs the training multi-scale detection feature map obtained by the training feature fusion module 430 into the first deconvolution layer of the decoder comprising a plurality of deconvolution layers to obtain a training first decoding feature map; the first fusion module 450 fuses the training first decoding feature map obtained by the first deconvolution decoding module 440 with the training fifth detection feature map to obtain a training first fusion decoding feature map; the feature optimization module 460 performs feature redundancy optimization based on low-cost bottleneck mechanism stacking on the training first fusion decoding feature map obtained by the first fusion module 450 to obtain an optimized training first fusion decoding feature map as the input of the second deconvolution layer of the decoder; the training noise reduction image generation module 470 passes the optimized training first fusion decoding feature map obtained by the feature optimization module 460 through the decoder comprising a plurality of deconvolution layers, based on skip-level connections of the training first to fourth detection feature maps obtained by the training image feature extraction module 420, to obtain a training noise-reduced detection image; the mean square error calculation module 480 calculates a mean square error value between the training noise-reduced detection image and the real image; further, the model training module 490 trains the pyramid-network-based encoder and the decoder comprising a plurality of deconvolution layers, with the mean square error value as the loss function value, through backpropagation with gradient descent.
In summary, the fluorescence microscope-based intelligent analysis system 300 according to the embodiment of the present application has been illustrated. It mines a full expression of the implicit CTC features in the detection image using a deep-learning-based image detection technique and decodes correspondingly on the basis of those features, thereby effectively performing background noise reduction of the image and improving the accuracy of the detection result.
As described above, the fluorescence microscope-based intelligent analysis system according to the embodiment of the present application may be implemented in various terminal devices. In one example, the fluorescence microscope-based intelligent analysis system 300 according to embodiments of the present application may be integrated into a terminal device as a software module and/or hardware module. For example, the fluorescence microscope-based intelligent analysis system 300 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the fluorescence microscope-based intelligent analysis system 300 can equally be one of the numerous hardware modules of the terminal device.
Alternatively, in another example, the fluorescence microscope-based intelligent analysis system 300 and the terminal device may be separate devices, and the fluorescence microscope-based intelligent analysis system 300 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Exemplary method
Fig. 8 is a flow chart of a fluorescence microscope-based intelligent analysis method according to an embodiment of the present application. As shown in fig. 8, the fluorescence microscope-based intelligent analysis method according to the embodiment of the present application includes the steps of: s110, acquiring a detection image acquired by a fluorescence microscope; s120, passing the detection image through a pyramid network-based encoder to obtain first to fifth detection feature maps; s130, fusing the first detection feature map to the fifth detection feature map to obtain a multi-scale detection feature map; and S140, based on the jump-level connection of the first to fifth detection feature graphs, passing the multi-scale detection feature graph through a decoder comprising a plurality of deconvolution layers to obtain a noise-reduced detection image.
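Steps S110 to S140 chain together as a single forward pass. The wiring sketch below assumes encoder, fusion and decoder components with the interfaces of the earlier sketches; the class and argument names are illustrative stand-ins, not names from the patent.

```python
import torch
import torch.nn as nn

class FluorescenceDenoiser(nn.Module):
    """End-to-end wiring of steps S110 to S140 with stand-in components."""
    def __init__(self, encoder: nn.Module, fusion: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder      # S120: pyramid-network-based encoder
        self.fusion = fusion        # S130: multi-scale feature fusion
        self.decoder = decoder      # S140: skip-connected deconvolution decoder

    def forward(self, detection_image: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(detection_image)    # first to fifth detection feature maps
        multi_scale = self.fusion(feats)         # multi-scale detection feature map
        return self.decoder(multi_scale, feats)  # noise-reduced detection image
```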
In one example, in the above-mentioned fluorescence microscope-based intelligent analysis method, the step S120 includes: inputting the detected image into a first convolution module of the encoder to obtain the first detection feature map; inputting the first detection feature map into a second convolution module of the encoder to obtain the second detection feature map; inputting the second detection feature map to a third convolution module of the encoder to obtain the third detection feature map; inputting the third detection feature map into a fourth convolution module of the encoder to obtain the fourth detection feature map; and inputting the fourth detection feature map into a fifth convolution module of the encoder to obtain the fifth detection feature map.
In one example, in the above-mentioned fluorescence microscope-based intelligent analysis method, the step S140 includes: inputting the multi-scale detection feature map into a first deconvolution layer of the decoder to obtain a first decoding feature map; and fusing the first decoding feature map and the fifth detection feature map to obtain a first fused decoding feature map as an input of a second deconvolution layer of the decoder.
In summary, the fluorescence microscope-based intelligent analysis method according to the embodiment of the present application has been described. It mines a full expression of the implicit CTC features in the detection image using a deep-learning-based image detection technique and decodes correspondingly on the basis of those features, thereby effectively performing background noise reduction of the image and improving the accuracy of the detection result.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 9.
Fig. 9 illustrates a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 9, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the functions of the fluorescence microscope-based intelligent analysis system of the various embodiments of the present application described above and/or other desired functions. Various contents such as the first to fifth detection feature maps may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 can output various information to the outside, including a noise-reduced detected image, and the like. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 9 for simplicity, components such as buses, input/output interfaces, etc. being omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform steps in the functions of the fluorescence microscope-based intelligent analysis method according to various embodiments of the present application described in the "exemplary systems" section of the present specification.
The computer program product may write program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform steps in the functions of the fluorescence microscope-based intelligent analysis method according to various embodiments of the present application described in the above-mentioned "exemplary systems" section of the present specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present application have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present application are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present application. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, as the application is not intended to be limited to the details disclosed herein as such.
The block diagrams of the devices, apparatuses, equipment and systems referred to in this application are only illustrative examples and are not intended to require or imply that the connections, arrangements and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment and systems may be connected, arranged and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to," and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the apparatus, devices and methods of the present application, the components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered as equivalent to the present application.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present application. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the application. Thus, the present application is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the application to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (9)

1. An intelligent analysis system based on a fluorescence microscope, comprising:
a detection image acquisition module, configured to acquire a detection image acquired by the fluorescence microscope;
an image feature extraction module, configured to pass the detection image through a pyramid network-based encoder to obtain first to fifth detection feature maps;
a feature fusion module, configured to fuse the first to fifth detection feature maps to obtain a multi-scale detection feature map; and
a noise-reduction image generation module, configured to pass the multi-scale detection feature map through a decoder comprising a plurality of deconvolution layers, based on skip-level connections of the first to fifth detection feature maps, to obtain a noise-reduced detection image.
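For orientation only, the four modules of claim 1 can be wired together as in the following minimal PyTorch-style sketch. Everything concrete in it is an assumption rather than a feature recited by the claim: the fusion by bilinear upsampling and channel concatenation, the tensor shapes, and all names (FluorescenceDenoisingSystem, fuse) are hypothetical; the encoder and decoder internals are sketched after claims 2 and 3 below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FluorescenceDenoisingSystem(nn.Module):
    """Hypothetical wiring of claim 1: encoder -> fusion -> skip decoder."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # pyramid network-based encoder (claim 2)
        self.decoder = decoder  # deconvolution decoder with skip connections (claim 3)

    @staticmethod
    def fuse(feats: list) -> torch.Tensor:
        # Feature fusion module: one assumed strategy is to resize the five
        # detection feature maps to a common resolution and concatenate them.
        target = feats[-1].shape[-2:]
        ups = [F.interpolate(f, size=target, mode="bilinear", align_corners=False)
               for f in feats]
        return torch.cat(ups, dim=1)  # multi-scale detection feature map

    def forward(self, detection_image: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(detection_image)   # first to fifth detection feature maps
        multiscale = self.fuse(feats)
        # The per-scale maps are handed to the decoder for its skip-level connections.
        return self.decoder(multiscale, feats)  # noise-reduced detection image
```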
2. The fluorescence microscope-based intelligent analysis system of claim 1, wherein the image feature extraction module comprises:
a first encoding unit, configured to input the detection image into a first convolution module of the encoder to obtain the first detection feature map;
a second encoding unit, configured to input the first detection feature map into a second convolution module of the encoder to obtain the second detection feature map;
a third encoding unit, configured to input the second detection feature map into a third convolution module of the encoder to obtain the third detection feature map;
a fourth encoding unit, configured to input the third detection feature map into a fourth convolution module of the encoder to obtain the fourth detection feature map; and
a fifth encoding unit, configured to input the fourth detection feature map into a fifth convolution module of the encoder to obtain the fifth detection feature map.
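Claim 2's cascade of five convolution modules could look like the sketch below; the kernel size, stride-2 downsampling, batch normalization, and channel widths are all assumptions, since the claim fixes only the module count and their chaining.

```python
import torch.nn as nn

def conv_module(in_ch: int, out_ch: int) -> nn.Sequential:
    # One assumed form of a "convolution module": conv + batch norm + ReLU,
    # with stride 2 so that each stage yields a coarser pyramid level.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class PyramidEncoder(nn.Module):
    """Five chained convolution modules; stage i consumes the (i-1)-th map."""

    def __init__(self, in_ch: int = 1, widths=(32, 64, 128, 256, 512)):
        super().__init__()
        chans = (in_ch, *widths)
        self.stages = nn.ModuleList(
            conv_module(chans[i], chans[i + 1]) for i in range(5)
        )

    def forward(self, x):
        feats = []
        for stage in self.stages:  # first to fifth convolution modules
            x = stage(x)
            feats.append(x)        # first to fifth detection feature maps
        return feats
```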
3. The fluorescence microscope-based intelligent analysis system of claim 2, wherein the noise reduction image generation module comprises:
a first deconvolution unit, configured to input the multi-scale detection feature map into a first deconvolution layer of the decoder to obtain a first decoding feature map; and
a first decoding fusion unit, configured to fuse the first decoding feature map and the fifth detection feature map to obtain a first fusion decoding feature map as an input of a second deconvolution layer of the decoder.
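The first decoder stage of claim 3 then reads as follows, under the same caveats: the transposed-convolution shape, the 1x1 projection, and fusion by position-wise addition after resizing are assumptions of this sketch, and the claim's "fusing" could equally mean concatenation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderFirstStage(nn.Module):
    """First deconvolution layer plus the claim-3 fusion with the fifth map."""

    def __init__(self, in_ch: int, out_ch: int, feat5_ch: int = 512):
        super().__init__()
        self.deconv1 = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        # Assumed 1x1 projection so the skip branch matches the decoder width.
        self.project5 = nn.Conv2d(feat5_ch, out_ch, kernel_size=1)

    def forward(self, multiscale: torch.Tensor, feat5: torch.Tensor) -> torch.Tensor:
        dec1 = self.deconv1(multiscale)  # first decoding feature map
        skip = F.interpolate(self.project5(feat5), size=dec1.shape[-2:],
                             mode="bilinear", align_corners=False)
        fused1 = dec1 + skip  # first fusion decoding feature map
        return fused1         # fed to the second deconvolution layer of the decoder
```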
4. The fluorescence microscope-based intelligent analysis system of claim 3, further comprising a training module for training the pyramid network-based encoder and the decoder comprising a plurality of deconvolution layers.
5. The fluorescence microscope-based intelligent analysis system of claim 4, wherein the training module comprises:
a training data acquisition module, configured to acquire training data, wherein the training data comprises a training detection image and a real image of the noise-reduced detection image;
a training image feature extraction module, configured to pass the training detection image through the pyramid network-based encoder to obtain training first to fifth detection feature maps;
a training feature fusion module, configured to fuse the training first to fifth detection feature maps to obtain a training multi-scale detection feature map;
a first deconvolution decoding module, configured to input the training multi-scale detection feature map into a first deconvolution layer of the decoder comprising a plurality of deconvolution layers to obtain a training first decoding feature map;
a first fusion module, configured to fuse the training first decoding feature map and the training fifth detection feature map to obtain a training first fusion decoding feature map;
a feature optimization module, configured to perform feature redundancy optimization on the training first fusion decoding feature map based on low-cost bottleneck mechanism stacking to obtain an optimized training first fusion decoding feature map as an input of a second deconvolution layer of the decoder;
a training noise-reduction image generation module, configured to pass the optimized training first fusion decoding feature map through the decoder comprising a plurality of deconvolution layers, based on skip-level connections of the training first to fourth detection feature maps, to obtain a training noise-reduced detection image;
a mean square error calculation module, configured to calculate a mean square error value between the training noise-reduced detection image and the real image; and
a model training module, configured to train the pyramid network-based encoder and the decoder comprising a plurality of deconvolution layers with the mean square error value as the loss function value, through backpropagation with gradient descent.
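Read together, claim 5's training modules amount to a standard supervised denoising loop. The following is a minimal sketch under the previous assumptions: the fuse function and the encoder/decoder interfaces come from the earlier hypothetical sketches, and the optimizer choice is not recited by the claim, which requires only gradient descent with backpropagation.

```python
import torch.nn as nn

def train_epoch(encoder, decoder, fuse, loader, optimizer):
    """One epoch of the claim-5 scheme: MSE loss, backpropagation, gradient descent.

    The claim-6 feature redundancy optimization is assumed to sit inside
    `decoder`, between its first and second deconvolution layers.
    """
    mse = nn.MSELoss()
    encoder.train()
    decoder.train()
    for noisy, clean in loader:          # training detection image, real image
        feats = encoder(noisy)           # training first to fifth detection feature maps
        multiscale = fuse(feats)         # training multi-scale detection feature map
        denoised = decoder(multiscale, feats)  # training noise-reduced detection image
        loss = mse(denoised, clean)      # mean square error value as loss function value
        optimizer.zero_grad()
        loss.backward()                  # backpropagation
        optimizer.step()                 # gradient descent update
```

An optimizer such as torch.optim.SGD(params, lr=1e-3) would realize the gradient-descent update, though the claim does not fix any particular variant.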
6. The fluorescence microscope-based intelligent analysis system of claim 5, wherein the feature optimization module is configured to: perform feature redundancy optimization on the training first fusion decoding feature map based on the stacking of low-cost bottleneck mechanisms, using the following optimization formula, to obtain the optimized training first fusion decoding feature map;
wherein the optimization formula comprises three equations (published as images and not preserved in the text record) over the following quantities: F denotes the training first fusion decoding feature map, Conv(·) denotes a single-layer convolution operation, ⊕, ⊖ and ⊗ denote position-wise addition, subtraction and multiplication of feature maps, respectively, B₁ and B₂ denote bias feature maps, and F′ denotes the optimized training first fusion decoding feature map.
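Because the three formula images did not survive in the text record, only the operands of claim 6 are known: a single-layer convolution, position-wise addition, subtraction and multiplication, and two bias feature maps B₁ and B₂. The sketch below composes exactly those ingredients into one plausible Ghost-style low-cost bottleneck; the composition order is a guess and must not be read as the claimed formula.

```python
import torch
import torch.nn as nn

class LowCostBottleneckOptimization(nn.Module):
    """Speculative feature-redundancy optimization from claim 6's ingredients.

    Known from the claim: Conv(.) is a single-layer convolution; add/subtract/
    multiply act position-wise on feature maps; B1 and B2 are bias feature
    maps; F is the training first fusion decoding feature map and F' its
    optimized version. How they combine below is an assumption.
    """

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Two cheap single-layer convolutions (the "low-cost" operations).
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Learned position-wise bias feature maps B1 and B2.
        self.b1 = nn.Parameter(torch.zeros(1, channels, height, width))
        self.b2 = nn.Parameter(torch.zeros(1, channels, height, width))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        g = self.conv1(f)        # cheap projection of F
        r = self.conv2(g) - g    # position-wise subtraction isolates redundancy
        # Assumed recombination: F' = F (+) (r (*) B1) (+) B2
        return f + r * self.b1 + self.b2
```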
7. An intelligent analysis method based on a fluorescence microscope is characterized by comprising the following steps:
acquiring a detection image acquired by a fluorescence microscope;
passing the detection image through a pyramid network-based encoder to obtain first to fifth detection feature maps;
fusing the first to fifth detection feature maps to obtain a multi-scale detection feature map; and
passing the multi-scale detection feature map through a decoder comprising a plurality of deconvolution layers, based on skip-level connections of the first to fifth detection feature maps, to obtain a noise-reduced detection image.
8. The fluorescence microscope-based intelligent analysis method according to claim 7, wherein passing the detection image through a pyramid network-based encoder to obtain first to fifth detection feature maps comprises:
inputting the detection image into a first convolution module of the encoder to obtain the first detection feature map;
inputting the first detection feature map into a second convolution module of the encoder to obtain the second detection feature map;
inputting the second detection feature map into a third convolution module of the encoder to obtain the third detection feature map;
inputting the third detection feature map into a fourth convolution module of the encoder to obtain the fourth detection feature map; and
inputting the fourth detection feature map into a fifth convolution module of the encoder to obtain the fifth detection feature map.
9. The fluorescence microscope-based intelligent analysis method according to claim 8, wherein passing the multi-scale detection feature map through a decoder comprising a plurality of deconvolution layers, based on skip-level connections of the first to fifth detection feature maps, to obtain a noise-reduced detection image comprises:
inputting the multi-scale detection feature map into a first deconvolution layer of the decoder to obtain a first decoding feature map; and
fusing the first decoding feature map and the fifth detection feature map to obtain a first fusion decoding feature map as an input of a second deconvolution layer of the decoder.
CN202310671435.8A 2023-06-08 2023-06-08 Intelligent analysis system and method based on fluorescence microscope Pending CN116416248A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310671435.8A CN116416248A (en) 2023-06-08 2023-06-08 Intelligent analysis system and method based on fluorescence microscope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310671435.8A CN116416248A (en) 2023-06-08 2023-06-08 Intelligent analysis system and method based on fluorescence microscope

Publications (1)

Publication Number Publication Date
CN116416248A true CN116416248A (en) 2023-07-11

Family

ID=87059679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310671435.8A Pending CN116416248A (en) 2023-06-08 2023-06-08 Intelligent analysis system and method based on fluorescence microscope

Country Status (1)

Country Link
CN (1) CN116416248A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110025880A1 (en) * 2009-08-03 2011-02-03 Genetix Corporation Fluorescence imaging
CN106251303A (en) * 2016-07-28 2016-12-21 同济大学 A kind of image denoising method using the degree of depth full convolutional encoding decoding network
CN111539886A (en) * 2020-04-21 2020-08-14 西安交通大学 Defogging method based on multi-scale feature fusion
TWI779927B (en) * 2021-11-17 2022-10-01 宏碁股份有限公司 Noise reduction convolution auto-encoding device and noise reduction convolution self-encoding method
CN116206116A (en) * 2021-11-29 2023-06-02 宏碁股份有限公司 Noise reduction convolution self-coding device and noise reduction convolution self-coding method
CN116188584A (en) * 2023-04-23 2023-05-30 成都睿瞳科技有限责任公司 Method and system for identifying object polishing position based on image
CN116189179A (en) * 2023-04-28 2023-05-30 北京航空航天大学杭州创新研究院 Circulating tumor cell scanning analysis equipment

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHENG YAO ET AL.: "Multiscale residual fusion network for image denoising", Wiley, pages 878-887 *
ZONGWEI ZHOU ET AL.: "UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation", IEEE Transactions on Medical Imaging, pages 1-12 *
GUAN YU: "Research on Image Denoising Based on Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology, pages 138-443 *
WU BO: "Research on Single-Image Deblurring Based on Convolutional Neural Networks", China Masters' Theses Full-text Database, Information Science and Technology, pages 138-2752 *
FANG ZHIJUN ET AL.: "TensorFlow Application Case Tutorial", China Railway Publishing House Co., Ltd., pages 111-112 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612472A (en) * 2023-07-21 2023-08-18 北京航空航天大学杭州创新研究院 Single-molecule immune array analyzer based on image and method thereof
CN116630313A (en) * 2023-07-21 2023-08-22 北京航空航天大学杭州创新研究院 Fluorescence imaging detection system and method thereof
CN116612472B (en) * 2023-07-21 2023-09-19 北京航空航天大学杭州创新研究院 Single-molecule immune array analyzer based on image and method thereof
CN116630313B (en) * 2023-07-21 2023-09-26 北京航空航天大学杭州创新研究院 Fluorescence imaging detection system and method thereof

Similar Documents

Publication Publication Date Title
Li et al. A parallel down-up fusion network for salient object detection in optical remote sensing images
Guo et al. Dense Temporal Convolution Network for Sign Language Translation.
CN116416248A (en) Intelligent analysis system and method based on fluorescence microscope
Cao et al. Image-text retrieval: A survey on recent research and development
US20210390700A1 (en) Referring image segmentation
Cong et al. A weakly supervised learning framework for salient object detection via hybrid labels
US20210110189A1 (en) Character-based text detection and recognition
CN115564766B (en) Preparation method and system of water turbine volute seat ring
WO2023174098A1 (en) Real-time gesture detection method and apparatus
CN115754107B (en) Automatic sampling analysis system and method for lithium hexafluorophosphate preparation
Long et al. A new perspective for flexible feature gathering in scene text recognition via character anchor pooling
CN116168243A (en) Intelligent production system and method for shaver
Xu et al. Boosting connectivity in retinal vessel segmentation via a recursive semantics-guided network
Wang et al. STCD: efficient Siamese transformers-based change detection method for remote sensing images
Ma et al. Label distribution learning for scene text detection
Huo et al. Multi‐source heterogeneous iris segmentation method based on lightweight convolutional neural network
Xiao et al. A text-context-aware CNN network for multi-oriented and multi-language scene text detection
CN112529930A (en) Context learning medical image segmentation method based on focus fusion
Bi et al. HGR-Net: Hierarchical graph reasoning network for arbitrary shape scene text detection
CN116467485A (en) Video image retrieval construction system and method thereof
CN115143128B (en) Fault diagnosis method and system for small-sized submersible electric pump
CN116343190A (en) Natural scene character recognition method, system, equipment and storage medium
CN116127019A (en) Dynamic parameter and visual model generation WEB 2D automatic modeling engine system
CN115932140A (en) Quality inspection system and method for electronic-grade hexafluorobutadiene
Yuan et al. CTIF-Net: A CNN-Transformer Iterative Fusion Network for Salient Object Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination