CN110598724A - Cell low-resolution image fusion method based on convolutional neural network

Cell low-resolution image fusion method based on convolutional neural network

Info

Publication number
CN110598724A
Authority
CN
China
Prior art keywords
neural network
convolutional neural
cell
target cell
resolution image
Prior art date
Legal status
Granted
Application number
CN201910044863.1A
Other languages
Chinese (zh)
Other versions
CN110598724B (en)
Inventor
余宁梅
马祥
方元
张雪
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN201910044863.1A
Publication of CN110598724A
Application granted
Publication of CN110598724B
Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/251 - Fusion techniques of input or preprocessed data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10056 - Microscopic image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30096 - Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cell low-resolution image fusion method based on a convolutional neural network, comprising the following steps: step 1) acquiring and segmenting images to obtain target cell high-resolution images; step 2) extracting the label of the convolutional neural network training set and the label of the test set; step 3) obtaining target cell low-resolution images; step 4) selecting a training set, a test set and a verification set; and step 5) building a CFFnet convolutional neural network and training and testing it on the data in the training set until it converges; the converged CFFnet convolutional neural network is the cell fusion model, its output layer outputs the fused image, and the fusion of the cell low-resolution images is completed. The invention allows the common visual appearance and structural features of cells of the same type to be fused and reflected in a single image; it can detect cytopathic changes and provide a basis for automated disease diagnosis.

Description

Cell low-resolution image fusion method based on convolutional neural network
Technical Field
The invention belongs to the field combining lens-free cell detection technology with medical image processing, and particularly relates to a cell low-resolution image fusion method based on a convolutional neural network.
Background
Cell counting is of great significance for disease diagnosis and efficacy assessment, and real-time cell detection will be central to future personalized biomedical diagnosis. In the traditional cell detection and counting method, a test sample is prepared on a glass slide and analyzed and counted manually under a microscope. This approach suffers from bulky equipment, high demands on the operator's expertise, and examiner-to-examiner variability in results, and it is difficult to apply in settings such as remote clinics and telemedicine. In 2006, inspired by the eye-floater phenomenon of the human eye (when suspended particles in the vitreous body drift close to the fundus, the eye can still resolve their detailed features even though the particles are small), the Yang Changhuei research group at the University of California, USA first proposed the concept of a lensless optofluidic microchip based on a CMOS image sensor and microfluidic technology.
Inspired by Yang Changhuei et al., our research group has carried out extensive work on lens-free cell acquisition systems. In such systems, cells are placed directly above the image sensor and imaged without magnification by a convex lens, so the resolution of the cell image is limited by the pixel size of the image sensor; compared with an optical microscope, the acquired cell image is therefore a low-resolution image that carries less cell feature information. To address this problem, an interpolation-based pre-magnification algorithm can be adopted to enhance the detailed information of a cell. However, interpolating and magnifying a single cell only recovers the information of that one cell, and tracking and magnifying every cell requires a large amount of computation that is difficult to complete in real time. The problem can instead be solved by performing feature fusion on a population of similar cells to obtain a "virtual cell". The "virtual cell" is not a cell in the traditional sense but a statistical result of similar cells, containing statistical information such as the overall shape, average size, and nuclear-cytoplasmic ratio of the similar cells.
Disclosure of Invention
The invention aims to provide a cell low-resolution image fusion method based on a convolutional neural network, so that the common visual appearance and structural features of similar cells can be fused and reflected in a single image.
The technical solution adopted by the invention is a cell low-resolution image fusion method based on a convolutional neural network, comprising the following steps:
step 1) acquiring a high-resolution image of a cell through an optical microscope, and segmenting the high-resolution image of the cell by using an image segmentation algorithm to obtain a high-resolution image of a target cell;
step 2) randomly selecting two target cell high-resolution images, and extracting the images on the brightness channels of the two target cell high-resolution images to serve as the label of the convolutional neural network training set and the label of the test set, respectively;
step 3) down-sampling the target cell high-resolution images remaining after step 2 with the Bicubic algorithm to obtain target cell low-resolution images;
step 4) from the target cell low-resolution images, selecting 3/5 of them as convolutional neural network training data and extracting the images on the brightness channels of all training data to form the training set; using 1/5 of them as convolutional neural network test data and extracting the images on the brightness channels of all test data to form the test set; and using the remaining 1/5 as convolutional neural network model verification data and extracting the images on the brightness channels of all verification data to form the verification set;
and step 5) building a CFFnet convolutional neural network on the deep learning framework caffe; the CFFnet convolutional neural network comprises a data input layer, convolutional layers, a deconvolution layer and an output layer; setting the training parameters; the data input layer takes the label of the training set, the label of the test set, the training set and the test set; the CFFnet convolutional neural network is trained and tested on the data in the training set, iterating continuously during training until it converges; the converged CFFnet convolutional neural network is the cell fusion model, its output layer outputs the fused image, and the fusion of the cell low-resolution images is completed.
The present invention is also characterized in that,
the step 1) of segmenting the high-resolution image of the cell by using an image segmentation algorithm to obtain the high-resolution image of the target cell specifically comprises the steps of solving a gray value statistical graph, namely a gray histogram, of the high-resolution image of the cell acquired by matlab software, determining a gray segmentation threshold of the target cell according to the distribution condition of gray values in the gray histogram, then separating the target cell from a background by adopting threshold segmentation, and finally intercepting a single target cell according to the pixel position of the central point of each target cell to obtain the high-resolution image of the target cell.
Step 3) specifically comprises: for each target cell high-resolution image remaining after step 2, finding the 16 nearest pixels around each target pixel, calculating the weight of each corresponding pixel with the basis function, then obtaining the pixel value of the target point, and forming the target cell low-resolution image from the pixel values of all target points.
The training parameters in step 5) comprise the base learning rate, the training momentum, the learning strategy mode and its power value, the weight decay term and the maximum number of iterations.
The activation function of the CFFnet convolutional neural network in step 5) is the PReLU activation function.
The invention has the beneficial effects that:
the invention relates to a cell low-resolution image fusion method based on a convolutional neural network, which uses similar cell groups of the same type to carry out cell fitting, reproduces different characteristic details expressed by the similar cell low-resolution images instead of amplifying one cell, and obtains a fusion image which can be regarded as a virtual cell, wherein the virtual cell reflects the visual appearance and structural characteristics of the cell, and more importantly, the virtual cell reflects a statistical result. When all cells in a cell population share a common characteristic, this characteristic is emphasized when fusing "virtual cells"; the individual characteristics of individual cells are weakened in the synthesis process, the expression of certain characteristics by the virtual cells is distributed according to the number of cells with common characteristics, and the individual characteristics of certain cells in the same type of cells are abandoned in the fusion process, so that the detection of cytopathic effect can be realized, and a basis is provided for automatic diagnosis of diseases.
Drawings
FIG. 1 is a high resolution image of normal white blood cells;
FIG. 2 is a high resolution image of vacuolated diseased white blood cells;
FIG. 3 is a schematic diagram of a CFFnet convolutional neural network structure;
FIG. 4 is a graph of learning rate versus iteration number for CFFnet convolutional neural network training in an embodiment;
FIG. 5 is a normal leukocyte fusion image in the examples;
FIG. 6 is a first set of test images, wherein FIGS. 6(a) through 6(k) are low-resolution images of diseased white blood cells;
fig. 7 is a second set of test images, wherein fig. 7(a), 7(b), 7(c), 7(d), 7(g) and 7(k) are normal white blood cell low resolution images, and fig. 7(e), 7(f), 7 (h), 7(i) and 7(j) are lesion white blood cell low resolution images;
fig. 8 is a third set of test images, in which fig. 8(a), 8(b), 8(c), 8(d), 8(e), 8(f), 8(g), 8(h), 8(k) are low-resolution images of diseased white blood cells, and fig. 8(i) and 8(j) are low-resolution images of normal white blood cells;
FIG. 9 is a fused image after the fusion of a first set of test images;
FIG. 10 is a fused image after the second set of test images has been fused;
FIG. 11 is a fused image after the third set of test images are fused;
FIG. 12 is a set of cell low-resolution images collected by a lens-free cell acquisition system, FIGS. 12(a) through 12(k);
FIG. 13 is a fused image of a fused set of low resolution images of cells acquired by the lens-less cell acquisition system of FIG. 12.
Detailed Description
The present invention will be described in detail with reference to the following embodiments.
The invention relates to a cell low-resolution image fusion method based on a convolutional neural network, comprising the following steps:
step 1) acquiring a high-resolution image of a cell through an optical microscope, and segmenting the high-resolution image of the cell by using an image segmentation algorithm to obtain a high-resolution image of a target cell;
step 2) randomly selecting two target cell high-resolution images, and extracting the images on the brightness channels of the two target cell high-resolution images to serve as the label of the convolutional neural network training set and the label of the test set, respectively;
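The patent does not name the colour space of the "brightness channel". As a minimal sketch, assuming the high-resolution images are stored as ordinary colour files and that the brightness channel is the Y channel of YCbCr, the label extraction of step 2) could look as follows (the file names are hypothetical):

```python
import cv2

def luminance_channel(path):
    """Load a cell image and return its luminance (Y) channel.

    Assumption: the "brightness channel" is taken to be the Y channel of the
    YCbCr colour space; the patent does not state the colour space used.
    """
    bgr = cv2.imread(path)                          # OpenCV loads images as BGR
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # convert to Y, Cr, Cb
    return ycrcb[:, :, 0]                           # channel 0 is the luminance

# Hypothetical file names for the two randomly selected high-resolution images.
train_label = luminance_channel("target_cell_hr_01.png")  # label of the training set
test_label = luminance_channel("target_cell_hr_02.png")   # label of the test set
```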
step 3) down-sampling the target cell high-resolution images remaining after step 2 with the Bicubic algorithm to obtain target cell low-resolution images;
step 4) from the target cell low-resolution images, selecting 3/5 of them as convolutional neural network training data and extracting the images on the brightness channels of all training data to form the training set; using 1/5 of them as convolutional neural network test data and extracting the images on the brightness channels of all test data to form the test set; and using the remaining 1/5 as convolutional neural network model verification data and extracting the images on the brightness channels of all verification data to form the verification set;
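A minimal sketch of the 3/5 : 1/5 : 1/5 split of step 4) is given below. The patent does not say how images are assigned to the three subsets, so a seeded random shuffle is assumed here; luminance_channel is the helper sketched above.

```python
import random

def split_dataset(image_paths, seed=0):
    """Split the target cell low-resolution images into training, test and
    verification subsets in the 3/5 : 1/5 : 1/5 ratio used by the method.

    Assumption: the assignment of images to subsets is not specified in the
    patent; a seeded random shuffle is used here only for illustration.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_test = 3 * n // 5, n // 5
    train = paths[:n_train]
    test = paths[n_train:n_train + n_test]
    verify = paths[n_train + n_test:]
    # Each subset is reduced to its brightness-channel images
    # (luminance_channel is sketched in the step 2 example above).
    return ([luminance_channel(p) for p in train],
            [luminance_channel(p) for p in test],
            [luminance_channel(p) for p in verify])
```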
and step 5) building a CFFnet convolutional neural network on the deep learning framework caffe, where the activation function of the CFFnet convolutional neural network is the PReLU activation function; the CFFnet convolutional neural network comprises a data input layer, convolutional layers, a deconvolution layer and an output layer; setting the training parameters, which comprise the base learning rate, the training momentum, the learning strategy mode, the power value, the weight decay term and the maximum number of iterations; the data input layer takes the label of the training set, the label of the test set, the training set and the test set; the CFFnet convolutional neural network is trained and tested on the data in the training set, iterating continuously during training until it converges; the converged CFFnet convolutional neural network is the cell fusion model, its output layer outputs the fused image, and the fusion of the cell low-resolution images is completed.
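The patent builds CFFnet on caffe with a data input layer, convolutional layers, a deconvolution layer, an output layer and PReLU activations, but does not disclose the number of layers, kernel sizes or filter counts. The sketch below is therefore only a hypothetical conv / PReLU / deconvolution stack, written in PyTorch rather than caffe purely for illustration; every layer size is an assumption and the code is not the patented CFFnet architecture.

```python
import torch
import torch.nn as nn

class CFFNetSketch(nn.Module):
    """Hypothetical conv / PReLU / deconvolution stack in the spirit of CFFnet.

    All filter counts and kernel sizes below are assumptions made only so that
    the sketch runs; the patent does not disclose them.
    """
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.PReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.PReLU(),
            nn.ConvTranspose2d(32, 32, kernel_size=4,     # deconvolution layer
                               stride=2, padding=1),
            nn.PReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # output layer
        )

    def forward(self, x):              # x: batch of brightness-channel patches
        return self.body(x)

net = CFFNetSketch()
fused = net(torch.randn(1, 1, 32, 32))   # toy 32x32 input; output is 64x64 here
```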
The step 1) of segmenting the cell high-resolution images with an image segmentation algorithm to obtain target cell high-resolution images specifically comprises: computing the gray-value statistical graph, i.e. the gray histogram, of the acquired cell high-resolution image with matlab software; determining the gray segmentation threshold of the target cells from the distribution of gray values in the gray histogram; separating the target cells from the background by threshold segmentation; and finally cropping out each single target cell according to the pixel position of its center point to obtain the target cell high-resolution images.
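A minimal OpenCV sketch of the histogram and threshold segmentation described above is given below; the patent performs this step in matlab, and the threshold value and crop size here are illustrative assumptions.

```python
import cv2

def segment_target_cells(gray, thresh=120, crop=64):
    """Threshold-segment target cells from the background and crop each one.

    Assumptions: the gray segmentation threshold (120) and the crop size
    (64 px) are illustrative; in the patent the threshold is chosen by
    inspecting the gray histogram of the acquired high-resolution image.
    """
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256])   # gray histogram
    # Foreground mask; cells are assumed darker than the background here.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    # Connected components give one centre point per target cell.
    _, _, _, centroids = cv2.connectedComponentsWithStats(mask)
    cells, half = [], crop // 2
    for cx, cy in centroids[1:]:                   # index 0 is the background
        x, y = int(round(cx)), int(round(cy))
        patch = gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
        if patch.shape == (crop, crop):            # keep only full-size crops
            cells.append(patch)
    return hist, cells
```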
Step 3) specifically comprises: for each target cell high-resolution image remaining after step 2, finding the 16 nearest pixels around each target pixel, calculating the weight of each corresponding pixel with the basis function, then obtaining the pixel value of the target point, and forming the target cell low-resolution image from the pixel values of all target points.
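This is the standard bicubic scheme, in which each output pixel is a weighted combination of its 16 nearest neighbours with weights given by the cubic basis function; OpenCV's bicubic mode computes the same neighbourhood weighting, so a short sketch (the down-sampling factor is an assumption) is:

```python
import cv2

def bicubic_downsample(hr, scale=4):
    """Down-sample a target cell high-resolution image with bicubic
    interpolation: each output pixel is a weighted sum of the 16 nearest
    input pixels, weighted by the cubic basis function.

    Assumption: the 4x down-sampling factor is illustrative; the patent does
    not state the factor used.
    """
    h, w = hr.shape[:2]
    return cv2.resize(hr, (w // scale, h // scale),
                      interpolation=cv2.INTER_CUBIC)
```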
In this way, the cell low-resolution image fusion method based on a convolutional neural network performs cell fitting with groups of similar cells: rather than magnifying a single cell, it uses the different feature details embodied in the low-resolution images of similar cells to obtain a fused image, which can be regarded as a "virtual cell". The "virtual cell" reflects the visual appearance and structural features of the cells and, more importantly, reflects a statistical result. When all cells in a cell population share a common feature, this feature is emphasized in the fused "virtual cell"; the individual features of single cells are weakened during synthesis, the expression of a given feature by the "virtual cell" is weighted by the number of cells that share that feature, and individual peculiarities of particular cells of the same type are discarded during fusion. The method can therefore detect cytopathic changes and provide a basis for automated disease diagnosis.
Examples
This embodiment provides a method for fusing normal leukocyte low-resolution images based on a convolutional neural network, comprising the following steps:
Step 1) acquiring cell high-resolution images through an optical microscope, and segmenting them with an image segmentation algorithm to obtain normal leukocyte high-resolution images; a normal leukocyte high-resolution image is shown in Fig. 1.
Step 2) randomly selecting two images from the normal leukocyte high-resolution images, and extracting the images on their brightness channels to serve as the label of the convolutional neural network training set and the label of the test set, respectively.
Step 3) randomly dividing the normal leukocyte high-resolution images remaining after step 2 into two parts, a part A and a part B. The part A normal leukocyte high-resolution images are used to make vacuolated diseased leukocyte high-resolution images with image processing software (a vacuolated diseased leukocyte high-resolution image is shown in Fig. 2). The diseased leukocyte high-resolution images and the part B normal leukocyte high-resolution images are then down-sampled with the Bicubic algorithm to obtain diseased leukocyte low-resolution images and normal leukocyte low-resolution images, respectively;
the high-resolution image of the normal white blood cells of the part A in the step 3) is used for making a high-resolution image of the diseased white blood cells with vacuoles by using image processing related software, specifically, the high-resolution image of the normal white blood cells of the part A is read into Photoshop software or matlab software or opencv software, white vacuoles are made in the cytoplasm and the periphery of the cell nucleus by referring to the morphology of the cells containing the white blood cell vacuoles, and the number of the vacuoles is one or more.
Step 4) from the normal leukocyte low-resolution images, selecting 3/5 of them as convolutional neural network training data and extracting the images on the brightness channels of all training data to form the training set; using 1/5 of them as convolutional neural network test data and extracting the images on the brightness channels of all test data to form the test set; and using the remaining 1/5 as convolutional neural network model verification data and extracting the images on the brightness channels of all verification data to form the verification set.
Step 5) building a CFFnet convolutional neural network on the deep learning framework caffe, where the activation function of the CFFnet convolutional neural network is the PReLU activation function. As shown in Fig. 3, the CFFnet convolutional neural network comprises a data input layer, convolutional layers, a deconvolution layer and an output layer. The training parameters are set as follows: the base learning rate base_lr is set to 0.01, the training momentum to 0.9, the learning strategy uses the continuously decaying inv mode with the power value set to 0.75, the weight decay term is set to 100 to prevent overfitting, and the maximum number of iterations max_iter is set to 3 × 10^5. The CFFnet convolutional neural network is trained and tested on the data in the training set; the curve of learning rate versus iteration number during training is shown in Fig. 4. Training iterates continuously until the CFFnet convolutional neural network converges; the converged CFFnet convolutional neural network is the cell fusion model, its output layer outputs the normal leukocyte fusion image shown in Fig. 5, and the fusion of normal leukocyte low-resolution images is completed.
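caffe's inv learning strategy decays the learning rate as lr = base_lr * (1 + gamma * iter)^(-power). The snippet below reproduces the kind of curve shown in Fig. 4 using the base_lr, power and max_iter of this embodiment; gamma is not given in the patent, so its value here is an assumption.

```python
# base_lr, power and max_iter follow the embodiment; gamma is NOT given in the
# patent, so the value below is only an assumption for illustration.
base_lr, power, max_iter = 0.01, 0.75, 300_000
gamma = 1e-4                                      # assumed decay constant

def inv_lr(iteration):
    """Learning rate at a given iteration under caffe's 'inv' policy."""
    return base_lr * (1.0 + gamma * iteration) ** (-power)

for it in (0, 1_000, 10_000, 100_000, max_iter):
    print(f"iter {it:>7d}: lr = {inv_lr(it):.6f}")
```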
By applying image processing algorithms such as binarization to the output normal leukocyte fusion image, the parameters of the normal white blood cells can be obtained: a nuclear area of 1581 μm², a cytoplasmic area of 2033 μm², a nuclear-cytoplasmic ratio of 1/1.3, a gray density of 77.8333, and an information entropy of 7.6075.
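A hedged sketch of how these parameters could be measured from the fused image by binarization is shown below; the two gray thresholds and the pixel-to-μm² calibration are assumptions, since the patent reports the measured values but not the thresholds or scale used.

```python
import numpy as np

def measure_cell(fused, nucleus_thresh=80, cell_thresh=160, um2_per_px=1.0):
    """Estimate nuclear area, cytoplasmic area, nuclear-cytoplasmic ratio,
    mean gray density and information entropy of a fused grayscale cell image.

    Assumptions: both binarization thresholds and the pixel-to-um^2 scale are
    illustrative; the patent does not disclose them.
    """
    fused = np.asarray(fused, dtype=np.uint8)
    nucleus_mask = fused < nucleus_thresh          # darkest region = nucleus
    cell_mask = fused < cell_thresh                # nucleus plus cytoplasm
    nucleus_area = nucleus_mask.sum() * um2_per_px
    cytoplasm_area = (cell_mask.sum() - nucleus_mask.sum()) * um2_per_px
    nc_ratio = nucleus_area / cytoplasm_area
    gray_density = fused[cell_mask].mean()         # mean gray level in the cell
    # Information entropy from the normalised gray-level histogram.
    hist = np.bincount(fused.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return nucleus_area, cytoplasm_area, nc_ratio, gray_density, entropy
```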
The cell low-resolution image fusion method based on the convolutional neural network produces a fused image that follows the cell type present in the larger proportion of the input, which enables the detection of cytopathic changes. To verify the accuracy of this function, 33 diseased leukocyte low-resolution images and 33 normal leukocyte low-resolution images prepared in step 3) of this embodiment were mixed and tested:
all the images are divided into three groups of test images, as shown in fig. 6, the first group of test images are 11 diseased leukocyte low-resolution images, as shown in fig. 7, the second group of test images are 5 diseased leukocyte low-resolution images and 6 normal leukocyte low-resolution images, as shown in fig. 8, the third group of test images are 9 diseased leukocyte low-resolution images and 2 normal leukocyte low-resolution images, as shown in fig. 9, as shown in fig. 10, as shown in fig. 11, as fused images of the first group of test images, as fused images of the second group of test images, as fused images of the third group of test images. It can be seen from the three fused images that if all the white blood cells of the pathological changes are input, the fused result is still the white blood cells of the pathological changes, and if the white blood cells of the pathological changes are mixed with the normal white blood cells according to a certain proportion, the fused result shows the cell morphology with large proportion, which can prove that the result of the fused cell image is basically obeyed to the statistical rule.
The fused image finally obtained by the cell low-resolution image fusion method based on the convolutional neural network carries the common visual appearance and structural features of the input cells. To verify this property, the following test was carried out:
the 11 normal leukocyte low-resolution images shown in fig. 12 are acquired by the lens-free cell acquisition system and input into the trained fusion model for fusion, and the obtained fusion images are shown in fig. 13, so that it is obvious that the cell fusion images in fig. 13 combine the characteristics of the 11 cell low-resolution images in fig. 12, and the correctness of the cell fusion model applied to the actual system is effectively verified.

Claims (5)

1. A cell low-resolution image fusion method based on a convolutional neural network is characterized by comprising the following steps:
step 1) acquiring a high-resolution image of a cell through an optical microscope, and segmenting the high-resolution image of the cell by using an image segmentation algorithm to obtain a high-resolution image of a target cell;
step 2) randomly selecting two target cell high-resolution images, and extracting the images on the brightness channels of the two target cell high-resolution images to serve as the label of the convolutional neural network training set and the label of the test set, respectively;
step 3) down-sampling the target cell high-resolution images remaining after step 2 with the Bicubic algorithm to obtain target cell low-resolution images;
step 4) from the target cell low-resolution images, selecting 3/5 of them as convolutional neural network training data and extracting the images on the brightness channels of all training data to form the training set; using 1/5 of them as convolutional neural network test data and extracting the images on the brightness channels of all test data to form the test set; and using the remaining 1/5 as convolutional neural network model verification data and extracting the images on the brightness channels of all verification data to form the verification set;
step 5) building a CFFnet convolutional neural network on the deep learning framework caffe; the CFFnet convolutional neural network comprises a data input layer, convolutional layers, a deconvolution layer and an output layer; setting the training parameters; the data input layer takes the label of the training set, the label of the test set, the training set and the test set; the CFFnet convolutional neural network is trained and tested on the data in the training set, iterating continuously during training until it converges; the converged CFFnet convolutional neural network is the cell fusion model, its output layer outputs the fused image, and the fusion of the cell low-resolution images is completed.
2. The method according to claim 1, wherein the step 1) of segmenting the cell high-resolution images with an image segmentation algorithm to obtain target cell high-resolution images specifically comprises: computing the gray-value statistical graph, i.e. the gray histogram, of the acquired cell high-resolution image with matlab software; determining the gray segmentation threshold of the target cells from the distribution of gray values in the gray histogram; separating the target cells from the background by threshold segmentation; and finally cropping out each single target cell according to the pixel position of its center point to obtain the target cell high-resolution images.
3. The method as claimed in claim 1, wherein step 3) specifically comprises: for each target cell high-resolution image remaining after step 2, finding the 16 nearest pixels around each target pixel, calculating the weight of each corresponding pixel with the basis function, then obtaining the pixel value of the target point, and forming the target cell low-resolution image from the pixel values of all target points.
4. The cell low-resolution image fusion method based on a convolutional neural network of claim 1, wherein the training parameters in step 5) comprise the base learning rate, the training momentum, the learning strategy mode and its power value, the weight decay term and the maximum number of iterations.
5. The cell low-resolution image fusion method based on a convolutional neural network as claimed in claim 1, wherein the activation function of the CFFnet convolutional neural network in step 5) is the PReLU activation function.
CN201910044863.1A 2019-01-17 2019-01-17 Cell low-resolution image fusion method based on convolutional neural network Active CN110598724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910044863.1A CN110598724B (en) 2019-01-17 2019-01-17 Cell low-resolution image fusion method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910044863.1A CN110598724B (en) 2019-01-17 2019-01-17 Cell low-resolution image fusion method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN110598724A (en) 2019-12-20
CN110598724B (en) 2022-09-23

Family

ID=68852447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910044863.1A Active CN110598724B (en) 2019-01-17 2019-01-17 Cell low-resolution image fusion method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN110598724B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017219263A1 (en) * 2016-06-22 2017-12-28 中国科学院自动化研究所 Image super-resolution enhancement method based on bidirectional recursion convolution neural network
CN107610194A (en) * 2017-08-14 2018-01-19 成都大学 MRI super resolution ratio reconstruction method based on Multiscale Fusion CNN
CN108062744A (en) * 2017-12-13 2018-05-22 中国科学院大连化学物理研究所 A kind of mass spectrum image super-resolution rebuilding method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘永信等 (Liu Yongxin et al.): "Research on image super-resolution reconstruction technology based on deep learning", 《科技与创新》 (Technology and Innovation) *
刘鹏飞等 (Liu Pengfei et al.): "Image super-resolution reconstruction based on convolutional neural networks", 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111458269A (en) * 2020-05-07 2020-07-28 厦门汉舒捷医疗科技有限公司 Artificial intelligent identification method for peripheral blood lymph micronucleus cell image
CN113435384A (en) * 2021-07-07 2021-09-24 中国人民解放军国防科技大学 Target detection method, device and equipment for medium-low resolution optical remote sensing image

Also Published As

Publication number Publication date
CN110598724B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
CN113011485B (en) Multi-mode multi-disease long-tail distribution ophthalmic disease classification model training method and device
CN109190540B (en) Biopsy region prediction method, image recognition device, and storage medium
CN106934798B (en) Diabetic retinopathy classification and classification method based on deep learning
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
CN110010219A (en) Optical coherence tomography image retinopathy intelligent checking system and detection method
CN109670510A (en) A kind of gastroscopic biopsy pathological data screening system and method based on deep learning
CN107767935A (en) Medical image specification processing system and method based on artificial intelligence
CN106530295A (en) Fundus image classification method and device of retinopathy
CN109102491A (en) A kind of gastroscope image automated collection systems and method
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN107368671A (en) System and method are supported in benign gastritis pathological diagnosis based on big data deep learning
CN111079620B (en) White blood cell image detection and identification model construction method and application based on transfer learning
CN112101424B (en) Method, device and equipment for generating retinopathy identification model
KR20190105180A (en) Apparatus for Lesion Diagnosis Based on Convolutional Neural Network and Method thereof
CN110781953B (en) Lung cancer pathological section classification method based on multi-scale pyramid convolution neural network
CN110276763A (en) It is a kind of that drawing generating method is divided based on the retinal vessel of confidence level and deep learning
CN102567734A (en) Specific value based retina thin blood vessel segmentation method
CN110598724B (en) Cell low-resolution image fusion method based on convolutional neural network
KR20190087681A (en) A method for determining whether a subject has an onset of cervical cancer
CN110047075A (en) A kind of CT image partition method based on confrontation network
CN113946217B (en) Intelligent auxiliary evaluation system for enteroscope operation skills
CN113269799A (en) Cervical cell segmentation method based on deep learning
Cai et al. Identifying architectural distortion in mammogram images via a se-densenet model and twice transfer learning
CN113506274B (en) Detection system for human cognitive condition based on visual saliency difference map
CN113160151B (en) Panoramic sheet decayed tooth depth identification method based on deep learning and attention mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant