CN113591854A - Low-redundancy quick reconstruction method of plankton hologram

Low-redundancy quick reconstruction method of plankton hologram

Info

Publication number
CN113591854A
CN113591854A
Authority
CN
China
Prior art keywords
network
reconstruction
plankton
hologram
loss
Prior art date
Legal status
Granted
Application number
CN202110922020.4A
Other languages
Chinese (zh)
Other versions
CN113591854B (en)
Inventor
王楠
胡文杰
张兴
杨学文
崔燕妮
辛国玲
Current Assignee
Ocean University of China
Original Assignee
Ocean University of China
Priority date
Filing date
Publication date
Application filed by Ocean University of China filed Critical Ocean University of China
Priority to CN202110922020.4A priority Critical patent/CN113591854B/en
Publication of CN113591854A publication Critical patent/CN113591854A/en
Application granted granted Critical
Publication of CN113591854B publication Critical patent/CN113591854B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/12 - Edge-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20024 - Filtering details
    • G06T2207/20032 - Median filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20112 - Image segmentation details
    • G06T2207/20132 - Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer vision and specifically discloses a low-redundancy quick reconstruction method for plankton holograms, comprising the steps of: S1: collecting digital holograms of plankton to generate a source image data set; S2: calibrating the plankton in each holographic source image of the source image data set to obtain a calibration data set; S3: training the constructed hologram reconstruction network with the calibration data set, and testing the trained hologram reconstruction network with the source image data set; S4: using the tested hologram reconstruction network to reconstruct further plankton holographic source images. The hologram reconstruction network autonomously detects effective target areas and reconstructs the regions of interest. Compared with traditional numerical reconstruction methods, the trained hologram reconstruction network reconstructs plankton holograms quickly and eliminates redundant information efficiently, and is expected to play an important role in marine environment monitoring.

Description

Low-redundancy quick reconstruction method of plankton hologram
Technical Field
The invention relates to the technical field of computer vision, in particular to a low-redundancy quick reconstruction method of a plankton hologram.
Background
The ocean plays an important role in regulating the Earth's climate and sustaining the global life-support system. Plankton are an important component of marine ecosystems and serve as indicators of climate change and of changes in the marine ecological environment. Holographic imaging has become an important marine analysis tool and is used for the detection of marine plankton. Reconstructing a hologram yields information such as the state and particle-size spectrum of the plankton, providing data for ocean monitoring. However, effective targets in the source image data set are sparse and heavily contaminated by noise. Traditional numerical reconstruction must scan-reconstruct at many diffraction depths and then search the resulting stack of reconstructed images for effective targets; it is time-consuming and requires a large amount of storage and computation during the scanning reconstruction, so it cannot meet the requirements of in-situ detection. A plankton hologram processing method that directly outputs the optimal reconstruction result and supports embedded computation is therefore of great significance for in-situ holographic detection of plankton.
Disclosure of Invention
The invention provides a low-redundancy quick reconstruction method for plankton holograms, which addresses the technical problem of how to reconstruct captured holographic source images efficiently, quickly, and with low redundancy.
In order to solve this technical problem, the invention provides a low-redundancy quick reconstruction method for plankton holograms, comprising the steps of:
S1: collecting digital holograms of plankton to generate a source image data set;
S2: calibrating the plankton in each holographic source image of the source image data set to obtain a calibration data set;
S3: training the constructed hologram reconstruction network with the calibration data set, and testing the trained hologram reconstruction network with the source image data set;
S4: using the tested hologram reconstruction network to reconstruct further plankton holographic source images.
Further, the step S2 specifically includes the steps of:
S21: reconstructing each holographic source image in the source image data set by a convolution reconstruction method to obtain a plurality of reconstructed images of the holographic source image;
S22: preprocessing the plurality of reconstructed images of each holographic source image to obtain a contour curve of the plankton, and fitting a rectangular box to the contour curve to obtain the position information of the plankton, thereby obtaining a target detection data set;
S23: cropping the plurality of reconstructed images of each holographic source image according to the position information, evaluating the sharpness of the cropped images with a focus evaluation function, and selecting the sharpest image as the ROI reconstructed true value image of the holographic source image, thereby generating a target reconstruction data set.
Further, the step S22 specifically includes the steps of:
S221: denoising each obtained reconstructed image with a median filter having a 3 x 3 filter window, and processing the filtered image with an adaptive-threshold binarization algorithm to obtain a binary image;
S222: removing noise points from the binary image with a morphological opening having a 2 x 2 window, and performing edge detection with the Canny operator to obtain an edge curve;
S223: dilating the edge curve with a 2 x 2 structuring element so that the edge curve closes completely, and re-extracting the plankton contour with the findContours operator;
S224: fitting a bounding rectangle to the extracted plankton contour to obtain the position information of the plankton, thereby generating the target detection data set.
Further, the step S23 specifically includes the steps of:
S231: cropping the reconstructed image according to the position information obtained in step S22 to obtain an ROI reconstructed image;
S232: processing the ROI reconstructed image with a Gamma transformation;
S233: evaluating the sharpness of the ROI reconstructed image with seven focus evaluation functions, including Brenner, Laplacian, SMD2, Variance, Energy, and Vollath, and taking the mode of the seven evaluation results as the output result for the ROI reconstructed image;
S234: taking the ROI reconstructed image with the largest mode as the ROI reconstructed true value image.
Further, the step S3 specifically includes the steps of:
S31: building a hologram reconstruction network; the hologram reconstruction network comprises a target detection unit and a target reconstruction unit connected in sequence;
S32: training the target detection unit with the target detection data set to obtain the position information of the ROI;
S33: training the target reconstruction unit with the target reconstruction data set to generate a reconstructed image of the ROI;
S34: testing the trained hologram reconstruction network with the source image data set to complete reconstruction of the hologram.
Further, the target detection unit adopts a Faster R-CNN network, which comprises an RPN and a Fast R-CNN network; the RPN consists of the feature extraction network CNN-1 and, in parallel, a first classification network Softmax-1 and a first bounding-box regression network Regressor-1; the Fast R-CNN network consists of the feature extraction network CNN-1, an ROI Pooling layer, and a second bounding-box regression network Regressor-2; the RPN and the Fast R-CNN network share the feature extraction network CNN-1;
the target reconstruction unit adopts a U-Net network, and the U-Net network comprises four down-sampling operations and four up-sampling operations.
Further, the step S32 specifically includes the steps of:
S321: the RPN generates candidate boxes for an input image and computes the loss function L_RPN during generation, then updates its parameters by gradient backpropagation;
S322: the Fast R-CNN network is trained with the candidate boxes provided by the RPN, its loss function L_Fast-RCNN is computed from the bounding-box regression parameters of the candidate boxes, and its parameters are then updated by gradient backpropagation;
S323: the parameters of the feature extraction network CNN-1 are fixed, and the parameters in the RPN and the Fast R-CNN network are fine-tuned.
Further, in the step S321, the loss function L_RPN is computed as:

L_{RPN}(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{loc}(t_i, t_i^*)

where i denotes the index of a candidate box; p_i is the predicted probability that the i-th candidate box is a target region, and p_i^* is the classification ground truth of whether the candidate box contains a target; t_i is the bounding-box regression parameter of the i-th candidate box, and t_i^* is the corresponding ground-truth regression parameter; N_cls is the total number of samples in a mini-batch, N_reg is the number of anchors, and λ is the balance coefficient, with N_cls = 256, N_reg = 2400, and λ = 10; the loss L_cls is the cross-entropy loss, and the loss L_loc is the smooth L1 loss.
In the step S322, the loss function L_Fast-RCNN has the same expression as the loss function L_RPN.
Further, the step S33 specifically includes the steps of:
S331: the U-Net network generates a reconstructed image of the input image;
S332: the loss function L between the reconstructed image and the ROI reconstructed true value image of step S23 is computed, and the parameters are updated by gradient backpropagation.
Further, in step S332, the loss function L is computed as:

L = Loss_MSE + Loss_VGG

where Loss_MSE denotes the MSE loss and Loss_VGG denotes the perceptual loss.
The MSE loss is computed as:

Loss\_MSE = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( D(i,j) - D_{gt}(i,j) \right)^2

where W and H denote the width and height of the reconstructed image and of the ROI reconstructed true value image, (i, j) denotes the pixel in the i-th column and j-th row, D denotes a pixel value of the reconstructed image, and D_gt denotes a pixel value of the ROI reconstructed true value image.
The perceptual loss is computed as:

Loss\_VGG = \alpha_1 \, loss\_vgg(1,2) + \alpha_2 \, loss\_vgg(2,2) + \alpha_3 \, loss\_vgg(3,3) + \alpha_4 \, loss\_vgg(4,3)

where

loss\_vgg(i,j) = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \phi_{i,j}(I)_{x,y} - \phi_{i,j}(I_{gt})_{x,y} \right)^2

denotes the loss on the feature map output by the j-th convolutional layer before the i-th max-pooling layer of the VGG-16 network used in the U-Net loss; W_{i,j} and H_{i,j} denote the width and height of that feature map; \phi_{i,j}(I)_{x,y} denotes the feature map obtained by the j-th convolution before the i-th max-pooling layer of the VGG-16 network, with (x, y) denoting the pixel in the x-th row and y-th column; I denotes the reconstructed image and I_gt denotes the true value image; \alpha_1 = 1, \alpha_2 = 10, \alpha_3 = 1 \times 10^6, and \alpha_4 = 5 \times 10^9 are the corresponding balance coefficients.
According to the low-redundancy quick reconstruction method for plankton holograms provided by the invention, the hologram reconstruction network Holo-Net is constructed, trained, and tested so that it can autonomously detect effective target areas and reconstruct the regions of interest. Compared with traditional numerical reconstruction methods, the trained hologram reconstruction network reconstructs plankton holograms quickly and eliminates redundant information efficiently, and is expected to play an important role in marine environment monitoring. Hologram reconstruction yields information such as the state and particle-size spectrum of the plankton, provides data for ocean monitoring, and has guiding value for follow-up research on changes in the marine ecological environment and global climate.
Drawings
FIG. 1 is a flow chart of the steps of a method for low-redundancy fast reconstruction of plankton holograms according to an embodiment of the present invention;
FIG. 2 is an architecture diagram of a hologram reconstruction network provided by an embodiment of the present invention;
FIG. 3 is a comparison of a hologram provided by an embodiment of the present invention before and after reconstruction.
Detailed Description
The embodiments of the present invention are described in detail below with reference to the accompanying drawings, which are provided for illustration only and are not to be construed as limiting the invention; many variations are possible without departing from its spirit and scope.
In order to reconstruct captured holographic source images effectively, quickly, and with low redundancy, an embodiment of the present invention provides a low-redundancy quick reconstruction method for plankton holograms which, as shown in the flow chart of FIG. 1, comprises the steps of:
S1: collecting digital holograms of plankton to generate a source image data set;
S2: calibrating the plankton in each holographic source image of the source image data set to obtain a calibration data set;
S3: training the constructed hologram reconstruction network with the calibration data set, and testing the trained hologram reconstruction network with the source image data set;
S4: using the tested hologram reconstruction network to reconstruct further plankton holographic source images.
For step S1, the holographic source images in this embodiment were captured with a HoloCam-II@200 holographic camera.
For step S2, it specifically includes the steps of:
S21: reconstructing each holographic source image in the source image data set by a convolution reconstruction method to obtain a plurality of reconstructed images of that holographic source image (30 in this embodiment; the number can be adjusted to the actual situation); a numerical sketch of this depth-scanning reconstruction is given after this list;
S22: preprocessing the plurality of reconstructed images of each holographic source image to obtain a contour curve of the plankton, and fitting a rectangular box to the contour curve to obtain the position information of the plankton, thereby obtaining a target detection data set;
S23: cropping the plurality of reconstructed images of each holographic source image according to the position information, evaluating the sharpness of the cropped images with a focus evaluation function, and selecting the sharpest image as the ROI reconstructed true value image of that holographic source image, thereby generating a target reconstruction data set.
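The patent names only a "convolution reconstruction method" for S21 without specifying the kernel. The following is a minimal numerical sketch of one common convolution-type choice, the angular spectrum method, given purely to illustrate the depth-scanning step; the wavelength, pixel pitch, and depth range in the usage comment are assumed values, not taken from the patent.

    import numpy as np

    def angular_spectrum_reconstruct(hologram, wavelength, pitch, z):
        """Reconstruct a hologram at diffraction depth z (angular spectrum method)."""
        H, W = hologram.shape
        fx = np.fft.fftfreq(W, d=pitch)          # spatial frequencies along x
        fy = np.fft.fftfreq(H, d=pitch)          # spatial frequencies along y
        FX, FY = np.meshgrid(fx, fy)
        # Propagation transfer function; evanescent components are suppressed.
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kernel = np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0)))
        kernel[arg < 0] = 0
        spectrum = np.fft.fft2(hologram.astype(np.complex128))
        field = np.fft.ifft2(spectrum * kernel)
        return np.abs(field)                      # amplitude image at depth z

    # Depth scan as in S21: 30 reconstructed images per source hologram.
    # wavelength, pitch, and depths below are assumptions, not patent values.
    # stack = [angular_spectrum_reconstruct(holo, 532e-9, 3.45e-6, z)
    #          for z in np.linspace(0.01, 0.30, 30)]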
For step S22, it specifically includes the steps of:
S221: denoising each obtained reconstructed image with a median filter having a 3 x 3 filter window, and processing the filtered image with an adaptive-threshold binarization algorithm to obtain a binary image;
S222: removing noise points from the binary image with a morphological opening having a 2 x 2 window, and performing edge detection with the Canny operator to obtain an edge curve;
S223: dilating the edge curve with a 2 x 2 structuring element so that the edge curve closes completely, and re-extracting the plankton contour with the findContours operator;
S224: fitting a bounding rectangle to the extracted plankton contour to obtain the position information of the plankton, thereby generating the target detection data set.
Compared with other methods, the preprocessing chain of step S22 is better suited to holograms with many noise points; the parameters and window sizes above were selected through repeated experimental comparison and give the most effective contour extraction. A minimal OpenCV sketch of this chain is given below.
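The sketch assumes 8-bit grayscale reconstructed images; the adaptive-threshold block size and constant and the Canny thresholds are assumptions, since the patent names the algorithms but not those parameters.

    import cv2
    import numpy as np

    def detect_plankton_boxes(recon):
        """S221-S224: contour extraction and bounding boxes on one reconstructed image."""
        # S221: 3x3 median filter, then adaptive-threshold binarization.
        den = cv2.medianBlur(recon, 3)
        binary = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY_INV, 11, 2)
        # S222: 2x2 opening to remove noise points, then Canny edge detection.
        kernel = np.ones((2, 2), np.uint8)
        opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
        edges = cv2.Canny(opened, 50, 150)
        # S223: dilate with a 2x2 structuring element to close the edge curves,
        # then re-extract the contours with findContours.
        closed = cv2.dilate(edges, kernel)
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # S224: bounding rectangles give the plankton position information.
        return [cv2.boundingRect(c) for c in contours]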
For step S23, it specifically includes the steps of:
S231: cropping the reconstructed image according to the position information obtained in step S22 to obtain an ROI reconstructed image;
S232: processing the ROI reconstructed image with a Gamma transformation;
S233: evaluating the sharpness of the ROI reconstructed image with seven focus evaluation functions, including Brenner, Laplacian, SMD2, Variance, Energy, and Vollath, and taking the mode of the seven evaluation results as the output result for the ROI reconstructed image;
S234: taking the ROI reconstructed image with the largest mode as the ROI reconstructed true value image.
The advantage of step S233 is that it avoids the result bias that a single focus evaluation function would introduce; given the high noise typical of digital holograms, seven indices well suited to sharpness evaluation of holograms were selected, so the evaluation fits the hologram task and the plankton contour in the ROI reconstructed true value image is distinctly clear. A sketch of this voting scheme is given below.
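The sketch implements four of the seven named focus metrics (Brenner, Laplacian, Variance, Energy of gradient); the remaining metrics are omitted, and the Gamma value is an assumption.

    import cv2
    import numpy as np
    from collections import Counter

    def brenner(img):    # sum of squared two-pixel differences
        d = img[2:, :] - img[:-2, :]
        return float((d * d).sum())

    def laplacian(img):  # variance of the Laplacian response
        return float(cv2.Laplacian(img, cv2.CV_64F).var())

    def variance(img):   # gray-level variance
        return float(img.var())

    def energy(img):     # energy of gradient
        gx = img[1:, :-1] - img[:-1, :-1]
        gy = img[:-1, 1:] - img[:-1, :-1]
        return float((gx * gx + gy * gy).sum())

    def pick_truth_image(roi_stack, gamma=0.8):
        """S232-S234: gamma-correct each ROI crop, let every metric vote for the
        sharpest depth, and return the index chosen most often (the mode)."""
        stack = [np.power(r.astype(np.float64) / 255.0, gamma) for r in roi_stack]
        votes = [int(np.argmax([m(img) for img in stack]))
                 for m in (brenner, laplacian, variance, energy)]
        return Counter(votes).most_common(1)[0][0]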
For step S3, it specifically includes the steps of:
S31: building the hologram reconstruction network Holo-Net; the hologram reconstruction network Holo-Net comprises a target detection unit and a target reconstruction unit connected in sequence;
S32: training the target detection unit with the target detection data set to obtain the position information of the ROI;
S33: training the target reconstruction unit with the target reconstruction data set to generate a reconstructed image of the ROI;
S34: testing the trained hologram reconstruction network Holo-Net with the source image data set to complete reconstruction of the hologram.
For step S31, as shown in FIG. 2, the hologram reconstruction network Holo-Net comprises a target detection unit and a target reconstruction unit. The target detection unit adopts a Faster R-CNN network, which comprises an RPN and a Fast R-CNN network. The RPN consists of the feature extraction network CNN-1 and, in parallel, a first classification network Softmax-1 and a first bounding-box regression network Regressor-1; the Fast R-CNN network consists of the feature extraction network CNN-1, an ROI Pooling layer, and a second bounding-box regression network Regressor-2; the RPN and the Fast R-CNN network share the feature extraction network CNN-1. The target reconstruction unit adopts a U-Net network comprising four down-sampling and four up-sampling operations; a skeleton sketch of this unit is given below.
Compared with whole-image reconstruction, the Holo-Net used in this embodiment reconstructs only the regions of interest that contain plankton, which effectively removes noise interference, reduces reconstruction redundancy, raises reconstruction speed, and yields high-quality reconstructions. The traditional numerical reconstruction method consumes a great deal of time and storage and cannot meet in-situ detection requirements, whereas Holo-Net directly outputs the optimal reconstruction result and can run as an embedded computation, which is of great significance for in-situ holographic detection of plankton.
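A minimal PyTorch skeleton of the target reconstruction unit, following the stated U-Net layout with four down-sampling and four up-sampling stages; the channel widths and the single-channel input and output are assumptions.

    import torch
    import torch.nn as nn

    def block(cin, cout):
        # two 3x3 conv + ReLU layers, the usual U-Net double convolution
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

    class UNet(nn.Module):
        def __init__(self, ch=(64, 128, 256, 512, 1024)):
            super().__init__()
            self.downs = nn.ModuleList()
            cin = 1
            for c in ch:                                 # 4 poolings between 5 blocks
                self.downs.append(block(cin, c))
                cin = c
            self.pool = nn.MaxPool2d(2)
            self.upconvs = nn.ModuleList()
            self.ups = nn.ModuleList()
            for c in reversed(ch[:-1]):                  # 4 up-sampling stages
                self.upconvs.append(nn.ConvTranspose2d(cin, c, 2, stride=2))
                self.ups.append(block(2 * c, c))
                cin = c
            self.head = nn.Conv2d(cin, 1, 1)             # reconstructed ROI image

        def forward(self, x):                            # input H, W divisible by 16
            skips = []
            for i, down in enumerate(self.downs):
                x = down(x)
                if i < len(self.downs) - 1:
                    skips.append(x)
                    x = self.pool(x)
            for up, conv, s in zip(self.upconvs, self.ups, reversed(skips)):
                x = conv(torch.cat([up(x), s], dim=1))   # skip connection by concat
            return self.head(x)

The skip connections concatenate encoder features with decoder features at matching resolution, the standard U-Net design choice.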
For step S32, it specifically includes the steps of:
S321: the RPN generates candidate boxes for the input image and computes the loss function L_RPN during generation, then updates its parameters by gradient backpropagation;
S322: the Fast R-CNN network is trained with the candidate boxes provided by the RPN, and its loss function L_Fast-RCNN is computed from the bounding-box regression parameters of the candidate boxes, after which its parameters are updated by gradient backpropagation;
S323: the parameters of the feature extraction network CNN-1 are fixed, and the parameters in the RPN and the Fast R-CNN network are fine-tuned.
In step S321, the loss function L_RPN is computed as:

L_{RPN}(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{loc}(t_i, t_i^*)

where i denotes the index of a candidate box; p_i is the predicted probability that the i-th candidate box is a target region, and p_i^* is the classification ground truth of whether the candidate box contains a target; t_i is the bounding-box regression parameter of the i-th candidate box, and t_i^* is the corresponding ground-truth regression parameter; N_cls = 256 is the number of candidate boxes sampled, N_reg = 2400, and λ = 10 is the balance coefficient; the loss L_cls is the cross-entropy loss, and the loss L_loc is the smooth L1 loss.
In step S322, the loss function L_Fast-RCNN has the same expression as L_RPN; a sketch of L_RPN is given below.
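A minimal PyTorch sketch of the L_RPN formula above, using cross-entropy for L_cls and smooth L1 for L_loc; the tensor shapes are assumptions.

    import torch
    import torch.nn.functional as F

    def rpn_loss(p, p_star, t, t_star, n_cls=256, n_reg=2400, lam=10.0):
        """L_RPN = (1/N_cls) * sum L_cls + lambda * (1/N_reg) * sum p* L_loc.

        p:      (N, 2) class logits per candidate box (background / target)
        p_star: (N,)   ground-truth labels in {0, 1}
        t:      (N, 4) predicted bounding-box regression parameters
        t_star: (N, 4) ground-truth regression parameters
        """
        # Classification term: cross-entropy over all sampled boxes.
        l_cls = F.cross_entropy(p, p_star, reduction="sum") / n_cls
        # Localization term: smooth L1, counted only for positive boxes (p* = 1).
        pos = p_star.float().unsqueeze(1)
        l_loc = (pos * F.smooth_l1_loss(t, t_star, reduction="none")).sum() / n_reg
        return l_cls + lam * l_loc

    # Usage sketch (random tensors just to exercise the function):
    # p = torch.randn(256, 2); p_star = torch.randint(0, 2, (256,))
    # t = torch.randn(256, 4); t_star = torch.randn(256, 4)
    # loss = rpn_loss(p, p_star, t, t_star)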
The target detection process is as follows: features are first extracted by the CNN-1 convolutional neural network to generate feature maps; the RPN separates foreground from background through Softmax-1 (a binary classification) and adjusts the anchor boxes through Regressor-1; NMS (non-maximum suppression) then keeps the 2000 proposal boxes with the highest class-probability scores for training the Fast R-CNN network.
The Fast R-CNN network maps the proposal boxes output by the RPN onto the CNN-1 feature map, unifies their sizes through the ROI Pooling layer, and further refines the boxes through Regressor-2. An off-the-shelf stand-in illustrating this structure is sketched below.
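The patent builds its detector around its own CNN-1 backbone; as a hedged stand-in, torchvision's reference Faster R-CNN (torchvision >= 0.13 assumed) exhibits the same RPN plus ROI-head structure over a shared backbone, here configured for a single "plankton" foreground class.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Reference Faster R-CNN (RPN + Fast R-CNN head over a shared backbone),
    # used here only to illustrate the structure described in the patent.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box predictor: 2 classes = background + plankton.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

    # Training-step sketch: images are 3xHxW tensors in [0, 1]; targets carry
    # the rectangular boxes produced in S224.
    images = [torch.rand(3, 512, 512)]
    targets = [{"boxes": torch.tensor([[100., 120., 180., 200.]]),
                "labels": torch.tensor([1])}]
    model.train()
    loss_dict = model(images, targets)   # returns RPN and ROI-head losses
    loss = sum(loss_dict.values())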
For step S33, it specifically includes the steps of:
S331: the U-Net network generates a reconstructed image of the input image;
S332: the loss function L between the reconstructed image and the ROI reconstructed true value image of step S23 is computed, and the parameters are updated by gradient backpropagation.
In step S332, the loss function L is computed as:

L = Loss_MSE + Loss_VGG

where Loss_MSE denotes the MSE loss and Loss_VGG denotes the perceptual loss.
The MSE loss is computed as:

Loss\_MSE = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( D(i,j) - D_{gt}(i,j) \right)^2

where W and H denote the width and height of the reconstructed image and of the ROI reconstructed true value image, (i, j) denotes the pixel in the i-th column and j-th row, D denotes a pixel value of the reconstructed image, and D_gt denotes a pixel value of the ROI reconstructed true value image.
The perceptual loss is computed as:

Loss\_VGG = \alpha_1 \, loss\_vgg(1,2) + \alpha_2 \, loss\_vgg(2,2) + \alpha_3 \, loss\_vgg(3,3) + \alpha_4 \, loss\_vgg(4,3)

where

loss\_vgg(i,j) = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \phi_{i,j}(I)_{x,y} - \phi_{i,j}(I_{gt})_{x,y} \right)^2

denotes the loss on the feature map output by the j-th convolutional layer before the i-th max-pooling layer of the VGG-16 network used in the U-Net loss; W_{i,j} and H_{i,j} denote the width and height of that feature map; \phi_{i,j}(I)_{x,y} denotes the feature map obtained by the j-th convolution (after activation) before the i-th max-pooling layer of the VGG-16 network, with (x, y) denoting the pixel in the x-th row and y-th column; I denotes the reconstructed image and I_gt denotes the true value image; \alpha_1 = 1, \alpha_2 = 10, \alpha_3 = 1 \times 10^6, and \alpha_4 = 5 \times 10^9 are the corresponding balance coefficients.
The VGG loss ensures the generation of high-frequency image information while the MSE loss ensures the low-frequency information; combining the two yields generated images of better quality. The balance coefficients were determined through repeated experiments on our own project data and give the best effect. A sketch of this combined loss is given below.
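A minimal PyTorch sketch of L = Loss_MSE + Loss_VGG, taking the VGG-16 feature maps after relu1_2, relu2_2, relu3_3, and relu4_3 (the activations of conv layers (1,2), (2,2), (3,3), and (4,3)); the grayscale-to-3-channel replication and the mean reduction of MSELoss (which folds the 1/(W_{i,j} H_{i,j}) normalization into a mean over all elements) are assumptions and approximations.

    import torch
    import torch.nn as nn
    import torchvision

    # Indices of relu1_2, relu2_2, relu3_3, relu4_3 in torchvision's VGG-16
    # `features` module, matching phi_{1,2}, phi_{2,2}, phi_{3,3}, phi_{4,3}.
    _LAYERS = (3, 8, 15, 22)
    _ALPHAS = (1.0, 10.0, 1e6, 5e9)

    class HoloLoss(nn.Module):
        def __init__(self):
            super().__init__()
            vgg = torchvision.models.vgg16(weights="DEFAULT").features.eval()
            for p in vgg.parameters():
                p.requires_grad_(False)        # fixed feature extractor
            self.vgg = vgg
            self.mse = nn.MSELoss()

        def _feats(self, x):
            # Grayscale ROI images replicated to 3 channels for VGG (assumption).
            x = x.repeat(1, 3, 1, 1)
            out = []
            for i, layer in enumerate(self.vgg):
                x = layer(x)
                if i in _LAYERS:
                    out.append(x)
                if i == _LAYERS[-1]:
                    break
            return out

        def forward(self, recon, truth):
            loss = self.mse(recon, truth)                      # Loss_MSE
            for a, fr, ft in zip(_ALPHAS, self._feats(recon), self._feats(truth)):
                loss = loss + a * self.mse(fr, ft)             # Loss_VGG terms
            return loss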
Because effective targets in the source image data set are sparse and heavily contaminated by noise, reconstructing the whole hologram is not only time-consuming but also noise-affected, which degrades reconstruction quality. The present method therefore constructs, trains, and tests the hologram reconstruction network Holo-Net, which autonomously detects effective target areas and reconstructs the regions of interest. Compared with traditional numerical reconstruction methods, the trained hologram reconstruction network reconstructs plankton holograms quickly and eliminates redundant information efficiently; the reconstruction effect is shown in FIG. 3. Hologram reconstruction yields information such as the state and particle-size spectrum of the plankton, provides data for ocean monitoring, and has guiding value for follow-up research on changes in the marine ecological environment and global climate.
The above embodiments are preferred embodiments of the present invention, but the invention is not limited to them; any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention is to be regarded as an equivalent and falls within the scope of the invention.

Claims (10)

1. A low-redundancy fast reconstruction method of a plankton hologram is characterized by comprising the following steps:
S1: collecting a digital hologram of plankton to generate a source image data set;
S2: calibrating the plankton in each holographic source image in the source image data set to obtain a calibration data set;
S3: training the built hologram reconstruction network by using the calibration data set, and testing the trained hologram reconstruction network by using the source image data set;
S4: using the tested hologram reconstruction network to reconstruct other plankton holographic source images.
2. The method for low-redundancy fast reconstruction of a plankton hologram according to claim 1, wherein said step S2 specifically comprises the steps of:
S21: reconstructing each holographic source image in the source image data set by a convolution reconstruction method to obtain a plurality of reconstructed images of the holographic source image;
S22: preprocessing the plurality of reconstructed images of each holographic source image to obtain a contour curve of the plankton, and fitting a rectangular box to the contour curve to obtain the position information of the plankton, thereby obtaining a target detection data set;
S23: cropping the plurality of reconstructed images of each holographic source image according to the position information, evaluating the sharpness of the cropped images with a focus evaluation function, and selecting the sharpest image as the ROI reconstructed true value image of the holographic source image, thereby generating a target reconstruction data set.
3. The method for low-redundancy fast reconstruction of a plankton hologram according to claim 2, wherein said step S22 specifically comprises the steps of:
S221: denoising each obtained reconstructed image with a median filter having a 3 x 3 filter window, and processing the filtered image with an adaptive-threshold binarization algorithm to obtain a binary image;
S222: removing noise points from the binary image with a morphological opening having a 2 x 2 window, and performing edge detection with the Canny operator to obtain an edge curve;
S223: dilating the edge curve with a 2 x 2 structuring element so that the edge curve closes completely, and re-extracting the plankton contour with the findContours operator;
S224: fitting a bounding rectangle to the extracted plankton contour to obtain the position information of the plankton, thereby generating a target detection data set.
4. The method for low-redundancy fast reconstruction of a plankton hologram according to claim 2, wherein said step S23 specifically comprises the steps of:
S231: cropping the reconstructed image according to the position information obtained in step S22 to obtain an ROI reconstructed image;
S232: processing the ROI reconstructed image with a Gamma transformation;
S233: evaluating the sharpness of the ROI reconstructed image with seven focus evaluation functions, including Brenner, Laplacian, SMD2, Variance, Energy, and Vollath, and taking the mode of the seven evaluation results as the output result for the ROI reconstructed image;
S234: taking the ROI reconstructed image with the largest mode as the ROI reconstructed true value image.
5. The method for low-redundancy fast reconstruction of a plankton hologram according to any one of claims 2 to 4, wherein the step S3 specifically comprises the steps of:
S31: building a hologram reconstruction network; the hologram reconstruction network comprises a target detection unit and a target reconstruction unit connected in sequence;
S32: training the target detection unit by using the target detection data set to obtain the position information of the ROI;
S33: training the target reconstruction unit by using the target reconstruction data set to generate a reconstructed image of the ROI;
S34: testing the trained hologram reconstruction network by using the source image data set to complete reconstruction of the hologram.
6. The method for low-redundancy fast reconstruction of planktonic holograms according to claim 5, wherein:
the target detection unit adopts a Fast RCNN network, and the Fast RCNN network comprises an RPN network and a Fast RCNN network; the RPN consists of a feature extraction network CNN-1, a parallel first classification network Softmax-1 and a first boundary frame regression network Regressor-1; the Fast RCNN network consists of the feature extraction network CNN-1, an ROI Pooling layer and a second bounding box regression network Regressor-2; the RPN network and the Fast RCNN network share the feature extraction network CNN-1; the target reconstruction unit adopts a U-Net network, and the U-Net network comprises four down-sampling operations and four up-sampling operations.
7. The method for low-redundancy fast reconstruction of a plankton hologram according to claim 6, wherein said step S32 specifically comprises the steps of:
S321: the RPN generates candidate boxes for an input image and computes the loss function L_RPN in the course of generation, then updates its parameters by gradient backpropagation;
S322: training the Fast R-CNN network by using the candidate boxes provided by the RPN, computing the loss function L_Fast-RCNN of the Fast R-CNN network according to the bounding-box regression parameters of the candidate boxes, then updating its parameters by gradient backpropagation;
S323: fixing the parameters of the feature extraction network CNN-1, and fine-tuning the parameters in the RPN and the Fast R-CNN network.
8. The method for low-redundancy fast reconstruction of plankton holograms according to claim 7, wherein in said step S321 said loss function L_RPN is computed as:

L_{RPN}(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{loc}(t_i, t_i^*)

wherein i denotes the index of a candidate box; p_i denotes the probability that the i-th candidate box is predicted to be a target region, and p_i^* denotes the classification ground truth of whether the candidate box contains a target; t_i denotes the bounding-box regression parameter of the i-th candidate box, and t_i^* denotes the ground-truth bounding-box regression parameter corresponding to the i-th candidate box; N_cls is the total number of samples in a mini-batch, N_reg is the number of anchors, and λ is the balance coefficient, with N_cls = 256, N_reg = 2400, and λ = 10; the loss L_cls is the cross-entropy loss; the loss L_loc is the smooth L1 loss;
in said step S322, the loss function L_Fast-RCNN has the same expression as the loss function L_RPN.
9. The method for low-redundancy fast reconstruction of a plankton hologram according to claim 8, wherein said step S33 specifically comprises the steps of:
S331: the U-Net network generates a reconstructed image of the input image;
S332: computing the loss function L between the reconstructed image of step S331 and the ROI reconstructed true value image of step S23, and updating the parameters by gradient backpropagation.
10. The method for low-redundancy fast reconstruction of plankton holograms according to claim 9, wherein in said step S332 said loss function L is computed as:

L = Loss_MSE + Loss_VGG

wherein Loss_MSE denotes the MSE loss and Loss_VGG denotes the perceptual loss;
the MSE loss is computed as:

Loss\_MSE = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( D(i,j) - D_{gt}(i,j) \right)^2

wherein W and H denote the width and height of the reconstructed image and of the ROI reconstructed true value image, (i, j) denotes the pixel in the i-th column and j-th row, D denotes a pixel value of the reconstructed image, and D_gt denotes a pixel value of the ROI reconstructed true value image;
the perceptual loss is computed as:

Loss\_VGG = \alpha_1 \, loss\_vgg(1,2) + \alpha_2 \, loss\_vgg(2,2) + \alpha_3 \, loss\_vgg(3,3) + \alpha_4 \, loss\_vgg(4,3)

wherein

loss\_vgg(i,j) = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \phi_{i,j}(I)_{x,y} - \phi_{i,j}(I_{gt})_{x,y} \right)^2

denotes the loss computed on the feature map output by the j-th convolutional layer before the i-th max-pooling layer of the VGG-16 network used in the U-Net loss; W_{i,j} and H_{i,j} denote the width and height of the feature map obtained by the j-th convolution before the i-th max-pooling layer of the VGG-16 network; \phi_{i,j}(I)_{x,y} denotes that feature map, with (x, y) denoting the pixel in the x-th row and y-th column; I denotes the reconstructed image and I_gt denotes the true value image; \alpha_1 = 1, \alpha_2 = 10, \alpha_3 = 1 \times 10^6, and \alpha_4 = 5 \times 10^9 denote the corresponding balance coefficients.
CN202110922020.4A 2021-08-12 2021-08-12 Low-redundancy rapid reconstruction method of plankton hologram Active CN113591854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110922020.4A CN113591854B (en) 2021-08-12 2021-08-12 Low-redundancy rapid reconstruction method of plankton hologram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110922020.4A CN113591854B (en) 2021-08-12 2021-08-12 Low-redundancy rapid reconstruction method of plankton hologram

Publications (2)

Publication Number Publication Date
CN113591854A true CN113591854A (en) 2021-11-02
CN113591854B CN113591854B (en) 2023-09-26

Family

ID=78257369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110922020.4A Active CN113591854B (en) 2021-08-12 2021-08-12 Low-redundancy rapid reconstruction method of plankton hologram

Country Status (1)

Country Link
CN (1) CN113591854B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529679A (en) * 2022-04-19 2022-05-24 清华大学 Method and device for generating a computed holographic field based on a neural radiance field
CN115145138A (en) * 2022-06-14 2022-10-04 浙江大学 Rapid processing method for sparse particle hologram

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
CN110210463A (en) * 2019-07-03 2019-09-06 中国人民解放军海军航空大学 Radar target image detecting method based on Precise ROI-Faster R-CNN
KR20200129314A (en) * 2019-05-08 2020-11-18 전북대학교산학협력단 Object detection in very high-resolution aerial images feature pyramid network
CN112329615A (en) * 2020-11-04 2021-02-05 中国海洋大学 Environment situation evaluation method for autonomous underwater visual target grabbing
CN112529791A (en) * 2020-11-16 2021-03-19 中国海洋大学 Adaptive multifocal restoration method based on plankton digital holographic image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019144575A1 (en) * 2018-01-24 2019-08-01 中山大学 Fast pedestrian detection method and device
KR20200129314A (en) * 2019-05-08 2020-11-18 전북대학교산학협력단 Object detection in very high-resolution aerial images feature pyramid network
CN110210463A (en) * 2019-07-03 2019-09-06 中国人民解放军海军航空大学 Radar target image detecting method based on Precise ROI-Faster R-CNN
CN112329615A (en) * 2020-11-04 2021-02-05 中国海洋大学 Environment situation evaluation method for autonomous underwater visual target grabbing
CN112529791A (en) * 2020-11-16 2021-03-19 中国海洋大学 Adaptive multifocal restoration method based on plankton digital holographic image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张婷婷; 章坚武; 郭春生; 陈华华; 周迪; 王延松; 徐爱华: "A survey of deep learning based image object detection algorithms" (基于深度学习的图像目标检测算法综述), Telecommunications Science (电信科学), no. 07, pages 96-110 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529679A (en) * 2022-04-19 2022-05-24 清华大学 Method and device for generating a computed holographic field based on a neural radiance field
CN115145138A (en) * 2022-06-14 2022-10-04 浙江大学 Rapid processing method for sparse particle hologram
CN115145138B (en) * 2022-06-14 2023-09-26 浙江大学 Rapid processing method for sparse particle hologram

Also Published As

Publication number Publication date
CN113591854B (en) 2023-09-26

Similar Documents

Publication Publication Date Title
US10885381B2 (en) Ship detection method and system based on multidimensional scene features
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
Zhou et al. Underwater vision enhancement technologies: A comprehensive review, challenges, and recent trends
CN112132959B (en) Digital rock core image processing method and device, computer equipment and storage medium
Nair et al. Image mining applications for underwater environment management-A review and research agenda
CN113591854A (en) Low-redundancy quick reconstruction method of plankton hologram
CN113362338B (en) Rail segmentation method, device, computer equipment and rail segmentation processing system
He et al. Remote sensing image super-resolution using deep–shallow cascaded convolutional neural networks
CN116596792B (en) Inland river foggy scene recovery method, system and equipment for intelligent ship
Sree Sharmila et al. Impact of applying pre-processing techniques for improving classification accuracy
Yu et al. Two-stage image decomposition and color regulator for low-light image enhancement
Lee et al. Speckle reduction via deep content-aware image prior for precise breast tumor segmentation in an ultrasound image
Huo et al. Two-stage image denoising algorithm based on noise localization
Yang et al. Pre-processing for single image dehazing
Zhou et al. ASFusion: Adaptive visual enhancement and structural patch decomposition for infrared and visible image fusion
Zhang et al. TANet: Transmission and atmospheric light driven enhancement of underwater images
Ferzo et al. Image Denoising Techniques Using Unsupervised Machine Learning and Deep Learning Algorithms: A Review
Salman et al. Image Enhancement using Convolution Neural Networks
CN113888488A (en) Steel rail defect detection method and system based on deep residual shrinkage network
Sebastianelli et al. A speckle filter for SAR Sentinel-1 GRD data based on Residual Convolutional Neural Networks
Deluxni et al. A Scrutiny on Image Enhancement and Restoration Techniques for Underwater Optical Imaging Applications
Tomar et al. ENHANCING IMAGE SUPER-RESOLUTION WITH DEEP CONVOLUTIONAL NEURAL NETWORKS.
Chen et al. A Self-supervised SAR Image Despeckling Strategy Based on Parameter-sharing Convolutional Neural Networks
CN115294375B (en) Speckle depth estimation method and system, electronic device and storage medium
Malathi et al. Optimzied resnet model of convolutional neural network for under sea water object detection and classification

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant