CN117541501B - Scanning light field self-supervised network denoising method and device, electronic equipment and medium - Google Patents

Scanning light field self-supervised network denoising method and device, electronic equipment and medium

Info

Publication number
CN117541501B
Authority
CN
China
Prior art keywords
network
denoising
self
data
light field
Prior art date
Legal status
Active
Application number
CN202410032352.9A
Other languages
Chinese (zh)
Other versions
CN117541501A (en)
Inventor
戴琼海
卢志
吴嘉敏
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202410032352.9A
Publication of CN117541501A
Application granted
Publication of CN117541501B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/0895 - Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10052 - Images from lightfield camera


Abstract

The application relates to the technical field of computational imaging, and in particular to a scanning light field self-supervised network denoising method, an apparatus, an electronic device, and a storage medium. The method includes the following steps: acquiring scanned light field data; preprocessing the scanned light field data to obtain preprocessed data; and constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for scanning light field denoising. This addresses the following problems in the related art: dependence on high-quality data with identical or similar content easily limits data availability, increases the difficulty of data acquisition, and raises cost; and dependence on temporal information requires multi-frame data, so denoising cannot be performed on a single-frame light field image, which easily leads to poor responsiveness and reduced adaptability in practical applications.

Description

Scanning light field self-supervised network denoising method and device, electronic equipment and medium
Technical Field
The present application relates to the field of computational imaging technologies, and in particular to a scanning light field self-supervised network denoising method, apparatus, electronic device, and medium.
Background
Light field and scanning light field imaging have attracted attention because they can realize large-scale, rapid 3D imaging from a single captured image or a small number of captured images, and they are widely used in biological applications such as 3D calcium imaging. The signal-to-noise ratio of the light field image is an important factor in the final 3D imaging quality: detection noise, dominated by photon shot noise, increases measurement uncertainty and can alter the morphological and functional interpretation of the underlying structure. Traditional filter-based denoising methods operating in the spatial or transform domain, including median filtering and the BM3D denoising algorithm, obtain a denoised image by sliding-window filtering over the spatial or transform domain of the light field image, but they are time-consuming and suffer severe loss of structural detail.
In the related art, supervised deep-learning denoising methods take light field or scanning light field data as input and use a high-resolution ground-truth image as supervision, so that the network learns the denoising process. Time-sequence-based deep-learning denoising methods take a series of temporally continuous light field or scanning light field data as input and use the imaging results of two temporally adjacent frames as input and supervision respectively, so that the network learns the denoising process without supervision.
However, in the related art a high-resolution ground-truth image must be provided; the dependence on high-quality data with identical or similar content easily limits data availability, increases the difficulty of data acquisition, and raises cost.
Disclosure of Invention
The application provides a scanning light field self-supervised network denoising method, apparatus, electronic device, and storage medium, which address the following problems in the related art: dependence on high-quality data with identical or similar content easily limits data availability, increases the difficulty of data acquisition, and raises cost; and dependence on temporal information requires multi-frame data, so denoising cannot be performed on a single-frame light field image, which easily leads to poor responsiveness and reduced adaptability in practical applications.
An embodiment of a first aspect of the present application provides a scanning light field self-supervised network denoising method, including the following steps: acquiring scanned light field data; preprocessing the scanned light field data to obtain preprocessed data; and constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for scanning light field denoising.
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data includes: combining the multi-angle data in the scanned light field data according to various angular scanning arrangement orders to generate several differently arranged sets of light field image data; and/or rotating, flipping and/or cropping the scanned light field data to obtain enhanced data.
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data further includes: segmenting the differently arranged light field image data to obtain multiple pairs of segmented image data of the same dimension, so that within a single iteration each pair of segmented image data serves as multi-path network input for forwarding and as multi-path network targets or fusion targets, and is used for the feedback of the self-supervised total loss function.
Optionally, in an embodiment of the present application, constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network, includes: obtaining a plurality of branch network outputs from the preprocessed data; fusing the plurality of branch network outputs to obtain a fusion network output; respectively calculating a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fusion network output; and weighting the first mean square error and first absolute value error and the second mean square error and second absolute value error to calculate the self-supervised total loss function of the self-supervised denoising network.
Optionally, in an embodiment of the present application, before obtaining the final self-supervised denoising network, the method further includes: inputting a test set obtained from the preprocessed data into the trained self-supervised denoising network and outputting a network result, so as to output the final self-supervised denoising network when the network result meets a preset test condition, wherein the test set does not overlap with the data of the training set used to construct the self-supervised denoising network and differs from it in size.
An embodiment of a second aspect of the present application provides a scanning light field self-supervised network denoising apparatus, including: an acquisition module, configured to acquire scanned light field data; a processing module, configured to preprocess the scanned light field data to obtain preprocessed data; and a denoising module, configured to construct a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for scanning light field denoising.
Optionally, in an embodiment of the present application, the processing module includes: a generating unit, configured to combine the multi-angle data in the scanned light field data according to various angular scanning arrangement orders to generate several differently arranged sets of light field image data; and/or an enhancement unit, configured to rotate, flip and/or crop the scanned light field data to obtain enhanced data.
Optionally, in an embodiment of the present application, the processing module further includes: a segmentation module, configured to segment the differently arranged light field image data to obtain multiple pairs of segmented image data of the same dimension, so that within a single iteration each pair of segmented image data serves as multi-path network input for forwarding and as multi-path network targets or fusion targets, and is used for the feedback of the self-supervised total loss function.
Optionally, in an embodiment of the present application, the denoising module includes: an acquisition unit, configured to obtain a plurality of branch network outputs from the preprocessed data; a fusion unit, configured to fuse the plurality of branch network outputs to obtain a fusion network output; a calculation unit, configured to respectively calculate a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fusion network output; and a weighting unit, configured to weight the first mean square error and first absolute value error and the second mean square error and second absolute value error to calculate the self-supervised total loss function of the self-supervised denoising network.
Optionally, in an embodiment of the present application, the denoising module is further configured to, before obtaining the final self-supervised denoising network, input a test set obtained from the preprocessed data into the trained self-supervised denoising network and output a network result, so as to output the final self-supervised denoising network when the network result meets a preset test condition, wherein the test set does not overlap with the data of the training set used to construct the self-supervised denoising network and differs from it in size.
An embodiment of a third aspect of the present application provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the program to implement the scanning light field self-supervised network denoising method of the above embodiments.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the scanning light field self-supervised network denoising method described above.
According to the embodiments of the present application, the scanned light field data can be acquired and preprocessed, and a self-supervised denoising network can be constructed from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network for scanning light field denoising. This reduces dependence on data, improves the range of application and performance, allows denoising with a single frame image, and improves the flexibility and robustness of denoising under different structures and signal-to-noise ratios. It thus addresses the following problems in the related art: dependence on high-quality data with identical or similar content easily limits data availability, increases the difficulty of data acquisition, and raises cost; and dependence on temporal information requires multi-frame data, so denoising cannot be performed on a single-frame light field image, which easily leads to poor responsiveness and reduced adaptability in practical applications.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a scanning light field self-supervised network denoising method according to an embodiment of the present application;
FIG. 2 is a flowchart of a scanning light field self-supervised network denoising method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the data training-pair segmentation module and the self-supervised denoising network training process according to an embodiment of the present application;
FIG. 4 is a schematic comparison of a scanned light field original image with the network-denoised image according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a scanning light field self-supervised network denoising apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
The following describes a scanning light field self-supervised network denoising method, apparatus, electronic device, and storage medium according to embodiments of the present application with reference to the accompanying drawings. These embodiments are directed at the problems of the related art mentioned in the background art: dependence on high-quality data with identical or similar content easily limits data availability, increases the difficulty of data acquisition, and raises cost; and the need for multi-frame information means that single-frame light field images cannot be used for denoising, which easily leads to poor responsiveness and reduced adaptability in practical applications. The present application provides a scanning light field self-supervised network denoising method that solves these problems.
Before explaining the scanning light field self-supervised network denoising method provided by the embodiments of the present application, the architecture involved in the method is first described.
As shown in FIG. 1, the architecture of the scanning light field self-supervised network denoising method includes: a scanning light field data acquisition unit, a data preprocessing unit, a self-supervised denoising network training unit, and a self-supervised denoising network testing unit.
The scanning light field data acquisition unit provides data for network training and testing; the data may be captured with a scanning optical microscope or downloaded from a public dataset.
The data preprocessing unit comprises a data rearrangement function, a data enhancement function, and a data training-pair segmentation function; the resulting data are used for self-supervised denoising network training or self-supervised denoising network testing.
The data rearrangement function combines the multi-angle data into arranged image data according to various angular arrangement orders; the data enhancement function applies image transformations such as cropping, rotation, and flipping to the image data to form the input for the data training-pair segmentation function; and the data training-pair segmentation function segments the differently arranged image data into multiple pairs of segmented image data of the same dimension, so that within a single iteration each pair of data serves as multi-path network input for forwarding and as multi-path network targets or fusion targets, and is finally used for the feedback of the self-supervised total loss function.
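As an illustration of the data rearrangement function, the following Python sketch shows one way the angular views of a scanned light field could be combined into arranged image stacks according to different angular scan orders. The (U, V, X, Y) array layout and the helper name rearrange_light_field are assumptions made for illustration, not the patent's implementation.

```python
import numpy as np

def rearrange_light_field(lf, angle_order):
    """Hypothetical sketch: combine the angular views of a scanning light field
    into one arranged image stack according to a given angular scan order.

    lf          : ndarray of shape (U, V, X, Y), two angular and two spatial dims (assumed layout)
    angle_order : sequence of (u, v) index pairs defining one angular arrangement order
    returns     : ndarray of shape (S, X, Y) with the two angular dims merged into S
    """
    views = [lf[u, v] for (u, v) in angle_order]   # pick views in the requested scan order
    return np.stack(views, axis=0)                 # merged angular dimension S

# Several differently arranged stacks from the same raw data, e.g. row-major and
# column-major angular scan orders (illustrative only):
# lf = ...  # (13, 13, X, Y) scanned light field
# row_major = [(u, v) for u in range(13) for v in range(13)]
# col_major = [(u, v) for v in range(13) for u in range(13)]
# arranged = [rearrange_light_field(lf, row_major), rearrange_light_field(lf, col_major)]
```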
The self-supervised denoising network training unit comprises a multi-path network input forwarding function, a multi-path output fusion function, and a self-supervised total loss feedback function; together these three steps are called one iteration.
The multi-path network input forwarding function receives, through an input layer, any of the arranged image data output by the data preprocessing unit and passes it through a branch network to obtain a branch network output. The multi-path output fusion function feeds all of the multi-path network outputs into the fusion network and propagates them forward to obtain the fusion network output. The self-supervised total loss feedback function calculates the mean square error and absolute value error between the segmented image data and the branch network outputs, and between the segmented image data and the fusion network output, weights them to form the self-supervised total loss function, and back-propagates it to update the network parameters. An iteration threshold is typically set; if this value has not been exceeded, network training is considered unfinished. When training is unfinished, the next iteration begins, in which the next batch of data passes through multi-path network input forwarding, the multi-path output fusion module, and self-supervised total loss feedback.
The self-supervised denoising network testing unit evaluates the final performance of the trained network on selected data after self-supervised denoising network training is completed.
The selected data usually do not overlap with the data used by the self-supervised denoising network training unit, and their size is not necessarily the same. If the sizes differ, the selected data are cut with overlapping crops into several images that meet the size requirement of the network input layer, the network predicts each of them separately, and the prediction results are then stitched together.
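The overlapping cropping and stitching used by the testing unit can be pictured with the following sketch; the tile size, overlap, and averaging of overlapped pixels are assumed choices rather than values given in the description.

```python
import numpy as np

def predict_tiled(image, net_fn, tile=256, overlap=32):
    """Hypothetical sketch: cut an oversized test image into overlapping tiles that
    meet the network input size, denoise each tile with net_fn, and stitch the
    predictions back together; overlapped pixels are averaged (an assumed choice)."""
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    step = tile - overlap
    for top in range(0, h, step):
        for left in range(0, w, step):
            b, r = min(top + tile, h), min(left + tile, w)
            t, l = max(b - tile, 0), max(r - tile, 0)   # shift the last tile back inside the image
            out[t:b, l:r] += net_fn(image[t:b, l:r])    # per-tile network prediction
            weight[t:b, l:r] += 1.0
    return out / np.maximum(weight, 1.0)

# denoised = predict_tiled(big_scan, net_fn=lambda x: x)  # identity net_fn as a stand-in
```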
Next, the scanning light field self-supervised network denoising method according to an embodiment of the present application is described in detail.
Specifically, FIG. 2 is a schematic flowchart of a scanning light field self-supervised network denoising method according to an embodiment of the present application.
As shown in FIG. 2, the scanning light field self-supervised network denoising method includes the following steps:
in step S201, scanned light field data is acquired.
It is understood that scanning light field data refers to a series of images or measurements of the light field sampled from different angles and positions.
Specifically, in the embodiment of the present application the scanning light field data can be acquired by imaging with a scanning optical microscope. For example, a series of light field images of a zebrafish embryo under different viewing angles can be captured, containing information such as the direction, intensity, and phase of the light rays on the surface of or inside the embryo.
Acquiring the scanning light field data with a scanning optical microscope helps improve the comprehensiveness and accuracy of the data and provides an accurate basis for subsequent operations.
In step S202, the scanned light field data is preprocessed to obtain preprocessed data.
It is understood that preprocessing includes data rearrangement, data enhancement, and the like.
By preprocessing the scanned light field data, for example through data rearrangement and data enhancement, the embodiment of the present application obtains preprocessed data. This helps correct the scanned light field data and improve its accuracy, and further improves data clarity, signal-to-noise ratio, and detail visibility, making the data more suitable for subsequent analysis and processing.
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data includes: combining the multi-angle data in the scanned light field data according to various angular scanning arrangement orders to generate several differently arranged sets of light field image data; and/or rotating, flipping and/or cropping the scanned light field data to obtain enhanced data.
It is understood that multi-angle data refers to light field image information scanned from different angles or positions, including scene depth, texture details, etc.
Specifically, in the embodiment of the present application, immune cells can be scanned from different angles or positions and the views combined in different arrangement orders to obtain light field image data. For example, immune cells are scanned from the top, bottom, left, right, and other angles, and the views are combined in different arrangement orders (such as ABC, ACB, BAC) to generate several differently arranged sets of light field image data. The scanned images of the immune cells can be rotated and/or flipped, for example rotated by 90 or 180 degrees clockwise or counterclockwise, or flipped horizontally or vertically, to generate enhanced data with different orientations and mirror symmetry. According to the size and position of the immune cells, a region of the scanned image can be selected and cropped, so that local regions of the immune cells are extracted, providing enhanced data of different sizes and positions.
According to the embodiment of the present application, the multi-angle data in the scanned light field data can be combined according to various angular scanning arrangement orders to generate several differently arranged sets of light field image data, and the scanned light field data can be rotated, flipped and/or cropped to obtain enhanced data. Combining different angular scanning arrangements with rotation, flipping, and cropping expands the training dataset, increases the diversity and richness of the data, improves the robustness of the image processing, and further improves the accuracy and reliability of the data.
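A minimal sketch of the rotation, flipping, and cropping enhancement described above is given below, assuming a 2-D image and NumPy; the crop size and the probabilities are illustrative assumptions.

```python
import numpy as np

def augment(img, rng):
    """Hypothetical sketch of the data enhancement: random rotation by a multiple of
    90 degrees, random horizontal/vertical flips, and a random crop applied to a
    2-D scanned light field image (parameter choices are assumptions)."""
    img = np.rot90(img, k=rng.integers(0, 4))            # 0/90/180/270-degree rotation
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)                       # vertical flip
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)                       # horizontal flip
    crop = min(img.shape) // 2                           # assumed crop size
    top = rng.integers(0, img.shape[0] - crop + 1)
    left = rng.integers(0, img.shape[1] - crop + 1)
    return img[top:top + crop, left:left + crop].copy()

# rng = np.random.default_rng(0)
# patch = augment(light_field_image, rng)
```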
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data further includes: segmenting the differently arranged light field image data to obtain multiple pairs of segmented image data of the same dimension, so that within a single iteration each pair of segmented image data serves as multi-path network input for forwarding and as multi-path network targets or fusion targets, and is used for the feedback of the self-supervised total loss function.
In actual execution, the embodiment of the present application segments the differently arranged image data to obtain multiple pairs of segmented image data of the same dimension. Within a single iteration, each pair of data serves as multi-path network input for forwarding and as multi-path network targets or fusion targets, and is finally used for the feedback of the self-supervised total loss function. Any image processing network can be selected for the multi-path network input forwarding: any of the arranged image data is fed into the multi-path network and propagated forward to obtain the multi-path network output.
Specifically, with reference to FIG. 3, the embodiment of the present application can add a data training-pair segmentation module and simulate the results of multiple imaging passes using a single noisy light field image, which specifically includes the following steps:
Step S301, selecting a segmentation dimension.
It can be understood that the core of a light field or scanning light field microscope system is a microlens array in front of the camera. The system ultimately forms four-dimensional image information with two angular dimensions and two spatial dimensions. After the data rearrangement module, the two angular dimensions are combined into an S dimension, and the two spatial dimensions are retained as the X and Y dimensions. To simulate the result of multiple imaging passes under the same noise model, a specific dimension can be selected for segmentation so as to obtain several groups of image results. The selection method can be as follows:
(1) Selecting the same space dimension X for each iteration;
(2) Selecting the same space dimension Y for each iteration;
(3) Randomly selecting any space dimension X or Y in each iteration;
(4) Simultaneously selecting space dimensions X and Y in each iteration;
(5) Selecting an angle dimension S for each iteration;
The different segmentation dimensions are used to obtain several groups of segmented images in the subsequent steps, providing the corresponding training inputs and targets for the multi-path network inputs; the segmented images of one selected segmentation dimension also serve as the fusion network target and are used in the feedback of the self-supervised total loss function.
Step S302, obtaining a segmentation image.
It can be understood that, to obtain several groups of image results, the selected dimension needs to be segmented. For an arranged image I, after the segmentation dimension D is selected, different image segmentation methods can be chosen according to the specifics of the image data. The segmentation methods can be as follows:
(1) Odd-even segmentation;
(2) Sliding-window random segmentation;
(3) Average sequential segmentation;
wherein the result of the odd-even segmentation can be determined by the following expression:

$$I_{\mathrm{odd}} = I\big|_{d\,=\,1,3,5,\ldots,\;d\le \mathrm{Len}(D)}, \qquad I_{\mathrm{even}} = I\big|_{d\,=\,2,4,6,\ldots,\;d\le \mathrm{Len}(D)}, \qquad O = \left(I_{\mathrm{odd}},\, I_{\mathrm{even}}\right),$$

where I is the input image to be segmented, D is the selected segmentation dimension, d indexes dimension D, $I_{\mathrm{odd}}$ is the odd-indexed image after odd-even segmentation, $I_{\mathrm{even}}$ is the even-indexed image after odd-even segmentation, Len(D) is the length of the given dimension, and O is the training pair obtained after segmentation.
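A minimal PyTorch sketch of the odd-even segmentation along a chosen dimension, consistent with the expression above; the tensor shapes are illustrative.

```python
import torch

def parity_split(img: torch.Tensor, dim: int):
    """Minimal sketch of odd-even segmentation along a chosen dimension D:
    the odd-indexed and even-indexed slices form one training pair O."""
    length = img.size(dim)
    odd = img.index_select(dim, torch.arange(1, length, 2))   # slices 2, 4, ... (0-based index 1, 3, ...)
    even = img.index_select(dim, torch.arange(0, length, 2))  # slices 1, 3, ... (0-based index 0, 2, ...)
    return odd, even                                           # training pair O = (I_odd, I_even)

# arranged = torch.randn(169, 512, 512)        # (S, X, Y) arranged light field stack (illustrative shape)
# inp, target = parity_split(arranged, dim=1)  # split along the spatial X dimension
```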
The result of the sliding-window random segmentation is obtained as follows: along the selected dimension D, the input image I is divided into k sliding windows of size wsize, the order of the elements within each window is shuffled, and the i-th element of each shuffled window is assigned to the i-th segmented image; the segmented images together form the training pair O.
Here i is the index of an element within a single sliding window, $I_i$ is the i-th randomly segmented image, I is the input image to be segmented, D is the selected segmentation dimension, Len(D) is the length of dimension D, wsize is the sliding-window size, k is the total number of required sliding windows, shuffle denotes randomly permuting the order of a set, j is the sliding-window index, n is the dimension index, and O is the training pair obtained after segmentation.
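The following sketch implements one possible reading of the sliding-window random segmentation, assuming that the i-th shuffled element of each window is gathered into the i-th segmented image; the exact assignment rule is an assumption, and any remainder slices beyond the last complete window are simply ignored here.

```python
import torch

def sliding_window_random_split(img: torch.Tensor, dim: int, wsize: int, seed: int = 0):
    """Sketch under the assumption stated above: dimension D is divided into windows
    of size wsize, the slice order inside each window is shuffled, and the i-th
    shuffled slice of every window is gathered into the i-th segmented image."""
    g = torch.Generator().manual_seed(seed)
    length = img.size(dim)
    k = length // wsize                                    # number of complete sliding windows
    outputs = [[] for _ in range(wsize)]
    for j in range(k):                                     # window index j
        start = j * wsize
        perm = torch.randperm(wsize, generator=g) + start  # shuffle the order inside window j
        for i in range(wsize):                             # element index i inside the window
            outputs[i].append(img.index_select(dim, perm[i:i + 1]))
    return [torch.cat(o, dim=dim) for o in outputs]        # wsize segmented images, each with k slices

# parts = sliding_window_random_split(arranged, dim=1, wsize=2)  # two random sub-images along X
```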
Average sequential segmentation divides the input image I in order along a specific dimension D into the required number of equal parts, which together form training pairs O; such training pairs cannot be used for self-supervised denoising network training and are used only for self-supervised denoising network testing. After N dimensions are selected in step S301, the same number of training pairs is obtained in step S302, which can be understood as N training pairs O. The images of one of these training pairs serve as the fusion target in step S305, and the remaining N-1 training pairs are used as inputs for denoising in step S303.
Step S303, multipath network input forwarding.
For the N-1 training pairs obtained in step S302, each training pair uses one segmented image as input and the other segmented image as target. This can be understood as obtaining N-1 multi-path network inputs and N-1 multi-path network targets; the N-1 multi-path network inputs are fed into the same number, N-1, of networks for forward denoising, yielding N-1 multi-path network outputs.
Step S304, multi-path network output fusion.
Selecting a specific dimension and segmenting the image in steps S301 to S302 destroys the information of the original light field image along the selected dimension; segmenting along several dimensions and referring to the unsegmented parts can effectively compensate for the information destroyed by segmentation. According to the embodiment of the present application, the multi-path output fusion module uses a deep-learning image fusion network to fuse the N-1 multi-path network outputs of step S303 into a single fusion network output.
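A sketch of how the N-1 branch outputs might be fused is given below; the description only specifies a deep-learning image fusion network, so the concatenation-plus-convolution architecture and the make_unet helper in the usage comments are assumptions.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Hypothetical fusion module: concatenate the N-1 branch outputs along the
    channel axis and merge them with a small stack of convolutions."""
    def __init__(self, n_branches: int):
        super().__init__()
        self.merge = nn.Sequential(
            nn.Conv2d(n_branches, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, branch_outputs):
        return self.merge(torch.cat(branch_outputs, dim=1))  # single fusion network output

# branch_nets = [make_unet() for _ in range(n_minus_1)]                     # N-1 branch denoisers (assumed U-Nets)
# branch_outputs = [net(x) for net, x in zip(branch_nets, branch_inputs)]   # step S303, inputs shaped (B, 1, H, W)
# fused = FusionNet(n_minus_1)(branch_outputs)                              # step S304
```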
Step S305: a self-supervising total loss function is calculated.
Specifically, the function value can be determined by the following expression:

$$L \;=\; \lambda_{m}\sum_{i=1}^{n} L_{m}\!\left(y_{i},\, t_{i}\right) \;+\; \lambda_{f}\, L_{f}\!\left(\mathrm{Output},\, \mathrm{Target}\right) \;+\; \lambda_{s}\sum_{i=1}^{n} S\!\left(\mathrm{Output},\, y_{i}\right),$$

where L is the self-supervised total loss function; $\lambda_{m}$ is the weight of the multi-path loss; n is the number of branches; $y_{i}$ is the output of the i-th multi-path network; $t_{i}$ is the i-th group of multi-path network targets; $L_{m}$ is the multi-path loss function, which can be understood as the average of the L1 and L2 losses between the multi-path network output and the multi-path network target; $\lambda_{f}$ is the weight of the fusion loss; Output is the output of the fusion network; Target is the fusion target; $L_{f}$ is the fusion loss function, which can be understood as the average of the L1 and L2 losses between the fusion network output and the fusion target; $\lambda_{s}$ is the weight of the multi-path similarity loss; and S is the similarity loss function, which can be understood as the average of the L1 and L2 losses between the fusion network output and the multi-path network outputs.
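A PyTorch sketch of this self-supervised total loss under the reconstruction above; the plain sums over branches and the default weights are assumptions.

```python
import torch.nn.functional as F

def l1_l2(a, b):
    """Average of the L1 and L2 (MSE) losses, as described for L_m, L_f and S."""
    return 0.5 * (F.l1_loss(a, b) + F.mse_loss(a, b))

def total_loss(branch_outputs, branch_targets, fused_output, fusion_target,
               w_multi=1.0, w_fuse=1.0, w_sim=1.0):
    """Sketch of the self-supervised total loss L; the summation over the n branches
    and the weight values are assumptions consistent with the description."""
    multi = sum(l1_l2(y, t) for y, t in zip(branch_outputs, branch_targets))   # multi-path loss
    fuse = l1_l2(fused_output, fusion_target)                                  # fusion loss
    sim = sum(l1_l2(fused_output, y) for y in branch_outputs)                  # multi-path similarity loss
    return w_multi * multi + w_fuse * fuse + w_sim * sim
```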
According to the embodiment of the present application, the multi-segmentation scheme introduces the multi-dimensional information of the light field image and, combined with the noise-to-noise deep-learning training theory, realizes self-supervised scanning light field network denoising. No ground-truth image is needed for supervision during training, which reduces acquisition time and improves robustness to light field image data.
In step S203, a self-supervised denoising network is constructed from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for scanning light field denoising.
It is understood that the preset iteration stop condition may be that the number of iterations reaches a preset iteration number threshold, for example, the number of iterations reaches 1000 times, etc.
According to the embodiment of the present application, a self-supervised denoising network can be constructed from the preprocessed data until the preset iteration stop condition is reached, so as to obtain the final self-supervised denoising network. This effectively reduces image distortion, preserves the detail and texture information of the original image, and further improves image quality.
Optionally, in an embodiment of the present application, constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network, includes: obtaining a plurality of branch network outputs from the preprocessed data; fusing the plurality of branch network outputs to obtain a fusion network output; respectively calculating a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fusion network output; and weighting the first mean square error and first absolute value error and the second mean square error and second absolute value error to calculate the self-supervised total loss function of the self-supervised denoising network.
Specifically, as shown in FIG. 3, in the embodiment of the present application the self-supervised total loss feedback may compute a weighted sum of the first mean square error and first absolute value error between the segmented image data and the branch network outputs as the multi-path loss, and a weighted sum of the second mean square error and second absolute value error between the segmented image data and the fusion network output as the fusion loss; the weighted sum of the multi-path loss and the fusion loss finally forms the self-supervised total loss function, which is back-propagated to update the network parameters.
The self-supervised total loss function can be expressed as follows:

$$L \;=\; \lambda_{m}\sum_{i=1}^{n} L_{m}\!\left(y_{i},\, t_{i}\right) \;+\; \lambda_{f}\, L_{f}\!\left(\mathrm{Output},\, \mathrm{Target}\right) \;+\; \lambda_{s}\sum_{i=1}^{n} S\!\left(\mathrm{Output},\, y_{i}\right),$$

where L is the self-supervised total loss function; $\lambda_{m}$ is the weight of the multi-path loss; n is the number of branches; $y_{i}$ is the i-th multi-path network output; $t_{i}$ is the i-th group of multi-path network targets; $L_{m}$ is the multi-path loss function, which can be understood as the average of the L1 and L2 losses between the multi-path network output and the multi-path network target; $\lambda_{f}$ is the weight of the fusion loss; Output is the output of the fusion network; Target is the fusion target; $L_{f}$ is the fusion loss function, which can be understood as the average of the L1 and L2 losses between the fusion network output and the fusion target; $\lambda_{s}$ is the weight of the multi-path similarity loss; and S is the similarity loss function, which can be understood as the average of the L1 and L2 losses between the fusion network output and the multi-path network outputs.
According to the embodiment of the present application, a plurality of branch network outputs can be obtained from the preprocessed data and fused into a fusion network output; the self-supervised total loss function of the self-supervised denoising network is then constructed from the errors between the segmented image data and the corresponding branch network outputs and the errors between the segmented image data and the fusion network output. This increases the robustness of the network output, optimizes the performance of the self-supervised denoising network, and improves the accuracy and precision of denoising.
Optionally, in an embodiment of the present application, before obtaining the final self-supervised denoising network, the method further includes: inputting a test set obtained from the preprocessed data into the trained self-supervised denoising network and outputting a network result, so as to output the final self-supervised denoising network when the network result meets a preset test condition, wherein the test set does not overlap with the data of the training set used to construct the self-supervised denoising network and differs from it in size.
It can be understood that the preset test condition is a standard for evaluating the self-supervised denoising network, such as an image quality evaluation index.
In actual implementation, as shown in FIG. 4, the embodiment of the present application constructs a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain the final self-supervised denoising network. In FIG. 4, the two upper images show, from left to right, the original light field image of the central 81 angles and the denoised light field image of the central 81 angles; the two lower images show, from left to right, the original central-view image and the denoised central-view image.
Specifically, in the embodiment of the present application, zebrafish embryo data can be captured with a scanning light field instrument (scanning magnification 3, 13×13 pixels behind each microlens), and the data are then rearranged and enhanced to generate 4,900 zebrafish embryo images of a fixed size for self-supervised denoising network training. The self-supervised denoising network is built on the PyTorch deep learning framework and the Python programming language; specifically, after the input layer, the network input is resized to the target size by bilinear interpolation and then passed through a U-Net. The network is trained with a fixed initial learning rate and a training batch size of 1, using the Adam optimizer for back-propagation iterative optimization over a total of 98,000 iterations. A multi-angle test image can then be fed into the trained self-supervised denoising network to obtain a denoised image.
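A simplified, single-branch PyTorch training-loop sketch of the described setup follows; the stand-in backbone, learning-rate value, target size, and dummy data are placeholders, since only the framework, bilinear resizing, U-Net backbone, Adam optimizer, batch size, and iteration count are given in the description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(                                # small stand-in for the U-Net backbone
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)   # learning-rate value is an assumption
target_size = (128, 128)                                   # assumed network input size

# dummy segmented training pairs standing in for the preprocessed data
training_pairs = [(torch.randn(1, 1, 100, 100), torch.randn(1, 1, 100, 100))]

for step in range(98_000):                                  # total iterations from the description
    inp, tgt = training_pairs[step % len(training_pairs)]   # batch size 1
    inp = F.interpolate(inp, size=target_size, mode="bilinear", align_corners=False)
    tgt = F.interpolate(tgt, size=target_size, mode="bilinear", align_corners=False)
    out = net(inp)
    loss = 0.5 * (F.l1_loss(out, tgt) + F.mse_loss(out, tgt))  # full method uses the total loss above
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```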
According to the embodiment of the present application, the test set obtained from the preprocessed data can be input into the trained self-supervised denoising network and the network result output, so that the final self-supervised denoising network is output when the network result meets the preset test condition. This effectively removes noise while retaining image details, and improves the stability and adaptability of denoising.
With the scanning light field self-supervised network denoising method provided by the embodiments of the present application, scanned light field data can be acquired and preprocessed, and a self-supervised denoising network can be constructed from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network. This enables scanning light field self-supervised denoising, reduces dependence on data, improves the range of application and performance, allows denoising with a single frame image, and improves the flexibility and robustness of denoising under different structures and signal-to-noise ratios. The method thus addresses the following problems in the related art: dependence on high-quality data with identical or similar content easily limits data availability, increases the difficulty of data acquisition, and raises cost; and dependence on temporal information requires multi-frame data, so denoising cannot be performed on a single-frame light field image, which easily leads to poor responsiveness and reduced adaptability in practical applications.
The scanning light field self-supervised network denoising apparatus according to an embodiment of the present application is described next with reference to the accompanying drawings.
FIG. 5 is a schematic structural diagram of a scanning light field self-supervised network denoising apparatus according to an embodiment of the present application.
As shown in FIG. 5, the scanning light field self-supervised network denoising apparatus 10 includes: an acquisition module 100, a processing module 200, and a denoising module 300.
Specifically, the acquiring module 100 is configured to acquire scanned light field data.
The processing module 200 is configured to pre-process the scanned light field data to obtain pre-processed data.
The denoising module 300 is configured to construct a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for scanning light field denoising.
Optionally, in one embodiment of the present application, the processing module 200 includes: a generating unit and/or an enhancing unit.
The generating unit is used for combining the multi-angle data in the scanned light field data according to various angular scanning arrangement sequences to generate various light field image data with different arrangements;
and/or the enhancement unit is configured to rotate, flip and/or crop the scanned light field data to obtain enhanced data.
Optionally, in one embodiment of the present application, the processing module 200 further includes: and a segmentation module.
The segmentation module is configured to segment the differently arranged light field image data to obtain multiple pairs of segmented image data of the same dimension, so that within a single iteration each pair of segmented image data serves as multi-path network input for forwarding and as multi-path network targets or fusion targets, and is used for the feedback of the self-supervised total loss function.
Optionally, in one embodiment of the present application, the denoising module 300 includes: the device comprises an acquisition unit, a fusion unit, a calculation unit and a weighting unit.
The acquisition unit is configured to obtain a plurality of branch network outputs from the preprocessed data;
the fusion unit is configured to fuse the plurality of branch network outputs to obtain a fusion network output;
the calculation unit is configured to respectively calculate a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fusion network output;
and the weighting unit is configured to weight the first mean square error and first absolute value error and the second mean square error and second absolute value error to calculate the self-supervised total loss function of the self-supervised denoising network.
Optionally, in an embodiment of the present application, the denoising module is further configured to, before obtaining the final self-supervised denoising network, input a test set obtained from the preprocessed data into the trained self-supervised denoising network and output a network result, so as to output the final self-supervised denoising network when the network result meets a preset test condition, wherein the test set does not overlap with the data of the training set used to construct the self-supervised denoising network and differs from it in size.
It should be noted that the foregoing explanation of the embodiments of the scanning light field self-supervised network denoising method also applies to the scanning light field self-supervised network denoising apparatus of this embodiment and is not repeated here.
With the scanning light field self-supervised network denoising apparatus provided by the embodiments of the present application, scanned light field data can be acquired and preprocessed, and a self-supervised denoising network can be constructed from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network. This enables scanning light field self-supervised denoising, reduces dependence on paired data, improves the range of application and performance, allows denoising with a single frame image, and improves the flexibility and robustness of denoising under different structures and signal-to-noise ratios. The apparatus thus addresses the following problems in the related art: dependence on high-quality data with identical or similar content easily limits data availability, increases the difficulty of data acquisition, and raises cost; and dependence on temporal information requires multi-frame data, so denoising cannot be performed on a single-frame light field image, which easily leads to poor responsiveness and reduced adaptability in practical applications.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602.
The processor 602, when executing the program, implements the scanning light field self-supervised network denoising method provided in the above embodiments.
Further, the electronic device further includes:
A communication interface 603 for communication between the memory 601 and the processor 602.
A memory 601 for storing a computer program executable on the processor 602.
The memory 601 may comprise a high-speed RAM memory or may further comprise a non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 601, the processor 602, and the communication interface 603 are implemented independently, the communication interface 603, the memory 601, and the processor 602 may be connected to one another through a bus and communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 6, but this does not mean that there is only one bus or one type of bus.
Alternatively, in a specific implementation, if the memory 601, the processor 602, and the communication interface 603 are integrated on a chip, the memory 601, the processor 602, and the communication interface 603 may perform communication with each other through internal interfaces.
The processor 602 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
The embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the scanning light field self-supervised network denoising method described above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, "N" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer cartridge (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, substitutions, and changes may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.

Claims (8)

1. A scanning light field self-supervised network denoising method, characterized by comprising the following steps:
Acquiring scanned light field data;
preprocessing the scanned light field data to obtain preprocessed data; and
constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for scanning light field denoising;
wherein the preprocessing of the scanned light field data to obtain the preprocessed data comprises: combining the multi-angle data in the scanned light field data according to a plurality of angular scanning arrangement orders to generate a plurality of differently arranged light field image data; and segmenting the plurality of differently arranged light field image data to obtain a plurality of pairs of segmented image data of the same dimension, such that, in a single iteration, each pair of segmented image data is used respectively for multi-path network input forwarding and as a multi-path network target or fusion target, and for feedback of the self-supervision total loss function;
wherein the constructing of the self-supervision denoising network according to the preprocessed data until the preset iteration stop condition is reached, to obtain the final self-supervision denoising network, comprises the following steps:
obtaining a plurality of branch network outputs according to the preprocessed data;
fusing the plurality of branch network outputs to obtain a fused network output;
respectively calculating a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network outputs, and a second mean square error and a second absolute value error between the segmented image data and the fused network output; and
weighting the first mean square error, the first absolute value error, the second mean square error and the second absolute value error to calculate a self-supervision total loss function of the self-supervision denoising network.
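Claim 1 leaves the concrete preprocessing open; the following Python/NumPy sketch illustrates one way the angular rearrangement and pairwise segmentation could look. The (A, H, W) view-stack layout, the even/odd split, the example scan orders and all function names are illustrative assumptions, not taken from the patent.

import numpy as np

def rearrange_views(views, orders):
    """Combine multi-angle data according to several angular scanning
    arrangement orders; one differently arranged stack per order.
    `views` has shape (A, H, W); each order is a permutation of range(A)."""
    return [views[list(order)] for order in orders]

def split_into_pair(stack):
    """Segment an arranged stack into a pair of sub-stacks of identical
    dimension (even/odd angular indices; this particular split is assumed,
    the claim only requires same-dimension pairs)."""
    return stack[0::2], stack[1::2]

def preprocess(views, orders):
    """Return one (input, target) pair per arrangement order, used as the
    multi-path network inputs and targets within a single iteration."""
    return [split_into_pair(arranged) for arranged in rearrange_views(views, orders)]

# Toy example: 16 angular views of a 64x64 scanned light field, two scan orders.
views = np.random.rand(16, 64, 64).astype(np.float32)
orders = [list(range(16)), list(reversed(range(16)))]
pairs = preprocess(views, orders)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)  # 2 (8, 64, 64) (8, 64, 64)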
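Likewise, the weighted self-supervision total loss of claim 1 could be written roughly as below (a minimal PyTorch sketch; the averaging over branches, the simple mean fusion and the weight values are assumptions, since the claim only states that the four error terms are weighted into a total loss).

import torch
import torch.nn.functional as F

def self_supervised_total_loss(branch_outputs, branch_targets,
                               fused_output, fused_target,
                               w_mse=1.0, w_l1=1.0, w_branch=1.0, w_fused=1.0):
    """Weighted sum of the first mean square / absolute value errors
    (segmented image data vs. the corresponding branch network outputs)
    and the second mean square / absolute value errors (segmented image
    data vs. the fused network output)."""
    branch_mse = torch.stack([F.mse_loss(o, t) for o, t in zip(branch_outputs, branch_targets)]).mean()
    branch_l1 = torch.stack([F.l1_loss(o, t) for o, t in zip(branch_outputs, branch_targets)]).mean()
    fused_mse = F.mse_loss(fused_output, fused_target)
    fused_l1 = F.l1_loss(fused_output, fused_target)
    return (w_branch * (w_mse * branch_mse + w_l1 * branch_l1) +
            w_fused * (w_mse * fused_mse + w_l1 * fused_l1))

# Toy example with two branches; fusion by simple averaging is an assumption.
branch_outputs = [torch.rand(1, 8, 64, 64, requires_grad=True) for _ in range(2)]
branch_targets = [torch.rand(1, 8, 64, 64) for _ in range(2)]
fused_output = torch.stack(branch_outputs).mean(dim=0)
fused_target = torch.stack(branch_targets).mean(dim=0)
loss = self_supervised_total_loss(branch_outputs, branch_targets, fused_output, fused_target)
loss.backward()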
2. The scanning light field self-supervision network denoising method according to claim 1, wherein the preprocessing of the scanned light field data to obtain the preprocessed data further comprises:
rotating, flipping and/or cropping the scanned light field data to obtain enhanced data.
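A minimal NumPy illustration of the enhancement in claim 2; the crop size, the fixed random seed and the per-sample choice of transforms are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def augment(stack, crop=48):
    """Randomly rotate, flip and/or crop an (A, H, W) scanned light-field
    stack in its spatial dimensions to obtain enhanced data."""
    stack = np.rot90(stack, k=int(rng.integers(4)), axes=(1, 2))  # rotate by 0/90/180/270 degrees
    if rng.random() < 0.5:
        stack = np.flip(stack, axis=2)                            # horizontal flip
    _, h, w = stack.shape
    top = int(rng.integers(h - crop + 1))
    left = int(rng.integers(w - crop + 1))
    return stack[:, top:top + crop, left:left + crop].copy()      # random crop

enhanced = augment(np.random.rand(16, 64, 64).astype(np.float32))
print(enhanced.shape)  # (16, 48, 48)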
3. The scanning light field self-supervision network denoising method according to claim 1, further comprising, before obtaining the final self-supervision denoising network:
inputting a test set obtained from the preprocessed data into the trained self-supervision denoising network and outputting a network result, so as to output the final self-supervision denoising network in a case where the network result meets a preset test condition, wherein the test set does not overlap with the data of the training set used for constructing the self-supervision denoising network and differs from it in size.
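The validation gate of claim 3 reduces to a simple check over a held-out test set; the sketch below assumes a PyTorch network and an arbitrary, user-supplied test condition, since the patent leaves the preset test condition open.

import torch

def finalize_if_passes(network, test_loader, test_condition):
    """Run the trained self-supervision denoising network on a test set that
    does not overlap the training data; return it as the final network only
    if the preset test condition holds for every test batch."""
    network.eval()
    with torch.no_grad():
        outputs = [(batch, network(batch)) for batch in test_loader]
    return network if all(test_condition(x, y) for x, y in outputs) else None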
4. A scanning light field self-supervision network denoising apparatus, comprising:
an acquisition module configured to acquire scanned light field data;
a processing module configured to preprocess the scanned light field data to obtain preprocessed data; and
a denoising module configured to construct a self-supervision denoising network according to the preprocessed data until a preset iteration stop condition is reached, to obtain a final self-supervision denoising network for scanning light field self-supervision network denoising;
wherein the processing module comprises: a generating unit configured to combine the multi-angle data in the scanned light field data according to a plurality of angular scanning arrangement orders to generate a plurality of differently arranged light field image data; and a segmentation unit configured to segment the plurality of differently arranged light field image data to obtain a plurality of pairs of segmented image data of the same dimension, such that, in a single iteration, each pair of segmented image data is used respectively for multi-path network input forwarding and as a multi-path network target or fusion target, and for feedback of the self-supervision total loss function;
wherein the denoising module comprises:
an obtaining unit configured to obtain a plurality of branch network outputs according to the preprocessed data;
a fusion unit configured to fuse the plurality of branch network outputs to obtain a fused network output;
a calculation unit configured to respectively calculate a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network outputs, and a second mean square error and a second absolute value error between the segmented image data and the fused network output; and
a weighting unit configured to weight the first mean square error, the first absolute value error, the second mean square error and the second absolute value error to calculate a self-supervision total loss function of the self-supervision denoising network.
5. The scanning light field self-supervision network denoising apparatus according to claim 4, wherein the processing module further comprises:
an enhancement unit configured to rotate, flip and/or crop the scanned light field data to obtain enhanced data.
6. The scanning light field self-supervision network denoising apparatus according to claim 4, wherein the denoising module is further configured to, before obtaining the final self-supervision denoising network, input a test set obtained from the preprocessed data into the trained self-supervision denoising network and output a network result, so as to output the final self-supervision denoising network in a case where the network result meets a preset test condition, wherein the test set does not overlap with the data of the training set used for constructing the self-supervision denoising network and differs from it in size.
7. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the scanning light field self-supervision network denoising method according to any one of claims 1 to 3.
8. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the scanning light field self-supervision network denoising method according to any one of claims 1 to 3.
CN202410032352.9A 2024-01-09 2024-01-09 Scanning light field self-supervision network denoising method and device, electronic equipment and medium Active CN117541501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410032352.9A CN117541501B (en) 2024-01-09 2024-01-09 Scanning light field self-supervision network denoising method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN117541501A (en) 2024-02-09
CN117541501B (en) 2024-05-31

Family

ID=89792340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410032352.9A Active CN117541501B (en) 2024-01-09 2024-01-09 Scanning light field self-supervision network denoising method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117541501B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643189A (en) * 2020-04-27 2021-11-12 深圳市中兴微电子技术有限公司 Image denoising method, device and storage medium
CN111914997B (en) * 2020-06-30 2024-04-02 华为技术有限公司 Method for training neural network, image processing method and device
CN111861930A (en) * 2020-07-27 2020-10-30 京东方科技集团股份有限公司 Image denoising method and device, electronic equipment and image hyper-resolution denoising method
US20230394631A1 (en) * 2020-11-06 2023-12-07 Rensselaer Polytechnic Institute Noise2sim - similarity-based self-learning for image denoising
US20230013779A1 (en) * 2021-07-06 2023-01-19 GE Precision Healthcare LLC Self-supervised deblurring
US20230103638A1 (en) * 2021-10-06 2023-04-06 Google Llc Image-to-Image Mapping by Iterative De-Noising
US20230370104A1 (en) * 2022-05-13 2023-11-16 DeepSig Inc. Processing antenna signals using machine learning networks with self-supervised learning

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3109821A1 (en) * 2015-06-26 2016-12-28 Thomson Licensing Real-time light-field denoising
CN111640073A (en) * 2020-05-15 2020-09-08 哈尔滨工业大学 Image blind denoising system
CN113537025A (en) * 2021-07-08 2021-10-22 浙江工业大学 Electromagnetic modulation signal deep denoising method and system based on self-supervision learning
CN114155340A (en) * 2021-10-20 2022-03-08 清华大学 Reconstruction method and device of scanning light field data, electronic equipment and storage medium
WO2023201783A1 (en) * 2022-04-18 2023-10-26 清华大学 Light field depth estimation method and apparatus, and electronic device and storage medium
CN114897737A (en) * 2022-05-25 2022-08-12 南京邮电大学 Hyperspectral image denoising method based on non-paired unsupervised neural network
KR20230165686A (en) * 2022-05-27 2023-12-05 삼성전자주식회사 Method and electronic device for performing denosing processing on image data
CN115423946A (en) * 2022-11-02 2022-12-02 清华大学 Large scene elastic semantic representation and self-supervision light field reconstruction method and device
CN116385280A (en) * 2023-01-09 2023-07-04 爱芯元智半导体(上海)有限公司 Image noise reduction system and method and noise reduction neural network training method
CN116628421A (en) * 2023-05-19 2023-08-22 北京航空航天大学 IMU (inertial measurement Unit) original data denoising method based on self-supervision learning neural network model
CN116721017A (en) * 2023-06-20 2023-09-08 中国科学院生物物理研究所 Self-supervision microscopic image super-resolution processing method and system
CN117333398A (en) * 2023-10-26 2024-01-02 南京信息工程大学 Multi-scale image denoising method and device based on self-supervision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Qionghai Dai; Jingtao Fan; Jiamin Wu; Zhi Lu. "Adaptive scanning light-field microscopy: breaking the barrier of in vivo imaging". Frontier Science, no. 1, 2022-08-30, pp. 39-43 *
Lu Fang; Qionghai Dai. "Computational light field imaging". Acta Optica Sinica, vol. 40, no. 1, 2020-05-11, pp. 3-24 *
Dan Zhang; Fangfang Zhou. "Self-Supervised Image Denoising for Real-World Images With Context-Aware Transformer". IEEE Access, vol. 11, 2023, pp. 14340-14349 *
Yihui Feng; Xianming Liu; Yongbing Zhang; Qionghai Dai. 2017 IEEE International Conference on Image Processing (ICIP), 2017, pp. 4063-4067 *
Hongqiang Ma; Shiping Ma; Yuelei Xu; Chao Lyu; Mingming Zhu. "Adaptive image denoising based on an improved stacked sparse denoising autoencoder". Acta Optica Sinica, no. 10, 2018-05-11, pp. 128-135 *

Also Published As

Publication number Publication date
CN117541501A (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN110378348B (en) Video instance segmentation method, apparatus and computer-readable storage medium
CN109003297B (en) Monocular depth estimation method, device, terminal and storage medium
CN108961180B (en) Infrared image enhancement method and system
CN111385490B (en) Video splicing method and device
CN111968123A (en) Semi-supervised video target segmentation method
CN110443874B (en) Viewpoint data generation method and device based on convolutional neural network
CN111626960A (en) Image defogging method, terminal and computer storage medium
CN114140623A (en) Image feature point extraction method and system
CN116664446A (en) Lightweight dim light image enhancement method based on residual error dense block
CN115860091A (en) Depth feature descriptor learning method based on orthogonal constraint
US11783454B2 (en) Saliency map generation method and image processing system using the same
CN117541501B (en) Scanning light field self-supervision network denoising method and device, electronic equipment and medium
CN117197627B (en) Multi-mode image fusion method based on high-order degradation model
CN115810112A (en) Image processing method, image processing device, storage medium and electronic equipment
CN114926352B (en) Image antireflection method, system, device and storage medium
CN113947547B (en) Monte Carlo rendering graph noise reduction method based on multi-scale kernel prediction convolutional neural network
CN116385369A (en) Depth image quality evaluation method and device, electronic equipment and storage medium
CN111612690B (en) Image splicing method and system
CN111310916B (en) Depth system training method and system for distinguishing left and right eye pictures
CN111091144B (en) Image feature point matching method and device based on depth pseudo-twin network
EP2947626A1 (en) Method and apparatus for generating spanning tree, method and apparatus for stereo matching, method and apparatus for up-sampling, and method and apparatus for generating reference pixel
CN112102208A (en) Underwater image processing system, method, apparatus, and medium with edge preservation
CN114529514B (en) Depth data quality evaluation method and device based on graph structure
CN110363171A (en) The method of the training method and identification sky areas of sky areas prediction model
CN112017113B (en) Image processing method and device, model training method and device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant