CN117541501A - Scanning light field self-supervision network denoising method and device, electronic equipment and medium - Google Patents

Scanning light field self-supervision network denoising method and device, electronic equipment and medium

Info

Publication number
CN117541501A
Authority
CN
China
Prior art keywords
network
denoising
self
data
light field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410032352.9A
Other languages
Chinese (zh)
Other versions
CN117541501B (en)
Inventor
戴琼海
卢志
吴嘉敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202410032352.9A priority Critical patent/CN117541501B/en
Publication of CN117541501A publication Critical patent/CN117541501A/en
Application granted granted Critical
Publication of CN117541501B publication Critical patent/CN117541501B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0895Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to the field of computational imaging technologies, and in particular to a scanned light field self-supervised network denoising method, apparatus, electronic device, and storage medium. The method includes: acquiring scanned light field data; preprocessing the scanned light field data to obtain preprocessed data; and constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for self-supervised denoising of scanned light field data. This addresses the problems of the related art: reliance on high-quality data with identical or similar content limits data availability and increases acquisition difficulty and cost, while reliance on temporal information requires multi-frame input, so a single light field frame cannot be denoised, leading to poor responsiveness and reduced adaptability in practical applications.

Description

Scanning light field self-supervision network denoising method and device, electronic equipment and medium
Technical Field
The present disclosure relates to the field of computational imaging technologies, and in particular, to a scanned light field self-supervised network denoising method, apparatus, electronic device, and medium.
Background
Light fields and scanning light fields have attracted attention because they enable large-scale, rapid 3D imaging from a single image or a small number of captured images, and are widely used in biological applications such as 3D calcium imaging. The signal-to-noise ratio of the light field image is an important factor in the final 3D imaging quality: detection noise, dominated by photon shot noise, increases measurement uncertainty and can alter the morphological and functional interpretation of the underlying structure. Traditional filter-based denoising methods operating in the spatial or transform domain, including the median filter and the BM3D algorithm, obtain a noise-reduced image by sliding-window filtering of the light field image, but they are time-consuming and cause severe loss of structural detail.
In the related art, supervised deep learning denoising methods take light field or scanning light field data as input and use a high-resolution ground-truth image as supervision, so that the network learns the denoising process. Temporal deep learning denoising methods take a series of temporally continuous light field or scanning light field frames as input and use the imaging results of two temporally adjacent frames as input and supervision respectively, so that the network learns the denoising process without supervision.
However, the supervised methods require a high-resolution ground-truth image, and the dependence on high-quality data with identical or similar content easily limits data availability, further increases the difficulty of data acquisition, and raises cost.
Disclosure of Invention
The application provides a scanned light field self-supervised network denoising method, apparatus, electronic device, and storage medium, to solve the problems of the related art that reliance on high-quality data with identical or similar content easily limits data availability, further increases the difficulty of data acquisition, and raises cost, and that reliance on temporal information requires multi-frame input, so a single light field frame cannot be denoised, which easily leads to poor responsiveness and reduced adaptability in practical applications.
An embodiment of a first aspect of the present application provides a scanned light field self-supervised network denoising method, including the following steps: acquiring scanned light field data; preprocessing the scanned light field data to obtain preprocessed data; and constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for self-supervised denoising of scanned light field data.
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data includes: combining the multi-angle data in the scanned light field data according to various angular scanning arrangement orders to generate variously arranged light field image data; and/or rotating, flipping, and/or cropping the scanned light field data to obtain enhanced data.
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data further includes: segmenting the variously arranged light field image data to obtain multiple pairs of segmented image data of the same dimensions, so that each pair of segmented image data is used, within a single iteration, for multipath network input forwarding and as the multipath network targets or fusion target, and finally for back-propagation of the self-supervised total loss function.
Optionally, in an embodiment of the present application, constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached to obtain a final self-supervised denoising network includes: obtaining a plurality of branch network outputs from the preprocessed data; fusing the plurality of branch network outputs to obtain a fused network output; calculating a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fused network output, respectively; and weighting the first mean square error and the first absolute value error, and the second mean square error and the second absolute value error, to calculate the self-supervised total loss function of the self-supervised denoising network.
Optionally, in an embodiment of the present application, before obtaining the final self-supervised denoising network, the method further includes: inputting a test set obtained from the preprocessed data into the trained self-supervised denoising network and outputting a network result, so as to output the final self-supervised denoising network if the network result meets a preset test condition, wherein the test set does not overlap with the training set used to construct the self-supervised denoising network and differs in size.
An embodiment of a second aspect of the present application provides a scanned light field self-supervised network denoising apparatus, including: an acquisition module, configured to acquire scanned light field data; a processing module, configured to preprocess the scanned light field data to obtain preprocessed data; and a denoising module, configured to construct a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for self-supervised denoising of scanned light field data.
Optionally, in an embodiment of the present application, the processing module includes: a generating unit, configured to combine the multi-angle data in the scanned light field data according to various angular scanning arrangement orders to generate variously arranged light field image data; and/or an enhancement unit, configured to rotate, flip, and/or crop the scanned light field data to obtain enhanced data.
Optionally, in an embodiment of the present application, the processing module further includes: a segmentation module, configured to segment the variously arranged light field image data to obtain multiple pairs of segmented image data of the same dimensions, so that each pair of segmented image data is used, within a single iteration, for multipath network input forwarding and as the multipath network targets or fusion target, and finally for back-propagation of the self-supervised total loss function.
Optionally, in an embodiment of the present application, the denoising module includes: an acquisition unit, configured to obtain a plurality of branch network outputs from the preprocessed data; a fusion unit, configured to fuse the plurality of branch network outputs to obtain a fused network output; a calculation unit, configured to calculate a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fused network output, respectively; and a weighting unit, configured to weight the first mean square error and the first absolute value error, and the second mean square error and the second absolute value error, to calculate the self-supervised total loss function of the self-supervised denoising network.
Optionally, in an embodiment of the present application, the denoising module is further configured to, before the final self-supervised denoising network is obtained, input a test set obtained from the preprocessed data into the trained self-supervised denoising network and output a network result, so as to output the final self-supervised denoising network if the network result meets a preset test condition, wherein the test set does not overlap with the training set used to construct the self-supervised denoising network and differs in size.
An embodiment of a third aspect of the present application provides an electronic device, including: the system comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the program to realize the scanning light field self-supervision network denoising method according to the embodiment.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements a scanned light field self-supervised network denoising method as above.
According to the embodiments of the application, scanned light field data are acquired and preprocessed, and a self-supervised denoising network is constructed from the preprocessed data until a preset iteration stop condition is reached, yielding a final self-supervised denoising network for scanned light field denoising. This reduces dependence on data, widens the applicable range, improves performance, allows denoising from a single frame, and improves the flexibility and robustness of denoising under different structures and signal-to-noise ratios. It thereby solves the problems of the related art that reliance on high-quality data with identical or similar content easily limits data availability, further increases the difficulty of data acquisition, and raises cost, and that reliance on temporal information requires multi-frame input, so a single light field frame cannot be denoised, leading to poor responsiveness and reduced adaptability in practical applications.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a scanned light field self-supervised network denoising method according to one embodiment of the present application;
FIG. 2 is a flowchart of a scanned light field self-supervised network denoising method according to one embodiment of the present application;
FIG. 3 is a schematic diagram of the training-pair segmentation module and the self-supervised denoising network training process according to one embodiment of the present application;
FIG. 4 is a schematic comparison of a scanned light field original image with a network-denoised image according to one embodiment of the present application;
FIG. 5 is a schematic structural diagram of a scanned light field self-supervised network denoising apparatus according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
The following describes the scanned light field self-supervised network denoising method, apparatus, electronic device, and storage medium of the embodiments of the application with reference to the accompanying drawings, aiming at the problems mentioned in the Background that reliance on high-quality data with identical or similar content easily limits data availability, further increases the difficulty of data acquisition, and raises cost, and that multi-frame information is required so a single light field frame cannot be denoised, which easily leads to poor responsiveness and reduced adaptability in practical applications. The method and apparatus provided by the application solve these problems.
Before the scanned light field self-supervised network denoising method provided by the embodiments of the application is explained, the overall structure underlying the method is first described.
As shown in FIG. 1, the structure of the scanned light field self-supervised network denoising method comprises: a scanned light field data acquisition unit, a data preprocessing unit, a self-supervised denoising network training unit, and a self-supervised denoising network test unit.
The scanned light field data acquisition unit provides data for network training and testing; the data may be captured with a scanning optical microscope or downloaded from a public data set.
The data preprocessing unit comprises a data rearrangement function, a data enhancement function, and a training-pair segmentation function; the resulting data are used for self-supervised denoising network training or testing.
The data rearrangement function combines the multi-angle data into arranged image data according to various angular arrangement orders. The data enhancement function applies image transformations such as cropping, rotation, and flipping to the image data to form the input of the training-pair segmentation function. The training-pair segmentation function segments the variously arranged image data into multiple pairs of segmented image data of the same dimensions; within a single iteration each pair is used for multipath network input forwarding and as the multipath network targets or fusion target, and finally for back-propagation of the self-supervised total loss function.
The self-supervised denoising network training unit comprises a multipath network input forwarding function, a multipath output fusion function, and a self-supervised total loss function back-propagation function; together these three steps constitute one iteration.
The multipath network input forwarding function receives, through an input layer, any arranged image data output by the data preprocessing unit and passes it through a branch network to obtain the branch network output. The multipath output fusion function feeds all the multipath network outputs into the fusion network, whose forward pass yields the fused network output. The self-supervised total loss function back-propagation function computes the mean square error and absolute value error between the segmented image data and the branch network outputs, and between the segmented image data and the fused network output, weights them to form the self-supervised total loss function, and back-propagates it to update the network parameters. An iteration-number threshold is typically set; if this value has not been exceeded, training is considered not to have finished. While training is not finished, the next iteration begins: the next batch of data goes through multipath network input forwarding, the multipath output fusion function, and self-supervised total loss back-propagation.
The self-supervised denoising network test unit tests the final performance of the network on selected data after training is complete.
The selected data usually does not overlap with the data used by the training unit, and its size need not be the same; if the size differs, the selected data is cut into a number of overlapping crops that meet the size requirement of the network input layer, each crop is predicted by the network separately, and the predictions are then stitched together (a sketch of this procedure is given below).
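The following is a minimal sketch of this overlapping crop-and-stitch test procedure; the tile size, overlap, and `model` interface are illustrative assumptions rather than values specified by the application.

```python
import torch

def _positions(length, tile, step):
    """Start positions of tiles along one axis, always covering the far edge."""
    pos = list(range(0, max(length - tile, 0) + 1, step))
    if length > tile and pos[-1] != length - tile:
        pos.append(length - tile)
    return pos

def tiled_denoise(model, image, tile=256, overlap=32):
    """Denoise an image larger than the network input by overlapping crops.

    `image` is assumed to be a (C, H, W) tensor; overlapping regions of the
    stitched result are averaged.
    """
    h, w = image.shape[-2:]
    out = torch.zeros_like(image)
    weight = torch.zeros_like(image)
    step = tile - overlap
    with torch.no_grad():
        for top in _positions(h, tile, step):
            for left in _positions(w, tile, step):
                patch = image[..., top:top + tile, left:left + tile]
                pred = model(patch.unsqueeze(0)).squeeze(0)
                out[..., top:top + tile, left:left + tile] += pred
                weight[..., top:top + tile, left:left + tile] += 1
    return out / weight.clamp(min=1)
```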
Next, a method for denoising the scanned light field self-monitoring network in the embodiment of the present application will be described in detail.
Specifically, FIG. 2 is a schematic flowchart of a scanned light field self-supervised network denoising method according to an embodiment of the present application.
As shown in FIG. 2, the scanned light field self-supervised network denoising method comprises the following steps:
in step S201, scanned light field data is acquired.
It is understood that scanning light field data refers to a series of images or measurements of the light field sampled from different angles and positions.
Specifically, in the embodiment of the application the scanned light field data can be acquired by capturing with a scanning optical microscope. For example, a series of light field images of a zebrafish embryo under different viewing angles can be captured, containing information such as the direction, intensity, and phase of light at the surface of or inside the embryo.
Acquiring the scanned light field data with a scanning optical microscope improves the comprehensiveness and accuracy of the data and provides an accurate basis for subsequent operations.
In step S202, the scanned light field data is preprocessed to obtain preprocessed data.
It is understood that preprocessing includes data rearrangement, data enhancement, and the like.
According to the embodiment of the application, the data after preprocessing can be obtained by preprocessing the data of the scanning light field, such as data rearrangement and data enhancement, so that the correction of the data of the scanning light field is facilitated, the accuracy of the data is improved, and the definition, the signal-to-noise ratio and the detail visibility of the data are effectively improved, so that the data are more suitable for subsequent analysis and processing.
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data includes: combining the multi-angle data in the scanned light field data according to various angular scanning arrangement orders to generate variously arranged light field image data; and/or rotating, flipping, and/or cropping the scanned light field data to obtain enhanced data.
It is understood that multi-angle data refers to light field image information scanned from different angles or positions, including scene depth, texture details, etc.
Specifically, in the embodiment of the application immune cells may be scanned from different angles or positions and combined in different arrangement orders to obtain light field image data. For example, immune cells scanned from angles such as up, down, left, and right are combined in different arrangement modes (e.g. ABC, ACB, BAC) to generate several differently arranged sets of light field image data. The scanned image of the immune cells can be rotated and/or flipped, for example rotated 90 or 180 degrees clockwise or anticlockwise, or flipped horizontally or vertically, to generate enhanced data with different orientations and mirror symmetry. According to the size and position of the immune cells, a region of the scanned image can be selected and cropped, so that local regions of the immune cells are cut out, providing enhanced data of different sizes and positions.
Combining the multi-angle data in the scanned light field data according to various angular scanning arrangement orders to generate variously arranged light field image data, and rotating, flipping, and/or cropping the scanned light field data to obtain enhanced data, expands the training data set, increases the diversity and richness of the data, improves the robustness of image processing, and further improves the accuracy and reliability of the data.
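A minimal sketch of this rearrangement and enhancement step, assuming the scanned light field is stored as a stack of angular views of shape (S, X, Y); the function names and parameters are illustrative.

```python
import numpy as np

def rearrange_views(views, order):
    """Rearrange the angular views of a scanned light field in a given scan order.

    `views` is assumed to have shape (S, X, Y), one image per angle, and
    `order` is a permutation of range(S) describing one angular scan sequence.
    """
    return views[np.asarray(order)]

def augment(image, rot90_k=0, flip_h=False, flip_v=False, crop=None):
    """Rotate, flip, and/or crop a (rearranged) light field image."""
    out = np.rot90(image, k=rot90_k, axes=(-2, -1))
    if flip_h:
        out = out[..., ::-1]          # horizontal flip (last axis)
    if flip_v:
        out = out[..., ::-1, :]       # vertical flip (second-to-last axis)
    if crop is not None:
        top, left, height, width = crop
        out = out[..., top:top + height, left:left + width]
    return np.ascontiguousarray(out)
```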
Optionally, in an embodiment of the present application, preprocessing the scanned light field data to obtain preprocessed data further includes: segmenting the variously arranged light field image data to obtain multiple pairs of segmented image data of the same dimensions, so that each pair of segmented image data is used, within a single iteration, for multipath network input forwarding and as the multipath network targets or fusion target, and finally for back-propagation of the self-supervised total loss function.
In actual execution, the embodiment of the application segments the variously arranged image data to obtain multiple pairs of segmented image data of the same dimensions; within a single iteration each pair is used for multipath network input forwarding and as a multipath network target or fusion target, and finally for back-propagation of the self-supervised total loss function. Any image processing network can be selected for the multipath network; the arranged image data are input into the multipath network and passed forward to obtain the multipath network outputs.
Specifically, with reference to FIG. 3, in the embodiment of the present application a segmentation module may be added to the training-pair preparation, using a single noisy light field image to simulate the result of multiple imaging. The steps are as follows:
Step S301, selecting a segmentation dimension.
It can be understood that the core of a light field or scanning light field microscope system is a microlens array in front of the camera; the system ultimately forms four-dimensional image information with two angular dimensions and two spatial dimensions. After the data rearrangement module, the two angular dimensions are merged into an S dimension, and the two spatial dimensions are kept as the X and Y dimensions. To simulate the result of multiple imaging under the same noise model, a specific dimension can be selected for segmentation to obtain several groups of image results. The selection methods can be:
(1) Selecting the same space dimension X for each iteration;
(2) Selecting the same space dimension Y for each iteration;
(3) Randomly selecting any space dimension X or Y in each iteration;
(4) Simultaneously selecting space dimensions X and Y in each iteration;
(5) Selecting an angle dimension S for each iteration;
the different segmentation dimensions are adopted to obtain a plurality of groups of segmented images in the subsequent steps and are used for providing a plurality of corresponding training and targets for multi-path network input, wherein the segmented image of a selected segmentation dimension is also used as a fusion network target and is used for returning a self-supervision total loss function.
Step S302, obtaining a segmentation image.
It will be appreciated that, in order to obtain several groups of image results, the selected dimension needs to be segmented. For an arranged image $I$ and a selected segmentation dimension $D$, different image segmentation methods can be chosen according to the specific image data. The segmentation methods can be:
(1) Odd-even segmentation;
(2) Randomly cutting a sliding window;
(3) Cutting the average sequence;
wherein the result of the odd-even segmentation may be determined by the following expression:

$$I_{odd}(k) = I\big(D = 2k-1\big),\qquad I_{even}(k) = I\big(D = 2k\big),\qquad k = 1, \ldots, \left\lfloor \tfrac{Len(D)}{2} \right\rfloor,\qquad O = \big(I_{odd},\, I_{even}\big),$$

where $I$ is the input image to be segmented, $D$ is the selected segmentation dimension, $I_{odd}$ is the odd image after the odd-even segmentation, $I_{even}$ is the even image after the odd-even segmentation, $Len$ gives the length of a given dimension, and $O$ is the training pair obtained after segmentation.
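A minimal NumPy sketch of this odd-even segmentation along a chosen dimension (the axis numbering and the truncation to equal length are illustrative assumptions):

```python
import numpy as np

def parity_split(image, dim):
    """Split an arranged light field image into an odd/even training pair along `dim`.

    Elements at odd positions of the chosen dimension form one sub-image and
    elements at even positions form the other; both are truncated to the same
    length so the pair O = (I_odd, I_even) has identical dimensions.
    """
    length = image.shape[dim]
    i_odd = np.take(image, np.arange(0, length, 2), axis=dim)   # 1st, 3rd, ...
    i_even = np.take(image, np.arange(1, length, 2), axis=dim)  # 2nd, 4th, ...
    n = min(i_odd.shape[dim], i_even.shape[dim])
    return (np.take(i_odd, np.arange(n), axis=dim),
            np.take(i_even, np.arange(n), axis=dim))
```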
The result of the sliding-window random cut can be determined by the following expression:

$$W_j = \big\{(j-1)\,n + 1,\ \ldots,\ j\,n\big\},\qquad j = 1, \ldots, k,\qquad k = \frac{Len(D)}{n},$$

$$I_{random}^{\,i}(j) = I\big(D = shuffle(W_j)_i\big),\qquad O = \big(I_{random}^{\,1},\ \ldots,\ I_{random}^{\,n}\big),$$

where $i$ is the number of an element within a single sliding window, $I_{random}^{\,i}$ is the $i$-th randomly segmented image, $I$ is the input image to be segmented, $shuffle(W_j)_i$ is the element at the $i$-th position of the shuffled $j$-th sliding window $W_j$ in dimension $D$, $Len(D)$ gives the length of dimension $D$, $n$ is the sliding window size, $k$ is the total number of sliding windows required, $shuffle$ disrupts the order of a given set, $j$ is the sliding window number, and $O$ is the training pair obtained after segmentation.
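A sketch of the sliding-window random cut under the interpretation above; the window size and the way sub-images are gathered are assumptions rather than values fixed by the application.

```python
import numpy as np

def sliding_window_random_split(image, dim, window=2, rng=None):
    """Randomly split `image` along `dim` using non-overlapping sliding windows.

    The chosen dimension is divided into consecutive windows of `window`
    elements, the indices inside each window are shuffled, and the i-th
    shuffled index of every window is gathered into the i-th sub-image,
    yielding `window` sub-images of equal size.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_windows = image.shape[dim] // window
    idx = np.arange(n_windows * window).reshape(n_windows, window)
    for row in idx:                 # shuffle indices within each window
        rng.shuffle(row)
    return [np.take(image, idx[:, i], axis=dim) for i in range(window)]
```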
Average sequential segmentation refers to sequentially splitting the input image $I$ into several equal parts along the selected dimension $D$ to form a training pair $O$; this method cannot be used for self-supervised denoising network training and is only used for self-supervised denoising network testing. Therefore, after N dimensions are selected in step S301, the same number N of training pairs $O$ are obtained after step S302. The images in one of these training pairs serve as the fusion target in step S305, and the remaining N-1 training pairs serve as the inputs of step S303 for noise reduction.
Step S303, multipath network input forwarding.
For the N-1 training pairs obtained in step S302, each training pair uses one segmented image as input and the other segmented image as target; this yields N-1 multipath network inputs and N-1 multipath network targets. The N-1 inputs are fed into the same number, N-1, of networks for forward denoising processing, giving N-1 multipath network outputs.
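A minimal sketch of this multipath forwarding step; the list of branch networks (e.g. identical denoising networks) and the input/target pairing are illustrative assumptions.

```python
def multipath_forward(branch_nets, training_pairs):
    """Forward each of the N-1 branch inputs through its own denoising network.

    `training_pairs` is assumed to be a list of (input, target) tensors: one
    segmented image of each pair is the network input and the other its target.
    Returns the list of branch outputs and the matching list of targets.
    """
    outputs, targets = [], []
    for net, (inp, tgt) in zip(branch_nets, training_pairs):
        outputs.append(net(inp))
        targets.append(tgt)
    return outputs, targets
```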
Step S304, multipath network output fusion.
Selecting a specific dimension and segmenting the image in steps S301 to S302 may destroy information of the original light field image in the selected dimension; segmenting along several dimensions and referring to the unsegmented parts effectively compensates for the information destroyed by segmentation. In the embodiment of the application, the multipath output fusion module uses a deep learning image fusion network to fuse the N-1 multipath network outputs of step S303 into a single fused network output.
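An illustrative fusion module, assuming the branch outputs are single-channel images fused by channel concatenation followed by a small convolutional stack; any deep learning image fusion network could take its place.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Fuse the N-1 branch outputs into a single fused network output."""

    def __init__(self, n_branches, channels=1, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_branches * channels, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, kernel_size=3, padding=1),
        )

    def forward(self, branch_outputs):
        # Concatenate the branch outputs along the channel axis and fuse them.
        return self.body(torch.cat(branch_outputs, dim=1))
```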
Step S305: a self-supervising total loss function is calculated.
Specifically, the self-supervised total loss function value may be determined by the following expression:

$$L = \lambda_{m}\,\frac{1}{n}\sum_{i=1}^{n} L_{m}\big(O_{i},\, T_{i}\big) \;+\; \lambda_{f}\, L_{f}\big(Output,\, Target\big) \;+\; \lambda_{s}\,\frac{1}{n}\sum_{i=1}^{n} S\big(Output,\, O_{i}\big),$$

where $L$ is the self-supervised total loss function, $\lambda_{m}$ is the weight of the multipath loss, $n$ is the number of branches, $O_{i}$ is the $i$-th multipath network output, $T_{i}$ is the $i$-th group multipath network target, $L_{m}$ is the multipath loss function, which can be understood as the average of the L1 and L2 loss functions between the multipath network output and the multipath network target, $\lambda_{f}$ is the weight of the fusion loss, $Output$ is the fused network output, $Target$ is the fusion target, $L_{f}$ is the fusion loss function, which can be understood as the average of the L1 and L2 loss functions between the fused network output and the fusion target, $\lambda_{s}$ is the weight of the multipath similarity loss, and $S$ is the similarity loss function, which can be understood as the average of the L1 and L2 loss functions between the fused network output and the multipath network outputs.
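A PyTorch sketch of this total loss following the expression above; the loss-term weights are illustrative hyperparameters, not values given by the application.

```python
import torch.nn.functional as F

def l1_l2(a, b):
    """Average of the L1 and L2 (mean square) losses between two tensors."""
    return 0.5 * (F.l1_loss(a, b) + F.mse_loss(a, b))

def self_supervised_total_loss(branch_outputs, branch_targets,
                               fused_output, fusion_target,
                               w_multi=1.0, w_fuse=1.0, w_sim=1.0):
    """Weighted sum of the multipath loss, fusion loss, and similarity loss."""
    n = len(branch_outputs)
    multi = sum(l1_l2(o, t) for o, t in zip(branch_outputs, branch_targets)) / n
    fuse = l1_l2(fused_output, fusion_target)
    sim = sum(l1_l2(fused_output, o) for o in branch_outputs) / n
    return w_multi * multi + w_fuse * fuse + w_sim * sim
```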
In the embodiment of the application, the multiple segmentation modes introduce the multi-dimensional information of the light field image and are combined with Noise2Noise-style deep learning training, realizing self-supervised scanned light field network denoising. No ground-truth image is needed for supervision during training, which reduces acquisition time and improves robustness to the light field image data.
In step S203, a self-supervised denoising network is constructed from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for self-supervised denoising of scanned light field data.
It is understood that the preset iteration stop condition may be that the number of iterations reaches a preset iteration number threshold, for example, the number of iterations reaches 1000 times, etc.
According to the embodiment of the application, the self-supervision denoising network can be constructed according to the preprocessed data until the preset iteration stop condition is reached, so that the final self-supervision denoising network is obtained, the image distortion can be effectively reduced, the detail and texture information of an original image can be reserved, and the image quality is further improved.
Optionally, in an embodiment of the present application, constructing a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached to obtain a final self-supervised denoising network includes: obtaining a plurality of branch network outputs from the preprocessed data; fusing the plurality of branch network outputs to obtain a fused network output; calculating a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fused network output, respectively; and weighting the first mean square error and the first absolute value error, and the second mean square error and the second absolute value error, to calculate the self-supervised total loss function of the self-supervised denoising network.
Specifically, with reference to FIG. 3, in the embodiment of the present application the back-propagation of the self-supervised total loss function proceeds as follows: a weighted sum of the first mean square error and first absolute value error between the segmented image data and the branch network outputs is computed as the multipath loss; a weighted sum of the second mean square error and second absolute value error with respect to the fused network output is computed as the fusion loss; finally, the weighted sum of the multipath loss and the fusion loss forms the self-supervised total loss function, which is back-propagated to update the network parameters. Any image fusion network may be selected; all the multipath network outputs are fed into it, and the fused network output is obtained in the forward pass.
The self-supervised total loss function can be expressed as follows:

$$L = \lambda_{m}\,\frac{1}{n}\sum_{i=1}^{n} L_{m}\big(O_{i},\, T_{i}\big) \;+\; \lambda_{f}\, L_{f}\big(Output,\, Target\big) \;+\; \lambda_{s}\,\frac{1}{n}\sum_{i=1}^{n} S\big(Output,\, O_{i}\big),$$

where $L$ is the self-supervised total loss function, $\lambda_{m}$ is the weight of the multipath loss, $n$ is the number of branches, $O_{i}$ is the $i$-th multipath network output, $T_{i}$ is the $i$-th group multipath network target, $L_{m}$ is the multipath loss function (the average of the L1 and L2 loss functions between the multipath network output and the multipath network target), $\lambda_{f}$ is the weight of the fusion loss, $Output$ is the fused network output, $Target$ is the fusion target, $L_{f}$ is the fusion loss function (the average of the L1 and L2 loss functions between the fused network output and the fusion target), $\lambda_{s}$ is the weight of the multipath similarity loss, and $S$ is the similarity loss function (the average of the L1 and L2 loss functions between the fused network output and the multipath network outputs).
In the embodiment of the application, a plurality of branch network outputs are obtained from the preprocessed data and fused into a single fused network output, and the self-supervised total loss function of the self-supervised denoising network is constructed by calculating the errors between the segmented image data and the corresponding branch network outputs and between the segmented image data and the fused network output. This improves the robustness of the network output, optimizes the performance of the self-supervised denoising network, and improves the accuracy and precision of denoising.
Optionally, in one embodiment of the present application, before obtaining the final self-supervised denoising network, the method further includes: inputting a test set obtained from the preprocessed data into a trained self-supervision denoising network, and outputting a network result to output a final self-supervision denoising network under the condition that the network result meets a preset test condition, wherein the test set is not overlapped with the data of the training set for constructing the self-supervision denoising network and is different in size.
It is understood that the preset test condition refers to a standard for evaluating the self-supervised denoising network, such as an image quality evaluation index.
In the actual implementation process, as shown in FIG. 4, the embodiment of the application constructs a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, obtaining the final self-supervised denoising network. In FIG. 4, the upper images are, from left to right, the central 81-angle original light field image and the central 81-angle denoised light field image, and the lower images are, from left to right, the original center-view image and the denoised center-view image.
Specifically, in the embodiment of the application a scanning light field instrument (scanning magnification 3, 13×13 pixels behind each microlens) is used to capture zebrafish embryo data, which are then rearranged and enhanced to generate 4,900 zebrafish embryo images of a fixed size for self-supervised denoising network training. The self-supervised denoising network is built on the PyTorch deep learning framework in the Python programming language: after the input layer, the network input is resized to the target size using bilinear interpolation and then passed through a U-Net. The network is trained with a fixed initial learning rate and a training batch size of 1, using the Adam optimizer for back-propagation and iterative optimization over a total of 98,000 iterations. Multi-angle test images can then be input into the trained self-supervised denoising network to obtain denoised images.
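A sketch of this network and training setup, assuming a standard `UNet` implementation is available; the target size, channel counts, and learning rate are assumptions (the application specifies the Adam optimizer, batch size 1, and 98,000 iterations, but the exact learning rate and image size are not reproduced here).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResizeUNetDenoiser(nn.Module):
    """Bilinear resize of the input to a target size, followed by a U-Net."""

    def __init__(self, unet: nn.Module, target_size=(256, 256)):
        super().__init__()
        self.unet = unet
        self.target_size = target_size

    def forward(self, x):
        x = F.interpolate(x, size=self.target_size, mode="bilinear",
                          align_corners=False)
        return self.unet(x)

# Illustrative training loop skeleton (UNet, loader, and lr are assumptions):
# model = ResizeUNetDenoiser(UNet(in_channels=1, out_channels=1))
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr
# for step, batch in enumerate(loader):                      # batch size 1
#     optimizer.zero_grad()
#     loss = self_supervised_total_loss(...)  # see the loss sketch above
#     loss.backward()
#     optimizer.step()
#     if step + 1 >= 98_000:
#         break
```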
In the embodiment of the application, the test set obtained from the preprocessed data is input into the trained self-supervised denoising network and the network result is output, so that the final self-supervised denoising network is output when the network result meets the preset test condition. In this way noise can be effectively removed while image details are retained, and the stability and adaptability of denoising are improved.
According to the scanned light field self-supervised network denoising method provided by the embodiment of the application, scanned light field data are acquired and preprocessed, and a self-supervised denoising network is constructed from the preprocessed data until a preset iteration stop condition is reached, yielding a final self-supervised denoising network for scanned light field denoising. This reduces dependence on data, widens the applicable range, improves performance, allows denoising from a single frame, and improves the flexibility and robustness of denoising under different structures and signal-to-noise ratios. It thereby solves the problems of the related art that reliance on high-quality data with identical or similar content easily limits data availability, further increases the difficulty of data acquisition, and raises cost, and that reliance on temporal information requires multi-frame input, so a single light field frame cannot be denoised, leading to poor responsiveness and reduced adaptability in practical applications.
The scanned light field self-supervised network denoising apparatus according to the embodiment of the application is described below with reference to the accompanying drawings.
FIG. 5 is a schematic structural diagram of a scanned light field self-supervised network denoising apparatus according to an embodiment of the present application.
As shown in FIG. 5, the scanned light field self-supervised network denoising apparatus 10 includes: an acquisition module 100, a processing module 200, and a denoising module 300.
Specifically, the acquiring module 100 is configured to acquire scanned light field data.
The processing module 200 is configured to pre-process the scanned light field data to obtain pre-processed data.
The denoising module 300 is configured to construct a self-supervised denoising network from the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervised denoising network used for self-supervised denoising of scanned light field data.
Optionally, in one embodiment of the present application, the processing module 200 includes: a generating unit and/or an enhancing unit.
The generating unit is used for combining the multi-angle data in the scanned light field data according to various angular scanning arrangement sequences to generate various light field image data with different arrangements;
and/or the enhancement unit is configured to rotate, flip, and/or crop the scanned light field data to obtain enhanced data.
Optionally, in one embodiment of the present application, the processing module 200 further includes: and a segmentation module.
The segmentation module is configured to segment the variously arranged light field image data to obtain multiple pairs of segmented image data of the same dimensions, so that each pair of segmented image data is used, within a single iteration, for multipath network input forwarding and as the multipath network targets or fusion target, and finally for back-propagation of the self-supervised total loss function.
Optionally, in one embodiment of the present application, the denoising module 300 includes: the device comprises an acquisition unit, a fusion unit, a calculation unit and a weighting unit.
The acquisition unit is used for obtaining a plurality of branch network outputs according to the preprocessed data;
the fusion unit is used for fusing the multiple branch network outputs to obtain fused network outputs;
a calculation unit, configured to calculate a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network output, and a second mean square error and a second absolute value error between the segmented image data and the fused network output, respectively;
and a weighting unit, configured to weight the first mean square error and the first absolute value error, and the second mean square error and the second absolute value error, and calculate the self-supervised total loss function of the self-supervised denoising network.
Optionally, in an embodiment of the present application, the denoising module is further configured to, before obtaining the final self-supervised denoising network, input a test set obtained from the preprocessed data to the trained self-supervised denoising network, and output a network result, so as to output the final self-supervised denoising network if the network result meets a preset test condition, where the test set is not coincident with the data of the training set that constructs the self-supervised denoising network and has a different size.
It should be noted that the foregoing explanation of the scanned light field self-supervised network denoising method embodiment also applies to the scanned light field self-supervised network denoising apparatus of this embodiment, and is not repeated here.
According to the scanned light field self-supervised network denoising apparatus provided by the embodiment of the application, scanned light field data are acquired and preprocessed, and a self-supervised denoising network is constructed from the preprocessed data until a preset iteration stop condition is reached, yielding a final self-supervised denoising network for scanned light field denoising. This reduces dependence on data, widens the applicable range, improves performance, allows denoising from a single frame, and improves the flexibility and robustness of denoising under different structures and signal-to-noise ratios. It thereby solves the problems of the related art that reliance on high-quality data with identical or similar content easily limits data availability, further increases the difficulty of data acquisition, and raises cost, and that reliance on temporal information requires multi-frame input, so a single light field frame cannot be denoised, leading to poor responsiveness and reduced adaptability in practical applications.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602.
The processor 602 implements the scan light field self-monitoring network denoising method provided in the above embodiment when executing a program.
Further, the electronic device further includes:
a communication interface 603 for communication between the memory 601 and the processor 602.
A memory 601 for storing a computer program executable on the processor 602.
The memory 601 may comprise a high-speed RAM memory or may further comprise a non-volatile memory (non-volatile memory), such as at least one disk memory.
If the memory 601, the processor 602, and the communication interface 603 are implemented independently, the communication interface 603, the memory 601, and the processor 602 may be connected to each other through a bus and communicate with each other. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, only one thick line is shown in FIG. 6, but this does not mean there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 601, the processor 602, and the communication interface 603 are integrated on a chip, the memory 601, the processor 602, and the communication interface 603 may perform communication with each other through internal interfaces.
The processor 602 may be a central processing unit (Central Processing Unit, abbreviated as CPU), or an application specific integrated circuit (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more integrated circuits configured to implement embodiments of the present application.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a scanned light field self-supervised network denoising method as above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "N" is at least two, such as two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or N wires, a portable computer cartridge (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the N steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (12)

1. A scanning light field self-supervision network denoising method, characterized by comprising the following steps:
acquiring scanned light field data;
preprocessing the scanned light field data to obtain preprocessed data; and
constructing a self-supervision denoising network according to the preprocessed data until a preset iteration stop condition is reached, to obtain a final self-supervision denoising network for scanning light field self-supervision network denoising.
2. The scanning light field self-supervision network denoising method according to claim 1, wherein preprocessing the scanned light field data to obtain the preprocessed data comprises:
combining multi-angle data in the scanned light field data according to a plurality of angular scanning arrangement orders to generate a plurality of differently arranged light field image data;
and/or rotating, flipping, and/or cropping the scanned light field data to obtain augmented data.
3. The scanning light field self-supervision network denoising method according to claim 2, wherein preprocessing the scanned light field data to obtain the preprocessed data further comprises:
segmenting the plurality of differently arranged light field image data to obtain multiple pairs of segmented image data of the same dimensions, so that in a single iteration each pair of segmented image data is used, respectively, for multi-branch network input forwarding and as multi-branch network targets or fusion targets, for feedback of the self-supervision total loss function (an illustrative sketch of this splitting step follows the claims).
4. The scanning light field self-supervision network denoising method according to claim 3, wherein constructing the self-supervision denoising network according to the preprocessed data until the preset iteration stop condition is reached, to obtain the final self-supervision denoising network, comprises:
obtaining a plurality of branch network outputs according to the preprocessed data;
fusing the plurality of branch network outputs to obtain a fused network output;
respectively calculating a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network outputs, and a second mean square error and a second absolute value error between the segmented image data and the fused network output; and
weighting the first mean square error and the first absolute value error, and the second mean square error and the second absolute value error, to calculate a self-supervision total loss function of the self-supervision denoising network (an illustrative sketch of this loss follows the claims).
5. The scanning light field self-supervision network denoising method according to claim 1, further comprising, before obtaining the final self-supervision denoising network:
inputting a test set obtained from the preprocessed data into the trained self-supervision denoising network and outputting a network result, so as to output the final self-supervision denoising network when the network result meets a preset test condition, wherein the test set does not overlap with the training set used for constructing the self-supervision denoising network and differs from it in size (an illustrative training-and-test sketch follows the claims).
6. A scanning light field self-supervision network denoising apparatus, comprising:
an acquisition module, configured to acquire scanned light field data;
a processing module, configured to preprocess the scanned light field data to obtain preprocessed data; and
a denoising module, configured to construct a self-supervision denoising network according to the preprocessed data until a preset iteration stop condition is reached, so as to obtain a final self-supervision denoising network for scanning light field self-supervision network denoising.
7. The scanning light field self-supervision network denoising apparatus according to claim 6, wherein the processing module comprises:
a generating unit, configured to combine multi-angle data in the scanned light field data according to a plurality of angular scanning arrangement orders to generate a plurality of differently arranged light field image data;
and/or an augmentation unit, configured to rotate, flip, and/or crop the scanned light field data to obtain augmented data.
8. The scanning light field self-supervision network denoising apparatus according to claim 7, wherein the processing module further comprises:
a segmentation unit, configured to segment the plurality of differently arranged light field image data to obtain multiple pairs of segmented image data of the same dimensions, so that in a single iteration each pair of segmented image data is used, respectively, for multi-branch network input forwarding and as multi-branch network targets or fusion targets, for feedback of the self-supervision total loss function.
9. The scanning light field self-supervision network denoising apparatus according to claim 8, wherein the denoising module comprises:
an acquisition unit, configured to obtain a plurality of branch network outputs according to the preprocessed data;
a fusion unit, configured to fuse the plurality of branch network outputs to obtain a fused network output;
a calculation unit, configured to respectively calculate a first mean square error and a first absolute value error between the segmented image data and the corresponding branch network outputs, and a second mean square error and a second absolute value error between the segmented image data and the fused network output; and
a weighting unit, configured to weight the first mean square error and the first absolute value error, and the second mean square error and the second absolute value error, to calculate a self-supervision total loss function of the self-supervision denoising network.
10. The scanning light field self-supervision network denoising apparatus according to claim 6, wherein the denoising module is further configured to, before obtaining the final self-supervision denoising network, input a test set obtained from the preprocessed data into the trained self-supervision denoising network and output a network result, so as to output the final self-supervision denoising network when the network result meets a preset test condition, wherein the test set does not overlap with the training set used for constructing the self-supervision denoising network and differs from it in size.
11. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the scanning light field self-supervision network denoising method as claimed in any one of claims 1 to 5.
12. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the scanning light field self-supervision network denoising method as claimed in any one of claims 1 to 5.
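
Illustrative sketches (not part of the claims)

Purely to illustrate the preprocessing recited in claims 2 and 3 (and their apparatus counterparts in claims 7 and 8), the following minimal Python/NumPy sketch rearranges the angular scanning views, augments them, and splits each arrangement into a same-sized pair. The (S, H, W) stack layout, the choice of arrangement orders, the 90-degree rotations, flips and centre crop, the even/odd split along the scanning axis, and every function name are assumptions made for illustration, not the patented procedure.

    import numpy as np

    def make_arrangements(views, orders):
        """Re-stack the (S, H, W) multi-angle views according to several angular
        scanning arrangement orders, yielding differently arranged light field stacks."""
        return [views[np.asarray(order)] for order in orders]

    def augment(stack, rng):
        """Random 90-degree rotation, flips, and a centre crop of an (S, H, W) stack."""
        stack = np.rot90(stack, k=int(rng.integers(0, 4)), axes=(1, 2))
        if rng.random() < 0.5:
            stack = stack[:, ::-1, :]    # vertical flip
        if rng.random() < 0.5:
            stack = stack[:, :, ::-1]    # horizontal flip
        h, w = stack.shape[1:]
        return stack[:, h // 4:h // 4 + h // 2, w // 4:w // 4 + w // 2].copy()

    def split_into_pair(stack):
        """Split one arranged stack into two same-sized halves along the scanning
        axis (even- vs odd-indexed views): one half is a branch input, the other
        serves as its target."""
        return stack[0::2], stack[1::2]

    # Toy usage: 26 scanning positions of a 128 x 128 frame, two arrangement orders.
    rng = np.random.default_rng(0)
    views = rng.random((26, 128, 128), dtype=np.float32)
    orders = [np.arange(26), rng.permutation(26)]
    pairs = [split_into_pair(augment(a, rng)) for a in make_arrangements(views, orders)]

Each element of pairs here would supply one branch input and the corresponding branch target in a single training iteration.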
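Claims 4 and 9 describe the self-supervision total loss as a weighted combination of per-branch and fused-output mean square and absolute value errors. The PyTorch sketch below is one possible reading of that combination; the shared-weight network applied to every branch, fusion by simple averaging, the fusion target, and the default weight values are all assumptions.

    import torch
    import torch.nn.functional as F

    def self_supervised_total_loss(net, branch_inputs, branch_targets,
                                   w_branch=(1.0, 1.0), w_fused=(1.0, 1.0)):
        """Weighted self-supervised total loss over all network branches.

        net            : denoising network shared by every branch (an assumption)
        branch_inputs  : list of (B, C, H, W) tensors, one per branch
        branch_targets : list of (B, C, H, W) tensors, one per branch
        w_branch       : (MSE, L1) weights for the per-branch terms
        w_fused        : (MSE, L1) weights for the fused-output terms
        """
        outputs = [net(x) for x in branch_inputs]             # multi-branch forward
        fused_out = torch.stack(outputs).mean(dim=0)          # fuse by averaging (assumed)
        fused_tgt = torch.stack(branch_targets).mean(dim=0)   # fusion target (assumed)

        loss = fused_out.new_zeros(())
        for out, tgt in zip(outputs, branch_targets):
            loss = loss + w_branch[0] * F.mse_loss(out, tgt)  # first mean square error
            loss = loss + w_branch[1] * F.l1_loss(out, tgt)   # first absolute value error
        loss = loss + w_fused[0] * F.mse_loss(fused_out, fused_tgt)  # second mean square error
        loss = loss + w_fused[1] * F.l1_loss(fused_out, fused_tgt)   # second absolute value error
        return loss

In each iteration this scalar loss would be backpropagated to update the denoising network, as in the training sketch that follows.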
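Claims 1, 5, and 10 train the network until a preset iteration stop condition is reached and then check a non-overlapping test set against a preset test condition before outputting the final network. The sketch below uses a fixed iteration budget and a mean-PSNR threshold purely as stand-ins for those conditions (both are assumptions, as is intensity data normalized to [0, 1]); it reuses self_supervised_total_loss from the previous sketch.

    import torch
    import torch.nn.functional as F

    def build_final_network(net, optimizer, train_batches, test_batches,
                            max_iters=20_000, psnr_threshold=30.0):
        """Train to the preset iteration stop condition, then gate on a held-out,
        non-overlapping test set before accepting the final network."""
        it = 0
        while it < max_iters:
            for inputs, targets in train_batches:       # lists of branch tensors
                optimizer.zero_grad()
                self_supervised_total_loss(net, inputs, targets).backward()
                optimizer.step()
                it += 1
                if it >= max_iters:                      # preset iteration stop condition
                    break

        net.eval()
        with torch.no_grad():
            psnrs = []
            for inputs, targets in test_batches:
                fused = torch.stack([net(x) for x in inputs]).mean(dim=0)
                mse = F.mse_loss(fused, torch.stack(targets).mean(dim=0))
                psnrs.append(-10.0 * torch.log10(mse))   # PSNR for [0, 1] data
            passed = bool(torch.stack(psnrs).mean() >= psnr_threshold)
        return net, passed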
CN202410032352.9A 2024-01-09 2024-01-09 Scanning light field self-supervision network denoising method and device, electronic equipment and medium Active CN117541501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410032352.9A CN117541501B (en) 2024-01-09 2024-01-09 Scanning light field self-supervision network denoising method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN117541501A true CN117541501A (en) 2024-02-09
CN117541501B CN117541501B (en) 2024-05-31

Family

ID=89792340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410032352.9A Active CN117541501B (en) 2024-01-09 2024-01-09 Scanning light field self-supervision network denoising method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117541501B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3109821A1 (en) * 2015-06-26 2016-12-28 Thomson Licensing Real-time light-field denoising
US20230230206A1 (en) * 2020-04-27 2023-07-20 Sanechips Technology Co., Ltd. Image denoising method and apparatus, electronic device, and storage medium
CN111640073A (en) * 2020-05-15 2020-09-08 哈尔滨工业大学 Image blind denoising system
US20230177641A1 (en) * 2020-06-30 2023-06-08 Huawei Technologies Co., Ltd. Neural network training method, image processing method, and apparatus
US20220028041A1 (en) * 2020-07-27 2022-01-27 Boe Technology Group Co., Ltd. Image denoising method and apparatus, electronic device and non-transitory computer readalble storage medium
US20230394631A1 (en) * 2020-11-06 2023-12-07 Rensselaer Polytechnic Institute Noise2sim - similarity-based self-learning for image denoising
US20230013779A1 (en) * 2021-07-06 2023-01-19 GE Precision Healthcare LLC Self-supervised deblurring
CN113537025A (en) * 2021-07-08 2021-10-22 浙江工业大学 Electromagnetic modulation signal deep denoising method and system based on self-supervision learning
US20230103638A1 (en) * 2021-10-06 2023-04-06 Google Llc Image-to-Image Mapping by Iterative De-Noising
CN114155340A (en) * 2021-10-20 2022-03-08 清华大学 Reconstruction method and device of scanning light field data, electronic equipment and storage medium
WO2023201783A1 (en) * 2022-04-18 2023-10-26 清华大学 Light field depth estimation method and apparatus, and electronic device and storage medium
US20230370104A1 (en) * 2022-05-13 2023-11-16 DeepSig Inc. Processing antenna signals using machine learning networks with self-supervised learning
CN114897737A (en) * 2022-05-25 2022-08-12 南京邮电大学 Hyperspectral image denoising method based on non-paired unsupervised neural network
KR20230165686A (en) * 2022-05-27 2023-12-05 삼성전자주식회사 Method and electronic device for performing denosing processing on image data
CN115423946A (en) * 2022-11-02 2022-12-02 清华大学 Large scene elastic semantic representation and self-supervision light field reconstruction method and device
CN116385280A (en) * 2023-01-09 2023-07-04 爱芯元智半导体(上海)有限公司 Image noise reduction system and method and noise reduction neural network training method
CN116628421A (en) * 2023-05-19 2023-08-22 北京航空航天大学 IMU (inertial measurement Unit) original data denoising method based on self-supervision learning neural network model
CN116721017A (en) * 2023-06-20 2023-09-08 中国科学院生物物理研究所 Self-supervision microscopic image super-resolution processing method and system
CN117333398A (en) * 2023-10-26 2024-01-02 南京信息工程大学 Multi-scale image denoising method and device based on self-supervision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DAN ZHANG; FANGFANG ZHOU: "Self-Supervised Image Denoising for Real-World Images With Context-Aware Transformer", IEEE Access, vol. 11, 10 February 2023 (2023-02-10), pages 14340 *
YIHUI FENG; XIANMING LIU; YONGBING ZHANG; QIONGHAI DAI, 2017 IEEE International Conference on Image Processing (ICIP), 17 September 2017 (2017-09-17), pages 4063 - 4067 *
DAI QIONGHAI; FAN JINGTAO; WU JIAMIN; LU ZHI: "Adaptive scanning light-field microscopy: breaking the barriers of in vivo imaging", Frontier Science, no. 1, 30 August 2022 (2022-08-30), pages 39 - 43 *
FANG LU; DAI QIONGHAI: "Computational light field imaging", Acta Optica Sinica, vol. 40, no. 1, 11 May 2020 (2020-05-11), pages 3 - 24 *
MA HONGQIANG; MA SHIPING; XU YUELEI; LYU CHAO; ZHU MINGMING: "Adaptive image denoising based on improved stacked sparse denoising autoencoders", Acta Optica Sinica, no. 10, 11 May 2018 (2018-05-11), pages 128 - 135 *

Also Published As

Publication number Publication date
CN117541501B (en) 2024-05-31

Similar Documents

Publication Publication Date Title
CN110245659B (en) Image salient object segmentation method and device based on foreground and background interrelation
CN110378348B (en) Video instance segmentation method, apparatus and computer-readable storage medium
CN111968123B (en) Semi-supervised video target segmentation method
CN108961180B (en) Infrared image enhancement method and system
CN111462012A (en) SAR image simulation method for generating countermeasure network based on conditions
CN111385490B (en) Video splicing method and device
CN112102182A (en) Single image reflection removing method based on deep learning
CN110443874B (en) Viewpoint data generation method and device based on convolutional neural network
CN111626960A (en) Image defogging method, terminal and computer storage medium
US6049625A (en) Method of and an apparatus for 3-dimensional structure estimation
CN114140623A (en) Image feature point extraction method and system
CN113610905A (en) Deep learning remote sensing image registration method based on subimage matching and application
CN116664446A (en) Lightweight dim light image enhancement method based on residual error dense block
CN117351448B (en) Improved polarized image road target detection method based on YOLOv8
CN114299358A (en) Image quality evaluation method and device, electronic equipment and machine-readable storage medium
Kim et al. Infrared and visible image fusion using a guiding network to leverage perceptual similarity
CN117541501B (en) Scanning light field self-supervision network denoising method and device, electronic equipment and medium
CN115810112A (en) Image processing method, image processing device, storage medium and electronic equipment
CN114926352B (en) Image antireflection method, system, device and storage medium
CN116912391A (en) Reverse rendering method and device combining nerve radiation field and steerable path tracking
CN113947547B (en) Monte Carlo rendering graph noise reduction method based on multi-scale kernel prediction convolutional neural network
CN114862695A (en) Single-image rain and fog removing method and equipment based on scene depth and storage medium
CN111310916B (en) Depth system training method and system for distinguishing left and right eye pictures
CN112102208B (en) Underwater image processing system, method, apparatus, and medium with edge preservation
CN114972937A (en) Feature point detection and descriptor generation method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant