CN115984146A - Global consistency-based marine chlorophyll concentration image completion method and network - Google Patents
- Publication number: CN115984146A (application CN202310250397.9A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention belongs to the technical field of image processing and discloses a global consistency-based marine chlorophyll concentration image completion method and network. The invention solves the problems that the prior art neither considers the conditions at other moments nor fully utilizes the constraint of global consistency.
Description
Technical Field
The invention belongs to the technical field of image processing, relates to an image completion method, and particularly relates to a method and a network for completing an ocean chlorophyll concentration image based on global consistency.
Background
Using deep neural networks to complete spatio-temporal fields of sea-surface chlorophyll concentration is an important recent attempt to apply artificial intelligence to ocean science. Traditional methods are generally based on generative models, such as generative adversarial networks, or on reconstruction models, such as encoder-decoder neural networks, and use historical marine data to estimate the completed marine chlorophyll concentration field directly. Such methods suffer from unstable and inaccurate completion, for two reasons: on the one hand, they usually complete the missing region directly in a single pass; on the other hand, the numerical range of the chlorophyll concentration field is usually very large, and directly estimating values over such a large range makes the completion process very unstable and makes the value at each point difficult to estimate accurately, which limits the completion accuracy.
At present, leading-edge deep-learning methods for marine chlorophyll concentration completion adopt a coarse-to-fine, mostly two-stage completion mechanism, whose overall structure is shown in fig. 1. First, the marine chlorophyll concentration monthly mean is used to estimate a weekly mean. Then, the deviation of the missing part is estimated from the deviation between the non-missing part of the daily marine chlorophyll concentration data to be completed and the weekly mean, and this estimated deviation is superposed on the weekly mean to obtain the completed daily chlorophyll concentration field. In this way, both global consistency and local fine-grained accuracy are considered during image completion, alleviating the unstable and inaccurate completion caused by traditional deep-learning marine image completion methods that complete the image directly.
However, this method has the following problems. First, the local modeling process uses only the non-missing data of the day to be completed and ignores the conditions at other moments, which reduces the reliability of the result; in marine images, the magnitude of the chlorophyll concentration field at a single moment is affected by multiple previous moments. Second, the completion result is obtained by directly superposing the bias estimated for the missing part onto the weekly mean; this fusion mechanism does not fully utilize the constraint of global consistency and easily introduces local noise interference. Concretely, when the missing-area data are estimated from the pattern of deviations between the non-missing data and the weekly mean, the result depends almost entirely on the non-missing data. In a marine environment, the chlorophyll concentration field is often disturbed by uncontrollable factors, such as sudden typhoons or ocean eddies, so the non-missing data carry a large amount of noise that interferes with the local estimation of the missing part. The method then directly superposes the locally estimated result onto the weekly mean; this simple, direct fusion does not use the global consistency of medium- and long-term time-series means to impose a noise-reduction constraint on the fusion, so noise interference is easily introduced and the completion quality is reduced.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a global consistency-based marine chlorophyll concentration image completion method and network. A global consistency extraction network extracts a global consistency vector with the help of the weekly mean; a time-series evolution correlation extraction network then extracts timing-correlation vectors for other moments from historical data; finally, in the image completion network, the method completes the chlorophyll concentration image under the constraint of the global consistency vector and the timing-correlation vectors of the other moments. The invention thus solves the problems that the prior art neither considers the conditions at other moments in the local modeling process nor fully utilizes the constraint of global consistency.
In order to solve the technical problems, the invention adopts the technical scheme that:
Firstly, the invention provides a global consistency-based method for completing marine chlorophyll concentration images, which comprises the following steps, each described in turn.
S1, acquiring image data:
Chlorophyll concentration images SST at different moments, the weekly-mean image, the image to be completed and the cloud mask image of the image to be completed are acquired.
S2, global consistency extraction:
The weekly-mean image, the image to be completed and the cloud mask image of the image to be completed are input, and a global information encoder composed of a neural network outputs a global consistency vector.
S3, extracting time sequence evolution correlation:
Cloud mask images M of the chlorophyll concentration images SST at different moments are acquired. Subtracting the weekly-mean image from the chlorophyll concentration images SST at the different moments yields deviation images for those moments. The deviation images and the corresponding cloud mask images M are then input together into a time-series information encoder composed of a neural network, which outputs characterization vectors for the different moments. Finally, a timing modeling module composed of gated recurrent neural network units extracts the time-series correlation of the characterization vectors at the different moments and outputs the timing-correlation vector.
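As a concrete illustration, the deviation-image preprocessing of step S3 can be sketched as follows. This is a minimal sketch, not the patent's implementation; the function name, array shapes and the convention that mask value 1 means "observed" are assumptions made for illustration:

```python
import numpy as np

def deviation_images(sst_frames, weekly_mean, cloud_masks):
    """Masked deviation images for step S3 (illustrative names).

    sst_frames  -- list of (H, W) chlorophyll fields at times t-3, t-2, t-1
    weekly_mean -- (H, W) weekly-mean field
    cloud_masks -- list of (H, W) masks, 1 = observed, 0 = cloud-occluded
    """
    devs = []
    for sst, mask in zip(sst_frames, cloud_masks):
        # difference against the weekly mean, with occluded pixels zeroed out
        devs.append((sst - weekly_mean) * mask)
    return devs
```

Each deviation image, paired with its cloud mask, would then be fed to the time-series information encoder.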
S4, image completion:
The global consistency vector output in step S2 and the timing-correlation vector output in step S3 are input into a time-series evolution deep decoding module under the global consistency constraint, designed based on the U-net network; constrained by the global consistency vector and the timing-correlation vector, this module completes the image and outputs the completed image.
Finally, the completed image and the real image are sent together to a discriminator to train the whole model.
Furthermore, the global information encoder and the time-series information encoder have the same structure, each comprising a plurality of gated convolutional layers.
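The patent does not define the gated convolution itself. A common formulation (following free-form image inpainting work) computes a feature map and a soft gate from the same input and multiplies them point-wise. A minimal single-channel sketch, with the tanh feature activation and 'valid' padding assumed for illustration:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D cross-correlation of x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def gated_conv(x, k_feat, k_gate):
    """Gated convolution: feature branch modulated by a sigmoid gate branch."""
    feat = np.tanh(conv2d_valid(x, k_feat))                 # feature map
    gate = 1.0 / (1.0 + np.exp(-conv2d_valid(x, k_gate)))   # soft gate in (0, 1)
    return feat * gate
```

In a real encoder both branches would be learned convolutions with many channels; the gate lets the layer suppress responses inside cloud-occluded regions.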
Further, the timing modeling module comprises a plurality of gated recurrent neural network units. The characterization vectors at different moments serve as the input of the timing modeling module and are respectively fed into gated recurrent neural network unit 1, gated recurrent neural network unit 2 and gated recurrent neural network unit 3, abbreviated GRU1, GRU2 and GRU3. The process is as follows:

h1 = GRU1(v_{t-3}, h0), h2 = GRU2(v_{t-2}, h1), z_temp = GRU3(v_{t-1}, h2)

where the hidden state h carries timing information between different moments and the initial hidden state h0 is initialized to a random value. First, the characterization vector v_{t-3} at time t-3 and the random initial hidden state h0 are fed together into GRU1, which outputs hidden state h1; then h1 and the characterization vector v_{t-2} at time t-2 are fed together into GRU2, which analogously outputs hidden state h2; h2 and the characterization vector v_{t-1} at time t-1 are then input together into GRU3, and the hidden state output by GRU3 is the timing-correlation vector z_temp.
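The chained GRU process can be sketched directly. The cell below is a textbook GRU (update and reset gates), not the patent's trained model; the hidden dimension, random weight initialization and seeding are assumptions chosen purely for illustration:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class GRUCell:
    """Minimal textbook GRU cell with randomly initialized weights."""
    def __init__(self, dim, rng):
        s = lambda: rng.standard_normal((dim, dim)) * 0.1
        self.Wz, self.Uz = s(), s()
        self.Wr, self.Ur = s(), s()
        self.Wh, self.Uh = s(), s()

    def __call__(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)            # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)            # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h)) # candidate state
        return (1.0 - z) * h + z * h_cand

def temporal_correlation(v_t3, v_t2, v_t1, dim=8, seed=0):
    """Chain three GRU cells as in the timing modeling module (sketch)."""
    rng = np.random.default_rng(seed)
    gru1, gru2, gru3 = GRUCell(dim, rng), GRUCell(dim, rng), GRUCell(dim, rng)
    h0 = rng.standard_normal(dim)   # random initial hidden state
    h1 = gru1(v_t3, h0)
    h2 = gru2(v_t2, h1)
    return gru3(v_t1, h2)           # timing-correlation vector
```

A usage example: `temporal_correlation(v3, v2, v1)` returns the final hidden state, which plays the role of the timing-correlation vector fed to the decoding module.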
Further, the time-series evolution deep decoding module under the global consistency constraint comprises an upper stream and a lower stream. The upper stream comprises several basic neural network layers, each composed of a deconvolution layer and an original convolution layer; the lower stream comprises several neural network layers, each composed of a deconvolution layer, a skip connection layer and an original convolution layer. The upper stream processes the global consistency vector output in step S2 and the lower stream processes the timing-correlation vector output in step S3; the output of each basic neural network layer in the upper stream undergoes the f operation with the corresponding output in the lower stream, where the f operation is parameterized by hyperparameters α and β and built from point-wise multiplication (⊙), the sigmoid function, the global consistency vector output in step S2 and the timing-correlation vector output in step S3.
Secondly, the invention also provides a global consistency-based marine chlorophyll concentration image completion network, used to realize the above method. The network comprises a global consistency extraction network, a time-series evolution correlation extraction network and an image completion network. The global consistency extraction network comprises a global information encoder for extracting a global consistency vector; the time-series evolution correlation extraction network comprises a time-series information encoder and a timing modeling module for extracting timing-correlation vectors; and the image completion network comprises the time-series evolution deep decoding module under the global consistency constraint and a discriminator, used to complete the original image with the extracted global consistency vector and timing-correlation vector to obtain a completed image, and to distinguish the completed image from the original image.
Compared with the prior art, the invention has the advantages that:
(1) In the time-series evolution correlation extraction network, the deviation between the data at other moments and the weekly mean is calculated first, and the time-series correlation of the other moments is then extracted by the timing modeling module. By fully considering the deviation data of several moments, the latent information and change patterns of the other moments can be further mined. This solves the prior-art problem that the conditions of other moments are not considered in the local modeling process, improves the reliability of the local modeling result, and improves the accuracy of image completion;
(2) In the image completion network, the global consistency vector and the timing-correlation vector are processed respectively by the upper and lower streams of the time-series evolution deep decoding module under the global consistency constraint, and during processing the outputs of each layer of the two streams are deeply embedded with each other (the f operation). The global consistency constraint is thus fully utilized to guide the fusion of the two kinds of information and to suppress, to a greater extent, noise interference in the local modeling result. This solves the prior-art problem that directly superposing the deviation estimated for the missing part onto the weekly mean is too crude and easily introduces noise, and improves the accuracy of image completion.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the description below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of a prior art process;
FIG. 2 is a diagram of a network architecture of the present invention;
FIG. 3 is a block diagram of an encoder of the present invention;
FIG. 4 is a block diagram of a timing modeling module of the present invention;
FIG. 5 is a block diagram of a deep decoding module for time-series evolution under global consistency constraint according to the present invention;
wherein G in FIG. 3 represents a gated convolution layer;
in fig. 5, R represents deconvolution, S represents jump connection, V represents original convolution, and f means f operation.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
Example 1
Referring to fig. 2, the present embodiment provides a global consistency-based method for completing marine chlorophyll concentration images, which comprises the following steps, each described in turn.
S1, acquiring image data:
Chlorophyll concentration images SST at different moments, the weekly-mean image Y_week, the image to be completed Y_miss and its cloud mask image M_miss are obtained.

In this embodiment, three different times t-1, t-2 and t-3 are taken as examples; that is, the chlorophyll concentration images at the different moments are SST_{t-1}, SST_{t-2} and SST_{t-3}.
S2, global consistency extraction:
input week-averaged imageThe image to be compensated is based on>And the full image to be compensated->The cloud mask image->Outputting a global consistency vector by a global information encoder consisting of a neural network。
The global information encoder structure is shown in fig. 3, and includes a plurality of gated convolutional layers, where G in fig. 3 represents a gated convolutional layer.
S3, extracting time sequence evolution correlation:
Cloud mask images M_{t-1}, M_{t-2} and M_{t-3} of the chlorophyll concentration images SST_{t-1}, SST_{t-2} and SST_{t-3} are obtained. Subtracting the weekly-mean image Y_week from the chlorophyll concentration images at the different moments yields the deviation images D_{t-1}, D_{t-2} and D_{t-3} (the minus symbol in fig. 2 represents the difference operation). The deviation images and the corresponding cloud mask images are then input together into a time-series information encoder composed of a neural network, which outputs the characterization vectors v_{t-1}, v_{t-2} and v_{t-3}. Finally, a timing modeling module composed of Gated Recurrent Units (GRU) extracts the time-series correlation of the characterization vectors at the different moments and outputs the timing-correlation vector z_temp.
The time sequence information encoder structure is the same as the global information encoder structure and comprises a plurality of gating convolution layers.
The structure of the timing modeling module is shown in fig. 4. It comprises a plurality of gated recurrent neural network units, 3 in this embodiment: gated recurrent neural network unit 1, gated recurrent neural network unit 2 and gated recurrent neural network unit 3, abbreviated GRU1, GRU2 and GRU3. The characterization vectors at the different moments serve as the inputs of the timing modeling module and are respectively fed into GRU1, GRU2 and GRU3. The process is as follows:

h1 = GRU1(v_{t-3}, h0), h2 = GRU2(v_{t-2}, h1), z_temp = GRU3(v_{t-1}, h2)

where the hidden state h carries timing information between different moments and the initial hidden state h0 is initialized to a random value.

First, the characterization vector v_{t-3} at time t-3 and the random initial hidden state h0 are fed together into GRU1, which outputs hidden state h1; then h1 and the characterization vector v_{t-2} at time t-2 are fed together into GRU2, which analogously outputs hidden state h2; h2 and the characterization vector v_{t-1} at time t-1 are then input together into GRU3, and the hidden state output by GRU3 is the timing-correlation vector z_temp.

In fig. 4, characterization vector 1 is the characterization vector v_{t-3} at time t-3, characterization vector 2 is v_{t-2} at time t-2, and characterization vector 3 is v_{t-1} at time t-1; hidden state 0 represents the initial hidden state h0, hidden state 1 represents h1, and hidden state 2 represents h2.
The partial design solves the problem that the conditions of other moments are not considered in the local modeling process in the prior art.
S4, image completion:
The global consistency vector z_glob output in step S2 and the timing-correlation vector z_temp of the other moments output in step S3 are input into the time-series evolution deep decoding module under the global consistency constraint, designed based on the U-net network; constrained by the global consistency vector z_glob and the timing-correlation vector z_temp, the module completes the image and outputs the completed image.
The time-series evolution deep decoding module under the global consistency constraint is shown in fig. 5, where R represents deconvolution, S represents skip connection and V represents original convolution. Unlike the traditional U-net structure, in order to give full play to the global consistency vector z_glob during fusion, the module comprises an upper stream and a lower stream. The upper stream comprises several basic neural network layers, each composed of a deconvolution layer and an original convolution layer; the lower stream comprises several neural network layers, each composed of a deconvolution layer, a skip connection layer and an original convolution layer. The upper stream processes the global consistency vector z_glob output in step S2, and the lower stream processes the timing-correlation vector z_temp of the other moments output in step S3. The output of each basic neural network layer in the upper stream undergoes the f operation with the corresponding output in the lower stream; the f operation is parameterized by hyperparameters α and β and built from point-wise multiplication (⊙), the sigmoid function, the global consistency vector z_glob output in step S2 and the timing-correlation vector z_temp output in step S3.
This design solves the prior-art problems that, in the local modeling process, the bias estimated for the missing part is directly superposed onto the weekly mean to obtain the completion result, the constraint of global consistency is not fully utilized, and local noise interference is easily introduced.
Finally, the completed image and the real image are sent together to a discriminator to train the whole model. The training of the discriminator and of the whole model can adopt the prior art, namely adversarial training of the generator and the discriminator, and is not repeated here.
In addition, it should be noted that the loss function of the invention is the adversarial loss:

min_G max_D L(θ_G, θ_D) = E[log D(Y)] + E[log(1 − D(G(X)))]

where D represents the discriminator, D(Y) indicates the discriminator's judgment of the real image Y, G represents the network model of the invention, D(G(X)) indicates the discriminator's judgment of the image generated by the model G, θ_D and θ_G are respectively the parameters to be learned of the discriminator D and of the network model G, Y is the true marine chlorophyll concentration image, and X is the input of the network model:

X = (Y_week, Y_miss, M_miss, SST_{t-1}, SST_{t-2}, SST_{t-3}, M_{t-1}, M_{t-2}, M_{t-3})

where Y_week is the weekly-mean image, Y_miss is the image to be completed, M_miss is the cloud mask image of the image to be completed, SST_{t-k} are the chlorophyll concentration images at different moments, and M_{t-k} are the corresponding cloud mask images of the chlorophyll concentration images at different moments.
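Under the standard adversarial-training reading of this loss, the discriminator and generator objectives can be computed as below. This is a hedged sketch: the patent does not specify the implementation, the function name is illustrative, and the non-saturating generator loss is a common substitute for the minimax form:

```python
import numpy as np

def adversarial_losses(d_real, d_fake, eps=1e-8):
    """Standard GAN losses from discriminator outputs (illustrative sketch).

    d_real -- discriminator outputs in (0, 1) on real images, D(Y)
    d_fake -- discriminator outputs on completed images, D(G(X))
    """
    # discriminator maximizes log D(Y) + log(1 - D(G(X))); minimize the negative
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # non-saturating generator loss: maximize log D(G(X))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

In training, the discriminator step would minimize `d_loss` over θ_D and the generator step would minimize `g_loss` over θ_G, alternating between the two.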
Example 2
The embodiment provides a global consistency-based marine chlorophyll concentration image completion network, which comprises a global consistency extraction network, a time sequence evolution correlation extraction network and an image completion network.
And the global consistency extraction network comprises a global information encoder used for extracting the global consistency vector.
The time sequence evolution correlation extraction network comprises a time sequence information encoder and a time sequence modeling module and is used for extracting time sequence correlation vectors.
The image completion network comprises a time sequence evolution deep decoding module and a discriminator under global consistency constraint, and is used for completing an original image by using the extracted global consistency vector and the time sequence correlation vector to obtain a completed image, and distinguishing the completed image from the original image.
The network is used for realizing a global consistency-based marine chlorophyll concentration image completion method, and the method flow, the composition and the functions of each module can be referred to the record of the embodiment 1, and are not described herein again.
In application, the marine chlorophyll concentration image to be completed is input into the global consistency-based marine chlorophyll concentration image completion network, which outputs a completed chlorophyll concentration image that fully considers the influence of the marine chlorophyll concentration fields at other moments and fully utilizes the global consistency constraint.

In summary, the time-series evolution correlation extraction network of the invention solves the prior-art problem that the conditions of other moments are not considered in the local modeling process; the image completion network solves the prior-art problems that the bias estimated for the missing part is directly superposed onto the weekly mean in the local modeling process, the constraint of global consistency is not fully utilized, and local noise interference is easily introduced. Through the global consistency-based marine chlorophyll concentration image completion network and method, accurate completion of the marine chlorophyll concentration image is achieved.
It is understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art should understand that they can make various changes, modifications, additions and substitutions within the spirit and scope of the present invention.
Claims (5)
1. The method for complementing the marine chlorophyll concentration image based on global consistency is characterized by comprising the following steps of:
s1, acquiring image data:
acquiring chlorophyll concentration images SST, weekly average images, images to be compensated and cloud mask images of the images to be compensated at different moments;
s2, global consistency extraction:
inputting a weekly average image, an image to be compensated and a cloud mask image of the image to be compensated, and outputting a global consistency vector through a global information encoder consisting of a neural network;
s3, extracting time sequence evolution correlation:
acquiring cloud mask images M of chlorophyll concentration images SST at different moments, subtracting the chlorophyll concentration images SST at the different moments from the weekly average image to obtain deviation images at the different moments, inputting the deviation images and the corresponding cloud mask images M into a time sequence information encoder composed of a neural network, then outputting characterization vectors at the different moments, and finally extracting time sequence correlation of the characterization vectors at the different moments by using a time sequence modeling module composed of a gated cyclic neural network unit and outputting the time sequence correlation vectors;
s4, image completion:
inputting the global consistency vector output in the step S2 and the time sequence correlation vector output in the step S3 into a time sequence evolution deep layer decoding module under the global consistency constraint based on the U-net network design, completing the image by the time sequence evolution deep layer decoding module under the global consistency constraint based on the constraint of the global consistency vector and the time sequence correlation vector, and outputting a completed image;
and finally, the completed image and the real image are sent to a discriminator together to train the whole model.
2. The marine chlorophyll concentration image completion method based on global consistency of claim 1, wherein the global information encoder and the time-series information encoder are identical in structure and comprise a plurality of gated convolution layers.
3. The global consistency-based marine chlorophyll concentration image completion method according to claim 1, wherein the timing modeling module comprises a plurality of gated recurrent neural network units; the characterization vectors at different moments are used as the input of the timing modeling module and are respectively input into gated recurrent neural network unit 1, gated recurrent neural network unit 2 and gated recurrent neural network unit 3, abbreviated GRU1, GRU2 and GRU3; the process is as follows: h1 = GRU1(v_{t-3}, h0), h2 = GRU2(v_{t-2}, h1), z_temp = GRU3(v_{t-1}, h2), wherein the hidden state h indicates timing information between different moments and the initial hidden state h0 is initialized to a random value; first, the characterization vector v_{t-3} at time t-3 and the random initial hidden state h0 are fed together into GRU1, which outputs hidden state h1; then h1 and the characterization vector v_{t-2} at time t-2 are fed together into GRU2, which analogously outputs hidden state h2; h2 and the characterization vector v_{t-1} at time t-1 are input together into GRU3, and the hidden state output by GRU3 is the timing-correlation vector z_temp.
4. The method for completing marine chlorophyll concentration images according to claim 1, wherein the time-series evolution deep decoding module under the global consistency constraint comprises an upper stream and a lower stream, the upper stream comprises multiple basic neural network layers composed of deconvolution layers and original convolution layers, the lower stream comprises multiple neural network layers composed of deconvolution layers, skip connection layers and original convolution layers, the upper stream processes the global consistency vector output in step S2 and the lower stream processes the timing-correlation vector output in step S3, and the output of the basic neural network layer of each layer in the upper stream undergoes the f operation with the corresponding output in the lower stream.
5. The global consistency-based marine chlorophyll concentration image completion network is characterized by being used for realizing the global consistency-based marine chlorophyll concentration image completion method according to any one of claims 1 to 4, wherein the network comprises a global consistency extraction network, a time-sequence evolution correlation extraction network and an image completion network, and the global consistency extraction network comprises a global information encoder for extracting a global consistency vector;
the time sequence evolution correlation extraction network comprises a time sequence information encoder and a time sequence modeling module and is used for extracting time sequence correlation vectors;
the image completion network comprises a time sequence evolution deep decoding module and a discriminator under global consistency constraint, and is used for completing an original image by using the extracted global consistency vector and the extracted time sequence correlation vector to obtain a completed image, and distinguishing the completed image from the original image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310250397.9A CN115984146B (en) | 2023-03-16 | 2023-03-16 | Method and network for supplementing ocean chlorophyll concentration image based on global consistency |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310250397.9A CN115984146B (en) | 2023-03-16 | 2023-03-16 | Method and network for supplementing ocean chlorophyll concentration image based on global consistency |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115984146A true CN115984146A (en) | 2023-04-18 |
CN115984146B CN115984146B (en) | 2023-07-07 |
Family
ID=85965181
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310250397.9A Active CN115984146B (en) | 2023-03-16 | 2023-03-16 | Method and network for supplementing ocean chlorophyll concentration image based on global consistency |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115984146B (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000015868A (en) * | 1998-07-03 | 2000-01-18 | Canon Inc | Image-recording apparatus and method for recording image |
CN110210514A (en) * | 2019-04-24 | 2019-09-06 | 北京林业大学 | Production fights network training method, image completion method, equipment and storage medium |
CN113821760A (en) * | 2021-11-23 | 2021-12-21 | 湖南工商大学 | Air data completion method, device, equipment and storage medium |
CN114463214A (en) * | 2022-01-28 | 2022-05-10 | 北京邮电大学 | Double-path iris completion method and system guided by regional attention mechanism |
CN114708295A (en) * | 2022-04-02 | 2022-07-05 | 华南理工大学 | Logistics package separation method based on Transformer |
CN114780739A (en) * | 2022-04-14 | 2022-07-22 | 武汉大学 | Time sequence knowledge graph completion method and system based on time graph convolution network |
CN114821285A (en) * | 2022-04-15 | 2022-07-29 | 北京工商大学 | System and method for predicting cyanobacterial bloom based on ACONV-LSTM and New-GANs combination |
CN114897955A (en) * | 2022-04-25 | 2022-08-12 | 电子科技大学 | Depth completion method based on micro-geometric propagation |
CN115374903A (en) * | 2022-06-19 | 2022-11-22 | 山东高速集团有限公司 | Long-term pavement monitoring data enhancement method based on expressway sensor network layout |
CN115470201A (en) * | 2022-08-30 | 2022-12-13 | 同济大学 | Intelligent ocean remote sensing missing data completion method based on graph attention network |
WO2022257408A1 (en) * | 2021-06-10 | 2022-12-15 | 南京邮电大学 | Medical image segmentation method based on u-shaped network |
CN115587646A (en) * | 2022-09-09 | 2023-01-10 | 中国海洋大学 | Method and system for predicting concentration of chlorophyll a in offshore area based on space-time characteristic fusion |
Non-Patent Citations (3)
Title |
---|
XIAOMIN YE等: "Global Ocean Chlorophyll-a Concentrations Derived From COCTS Onboard the HY-1C Satellite and Their Preliminary Evaluation", IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, vol. 59, no. 12, pages 9914, XP011889019, DOI: 10.1109/TGRS.2020.3036963 * |
冀俭俭;杨刚;: "基于生成对抗网络的分级联合图像补全方法", 图学学报, no. 06, pages 29 - 37 * |
郭瑞杰: "基于时序提名的视频动作检测算法研究", 中国优秀硕士学位论文全文数据库 信息科技辑, no. 1, pages 138 - 2160 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117935066A (en) * | 2024-03-25 | 2024-04-26 | 中国海洋大学 | Sea surface temperature complement method and system based on parallel multi-scale constraint |
CN117935066B (en) * | 2024-03-25 | 2024-05-28 | 中国海洋大学 | Sea surface temperature complement method and system based on parallel multi-scale constraint |
CN117994171A (en) * | 2024-04-03 | 2024-05-07 | 中国海洋大学 | Sea surface temperature image complement method based on Fourier transform diffusion model |
CN117994171B (en) * | 2024-04-03 | 2024-05-31 | 中国海洋大学 | Sea surface temperature image complement method based on Fourier transform diffusion model |
Also Published As
Publication number | Publication date |
---|---|
CN115984146B (en) | 2023-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115984146A (en) | Global consistency-based marine chlorophyll concentration image completion method and network | |
Jiang et al. | Edge-enhanced GAN for remote sensing image superresolution | |
Liu et al. | An attention-based approach for single image super resolution | |
CN112598053B (en) | Active significance target detection method based on semi-supervised learning | |
CN115984281B (en) | Multi-task complement method of time sequence sea temperature image based on local specificity deepening | |
CN113450278B (en) | Image rain removing method based on cross-domain collaborative learning | |
CN112541459A (en) | Crowd counting method and system based on multi-scale perception attention network | |
CN112884758B (en) | Defect insulator sample generation method and system based on style migration method | |
CN114692509A (en) | Strong noise single photon three-dimensional reconstruction method based on multi-stage degeneration neural network | |
CN116524062B (en) | Diffusion model-based 2D human body posture estimation method | |
CN116206133A (en) | RGB-D significance target detection method | |
CN113112003A (en) | Data amplification and deep learning channel estimation performance improvement method based on self-encoder | |
CN111222453B (en) | Remote sensing image change detection method based on dense connection and geometric structure constraint | |
Chen et al. | Dehrformer: Real-time transformer for depth estimation and haze removal from varicolored haze scenes | |
CN112766217A (en) | Cross-modal pedestrian re-identification method based on disentanglement and feature level difference learning | |
CN116052254A (en) | Visual continuous emotion recognition method based on extended Kalman filtering neural network | |
Wang et al. | Detect any shadow: Segment anything for video shadow detection | |
Wang et al. | Predicting diverse future frames with local transformation-guided masking | |
CN111968208A (en) | Human body animation synthesis method based on human body soft tissue grid model | |
CN116402874A (en) | Spacecraft depth complementing method based on time sequence optical image and laser radar data | |
Liu et al. | Attention-guided lightweight generative adversarial network for low-light image enhancement in maritime video surveillance | |
CN112991398B (en) | Optical flow filtering method based on motion boundary guidance of cooperative deep neural network | |
Fang et al. | A Novel Method for Precipitation Nowcasting Based on ST-LSTM. | |
Zhong et al. | Spatio-temporal dual-branch network with predictive feature learning for satellite video object segmentation | |
CN113850719A (en) | RGB image guided depth map super-resolution method based on joint implicit image function |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||