CN115984146A - Global consistency-based marine chlorophyll concentration image completion method and network - Google Patents

Publication number
CN115984146A
CN115984146A (application CN202310250397.9A)
Authority
CN
China
Prior art keywords
image
global consistency
time sequence
vector
global
Prior art date
Legal status
Granted
Application number
CN202310250397.9A
Other languages
Chinese (zh)
Other versions
CN115984146B (en)
Inventor
聂婕
左子杰
温琦
刘安安
孙正雅
魏琪晨
刁雅宁
Current Assignee
Ocean University of China
Original Assignee
Ocean University of China
Priority date
Filing date
Publication date
Application filed by Ocean University of China
Priority to CN202310250397.9A
Publication of CN115984146A
Application granted
Publication of CN115984146B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and discloses a method and a network for completing marine chlorophyll concentration images based on global consistency. The invention solves the problems that the prior art neither considers the conditions at other times nor fully exploits the constraint of global consistency.

Description

Global consistency-based marine chlorophyll concentration image completion method and network
Technical Field
The invention belongs to the technical field of image processing, relates to an image completion method, and particularly relates to a method and a network for completing an ocean chlorophyll concentration image based on global consistency.
Background
Completing the sea-surface chlorophyll concentration spatio-temporal data field with deep neural networks is an important recent attempt to apply artificial intelligence to ocean science. Traditional methods are generally based on generative models, such as generative adversarial networks, or on reconstruction models, such as encoder-decoder neural networks, and use historical marine data to directly estimate the completed marine chlorophyll concentration field. These methods share a problem: the completion is unstable and inaccurate. On one hand, they usually complete the missing area directly in a single pass; on the other hand, the numerical range of the chlorophyll concentration field is usually very large, and directly estimating values over such a large range makes the completion process unstable and the value at each point hard to estimate accurately, which limits completion accuracy.
At present, state-of-the-art deep-learning methods for marine chlorophyll concentration completion adopt a coarse-to-fine, mostly two-stage completion scheme, whose overall structure is shown in fig. 1. First, the monthly mean of marine chlorophyll concentration is used to estimate a weekly mean. Then the deviation of the missing part is estimated from the deviation between the non-missing part of the daily chlorophyll concentration data to be completed and the weekly mean, and this deviation is superimposed on the weekly mean to obtain the completed daily chlorophyll concentration field. The completion process thus accounts for both overall consistency and local fine-grained accuracy, alleviating the instability and inaccuracy caused by completing the image directly, as traditional deep-learning-based marine image completion methods do.
However, this method still has the following problems. First, the local modeling process uses only the non-missing data of the day to be completed and ignores the conditions at other times, which reduces the reliability of the result; in marine images, for example, the magnitude of the chlorophyll concentration field at a single time is affected by multiple previous times. Second, the bias estimated for the missing part is directly superimposed on the weekly mean to obtain the completion result; this fusion mechanism does not fully exploit the global consistency constraint and easily introduces local noise. When the missing area is estimated from the deviation pattern between the non-missing data and the weekly mean, the result depends almost entirely on the non-missing data. In the marine environment, the chlorophyll concentration field is often disturbed by uncontrollable factors such as sudden typhoons or ocean eddies, so the non-missing data carry substantial noise that disturbs the local estimate of the missing part. Because the method then simply adds this local estimate to the weekly mean, without using the global consistency of medium- and long-term means to impose a denoising constraint on the fusion, noise is easily introduced and completion quality decreases.
Disclosure of Invention
Aiming at these defects in the prior art, the invention provides a global consistency-based marine chlorophyll concentration image completion method and network. A global consistency extraction network extracts a global consistency vector with the help of the weekly mean; a time sequence evolution correlation extraction network then extracts time sequence correlation vectors for other times from historical data; finally, the image completion network completes the chlorophyll concentration image under the constraint of the global consistency vector and the time sequence correlation vectors. The invention thus solves the prior-art problems that the local modeling process neither considers the conditions at other times nor fully exploits the global consistency constraint.
In order to solve the technical problems, the invention adopts the technical scheme that:
Firstly, the invention provides a method for completing marine chlorophyll concentration images based on global consistency, which comprises the following steps, described separately below.
S1, acquiring image data:
Chlorophyll concentration images SST at different times, the weekly mean image, the image to be completed and the cloud mask image of the image to be completed are acquired.
S2, global consistency extraction:
The weekly mean image, the image to be completed and the cloud mask image of the image to be completed are input, and a global consistency vector is output through a global information encoder composed of a neural network.
S3, extracting time sequence evolution correlation:
Cloud mask images M of the chlorophyll concentration images SST at different times are acquired. Subtracting the weekly mean image from the chlorophyll concentration images SST at the different times yields deviation images for those times. The deviation images and the corresponding cloud mask images M are then input together into a time sequence information encoder composed of a neural network, which outputs characterization vectors for the different times. Finally, a time sequence modeling module composed of gated recurrent neural network units extracts the temporal correlation of these characterization vectors and outputs the time sequence correlation vector.
S4, image completion:
The global consistency vector output in step S2 and the time sequence correlation vector output in step S3 are input into a time sequence evolution deep decoding module under the global consistency constraint, designed based on the U-net network; this module completes the image under the constraints of the global consistency vector and the time sequence correlation vector, and outputs the completed image.
Finally, the completed image and the real image are fed together into a discriminator to train the whole model.
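The data flow of the four steps above can be sketched as a single function. This is a hedged illustration, not the patented implementation: the callables `global_encoder`, `seq_encoder`, `seq_model` and `decoder` are hypothetical stand-ins for the networks described in steps S2 to S4.

```python
import numpy as np

def complete_image(week_mean, target, target_mask, history, history_masks,
                   global_encoder, seq_encoder, seq_model, decoder):
    """Sketch of the S1-S4 pipeline; all model arguments are callables."""
    # S2: global consistency vector from weekly mean + image to be completed + its cloud mask
    v_g = global_encoder(np.stack([week_mean, target, target_mask]))
    # S3: deviation of each historical frame from the weekly mean -> characterization vectors
    zs = [seq_encoder(np.stack([frame - week_mean, mask]))
          for frame, mask in zip(history, history_masks)]
    v_t = seq_model(zs)  # time sequence correlation vector
    # S4: decode the completed image under the global-consistency constraint
    return decoder(v_g, v_t)
```

In the patent the four callables correspond to the gated-convolution encoders, the GRU-based time sequence modeling module, and the U-net-based decoding module described below.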
Furthermore, the global information encoder and the time sequence information encoder have the same structure, each comprising a plurality of gated convolution layers.
Further, the time sequence modeling module comprises a plurality of gated recurrent neural network units; the characterization vectors at different times are used as the inputs of the module and are respectively fed into gated recurrent neural network unit 1, gated recurrent neural network unit 2 and gated recurrent neural network unit 3, abbreviated as $\mathrm{GRU}_1$, $\mathrm{GRU}_2$ and $\mathrm{GRU}_3$. The process is:

$$h_1=\mathrm{GRU}_1(z_{t-3},h_0),\qquad h_2=\mathrm{GRU}_2(z_{t-2},h_1),\qquad v_t=h_3=\mathrm{GRU}_3(z_{t-1},h_2)$$

where the hidden states $h_0,h_1,h_2,h_3$ carry the timing information between different times, and $h_0$ is initialized to a random value. First, the characterization vector $z_{t-3}$ at time t-3 and the random initial hidden state $h_0$ are fed together into $\mathrm{GRU}_1$, which outputs hidden state $h_1$; then $h_1$ and the characterization vector $z_{t-2}$ at time t-2 are fed together into $\mathrm{GRU}_2$, which analogously outputs hidden state $h_2$; $h_2$ and the characterization vector $z_{t-1}$ at time t-1 are input together into $\mathrm{GRU}_3$, and the hidden state $h_3$ finally output by $\mathrm{GRU}_3$ is the time sequence correlation vector $v_t$.
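The three-unit GRU chain above can be written out directly. The cell below is a standard gated recurrent unit in plain NumPy; the parameter layout and the symbol names (z for characterization vectors, h for hidden states) are illustrative choices, not taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(z, h, params):
    # standard GRU: update gate u, reset gate r, candidate state c
    Wu, Uu, Wr, Ur, Wc, Uc = params
    u = sigmoid(Wu @ z + Uu @ h)
    r = sigmoid(Wr @ z + Ur @ h)
    c = np.tanh(Wc @ z + Uc @ (r * h))
    return (1.0 - u) * h + u * c

def time_sequence_correlation(z_t3, z_t2, z_t1, h0, p1, p2, p3):
    # h1 = GRU1(z_{t-3}, h0); h2 = GRU2(z_{t-2}, h1); v_t = h3 = GRU3(z_{t-1}, h2)
    h1 = gru_cell(z_t3, h0, p1)
    h2 = gru_cell(z_t2, h1, p2)
    return gru_cell(z_t1, h2, p3)
```

Each unit has its own parameters (p1, p2, p3), matching the patent's use of three distinct gated recurrent neural network units rather than one shared cell unrolled over time.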
Further, the time sequence evolution deep decoding module under the global consistency constraint comprises an upper stream and a lower stream. The upper stream comprises several basic neural network layers, each composed of a deconvolution layer and an ordinary convolution layer; the lower stream comprises several neural network layers, each composed of a deconvolution layer, a skip-connection layer and an ordinary convolution layer. The upper stream processes the global consistency vector $v_g$ output in step S2 and the lower stream processes the time sequence correlation vector $v_t$ output in step S3. The output of the basic neural network layer of each layer in the upper stream undergoes an f operation with the corresponding output in the lower stream:

[f operation formula: rendered only as an image in the original]

where $\alpha$ and $\beta$ are hyper-parameters, $\odot$ denotes point multiplication, sigmoid is the sigmoid function, $v_g$ is the global consistency vector output in step S2, and $v_t$ is the time sequence correlation vector output in step S3.
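Because the f-operation formula survives only as an image, the sketch below is an assumption: a gated fusion built from exactly the ingredients the text lists (hyper-parameters alpha and beta, point multiplication, a sigmoid of the upper-stream features). The actual patented formula may differ.

```python
import numpy as np

def f_op(upper, lower, alpha, beta):
    # HYPOTHETICAL reconstruction of the f operation: the global-consistency
    # (upper) stream gates the time-sequence (lower) stream via a sigmoid,
    # and the hyper-parameters alpha/beta weight the mixture.
    gate = 1.0 / (1.0 + np.exp(-upper))   # sigmoid of upper-stream features
    return alpha * gate * lower + beta * upper
```

The intent this form captures is the one the text states: the global consistency features constrain (gate) the local time-sequence features instead of being naively added to them.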
Secondly, the invention also provides a global consistency-based marine chlorophyll concentration image completion network for realizing the above method. The network comprises a global consistency extraction network, a time sequence evolution correlation extraction network and an image completion network. The global consistency extraction network comprises a global information encoder for extracting the global consistency vector; the time sequence evolution correlation extraction network comprises a time sequence information encoder and a time sequence modeling module for extracting the time sequence correlation vector; the image completion network comprises the time sequence evolution deep decoding module under the global consistency constraint and a discriminator, and is used to complete the original image with the extracted global consistency vector and time sequence correlation vector, obtain the completed image, and distinguish the completed image from the original image.
Compared with the prior art, the invention has the advantages that:
(1) In the time sequence evolution correlation extraction network, the deviation between the data at other times and the weekly mean is first calculated, and the temporal correlation of the other times is then extracted by the time sequence modeling module. By fully considering deviation data from multiple times, the latent information at other times and its pattern of change can be further mined. This solves the prior-art problem that the conditions at other times are not considered in the local modeling process, improves the reliability of the local modeling result, and improves the accuracy of image completion;
(2) In the image completion network, the upper and lower streams of the time sequence evolution deep decoding module under the global consistency constraint process the global consistency vector and the time sequence correlation vector respectively, and the outputs of corresponding layers in the two streams are deeply embedded (the f operation) during processing. The global consistency constraint thus fully guides the fusion of the two kinds of information and greatly reduces the noise interference in the local modeling result. This solves the prior-art problem that directly superimposing the deviation estimated for the missing part on the weekly mean is too crude and easily introduces noise, and improves the accuracy of image completion.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a prior art process;
FIG. 2 is a diagram of a network architecture of the present invention;
FIG. 3 is a block diagram of an encoder of the present invention;
FIG. 4 is a block diagram of a timing modeling module of the present invention;
FIG. 5 is a block diagram of a deep decoding module for time-series evolution under global consistency constraint according to the present invention;
wherein G in FIG. 3 represents a gated convolution layer;
in fig. 5, R represents deconvolution, S represents jump connection, V represents original convolution, and f means f operation.
Detailed Description
The invention is further described with reference to the following figures and specific examples.
Example 1
Referring to fig. 2, the present embodiment provides a method for completing marine chlorophyll concentration images based on global consistency, comprising the following steps, described separately below.
S1, acquiring image data:
Chlorophyll concentration images SST at different times, the weekly mean image $\bar{X}$, the image to be completed $X_t$ and its cloud mask image $M_t$ are acquired. In this embodiment, three different times t-1, t-2 and t-3 are taken as examples; that is, the chlorophyll concentration images at the different times are $X_{t-1}$, $X_{t-2}$ and $X_{t-3}$.
S2, global consistency extraction:
The weekly mean image $\bar{X}$, the image to be completed $X_t$ and the cloud mask image $M_t$ of the image to be completed are input, and a global consistency vector $v_g$ is output through a global information encoder composed of a neural network. The global information encoder structure is shown in fig. 3; it comprises a plurality of gated convolution layers, where G in fig. 3 represents a gated convolution layer.
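A gated convolution layer (the G blocks of fig. 3) multiplies a feature convolution by a sigmoid gate convolution, which lets the encoder softly suppress cloud-masked pixels. The single-channel NumPy sketch below is minimal; kernel size, activations and padding are assumptions, since the patent does not specify them.

```python
import numpy as np

def conv2d_same(x, w, b):
    # naive stride-1 'same'-padded 2-D convolution, x: (H, W), w: (k, k)
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w) + b
    return out

def gated_conv(x, w_feat, b_feat, w_gate, b_gate):
    # gated convolution: tanh feature branch modulated by a sigmoid gate,
    # so invalid (cloud-covered) regions can be switched off smoothly
    feat = np.tanh(conv2d_same(x, w_feat, b_feat))
    gate = 1.0 / (1.0 + np.exp(-conv2d_same(x, w_gate, b_gate)))
    return feat * gate
```

Stacking several such layers, as fig. 3 does, yields an encoder whose every feature map carries its own learned validity gate, which is why gated convolutions are a common choice for inpainting with irregular masks.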
S3, extracting time sequence evolution correlation:
Cloud mask images $M_{t-1}$, $M_{t-2}$, $M_{t-3}$ of the chlorophyll concentration images $X_{t-1}$, $X_{t-2}$, $X_{t-3}$ at the different times are acquired. Subtracting the weekly mean image $\bar{X}$ from the chlorophyll concentration images at the different times yields the deviation images $D_{t-1}$, $D_{t-2}$, $D_{t-3}$ (in fig. 2, the circled minus sign represents the difference operation). The deviation images and the corresponding cloud mask images are then input together into a time sequence information encoder composed of a neural network, which outputs the characterization vectors $z_{t-1}$, $z_{t-2}$, $z_{t-3}$ at the different times. Finally, a time sequence modeling module composed of gated recurrent units (GRU) extracts the temporal correlation of the characterization vectors at the different times and outputs the time sequence correlation vector $v_t$.
The time sequence information encoder has the same structure as the global information encoder and comprises a plurality of gated convolution layers.
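The deviation-image construction in step S3 is simple enough to state exactly; only the mask convention (1 = observed, 0 = cloud) is an assumption, since the patent does not spell it out.

```python
import numpy as np

def deviation_images(frames, week_mean, masks):
    # D_k = X_k - weekly mean, with cloud-masked pixels zeroed so the
    # time sequence encoder sees only valid deviations
    return [(frame - week_mean) * mask for frame, mask in zip(frames, masks)]
```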
The structure of the time sequence modeling module is shown in fig. 4. It comprises a plurality of gated recurrent neural network units, 3 in this embodiment: gated recurrent neural network unit 1, gated recurrent neural network unit 2 and gated recurrent neural network unit 3, abbreviated as $\mathrm{GRU}_1$, $\mathrm{GRU}_2$ and $\mathrm{GRU}_3$ respectively. The characterization vectors $z_{t-3}$, $z_{t-2}$, $z_{t-1}$ at the different times are the inputs of the module and are fed into $\mathrm{GRU}_1$, $\mathrm{GRU}_2$ and $\mathrm{GRU}_3$ respectively. The process is:

$$h_1=\mathrm{GRU}_1(z_{t-3},h_0),\qquad h_2=\mathrm{GRU}_2(z_{t-2},h_1),\qquad v_t=h_3=\mathrm{GRU}_3(z_{t-1},h_2)$$

where the hidden states $h_0,h_1,h_2,h_3$ carry the timing information between the different times, and $h_0$ is initialized to a random value. First, the characterization vector $z_{t-3}$ at time t-3 and the random initial hidden state $h_0$ are fed together into $\mathrm{GRU}_1$, which outputs hidden state $h_1$; then $h_1$ and the characterization vector $z_{t-2}$ at time t-2 are fed together into $\mathrm{GRU}_2$; analogously, $\mathrm{GRU}_2$ outputs hidden state $h_2$, which is input together with the characterization vector $z_{t-1}$ at time t-1 into $\mathrm{GRU}_3$; and the hidden state $h_3$ finally output by $\mathrm{GRU}_3$ is the time sequence correlation vector $v_t$.
In fig. 4, characterization vector 1 is $z_{t-3}$, characterization vector 2 is $z_{t-2}$ and characterization vector 3 is $z_{t-1}$; hidden state 0 represents the initial hidden state $h_0$, hidden state 1 represents $h_1$ and hidden state 2 represents $h_2$.
This part of the design solves the prior-art problem that the conditions at other times are not considered in the local modeling process.
S4, image completion:
The global consistency vector $v_g$ output in step S2 and the time sequence correlation vector $v_t$ output in step S3 are input into the time sequence evolution deep decoding module under the global consistency constraint, designed based on the U-net network. This module completes the image under the constraints of $v_g$ and $v_t$ and outputs the completed image $\hat{X}_t$.
The time sequence evolution deep decoding module under the global consistency constraint is shown in fig. 5, where R represents deconvolution, S represents skip connection and V represents ordinary convolution. Unlike the traditional U-net structure, in order to give full play to the constraint of the global consistency vector $v_g$ during fusion, the module comprises an upper stream and a lower stream. The upper stream comprises several basic neural network layers, each composed of a deconvolution layer and an ordinary convolution layer; the lower stream comprises several neural network layers, each composed of a deconvolution layer, a skip-connection layer and an ordinary convolution layer. The upper stream processes the global consistency vector $v_g$ output in step S2 and the lower stream processes the time sequence correlation vector $v_t$ output in step S3. The output of the basic neural network layer of each layer in the upper stream undergoes an f operation with the corresponding output in the lower stream:

[f operation formula: rendered only as an image in the original]

where $\alpha$ and $\beta$ are hyper-parameters, $\odot$ denotes point multiplication, sigmoid is the sigmoid function, f is the f operation, $v_g$ is the global consistency vector output in step S2, and $v_t$ is the time sequence correlation vector output in step S3.
This design solves the prior-art problem that, in the local modeling process, the deviation estimated for the missing part is directly superimposed on the weekly mean to obtain the completion result, which does not fully exploit the global consistency constraint and easily introduces local noise.
Finally, the completed image $\hat{X}_t$ and the real image $Y$ are fed together into the discriminator to train the whole model. The training of the discriminator and of the whole model, i.e. the adversarial training of generator and discriminator, can adopt the prior art and is not repeated here.
In addition, it should be noted that the loss function of the invention is as follows:

$$\min_{\theta_G}\max_{\theta_D}\ \mathbb{E}\big[\log D(Y)\big]+\mathbb{E}\big[\log\big(1-D(G(X))\big)\big]$$

where D represents the discriminator, $D(Y)$ denotes the discriminator's judgment of the real image $Y$, G represents the network model of the invention, $D(G(X))$ denotes the discriminator's judgment of the image generated by the model G, $\theta_D$ and $\theta_G$ are the parameters to be learned of the discriminator D and of the network model G respectively, $Y$ is a real marine chlorophyll concentration image, and X is the input of the network model:

$$X=\{\bar{X},\ X_t,\ M_t,\ X_{t-1},X_{t-2},X_{t-3},\ M_{t-1},M_{t-2},M_{t-3}\}$$

where $\bar{X}$ is the weekly mean image, $X_t$ is the image to be completed, $M_t$ is the cloud mask image of the image to be completed, $X_{t-1},X_{t-2},X_{t-3}$ are the chlorophyll concentration images at the different times, and $M_{t-1},M_{t-2},M_{t-3}$ are the corresponding cloud mask images of the chlorophyll concentration images at the different times.
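In practice the adversarial objective above is minimized in alternation as two losses. A NumPy sketch follows; the non-saturating generator form is a common practical substitution, not something the patent states.

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # D maximizes log D(Y) + log(1 - D(G(X))): minimize the negative mean
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake, eps=1e-8):
    # non-saturating generator objective: maximize log D(G(X))
    return -np.mean(np.log(d_fake + eps))
```

Here `d_real` and `d_fake` are the discriminator's scalar outputs (in (0, 1)) for the real image $Y$ and the completed image $\hat{X}_t$ respectively; `eps` guards the logarithm.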
Example 2
The embodiment provides a global consistency-based marine chlorophyll concentration image completion network, which comprises a global consistency extraction network, a time sequence evolution correlation extraction network and an image completion network.
And the global consistency extraction network comprises a global information encoder used for extracting the global consistency vector.
The time sequence evolution correlation extraction network comprises a time sequence information encoder and a time sequence modeling module and is used for extracting time sequence correlation vectors.
The image completion network comprises a time sequence evolution deep decoding module and a discriminator under global consistency constraint, and is used for completing an original image by using the extracted global consistency vector and the time sequence correlation vector to obtain a completed image, and distinguishing the completed image from the original image.
The network is used to realize the global consistency-based marine chlorophyll concentration image completion method; for the method flow and the composition and functions of each module, reference may be made to the description of embodiment 1, which is not repeated here.
In application, the marine chlorophyll concentration image to be completed is input into the global consistency-based marine chlorophyll concentration image completion network, which outputs a completed chlorophyll concentration image that fully considers the influence of the marine chlorophyll concentration fields at other times and fully exploits the global consistency constraint.
In summary, the time sequence evolution correlation extraction network of the invention solves the prior-art problem that the conditions at other times are not considered in the local modeling process, and the image completion network solves the prior-art problem that directly superimposing the deviation estimated for the missing part on the weekly mean does not fully exploit the global consistency constraint and easily introduces local noise. With the global consistency-based marine chlorophyll concentration image completion network and method, accurate completion of marine chlorophyll concentration images is achieved.
It is understood that the above description is not intended to limit the present invention, and the present invention is not limited to the above examples, and those skilled in the art should understand that they can make various changes, modifications, additions and substitutions within the spirit and scope of the present invention.

Claims (5)

1. A global consistency-based marine chlorophyll concentration image completion method, characterized by comprising the following steps:
S1, acquiring image data:
acquiring chlorophyll concentration images SST at different times, a weekly mean image, an image to be completed and a cloud mask image of the image to be completed;
S2, global consistency extraction:
inputting the weekly mean image, the image to be completed and the cloud mask image of the image to be completed, and outputting a global consistency vector through a global information encoder composed of a neural network;
S3, extracting time sequence evolution correlation:
acquiring cloud mask images M of the chlorophyll concentration images SST at the different times; subtracting the weekly mean image from the chlorophyll concentration images SST at the different times to obtain deviation images at the different times; inputting the deviation images and the corresponding cloud mask images M into a time sequence information encoder composed of a neural network, and outputting characterization vectors at the different times; and finally extracting the temporal correlation of the characterization vectors at the different times with a time sequence modeling module composed of gated recurrent neural network units, and outputting the time sequence correlation vector;
S4, image completion:
inputting the global consistency vector output in step S2 and the time sequence correlation vector output in step S3 into a time sequence evolution deep decoding module under the global consistency constraint designed based on the U-net network, completing the image by this module under the constraints of the global consistency vector and the time sequence correlation vector, and outputting the completed image;
and finally, the completed image and the real image are sent to a discriminator together to train the whole model.
2. The marine chlorophyll concentration image completion method based on global consistency of claim 1, wherein the global information encoder and the time-series information encoder are identical in structure and comprise a plurality of gated convolution layers.
3. The global consistency-based marine chlorophyll concentration image completion method according to claim 1, wherein the time-series modeling module comprises a plurality of gated recurrent neural network units; the characterization vectors at different times serve as inputs to the time-series modeling module and are fed respectively into gated recurrent neural network units 1, 2 and 3, abbreviated GRU_1, GRU_2 and GRU_3. The process is as follows:

h_{t-2} = GRU_1(v_{t-3}, h_{t-3}), h_{t-1} = GRU_2(v_{t-2}, h_{t-2}), h_t = GRU_3(v_{t-1}, h_{t-1}),

wherein the hidden state h is used to indicate the timing information between different moments, and h_{t-3} is initialized to a random value. First, the characterization vector v_{t-3} at time t-3 and the random initial hidden state h_{t-3} are fed together into GRU_1, which outputs hidden state h_{t-2}; then h_{t-2} and the characterization vector v_{t-2} at time t-2 are fed together into GRU_2, which analogously outputs hidden state h_{t-1}; h_{t-1} and the characterization vector v_{t-1} at time t-1 are input together into GRU_3; finally, the hidden state h_t output by GRU_3 is the time-series correlation vector.
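The chain of three gated recurrent units described in claim 3 can be sketched as follows. The claim does not give the internal cell equations, so the standard GRU gate layout (update gate, reset gate, candidate state) is assumed here; the weight names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One gated-recurrent-unit step: new hidden state from input x and state h
    (standard GRU equations, assumed; the patent does not specify them)."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)               # update gate
    r = sigmoid(Wr @ x + Ur @ h)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_tilde

def temporal_chain(vectors, h0, cells):
    """Feed characterization vectors v_{t-3}, v_{t-2}, v_{t-1} through
    GRU_1, GRU_2, GRU_3 in turn, starting from a random hidden state h0;
    the final hidden state is the time-series correlation vector."""
    h = h0
    for v, params in zip(vectors, cells):
        h = gru_cell(v, h, params)
    return h
```

Each unit has its own parameters, matching the claim's three distinct units rather than a single shared recurrent cell.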
4. The global consistency-based marine chlorophyll concentration image completion method according to claim 1, wherein the time-series evolution deep decoding module under the global consistency constraint comprises an upper stream and a lower stream; the upper stream comprises multiple basic neural network layers each composed of a deconvolution layer and an original convolution layer, and the lower stream comprises multiple neural network layers each composed of a deconvolution layer, a skip-connection layer and an original convolution layer; the upper stream processes the global consistency vector output in step S2 and the lower stream processes the time-series correlation vector output in step S3; the output of each basic neural network layer in the upper stream is combined by an f operation with the corresponding output in the lower stream, wherein, in the f operation, α and β are hyperparameters, ⊙ represents point multiplication, sigmoid is the sigmoid function, G is the global consistency vector output in step S2, and T is the time-series correlation vector output in step S3.
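The exact formula of the f operation does not survive extraction; only its ingredients are recoverable (hyperparameters α and β, a sigmoid, and point multiplication of the global consistency vector G with the time-series correlation vector T). One plausible reading, sketched below purely for illustration, gates T by sigmoid(G) and adds a β-weighted copy of G; the actual combination in the patent may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(G, T, alpha=0.5, beta=0.5):
    """Hypothetical f operation: the global consistency vector G gates the
    time-series correlation vector T via sigmoid and point multiplication,
    weighted by hyperparameters alpha and beta. The exact formula is not
    recoverable from the source; this layout is an assumption."""
    return alpha * (sigmoid(G) * T) + beta * G
```

With alpha = 1 and beta = 0 the operation reduces to pure sigmoid gating of the temporal stream by the global stream.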
5. A global consistency-based marine chlorophyll concentration image completion network, characterized by being used to realize the global consistency-based marine chlorophyll concentration image completion method according to any one of claims 1 to 4, wherein the network comprises a global consistency extraction network, a time-series evolution correlation extraction network and an image completion network; the global consistency extraction network comprises a global information encoder for extracting the global consistency vector;
the time-series evolution correlation extraction network comprises a time-series information encoder and a time-series modeling module, and is used for extracting the time-series correlation vector;
the image completion network comprises the time-series evolution deep decoding module under the global consistency constraint and a discriminator, and is used for completing the original image with the extracted global consistency vector and time-series correlation vector to obtain a completed image, and for distinguishing the completed image from the real image.
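The wiring of the three sub-networks in claim 5 can be sketched abstractly as below. The callables stand in for the trained modules (encoders, temporal model, decoder), whose internals the claim leaves to the earlier claims; which frame feeds the global encoder is an assumption here.

```python
def complete_image(image_sequence, encoder_g, encoder_t, temporal_model, decoder):
    """Pipeline sketch of claim 5: global branch + temporal branch feed a
    decoder that completes the masked frame. All callables are placeholders.
    image_sequence: earlier frames [v_{t-3}, v_{t-2}, v_{t-1}] followed by
    the masked frame to complete (assumed ordering)."""
    *history, masked = image_sequence
    G = encoder_g(masked)                          # global consistency vector
    codes = [encoder_t(frame) for frame in history]
    T = temporal_model(codes)                      # time-series correlation vector
    return decoder(G, T)                           # completed image
```

In training, the decoder output would additionally be passed, together with the real image, to the discriminator described in claim 1.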
CN202310250397.9A 2023-03-16 2023-03-16 Method and network for supplementing ocean chlorophyll concentration image based on global consistency Active CN115984146B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310250397.9A CN115984146B (en) 2023-03-16 2023-03-16 Method and network for supplementing ocean chlorophyll concentration image based on global consistency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310250397.9A CN115984146B (en) 2023-03-16 2023-03-16 Method and network for supplementing ocean chlorophyll concentration image based on global consistency

Publications (2)

Publication Number Publication Date
CN115984146A true CN115984146A (en) 2023-04-18
CN115984146B CN115984146B (en) 2023-07-07

Family

ID=85965181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310250397.9A Active CN115984146B (en) 2023-03-16 2023-03-16 Method and network for supplementing ocean chlorophyll concentration image based on global consistency

Country Status (1)

Country Link
CN (1) CN115984146B (en)


Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000015868A (en) * 1998-07-03 2000-01-18 Canon Inc Image-recording apparatus and method for recording image
CN110210514A (en) * 2019-04-24 2019-09-06 北京林业大学 Production fights network training method, image completion method, equipment and storage medium
CN113821760A (en) * 2021-11-23 2021-12-21 湖南工商大学 Air data completion method, device, equipment and storage medium
CN114463214A (en) * 2022-01-28 2022-05-10 北京邮电大学 Double-path iris completion method and system guided by regional attention mechanism
CN114708295A (en) * 2022-04-02 2022-07-05 华南理工大学 Logistics package separation method based on Transformer
CN114780739A (en) * 2022-04-14 2022-07-22 武汉大学 Time sequence knowledge graph completion method and system based on time graph convolution network
CN114821285A (en) * 2022-04-15 2022-07-29 北京工商大学 System and method for predicting cyanobacterial bloom based on ACONV-LSTM and New-GANs combination
CN114897955A (en) * 2022-04-25 2022-08-12 电子科技大学 Depth completion method based on micro-geometric propagation
CN115374903A (en) * 2022-06-19 2022-11-22 山东高速集团有限公司 Long-term pavement monitoring data enhancement method based on expressway sensor network layout
CN115470201A (en) * 2022-08-30 2022-12-13 同济大学 Intelligent ocean remote sensing missing data completion method based on graph attention network
WO2022257408A1 (en) * 2021-06-10 2022-12-15 南京邮电大学 Medical image segmentation method based on u-shaped network
CN115587646A (en) * 2022-09-09 2023-01-10 中国海洋大学 Method and system for predicting concentration of chlorophyll a in offshore area based on space-time characteristic fusion


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIAOMIN YE et al.: "Global Ocean Chlorophyll-a Concentrations Derived From COCTS Onboard the HY-1C Satellite and Their Preliminary Evaluation", IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 12, pages 9914, XP011889019, DOI: 10.1109/TGRS.2020.3036963 *
JI Jianjian; YANG Gang: "Hierarchical joint image completion method based on generative adversarial networks", Journal of Graphics, no. 06, pages 29 - 37 *
GUO Ruijie: "Research on video action detection algorithms based on temporal proposals", China Master's Theses Full-text Database, Information Science and Technology, no. 1, pages 138 - 2160 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117935066B (en) * 2024-03-25 2024-05-28 中国海洋大学 Sea surface temperature complement method and system based on parallel multi-scale constraint
CN117994171B (en) * 2024-04-03 2024-05-31 中国海洋大学 Sea surface temperature image complement method based on Fourier transform diffusion model

Also Published As

Publication number Publication date
CN115984146B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
Jiang et al. Edge-enhanced GAN for remote sensing image superresolution
Deng et al. R3net: Recurrent residual refinement network for saliency detection
Fjortoft et al. Unsupervised classification of radar images using hidden Markov chains and hidden Markov random fields
Liu et al. An attention-based approach for single image super resolution
CN112541459A (en) Crowd counting method and system based on multi-scale perception attention network
Yuan et al. Neighborloss: a loss function considering spatial correlation for semantic segmentation of remote sensing image
CN114692509A (en) Strong noise single photon three-dimensional reconstruction method based on multi-stage degeneration neural network
CN116206133A (en) RGB-D significance target detection method
CN111222453B (en) Remote sensing image change detection method based on dense connection and geometric structure constraint
CN112766217A (en) Cross-modal pedestrian re-identification method based on disentanglement and feature level difference learning
CN112884758A (en) Defective insulator sample generation method and system based on style migration method
Chen et al. Dehrformer: Real-time transformer for depth estimation and haze removal from varicolored haze scenes
CN116205962A (en) Monocular depth estimation method and system based on complete context information
CN116052254A (en) Visual continuous emotion recognition method based on extended Kalman filtering neural network
Wang et al. Detect any shadow: Segment anything for video shadow detection
CN116524062B (en) Diffusion model-based 2D human body posture estimation method
CN115984146A (en) Global consistency-based marine chlorophyll concentration image completion method and network
Wang et al. Predicting diverse future frames with local transformation-guided masking
CN111968208A (en) Human body animation synthesis method based on human body soft tissue grid model
CN116402874A (en) Spacecraft depth complementing method based on time sequence optical image and laser radar data
CN115223033A (en) Synthetic aperture sonar image target classification method and system
Liu et al. Attention-guided lightweight generative adversarial network for low-light image enhancement in maritime video surveillance
Fang et al. A Novel Method for Precipitation Nowcasting Based on ST-LSTM.
Zhong et al. Spatio-temporal dual-branch network with predictive feature learning for satellite video object segmentation
CN113850719A (en) RGB image guided depth map super-resolution method based on joint implicit image function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant