CN115661612A - General climate data downscaling method based on meta-transfer learning - Google Patents

General climate data downscaling method based on meta-transfer learning

Info

Publication number: CN115661612A
Authority: CN (China)
Prior art keywords: downscaling, training, model, data, climate
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202211498102.1A
Other languages: Chinese (zh)
Inventors: 胡靖, 田川, 母嘉陵, 郑鹏, 吴锡
Current Assignee: Chengdu University of Information Technology (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Chengdu University of Information Technology
Application filed by Chengdu University of Information Technology
Priority to CN202211498102.1A
Publication of CN115661612A
Legal status: Pending

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02A: Technologies for adaptation to climate change
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention relates to a general climate data downscaling method based on meta-transfer learning, and provides a general climate downscaling framework, MTL-Framework. A constructed downscaling model is trained and optimized within this framework and, through meta-transfer learning, implicitly learns the relevance among different climate variables. Experimental results show that the proposed climate downscaling method outperforms the prior art and achieves the best overall performance across multiple tasks.

Description

General climate data downscaling method based on meta-transfer learning
Technical Field
The invention relates to the field of climate data processing, and in particular to a general climate data downscaling method based on meta-transfer learning.
Background
For climate simulation and forecasting, scientists have developed various Numerical Weather Prediction (NWP) models, such as the Weather Research and Forecasting (WRF) model, which approximate the complex atmospheric system with partial differential equations. However, at smaller scales, such as regional or local scales, the computational cost of an NWP model such as WRF is prohibitive, because the number of elements in the three-dimensional mesh grows cubically as the resolution increases. There is therefore a pressing need for efficient, physically accurate methods that improve resolution so as to predict regional or local weather conditions. Downscaling is a common technique for converting large-scale weather simulation data to a smaller spatial scale.
In general, there are two types of downscaling methods: dynamical downscaling and statistical downscaling. In dynamical downscaling, the output of an NWP model is used as the boundary/initial condition for solving a refined model on a sub-region of the original domain. However, the resulting regional numerical model is still complicated and computationally expensive; worse, dynamical downscaling tends to smooth out critical small-scale features. Statistical downscaling (SD) methods instead establish empirical relationships between weather and climate features at different scales. SD is widely favored for its low computational cost and simplicity compared with its dynamical counterpart. One of the most important ideas in SD is to model the mapping with regression, using linear or nonlinear approaches. For example, classical SD techniques are fundamentally based on interpolation, such as bilinear or kriging interpolation. However, these interpolation methods tend to smooth out sharp details, resulting in blurred images.
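As a concrete illustration of such an interpolation baseline (a minimal stand-alone sketch, not code from the patent), the following upsamples a coarse 2-D field by bilinear interpolation; every output value is a weighted average of its four coarse neighbors, which is exactly why sharp details are smoothed away:

```python
def bilinear_upscale(grid, factor):
    """Upsample a 2-D field (list of lists) by an integer factor
    using bilinear interpolation between the coarse grid nodes."""
    h, w = len(grid), len(grid[0])
    H, W = (h - 1) * factor + 1, (w - 1) * factor + 1
    out = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            y, x = i / factor, j / factor            # position in coarse coordinates
            y0, x0 = min(int(y), h - 2), min(int(x), w - 2)
            dy, dx = y - y0, x - x0
            out[i][j] = (grid[y0][x0]         * (1 - dy) * (1 - dx)
                       + grid[y0 + 1][x0]     * dy       * (1 - dx)
                       + grid[y0][x0 + 1]     * (1 - dy) * dx
                       + grid[y0 + 1][x0 + 1] * dy       * dx)
    return out

# A sharp 2x2 "front" is blurred into a smooth gradient when upscaled.
coarse = [[0.0, 10.0],
          [0.0, 10.0]]
fine = bilinear_upscale(coarse, 2)   # 3x3 output, e.g. first row [0.0, 5.0, 10.0]
```

The intermediate value 5.0 that appears between the 0 and 10 columns is precisely the smoothing that blurs sharp climate fronts.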
With the rapid development of machine learning (ML), algorithms such as random forests and support vector machines (SVMs) have shown better performance in capturing the nonlinear relationships between different scales, with more reliable results. For example, Duhan and Pandey downscaled monthly temperatures over India using a least-squares SVM model, which they found to outperform a traditional multiple linear regression model. Although these ML-based methods have improved significantly in recent years, their performance still falls short of demand because of the randomness and nonlinearity of climate features.
In recent years, with the growth of computing power, there has been a trend of introducing deep learning (DL) into the downscaling problem; DL can automatically extract spatial (or spatio-temporal) features and thereby deepen process understanding of downscaling. These DL-based downscaling methods attempt to improve the resolution of climate images, typically using a feed-forward neural network that links the input pattern to a high-resolution output. They rely on the fact that much of the information is redundant, so a high-resolution image can be recovered from a low-resolution input. To some extent, these methods are closely related to super-resolution, a classical research area in image processing. Consequently, most DL-based downscaling methods simply adopt state-of-the-art super-resolution methods to super-resolve weather or climate data.
Although these DL-based downscaling methods are better at capturing the redundant information and patterns in the low-resolution input, which helps downscaling performance, their drawbacks are also apparent. First, since these DL models are usually supervised, performance is often poor when the training and testing data sets differ greatly, so generality is hard to achieve. Second, these models are usually motivated by image super-resolution and do not take the characteristics of climate data into account; terrain information, for instance, helps climate downscaling. Third, most super-resolution methods assume a known downsampling function, usually bicubic. Weather downscaling is not like this: the inverse mapping from the high-resolution to the low-resolution field is completely unknown. More specifically, in super-resolution the low-resolution input and the high-resolution target come from the same source (the high-resolution target is usually downsampled to produce the low-resolution input), whereas in meteorological downscaling the input and output come from different sources. In statistical downscaling, for example, the input is simulated data from WRF while the target output is historical observation data. This makes downscaling harder, because in addition to the conventional resolution increase, an additional "regression" from the input variables to the output variables is required.
Existing downscaling models are each trained for one specific meteorological variable, making it difficult to learn the relevance among different variables; when faced with multiple meteorological variable downscaling tasks, they cannot strike a good balance between effect and efficiency. To solve these problems, the invention uses meta-transfer learning to construct a general climate downscaling framework for multiple meteorological variable downscaling tasks.
Disclosure of Invention
Aiming at the defects of the prior art, a general climate data downscaling method based on meta-transfer learning is provided. The downscaling method is based on a climate downscaling framework, MTL-Framework; after the constructed climate downscaling model is trained within MTL-Framework, it can implicitly learn the relevance among different climate variables and find, in parameter space, a transferable initialization parameter sensitive to multiple climate variable downscaling tasks. The specific method comprises the following steps:
Step 1: combine the 8 meteorological variable data sets into a data set $\mathcal{D}$ suitable for meta-transfer learning. The data set $\mathcal{D}$ comprises 8 task subsets; each task subset contains 4000 pairs of meteorological variable pictures, each pair consisting of a large-scale low-resolution image and a small-scale high-resolution image.
Step 2: a pre-training phase. Pre-train the climate downscaling model on the public DIV2K data set by minimizing an L1 loss, obtaining model parameters $\theta_0$.
Step 3: enter the meta-transfer learning phase, taking the model parameters $\theta_0$ as the model initialization parameter $\theta$ of this phase.
Step 4: from each of the 8 task subsets of the data set $\mathcal{D}$, extract 10 pairs of pictures in turn as the training data $\mathcal{T}$ of one task. Of these, 5 pairs are used for the learning of step 5 and are recorded as the training set $\mathcal{T}^{tr}$; the remaining 5 pairs are used for the test of step 6 and are recorded as the test set $\mathcal{T}^{te}$. After extraction from all 8 task subsets is complete, the extracted training data together form a meta-batch $\mathcal{B}$.
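The episodic data organization of steps 1 and 4 can be sketched as follows (a hypothetical illustration: the variable names, the stand-in string data, and the use of random sampling are assumptions, not details taken from the patent):

```python
import random

N_TASKS, PAIRS_PER_TASK, SHOT = 8, 4000, 5   # 5 train + 5 test pairs per task

# Stand-in data set D: task id -> list of (low_res, high_res) pairs.
# Real data would hold image arrays; strings keep the sketch runnable.
D = {t: [(f"LR_{t}_{k}", f"HR_{t}_{k}") for k in range(PAIRS_PER_TASK)]
     for t in range(N_TASKS)}

def sample_meta_batch(dataset, rng):
    """Draw 10 pairs from every task subset and split them 5/5 into
    a task-level training set and test set (one episode per task)."""
    meta_batch = []
    for task, pairs in dataset.items():
        drawn = rng.sample(pairs, 2 * SHOT)      # 10 distinct pairs
        meta_batch.append({"task": task,
                           "train": drawn[:SHOT],
                           "test": drawn[SHOT:]})
    return meta_batch

B = sample_meta_batch(D, random.Random(0))       # one meta-batch of 8 episodes
```

Each element of `B` corresponds to one task's training data with its disjoint 5-pair train and test splits, as used by steps 5 and 6.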
Step 5: select the training data $\mathcal{T}$ of one task from the meta-batch $\mathcal{B}$ and train on the data of its training set $\mathcal{T}^{tr}$ to obtain the training loss, whose mathematical expression is:

$\mathcal{L}_{\mathcal{T}^{tr}}(\theta) = \frac{1}{n}\sum_{k=1}^{n}\left\| F_{\theta}(x_k) - y_k \right\|_1$   (1)

where $x_k$ is a large-scale low-resolution image input to the climate downscaling model, $y_k$ is the corresponding high-resolution image, $F_{\theta}$ denotes the climate downscaling model initialized with the parameters $\theta$, and $n$ is the number of image pairs in the training set $\mathcal{T}^{tr}$.
Then compute the updated parameters by stochastic gradient descent, $\theta' = \theta - \alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}^{tr}}(\theta)$, where $\alpha$ represents the learning rate.
Step 6: continue on the current task with the data of its test set $\mathcal{T}^{te}$, using the parameters $\theta'$ updated in step 5 to compute the test error:

$\mathcal{L}_{\mathcal{T}^{te}}(\theta') = \frac{1}{n}\sum_{k=1}^{n}\left\| F_{\theta'}(x_k) - y_k \right\|_1$   (2)

where $F_{\theta'}$ denotes the climate downscaling model initialized with the parameters $\theta'$ and $n$ is the number of image pairs in the test set $\mathcal{T}^{te}$.
Step 7: cycle through steps 5 and 6 in turn, selecting in step 5 a task of the meta-batch $\mathcal{B}$ that has not yet been trained, until every task has participated in training and produced its test error; training is then finished and step 7 ends.
Step 8: add up the test errors of all tasks to obtain the cumulative test error $\sum_{j=1}^{8}\mathcal{L}_{\mathcal{T}^{te}_j}(\theta'_j)$, and update the parameters $\theta$ of the climate downscaling model as

$\theta \leftarrow \theta - \beta\,\nabla_{\theta}\sum_{j=1}^{8}\mathcal{L}_{\mathcal{T}^{te}_j}(\theta'_j)$

where $\beta$ is the meta learning rate.
Step 9: repeat steps 4 to 8 until the cumulative test error converges; meta-transfer learning is then finished. At this point the downscaling model has found, in parameter space, a transferable initialization parameter $\theta$ sensitive to multiple meteorological variable downscaling tasks, recorded as $\theta_M$.
Step 10: a fine-tuning phase. Use $\theta_M$ as the initialization parameter of the fine-tuning phase, and obtain the model parameters $\theta_{new}$ by simple fine-tuning on the test task.
Step 11: a testing phase. Initialize the downscaling model with $\theta_{new}$ and downscale the test task.
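The inner/outer structure of steps 4 to 9 can be sketched end to end on a toy problem. The sketch below uses a one-parameter model $F_\theta(x)=\theta x$ with an L1 loss, and the first-order approximation in which the meta-gradient is evaluated at the inner-updated parameters; the model, data, learning rates, and that simplification are all illustrative assumptions, not the patent's actual network or settings:

```python
def l1_loss_and_grad(theta, batch):
    """L1 loss and its (sub)gradient for the toy model F(x) = theta * x."""
    n = len(batch)
    loss = sum(abs(theta * x - y) for x, y in batch) / n
    grad = sum((1 if theta * x - y > 0 else -1) * x for x, y in batch) / n
    return loss, grad

def meta_transfer_step(theta, meta_batch, alpha=0.01, beta=0.0001):
    """One pass of steps 5-8: per-task inner SGD update, then a single
    outer update driven by the summed test errors (first-order form)."""
    total_test_error, meta_grad = 0.0, 0.0
    for task in meta_batch:
        _, g_tr = l1_loss_and_grad(theta, task["train"])
        theta_prime = theta - alpha * g_tr            # step 5: inner update
        te, g_te = l1_loss_and_grad(theta_prime, task["test"])
        total_test_error += te                        # step 8: accumulate errors
        meta_grad += g_te
    return theta - beta * meta_grad, total_test_error # step 8: meta update

# Two toy "tasks" whose targets follow y = 2x; theta should drift toward 2.
tasks = [{"train": [(1.0, 2.0)], "test": [(2.0, 4.0)]},
         {"train": [(3.0, 6.0)], "test": [(1.0, 2.0)]}]
theta = 0.0
for _ in range(5):                                    # step 9: repeat until converged
    theta, err = meta_transfer_step(theta, tasks)
```

With the tiny meta learning rate, $\theta$ moves only slowly, but the cumulative test error decreases with each outer iteration, which is the convergence criterion of step 9.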
According to a preferred embodiment, the climate downscaling model extracts shallow features from the input large-scale low-resolution image with two convolutional layers and feeds the obtained shallow features to upsampling and downsampling modules, which use sub-pixel convolution and convolutional layers to implement upsampling and downsampling, respectively; the residual concept is introduced to expose deeper high-frequency feature information.
The invention has the beneficial effects that:
1. The method is based on meta-transfer learning. The constructed climate downscaling framework, MTL-Framework, can implicitly learn the relevance among different climate variables and find, in parameter space, a transferable initialization parameter sensitive to multiple meteorological variable downscaling tasks; it is a general climate downscaling framework.
2. MTL-Framework searches for suitable parameters across multiple downscaling tasks, avoids training each task independently and thus greatly shortens training time, surpasses the traditional general downscaling framework on the PSNR/SSIM indexes, and approaches the effect of independent training: a downscaling framework that performs well and meets real-time application requirements.
3. The proposed MTL-Framework can learn the relevant features of a new task by simple fine-tuning; without training on the new task it still obtains competitive results, avoiding the problem that a large gap between the traditional training set and real data leads to unsatisfactory performance.
Drawings
FIG. 1 is a flow chart of the MTL-Framework proposed by the present invention;
FIG. 2 is a schematic diagram of the downscaling model proposed by the present invention;
FIG. 3 compares the T2 results of different downscaling methods, including that of the present invention;
FIG. 4 compares the Wind results of different downscaling methods, including that of the present invention;
FIG. 5 compares the objective evaluation index PSNR of different downscaling methods;
FIG. 6 compares the objective evaluation index SSIM of different downscaling methods;
FIG. 7 compares the run times of different downscaling methods.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It is to be understood that these descriptions are only illustrative and are not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
The following detailed description is made with reference to the accompanying drawings.
Existing downscaling models are each trained for one specific meteorological variable, making it difficult to learn the relevance among different variables; when faced with multiple meteorological variable downscaling tasks, they cannot strike a good balance between effect and efficiency. To solve these problems, the invention uses meta-transfer learning to construct a general climate downscaling framework, MTL-Framework, for multiple meteorological variable downscaling tasks; the framework structure is shown in FIG. 1. The constructed downscaling model is trained and optimized with this framework and implicitly learns the relevance among different climate variables, so that the model finds, in parameter space, a transferable initialization parameter sensitive to the multiple downscaling tasks. When tested on several different downscaling tasks, the model only needs to be initialized with this parameter and then briefly fine-tuned on the current target task to achieve a good downscaling effect. The structure of the proposed downscaling model is shown schematically in FIG. 2.
Step 1: combine the 8 meteorological variable data sets into a data set $\mathcal{D}$ suitable for meta-transfer learning. The eight meteorological variables are: 2-m specific humidity Q2, 2-m temperature T2, 10-m wind speed Wind, planetary boundary layer height PBLH, surface skin temperature TSK, outgoing longwave radiation OLR, 2-m potential temperature TH2, and sea-level pressure SLP. The data set $\mathcal{D}$ comprises 8 task subsets; each task subset contains 4000 pairs of meteorological variable pictures, each pair consisting of a large-scale low-resolution image serving as input and a small-scale high-resolution image serving as label.
Step 2: a pre-training phase. Pre-train the climate downscaling model on the public DIV2K data set by minimizing an L1 loss, obtaining model parameters $\theta_0$; the climate downscaling model provided by the invention is only a carrier for the climate downscaling framework.
The downscaling model of the invention extracts shallow features from the input large-scale low-resolution image with two convolutional layers. The obtained shallow features are then fed to upsampling and downsampling modules, which use sub-pixel convolution and convolutional layers to implement upsampling and downsampling, respectively; the residual concept is introduced here to expose deeper high-frequency feature information. In addition, skip connections are used to enhance feature learning.
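Sub-pixel convolution upsamples by letting an ordinary convolution produce $r^2$ channels per output channel and then rearranging them into an $r\times$ larger grid (the "pixel shuffle"). The rearrangement itself, which is what distinguishes this from plain interpolation, can be sketched as follows (a minimal stand-alone illustration; the patent does not give this code):

```python
def pixel_shuffle(x, r):
    """Rearrange a [C*r*r][H][W] tensor (nested lists) into [C][H*r][W*r],
    as done after the convolution in a sub-pixel upsampling layer."""
    C = len(x) // (r * r)
    H, W = len(x[0]), len(x[0][0])
    out = [[[0] * (W * r) for _ in range(H * r)] for _ in range(C)]
    for c in range(C):
        for i in range(H * r):
            for j in range(W * r):
                # each r*r block of output pixels is filled from r*r channels
                src_ch = c * r * r + (i % r) * r + (j % r)
                out[c][i][j] = x[src_ch][i // r][j // r]
    return out

# Four 1x1 channels become one 2x2 channel (upscale factor r = 2).
x = [[[1]], [[2]], [[3]], [[4]]]
y = pixel_shuffle(x, 2)   # [[[1, 2], [3, 4]]]
```

Because the upsampling weights live in the preceding convolution rather than in a fixed interpolation kernel, the layer can learn to synthesize high-frequency detail instead of smoothing it away.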
Step 3: then enter the meta-transfer learning phase, taking the model parameters $\theta_0$ as the model initialization parameter $\theta$ of this phase.
Step 4: from each of the 8 task subsets of the data set $\mathcal{D}$, extract 10 pairs of pictures in turn as the training data $\mathcal{T}$ of one task. Of these, 5 pairs are used for the learning of step 5 and are recorded as the training set $\mathcal{T}^{tr}$; the remaining 5 pairs are used for the test of step 6 and are recorded as the test set $\mathcal{T}^{te}$. After extraction from all 8 task subsets is complete, the extracted training data together form a meta-batch $\mathcal{B}$.
Step 5: select the training data $\mathcal{T}$ of one task from the meta-batch $\mathcal{B}$ and train on the data of its training set $\mathcal{T}^{tr}$ to obtain the training loss, whose mathematical expression is:

$\mathcal{L}_{\mathcal{T}^{tr}}(\theta) = \frac{1}{n}\sum_{k=1}^{n}\left\| F_{\theta}(x_k) - y_k \right\|_1$   (1)

where $x_k$ is a large-scale low-resolution image (input) fed to the climate downscaling model, $y_k$ is the corresponding high-resolution image (label), $F_{\theta}$ denotes the climate downscaling model initialized with the parameters $\theta$, and $n$ is the number of image pairs in the training set $\mathcal{T}^{tr}$.
Then compute the updated parameters by stochastic gradient descent, $\theta' = \theta - \alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}^{tr}}(\theta)$, where the learning rate $\alpha$ is 0.01.
Step 6: continue on the current task with the data of its test set $\mathcal{T}^{te}$, using the parameters $\theta'$ updated in step 5 to compute the test error:

$\mathcal{L}_{\mathcal{T}^{te}}(\theta') = \frac{1}{n}\sum_{k=1}^{n}\left\| F_{\theta'}(x_k) - y_k \right\|_1$   (2)

where $F_{\theta'}$ denotes the climate downscaling model initialized with the parameters $\theta'$ and $n$ is the number of image pairs in the test set $\mathcal{T}^{te}$.
Step 7: cycle through steps 5 and 6 in turn, selecting in step 5 a task of the meta-batch $\mathcal{B}$ that has not yet been trained, until every task has participated in training and produced its test error; training is then finished and step 7 ends.
In steps 5 to 7, the climate downscaling model quickly learns the characteristic knowledge of each task.
Step 8: add up the test errors of all tasks to obtain the cumulative test error $\sum_{j=1}^{8}\mathcal{L}_{\mathcal{T}^{te}_j}(\theta'_j)$, and update the parameters $\theta$ of the climate downscaling model as

$\theta \leftarrow \theta - \beta\,\nabla_{\theta}\sum_{j=1}^{8}\mathcal{L}_{\mathcal{T}^{te}_j}(\theta'_j)$

where the meta learning rate $\beta$ is 0.0001.
In step 8, the climate downscaling model learns the relevance between different climate variables and updates its parameters according to the knowledge the climate variables have in common.
Step 9: repeat steps 4 to 8 until the cumulative test error converges; meta-transfer learning is then finished. At this point the downscaling model has found, in parameter space, a transferable initialization parameter $\theta$ sensitive to multiple meteorological variable downscaling tasks, recorded as $\theta_M$.
Step 10: a fine-tuning phase. Use $\theta_M$ as the initialization parameter of the fine-tuning phase, and obtain the model parameters $\theta_{new}$ by simple fine-tuning on the test task.
Step 11: a testing phase. Initialize the downscaling model with $\theta_{new}$ and downscale the test task.
To further illustrate the beneficial effects of the invention, comparison experiments evaluate both subjective visual quality and objective evaluation indexes. The comparison methods adopted are: Bicubic interpolation and Kriging interpolation as experimental baselines, and the PhIRE, DeepSD, EDSR, and MZSR methods.
The objective evaluation indexes adopted are peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). As shown in Table 1, although the method of the invention is slightly inferior to MZSR on the Q2 task, its PSNR is the best on every other task. In particular, the method achieves the highest average PSNR and SSIM, i.e. the best overall performance. Table 1 lists the PSNR/SSIM indexes of the downscaling methods over multiple training tasks; bold indicates the best performance.
TABLE 1 PSNR and SSIM indexes of the method of the present invention over multiple training tasks
(Table 1 is reproduced as an image in the original publication; its numerical values are not recoverable here.)
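The PSNR index reported above is derived directly from the mean squared error against the high-resolution reference. A minimal stand-alone implementation (an illustrative sketch assuming fields normalized to a known data range; not code from the patent):

```python
import math

def psnr(result, reference, data_range=1.0):
    """Peak signal-to-noise ratio between two equal-size 2-D fields:
    PSNR = 10 * log10(data_range^2 / MSE). Higher is better."""
    flat_r = [v for row in result for v in row]
    flat_t = [v for row in reference for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(flat_r, flat_t)) / len(flat_r)
    if mse == 0:
        return float("inf")   # identical fields
    return 10 * math.log10(data_range ** 2 / mse)

# A result uniformly off by 0.1 on a unit-range field: MSE = 0.01,
# so PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = [[0.5, 0.5], [0.5, 0.5]]
res = [[0.6, 0.6], [0.6, 0.6]]
score = psnr(res, ref)        # about 20.0
```

SSIM, the second index, additionally compares local luminance, contrast, and structure rather than pure pixel error, which is why the two indexes are reported together.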
FIGS. 3 and 4 are visual comparisons of the different downscaling methods on T2 and Wind, respectively. Panels (a1)-(g1) show the downscaling results of the respective methods, and panels (a2)-(g2) show the differences between each result and the high-resolution image, rendered with color bars; PSNR and SSIM values are given under each group, with bold indicating the best performance. With Bicubic and Kriging interpolation as experimental baselines, the results in FIGS. 3 and 4 show that, on the T2 and Wind tasks, the result of the method of the invention differs least from the high-resolution image and is the best.
When faced with multiple downscaling tasks, most downscaling algorithms train each task individually. This achieves good PSNR and SSIM indexes but sacrifices a great deal of time and resources, which is undesirable. The traditional general downscaling framework instead merges the tasks into one for training; this does not take long, but because of the data differences among tasks the PSNR and SSIM indexes are often poor.
FIGS. 5 to 7 show the performance of the DeepSD, EDSR, and MZSR methods and of the model proposed by the invention on the 8 downscaling tasks under different frameworks. FIGS. 5, 6, and 7 compare PSNR, SSIM, and time consumption, respectively (where lines overlap, the model names are labeled in the figures). Merge denotes the traditional general downscaling framework; Solo denotes training each task separately; MTL denotes the general downscaling framework proposed by the invention. FIG. 7 shows that the separate-training framework (Solo) needs a large amount of time to reach an ideal effect, and that the PSNR and SSIM of the traditional Merge framework are poor, whereas the proposed MTL framework lets the downscaling model reach a good effect in less time, striking a good balance between effect and efficiency.
In contrast, the proposed MTL-Framework, which searches for suitable parameters across multiple downscaling tasks, solves these problems well. Compared with separate training, MTL-Framework greatly reduces the required training time; in terms of PSNR and SSIM it surpasses the traditional general downscaling framework and comes close to separate training. These results show that MTL-Framework achieves a good balance between efficiency and effect and is an efficient downscaling framework that meets real-time application requirements.
In addition, most existing deep-learning-based downscaling methods are trained around one specific task or the overall task set, while new downscaling tasks often arise in practice. In view of this, the trained method was tested on four entirely unseen tasks: ground heat flux GRDFLX, sensible heat flux HFX, sea surface temperature SST, and latent heat flux LH. As shown in Table 2, the method of the invention obtained the best results: MTL-Framework has acquired the ability to learn and can pick up the relevant features of a new task with simple fine-tuning, so the proposed method achieves competitive results even without training on the new tasks. Table 2 lists the PSNR and SSIM indexes of the downscaling methods on the new tasks; bold indicates the best performance.
TABLE 2 PSNR/SSIM indexes of the method of the present invention on new tasks
(Table 2 is reproduced as an image in the original publication; its numerical values are not recoverable here.)
To better demonstrate the superiority of MTL-Framework, other downscaling models were also incorporated into the framework. As Table 3 shows, every downscaling method clearly improves its PSNR and SSIM values after adopting MTL-Framework, which shows that MTL-Framework is both effective and flexible. Table 3 compares each downscaling model before and after combination with MTL-Framework; bold indicates the best performance.
TABLE 3 PSNR and SSIM index comparison before and after combining the downscaling model with MTL-Framework
(Table 3 is reproduced as an image in the original publication; its numerical values are not recoverable here.)
The above experimental results show that the method of the invention outperforms many current state-of-the-art methods and performs best in overall performance across multiple tasks.
It should be noted that the above embodiments are exemplary; those skilled in the art, having the benefit of this disclosure, may devise various alternative solutions that still fall within the scope of the invention. It should be understood that the specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents.

Claims (2)

1. A general climate data downscaling method based on meta-transfer learning, characterized in that the downscaling method is based on a climate downscaling framework, MTL-Framework; after the constructed climate downscaling model is trained within the climate downscaling framework MTL-Framework, it can implicitly learn the relevance among different climate variables and find, in parameter space, a transferable initialization parameter sensitive to multiple climate variable downscaling tasks; the specific method comprises the following steps:
step 1: combining the 8 meteorological variable data sets into a data set $\mathcal{D}$ suitable for meta-transfer learning, the data set $\mathcal{D}$ comprising 8 task subsets, each task subset containing at least 4000 pairs of meteorological variable pictures, each pair consisting of a large-scale low-resolution image and a small-scale high-resolution image;
step 2: a pre-training phase, in which the climate downscaling model is pre-trained on the public DIV2K data set by minimizing an L1 loss to obtain model parameters $\theta_0$;
step 3: then entering the meta-transfer learning phase, taking the model parameters $\theta_0$ as the model initialization parameter $\theta$ of the meta-transfer learning phase;
Step 4: from each of the 8 task subsets of the data set D, sequentially extract 10 pairs of images as the training data T_i of one task, where 5 pairs of T_i are used for the learning training of step 5 and recorded as the training set T_i^train, and the remaining 5 pairs are used for the test training of step 6 and recorded as the test set T_i^test; after extraction from all 8 task subsets is completed, all extracted training data form a meta-batch B;
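The meta-batch construction of step 4 can be sketched as follows. This is a minimal illustration only, not the patent's implementation: the dict layout, the variable names, and the use of a start offset for the sequential extraction are assumptions.

```python
def sample_meta_batch(task_subsets, start=0, k=10):
    """Sequentially extract k image pairs per task subset (step 4).

    task_subsets: dict mapping a meteorological-variable name to a
    list of (low_res, high_res) image pairs. For each task, the first
    k//2 extracted pairs form the training set and the remaining k//2
    pairs form the test set.
    """
    meta_batch = {}
    for name, pairs in task_subsets.items():
        chosen = pairs[start:start + k]
        meta_batch[name] = {
            "train": chosen[: k // 2],
            "test": chosen[k // 2:],
        }
    return meta_batch

# Toy subsets standing in for two of the 8 variables (names are illustrative)
subsets = {
    "t2m": [(i, i) for i in range(20)],
    "precip": [(i, -i) for i in range(20)],
}
meta_batch = sample_meta_batch(subsets, start=0, k=10)
```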
Step 5: select the training data T_i of one task from the meta-batch B, and train on the data of its training set T_i^train to obtain the training loss, whose mathematical expression is:

L_{T_i}^{train}(θ) = (1/n) Σ_{j=1}^{n} ‖ f_θ(x_j) − y_j ‖₁    (1)

where x_j is a large-scale low-resolution image input to the climate downscaling model, y_j is the corresponding small-scale high-resolution image, f_θ denotes the climate downscaling model initialized with parameters θ, and n denotes the number of image pairs in the training set T_i^train; then the updated parameters θ_i are computed by stochastic gradient descent:

θ_i = θ − α ∇_θ L_{T_i}^{train}(θ)

where α denotes the learning rate;
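The inner-loop update of step 5 can be illustrated with a toy one-parameter model. The scalar model f_θ(x) = θ·x and the hand-derived L1 subgradient are illustrative stand-ins for the climate downscaling network, not the patent's architecture.

```python
import numpy as np

def l1_loss(pred, target):
    # Mean absolute error, the per-pair L1 loss of Eq. (1)
    return np.mean(np.abs(pred - target))

def inner_update(theta, x, y, alpha=0.1):
    # One stochastic-gradient step on a task's training set (step 5),
    # using the subgradient of the L1 loss w.r.t. theta for f(x) = theta*x.
    pred = theta * x
    grad = np.mean(np.sign(pred - y) * x)
    return theta - alpha * grad

theta = 0.5
x = np.array([1.0, 2.0, 3.0])   # toy "low-resolution" inputs
y = np.array([2.0, 4.0, 6.0])   # toy "high-resolution" targets
theta_i = inner_update(theta, x, y, alpha=0.1)
```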
Step 6: continue with the data of the test set T_i^test of the current task's training data T_i for test training, and compute the test error with the parameters θ_i updated in step 5; the mathematical expression is:

L_{T_i}^{test}(θ_i) = (1/n) Σ_{j=1}^{n} ‖ f_{θ_i}(x_j) − y_j ‖₁    (2)

where f_{θ_i} denotes the climate downscaling model initialized with the parameters θ_i, and n denotes the number of image pairs in the test set T_i^test;
Step 7: repeat steps 5 and 6 in turn, each time selecting from the meta-batch B a task that has not yet been trained, until all tasks have participated in training and obtained their test errors; step 7 then ends;
Step 8: sum the test errors of all tasks to obtain the accumulated test error, and update the parameters θ of the climate downscaling model:

θ ← θ − β ∇_θ Σ_i L_{T_i}^{test}(θ_i)

where β is the meta learning rate;
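Steps 5 to 8 together form one outer-loop iteration of the meta-transfer learning. Below is a first-order sketch with the same toy linear model f(x) = θ·x: it approximates the meta-gradient by the test-set gradient evaluated at the adapted parameters θ_i, which is an assumption on my part — the patent's update does not specify whether second-order terms are kept.

```python
import numpy as np

def train_loss_grad(theta, x, y):
    # Subgradient of the L1 loss for the toy linear model f(x) = theta*x
    return np.mean(np.sign(theta * x - y) * x)

def meta_update(theta, tasks, alpha=0.1, beta=0.05):
    """One outer-loop iteration (steps 5-8), first-order approximation.

    tasks: list of ((x_train, y_train), (x_test, y_test)) tuples,
    one per task subset in the meta-batch.
    """
    meta_grad = 0.0
    for (x_tr, y_tr), (x_te, y_te) in tasks:
        # Step 5: inner adaptation on the task's training set
        theta_i = theta - alpha * train_loss_grad(theta, x_tr, y_tr)
        # Steps 6-8: accumulate the test-error gradient at theta_i
        meta_grad += train_loss_grad(theta_i, x_te, y_te)
    return theta - beta * meta_grad

theta0 = 0.5
tasks = [
    ((np.array([1.0, 2.0]), np.array([2.0, 4.0])),  # toy train split
     (np.array([1.0]), np.array([2.0]))),           # toy test split
]
theta1 = meta_update(theta0, tasks, alpha=0.1, beta=0.05)
```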
Step 9: repeat steps 4 to 8 until the accumulated test error converges and the meta-transfer learning ends; at this point the downscaling model has found, in the parameter space, a transferable initialization sensitive to the downscaling tasks of the meteorological variables, and the parameters θ are recorded as θ_meta;
Step 10: a fine adjustment stage:
Figure 395312DEST_PATH_IMAGE020
the initial parameters are used as the initialization parameters of the fine tuning stage, and the model parameters are obtained through simple fine tuning on the test task
Figure 951100DEST_PATH_IMAGE021
Step 11: and (3) a testing stage: by
Figure 426818DEST_PATH_IMAGE021
Initializing the downscaling model, and downscaling the test task.
2. The general climate data downscaling method according to claim 1, characterized in that the climate downscaling model uses two convolutional layers to extract shallow features from the input large-scale low-resolution image, feeds the obtained shallow features to up-sampling and down-sampling modules, and implements the up-sampling and down-sampling with sub-pixel convolution and convolutional layers, respectively, wherein residual connections are introduced to recover deeper high-frequency feature information.
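The sub-pixel convolution named in claim 2 upsamples a feature map by rearranging channels into spatial positions. A minimal NumPy sketch of that rearrangement (the convolution producing the C·r² channels is omitted; the claim does not disclose layer sizes, so shapes here are illustrative):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement with upscale factor r.

    x: feature map of shape (C*r*r, H, W).
    Returns an array of shape (C, H*r, W*r): each group of r*r
    channels fills an r-by-r spatial block of the output.
    """
    c_in, h, w = x.shape
    assert c_in % (r * r) == 0
    c = c_in // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 4 channels, 1x1 spatial -> 1 channel, 2x2 spatial
feat = np.arange(4.0).reshape(4, 1, 1)
up = pixel_shuffle(feat, 2)
```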
CN202211498102.1A 2022-11-28 2022-11-28 General climate data downscaling method based on element migration learning Pending CN115661612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211498102.1A CN115661612A (en) 2022-11-28 2022-11-28 General climate data downscaling method based on element migration learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211498102.1A CN115661612A (en) 2022-11-28 2022-11-28 General climate data downscaling method based on element migration learning

Publications (1)

Publication Number Publication Date
CN115661612A true CN115661612A (en) 2023-01-31

Family

ID=85018887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211498102.1A Pending CN115661612A (en) 2022-11-28 2022-11-28 General climate data downscaling method based on element migration learning

Country Status (1)

Country Link
CN (1) CN115661612A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746171A (en) * 2024-02-20 2024-03-22 成都信息工程大学 Unsupervised weather downscaling method based on dual learning and auxiliary information
CN117746171B (en) * 2024-02-20 2024-04-23 成都信息工程大学 Unsupervised weather downscaling method based on dual learning and auxiliary information


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination