CN107818555A - Multi-dictionary spatio-temporal fusion method for remote-sensing images based on maximum a posteriori estimation - Google Patents

Multi-dictionary spatio-temporal fusion method for remote-sensing images based on maximum a posteriori estimation

Info

Publication number
CN107818555A
CN107818555A (application CN201711022705.3A)
Authority
CN
China
Prior art keywords
dictionary
resolution
image
Prior art date
Legal status
Granted
Application number
CN201711022705.3A
Other languages
Chinese (zh)
Other versions
CN107818555B (en)
Inventor
何楚
张芷
郭闯创
熊德辉
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201711022705.3A priority Critical patent/CN107818555B/en
Publication of CN107818555A publication Critical patent/CN107818555A/en
Application granted granted Critical
Publication of CN107818555B publication Critical patent/CN107818555B/en
Legal status: Active


Classifications

    • G06T 5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06F 18/24 — Pattern recognition; classification techniques
    • G06T 3/4053 — Super-resolution, i.e. output image resolution higher than sensor resolution
    • G06T 3/4061 — Super-resolution by injecting details from a different spectral band
    • G06T 2207/10032 — Satellite or aerial image; remote sensing
    • G06T 2207/10036 — Multispectral image; hyperspectral image
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20212 — Image combination
    • G06T 2207/20221 — Image fusion; image merging

Abstract

The present invention relates to a multi-dictionary spatio-temporal fusion method for remote-sensing images based on maximum a posteriori estimation. The high- and low-resolution difference images are first roughly classified; image blocks are extracted for each class to form per-class training-sample matrices, from which multi-class high/low-resolution dictionary pairs are trained. Multi-dictionary learning takes into account that different land-cover types in an image have different shapes and textures, so the trained dictionaries are more targeted and better capture the differences between cover types. Before the sparse coefficients are solved, a maximum a posteriori probability model selects the dictionary group: the posterior probability is computed from the likelihood of each region's pixels under each dictionary and a prior over dictionary groups, and each pixel of the low-resolution difference input image is assigned to the corresponding dictionary group. Each group of pixels is sparsely coded under its group's low-resolution dictionary, yielding sparse-representation coefficients, which are multiplied by the corresponding high-resolution dictionary to obtain the high-resolution difference image.

Description

Multi-dictionary spatio-temporal fusion method for remote-sensing images based on maximum a posteriori estimation
Technical field
The invention belongs to the technical field of image processing, and more particularly to a multi-dictionary spatio-temporal fusion method for remote-sensing images based on maximum a posteriori estimation.
Background technology
Because of hardware limitations and budget constraints, it is difficult to obtain remote-sensing images that have both high spatial resolution and high temporal coverage. For example, the Moderate Resolution Imaging Spectroradiometer (MODIS) observes the same area every day and therefore has high temporal resolution, but its spatial resolution ranges from 250 to 1000 meters. At such coarse resolution, the fine details of a target area cannot be represented well, so land cover and ecosystem change in a given region or designated place cannot be monitored effectively. On the other hand, SPOT and land-surface satellites (such as the US Landsat) provide remote-sensing images with higher spatial resolution, in the range of 10-30 meters. Such images are generally suited to predicting land-use change and to surface mapping and cover analysis, but their revisit interval is typically two weeks to one month, and the combination of long revisit intervals, bad weather, and atmospheric conditions means that rapid surface changes caused by human activity or disturbance, as well as seasonal changes, cannot be detected from these images alone. To better observe surface change and analyse it, spatio-temporal fusion techniques have emerged: they fuse images rich in temporal information with images rich in spatial information to obtain high-spatial-resolution images at short time intervals.
The Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) is a classical spatio-temporal fusion model, and many improved models have since been proposed on its basis. STARFM predicts the central pixel value from spectrally similar neighbouring pixels, weighted by distance, spectral similarity, and time difference, which greatly improves the accuracy of the fusion result. Image super-resolution methods based on sparse representation were then proposed: using sparse-representation theory, the mapping between high- and low-resolution images is obtained by learning a high/low-resolution dictionary pair, realising image super-resolution. Applying sparse representation to the spatio-temporal fusion of remote-sensing images led to the sparse-representation-based spatio-temporal fusion model (SPSTFM), which predicts the change in reflectance through a dictionary jointly trained on low-spatial-resolution, high-temporal-coverage difference images and high-spatial-resolution, low-temporal-coverage difference images; this gives it good ability to handle both phenological change and land-cover type change.
However, these models have the following problems. STARFM does not make full use of the available data, because it assumes that the land-cover types and their proportions remain constant during the observation period, so it lacks an effective means of handling regions that change over short periods. Models based on sparse representation learn and reconstruct with a single unified dictionary, so the reconstruction quality is constrained by the validity of that one learned dictionary.
Summary of the invention
The object of the invention is, through multi-dictionary learning aimed at the different shapes and textures of different land-cover types in an image, to train more targeted dictionaries that better capture the differences between cover types, and, by introducing a Bayesian framework, to make full use of prior information when selecting a dictionary during image generation. On the basis of the sparse-representation spatio-temporal fusion model, the invention proposes a multi-dictionary spatio-temporal fusion model based on maximum a posteriori estimation: multiple dictionaries are learned for the different region types in the image; dictionary selection for the input image is treated as a maximum a posteriori probability problem, in which a likelihood function of region pixels under each dictionary and a prior over dictionaries are built within a Bayesian framework, and the dictionary for each region is obtained by optimisation; finally, the high-resolution image is generated under the classical sparse-representation spatio-temporal fusion framework from the earlier and later high-resolution images and the low-resolution image at the prediction time.
The technical scheme of the invention is a multi-dictionary spatio-temporal fusion method for remote-sensing images based on maximum a posteriori estimation, comprising the following steps: Step 1, classify the training images, comprising the following sub-steps,
Step 1.1, take the difference of the high-resolution images at any two interval moments t_{1+2n} and t_1, and likewise the difference of the low-resolution images at t_{1+2n} and t_1, obtaining a high-resolution difference image and a low-resolution difference image, where n = 1, 2, 3, ...;
Step 1.2, apply a simple rough classification to the high- and low-resolution difference images respectively, obtaining several groups of high- and low-resolution training-sample images of different classes;
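Steps 1.1-1.2 can be sketched on toy arrays: form the two difference images and binarise the magnitude of change as the rough classification. The data, shapes, and the 0.5 threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
high_t1, high_t3 = rng.random((8, 8)), rng.random((8, 8))  # high-res at t1, t1+2n
low_t1, low_t3 = rng.random((8, 8)), rng.random((8, 8))    # low-res at t1, t1+2n

high_diff = high_t3 - high_t1   # high-resolution difference image (step 1.1)
low_diff = low_t3 - low_t1      # low-resolution difference image (step 1.1)

# step 1.2: simple rough classification by thresholding the change magnitude
change_mask = np.abs(high_diff) > 0.5
```

Pixels under the mask would then be separated into per-class training-sample images.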
Step 2, learn the multi-class dictionaries, comprising the following sub-steps,
Step 2.1, according to the classes obtained in step 1, divide each class of training-sample image into blocks and stack each image block as a column, forming a training-sample matrix;
Step 2.2, put the high- and low-resolution training-sample matrices of the same class together for joint training;
Step 2.3, train each class's training-sample matrix into the high/low-resolution dictionary pair belonging to that class, obtaining the dictionaries of the different groups;
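The patch extraction of step 2.1 and the joint stacking of step 2.2 can be sketched as follows; the patch size, step, and toy images are illustrative assumptions.

```python
import numpy as np

def patches_as_columns(img, patch=3, step=1):
    """Slide an overlapping patch x patch window over img and stack each
    block as one column of the training-sample matrix (step 2.1)."""
    h, w = img.shape
    cols = [img[i:i + patch, j:j + patch].ravel()
            for i in range(0, h - patch + 1, step)
            for j in range(0, w - patch + 1, step)]
    return np.array(cols).T

rng = np.random.default_rng(1)
high_sample = rng.random((6, 6))   # one class's high-res training image
low_sample = rng.random((6, 6))    # the matching low-res training image

Y = patches_as_columns(high_sample)   # high-res training matrix
X = patches_as_columns(low_sample)    # low-res training matrix
Z = np.vstack([Y, X])                 # joint matrix Z = [Y; X] (step 2.2)
```

Stacking Y and X vertically forces one shared sparse code per patch position, which is what makes the joint training meaningful.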
Step 3, input the low-resolution image at the moment t_{1+n} to be reconstructed, take its difference with the low-resolution image at moment t_{1+2n} to obtain the input image, and select a dictionary group pixel by pixel for the input image, comprising the following sub-steps,
Step 3.1, sample the low-resolution training-sample image of each class obtained in step 1 to obtain the likelihood of any pixel value under each class;
Step 3.2, obtain the prior probability of a pixel from the class assignments of the points around it;
Step 3.3, traverse every pixel of the input image and select each pixel's dictionary group by the maximum a posteriori probability method;
Step 4, reconstruct the high-resolution image at moment t_{1+n}, comprising the following sub-steps,
Step 4.1, divide the input image into blocks according to the dictionary-group selection, put the blocks of the same group together, and stack them as columns to form input matrices, finally producing as many input matrices as there are dictionary groups;
Step 4.2, for each input matrix, use the low-resolution dictionary obtained in step 2.3 and the OMP method to solve for the corresponding sparse-representation coefficients;
Step 4.3, multiply each group's high-resolution dictionary by the sparse-representation coefficients of that group to obtain the image blocks of the reconstructed image; then superimpose the blocks, averaging in overlapping regions, to obtain the reconstructed difference image;
Step 4.4, add the reconstructed difference image to (or subtract it from) the known high-resolution image at moment t_{1+2n} to obtain the reconstructed high-resolution image L21;
Step 5, take the difference of the low-resolution images at moments t_{1+n} and t_1 as the input image, repeat steps 3-4 to obtain the reconstructed high-resolution image L22, and average L21 and L22 as the final reconstruction result.
Moreover, in step 2.3 the K-SVD method is used to train each class's training-sample matrix into the high/low-resolution dictionary pair belonging to that class, implemented as follows.
Let the training-sample matrix be x = {x_1, x_2, …, x_N}, with each x_i ∈ R^M. Sparse representation assumes that these signals can be linearly expressed by a few atoms of an overcomplete dictionary matrix, i.e.:
x = Dα   (1)
with overcomplete dictionary
D = {d_1, d_2, …, d_N} ∈ R^{M×N} (M < N)   (2)
and sparse coefficients
α = {α_1, α_2, …, α_N}^T ∈ R^N   (3)
where M and N are respectively the numbers of rows and columns of the dictionary matrix; each column of the dictionary matrix is called a dictionary atom d, there are N atoms in all, and each atom has dimension M; α is the sparse-representation coefficient vector, most of whose entries are 0 with only a few non-zero; if the number of non-zero entries is K, with K << M, then α is said to be K-sparse;
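The definitions in (1)-(3) can be illustrated with a tiny overcomplete dictionary and a K-sparse coefficient vector; all the dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 8, 16, 3                    # signal dim, atom count, sparsity
D = rng.standard_normal((M, N))       # overcomplete dictionary: M < N
alpha = np.zeros(N)
alpha[[2, 7, 11]] = [1.5, -0.8, 0.3]  # only K entries non-zero: alpha is K-sparse
x = D @ alpha                         # x = D alpha, a K-term combination of atoms
```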
From the principle of sparse representation and sparse coding, each class's high/low-resolution dictionary pair is obtained by optimising formula (4):
{D*, α*} = argmin_{D,α} ||Z − Dα||_2^2 + λ||α||_1   (4)
where λ is a regularisation parameter balancing the sparsity of the represented signal against the reconstruction error; ||α||_1 is the l1 norm, the sum of absolute values; ||Z − Dα||_2 is the l2 norm in its ordinary sense; Z = [Y; X], where Y is the high-resolution training-sample matrix and X the low-resolution training-sample matrix, both normalised; D = [D_l; D_h], with D_l and D_h the low- and high-resolution dictionaries respectively; D* and α* denote the dictionary and sparse-representation coefficients obtained by the optimisation;
The K-SVD method is used to optimise the formula and obtain the high/low-resolution dictionary pair D_l, D_h, as follows:
a. Input the training-sample matrix Z; the dictionary has N atoms and the number of iterations is J;
b. Initialise the dictionary, e.g. by randomly selecting columns of the training-sample matrix Z as its initial atoms;
c. Use the orthogonal matching pursuit (OMP) algorithm to obtain the sparse-representation coefficients α from the current dictionary;
d. Update the k-th column of the dictionary:
① denote by α_T^k the k-th row of the sparse coefficient matrix, i.e. the row that multiplies the k-th dictionary column;
② compute the overall representation error matrix with the k-th column removed, E_k = Z − Σ_{j≠k} d_j α_T^j, where d_j is the j-th column of the dictionary;
③ retain in E_k only the columns at which α_T^k is non-zero, forming E_k^R;
④ perform a singular value decomposition of E_k^R, thereby updating the k-th dictionary column and the corresponding sparse coefficients;
e. Repeat step d until the iteration count J is reached, obtaining the final high/low-resolution dictionary pair.
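Steps a-e above can be sketched in a minimal numpy implementation, with a plain greedy OMP standing in for step c. The atom count, sparsity level, and data are toy assumptions, and no claim is made that this matches the patent's tuned implementation.

```python
import numpy as np

def omp(D, x, k):
    """Plain orthogonal matching pursuit: greedily pick up to k atoms (step c)."""
    residual, idx = x.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[idx] = coef
    return alpha

def ksvd(Z, n_atoms, sparsity, n_iter=5, seed=0):
    """Minimal K-SVD sketch of steps a-e; illustrative, not production code."""
    rng = np.random.default_rng(seed)
    # b) initialise the dictionary with random training columns, unit norm
    D = Z[:, rng.choice(Z.shape[1], n_atoms, replace=False)].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    for _ in range(n_iter):
        # c) sparse-code every training sample with OMP
        A = np.column_stack([omp(D, Z[:, i], sparsity) for i in range(Z.shape[1])])
        # d) update each atom via a rank-1 SVD of the restricted error matrix
        for k in range(n_atoms):
            users = np.nonzero(A[k, :])[0]      # columns that use atom k
            if users.size == 0:
                continue
            E_k = Z[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
            U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
            D[:, k] = U[:, 0]                   # updated atom
            A[k, users] = s[0] * Vt[0, :]       # updated coefficients
    return D, A

rng_z = np.random.default_rng(2)
Z = rng_z.random((10, 30))                      # toy joint training matrix
D_learned, A_learned = ksvd(Z, n_atoms=12, sparsity=3, n_iter=3)
```

Splitting the learned `D_learned` row-wise would then give the low- and high-resolution halves of the dictionary pair.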
Moreover, in step 3.3 each pixel's dictionary-group class is selected by the maximum a posteriori probability method, implemented as follows.
Given that the pixel value at point (m, n) of the input image is x_mn, the posterior probability under each dictionary-group hypothesis is computed by the Bayes formula (5), and the dictionary group with the largest posterior probability is taken as the pixel's final dictionary group:
P(c_i | x_mn) = P(x_mn | c_i) P(c_i) / P(x_mn)   (5)
where x_mn is the pixel value and c_i denotes the i-th dictionary class; P(x_mn | c_i) is the probability of pixel value x_mn under the i-th dictionary class c_i, called the likelihood; P(c_i) is the prior probability of the i-th dictionary class; P(x_mn) is the prior probability of the pixel value x_mn; and P(c_i | x_mn) is the posterior probability, i.e. the probability that the point belongs to dictionary class c_i given that its pixel value is x_mn;
Since the denominator P(x_mn) is a constant that does not depend on the dictionary group — whichever dictionary group the pixel belongs to, the value of P(x_mn) is unchanged — it has no effect on the maximisation, so P(x_mn) is left out of the computation and the problem converts to
i = argmax_i P(x_mn | c_i) P(c_i)   (6)
where i denotes the dictionary class; for an input image I of size M × N, the pixel value x_mn of every pixel (m = 1, 2, …, M; n = 1, 2, …, N) is taken, its dictionary class i is obtained by formula (6), and pixels with the same i are grouped together into the image block I_i of the i-th dictionary class.
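The per-pixel choice of formula (6) can be sketched with stand-in Gaussian likelihoods for three hypothetical classes; the class means, the spread, and the uniform prior are illustrative assumptions, not the patent's sampled histograms.

```python
import numpy as np

class_means = np.array([0.1, 0.5, 0.9])   # hypothetical per-class grey means
class_sigma = 0.15

def likelihood(x, i):
    """Stand-in for P(x_mn | c_i): an unnormalised Gaussian around the class mean."""
    return np.exp(-(x - class_means[i]) ** 2 / (2 * class_sigma ** 2))

def map_class(x, prior):
    """Formula (6): i* = argmax_i P(x_mn | c_i) P(c_i)."""
    return int(np.argmax([likelihood(x, i) * prior[i] for i in range(3)]))

uniform_prior = np.full(3, 1 / 3)
pixels = np.array([[0.05, 0.52], [0.88, 0.49]])
labels = np.array([[map_class(x, uniform_prior) for x in row] for row in pixels])
```

With a uniform prior the MAP choice reduces to the maximum-likelihood class; a neighbourhood-dependent prior would shift borderline pixels toward the class of their neighbours.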
Moreover, the simple rough classification in step 1.2 is a binarisation-based classification.
Compared with the prior art, the invention has the following advantages. Multi-dictionary learning takes into account that different land-cover types in an image have different shapes and textures, so the trained dictionaries are more targeted and better capture the differences between cover types; at the same time, the maximum a posteriori probability method under a Bayesian framework better selects a dictionary for each region of the input image, thereby improving the quality of the reconstructed high-resolution image.
Brief description of the drawings
Fig. 1 is the flow chart of an embodiment of the invention.
Fig. 2 illustrates the Bayesian maximum a posteriori classification framework of an embodiment of the invention.
Embodiment
The technical solution of the invention is described in detail below with reference to the drawings and an embodiment.
The algorithm of the invention introduces multiple dictionaries and maximum a posteriori estimation. After the high- and low-resolution difference images are obtained they are roughly classified, image blocks are extracted for each class to form per-class training-sample matrices, and multi-class high/low-resolution dictionaries are trained from them. Multi-dictionary learning takes into account that different land-cover types in an image have different shapes and textures, so the trained dictionaries are more targeted and better capture the differences between cover types. Before the sparse coefficients are solved, a maximum a posteriori probability model selects the dictionary group: the posterior probability is computed from the likelihood of each region's pixels under each dictionary and a prior over dictionary groups, and each pixel of the low-resolution difference input image is assigned to the corresponding dictionary group. Each group of pixels is sparsely coded under its group's low-resolution dictionary, yielding sparse-representation coefficients, which are multiplied by the corresponding high-resolution dictionary to obtain the high-resolution difference image. Because the maximum a posteriori probability is computed within a Bayesian framework, prior information is used more fully and the reconstruction is closer to the true image.
As shown in Fig. 1, the flow of the embodiment comprises the following three steps:
Step 1: train the multi-class dictionaries
(1) The difference of the Landsat images Y3 and Y1 at moments t3 and t1 gives L31, and the difference of the MODIS images X3 and X1 at moments t3 and t1 gives M31. The interval between the moments is not fixed and may span several intermediate moments; what is mainly needed are the images acquired at the two endpoint moments, from which the image at an intermediate moment is reconstructed.
(2) Apply a simple rough classification to L31 and M31, for example binarisation first, then filter out regions whose area exceeds a threshold, obtaining binary maps of the lake, the lake degradation area, and the woodland. Using the binary maps, three groups of training-sample images — lake, lake degradation area, and woodland — are separated from the difference images.
(3) Divide each class of training-sample image into overlapping blocks, stack each image block as a column to form the training-sample matrix, and jointly train the high- and low-resolution sample matrices together, obtaining several high/low-resolution dictionary pairs. The K-SVD method is used, as follows.
For an image, divide it into image blocks; after stacking each block as a column, the blocks are expressed as {x_1, x_2, …, x_N}, with each x_i ∈ R^M. Sparse representation assumes that these signals can be linearly expressed by a few atoms of an overcomplete dictionary matrix, i.e.:
x = Dα   (1)
with overcomplete dictionary
D = {d_1, d_2, …, d_N} ∈ R^{M×N} (M < N)   (2)
and sparse coefficients
α = {α_1, α_2, …, α_N}^T ∈ R^N   (3)
where M and N are respectively the numbers of rows and columns of the dictionary matrix; each column of the dictionary matrix is called a dictionary atom d, there are N atoms in all, and each atom has dimension M. α is the sparse-representation coefficient vector, most of whose entries are 0 with only a few non-zero; if the number of non-zero entries is K, with K << M, then α is said to be K-sparse. From the principle of sparse representation and sparse coding, the dictionary pair is obtained by optimising formula (4),
{D*, α*} = argmin_{D,α} ||Z − Dα||_2^2 + λ||α||_1   (4)
where λ is a regularisation parameter balancing the sparsity of the represented signal against the reconstruction error; ||α||_1 is the l1 norm, the sum of absolute values; ||Z − Dα||_2 is the l2 norm in its ordinary sense; Z = [Y; X] and D = [D_l; D_h], with D_l and D_h the low- and high-resolution dictionaries respectively; D* and α* denote the dictionary and sparse-representation coefficients obtained by the optimisation. Y is the high-resolution training matrix formed by dividing the high-resolution difference image into blocks and stacking each block as a column; similarly, X is the low-resolution training matrix formed from the low-resolution difference-image blocks. Joint training of the high- and low-resolution training matrices ensures that, during training, the high-resolution sparse-representation coefficients of an image block are identical to the low-resolution coefficients at the same position. Because reflectance differs between bands and between high- and low-resolution images, the training matrices are normalised; X and Y denote the normalised training matrices.
The K-SVD method is used to optimise formula (4) and obtain the high/low-resolution dictionaries D_l, D_h, as follows: a) input the training-sample matrix Z; the dictionary has N atoms and the number of iterations is J;
b) initialise the dictionary, e.g. by randomly selecting columns of the training-sample matrix Z as its initial atoms;
c) use the orthogonal matching pursuit (OMP) algorithm to obtain the sparse-representation coefficients α from the current dictionary;
For a given overcomplete dictionary, the sparse-representation coefficients α are not unique; to make α as sparse as possible, the solution with the fewest non-zero entries is sought, converting the problem to:
min ||α||_0  s.t.  x = Dα   (7)
where ||α||_0 is the l0 norm, i.e. the number of non-zero entries; the number of atoms N in D is greater than the dimension M of the signal x, i.e. M < N, which guarantees the overcompleteness of the dictionary.
Solving for the sparse-representation coefficients can be converted to an l1-norm problem; here the coefficients are solved with the orthogonal matching pursuit (OMP) algorithm. Its main idea is: from the dictionary matrix D, select the atom (i.e. a column) that best matches the sample matrix Z, build a sparse approximation, and compute the signal residual; then select the atom that best matches the residual, and iterate. Z can then be expressed as the linear combination of the selected atoms plus the final residual; once the residual is negligible, Z is simply that linear combination of atoms. Before each selection the already-chosen atoms are orthogonalised, so each iteration is optimal and the residual decreases as the iterations proceed.
d) Update the k-th column of the dictionary:
① denote by α_T^k the k-th row of the sparse coefficient matrix, i.e. the row that multiplies the k-th dictionary column;
② compute the overall representation error matrix with the k-th column removed, E_k = Z − Σ_{j≠k} d_j α_T^j, where d_j is the j-th column of the dictionary;
③ retain in E_k only the columns at which α_T^k is non-zero, forming E_k^R;
④ perform a singular value decomposition of E_k^R, thereby updating the k-th dictionary column and the corresponding sparse coefficients;
e) repeat step d) until the iteration count J is reached, obtaining the final dictionary.
(4) Input the low-resolution image X2 at the moment t2 to be reconstructed, and take its difference with the low-resolution image X3 at moment t3 to obtain the input image X32. At the intermediate moment only the low-resolution image is known; the purpose of the invention is to obtain the high-resolution image from the low-resolution one.
Step 2: select each pixel's dictionary group by the maximum a posteriori probability method
The main idea for obtaining the likelihood function is to first roughly classify a series of low-resolution difference images, then sample the pixels of each class, collect the grey-value statistics, and draw the probability density curve of each dictionary class; the probability of each grey value under each dictionary group can then be read off the curve. For example, the rough classification may broadly give three classes — lake, lake degradation area, and woodland — with one dictionary for each.
The prior probability is determined mainly by the dictionary classes of the surrounding points. For example, when the surrounding points all have the same class, the probability that the centre point shares that class is high and can be set to 0.8, with a low probability, say 0.2, of being assigned to another class; when the grouping of the surrounding points is inconsistent, the centre point is taken to be equally likely to belong to any class. This is a fairly simple prior function: the surrounding points could be divided into more distinct cases, and more than the four 4-neighbours (up, down, left, right) could participate in determining the prior; the resulting prior would be more accurate and the classification better.
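The simple prior just described can be sketched as follows; the 0.8 / 0.2 figures come from the text, while the function shape and class count are illustrative assumptions.

```python
import numpy as np

def neighbour_prior(neigh_labels, n_classes=3):
    """Prior P(c_i) for the centre pixel from its four neighbours' classes:
    0.8 on the agreed class if all neighbours agree, uniform otherwise."""
    if len(set(neigh_labels)) == 1:
        prior = np.full(n_classes, 0.2 / (n_classes - 1))
        prior[neigh_labels[0]] = 0.8
        return prior
    return np.full(n_classes, 1.0 / n_classes)

agree = neighbour_prior([1, 1, 1, 1])   # all four neighbours are class 1
mixed = neighbour_prior([0, 1, 2, 1])   # neighbours disagree
```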
In this embodiment, the dictionary group of each pixel of X32 is selected by the maximum a posteriori probability method: the whole image is traversed, the probability under every possible grouping is evaluated, and the grouping with the largest probability is taken as the final dictionary-group selection result; the realisation is as follows.
The dictionary-group selection of a pixel uses a Bayesian framework; the Bayes formula is:
P(c_i | x_mn) = P(x_mn | c_i) P(c_i) / P(x_mn)   (5)
where x_mn is the pixel value and c_i denotes the i-th dictionary class. P(x_mn | c_i) is the probability of pixel value x_mn under the i-th dictionary class c_i, called the likelihood; P(c_i) is the prior probability of the i-th dictionary class, reflecting the probability, given the grouping of the surrounding points, that the centre point belongs to that dictionary group; P(x_mn) is the prior probability of the pixel value x_mn; and P(c_i | x_mn) is the posterior probability, i.e. the probability that the point belongs to dictionary class c_i given that its pixel value is x_mn.
The pixel's dictionary class is chosen by the maximum a posteriori (MAP) principle: given the pixel value x_mn, the posterior probability under each dictionary-group hypothesis is computed by formula (5), and the hypothesis with the largest probability is taken as the final group. The denominator P(x_mn) is a constant that does not depend on the dictionary group — whichever group the point belongs to, the value of P(x_mn) is unchanged — so it has no effect on the maximisation and is left out of the computation.
Here i denotes the dictionary class, determined by the maximum a posteriori probability: for an input image I of size M × N, the pixel value x_mn of every pixel (m = 1, 2, …, M; n = 1, 2, …, N) is taken, its dictionary class i is obtained by formula (6), and pixels with the same i are grouped together into the image block I_i of the i-th dictionary class.
As in Fig. 2, which shows how the maximum a posteriori probability determines a pixel's group in this embodiment: each pixel in the image carries a likelihood, a pixel value, a dictionary group, and a prior, so an image can be viewed as having four layers. Suppose the group of the centre point is unknown while the groups of the surrounding points are known. First, the likelihood of the centre point is obtained from its pixel value; second, its prior is obtained from the known groups of the surrounding points; third, likelihood and prior are multiplied to give the posterior of the centre point, and the group that maximises the posterior is taken as the centre point's grouping result.
The reconstruction of step 3 high-definition picture
Input picture X32 is subjected to piecemeal according to the selection result of dictionary group, i identical pixels are combined Form the image block I of the i-th category dictionaryi, the image block of same group is put together and stacked forms input matrix in column, final production The input matrix of raw quantity identical with dictionary group.Low-resolution dictionary corresponding to using each input matrix, is obtained pair with OMP The rarefaction representation coefficient answered.
The high-resolution dictionary Dh of each group is multiplied by the sparse representation coefficients of that group to obtain the image blocks of the reconstructed image; the blocks of each group are superimposed, averaging over the overlapping regions, to give the reconstructed difference image Y32.
Assuming the sparse coefficients of the input matrix X32 obtained with the OMP algorithm are α, the high-resolution difference image Y32 can then be expressed as:
Y32=Dh*α (8)
The difference image Y32 is added to the known high-resolution image L3, or subtracted from it as appropriate, to obtain the reconstructed high-resolution image L21.
In the same way, the difference of X1 and X2 is taken as the input image X12, and reconstruction yields the high-resolution image L22; L21 and L22 are then added and averaged as the final reconstruction result.
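The overlap averaging and the final fusion of the two reconstructions can be sketched as follows; the patch size, image size and pixel values are invented for the illustration.

```python
import numpy as np

def assemble(patches, positions, shape, p):
    """Superimpose reconstructed p x p image blocks on the image grid,
    averaging wherever blocks overlap."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for (r, c), blk in zip(positions, patches):
        acc[r:r + p, c:c + p] += blk
        cnt[r:r + p, c:c + p] += 1.0
    return acc / np.maximum(cnt, 1.0)   # avoid division by zero off-patch

# Two overlapping 2x2 blocks on a 2x3 grid: the shared column is averaged.
patches = [np.full((2, 2), 1.0), np.full((2, 2), 3.0)]
out = assemble(patches, [(0, 0), (0, 1)], (2, 3), 2)
print(out[:, 1])                        # overlapped column -> [2. 2.]

# Finally the two reconstructions L21 (from X32) and L22 (from X12)
# are added and averaged (values here are placeholders).
L21 = np.full((2, 3), 10.0)
L22 = np.full((2, 3), 14.0)
final = (L21 + L22) / 2.0
```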
The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art to which the present invention belongs may make various modifications or supplements to the described embodiments, or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.

Claims (4)

1. a kind of more dictionary remote sensing images space-time fusion methods based on maximum a posteriori, it is characterised in that comprise the following steps:
Step 1, training image is classified, including following sub-step,
Step 1.1, to any two interval moment t1+2nAnd t1High-definition picture make difference, while to any two interval Moment t1+2nAnd t1Low-resolution image make difference, respectively obtain high-resolution difference image and low resolution difference image, its Middle n=1,2,3...;
Step 1.2, carry out simple rough sort respectively to high-resolution difference image and low resolution difference image, obtain multigroup Different classes of high-resolution and low resolution training sample image;
Step 2, multi-class dictionary, including following sub-step are learnt,
Step 2.1, it is different classes of according to what is obtained in step 1, piecemeal is carried out to every class training sample image, by each image block Stack in column, form training sample matrix;
Step 2.2, the high-resolution of identical category and low resolution training sample matrix are put together joint training;
Step 2.3, every a kind of training sample matrix is respectively trained out and belongs to such high-low resolution dictionary pair, obtain difference The dictionary of group;
Step 3, input needs the t rebuild1+nThe low-resolution image at moment, with t1+2nThe low-resolution image at moment obtains as difference To input picture, the selection of dictionary group pixel-by-pixel, including following sub-step are carried out to input picture,
Step 3.1, the low resolution training sample image of each classification to being obtained in step 1 is sampled, and obtains each class The likelihood probability of not lower any pixel;
Step 3.2, the prior probability of the pixel is obtained according to the classification situation put around input image pixels point;
Step 3.3, each pixel in input picture is traveled through, each pixel is carried out using the method for maximum a posteriori probability The selection of pixel dictionary group;
Step 4, to t1+nThe full resolution pricture at moment is rebuild, including following sub-step,
Step 4.1, input picture is subjected to piecemeal according to the selection result of dictionary group, the image block of same group is put together And stack and form input matrix in column, the final input matrix for producing quantity identical with dictionary group;
Step 4.2, for each input matrix, the low-resolution dictionary obtained using step 2.3, obtained pair using OMP methods The rarefaction representation coefficient answered;
Step 4.3, the high-resolution dictionary of each group is multiplied with the rarefaction representation coefficient of corresponding group, obtains each of reconstruction image Group image block;Then each group image block is superimposed, overlapping region uses average, the difference image rebuild;
Step 4.4, the difference image of reconstruction plus or minus known t1+2nThe high-resolution that the high-definition picture at moment is rebuild Rate image L21;
Step 5, by t1+nThe low-resolution image at moment, with t1The low-resolution image at moment makees difference as input picture, weight Two L21 are added averaging with L22, as last reconstruction knot by multiple step 3-4, the high-definition picture L22 rebuild Fruit.
2. a kind of more dictionary remote sensing images space-time fusion methods based on maximum a posteriori as claimed in claim 1, its feature exist In:Every a kind of training sample matrix is respectively trained out using K-SVD methods in the step 2.3 and belongs to such height resolution Rate dictionary pair, implementation is as follows, if training sample matrix is expressed as x={ x1,x2,…,xN, xiBelong to RN, rarefaction representation vacation If these signals can be by several atom linear expressions in excessively complete dictionary matrix, i.e.,:
X=D α (1)
where the over-complete dictionary is
D={ d1,d2,…,dN}∈RM×N(M < N) (2)
and the sparse coefficients are
α={ α12,…,αN}T∈RN (3)
where M and N are respectively the numbers of rows and columns of the dictionary matrix; each column of the dictionary matrix is called a dictionary atom d, there are N dictionary atoms in total, and each atom has dimension M; α are the sparse representation coefficients, most entries of which are 0, only a few being non-zero;
From the principles of sparse representation and sparse coding, the high- and low-resolution dictionary pair of each class is obtained by optimizing formula (4),
{D*, α*} = arg min_{D,α} { ||Z − Dα||₂² + λ||α||₁ }    (4)
where λ is a regularization parameter balancing the sparsity of the represented signal against the reconstruction error; ||α||₁ is the l1 norm, the sum of absolute values; ||Z − Dα||₂ denotes the l2 norm; Z = [Y; X], Y being the high-resolution training sample matrix and X the low-resolution training sample matrix, both normalized; D = [Dl; Dh], Dl and Dh being respectively the low-resolution and the high-resolution dictionary; D* and α* denote the dictionary and the sparse representation coefficients obtained by optimizing the right-hand side of the equation;
The high- and low-resolution dictionary pair Dl, Dh is obtained by optimizing the formula with the K-SVD method, the specific steps being as follows:
a. Input the training sample matrix Z, the number N of dictionary atoms and the number J of iterations;
b. Initialize the dictionary, for example by randomly selecting columns of the training sample matrix Z as the initial dictionary atoms;
c. Use the orthogonal matching pursuit algorithm OMP to obtain the sparse representation coefficients α from the initialized dictionary;
d. Update the k-th column of the dictionary:
① Denote by αT_k the k-th row of the sparse coefficient matrix, i.e. the row that multiplies the k-th column of the dictionary;
② Compute the overall representation error matrix with the k-th dictionary column removed, Ek = Z − Σ_{j≠k} dj·αT_j, where dj is the j-th column of the dictionary;
③ Retain in Ek only the columns corresponding to the non-zero positions of αT_k, forming the restricted error matrix ERk;
④ Perform a singular value decomposition of ERk, thereby updating the k-th dictionary column and the corresponding sparse coefficients;
e. Repeat step d until the number of iterations J is reached, obtaining the final high- and low-resolution dictionary pair.
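Steps ① to ④ of the dictionary update can be sketched as one K-SVD update pass. This is a generic illustration of the standard K-SVD update, not the patented code; the matrix sizes and the exact-sparse toy data are assumptions for the example.

```python
import numpy as np

def ksvd_update(Z, D, A):
    """One K-SVD dictionary-update pass (steps ①-④ above): for each atom,
    form the restricted error matrix over the samples that use the atom,
    then replace the atom and its coefficients with the rank-1 SVD term."""
    for k in range(D.shape[1]):
        omega = np.nonzero(A[k, :])[0]        # samples that use atom k
        if omega.size == 0:
            continue
        A[k, omega] = 0.0                     # remove atom k's contribution
        E = Z[:, omega] - D @ A[:, omega]     # restricted error matrix ERk
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                     # new atom: first left vector
        A[k, omega] = s[0] * Vt[0, :]         # matching sparse coefficients
    return D, A

# Toy check: with exactly sparse data Z = D A, one pass keeps the error 0.
rng = np.random.default_rng(1)
D = rng.normal(size=(6, 4))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
A = np.zeros((4, 5))
A[0, :3] = 2.0
A[2, 2:] = -1.0
Z = D @ A
D2, A2 = ksvd_update(Z, D.copy(), A.copy())
print(np.linalg.norm(Z - D2 @ A2))            # essentially 0
```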
3. a kind of more dictionary remote sensing images space-time fusion methods based on maximum a posteriori as claimed in claim 1 or 2, its feature It is:Each pixel carries out the selection of pixel dictionary group classification using the method for maximum a posteriori probability in the step 3.3, Implementation is as follows,
The pixel value of a certain pixel is x in known input picturemnUnder conditions of, m, n are coordinate, pass through Bayesian formula (5) calculate every kind of dictionary group and assume lower posterior probability, take wherein posterior probability it is maximum as the pixel finally Dictionary group, wherein Bayesian formula are:
P(ci | xmn) = P(xmn | ci) · P(ci) / P(xmn)    (5)
where xmn is the pixel value of the pixel and ci denotes the i-th dictionary class; P(xmn | ci) is the probability of the pixel value being xmn within the i-th dictionary class ci, called the likelihood probability; P(ci) is the prior probability of the i-th dictionary class; P(xmn) is the prior probability of the pixel value being xmn; P(ci | xmn) is the posterior probability, i.e. the probability that the point belongs to dictionary class ci given that its pixel value is xmn;
Since P(xmn) takes no part in computing the maximum a posteriori probability, formula (5) can be converted into the following formula,
xmn ∈ Ii,  i = arg max_{ci} P(xmn | ci) · P(ci),  (m = 1, 2, ..., M; n = 1, 2, ..., N)    (6)
where i denotes the dictionary class; for an input image I of size M × N, the pixel value xmn (m = 1, 2, ..., M; n = 1, 2, ..., N) of every pixel of the image is taken, its dictionary class i is obtained by formula (6), and pixels with the same i are grouped together into the image block Ii of the i-th dictionary class.
4. a kind of more dictionary remote sensing images space-time fusion methods based on maximum a posteriori as claimed in claim 1, its feature exist In:Simple rough sort described in step 1.2 is classified for binaryzation.
CN201711022705.3A 2017-10-27 2017-10-27 Multi-dictionary remote sensing image space-time fusion method based on maximum posterior Active CN107818555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711022705.3A CN107818555B (en) 2017-10-27 2017-10-27 Multi-dictionary remote sensing image space-time fusion method based on maximum posterior

Publications (2)

Publication Number Publication Date
CN107818555A true CN107818555A (en) 2018-03-20
CN107818555B CN107818555B (en) 2020-03-10

Family

ID=61604143


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921116A (en) * 2018-07-10 2018-11-30 武汉商学院 Remote sensing image varying information extracting method
CN108932710A (en) * 2018-07-10 2018-12-04 武汉商学院 Remote sensing Spatial-temporal Information Fusion method
CN109410165A (en) * 2018-11-14 2019-03-01 太原理工大学 A kind of multi-spectral remote sensing image fusion method based on classification learning
CN111242848A (en) * 2020-01-14 2020-06-05 武汉大学 Binocular camera image suture line splicing method and system based on regional feature registration
CN112183595A (en) * 2020-09-18 2021-01-05 中国科学院空天信息创新研究院 Space-time remote sensing image fusion method based on packet compressed sensing
CN113160100A (en) * 2021-04-02 2021-07-23 深圳市规划国土房产信息中心(深圳市空间地理信息中心) Fusion method, fusion device and medium based on spectral information image
CN113449683A (en) * 2021-07-15 2021-09-28 江南大学 High-frequency ultrasonic sparse denoising method and system based on K-SVD training local dictionary
CN113723228A (en) * 2021-08-16 2021-11-30 北京大学 Method and device for determining earth surface type ratio, electronic equipment and storage medium
CN116958717A (en) * 2023-09-20 2023-10-27 山东省地质测绘院 Intelligent geological big data cleaning method based on machine learning
CN113723228B (en) * 2021-08-16 2024-04-26 北京大学 Method and device for determining earth surface type duty ratio, electronic equipment and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102360498A (en) * 2011-10-27 2012-02-22 江苏省邮电规划设计院有限责任公司 Reconstruction method for image super-resolution
CN104103052A (en) * 2013-04-11 2014-10-15 北京大学 Sparse representation-based image super-resolution reconstruction method
CN104778671A (en) * 2015-04-21 2015-07-15 重庆大学 Image super-resolution method based on SAE and sparse representation
CN105374060A (en) * 2015-10-15 2016-03-02 浙江大学 PET image reconstruction method based on structural dictionary constraint
CN106227015A (en) * 2016-07-11 2016-12-14 中国科学院深圳先进技术研究院 A kind of hologram image super-resolution reconstruction method and system based on compressive sensing theory
CN106919952A (en) * 2017-02-23 2017-07-04 西北工业大学 EO-1 hyperion Anomaly target detection method based on structure rarefaction representation and internal cluster filter


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Xiangcan, "Research on Super-Resolution Algorithms for Natural Images Based on Group Sparse Representation", China Master's Theses Full-text Database, Information Science and Technology series *




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant