CN106920214A - Super-resolution reconstruction method for space target images - Google Patents

Super-resolution reconstruction method for space target images

Info

Publication number
CN106920214A
Authority
CN
China
Prior art keywords
image
resolution
low
sample
subspace
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710123081.8A
Other languages
Chinese (zh)
Other versions
CN106920214B (en)
Inventor
姜志国
张浩鹏
张鑫
谢凤英
罗晓燕
尹继豪
史振威
赵丹培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Publication of CN106920214A publication Critical patent/CN106920214A/en
Application granted granted Critical
Publication of CN106920214B publication Critical patent/CN106920214B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution

Abstract

The present invention relates to a super-resolution reconstruction method for space target images, belonging to the field of digital image processing. The invention trains an independent dictionary for each subspace, which improves the dictionary's ability to represent the local patterns of the samples. In addition, a low-rank matrix recovery method is introduced into the construction of the subspace dictionaries for super-resolution reconstruction, which improves the accuracy with which the subspace dictionaries represent the regular local sample patterns of space target images and thereby improves the reconstruction quality of space target image super-resolution reconstruction. Dictionaries trained with the method of the invention represent low-resolution samples more accurately, while the reconstructed high-resolution samples are closer to the local sample patterns of real high-resolution space target observation images.

Description

Super-resolution reconstruction method for space target images
Technical field
The present invention relates to the field of digital image processing, and in particular to a super-resolution reconstruction method for space target images.
Background art
With the continuing development of space object observation technology, more and more detailed information needs to be obtained from space target images; such detail helps the interpretation and identification of space objects in the images. However, the imaging conditions for space targets are complex: during imaging, factors such as sampling, blur and noise degrade image quality and cause severe loss of image detail, which seriously affects the extraction of detail information and the interpretation and identification of the images. Image super-resolution reconstruction is a signal processing technique that recovers the detail information of a high-resolution image from a low-resolution image. The technique shows great application prospects in frontier fields of image processing such as remote sensing, medicine and security surveillance. Super-resolution reconstruction methods for space target images therefore have important research and application value.
Current research on super-resolution reconstruction, both domestic and abroad, falls broadly into two classes: reconstruction models based on image sequences and learning-based single-frame reconstruction models. Sequence-based models mainly include frequency-domain and spatial-domain super-resolution reconstruction from image sequences. Learning-based single-frame models mainly include single-frame super-resolution based on neighbourhood embedding and single-frame super-resolution based on sparse representation. Among existing algorithms, single-frame super-resolution based on sparse representation achieves good reconstruction quality and has been widely studied. The algorithm realises single-frame super-resolution by constructing a pair of high-resolution and low-resolution dictionaries; its flow is shown in Fig. 1.
The sparse-representation dictionary training is based on the extracted high-resolution and low-resolution image block features: training yields the high-resolution and low-resolution dictionaries that best represent the training image samples. The dictionary training process is shown in Fig. 2.
The classical sparse-representation super-resolution model builds a single, globally unified low-resolution dictionary to represent all of the complex and varied low-resolution sample patterns, so the dictionary's ability to represent low-resolution samples is limited; likewise, it reconstructs high-resolution image sample patterns with a globally unified high-resolution dictionary, so its reconstruction of high-resolution samples is poor. Moreover, space object observation images mainly take spacecraft as the observed object. Spacecraft are typical man-made objects, and the edges and textures of man-made objects appear in the image as regular local patterns. Traditional sparse-representation super-resolution algorithms, however, only consider minimising the pixel-level reconstruction error during dictionary construction and do not consider the influence of local patterns on reconstructed image quality, so reconstructions of space target images suffer from distortion of local edges and textures.
Summary of the invention
(1) Technical problem to be solved
The technical problem to be solved by the present invention is how to improve the reconstruction quality of space target image super-resolution reconstruction.
(2) Technical solution
In order to solve the above technical problem, the present invention provides a super-resolution reconstruction method for space target images, comprising the following steps:
S1, construction of the training samples and the subspace low-rank dictionaries:
first, high-resolution images and the corresponding low-resolution images are prepared as the training image set, and image block features are randomly selected from the training image set to generate the training sample set; after the training sample set is built, it is partitioned into subspaces according to the Euclidean distance between training sample features; dictionary construction is then carried out independently in each subspace, and a high-resolution and a low-resolution low-rank dictionary are built for every subspace by the method of low-rank matrix recovery;
step S2, super-resolution reconstruction based on the subspace low-rank dictionaries:
wherein, when a low-resolution image that needs super-resolution reconstruction is obtained, image reconstruction based on the subspace low-rank dictionaries is performed with low-resolution image blocks as the sample primitives, and super-resolution reconstruction is carried out block by block over the low-resolution image.
Preferably, step S1 specifically comprises the following sub-steps:
S1.1, construction of the training sample set:
high-resolution sample images are input, and each high-resolution sample image is down-sampled and then enlarged back to the high-resolution image size by bicubic interpolation to obtain the low-resolution training image; bicubic interpolation computes the interpolation result from the luminance of the 16 pixels in the neighbourhood of a point together with gradient information reflecting how strongly neighbouring pixels change;
the training image set consists of a high-resolution training image set and a low-resolution training image set; the training sample set is built by randomly extracting image block features from the training image set, and each training sample set in turn contains a gradient feature set and a brightness feature set;
S1.2, subspace partition of all training samples, wherein the partition is performed using coupled gradient features;
S1.3, construction of the subspace low-rank dictionaries:
wherein low-rank matrix recovery is applied to the subspace brightness feature sets using Robust Principal Component Analysis.
Preferably, in step S1.1, the gradient feature set is built from image gradients; edge and texture information in the image is extracted with first-order and second-order filters; the training image is filtered with the first-order and second-order filters to obtain 4 feature images of the same training image, each describing first-order or second-order texture features of the training image in the horizontal or vertical direction; for any training sample, 4 groups of feature vectors are obtained from the 4 feature images, and the gradient feature vector of the corresponding image sample is obtained by concatenating these feature vectors in sequence.
Preferably, in step S1.1, the brightness feature set is built from the pixel information of the image samples, using the image sample vectors after mean removal.
Preferably, step S2 specifically comprises the following sub-steps:
S2.1, expression of the low-resolution image samples:
the features of the low-resolution image block samples are collected from the interpolated image, and the low-resolution samples are represented and reconstructed within the subspace of the image sample; the subspace of a sample is obtained by nearest-neighbour matching between the sample gradient feature and the low-resolution anchor point set; after the subspace partition of the image sample is complete, the brightness feature of the sample is used to represent it within the subspace, and the image sample brightness feature is expressed, through a sparse representation coefficient, as a linear combination of the atoms of the subspace low-rank dictionary;
S2.2, reconstruction of the high-resolution image samples:
based on the assumption that within a subspace a low-resolution image block and the corresponding high-resolution image block share the same sparse representation coefficient, the brightness feature of the high-resolution sample is reconstructed; by assigning the reconstruction result of every image block sample of the low-resolution image to the corresponding pixels of the high-resolution image, the reconstruction of the high-resolution image is completed;
S2.3, post-processing of the reconstruction:
wherein a global constraint is added to the reconstruction process by means of an iterative back-projection post-processing method.
Preferably, in step S2.1, the low-resolution image is up-sampled by bicubic interpolation to the same pixel size as the reconstructed image.
(3) Beneficial effects
The present invention trains an independent dictionary for each subspace, which improves the dictionary's ability to represent the local patterns of the samples. In addition, a low-rank matrix recovery method is introduced into the construction of the subspace dictionaries for super-resolution reconstruction, which improves the accuracy with which the subspace dictionaries represent the regular local sample patterns of space target images and thereby improves the reconstruction quality of space target image super-resolution reconstruction. Dictionaries trained with the method of the invention represent low-resolution samples more accurately, while the reconstructed high-resolution samples are closer to the local sample patterns of real high-resolution space target observation images.
Brief description of the drawings
Fig. 1 is the flow chart of the sparse-representation super-resolution reconstruction algorithm;
Fig. 2 is the flow chart of sparse-representation dictionary construction;
Fig. 3 is the flow chart of the method of the embodiment of the present invention;
Fig. 4 is the flow chart of subspace low-rank dictionary construction in the method of the embodiment of the present invention;
Fig. 5 compares the results of the method of the present invention with existing methods.
Specific embodiment
To make the purpose, content and advantages of the present invention clearer, specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples.
As shown in Fig. 3, the space target image super-resolution reconstruction method of the embodiment of the present invention comprises the following steps:
S1, extraction of the training samples and construction of the subspace low-rank dictionaries
Because dictionary construction depends on the training samples, high-resolution images and the corresponding low-resolution images must first be prepared as the training image set, and image block features are randomly selected from the training image set to generate the training sample set. After the training sample set is built, the algorithm partitions it into subspaces according to the Euclidean distance between training sample features. Dictionary construction is carried out independently in each subspace, and the high-resolution and low-resolution low-rank dictionaries of every subspace are built by the method of low-rank matrix recovery.
As shown in Fig. 4, step S1 specifically comprises the following sub-steps:
S1.1, construction of the training sample set. High-resolution sample images are input, and each high-resolution sample image is down-sampled and then enlarged back to the high-resolution image size by bicubic interpolation to obtain the low-resolution training image.
Down-sampling means the following: for an image I of size M×N, s-fold down-sampling produces a low-resolution image of size (M/s)×(N/s), where s is a common divisor of M and N. Viewing the image as a matrix, each s×s window of the original image is reduced to a single pixel whose value is the mean of all pixels in the window. Bicubic interpolation computes the interpolation result from the luminance of the 16 pixels in the neighbourhood of a point together with gradient information that reflects how strongly neighbouring pixels change. For a two-dimensional image, the interpolation result at the point (x+u, y+v) in the neighbourhood of pixel (x, y) can be expressed as
f(x+u, y+v) = A · B · C,
where the three matrices A, B and C have the following forms:
A = [h(1+u), h(u), h(1−u), h(2−u)],
B is the 4×4 matrix of pixel values f(x+i, y+j), i, j = −1, 0, 1, 2, of the 16-pixel neighbourhood, and C = [h(1+v), h(v), h(1−v), h(2−v)]^T. Here f(x, y) is the pixel value of the image at the sampling point (x, y). The interpolation kernel function h(x) is defined piecewise by distance, with the convolution kernel specified separately for neighbourhood distances |x| in [0, 1), [1, 2) and [2, +∞).
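For illustration, a minimal Python (numpy) sketch of this degradation model follows: s-fold down-sampling by window averaging and evaluation of the bicubic interpolation A·B·C at a sub-pixel location. The piecewise constants of h(x) are not reproduced in the text above, so the standard cubic convolution kernel (a = −1) is assumed here, and the function names are illustrative.

```python
import numpy as np

def downsample(img, s):
    """s-fold down-sampling: each s*s window of the M*N image becomes one
    pixel whose value is the window mean (s must divide M and N)."""
    M, N = img.shape
    return img.reshape(M // s, s, N // s, s).mean(axis=(1, 3))

def cubic_kernel(x, a=-1.0):
    """Cubic convolution kernel h(x), defined piecewise on |x| in [0,1), [1,2), [2,inf);
    the constants below are the standard choice for a = -1 (an assumption)."""
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    m1 = x < 1
    m2 = (x >= 1) & (x < 2)
    out[m1] = (a + 2) * x[m1] ** 3 - (a + 3) * x[m1] ** 2 + 1
    out[m2] = a * x[m2] ** 3 - 5 * a * x[m2] ** 2 + 8 * a * x[m2] - 4 * a
    return out

def bicubic_at(f, x, y, u, v):
    """Interpolate f at the sub-pixel location (x+u, y+v) as A @ B @ C,
    where A and C hold kernel weights and B the 4x4 pixel neighbourhood
    (assumes (x, y) is far enough from the image border)."""
    A = cubic_kernel([1 + u, u, 1 - u, 2 - u])
    C = cubic_kernel([1 + v, v, 1 - v, 2 - v])
    B = f[x - 1:x + 3, y - 1:y + 3]
    return A @ B @ C
```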
The training image set consists of the high-resolution training image set Y = {Y_F, Y_I} and the low-resolution training image set X = {X_F, X_I}; the training sample set is built by randomly extracting image block features from the training image set, and each training sample set in turn contains a gradient feature set and a brightness feature set. Specifically, Y_F = {y_F^i}, i = 1, …, N, is the high-resolution gradient feature set, where y_F^i denotes the gradient feature vector of the i-th high-resolution image sample and N is the number of high-resolution image samples; Y_I = {y_I^i}, i = 1, …, N, is the high-resolution brightness feature set, where y_I^i denotes the brightness feature vector of the i-th high-resolution image sample. The gradient feature set X_F = {x_F^i} and brightness feature set X_I = {x_I^i} of the low-resolution sample set are defined analogously to the high-resolution sets, with their information taken from the low-resolution training images: x_F^i denotes the gradient feature vector and x_I^i the brightness feature vector of the i-th low-resolution image sample. Each gradient feature is represented by a column vector of dimension d×1, and each brightness feature by a column vector of dimension m×1. If the total number of training samples is N, then the image gradient feature set satisfies X_F ∈ R^{d×N} and the image brightness feature set satisfies X_I ∈ R^{m×N}.
In order to better capture the latent structure shared by high-resolution and low-resolution samples, the gradient feature set is built from image gradients. Edge and texture information in the image is extracted with the first-order and second-order filters in formula (1).
Filtering a training image with the filters of formula (1) yields 4 feature maps of the same training image, which describe the first-order and second-order texture features of the training image in the horizontal and vertical directions. For any training sample, 4 groups of feature vectors can be obtained from these 4 feature images, and concatenating them in sequence gives the gradient feature vector of the corresponding image sample.
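The concrete filters of formula (1) are not reproduced above; the sketch below assumes the filters [−1, 0, 1] and [1, 0, −2, 0, 1] (and their transposes), which are commonly used in sparse-representation super-resolution, to produce the 4 feature maps and concatenate the per-patch gradient feature vectors. All names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

def gradient_features(img, patch_size, positions):
    """Filter the (interpolated) training image with first- and second-order
    filters in the horizontal and vertical directions, producing 4 feature
    maps; each sample's gradient feature is the concatenation of the 4
    patches extracted at the same location."""
    f1 = np.array([[-1, 0, 1]], dtype=float)          # 1st order, horizontal
    f2 = f1.T                                          # 1st order, vertical
    f3 = np.array([[1, 0, -2, 0, 1]], dtype=float)     # 2nd order, horizontal
    f4 = f3.T                                          # 2nd order, vertical
    maps = [correlate(img.astype(float), f, mode='nearest') for f in (f1, f2, f3, f4)]
    feats = []
    for (r, c) in positions:                           # top-left corners of the patches
        vecs = [m[r:r + patch_size, c:c + patch_size].ravel() for m in maps]
        feats.append(np.concatenate(vecs))             # d x 1 gradient feature
    return np.stack(feats, axis=1)                     # d x N gradient feature matrix
```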
To ensure that the reconstruction process recovers accurate detail information, the brightness feature set is built directly from the pixel information of the image samples. Since detail such as texture and edges is independent of the absolute brightness of the image, and in order to better adapt to the brightness of the image being reconstructed, the brightness feature set is built from the image sample vectors after mean removal.
S1.2, subspace partition of all training samples.
In natural scenes the local detail of images exhibits complex and varied patterns, so attempting to build one global dictionary that accurately describes the whole sample space is often impractical. Partitioning the training samples into subspaces and building dedicated subspace dictionaries for the structure particular to each subspace effectively reduces dictionary complexity while improving the precision with which the dictionaries describe the patterns inside each subspace. Compared with brightness features, the gradient features of an image provide a more stable pattern representation under interference such as aliasing and noise. Considering also that high-resolution and low-resolution images share closely related texture structures, performing the subspace partition with coupled gradient features ensures higher partition accuracy and robustness.
The subspace partition is carried out on the coupled feature set D_F = {d_F^i}, i = 1, …, N, formed from the high-resolution feature set Y_F and the low-resolution feature set X_F. The i-th element d_F^i of the coupled feature set is formed by concatenating, in order, the mutually corresponding high-resolution and low-resolution gradient feature vectors y_F^i and x_F^i of the i-th sample. Every element of the coupled feature set is therefore represented by a feature vector of dimension 2d×1, and the coupled feature set satisfies D_F ∈ R^{2d×N}. Based on the reasonable assumption that image samples in the same subspace have close coupled features, taking the coupled feature set D_F as training samples and using vector quantization (Vector Quantization), the subspace partition of the image samples can be realised. Vector quantization is an unsupervised clustering algorithm: it divides the image samples into subspaces by minimising, in the cost function (2), the Euclidean distance between each coupled feature sample d_F^i and its subspace anchor point (Anchor Point):
min_{A_F, C} Σ_{i=1}^{N} || d_F^i − A_F c_i ||_2^2,  s.t. ||c_i||_0 ≤ 1, ||a_j^F||_2 = 1, j = 1, …, K,   (2)
where A_F = {a_j^F}, j = 1, …, K, is the set of subspace anchor points, K is the number of subspaces, and c_i is the class assignment of the i-th training sample. The constraint ||c_i||_0 ≤ 1 requires the coefficient of any training sample to have at most one non-zero element, which guarantees that each training sample belongs to only one subspace. The constraint ||a_j^F||_2 = 1 requires the subspace anchor points to be normalised, so that the subspace partition depends only on texture patterns and is not influenced by the amplitude of the feature samples. By minimising the cost function (2), the subspace anchor points A_F and the class label c_i of each training sample can be solved.
The objective function (2) is solved by an iterative method whose iterations consist of two steps: subspace partition and anchor-point update. In the subspace partition step, the class c_i of each coupled feature d_F^i, i ∈ 1, …, N, is computed by formula (3):
c_i = arg min_{j ∈ {1, …, K}} || d_F^i − a_j^F ||_2.   (3)
After the subspace partition of all training samples is complete, each subspace anchor point a_j^F, j ∈ 1, …, K, is updated by formula (4) as the re-normalised mean of the coupled features currently assigned to subspace j:
a_j^F = μ_j / ||μ_j||_2,  with μ_j = (1 / |{i : c_i = j}|) Σ_{i: c_i = j} d_F^i.   (4)
If any a_j^F changes during the anchor-point update, the iteration continues until no a_j^F changes within a full round of updates. To simplify the subspace assignment during reconstruction, after A_F is obtained the subspace anchor points must be decoupled: A_F is decomposed into the low-resolution anchor point set A_F^L = {a_j^L} and the high-resolution anchor point set A_F^H = {a_j^H}, j = 1, …, K.
Sample subspace partition algorithm:
Input: the coupled gradient feature set D_F. Randomly select K training sample vectors as the initial estimate of the subspace anchor point set A_F. While any anchor point in A_F changes: compute the subspace to which each sample d_F^i belongs by formula (3), and update the subspace anchor points A_F by formula (4). Output: the subspace anchor points and the subspace class of every sample. A sketch of this procedure is given below.
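A minimal numpy sketch of the vector-quantisation partition described by formulas (2)-(4), assuming the anchor update is the re-normalised mean of the samples assigned to each subspace; function and parameter names are illustrative.

```python
import numpy as np

def partition_subspaces(D_F, K, n_iter=50, seed=0):
    """Vector-quantisation partition of the coupled gradient features.
    D_F: (2d, N) matrix whose i-th column is [y_F^i; x_F^i].
    Returns the normalised anchor set A_F (2d, K) and labels c (N,)."""
    rng = np.random.default_rng(seed)
    N = D_F.shape[1]
    A_F = D_F[:, rng.choice(N, K, replace=False)].copy()
    A_F /= np.linalg.norm(A_F, axis=0, keepdims=True) + 1e-12
    c = np.full(N, -1)
    for _ in range(n_iter):
        # assignment step, formula (3): nearest anchor in Euclidean distance
        d2 = (D_F ** 2).sum(0)[:, None] - 2 * D_F.T @ A_F + (A_F ** 2).sum(0)[None, :]
        c_new = d2.argmin(axis=1)
        if np.array_equal(c_new, c):          # assignments (and anchors) stable
            break
        c = c_new
        # update step, formula (4): re-estimate and re-normalise each anchor
        for j in range(K):
            members = D_F[:, c == j]
            if members.size:
                a = members.mean(axis=1)
                A_F[:, j] = a / (np.linalg.norm(a) + 1e-12)
    return A_F, c
```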
According to the subspace class C = {c_i}, i ∈ {1, …, N}, of each sample, the subspace brightness feature set S used for subspace dictionary construction can be built. S_j = {S_j^L, S_j^H} denotes the brightness feature set of the j-th subspace, where S_j^L is the brightness feature set of the low-resolution samples in the j-th subspace and S_j^H is the brightness feature set of the high-resolution samples in the j-th subspace.
S1.3, construction of the subspace low-rank dictionaries
Image samples within the same subspace have close texture patterns, so the data matrix formed by the samples of one subspace should have a common low-dimensional structure. Interference from degradation factors in the imaging process, such as down-sampling, noise and blur, randomly changes the amplitude of the data elements and destroys this intrinsic low-dimensional structure of the subspace sample matrix. By recovering the latent low-dimensional structure of the data during the construction of the super-resolution dictionaries, the change that degradation imposes on the image patterns can be effectively reduced and the representation precision of the dictionaries improved, so that sharper and more faithful high-resolution detail is recovered during reconstruction.
The construction of the subspace low-rank dictionaries is carried out independently in every subspace. In order to better recover the latent low-dimensional structure of the subspace samples during dictionary construction, the present invention applies Robust Principal Component Analysis (RPCA) to perform low-rank matrix recovery on the subspace brightness feature sets. The low-rank matrix recovery of the j-th subspace brightness feature set S_j is realised by minimising the cost function in formula (5):
min_{Z_j, E_j} ||Z_j||_* + λ ||E_j||_1,  s.t. S_j = Z_j + E_j,   (5)
where the nuclear norm ||·||_* in the objective is a convex approximation of matrix rank, and the 1-norm ||·||_1 approximates the sparsity of a matrix. The objective can be solved rapidly by the augmented Lagrange multiplier method. The solution Z_j is the low-rank component of the subspace brightness feature set, corresponding to the latent low-rank structure shared by the high-resolution and low-resolution samples of the subspace, and E_j is the sparse component of the subspace brightness feature set, corresponding to the sparse errors that destroy the subspace low-rank structure.
Because low-rank matrix recovery only changes individual elements of the matrix and does not change the meaning of the column vectors that make up the matrix, Z_j and E_j can be decomposed into the following form:
Z_j = [ S_j^{L,lr} ; S_j^{H,lr} ],  E_j = [ S_j^{L,sp} ; S_j^{H,sp} ],   (6)
where (·)^{lr} denotes the low-rank representation of the corresponding subspace samples and (·)^{sp} denotes their sparse error.
The purpose of introducing low-rank matrix recovery into the subspace dictionary construction is to recover the subspace low-rank patterns and to reduce the influence of errors on dictionary construction. The subspace low-rank dictionaries are therefore built using only the information of the recovered low-rank structure Z_j. Specifically, let D_L^j and D_H^j denote the low-resolution and high-resolution low-rank dictionaries of the j-th subspace; then the subspace low-resolution low-rank dictionary and the subspace high-resolution low-rank dictionary are
D_L^j = S_j^{L,lr},  D_H^j = S_j^{H,lr}.   (7)
Building the respective subspace low-rank dictionaries D_L^j and D_H^j for every subspace by the above method completes the construction of the low-resolution dictionary D_L and the high-resolution dictionary D_H.
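A compact sketch of the low-rank matrix recovery in formula (5) by the inexact augmented Lagrange multiplier method; the parameter choices (λ = 1/√max(m, n), the μ schedule) follow common RPCA practice and are assumptions rather than values stated above.

```python
import numpy as np

def soft_threshold(X, tau):
    """Element-wise shrinkage operator used for both the singular values and the sparse term."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(S, lam=None, tol=1e-7, max_iter=500):
    """Robust PCA by inexact ALM: decompose S into a low-rank part Z
    (penalised by the nuclear norm) and a sparse error E (penalised by
    lam * ||E||_1) with S = Z + E, as in formula (5)."""
    m, n = S.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_S = np.linalg.norm(S, 'fro')
    Y = S / max(np.linalg.norm(S, 2), np.abs(S).max() / lam)   # dual variable
    mu, rho = 1.25 / np.linalg.norm(S, 2), 1.5
    Z = np.zeros_like(S)
    E = np.zeros_like(S)
    for _ in range(max_iter):
        # singular-value thresholding step for the low-rank component Z
        U, sig, Vt = np.linalg.svd(S - E + Y / mu, full_matrices=False)
        Z = U @ np.diag(soft_threshold(sig, 1.0 / mu)) @ Vt
        # shrinkage step for the sparse error component E
        E = soft_threshold(S - Z + Y / mu, lam / mu)
        R = S - Z - E
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R, 'fro') / norm_S < tol:
            break
    return Z, E   # Z: shared low-rank structure, E: sparse error
```

The low-resolution and high-resolution rows of the recovered Z_j then give D_L^j and D_H^j as in formula (7).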
Step S2, super-resolution reconstruction based on the subspace dictionaries
When a low-resolution image that requires super-resolution reconstruction is obtained, image reconstruction based on the subspace low-rank dictionaries takes low-resolution image blocks as the sample primitives and performs super-resolution reconstruction block by block over the low-resolution image. By assigning the generated high-resolution image samples to the corresponding regions of the high-resolution image, the reconstruction of the whole high-resolution image is realised. For RGB three-channel colour low-resolution images, the algorithm converts the image to the YCbCr space, performs super-resolution reconstruction on the Y channel, to which human vision is more sensitive, and applies bicubic interpolation to the Cb and Cr channels. After reconstruction, the Y, Cb and Cr channel images are inverse-transformed to recover the high-resolution RGB three-channel image.
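A short sketch of this colour handling: convert to YCbCr, super-resolve only the Y channel, bicubic-interpolate Cb and Cr, and transform back. The JPEG-convention transform constants and the callback names sr_on_y / bicubic_upscale are assumptions for illustration.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """JPEG-convention RGB -> YCbCr for images with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform back to an RGB image."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def super_resolve_rgb(lr_rgb, sr_on_y, bicubic_upscale):
    """Reconstruct only the Y channel with the dictionary-based method
    (sr_on_y); Cb and Cr are simply bicubic-interpolated (bicubic_upscale)."""
    y, cb, cr = rgb_to_ycbcr(lr_rgb.astype(float))
    return ycbcr_to_rgb(sr_on_y(y), bicubic_upscale(cb), bicubic_upscale(cr))
```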
The reconstruction process first requires feature extraction from the low-resolution image samples, and the samples to be reconstructed are partitioned into subspaces according to their image features. The reconstruction of a high-resolution image sample is carried out within the subspace to which the sample belongs: sparse representation of the low-resolution sample with the low-resolution dictionary yields the representation coefficients of the image sample in its subspace, and the linear combination of the atoms of the subspace high-resolution dictionary according to these sparse representation coefficients gives the reconstruction result of the high-resolution sample. Because reconstruction with image blocks as primitives lacks a global prior constraint, the reconstructed image must also be post-processed so that it satisfies the global imaging model constraint.
Step S2 specifically comprises the following sub-steps:
S2.1, expression of the low-resolution image samples
To keep the features of the low-resolution image samples consistent with the training sample features, the features of the low-resolution image block samples are collected from the interpolated image; the low-resolution image is therefore first up-sampled by bicubic interpolation to the same pixel size as the reconstructed image. Feature extraction from a low-resolution image block sample x must likewise be consistent with the training samples, i.e. the gradient feature x_f and the brightness feature x_i of the sample are extracted after mean removal and 2-norm normalisation.
The expression and reconstruction of a low-resolution sample are carried out within the subspace of the image sample, so the image sample x must first be assigned to a subspace. The subspace of the sample is obtained by nearest-neighbour matching between the sample gradient feature x_f and the low-resolution anchor point set A_F^L = {a_j^L}, as in formula (8):
c = arg min_{j ∈ {1, …, K}} || x_f − a_j^L ||_2,   (8)
where ||·||_2 and ||·||_0 denote the 2-norm and the 0-norm, respectively. After the subspace partition of the image sample x is complete, the brightness feature x_i of the sample is used to represent it within the subspace. In subspace c, the image sample brightness feature x_i can be expressed, through a sparse representation coefficient α, as a linear combination of the atoms of the subspace low-rank dictionary, as shown in formula (9):
x_i ≈ D_L^c α.   (9)
Because the dictionary D_L^c is only used to represent the similar local patterns within subspace c, D_L^c can be regarded as an over-complete dictionary of subspace c. Based on the sparse representation principle, the sparse representation coefficient α of the brightness feature x_i with respect to the over-complete dictionary D_L^c can be solved by minimising the cost function (10):
α = arg min_α || x_i − D_L^c α ||_2^2 + λ ||α||_1.   (10)
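For illustration, a sketch of the subspace assignment of formula (8) and a simple ISTA solver for the sparse coding problem of formula (10). The solver choice, the value of λ and the iteration count are assumptions, since the text above does not specify how (10) is minimised.

```python
import numpy as np

def assign_subspace(x_f, A_L):
    """Formula (8): the nearest low-resolution anchor decides the subspace.
    A_L has shape (d, K); x_f has shape (d,)."""
    return int(np.argmin(np.linalg.norm(A_L - x_f[:, None], axis=0)))

def sparse_code(x_i, D_L, lam=0.1, n_iter=200):
    """ISTA for the lasso of formula (10) (written as 0.5*||x_i - D_L a||^2 + lam*||a||_1,
    which has the same minimiser family up to a rescaling of lam): the sparse
    representation of the low-resolution brightness sample over the subspace
    low-rank dictionary."""
    alpha = np.zeros(D_L.shape[1])
    step = 1.0 / (np.linalg.norm(D_L, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D_L.T @ (D_L @ alpha - x_i)
        z = alpha - step * grad
        alpha = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return alpha
```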
S2.2, reconstruction of the high-resolution image samples
Because the low-rank matrix recovery in the dictionary construction process only corrects individual elements of the samples and does not change their essential structure, the correspondence between high-resolution image blocks and low-resolution image blocks is not altered. Based on the assumption that within a subspace the low-resolution image block x_i and the corresponding high-resolution image block y_i share the same sparse representation coefficient, once the representation coefficient α of sample x_i with respect to the subspace low-resolution low-rank dictionary D_L^c has been obtained, the brightness feature y_i of the high-resolution sample can be reconstructed directly through formula (11):
y_i = D_H^c α,   (11)
where D_H^c denotes the high-resolution dictionary of subspace c. The reconstructed high-resolution brightness feature y_i contains only the relative amplitude changes of the high-resolution image block; before the high-resolution image is reconstructed, the brightness and variance information of the image block must also be restored, as defined in formula (12):
y = ||x|| · y_i + m,   (12)
where ||·|| denotes the vector norm: ||x|| characterises the strength of the luminance variation of the image sample, and m denotes the mean of the low-resolution sample x. By assigning the reconstruction result y of every image block sample of the low-resolution image to the corresponding pixels of the high-resolution image, the reconstruction of the high-resolution image H_0 is completed.
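A sketch of the high-resolution patch reconstruction of formulas (11)-(12) and of the block-by-block assignment to H_0. Averaging overlapping pixels and taking ||x|| over the mean-removed sample are assumptions made for illustration.

```python
import numpy as np

def reconstruct_patch(alpha, D_H, x):
    """Formulas (11)-(12): y_i = D_H @ alpha carries only the relative
    amplitude changes; restore the scale with the norm of the mean-removed
    low-resolution sample x and add back its mean m."""
    y_i = D_H @ alpha
    m = x.mean()
    return np.linalg.norm(x - m) * y_i + m

def assemble(patches, positions, shape, patch_size):
    """Block-by-block assignment of the reconstructed patches to H_0;
    overlapping pixels are averaged (an assumed handling of overlap sampling)."""
    H0 = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (r, c) in zip(patches, positions):
        H0[r:r + patch_size, c:c + patch_size] += p.reshape(patch_size, patch_size)
        cnt[r:r + patch_size, c:c + patch_size] += 1
    return H0 / np.maximum(cnt, 1)
```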
S2.3, post-processing of the reconstruction
The image super-resolution reconstruction process described above reconstructs with image blocks as primitives. During reconstruction the algorithm only constrains the consistency of the current image block across the two resolutions, while the reconstruction of different image blocks is carried out independently, so the texture consistency of adjacent image blocks cannot be guaranteed. To solve this problem, existing block-based reconstruction algorithms usually enhance the continuity between adjacent image blocks by overlap sampling of the image blocks. Although overlap sampling makes the reconstructed texture between local image blocks more continuous, the image as a whole still fails to satisfy the global consistency constraint, and a large amount of unnatural artificial noise appears in the reconstructed image.
To address the above problem, the present algorithm adds a global constraint to the reconstruction by applying an iterative back-projection post-processing step after reconstruction:
H* = arg min_H || L − D B H ||_2^2,   (13)
where L is the observed low-resolution image, and the matrices D and B describe the influence of down-sampling and blur in the imaging process, respectively. Formula (13) is initialised with the reconstructed image H_0 as the optimisation variable H and is iteratively optimised by gradient descent, yielding the high-resolution image H* that satisfies the global constraint of the image imaging model.
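A sketch of the gradient-descent refinement corresponding to formula (13), with the blur B modelled as a Gaussian filter and the down-sampling D as s-fold decimation; both operator models, the step size and the iteration count are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def back_project(H0, L, s, sigma=1.0, step=0.1, n_iter=20):
    """Refine the patch-wise reconstruction H0 by iteratively reducing
    ||L - D B H||_2^2 (formula (13)); assumes H0.shape == (s*L.shape[0], s*L.shape[1])."""
    H = H0.copy()
    for _ in range(n_iter):
        sim = gaussian_filter(H, sigma)[::s, ::s]   # simulated observation D B H
        err = L - sim                               # low-resolution residual
        # back-project the residual: upsample by s and blur, approximating (D B)^T
        up = zoom(err, s, order=1)
        H = H + step * gaussian_filter(up, sigma)
    return H
```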
Fig. 5 shows experimental results of the present invention; it can be seen that the images reconstructed by the method of the invention are better than those of the comparable algorithms.
It can be seen that, by building multiple subspace low-rank dictionaries, the algorithm of the invention improves the accuracy with which the dictionaries represent local texture patterns while maintaining computational efficiency. The algorithm further introduces the method of low-rank matrix recovery into the dictionary construction process, which markedly improves the precision with which the subspace dictionaries represent the common local structures of space targets, and thereby improves the reconstruction quality of space target image super-resolution reconstruction.
The above is only the preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, several improvements and modifications may be made without departing from the technical principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (6)

1. A super-resolution reconstruction method for space target images, characterised in that it comprises the following steps:
S1, construction of the training samples and the subspace low-rank dictionaries:
first, high-resolution images and the corresponding low-resolution images are prepared as the training image set, and image block features are randomly selected from the training image set to generate the training sample set; after the training sample set is built, it is partitioned into subspaces according to the Euclidean distance between training sample features; dictionary construction is then carried out independently in each subspace, and a high-resolution and a low-resolution low-rank dictionary are built for every subspace by the method of low-rank matrix recovery;
step S2, super-resolution reconstruction based on the subspace low-rank dictionaries:
wherein, when a low-resolution image that needs super-resolution reconstruction is obtained, image reconstruction based on the subspace low-rank dictionaries is performed with low-resolution image blocks as the sample primitives, and super-resolution reconstruction is carried out block by block over the low-resolution image.
2. The method as claimed in claim 1, characterised in that step S1 specifically comprises the following sub-steps:
S1.1, construction of the training sample set:
high-resolution sample images are input, and each high-resolution sample image is down-sampled and then enlarged back to the high-resolution image size by bicubic interpolation to obtain the low-resolution training image; bicubic interpolation computes the interpolation result from the luminance of the 16 pixels in the neighbourhood of a point together with gradient information reflecting how strongly neighbouring pixels change;
the training image set consists of a high-resolution training image set and a low-resolution training image set; the training sample set is built by randomly extracting image block features from the training image set, and each training sample set in turn contains a gradient feature set and a brightness feature set;
S1.2, subspace partition of all training samples, wherein the partition is performed using coupled gradient features;
S1.3, construction of the subspace low-rank dictionaries:
wherein low-rank matrix recovery is applied to the subspace brightness feature sets using Robust Principal Component Analysis.
3. The method as claimed in claim 2, characterised in that in step S1.1 the gradient feature set is built from image gradients; edge and texture information in the image is extracted with first-order and second-order filters; the training image is filtered with the first-order and second-order filters to obtain 4 feature images of the same training image, each describing first-order or second-order texture features of the training image in the horizontal or vertical direction; and for any training sample, 4 groups of feature vectors are obtained from the 4 feature images, and the gradient feature vector of the corresponding image sample is obtained by concatenating these feature vectors in sequence.
4. The method as claimed in claim 2, characterised in that in step S1.1 the brightness feature set is built from the pixel information of the image samples, using the image sample vectors after mean removal.
5. The method as claimed in claim 1, characterised in that step S2 specifically comprises the following sub-steps:
S2.1, expression of the low-resolution image samples:
the features of the low-resolution image block samples are collected from the interpolated image, and the low-resolution samples are represented and reconstructed within the subspace of the image sample; the subspace of a sample is obtained by nearest-neighbour matching between the sample gradient feature and the low-resolution anchor point set; after the subspace partition of the image sample is complete, the brightness feature of the sample is used to represent it within the subspace, and the image sample brightness feature is expressed, through a sparse representation coefficient, as a linear combination of the atoms of the subspace low-rank dictionary;
S2.2, reconstruction of the high-resolution image samples:
based on the assumption that within a subspace a low-resolution image block and the corresponding high-resolution image block share the same sparse representation coefficient, the brightness feature of the high-resolution sample is reconstructed, and by assigning the reconstruction result of every image block sample of the low-resolution image to the corresponding pixels of the high-resolution image, the reconstruction of the high-resolution image is completed;
S2.3, post-processing of the reconstruction:
wherein a global constraint is added to the reconstruction process by means of an iterative back-projection post-processing method.
6. The method as claimed in claim 5, characterised in that in step S2.1 the low-resolution image is up-sampled by bicubic interpolation to the same pixel size as the reconstructed image.
CN201710123081.8A 2016-07-01 2017-03-03 Super-resolution reconstruction method for space target image Active CN106920214B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016105161526 2016-07-01
CN201610516152 2016-07-01

Publications (2)

Publication Number Publication Date
CN106920214A true CN106920214A (en) 2017-07-04
CN106920214B CN106920214B (en) 2020-04-14

Family

ID=59460762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710123081.8A Active CN106920214B (en) 2016-07-01 2017-03-03 Super-resolution reconstruction method for space target image

Country Status (1)

Country Link
CN (1) CN106920214B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765343A (en) * 2018-05-29 2018-11-06 Oppo(重庆)智能科技有限公司 Image processing method, apparatus, terminal and computer-readable storage medium
CN109284752A (en) * 2018-08-06 2019-01-29 中国科学院声学研究所 Rapid vehicle detection method
CN109302600A (en) * 2018-12-06 2019-02-01 成都工业学院 Stereoscopic scene shooting device
CN109934193A (en) * 2019-03-20 2019-06-25 福建师范大学 Global-context prior-constrained anti-occlusion face super-resolution method and system
CN110430419A (en) * 2019-07-12 2019-11-08 北京大学 Multi-view naked-eye three-dimensional image synthesis method based on super-resolution anti-aliasing
CN110942425A (en) * 2019-11-26 2020-03-31 贵州师范学院 Reconstruction method and reconstruction system of super-resolution image and electronic equipment
CN112508788A (en) * 2020-12-15 2021-03-16 华中科技大学 Spatial neighborhood group target super-resolution method based on multi-frame observation information
CN112528844A (en) * 2020-12-11 2021-03-19 中南大学 Gait feature extraction method and device with single visual angle and low resolution and storage medium
CN113643341A (en) * 2021-10-12 2021-11-12 四川大学 Different-scale target image registration method based on resolution self-adaptation
CN115187918A (en) * 2022-09-14 2022-10-14 中广核贝谷科技有限公司 Method and system for identifying moving object in monitoring video stream

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629374A (en) * 2012-02-29 2012-08-08 西南交通大学 Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding
CN104504672A (en) * 2014-12-27 2015-04-08 西安电子科技大学 NormLV feature based low-rank sparse neighborhood-embedding super-resolution method
KR20150093993A (en) * 2014-02-10 2015-08-19 한국전자통신연구원 Method and apparatus for image reconstruction using super-resolution

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629374A (en) * 2012-02-29 2012-08-08 西南交通大学 Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding
KR20150093993A (en) * 2014-02-10 2015-08-19 한국전자통신연구원 Method and apparatus for image reconstruction using super-resolution
CN104504672A (en) * 2014-12-27 2015-04-08 西安电子科技大学 NormLV feature based low-rank sparse neighborhood-embedding super-resolution method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOXUAN CHEN et al.: "Low-Rank Neighbor Embedding for Single Image Super-Resolution", IEEE Signal Processing Letters *
杨帅锋 et al.: "Image super-resolution reconstruction based on low-rank matrix and dictionary learning", Wanfang Data journal database *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108765343A (en) * 2018-05-29 2018-11-06 Oppo(重庆)智能科技有限公司 Image processing method, apparatus, terminal and computer-readable storage medium
CN109284752A (en) * 2018-08-06 2019-01-29 中国科学院声学研究所 Rapid vehicle detection method
CN109302600B (en) * 2018-12-06 2023-03-21 成都工业学院 Three-dimensional scene shooting device
CN109302600A (en) * 2018-12-06 2019-02-01 成都工业学院 Stereoscopic scene shooting device
CN109934193A (en) * 2019-03-20 2019-06-25 福建师范大学 Global-context prior-constrained anti-occlusion face super-resolution method and system
CN109934193B (en) * 2019-03-20 2023-04-07 福建师范大学 Global context prior constraint anti-occlusion face super-resolution method and system
CN110430419A (en) * 2019-07-12 2019-11-08 北京大学 Multi-view naked-eye three-dimensional image synthesis method based on super-resolution anti-aliasing
CN110430419B (en) * 2019-07-12 2021-06-04 北京大学 Multi-view naked eye three-dimensional image synthesis method based on super-resolution anti-aliasing
CN110942425A (en) * 2019-11-26 2020-03-31 贵州师范学院 Reconstruction method and reconstruction system of super-resolution image and electronic equipment
CN112528844A (en) * 2020-12-11 2021-03-19 中南大学 Gait feature extraction method and device with single visual angle and low resolution and storage medium
CN112508788A (en) * 2020-12-15 2021-03-16 华中科技大学 Spatial neighborhood group target super-resolution method based on multi-frame observation information
CN113643341B (en) * 2021-10-12 2021-12-28 四川大学 Different-scale target image registration method based on resolution self-adaptation
CN113643341A (en) * 2021-10-12 2021-11-12 四川大学 Different-scale target image registration method based on resolution self-adaptation
CN115187918A (en) * 2022-09-14 2022-10-14 中广核贝谷科技有限公司 Method and system for identifying moving object in monitoring video stream
CN115187918B (en) * 2022-09-14 2022-12-13 中广核贝谷科技有限公司 Method and system for identifying moving object in monitoring video stream

Also Published As

Publication number Publication date
CN106920214B (en) 2020-04-14

Similar Documents

Publication Publication Date Title
CN106920214A (en) Spatial target images super resolution ratio reconstruction method
CN106204449B (en) A kind of single image super resolution ratio reconstruction method based on symmetrical depth network
CN109741256A (en) Image super-resolution rebuilding method based on rarefaction representation and deep learning
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
Suryanarayana et al. Accurate magnetic resonance image super-resolution using deep networks and Gaussian filtering in the stationary wavelet domain
Lin et al. Image super-resolution using a dilated convolutional neural network
US9734566B2 (en) Image enhancement using semantic components
CN112734646B (en) Image super-resolution reconstruction method based on feature channel division
CN106952228A (en) The super resolution ratio reconstruction method of single image based on the non local self-similarity of image
CN105046672B (en) A kind of image super-resolution rebuilding method
CN107240066A (en) Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
CN105844590A (en) Image super-resolution reconstruction method and system based on sparse representation
Huang et al. Deep hyperspectral image fusion network with iterative spatio-spectral regularization
CN104574336B (en) Super-resolution image reconstruction system based on adaptive sub- mould dictionary selection
CN108428212A (en) A kind of image magnification method based on double laplacian pyramid convolutional neural networks
CN109544448A (en) A kind of group's network super-resolution image reconstruction method of laplacian pyramid structure
CN109509160A (en) A kind of remote sensing image fusion method by different level using layer-by-layer iteration super-resolution
CN107220957B (en) It is a kind of to utilize the remote sensing image fusion method for rolling Steerable filter
Xiao et al. A dual-UNet with multistage details injection for hyperspectral image fusion
Shen et al. Convolutional neural pyramid for image processing
CN111340696B (en) Convolutional neural network image super-resolution reconstruction method fused with bionic visual mechanism
CN107833182A (en) The infrared image super resolution ratio reconstruction method of feature based extraction
CN111402138A (en) Image super-resolution reconstruction method of supervised convolutional neural network based on multi-scale feature extraction fusion
CN104299193B (en) Image super-resolution reconstruction method based on high-frequency information and medium-frequency information
Hu et al. An efficient fusion algorithm based on hybrid multiscale decomposition for infrared-visible and multi-type images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant