CN105741252A - Sparse representation and dictionary learning-based video image layered reconstruction method - Google Patents



Publication number
CN105741252A
Authority
CN
China
Prior art keywords
resolution, dictionary, image, low, texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510789969.6A
Other languages
Chinese (zh)
Other versions
CN105741252B (en)
Inventor
Wang Hai (王海)
Wang Ke (王柯)
Liu Yan (刘岩)
Zhang Haodi (张皓迪)
Li Bin (李彬)
Mao Minquan (毛敏泉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510789969.6A
Publication of CN105741252A
Application granted
Publication of CN105741252B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses a video image layered reconstruction method based on sparse representation and dictionary learning. Its main objective is to solve the problem of long reconstruction times for video images in the prior art. The method comprises the following steps: (1) obtain a sample set; (2) layer the images in the sample set; (3) train on the sample-set images before and after layering to obtain the high-resolution and low-resolution dictionaries of the sample set before and after layering; (4) divide the image to be reconstructed into a main region, a sub-region and a non-interest region; (5) reconstruct the main region using the high- and low-resolution dictionaries of the layered sample set; (6) reconstruct the sub-region using the high- and low-resolution dictionaries of the un-layered sample set; (7) reconstruct the non-interest region; (8) fuse the reconstructed main region and sub-region into the reconstructed non-interest region to obtain the complete reconstructed image. The method reduces image reconstruction time and can be used in the processing of medical, natural and remote-sensing images.

Description

Layered video image reconstruction method based on sparse representation and dictionary learning
Technical field
The invention belongs to the technical field of video and image processing and relates to a super-resolution reconstruction method for video images, which can be used for medical images, natural images, remote-sensing images and other applications that generally require high-resolution pictures.
Background technology
Owing to factors such as the inherent limitations of the imaging system and atmospheric interference, a captured image or video may suffer from problems such as poor quality and low resolution. How to recover the original appearance of a video image as far as possible, or to improve quality indices such as resolution and sharpness, given the available hardware and the acquired data, has always been a hot topic in video and image research and engineering. Super-resolution reconstruction is a technique that can effectively raise the resolution of a video image: it reconstructs a high-resolution image from one or more acquired low-resolution frames by exploiting prior knowledge such as mathematical models of the image.
Current super-resolution reconstruction falls into three main categories: interpolation methods, reconstruction-based methods and learning-based methods. Traditional interpolation methods include nearest-neighbour, bilinear and bicubic interpolation; although they are simple and easy to implement, the reconstructed images suffer from discontinuous edges, ringing artifacts or overall over-smoothing. Reconstruction-based methods model the acquisition process of the low-resolution image, impose priors on the corresponding high-resolution information through regularization, and convert super-resolution into the problem of estimating the high-resolution image from the low-resolution one, i.e. a constrained optimization problem. Learning-based super-resolution, whose ideas come from machine learning, has become the mainstream approach in image restoration in recent years. Freeman et al. proposed an example-based super-resolution method: high- and low-resolution sample images are divided into patches by machine learning, the spatial relationships of the image are modelled with a Markov network, and each patch of the low-resolution image to be reconstructed seeks its most suitable position in the Markov grid of the learned model. Although this method can restore more detail, it processes the full image area, usually requires a long reconstruction time, and is not well suited to reconstructing video that contains multiple moving objects.
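The interpolation baselines mentioned above can be sketched in a few lines of NumPy. This is only a rough illustration, not part of the patent: the function names and the align-corners coordinate mapping are our own choices.

```python
import numpy as np

def upscale_nearest(img, factor):
    """Nearest-neighbour upscaling: each pixel becomes a factor x factor block."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def upscale_bilinear(img, factor):
    """Bilinear upscaling with an align-corners coordinate mapping."""
    h, w = img.shape
    H, W = h * factor, w * factor
    y = np.linspace(0.0, h - 1.0, H)   # output row i samples input row y[i]
    x = np.linspace(0.0, w - 1.0, W)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    wy = (y - y0)[:, None]
    wx = (x - x0)[None, :]
    tl = img[np.ix_(y0, x0)]           # the four neighbours of each sample point
    tr = img[np.ix_(y0, x0 + 1)]
    bl = img[np.ix_(y0 + 1, x0)]
    br = img[np.ix_(y0 + 1, x0 + 1)]
    return (tl * (1 - wx) + tr * wx) * (1 - wy) + (bl * (1 - wx) + br * wx) * wy
```

Both run in a single pass over the image, which is why interpolation is fast; the over-smoothing the text describes comes from every output pixel being a convex combination of its neighbours.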
Summary of the invention
In view of the shortcomings of the above prior art, the present invention proposes a layered video image reconstruction method based on sparse representation and dictionary learning for reconstructing video that contains multiple moving objects. It reduces reconstruction time while preserving the reconstruction quality of the main content of the video, laying a foundation for real-time video reconstruction.
The technical idea of the invention is as follows: use the morphological component analysis (MCA) method to layer the images in the sample set; use the K-SVD algorithm to train on the images before and after layering, obtaining the corresponding trained dictionaries; use the Snake algorithm to divide the image to be reconstructed into a region of interest and a non-interest region; further divide the region of interest into a main region and a sub-region according to the size of the moving targets; apply the dual-dictionary learning method for super-resolution reconstruction of the main region, the single-dictionary learning method for super-resolution reconstruction of the sub-region, and interpolation for the non-interest region; and fuse the reconstructed main region, sub-region and non-interest region to obtain the reconstructed image. The concrete steps are as follows:
(1) Obtain a sample set I = {I_h, I_l} from a sample database, where I_h denotes the high-resolution sample set and I_l the low-resolution sample set; a high-resolution image I_h^i and a low-resolution image I_l^i with the same content in sample set I form a sample image pair (I_h^i, I_l^i);
(2) Use the MCA method to layer the images in sample set I into texture and structure layers, obtaining the high-resolution texture layer I_ht, the high-resolution structure layer I_hs, the low-resolution texture layer I_lt and the low-resolution structure layer I_ls;
(3) Use the K-SVD algorithm to train on the high-resolution sample images I_h and the low-resolution sample images I_l in sample set I, obtaining the high-resolution dictionary D_h and the low-resolution dictionary D_l;
(4) Use the K-SVD algorithm to train on each layered image in sample set I, obtaining the texture high-resolution dictionary D_ht, the structure high-resolution dictionary D_hs, the texture low-resolution dictionary D_lt and the structure low-resolution dictionary D_ls;
(5) Divide the single low-resolution video frame to be reconstructed into a region of interest and a non-interest region;
(6) Divide the region of interest of the frame into a main region and a sub-region;
(7) Reconstruct the main region with the dual-dictionary learning method, reconstruct the sub-region with the single-dictionary learning method, and reconstruct the non-interest region by interpolation;
(8) Fuse the reconstructed main region and sub-region into the reconstructed non-interest region to obtain the complete reconstructed image.
Compared with the prior art, the present invention has the following advantages:
1. The invention reconstructs the video image in layers, applying reconstruction methods of different accuracy to different regions: dual-dictionary learning for the main region, single-dictionary learning for the sub-region, and interpolation for the non-interest region. This alleviates the long reconstruction times caused by applying existing dictionary-learning super-resolution methods to the full image area, laying a foundation for real-time video reconstruction;
2. When extracting the region of interest of the video image, the invention uses the Snake algorithm to detect a fairly accurate closed contour of the moving target and takes the minimum rectangular area containing that contour as the region of interest, so that the region of interest is the smallest area that still contains the moving target;
3. The invention divides the moving targets in the video image into major and minor targets by pixel area. Using pixel area to characterize the importance of a target sorts targets directly and effectively without increasing computational complexity, further reducing the overall reconstruction time of the video image;
4. The invention takes the minimum rectangular area containing the major target in the low-resolution frame to be reconstructed as the main region, so that the dual-dictionary super-resolution algorithm operates on the smallest possible area, reducing the reconstruction time of the main region;
5. When computing the sparse representation of each block of the main region, the invention uses a search algorithm to find the best matching block of the block in each of the three preceding and three following frames, and takes the weighted sum of the sparse representations of the best matching blocks as the sparse representation of the block. The sparse coefficients obtained by exploiting the temporal correlation between video frames are more accurate, further improving the reconstruction of the main region of the frame to be reconstructed;
In summary, the present invention can effectively reconstruct a low-resolution video image in layers, reducing the reconstruction time of the video image while guaranteeing the reconstruction quality of the main content (the major targets), and laying a foundation for real-time video reconstruction.
Brief description of the drawings
Fig. 1 is the overall flowchart of the implementation of the present invention;
Fig. 2 is the sub-flowchart for computing the texture-layer and structure-layer sparse representations of the main region in the present invention.
Detailed description
With reference to Fig. 1, the steps of the present invention are described in further detail below:
Step 1. Obtain the sample set.
The pictures provided by the PASCAL VOC committee are used as the sample database. This database comprises 20 categories in four broad classes: humans, animals, vehicles and indoor objects. The animals include bird, cat, cow, dog, horse and sheep; the vehicles include aeroplane, bicycle, boat, bus, car, motorbike and train; the indoor objects include bottle, chair, dining table, potted plant, sofa and TV.
Ten images are chosen at random under each category, giving 200 sample images, which constitute the high-resolution sample set I_h = {I_h^i, i = 1, 2, ..., 200}. Each of the 200 sample images is 3x down-sampled to obtain 200 low-resolution images, which constitute the low-resolution sample set I_l = {I_l^i, i = 1, 2, ..., 200}. Together, the high-resolution sample set I_h and the low-resolution sample set I_l form the sample set I = {I_h, I_l}; a high-resolution image I_h^i and the low-resolution image I_l^i of the same content form a sample image pair (I_h^i, I_l^i).
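The sample-pair construction of step 1 can be sketched as follows. The patent states only "3x down-sampling", so the block-averaging decimation filter used here is an assumption, and the function names are ours.

```python
import numpy as np

def downsample(img, factor=3):
    """Down-sample by an integer factor by averaging factor x factor blocks.

    The patent specifies only '3x down-sampling'; block averaging is one
    common, assumed choice of decimation filter.
    """
    h, w = img.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of factor
    b = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return b.mean(axis=(1, 3))

def make_sample_pairs(high_res_images, factor=3):
    """Build the sample pairs (I_h^i, I_l^i) of sample set I = {I_h, I_l}."""
    return [(hi, downsample(hi, factor)) for hi in high_res_images]
```

Each low-resolution image is exactly one third of the size of its high-resolution counterpart in both dimensions, which is what ties the 9 x 9 high-resolution blocks to the 3 x 3 low-resolution blocks used later in steps 3 and 4.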
Step 2. Use morphological component analysis to layer the images in the sample set into texture and structure layers.
The core of morphological component analysis (MCA) is to represent each morphological component of the image with optimal sparsity. Suppose the image X to be processed contains Γ different morphologies, i.e. X consists of Γ mutually different, transparently overlaid layers {X_λ, λ = 1, 2, ..., Γ}, with X = X_1 + X_2 + ... + X_λ + ... + X_Γ. The MCA method describes the Γ layers of X with a set of overcomplete dictionaries {T_1, T_2, ..., T_λ, ..., T_Γ}: the λ-th layer X_λ can be sparsely represented only by the atoms of dictionary T_λ and cannot be represented by the atoms of any other dictionary T_γ (γ ≠ λ). Thus, by building such a set of overcomplete dictionaries {T_1, ..., T_Γ}, the image X can be decomposed into Γ layers.
This example decomposes the image into two morphologies, a texture layer X_t and a structure layer X_s. Accordingly, the overcomplete dictionaries {T_t, T_s} must be built, where T_t is the dictionary describing the texture information of the image and T_s the dictionary describing its structure information.
Tools for building the texture dictionary include the Gabor transform and the DCT; tools for building the structure dictionary include the wavelet, curvelet, ridgelet and contourlet transforms. Dictionary selection is usually based on a fidelity measure or similar criteria, but choosing the optimal dictionary by such theoretical functions is too complex, so in much image-processing work the image is analysed empirically and transforms known to represent texture or structure well are chosen to separate the texture and structure parts of the image. This example chooses, without being limited to, the DCT to build the texture dictionary of the image and the contourlet transform to build its structure dictionary. The concrete steps are as follows:
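As an illustration of the DCT tool chosen here for the texture dictionary, the orthonormal DCT-II matrix and the corresponding 2-D block basis can be built directly (the contourlet construction for the structure dictionary needs a dedicated toolbox and is not shown; function names are ours):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix T (n x n), so coeffs = T @ signal."""
    k = np.arange(n)[:, None]          # frequency index (rows)
    i = np.arange(n)[None, :]          # sample index (columns)
    T = np.cos(np.pi * (2 * i + 1) * k / (2.0 * n))
    T[0, :] *= np.sqrt(1.0 / n)
    T[1:, :] *= np.sqrt(2.0 / n)
    return T

def dct2_dictionary(n):
    """Separable 2-D DCT basis for n x n blocks: one atom per column (n^2 x n^2).

    vec(T @ X @ T.T) == kron(T, T) @ vec(X), so the transpose of kron(T, T)
    holds the orthonormal synthesis atoms.
    """
    T = dct_matrix(n)
    return np.kron(T, T).T
```

Because the basis is orthonormal, analysis and synthesis are each other's transpose, which is what makes the DCT convenient as a texture dictionary.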
2.1) Build the texture dictionaries
Apply the DCT to each of the 200 sample image pairs (I_h^i, I_l^i), i = 1, 2, ..., 200, in sample set I, obtaining 200 DCT matrices for the high-resolution sample set I_h and 200 DCT matrices for the low-resolution sample set I_l. Taking each DCT matrix as a dictionary of its image yields the 200 texture dictionaries T_ht^i of the high-resolution sample set I_h and the 200 texture dictionaries T_lt^i of the low-resolution sample set I_l.
2.2) Build the structure dictionaries
Apply the contourlet transform to each of the 200 sample image pairs (I_h^i, I_l^i), i = 1, 2, ..., 200, obtaining 200 contourlet matrices for the high-resolution sample set I_h and 200 contourlet matrices for the low-resolution sample set I_l. Taking each contourlet matrix as a dictionary of its image yields the 200 structure dictionaries T_hs^i of the high-resolution sample set and the 200 structure dictionaries T_ls^i of the low-resolution sample set.
2.3) Use the matching pursuit algorithm to compute the optimal texture and structure sparse coefficients
To obtain the texture layer I_ht^i and structure layer I_hs^i of each high-resolution sample image, the optimal sparse representation of the high-resolution sample image I_h^i under the high-resolution texture dictionary T_ht^i and the high-resolution structure dictionary T_hs^i must be computed, i.e. the following optimization problem is solved:

$$\{\alpha_{ht}^{i*}, \alpha_{hs}^{i*}\} = \arg\min_{\{\alpha_{ht}^{i}, \alpha_{hs}^{i}\}} \left\{ \|\alpha_{ht}^{i}\|_1 + \|\alpha_{hs}^{i}\|_1 \right\} \quad \text{s.t.} \quad \|I_h^i - T_{ht}^i \alpha_{ht}^i - T_{hs}^i \alpha_{hs}^i\|_2 \le \epsilon, \quad i = 1, 2, \ldots, 200$$

where ε = 1.0 × 10^-6 is an empirical sparsity threshold, α_ht^i and α_hs^i are the high-resolution texture and structure sparse coefficients being computed, and α_ht^{i*} and α_hs^{i*} are the optimal high-resolution texture and structure sparse coefficients obtained.
Algorithms for solving the above optimization problem include matching pursuit, basis pursuit and orthogonal matching pursuit. Matching pursuit is a greedy algorithm that obtains a sparse signal representation by successive approximation; its principle is simple and easy to implement, and it is currently the most common method for sparse signal decomposition. This example therefore adopts, without being limited to, matching pursuit for the sparse decomposition of images, and the subsequent dictionary-learning-based image reconstruction also uses matching pursuit for sparse decomposition.
The same procedure applied to the low-resolution sample images I_l^i yields the low-resolution optimal texture sparse coefficients α_lt^{i*}, i = 1, 2, ..., 200, and the low-resolution optimal structure sparse coefficients α_ls^{i*}, i = 1, 2, ..., 200.
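The greedy matching pursuit described above can be sketched for a single dictionary (the patent applies it jointly over the texture and structure dictionaries; this single-dictionary version, with our own function name, only illustrates the pick-and-subtract loop):

```python
import numpy as np

def matching_pursuit(D, y, n_atoms=10, tol=1e-6):
    """Greedy matching pursuit: approximate y as D @ alpha with few nonzeros.

    D: dictionary with unit-norm atoms as columns; y: signal vector.
    Each iteration picks the atom most correlated with the residual and
    subtracts that atom's contribution from the residual.
    """
    alpha = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    for _ in range(n_atoms):
        corr = D.T @ residual            # correlation of every atom with residual
        k = int(np.argmax(np.abs(corr)))
        if abs(corr[k]) < tol:           # nothing significant left to explain
            break
        alpha[k] += corr[k]              # valid because atoms are unit-norm
        residual -= corr[k] * D[:, k]
    return alpha, residual
```

On an orthonormal dictionary the loop recovers the exact coefficients; on an overcomplete dictionary it is only a greedy approximation, which is the trade-off the text alludes to.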
2.4) Compute the texture and structure layers of the images:
2.4a) From the high-resolution texture dictionaries T_ht^i and the high-resolution optimal texture sparse coefficients α_ht^{i*}, obtain the high-resolution texture layers I_ht^i = T_ht^i × α_ht^{i*}, i = 1, 2, ..., 200, and denote I_ht = {I_ht^i, i = 1, 2, ..., 200};
2.4b) From the high-resolution structure dictionaries T_hs^i and the high-resolution optimal structure sparse coefficients α_hs^{i*}, obtain the high-resolution structure layers I_hs^i = T_hs^i × α_hs^{i*}, i = 1, 2, ..., 200, and denote I_hs = {I_hs^i, i = 1, 2, ..., 200};
2.4c) From the low-resolution texture dictionaries T_lt^i and the low-resolution optimal texture sparse coefficients α_lt^{i*}, obtain the low-resolution texture layers I_lt^i = T_lt^i × α_lt^{i*}, i = 1, 2, ..., 200, and denote I_lt = {I_lt^i, i = 1, 2, ..., 200};
2.4d) From the low-resolution structure dictionaries T_ls^i and the low-resolution optimal structure sparse coefficients α_ls^{i*}, obtain the low-resolution structure layers I_ls^i = T_ls^i × α_ls^{i*}, i = 1, 2, ..., 200, and denote I_ls = {I_ls^i, i = 1, 2, ..., 200}.
Step 3. Use the K-SVD algorithm to train on the images in the sample set.
Image super-resolution based on dictionary learning generally requires training on a large number of sample images to obtain the high- and low-resolution dictionaries, and the efficiency of dictionary training is strongly affected by the number of dictionary atoms. It is therefore important to choose a method that can effectively reduce the number of atoms.
Dictionary-learning methods fall into two broad classes: unsupervised and supervised. Unsupervised dictionary learning aims to learn a dictionary with good representation ability, while supervised dictionary learning, which takes the discriminability of the dictionary into account, is usually used in recognition tasks. Dictionary-learning-based super-resolution needs the optimal sparse representation of the image, and a good dictionary makes the corresponding representation sparser, so this example trains each image layer with unsupervised dictionary learning.
Representative unsupervised dictionary-learning methods are MOD and K-SVD. The two optimize the same objective function, but during dictionary iteration with matching pursuit, MOD obtains the whole dictionary at once, whereas K-SVD refines MOD with a column-by-column update: each iteration updates only one column of the dictionary, i.e. one atom at a time. This sequential column update of K-SVD effectively reduces the number of atoms in the dictionary while the trained atoms can still linearly represent all the information of the initial dictionary, so this example adopts, without being limited to, the K-SVD algorithm to train the sample images.
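The one-atom-at-a-time column update that distinguishes K-SVD from MOD can be sketched as follows. Full K-SVD alternates a sparse-coding pass (e.g. matching pursuit) with this update over all columns; the helper name and argument layout here are ours.

```python
import numpy as np

def ksvd_atom_update(D, A, Y, k):
    """K-SVD update of column k of dictionary D and row k of coefficients A.

    Y ~ D @ A. Only samples that actually use atom k are touched; the best
    rank-1 fit (via SVD) of their residual-without-atom-k gives the new
    unit-norm atom and its coefficients.
    """
    users = np.nonzero(A[k, :])[0]               # samples whose code uses atom k
    if users.size == 0:
        return D, A
    E = Y[:, users] - D @ A[:, users] + np.outer(D[:, k], A[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                            # new unit-norm atom
    A[k, users] = s[0] * Vt[0, :]                # its updated coefficients
    return D, A
```

Because the previous atom/row pair is one feasible rank-1 factor of E, the SVD step can never increase the fitting error on the samples it touches, which is why the sequential updates converge.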
The steps for training the high-resolution and the low-resolution sample images with K-SVD are the same in this example; the high-resolution sample images I_h are taken as the example. The concrete steps are as follows:
3.1) Overlap-partition the high-resolution sample images I_h
Partition every image I_h^i in the high-resolution sample set I_h into overlapping blocks in raster-scan order, with block size 9 × 9 pixels and an overlap of one pixel in both the horizontal and vertical directions, obtaining the high-resolution sample block set Y_h = {y_h^m}, where y_h^m denotes one sample block of I_h, m = 1, 2, ..., M, and M is the number of blocks of the high-resolution sample images I_h.
3.2) Build the initial value of the high-resolution dictionary D_h
Take the first 1024 sample blocks of the high-resolution sample block set Y_h and apply the DCT to each, obtaining 1024 DCT matrices of size 9 × 9. Unroll each 9 × 9 DCT matrix into a column vector, obtaining 1024 column vectors of length 81, and combine the 1024 column vectors into an 81 × 1024 matrix, which serves as the initial value of the high-resolution dictionary D_h.
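The overlap partition of step 3.1) and the dictionary initialisation of step 3.2) can be sketched as below. For simplicity this sketch initialises from normalised raw blocks rather than DCT-transformed blocks (an assumed simplification of step 3.2), and the function names are ours.

```python
import numpy as np

def extract_blocks(img, block=9, overlap=1):
    """Overlap-partition an image in raster-scan order (step 3.1).

    Blocks are block x block with `overlap` pixels shared between
    neighbours (stride = block - overlap); each block is flattened
    into one column of the returned matrix.
    """
    step = block - overlap
    h, w = img.shape
    cols = []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            cols.append(img[r:r + block, c:c + block].ravel())
    return np.array(cols).T              # one block per column

def init_dictionary(blocks, n_atoms=1024):
    """Initialise D from the first n_atoms sample blocks (cf. step 3.2),
    normalised to unit columns."""
    D = blocks[:, :n_atoms].astype(float).copy()
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D
```

With 9 x 9 blocks the columns have length 81, matching the 81 x 1024 initial dictionary described above; for the 3 x 3 low-resolution blocks the same code yields 9-dimensional columns and a 9 x 1024 initial dictionary.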
3.3) Compute the optimal high-resolution dictionary
Use the K-SVD algorithm to update the high-resolution dictionary D_h through the following optimization until the sparse representation of the high-resolution sample block set Y_h under D_h is the optimal sparse representation:

$$D_h^* = \arg\min_{D_h} \left\{ \sum_{m=1}^{M} \|y_h^m - D_h \alpha_h^m\|_2^2 \right\} \quad \text{s.t.} \quad \|\alpha_h^m\|_0 \le \epsilon, \quad m = 1, 2, \ldots, M$$

where α_h^m is the sparse representation of sample block y_h^m under the high-resolution dictionary D_h, ε = 1.0 × 10^-6 is an empirical sparsity threshold, and D_h^* is the optimal high-resolution dictionary obtained.
Since the low-resolution sample images are obtained from the high-resolution sample images by 3x down-sampling, when training the low-resolution sample images I_l this example sets the block size in step 3.1) to 3 × 3 pixels, so that the dictionary initial value obtained in step 3.2) has size 9 × 1024; the other operations are as in steps 3.1)-3.3), giving the optimal low-resolution dictionary D_l^*.
Step 4. Use the K-SVD algorithm to train on each layered image in the sample set.
Process the high-resolution texture layer I_ht and the high-resolution structure layer I_hs according to steps 3.1)-3.3), obtaining the texture high-resolution dictionary D_ht^* and the structure high-resolution dictionary D_hs^*.
When training the low-resolution texture layer I_lt and the low-resolution structure layer I_ls, this example sets the block size in step 3.1) to 3 × 3 pixels, so that the dictionary initial value obtained in step 3.2) has size 9 × 1024; the other operations are as in steps 3.1)-3.3), giving the optimal texture low-resolution dictionary D_lt^* and the optimal structure low-resolution dictionary D_ls^*.
Step 5. Divide the low-resolution video frame to be reconstructed into a region of interest and a non-interest region.
Dividing the frame into a region of interest and a non-interest region can be regarded as the foreground/background segmentation problem of machine vision. In that field there are two broad classes of methods for separating the foreground and background of a video image. One class models the background of the video or image sequence to obtain a background image and obtains the foreground by subtracting the background from the frame to be detected; these methods require multiple frames as input, and common examples are mixed-Gaussian background modelling and optical flow. However, the background these algorithms extract still contains blurred moving targets, so they cannot by themselves delineate the moving targets in the region of interest clearly enough for this example.
The other class directly extracts the motion information of the moving targets in the video image, taking the moving-target area as the foreground and the rest as the background. These methods usually extract the motion-vector information from the video bitstream and apply morphological processing to obtain a binary image characterizing the moving region; but this binary image generally describes the moving targets with considerable deviation, and if the minimum rectangle containing the moving targets were taken directly as the foreground area, parts of the targets would inevitably be missing.
Since the Snake algorithm can detect a fairly precise contour of a target even in a blurred image, this example adopts the Snake algorithm to extract the region of interest of the image to be reconstructed, so that the region of interest is the minimum rectangular area containing a fairly precisely delineated moving target. The concrete steps are as follows:
5.1) Obtain the binary image characterizing the moving targets:
5.1a) Extract the motion information from the H.264 bitstream of the low-resolution video frame to be reconstructed, obtaining the motion vector field MV of the current frame;
5.1b) Characterize pixel grey values by the vector magnitudes and normalize the grey values to the range [0, 255], converting the motion vector field MV of the current frame into a grey map G characterizing the motion region of the current frame;
5.1c) Apply morphological processing to the grey map G, obtaining the binary image BW of the moving targets.
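Steps 5.1b) and 5.1c) can be sketched as below. The threshold value and the choice of a 3 x 3 morphological opening are assumptions (the patent says only "morphological processing"), and the function names are ours.

```python
import numpy as np

def mv_to_gray(mvx, mvy):
    """Map a motion-vector field to a [0, 255] grey map of motion magnitude
    (step 5.1b); mvx, mvy are the per-block vector components."""
    mag = np.hypot(mvx, mvy)
    span = mag.max() - mag.min()
    if span == 0:
        return np.zeros_like(mag)
    return (mag - mag.min()) / span * 255.0

def _morph(mask, reduce_op):
    """3x3 morphology primitive: np.min -> erosion, np.max -> dilation."""
    p = np.pad(mask, 1, mode="edge")
    h, w = mask.shape
    shifts = [p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]
              for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return reduce_op(np.stack(shifts), axis=0)

def moving_target_mask(mvx, mvy, thresh=32.0):
    """Threshold the grey map, then open (erode then dilate) to strip
    isolated noisy vectors, giving the binary image BW (step 5.1c)."""
    bw = mv_to_gray(mvx, mvy) > thresh
    return _morph(_morph(bw, np.min), np.max)
```

The opening removes single-pixel motion-vector noise while a genuine moving block of pixels survives, which is exactly the cleanup the morphological step is there for.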
5.2) Use the Snake algorithm to extract a fairly precise contour of the moving targets:
5.2a) Extract the closed outline of the moving-target binary image BW, obtaining the curve v(s) = [x(s), y(s)], where x(s) and y(s) are the horizontal and vertical coordinates of the points on the contour curve and the parameter s ∈ [0, 1]; this curve serves as the initial contour of the Snake algorithm;
5.2b) Use the Snake algorithm to deform the curve v(s) so that it approaches the fairly precise contour v(s)* of the moving targets. This process can be converted into solving the following optimization problem:

$$v(s)^* = \arg\min_{v(s)} \int_0^1 E_{snake}(v(s))\, ds = \arg\min_{v(s)} \int_0^1 \left[ E_{int}(v(s)) + E_{image}(v(s)) + E_{con}(v(s)) \right] ds$$

where E_int(v(s)) = (α(s)|v_s(s)|² + β(s)|v_ss(s)|²)/2 denotes the internal energy, v_s and v_ss are the first and second derivatives of v(s), and the weighting parameters α(s) and β(s) control the tension and smoothness of the curve v(s), determining its stretch and bending at each point; E_image(v(s)) denotes the energy produced by the image force, usually designed from image grey levels and gradient information to highlight salient image features and guide the curve v(s) towards edge contours; E_con denotes the energy produced by external constraint forces, which this example sets to 0; and v(s)* is the fairly precise contour of the moving targets.
5.3) Obtain the region of interest and the non-interest region
Extract from the low-resolution video frame to be reconstructed the minimum rectangular area containing the fairly accurate closed contour v(s)* of the moving targets as the region of interest P, and take the rest of the frame as the non-interest region B.
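Given a binary mask of the contoured target (the Snake deformation itself needs an active-contour implementation, e.g. the one in scikit-image, and is not repeated here), the minimum-rectangle step of 5.3) reduces to a bounding-box computation; the function name is ours:

```python
import numpy as np

def roi_from_mask(mask):
    """Smallest axis-aligned rectangle containing the target pixels.

    Returns (row, col, del_row, del_col): the top-left corner plus extents,
    matching the Pos record used later in step 6.4. Pixels outside this
    rectangle form the non-interest region B.
    """
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.nonzero(rows)[0][[0, -1]]
    c0, c1 = np.nonzero(cols)[0][[0, -1]]
    return int(r0), int(c0), int(r1 - r0 + 1), int(c1 - c0 + 1)
```

Because it is the tightest rectangle containing the contour, the region of interest is as small as possible while still covering the moving target, which is advantage 2 claimed above.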
Step 6. Divide the region of interest of the low-resolution video frame to be reconstructed into a main region and a sub-region.
For a scene containing two or more moving targets, performing super-resolution reconstruction of the same precision on all of them regardless of their importance makes the reconstruction time of the video image long and consumes excessive computing resources. Meanwhile, both computers performing digital image processing and human observers watching the video tend to care more about the information of the main target. This example therefore performs high-precision super-resolution reconstruction on the main target and lower-precision reconstruction on the secondary targets, which reduces reconstruction time and improves efficiency while guaranteeing the reconstruction quality of the main content of the video.
Considering that when shooting a video or image the object in focus tends to occupy a larger pixel area, this example takes the targets with larger pixel area in the low-resolution frame to be reconstructed as the major targets and those with smaller pixel area as the minor targets. The concrete steps are as follows:
6.1) From the fairly accurate closed contours v(s)* of the moving targets obtained in step 5.2), compute the pixel areas of the targets A = {A_1, A_2, ..., A_n, ..., A_N}, where A_n denotes the pixel area of the n-th target, n = 1, 2, ..., N, and N is the number of moving targets in the video sequence;
6.2) Use the K-means algorithm to divide the target pixel areas A = {A_1, A_2, ..., A_N} into two classes by size; the large-area class is denoted the major targets A_m and the small-area class the minor targets A_sub;
6.3) Take the minimum rectangular area containing the major targets A_m as the main region P_m, and the rest of the region of interest as the sub-region P_sub;
6.4) Record the position of the minimum rectangular area in the low-resolution frame to be reconstructed, Pos = [row, col, del_row, del_col], where (row, col) are the row and column coordinates of the top-left pixel of the minimum rectangular area, and del_row and del_col are its numbers of rows and columns respectively.
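The two-class K-means of step 6.2) operates on a 1-D list of pixel areas, so a plain two-centroid iteration suffices; this sketch (function name ours) returns the indices of the major and minor targets:

```python
import numpy as np

def split_major_minor(areas, iters=20):
    """Two-class K-means on target pixel areas (step 6.2).

    Returns (major_indices, minor_indices): targets assigned to the
    large-area centroid and the small-area centroid respectively.
    """
    areas = np.asarray(areas, dtype=float)
    c_lo, c_hi = areas.min(), areas.max()      # initial centroids
    for _ in range(iters):
        big = np.abs(areas - c_hi) < np.abs(areas - c_lo)
        if big.all() or (~big).all():          # degenerate split: stop
            break
        c_hi, c_lo = areas[big].mean(), areas[~big].mean()
    major = np.nonzero(big)[0]
    minor = np.nonzero(~big)[0]
    return major.tolist(), minor.tolist()
```

Since only N scalar areas are clustered, the cost is negligible, consistent with advantage 3's claim that the major/minor sorting adds no real computational complexity.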
Step 7. Reconstruct the main region with the dual-dictionary learning method.
To alleviate the long reconstruction times caused by applying existing dictionary-learning super-resolution methods to the full image area, this example reconstructs the video image in layers; the main region, which contains the major targets, undergoes super-resolution reconstruction based on dual-dictionary learning. The concrete steps are as follows:
7.1) Divide the main region P_m into a texture layer P_mt and a structure layer P_ms according to step 2;
7.2) Compute the texture-layer and structure-layer sparse representations of the main region:
Most existing dictionary-learning super-resolution methods use the trained dictionaries to process the image to be reconstructed directly, and they achieve good reconstruction results. The input of this example, however, is video. To further improve reconstruction when a single frame is reconstructed, the frame is not processed directly; instead, the temporal and spatial correlation between video frames is exploited: the three frames before and the three frames after the frame to be reconstructed are selected as reference images, and the frame is reconstructed indirectly through the reconstruction of the reference images.
With reference to Fig. 2, being accomplished by of this step
7.2a) Select the three frames before and the three frames after the frame containing the main region as reference images, obtaining the reference image set P_r = {P_rj, j = 1, 2, ..., 6}, where P_rj denotes one reference frame;
7.2b) Following step 2), split the reference image set P_r into texture layers P_rt = {P_rtj, j = 1, 2, ..., 6} and structure layers P_rs = {P_rsj, j = 1, 2, ..., 6}, where P_rtj is the texture layer of reference image P_rj and P_rsj its structure layer;
7.2c) Partition the main-region texture layer P_mt into overlapping blocks in raster-scan order; each block is 3 × 3 pixels, with a one-pixel overlap in both the horizontal and vertical directions, yielding the block set Y_mt = {y_mt^n} of the main-region texture layer, where y_mt^n denotes one block, n = 1, 2, ..., N, and N is the number of blocks of P_mt;
7.2d) Using the Parallel Computing Toolbox in Matlab, create six parallel tasks Pro_j, j = 1, 2, ..., 6, where each task Pro_j handles only the operations on the reference texture layer P_rtj;
7.2e) Under each task Pro_j, j = 1, 2, ..., 6, for each block y_mt^n of the main-region texture layer P_mt, search the reference texture layer P_rtj for the best matching block y_rtj^{n*} with the three-step search algorithm. The block-matching criterion is the MAD criterion, i.e. minimize the mean absolute difference MAD(d_h, d_v):
$$\mathrm{MAD}(d_h,d_v)=\frac{1}{RC}\sum_{r=1}^{R}\sum_{c=1}^{C}\bigl|f(r,c)-f_{rj}(r+d_h,\,c+d_v)\bigr|$$
where R and C are respectively the number of rows and columns of block y_mt^n; f(r, c) is the brightness of the pixel at coordinate (r, c) within the block; f_rj(r + d_h, c + d_v) is the brightness of the pixel at coordinate (r + d_h, c + d_v) in the reference texture layer P_rtj; and (d_h, d_v) is the displacement vector, with d_h the horizontal and d_v the vertical displacement;
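A minimal sketch of the MAD matching criterion of step 7.2e); for brevity an exhaustive search over a small window replaces the three-step search, and the image arrays are hypothetical:

```python
import numpy as np

def mad(block, ref, dh, dv, r0, c0):
    """Mean absolute difference between a block located at (r0, c0) and the
    reference region displaced by (dh, dv)."""
    R, C = block.shape
    cand = ref[r0 + dh : r0 + dh + R, c0 + dv : c0 + dv + C]
    return np.abs(block - cand).mean()

def best_match(block, ref, r0, c0, radius=4):
    """Exhaustive search (the patent uses the three-step search) minimizing MAD.
    Returns (MAD value, displacement (dh, dv))."""
    R, C = block.shape
    best = (np.inf, (0, 0))
    for dh in range(-radius, radius + 1):
        for dv in range(-radius, radius + 1):
            if not (0 <= r0 + dh and r0 + dh + R <= ref.shape[0]
                    and 0 <= c0 + dv and c0 + dv + C <= ref.shape[1]):
                continue                     # candidate falls outside the image
            err = mad(block, ref, dh, dv, r0, c0)
            if err < best[0]:
                best = (err, (dh, dv))
    return best
```

If a block is copied verbatim from the reference image, the search recovers its true displacement with a MAD of zero, which gives a quick sanity check.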
7.2f) From the texture low-resolution dictionary D_lt*, compute the sparse representation of each matching block: β_rtj^{n*} = (D_lt*)^{-1} × y_rtj^{n*}, where (D_lt*)^{-1} is the inverse matrix of D_lt*;
7.2g) Compute the weight coefficient w_jn of the best matching block y_rtj^{n*} in the reference texture layer P_rtj by the formula
$$w_{jn}=\frac{1}{\left(y_{mt}^{\,n}-y_{rtj}^{\,n*}\right)\left(y_{mt}^{\,n}-y_{rtj}^{\,n*}\right)^{T}};$$
7.2h) Take the weighted sum of the sparse representations β_rtj^{n*} of the six matching blocks to obtain the texture-layer sparse representation β_mt^n of the main-region texture block y_mt^n; denote by β_mt* the texture-layer sparse representation of the main region;
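Steps 7.2f)–7.2h) can be sketched as follows. Since the dictionary is in general overcomplete, the Moore–Penrose pseudo-inverse stands in for the "inverse matrix" of the text, and a small guard term avoids division by zero when a matching block is exact; both are assumptions of this sketch:

```python
import numpy as np

def texture_sparse_rep(y_mt, matches, D_lt):
    """Steps 7.2f)-7.2h): sparse codes of the matching blocks, inverse-distance
    weights w_jn, and their weighted sum."""
    D_pinv = np.linalg.pinv(D_lt)             # pseudo-inverse of the dictionary
    betas, weights = [], []
    for y_r in matches:                       # one best match per reference frame
        betas.append(D_pinv @ y_r)            # beta_rtj = D_lt^+ y_rtj
        d = y_mt - y_r
        weights.append(1.0 / (d @ d + 1e-12)) # w_jn = 1 / (y - y*)(y - y*)^T, guarded
    w = np.array(weights)
    return (w[:, None] * np.array(betas)).sum(axis=0)
```

Blocks closer to y_mt thus contribute more strongly to the fused sparse representation.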
7.2i) Process the structure layer P_ms of the main region following steps 7.2c)–7.2h), obtaining the structure-layer sparse representation β_ms* of the main region.
7.3) From the texture-layer sparse representation β_mt* of the main region P_m and the texture high-resolution dictionary D_ht*, obtain the reconstructed main-region texture layer P_mt* = D_ht* × β_mt*.
7.4) From the structure-layer sparse representation β_ms* of the main region and the structure high-resolution dictionary D_hs*, obtain the reconstructed main-region structure layer P_ms* = D_hs* × β_ms*.
7.5) Merge the reconstructed main-region texture layer P_mt* and the reconstructed main-region structure layer P_ms* to obtain the complete reconstructed main region.
Step 8. Reconstruct the sub-region by the single-dictionary learning method.
To reduce the reconstruction time of the video image while preserving the reconstruction quality of its main content, this embodiment reconstructs the video image in layers; the sub-region containing the secondary targets is super-resolved by single-dictionary learning, implemented as follows:
8.1) From the optimal low-resolution dictionary D_l* obtained in step 3), compute the sparse representation of the sub-region P_sub: β_sub = (D_l*)^{-1} × P_sub, where (D_l*)^{-1} is the inverse matrix of D_l*;
8.2) From the sparse representation β_sub of the sub-region and the optimal high-resolution dictionary D_h* obtained in step 3), obtain the reconstructed sub-region P_sub* = D_h* × β_sub.
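Steps 8.1)–8.2) amount to two matrix products; a minimal sketch with hypothetical dictionaries, again using the pseudo-inverse in place of the inverse matrix:

```python
import numpy as np

def single_dict_reconstruct(P_sub, D_l, D_h):
    """Steps 8.1)-8.2): beta_sub = D_l^+ P_sub, then P_sub* = D_h beta_sub."""
    beta = np.linalg.pinv(D_l) @ P_sub   # sparse representation of the sub-region
    return D_h @ beta                    # high-resolution reconstruction
```

When D_h = D_l and the dictionary has full column rank, the round trip reproduces P_sub exactly, which gives a quick sanity check.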
Step 9. Reconstruct the non-interest region by interpolation.
Current super-resolution reconstruction methods fall into three classes: interpolation-based, reconstruction-based, and learning-based. Interpolation is algorithmically simple and easy to implement, but its reconstruction quality is somewhat lower than that of the other two classes. This embodiment reconstructs the video image in layers: the region of interest containing the moving targets is reconstructed by learning-based methods so that the moving targets are reconstructed well, while the non-interest region is reconstructed by interpolation. Although this sacrifices some reconstruction quality in the non-interest region, it reduces the reconstruction time of the video image while preserving the quality of the main content, namely the moving targets.
The main interpolation methods are nearest-neighbor, bilinear, and bicubic interpolation. Nearest-neighbor interpolation assigns to each interpolated point the value of whichever of the 4 neighboring points around the corresponding point in the original image has the shortest Euclidean distance; it is simple to implement and computationally cheap, but the interpolated image quality is low, often exhibiting blocking and jagged artifacts.
Bilinear interpolation determines the value of each interpolated point as a weighted sum of the values of its 4 neighboring points, with weights determined by the distances to those points. The enlarged image is smoother than with nearest-neighbor interpolation and free of gray-level discontinuities, but because bilinear interpolation has the character of a low-pass filter, high-frequency components are attenuated, and as the magnification factor grows the enlarged image still shows visible blocking and blurred contours.
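The 4-neighbor weighting just described can be written directly; a minimal sketch for one interpolated point, assuming the fractional coordinates lie strictly inside the image:

```python
import numpy as np

def bilinear(img, x, y):
    """Value at fractional (row x, col y): distance-weighted sum of 4 neighbors."""
    i, j = int(np.floor(x)), int(np.floor(y))
    u, v = x - i, y - j                       # fractional offsets in [0, 1)
    return ((1 - u) * (1 - v) * img[i, j] + (1 - u) * v * img[i, j + 1]
            + u * (1 - v) * img[i + 1, j] + u * v * img[i + 1, j + 1])
```

At the center of a 2 × 2 cell all four weights equal 1/4, so the result is the mean of the four neighbors.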
Bicubic interpolation performs cubic interpolation using the gray values of the 16 points around the interpolated point; it accounts not only for the gray values of the 4 directly adjacent points but also for the rate of gray-level change between neighboring points, and its reconstruction quality is better than that of the two methods above. This embodiment adopts, but is not limited to, bicubic interpolation to reconstruct the non-interest region B; the interpolation formula is as follows:
$$f(i+u,\,j+v)=A^{*}\,B^{*}\,C^{*}$$
$$A^{*}=\bigl[S(1+u)\;\;S(u)\;\;S(1-u)\;\;S(2-u)\bigr]$$
$$B^{*}=\begin{bmatrix}
f(i-1,j-2) & f(i,j-2) & f(i+1,j-2) & f(i+2,j-2)\\
f(i-1,j-1) & f(i,j-1) & f(i+1,j-1) & f(i+2,j-1)\\
f(i-1,j)   & f(i,j)   & f(i+1,j)   & f(i+2,j)\\
f(i-1,j+1) & f(i,j+1) & f(i+1,j+1) & f(i+2,j+1)
\end{bmatrix}$$
$$C^{*}=\bigl[S(1+v)\;\;S(v)\;\;S(1-v)\;\;S(2-v)\bigr]^{T}$$
$$S(w)=\begin{cases}1-2|w|^{2}+|w|^{3}, & |w|<1;\\ 4-8|w|+5|w|^{2}-|w|^{3}, & 1\le|w|<2;\\ 0, & |w|\ge 2\end{cases}$$
where i and j are nonnegative integers denoting the row and column coordinates of the interpolated point in the original image; u and v are floating-point numbers in the interval (0, 1) denoting the distances from the interpolated point to its nearest pixel in the horizontal and vertical directions; f(i, j) is the value of the original image at coordinate (i, j); and S(w) is the bicubic interpolation basis function, with w ∈ R and |w| the absolute value of w.
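A sketch of the basis function S(w) and of the product f(i+u, j+v) = A* B* C*, reproducing the patent's 4 × 4 neighborhood exactly as printed (the index layout of B* is taken verbatim from the formula above):

```python
import numpy as np

def S(w):
    """Bicubic interpolation basis function, piecewise as in the patent."""
    w = abs(w)
    if w < 1:
        return 1 - 2 * w**2 + w**3
    if w < 2:
        return 4 - 8 * w + 5 * w**2 - w**3
    return 0.0

def bicubic_point(f, i, j, u, v):
    """f(i+u, j+v) = A* B* C* over the 16-point neighborhood."""
    A = np.array([S(1 + u), S(u), S(1 - u), S(2 - u)])
    B = np.array([[f[i + di, j + dj] for di in (-1, 0, 1, 2)]
                  for dj in (-2, -1, 0, 1)])      # rows vary j, columns vary i
    C = np.array([S(1 + v), S(v), S(1 - v), S(2 - v)])
    return A @ B @ C
```

Because the four weights S(1+u), S(u), S(1−u), S(2−u) sum to 1 for any u, interpolating a constant image returns the constant, which is an easy correctness check.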
Step 10. Fuse the reconstructed main region obtained in step 7) and the reconstructed sub-region obtained in step 8) into the reconstructed non-interest region obtained in step 9), according to the spatial position Pos = [row, col, del_row, del_col] recorded in step 6.4), obtaining the complete reconstructed image.
The above description is only an example of the present invention and does not constitute any limitation of it. Obviously, those skilled in the art who have understood the content and principle of the invention may make various corrections and changes in form and detail without departing from the principle and structure of the invention, but such corrections and changes based on the inventive concept still fall within the scope of the claims of the present invention.

Claims (7)

1. A video image layered reconstruction method based on sparse representation and dictionary learning, comprising the steps of:
(1) obtaining a sample set I = {I_h, I_l} from a sample database, where I_h denotes the high-resolution sample set and I_l the low-resolution sample set, and a high-resolution image I_h^i and a low-resolution image I_l^i of the same content in sample set I constitute a sample pair I^i = {I_h^i, I_l^i};
(2) using morphological component analysis to separate the images in sample set I into texture and structure layers, obtaining the high-resolution texture layer I_ht, the high-resolution structure layer I_hs, the low-resolution texture layer I_lt, and the low-resolution structure layer I_ls;
(3) using the K-SVD algorithm to train on the high-resolution sample images I_h and the low-resolution sample images I_l in sample set I, obtaining the high-resolution dictionary D_h and the low-resolution dictionary D_l;
(4) using the K-SVD algorithm to train on each layered image of sample set I, obtaining the texture high-resolution dictionary D_ht, the structure high-resolution dictionary D_hs, the texture low-resolution dictionary D_lt, and the structure low-resolution dictionary D_ls;
(5) dividing the low-resolution video frame to be reconstructed into a region of interest and a non-interest region;
(6) dividing the region of interest of the low-resolution video frame to be reconstructed into a main region and a sub-region;
(7) super-resolving the main region by the double-dictionary learning method, super-resolving the sub-region by the single-dictionary learning method, and reconstructing the non-interest region by interpolation;
(8) fusing the reconstructed main region and sub-region into the reconstructed non-interest region to obtain the complete reconstructed image.
2. The video image layered reconstruction method based on sparse representation and dictionary learning according to claim 1, wherein step (2) uses morphological component analysis to separate the images in sample set I into texture and structure layers, as follows:
(2a) applying the DCT to each sample pair I^i, the transformed data constituting the high-resolution texture dictionary T_ht^i and the low-resolution texture dictionary T_lt^i;
(2b) applying the contourlet transform to each sample pair I^i, the transformed data constituting the high-resolution structure dictionary T_hs^i and the low-resolution structure dictionary T_ls^i;
(2c) using the matching pursuit algorithm to compute the optimal sparse representation of the high-resolution image I_h^i under the high-resolution texture dictionary T_ht^i and the high-resolution structure dictionary T_hs^i, this calculation being cast as the following optimization:
$$\{\alpha_{ht}^{i*},\alpha_{hs}^{i*}\}=\underset{\{\alpha_{ht}^{i},\,\alpha_{hs}^{i}\}}{\arg\min}\;\bigl\{\|\alpha_{ht}^{i}\|_{1}+\|\alpha_{hs}^{i}\|_{1}\bigr\}\quad \text{s.t.}\;\bigl\|I_{h}^{i}-T_{ht}^{i}\,\alpha_{ht}^{i}-T_{hs}^{i}\,\alpha_{hs}^{i}\bigr\|_{2}\le\varepsilon,$$
where ε is an empirical sparsity threshold, α_ht^i and α_hs^i are respectively the high-resolution texture and structure sparse coefficients computed by matching pursuit, and α_ht^{i*} and α_hs^{i*} are the optimal high-resolution texture and structure sparse coefficients obtained;
(2d) following step (2c), computing the optimal sparse representation of the low-resolution image I_l^i under the low-resolution texture dictionary T_lt^i and the low-resolution structure dictionary T_ls^i, obtaining the optimal low-resolution texture sparse coefficient α_lt^{i*} and the optimal low-resolution structure sparse coefficient α_ls^{i*};
(2e) from the high-resolution texture dictionary T_ht^i and the optimal high-resolution texture sparse coefficient α_ht^{i*}, obtaining the high-resolution texture layer I_ht^i = T_ht^i × α_ht^{i*}, with I_ht denoting the high-resolution texture layer of sample set I; and from the high-resolution structure dictionary T_hs^i and the optimal high-resolution structure sparse coefficient α_hs^{i*}, obtaining the high-resolution structure layer I_hs^i = T_hs^i × α_hs^{i*}, with I_hs denoting the high-resolution structure layer of sample set I;
(2f) from the low-resolution texture dictionary T_lt^i and the optimal low-resolution texture sparse coefficient α_lt^{i*}, obtaining the low-resolution texture layer I_lt^i = T_lt^i × α_lt^{i*}, with I_lt denoting the low-resolution texture layer of sample set I; and from the low-resolution structure dictionary T_ls^i and the optimal low-resolution structure sparse coefficient α_ls^{i*}, obtaining the low-resolution structure layer I_ls^i = T_ls^i × α_ls^{i*}, with I_ls denoting the low-resolution structure layer of sample set I.
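A toy sketch of the matching-pursuit decomposition of step (2c); tiny hypothetical orthonormal "texture" and "structure" dictionaries stand in for the DCT and contourlet dictionaries, and greedy matching pursuit stands in for the constrained ℓ1 program:

```python
import numpy as np

def matching_pursuit(x, D, eps=1e-6, max_iter=50):
    """Greedy matching pursuit: residual-driven selection of atoms (columns of D,
    assumed unit-norm)."""
    alpha = np.zeros(D.shape[1])
    r = x.astype(float).copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= eps:
            break
        corr = D.T @ r
        k = np.abs(corr).argmax()      # most correlated atom
        alpha[k] += corr[k]
        r -= corr[k] * D[:, k]
    return alpha

# Steps (2c)/(2e) in miniature: decompose over [T_t | T_s], then split layers.
T_t = np.eye(4)[:, :2]                 # hypothetical 'texture' atoms
T_s = np.eye(4)[:, 2:]                 # hypothetical 'structure' atoms
x = np.array([3.0, 0.0, 0.0, 2.0])
alpha = matching_pursuit(x, np.hstack([T_t, T_s]))
I_t = T_t @ alpha[:2]                  # texture layer
I_s = T_s @ alpha[2:]                  # structure layer
```

The two layers sum back to the input signal, mirroring the texture/structure split of the sample images.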
3. The video image layered reconstruction method based on sparse representation and dictionary learning according to claim 1, wherein step (3) uses the K-SVD algorithm to train on the images in sample set I, as follows:
(3a) partitioning the high-resolution sample images I_h in sample set I into overlapping blocks, obtaining the high-resolution sample block set Y_h = {y_h^m}, where y_h^m denotes one sample block, m = 1, 2, ..., M, and M is the number of blocks of I_h;
(3b) randomly selecting a sample block from the high-resolution sample block set Y_h, applying the DCT to it, and using the transformed data as the initial value of the high-resolution dictionary D_h;
(3c) using the K-SVD algorithm to update the high-resolution dictionary D_h through the following optimization until the sparse representation of the sample block set Y_h under D_h is optimal:
$$D_{h}^{*}=\underset{D_{h}}{\arg\min}\;\Bigl\{\sum_{m=1}^{M}\bigl\|y_{h}^{m}-D_{h}\,\alpha_{h}^{m}\bigr\|_{2}^{2}\Bigr\}\quad \text{s.t.}\;\|\alpha_{h}^{m}\|_{0}\le\varepsilon,\;m=1,2,\ldots,M$$
where α_h^m is the sparse representation of sample block y_h^m under D_h, ε is an empirical sparsity threshold, and D_h* is the optimal high-resolution dictionary obtained;
(3d) processing the low-resolution sample images I_l of sample set I following steps (3a)–(3c), obtaining the optimal low-resolution dictionary D_l*.
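The core K-SVD iteration of step (3c), updating one atom and its codes by a rank-1 SVD of the restricted residual, can be sketched as follows; the sparse-coding stage (e.g. OMP) is omitted, and the restriction to samples that actually use atom k follows standard K-SVD:

```python
import numpy as np

def ksvd_atom_update(Y, D, A, k):
    """Update atom k of dictionary D (and its row of code matrix A) by a rank-1
    SVD of the residual restricted to the samples using atom k."""
    omega = np.nonzero(A[k, :])[0]             # samples whose code uses atom k
    if omega.size == 0:
        return D, A                            # unused atom: leave unchanged
    E = Y[:, omega] - D @ A[:, omega] + np.outer(D[:, k], A[k, omega])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                          # unit-norm updated atom
    A[k, omega] = s[0] * Vt[0, :]              # updated codes for those samples
    return D, A
```

Cycling this update over all atoms, alternated with sparse coding, drives down the objective in the optimization above.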
4. The video image layered reconstruction method based on sparse representation and dictionary learning according to claim 1, wherein step (5) divides the low-resolution video frame to be reconstructed into a region of interest and a non-interest region, as follows:
(5a) performing moving-target detection on the low-resolution video frame to be reconstructed, obtaining a binary image of the moving targets;
(5b) using the closed contour of the moving-target binary image as the initial contour of the Snake algorithm, and obtaining the accurate closed contour of the moving targets through successive iterations of the Snake algorithm;
(5c) taking the minimum rectangular area of the frame containing the accurate closed contour of the moving targets as the region of interest P, and the part outside the region of interest as the non-interest region B.
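A sketch of steps (5a) and (5c); simple frame differencing stands in for the moving-target detector, and the minimum bounding rectangle is taken directly from the binary mask rather than from a Snake-refined contour (both simplifications are assumptions of this sketch):

```python
import numpy as np

def roi_split(prev, curr, thresh=0.1):
    """Binary motion mask by frame differencing, then the minimum rectangle
    containing it as the region of interest P; the rest is region B."""
    mask = np.abs(curr.astype(float) - prev.astype(float)) > thresh
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None, mask                     # no motion detected
    r0, c0 = rows.min(), cols.min()
    pos = (r0, c0, rows.max() - r0 + 1, cols.max() - c0 + 1)
    return pos, mask                          # Pos-style (row, col, del_row, del_col)
```

The returned tuple has the same layout as the Pos record of step 6.4), so it can be reused when fusing regions back together.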
5. The video image layered reconstruction method based on sparse representation and dictionary learning according to claim 1, wherein step (6) divides the region of interest of the low-resolution video frame to be reconstructed into a main region and a sub-region, as follows:
(6a) using the accurate closed contours of the moving targets obtained in step (5b), computing the pixel area of each target;
(6b) dividing the targets into primary targets and secondary targets according to pixel area;
(6c) taking the minimum rectangular area containing the primary targets as the main region P_m, and the part outside the main region as the sub-region P_sub.
6. The video image layered reconstruction method based on sparse representation and dictionary learning according to claim 1, wherein step (7) super-resolves the main region by the double-dictionary learning method, as follows:
(7a) following step (2), splitting the main region P_m into a texture layer P_mt and a structure layer P_ms;
(7b) selecting the reference images of the main region and, using the texture-layer and structure-layer sparse representations of the reference images, computing the texture-layer sparse representation β_mt* and the structure-layer sparse representation β_ms* of the main region;
(7c) from the texture-layer sparse representation β_mt* of the main region P_m and the texture high-resolution dictionary D_ht*, obtaining the reconstructed main-region texture layer P_mt* = D_ht* × β_mt*; and from the structure-layer sparse representation β_ms* of the main region and the structure high-resolution dictionary D_hs*, obtaining the reconstructed main-region structure layer P_ms* = D_hs* × β_ms*;
(7d) merging the reconstructed main-region texture layer P_mt* and the reconstructed main-region structure layer P_ms* to obtain the reconstructed image of the complete main region.
7. The video image layered reconstruction method based on sparse representation and dictionary learning according to claim 1, wherein step (7b) selects the reference images of the main region and, using the texture-layer and structure-layer sparse representations of the reference images, computes the texture-layer sparse representation β_mt* and the structure-layer sparse representation β_ms* of the main region, as follows:
(7b1) taking the three frames before and the three frames after the frame containing the main region as reference images, obtaining the reference image set P_r = {P_rj}, where P_rj denotes one reference frame, j = 1, 2, ..., 6;
(7b2) following step (2), splitting the reference image set P_r into texture layers P_rt = {P_rtj} and structure layers P_rs = {P_rsj}, where P_rtj is the texture layer of reference image P_rj and P_rsj its structure layer;
(7b3) partitioning the main-region texture layer P_mt into overlapping blocks, obtaining the block set Y_mt = {y_mt^n} of the main-region texture layer, where y_mt^n denotes one block, n = 1, 2, ..., N, and N is the number of blocks of P_mt;
(7b4) for each block y_mt^n of the main-region texture layer P_mt, searching the reference texture layer P_rtj for the best matching block y_rtj^{n*} with the three-step search algorithm;
(7b5) from the texture low-resolution dictionary D_lt*, computing the sparse representation of each matching block: β_rtj^{n*} = (D_lt*)^{-1} × y_rtj^{n*}, where (D_lt*)^{-1} is the inverse matrix of D_lt*;
(7b6) computing the weight coefficient w_jn of the best matching block y_rtj^{n*} in the reference texture layer P_rtj by the formula
$$w_{jn}=\frac{1}{\left(y_{mt}^{\,n}-y_{rtj}^{\,n*}\right)\left(y_{mt}^{\,n}-y_{rtj}^{\,n*}\right)^{T}};$$
(7b7) taking the weighted sum of the sparse representations β_rtj^{n*} of the matching blocks, obtaining the texture-layer sparse representation β_mt^n of the main-region texture block y_mt^n, with β_mt* denoting the texture-layer sparse representation of the main region;
(7b8) processing the structure layer P_ms of the main region following steps (7b3)–(7b7), obtaining the structure-layer sparse representation β_ms* of the main region.
CN201510789969.6A 2015-11-17 2015-11-17 Video image layered reconstruction method based on sparse representation and dictionary learning Active CN105741252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510789969.6A CN105741252B (en) Video image layered reconstruction method based on sparse representation and dictionary learning

Publications (2)

Publication Number Publication Date
CN105741252A true CN105741252A (en) 2016-07-06
CN105741252B CN105741252B (en) 2018-11-16

Family

ID=56296191

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510789969.6A Active CN105741252B (en) Video image layered reconstruction method based on sparse representation and dictionary learning

Country Status (1)

Country Link
CN (1) CN105741252B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101950365A (en) * 2010-08-30 2011-01-19 西安电子科技大学 Multi-task super-resolution image reconstruction method based on KSVD dictionary learning
CN102800076A (en) * 2012-07-16 2012-11-28 西安电子科技大学 Image super-resolution reconstruction method based on double-dictionary learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Michal Aharon et al., "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation", IEEE Transactions on Signal Processing *
Wei Yanxin, "Research on Image Super-Resolution Reconstruction Algorithms Based on Dictionary Training and Sparse Representation", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106558020A (en) * 2015-09-29 2017-04-05 北京大学 A kind of image rebuilding method and system based on network image block retrieval
CN106558020B (en) * 2015-09-29 2019-08-30 北京大学 A kind of image rebuilding method and system based on network image block retrieval
CN106570886B (en) * 2016-10-27 2019-05-14 南京航空航天大学 A kind of method for tracking target based on super-resolution rebuilding
CN106570886A (en) * 2016-10-27 2017-04-19 南京航空航天大学 Target tracking method based on super-resolution reconstruction
CN107871115B (en) * 2016-11-01 2021-05-04 中国科学院沈阳自动化研究所 Image-based submarine hydrothermal vent identification method
CN107871115A (en) * 2016-11-01 2018-04-03 中国科学院沈阳自动化研究所 A kind of recognition methods of the submarine hydrothermal solution spout based on image
CN106780331B (en) * 2016-11-11 2020-04-17 浙江师范大学 Novel super-resolution method based on neighborhood embedding
CN106780331A (en) * 2016-11-11 2017-05-31 浙江师范大学 A kind of new super-resolution method based on neighborhood insertion
CN106815922B (en) * 2016-11-14 2019-11-19 东阳市天杨建筑工程设计有限公司 A kind of paper money discrimination method and system based on cell phone application and cloud platform
CN106815922A (en) * 2016-11-14 2017-06-09 杭州数生科技有限公司 A kind of paper money discrimination method and system based on mobile phone A PP and cloud platform
CN106981047A (en) * 2017-03-24 2017-07-25 武汉神目信息技术有限公司 A kind of method for recovering high-resolution human face from low resolution face
CN107888915A (en) * 2017-11-07 2018-04-06 武汉大学 A kind of perception compression method of combination dictionary learning and image block
CN108765524B (en) * 2018-06-06 2022-04-05 微幻科技(北京)有限公司 Animation generation method and device based on panoramic photo
CN108765524A (en) * 2018-06-06 2018-11-06 微幻科技(北京)有限公司 Animation producing method based on distant view photograph and device
WO2020048484A1 (en) * 2018-09-04 2020-03-12 清华-伯克利深圳学院筹备办公室 Super-resolution image reconstruction method and apparatus, and terminal and storage medium
CN109325916A (en) * 2018-10-16 2019-02-12 哈尔滨理工大学 A kind of video image super-resolution reconstruction method based on rarefaction representation
CN109409285A (en) * 2018-10-24 2019-03-01 西安电子科技大学 Remote sensing video object detection method based on overlapping slice
CN109409285B (en) * 2018-10-24 2021-11-09 西安电子科技大学 Remote sensing video target detection method based on overlapped slices
CN109949257A (en) * 2019-03-06 2019-06-28 西安电子科技大学 Area-of-interest compressed sensing image reconstructing method based on deep learning
CN109949257B (en) * 2019-03-06 2021-09-10 西安电子科技大学 Region-of-interest compressed sensing image reconstruction method based on deep learning
CN110176029B (en) * 2019-04-29 2021-03-26 华中科技大学 Image restoration and matching integrated method and system based on level sparse representation
CN110443172A (en) * 2019-07-25 2019-11-12 北京科技大学 A kind of object detection method and system based on super-resolution and model compression
CN110428366B (en) * 2019-07-26 2023-10-13 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
WO2021017811A1 (en) * 2019-07-26 2021-02-04 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic device, and computer readable storage medium
CN110428366A (en) * 2019-07-26 2019-11-08 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium
CN110689508A (en) * 2019-08-15 2020-01-14 西安理工大学 Sparse structure manifold embedding-based IHS remote sensing image fusion method
CN110689508B (en) * 2019-08-15 2022-07-01 西安理工大学 Sparse structure manifold embedding-based IHS remote sensing image fusion method
CN111563866A (en) * 2020-05-07 2020-08-21 重庆三峡学院 Multi-source remote sensing image fusion method
CN112597983B (en) * 2021-03-04 2021-05-14 湖南航天捷诚电子装备有限责任公司 Method for identifying target object in remote sensing image and storage medium and system thereof
CN112597983A (en) * 2021-03-04 2021-04-02 湖南航天捷诚电子装备有限责任公司 Method for identifying target object in remote sensing image and storage medium and system thereof
CN113447111A (en) * 2021-06-16 2021-09-28 合肥工业大学 Visual vibration amplification method, detection method and system based on morphological component analysis
CN116310883A (en) * 2023-05-17 2023-06-23 山东建筑大学 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment
CN116310883B (en) * 2023-05-17 2023-10-20 山东建筑大学 Agricultural disaster prediction method based on remote sensing image space-time fusion and related equipment

Also Published As

Publication number Publication date
CN105741252B (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN105741252B (en) Video image layered reconstruction method based on sparse representation and dictionary learning
CN106127684B (en) Image super-resolution Enhancement Method based on forward-backward recutrnce convolutional neural networks
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN111062872B (en) Image super-resolution reconstruction method and system based on edge detection
CN110443842B (en) Depth map prediction method based on visual angle fusion
CN101976435B (en) Combination learning super-resolution method based on dual constraint
CN101877143B (en) Three-dimensional scene reconstruction method of two-dimensional image group
CN113362223B (en) Image super-resolution reconstruction method based on attention mechanism and two-channel network
CN103824272B (en) The face super-resolution reconstruction method heavily identified based on k nearest neighbor
CN106204447A (en) The super resolution ratio reconstruction method with convolutional neural networks is divided based on total variance
CN108090403A (en) A kind of face dynamic identifying method and system based on 3D convolutional neural networks
CN109344822B (en) Scene text detection method based on long-term and short-term memory network
CN105046672A (en) Method for image super-resolution reconstruction
CN103455991A (en) Multi-focus image fusion method
CN112967178B (en) Image conversion method, device, equipment and storage medium
CN111241963B (en) First person view video interactive behavior identification method based on interactive modeling
CN107680116A (en) A kind of method for monitoring moving object in video sequences
CN103473797B (en) Spatial domain based on compressed sensing sampling data correction can downscaled images reconstructing method
CN110930500A (en) Dynamic hair modeling method based on single-view video
CN104504672B (en) Low-rank sparse neighborhood insertion ultra-resolution method based on NormLV features
CN104077742B (en) Human face sketch synthetic method and system based on Gabor characteristic
CN108090873B (en) Pyramid face image super-resolution reconstruction method based on regression model
Zhou et al. PADENet: An efficient and robust panoramic monocular depth estimation network for outdoor scenes
CN106600533B (en) Single image super resolution ratio reconstruction method
CN109559278B (en) Super resolution image reconstruction method and system based on multiple features study

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant