CN112184549B - Super-resolution image reconstruction method based on space-time transformation technology - Google Patents
- Publication number: CN112184549B (application CN202010961932.8A)
- Authority: CN (China)
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
Abstract
The invention discloses a super-resolution image reconstruction method based on the space-time transformation technology. The method divides an image into its three components R, G and B; combines the R, G and B components of each low-resolution image sequence into vectors l_R, l_G, l_B; constructs, by the space-time transformation technique, the contribution matrices A_R, A_G, A_B of the low-resolution space-time points corresponding to each high-resolution space-time point; obtains the vectors h_R, h_G, h_B of the high-time-resolution image sequence by solving the linear system Ah = l over the high-resolution elements with a conjugate gradient algorithm; performs spatial super-resolution reconstruction from h_R, h_G and h_B by iterative back projection to obtain X_R, X_G and X_B corresponding to the R, G and B components of the super-resolution image sequence; and synthesizes X_R, X_G and X_B into the super-resolution image. The method effectively addresses low reconstruction resolution, low accuracy and complex computation, yielding images of good quality; it is simple to operate, low in cost and highly robust.
Description
Technical Field
The invention relates to the technical field of image reconstruction, in particular to a super-resolution image reconstruction method based on a space-time transformation technology.
Background
Pixel resolution directly affects image quality: higher resolution is more amenable to further processing and synthesis. Super-resolution image reconstruction combines several lower-resolution images into a single image of much higher resolution, breaking through the sampling-frequency limit of conventional sensing equipment. The resolution of a digital image describes not only its pixel count but also the level of detail it can resolve, and is thus a key measure of image detail. Image interpolation can scale an image up or down arbitrarily, but does not actually increase or decrease its resolution; super-resolution reconstruction, by contrast, exploits the relative motion between several lower-resolution images of the same scene and fuses them into one image of very high resolution. Its biggest advantage is cost reduction, since existing lower-resolution images can still be exploited.
In recent years, super-resolution image reconstruction has become a research hot spot in related fields such as video surveillance, military applications and medicine. Because traditional reconstruction tends to lose detail data, or to introduce edge distortion and noise during detail enhancement, cross-scale, feature-combining super-resolution reconstruction has been proposed. First, a mapping between the pixel and gradient features of high- and low-resolution images is constructed with a nearest-neighbour method, exploiting the cross-scale characteristics of the image; the initial input image is then reconstructed with this mapping, and its high-frequency data are obtained by singular-value thresholding; finally, the high-frequency data are processed block-wise with gradient-feature mapping and fused onto the high-resolution image to obtain the final result. This method effectively enhances image detail, improves resolution and improves visual effect, but its pipeline is complex and wastes a great deal of cost.
Xu Jun and Liu Hui found that traditional image synthesis in the medical field suffers from low resolution, seriously affecting the accuracy of clinical diagnosis and treatment, and therefore proposed reconstructing medical images with non-local autoregressive learning. A new model is built from the specific non-local characteristics of medical images, using an autoregressive function and a classification dictionary obtained by clustering, finally yielding the reconstructed high-resolution image. This method greatly improves the resolution of medical images, but its computation remains too complicated and wastes a great amount of time.
Yang et al., addressing the low speed and poor image quality of current resolution-oriented image reconstruction, reconstruct two or more images in a block-symmetric overlapping manner. The lower-resolution image sequence is registered with ORB, the processed images undergo PSyCo reconstruction, and the reconstructed images are fused by pixel-wise grey-level maximum to obtain an ultra-high-resolution image. The reconstructed images have higher resolution, but the accuracy of the reconstruction is low, seriously affecting image quality.
Disclosure of Invention
In view of these problems, the invention aims to provide a super-resolution image reconstruction method based on the space-time transformation technique that effectively addresses low reconstruction resolution, low accuracy and complex computation, thereby obtaining excellent image quality.
To achieve the above purpose, the invention adopts the following technical scheme.
The super-resolution image reconstruction method based on the space-time transformation technology is characterized by comprising the following steps:
S1: dividing the image into its R, G and B components;
S2: combining the R, G and B components of each low-resolution image sequence into vectors l_R, l_G, l_B, where l_R, l_G, l_B denote the vectors of all low-resolution observations in the corresponding component;
S3: constructing, by the space-time transformation technique, the contribution matrices A_R, A_G, A_B of the low-resolution space-time points corresponding to each high-resolution space-time point;
S4: obtaining the vectors h_R, h_G, h_B of the high-time-resolution image sequence from the linear system Ah = l over the high-resolution elements, where h_R, h_G, h_B denote the component vectors of all unknown high-resolution values in the reconstructed image frame sequence X;
S5: performing spatial super-resolution reconstruction from the h_R, h_G and h_B obtained in step S4 to obtain X_R, X_G and X_B corresponding to the R, G and B components of the super-resolution image sequence;
S6: synthesizing the components X_R, X_G and X_B to finally obtain the super-resolution image.
Further, the specific operation of step S3 comprises the following steps:
S31: matching the low-resolution image sequences to obtain N of them, and space-time down-sampling the N observation models to obtain N sampling matrices D_1, D_2, ..., D_N based on the space-time transformation;
S32: finding the coordinate system in which object-space points do not change with the motion of the field of view, yielding the space-time coordinate transformation matrices T_1, T_2, ..., T_N of the N low-resolution images;
S33: acquiring the camera point-spread functions H_1, H_2, ..., H_N of the N low-resolution images and the temporal blur matrices M_1, M_2, ..., M_N corresponding to the low-resolution image sequences;
S34: establishing an observation model of the several images;
S35: from the observation model, the relation between the i-th low-resolution image sequence and the predicted high-resolution sequence X is Y_i = D_i M_i H_i T_i X + n_i, 1 ≤ i ≤ N, where n_i denotes the observation noise of the i-th low-resolution image; the Y_i together define the contribution matrix A of the low-resolution space-time points corresponding to each high-resolution space-time point;
S36: the R, G and B components of this contribution matrix A are the contribution matrices A_R, A_G, A_B.
Further, in step S4 the vectors h_R, h_G, h_B of the high-time-resolution image sequence are obtained with a conjugate gradient algorithm, whose specific operation comprises the following steps:
S41: from the linear system Ah = l over the high-resolution elements, form the least-squares image-sequence model min E(h) = min{ ||Ah − l||² };
S42: regularize the least-squares model to obtain the image-sequence super-resolution reconstruction equation min E(h) = min{ ||Ah − l||² + α||WCh||² }, where W is a diagonal weight matrix describing the expected regularization at each space-time point, α is the overall regularization factor, and C is a matrix of space-time second-order derivatives, chosen as the Laplace operator;
S43: initialize β = 0, h_0 = 0, b = Aᵀl, r = b, p = b;
S44: iterate for k = 1, 2, ..., with search direction p = r + βp and q = (AᵀA + αCᵀWᵀWC)p; the search step is α_k = rᵀr / pᵀq, and the gradient is updated as r_0 = r, r = r_0 − α_k q, with β = rᵀr / r_0ᵀr_0 for the next direction;
S45: the iterate is then h_k = h_{k−1} + α_k p, where h = h_R, h_G, h_B in turn.
Further, step S5 performs the spatial super-resolution reconstruction by iterative back projection, whose update can be written in the standard form
X^{m+1} = X^{m} + λ[ Σ_{i=1}^{p} H^{BP}(Y_i − Y_i^{m}) − Δ(X^{m} − f) ]
where m is the iteration number; p the number of frames in the low-resolution image; X^{m+1} and X^{m} the super-resolution frames obtained at iterations m+1 and m; H^{BP} the iterative back-projection kernel; Y_i^{m} the i-th low-resolution frame obtained from X^{m} through the low-resolution observation model; λ the gradient step; f the reference frame; and Δ the Laplace operator of second differences.
The beneficial effects of the invention are as follows:
the super-resolution image reconstruction method based on the space-time transformation technology adopts a matching method to reconstruct a high-time resolution image sequence, the obtained sequence adopts an iterative back projection mode to carry out space-time super-resolution reconstruction processing, finally a super-resolution image is obtained, and through simulation experiment verification, the method has the advantages of simple calculation, high reconstruction speed, image resolution improvement, and effective solving of the problems of deformation, noise, blurring and the like in the image reconstruction process.
Drawings
FIG. 1 is the observation model of several low-resolution images according to the invention;
FIG. 2 shows image reconstruction on the standard foreman sequence in the simulation experiments of the invention;
FIG. 3 shows image reconstruction on the real photographed text sequence in the simulation experiments of the invention;
FIG. 4 compares iterative computation on the foreman sequence in the simulation experiments of the invention;
FIG. 5 compares iterative computation on the real text sequence in the simulation experiments of the invention.
Detailed Description
To enable those skilled in the art to better understand the technical solution of the invention, it is further described below with reference to the accompanying drawings and examples.
A super-resolution image reconstruction method based on the space-time transformation technology comprises the following steps.
S1: dividing the image into its R, G and B components;
S2: combining the R, G and B components of each low-resolution image sequence into vectors l_R, l_G, l_B, where l_R, l_G, l_B denote the vectors of all low-resolution observations in the corresponding component;
S3: constructing, by the space-time transformation technique, the contribution matrices A_R, A_G, A_B of the low-resolution space-time points corresponding to each high-resolution space-time point.
For some space-variant degradation functions that are hard to characterize directly, such as linear and radial motion, one can often find a coordinate space in which the coordinates of object-space points do not change with the motion of the field of view. Image restoration can then be completed in that space with ordinary restoration methods, after which the result is transformed back to the original image space; this is the basic process of image space-time transformation.
Linear motion changes the space image, and the degree of change is obtained from the system's field curvature and aspect difference. If the image takes a certain form, it can be converted by a space-conversion model of the type P(u, v) → P'(x, y), where P denotes the initial image, (u, v) and (x, y) are object-space and image-space coordinates, α(x, y), β(u, v), c_n(x, y), b_n(u, v), c_{I2}(x, y), b_{I2}(u, v) are the point coordinates of the corresponding invertible functions, α(x, y) ≠ 0 and β(u, v) ≠ 0 over the whole space, and p_I(·) admits a Fourier transform (the explicit transform formulas appear only as figures in the source). Space-time coordinate transformation is therefore a direct and simple super-resolution image reconstruction route and lays the foundation for improving the accuracy of the subsequent image reconstruction.
Specifically, in the invention, the contribution matrices A_R, A_G, A_B of the low-resolution space-time points corresponding to each high-resolution space-time point are constructed with the space-time transformation technique as follows.
S31: matching the low-resolution image sequences to obtain N of them, and space-time down-sampling the N observation models to obtain N sampling matrices D_1, D_2, ..., D_N based on the space-time transformation;
S32: finding the coordinate system in which object-space points do not change with the motion of the field of view, yielding the space-time coordinate transformation matrices T_1, T_2, ..., T_N of the N low-resolution images;
S33: acquiring the camera point-spread functions H_1, H_2, ..., H_N of the N low-resolution images and the temporal blur matrices M_1, M_2, ..., M_N corresponding to the low-resolution image sequences;
S34: establishing the observation model of the several images shown in FIG. 1, where n_i denotes the observation noise of the i-th low-resolution image;
S35: from the observation model, the relation between the i-th low-resolution image sequence and the predicted high-resolution sequence X is Y_i = D_i M_i H_i T_i X + n_i, 1 ≤ i ≤ N; the Y_i together define the contribution matrix A of the low-resolution space-time points corresponding to each high-resolution space-time point;
S36: the R, G and B components of this contribution matrix A are the contribution matrices A_R, A_G, A_B.
S4: vector h of the high-time-resolution image sequence is derived using the linear formula ah=l of the high-resolution elements R 、h G 、h B The method comprises the steps of carrying out a first treatment on the surface of the Wherein h is R 、h G 、h B Representing all unknown high-resolution values of the different component vectors in the reconstructed image frame sequence X;
specifically, a conjugate gradient algorithm is utilized to obtain a vector h of a high-time resolution image sequence R 、h G 、h B 。
S41: the least squares image sequence model min E (h) =min { ||ah-l||is obtainable according to the linear formula ah=l of the high resolution element 2 };
S42: normalizing the least square image sequence model to obtain an image sequence supergraphResolution reconstruction equation min E (h) =min { Ah-l I 2 +α||WCh|| 2 -a }; wherein W represents a diagonal weight matrix function describing the expected normalization at each time point; alpha represents an overall normalization factor; c represents a matrix recording space-time square valence derivatives, and is selected as a laplace operator;
s43: initial setting β=0, h 0 =0,b=A T l,r=b,p=b;
S44: iterative operations are performed from k=1, 2., where the algorithm search direction is p=r+βp, q= (a T A+αC T W T WC) p; algorithm search step size is alpha=r T r/p T p, algorithm gradient r 0 =r,r=r 0 -αq,
S45: then the actual search h is derived k =h k-1 +αp, where h=h R 、h G 、h B 。
S5: h derived from step S4 R 、h G And h B Reconstructing the spatial super-resolution by numerical value to obtain X corresponding to R, G and B components in the super-resolution image sequence R 、X G And X B ;
Specifically, the spatial super-resolution reconstruction uses iterative back projection, whose update can be written in the standard form
X^{m+1} = X^{m} + λ[ Σ_{i=1}^{p} H^{BP}(Y_i − Y_i^{m}) − Δ(X^{m} − f) ]
where m is the iteration number; p the number of frames in the low-resolution image; X^{m+1} and X^{m} the super-resolution frames obtained at iterations m+1 and m; H^{BP} the iterative back-projection kernel; Y_i^{m} the i-th low-resolution frame obtained from X^{m} through the low-resolution observation model; λ the gradient step; f the reference frame; and Δ the Laplace operator of second differences.
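A toy version of the iterative back-projection loop can be sketched as follows. Two simplifying assumptions are made and should be read as such: the back-projection kernel is taken as the transpose of the forward operator, and the reference-frame regularization term is omitted.

```python
import numpy as np

def ibp(Y_list, ops, X0, lam=0.5, n_iter=20):
    """Iterative back-projection sketch. Each ops[i] = (D, M, H, T) as
    explicit matrices; the residual between each observed frame and the
    frame simulated from the current estimate is back-projected and
    accumulated, then applied with gradient step lam."""
    X = X0.astype(float)
    for _ in range(n_iter):
        grad = np.zeros_like(X)
        for Y, (D, M, H, T) in zip(Y_list, ops):
            F = D @ M @ H @ T              # forward observation operator
            grad += F.T @ (Y - F @ X)      # back-project the residual
        X = X + lam * grad / len(Y_list)
    return X

# toy usage: one frame, identity operators, so the estimate converges to Y
Y = np.array([1., 2., 3., 4.])
eye = np.eye(4)
X = ibp([Y], [(eye, eye, eye, eye)], X0=np.zeros(4), lam=0.5, n_iter=40)
```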
S6: r, G and B component X R 、X G And X B And (5) performing component matrix synthesis to finally obtain a super-resolution image.
Simulation experiment:
To verify the effectiveness of the super-resolution image reconstruction method based on the space-time transformation technology, two groups of simulation experiments were carried out. Group 1 uses the standard foreman image sequence in CIF format, in which the object moves strongly, parts of the face deform and shift, and the scene also changes. Group 2 is a text image sequence photographed in reality, whose scene contains Chinese characters, table grid lines, numbers and so on, with the camera shaking drastically during shooting.
In the group 1 simulation, frame 33 of the foreman sequence is set as the intermediate reference frame; 3:1 temporal sampling yields 6 high-resolution initial frames, which are blurred by convolution with a 3 x 3 Gaussian kernel; 2:1 down-sampling then yields the 6 low-resolution frames. Finally, the reference frame is bilinearly interpolated and used as the original prediction for super-resolution reconstruction.
For the group 2 simulation, since the sequence is irregular, any frame can be taken as the intermediate reference frame, and the 6 low-resolution frames are obtained in the same way as in group 1.
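The degradation used above to synthesize the low-resolution test frames, a 3 x 3 Gaussian blur followed by 2:1 decimation, can be sketched as follows; the zero-padding boundary handling is our choice, since the source does not state it.

```python
import numpy as np

GAUSS3 = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float) / 16.0   # 3x3 Gaussian kernel

def blur3(img):
    """Convolve with the 3x3 Gaussian (zero padding at the border)."""
    padded = np.pad(img, 1)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += GAUSS3[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def degrade(img):
    """Gaussian blur then 2:1 decimation, as in the simulation setup."""
    return blur3(img)[::2, ::2]
```

Interior pixels of a constant image are preserved (the kernel sums to 1); only the zero-padded border is attenuated.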
The z-axis-weight-based three-dimensional reconstruction method for wheat grain images (Zhang Gongtao, Chang Yan, Tan Lian, et al., "Three-dimensional reconstruction of wheat grain images based on z-axis weight", Acta Optica Sinica, 2019, 39(3): 127-135), hereinafter method one, is compared against the super-resolution image reconstruction method of the invention. The group 1 results are shown in FIG. 2, where (a) is the initial high-resolution reference image, (b) the low-resolution reference image, (c) the image reconstructed by method one, and (d) the image reconstructed by the method of the invention.
The group 2 results are shown in FIG. 3, where (a) is the initial high-resolution reference image, (b) the low-resolution reference image, (c) the image reconstructed by method one, and (d) the image reconstructed by the method of the invention.
In both groups the two methods use the same number of iterations, and the parameters of each method are set so that the reconstructed image quality is close to optimal, with parameters strictly controlled: the correction residual threshold δ_0 of method one is set to 4 in group 1 and 3 in group 2, with the relaxation parameter set to 2; the inverse-proportion parameter a in the correction residual function of the invention's method is a = 40 in group 1 and a = 20 in group 2.
As seen in FIG. 2 and FIG. 3, the low-resolution images in both experiments carry heavy noise, and the severe motion introduces large errors into image-frame motion prediction. Method one cannot handle these conditions effectively: its results lose some detail, especially on the table lines, and show very strong noise (FIG. 2(c) and FIG. 3(c)).
The super-resolution image reconstruction method based on the space-time transformation technology effectively suppresses noise amplification and reduces the prediction errors caused by strong motion. As seen in FIG. 2(d) and FIG. 3(d), image details are clearer, edges are markedly improved over method one's results, and noise is well controlled, yielding excellent image quality.
Table 1 below lists the actual index values over the iterations of method one and of the invention's method in the two groups of experiments; FIG. 4 and FIG. 5 show, for the two sequences respectively, how these indices vary with the number of iterations.
Table 1: actual index values of the images reconstructed by the different methods
The curve comparison shows that, as the iteration count grows, the signal-to-noise ratio and mean-square error of both methods quickly approach stable values, but the invention's method attains a larger signal-to-noise ratio and a smaller mean-square error. The improved SNR and reduced MSE make the reconstructed image visually excellent, confirming the higher effectiveness of the proposed space-time-transformation super-resolution reconstruction method.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above; the embodiments and descriptions merely illustrate its principles, and various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.
Claims (5)
1. The super-resolution image reconstruction method based on the space-time transformation technology is characterized by comprising the following steps,
s1: dividing the image into R, G and B components by adopting a component mode;
s2: the R, G and B components of each low resolution image sequence are combined into a vector l R 、l G 、l B Wherein l R 、l G 、l B A vector representing all low resolution observations in the corresponding component;
s3: constructing a contribution matrix A of each high-resolution space-time point corresponding to a low-resolution space-time point by using space-time transformation technology R 、A G 、A B ;
S4: vector h of the high-time-resolution image sequence is derived using the linear formula ah=l of the high-resolution elements R 、h G 、h B The method comprises the steps of carrying out a first treatment on the surface of the Wherein h is R 、h G 、h B Different component vectors representing all unknown high-resolution values in the reconstructed image frame sequence X;
s5: h derived from step S4 R 、h G And h B Reconstructing the spatial super-resolution by numerical value to obtain X corresponding to R, G and B components in the super-resolution image sequence R 、X G And X B ;
S6: r, G and B component X R 、X G And X B Synthesizing to finally obtain a super-resolution image;
the specific operation of step S3 comprises the following steps:
S31: registering the low-resolution image sequences to obtain N low-resolution image sequences, and performing space-time down-sampling on the N low-resolution image-sequence observation models to obtain N space-time-transformation-based sampling matrices D_1, D_2, …, D_N;
S32: finding the coordinates of the points in object space that do not change with the motion of the field of view, and obtaining the space-time coordinate transformation matrices T_1, T_2, …, T_N of the N low-resolution images;
S33: acquiring the camera point-spread functions H_1, H_2, …, H_N of the N low-resolution images and the temporal blur matrices M_1, M_2, …, M_N corresponding to the low-resolution image sequences;
S34: establishing an observation model of the multiple images;
S35: deriving from the observation model the relation between the i-th low-resolution image sequence and the predicted high-resolution sequence X as Y_i = D_i M_i H_i T_i X + n_i, 1 ≤ i ≤ N, where n_i denotes the observation noise of the i-th low-resolution image; the Y_i together determine the contribution matrix A of the low-resolution space-time points corresponding to each high-resolution space-time point;
S36: the R, G and B components of the contribution matrix A are the contribution matrices A_R, A_G and A_B.
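The observation model of steps S31–S35 composes four linear operators per low-resolution sequence. The following is an illustrative sketch, not the patent's implementation: it builds toy D, M, H and T matrices for a short 1-D signal and forms one low-resolution observation Y_i = D_i M_i H_i T_i X + n_i. All sizes and operator choices (circular shift for T, a 3-tap blur for H, identity M, 2x down-sampling for D) are assumptions made for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: a high-resolution signal of 8 samples observed at 4 samples.
n_hi, n_lo = 8, 4

# T: space-time coordinate transformation (here a one-sample circular shift).
T = np.roll(np.eye(n_hi), 1, axis=1)

# H: camera point-spread function as a small blur (3-tap moving average).
H = np.zeros((n_hi, n_hi))
for i in range(n_hi):
    for j in (i - 1, i, i + 1):
        H[i, j % n_hi] = 1.0 / 3.0

# M: temporal blur matrix (identity here, i.e. no motion blur during exposure).
M = np.eye(n_hi)

# D: space-time down-sampling matrix, keeping every second sample.
D = np.eye(n_hi)[::2]

# Contribution of the high-resolution points to this observation: A_i = D_i M_i H_i T_i.
A = D @ M @ H @ T

x = rng.standard_normal(n_hi)            # unknown high-resolution values h
noise = 0.01 * rng.standard_normal(n_lo)  # observation noise n_i
y = A @ x + noise                         # low-resolution observation Y_i

print(A.shape)  # (4, 8): each low-res sample mixes several high-res samples
```

Stacking the A_i for all N sequences (and per color component) gives the A_R, A_G, A_B of step S36.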
2. The super-resolution image reconstruction method based on the space-time transformation technique according to claim 1, wherein in step S4 the vectors h_R, h_G and h_B of the high-resolution image sequence are obtained using a conjugate gradient algorithm.
3. The super-resolution image reconstruction method based on the space-time transformation technique according to claim 2, wherein the specific operation of obtaining the vectors h_R, h_G and h_B of the high-resolution image sequence by the conjugate gradient algorithm comprises the following steps:
S41: from the linear equation Ah = l of the high-resolution elements, the least-squares image-sequence model min E(h) = min{ ||Ah − l||^2 } is obtained;
S42: regularizing the least-squares image-sequence model yields the image-sequence super-resolution reconstruction equation min E(h) = min{ ||Ah − l||^2 + α||WCh||^2 }, where W denotes a diagonal weight matrix describing the expected normalization at each space-time point, α denotes the overall normalization factor, and C denotes a matrix recording the space-time second-order derivatives, chosen as the Laplacian operator;
S43: initialize β = 0, h_0 = 0, b = A^T l, r = b, p = b;
S44: iterate for k = 1, 2, …: the search direction is p = r + βp and q = (A^T A + αC^T W^T WC)p; the search step size is γ = r^T r / p^T q; the gradient is updated as r_0 = r, r = r_0 − γq, and β = r^T r / r_0^T r_0;
S45: the solution is then updated as h_k = h_{k−1} + γp, where h = h_R, h_G, h_B.
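Steps S41–S45 amount to a standard conjugate-gradient solve of the regularized normal equations (A^T A + αC^T W^T WC)h = A^T l. Below is a minimal NumPy sketch under the assumption of small dense matrices; A, C, W, α and all sizes are illustrative toy data, not values from the patent.

```python
import numpy as np

def cg_superres(A, l, C, W, alpha, iters=50, tol=1e-10):
    """Conjugate gradient for min ||A h - l||^2 + alpha * ||W C h||^2,
    i.e. the normal equations (A^T A + alpha C^T W^T W C) h = A^T l."""
    M = A.T @ A + alpha * C.T @ W.T @ W @ C
    b = A.T @ l
    h = np.zeros_like(b)      # h_0 = 0
    r = b.copy()              # initial residual: r = b - M @ h_0 = b
    p = r.copy()              # initial search direction
    rr = r @ r
    for _ in range(iters):
        q = M @ p
        gamma = rr / (p @ q)  # step size gamma = r^T r / p^T q
        h = h + gamma * p
        r = r - gamma * q
        rr_new = r @ r
        if rr_new < tol:
            break
        beta = rr_new / rr    # conjugacy coefficient beta = r^T r / r_0^T r_0
        p = r + beta * p
        rr = rr_new
    return h

# Toy problem: underdetermined A, 1-D Laplacian as C, identity weights W.
rng = np.random.default_rng(1)
n = 12
A = rng.standard_normal((8, n))
h_true = rng.standard_normal(n)
l = A @ h_true
C = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # discrete Laplacian
W = np.eye(n)
h = cg_superres(A, l, C, W, alpha=1e-3, iters=200)
```

Because the system is underdetermined (8 observations, 12 unknowns), the Laplacian penalty ||WCh||^2 selects a smooth solution among the many that fit the data.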
4. The super-resolution image reconstruction method based on the space-time transformation technique according to claim 3, wherein step S5 reconstructs the spatial super-resolution using an iterative back-projection method.
5. The super-resolution image reconstruction method based on the space-time transformation technique according to claim 4, wherein the iterative back-projection reconstruction of the spatial super-resolution can be expressed by an update formula (rendered only as an image in the source document and not reproduced here), in which m denotes the number of iterations; P denotes the number of frames in the low-resolution image sequence; X^(m+1) denotes the super-resolution image frame obtained at the (m+1)-th iteration; X^m denotes the super-resolution image frame obtained at the m-th iteration; a further symbol denotes the number of iterative back-projection passes; another denotes the low-resolution image frame obtained from the low-resolution observation model; λ denotes the gradient step size; f denotes the reference frame; and Δ denotes the Laplacian operator of the second-order differential.
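Since the patent's exact update formula survives only as an image, the sketch below shows the generic iterative back-projection scheme under the observation model of claim 1, not the claimed formula itself: at each iteration the residual between each observed low-resolution frame Y_i and the frame the current estimate would produce, A_i X^m, is back-projected into the high-resolution grid. The selection operators A1, A2 and all sizes are illustrative assumptions.

```python
import numpy as np

def ibp(Y_list, A_list, n_hi, lam=0.5, iters=100):
    """Generic iterative back-projection: repeatedly back-project the
    residual between each observed low-resolution frame Y_i and the
    frame simulated from the current estimate, A_i @ X."""
    X = np.zeros(n_hi)
    P = len(Y_list)  # number of low-resolution frames
    for _ in range(iters):
        correction = np.zeros(n_hi)
        for Y, A in zip(Y_list, A_list):
            residual = Y - A @ X          # observed minus simulated low-res frame
            correction += A.T @ residual  # back-project error into high-res space
        X = X + (lam / P) * correction    # gradient step of size lambda
    return X

# Toy usage: two complementary down-sampling operators together observe
# every high-resolution sample, so the estimate converges to the truth.
rng = np.random.default_rng(2)
x_true = rng.standard_normal(8)
A1 = np.eye(8)[0::2]  # even samples
A2 = np.eye(8)[1::2]  # odd samples
X = ibp([A1 @ x_true, A2 @ x_true], [A1, A2], n_hi=8)
```

With noisy observations or overlapping blur kernels the iteration is stopped early or regularized (e.g. with the Laplacian term of claim 3) rather than run to convergence.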
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010961932.8A CN112184549B (en) | 2020-09-14 | 2020-09-14 | Super-resolution image reconstruction method based on space-time transformation technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112184549A (en) | 2021-01-05
CN112184549B (en) | 2023-06-23
Family
ID=73920953
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010961932.8A Active CN112184549B (en) | 2020-09-14 | 2020-09-14 | Super-resolution image reconstruction method based on space-time transformation technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112184549B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114897697A (en) * | 2022-05-18 | 2022-08-12 | Beihang University | Super-resolution reconstruction method for camera imaging model |
CN115128789B (en) * | 2022-07-07 | 2023-06-30 | Institute of Optics and Electronics, Chinese Academy of Sciences | Super-diffraction structured-illumination microscopic imaging system and method based on hyperbolic metamaterial |
CN115994858B (en) * | 2023-03-24 | 2023-06-06 | Guangdong Ocean University | Super-resolution image reconstruction method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441765A (en) * | 2008-11-19 | 2009-05-27 | Xidian University | Adaptive regularized super-resolution image reconstruction method preserving edge sharpness |
CN101644773A (en) * | 2009-03-20 | 2010-02-10 | Institute of Acoustics, Chinese Academy of Sciences | Real-time frequency-domain super-resolution direction estimation method and device |
CN102073866A (en) * | 2010-12-27 | 2011-05-25 | Tsinghua University | Video super-resolution method using a space-time Markov random field model |
CN103400346A (en) * | 2013-07-18 | 2013-11-20 | Tianjin University | Video super-resolution method based on an adaptive superpixel-oriented autoregression model |
CN103440676A (en) * | 2013-08-13 | 2013-12-11 | Southern Medical University | Motion-estimation-based super-resolution reconstruction of coronal and sagittal plane images of lung 4D-CT images |
CN106157249A (en) * | 2016-08-01 | 2016-11-23 | Xidian University | Single-image super-resolution reconstruction algorithm based on optical flow and sparse neighborhood embedding |
CN109658361A (en) * | 2018-12-27 | 2019-04-19 | Liaoning Technical University | Moving-scene super-resolution reconstruction method taking motion estimation error into account |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102800071B (en) * | 2012-06-20 | 2015-05-20 | Nanjing University of Aeronautics and Astronautics | POCS super-resolution reconstruction method for image sequences |
CN104376547A (en) * | 2014-11-04 | 2015-02-25 | Research Institute 8357, Third Academy of China Aerospace Science and Industry Corporation | Motion-blurred image restoration method |
DE102017123969B4 (en) * | 2017-10-16 | 2019-11-28 | Conti Temic Microelectronic Gmbh | Method for the classification of planar structures |
CN108280804B (en) * | 2018-01-25 | 2021-03-16 | Hubei University | Multi-frame image super-resolution reconstruction method |
CN109255822B (en) * | 2018-07-13 | 2023-02-24 | Space Engineering University, PLA Strategic Support Force | Multi-scale coding and multi-constraint compressed sensing reconstruction method for temporal super-resolution |
CN110060209B (en) * | 2019-04-28 | 2021-09-24 | Beijing Institute of Technology | MAP-MRF super-resolution image reconstruction method constrained by attitude information |
CN110458756A (en) * | 2019-06-25 | 2019-11-15 | Central South University | Deep-learning-based super-resolution method and system for blurred video |
CN116664400A (en) * | 2019-09-24 | 2023-08-29 | Nanjing Institute of Technology | Video high space-time resolution signal processing method |
CN111583330B (en) * | 2020-04-13 | 2023-07-04 | China University of Geosciences (Wuhan) | Multi-scale space-time Markov sub-pixel positioning method and system for remote sensing images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||