CN112184549B - Super-resolution image reconstruction method based on space-time transformation technology - Google Patents
- Publication number
- CN112184549B CN202010961932.8A CN202010961932A
- Authority
- CN
- China
- Prior art keywords
- resolution
- low
- super
- resolution image
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 69
- 230000009466 transformation Effects 0.000 title claims abstract description 29
- 238000005516 engineering process Methods 0.000 title claims abstract description 20
- 239000013598 vector Substances 0.000 claims abstract description 19
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 15
- 239000011159 matrix material Substances 0.000 claims abstract description 14
- 230000006870 function Effects 0.000 claims description 11
- 230000002123 temporal effect Effects 0.000 claims description 8
- 230000008859 change Effects 0.000 claims description 7
- 238000012545 processing Methods 0.000 claims description 7
- 238000010606 normalization Methods 0.000 claims description 6
- 238000005070 sampling Methods 0.000 claims description 6
- 238000000844 transformation Methods 0.000 claims description 3
- 238000004364 calculation method Methods 0.000 abstract description 7
- 238000004088 simulation Methods 0.000 description 16
- 230000008569 process Effects 0.000 description 8
- 238000013507 mapping Methods 0.000 description 3
- 241000209140 Triticum Species 0.000 description 2
- 235000021307 Triticum Nutrition 0.000 description 2
- 230000015572 biosynthetic process Effects 0.000 description 2
- 235000013339 cereals Nutrition 0.000 description 2
- 238000012804 iterative process Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000003786 synthesis reaction Methods 0.000 description 2
- 230000003321 amplification Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 238000003759 clinical diagnosis Methods 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000007499 fusion processing Methods 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000003199 nucleic acid amplification method Methods 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a super-resolution image reconstruction method based on space-time transformation technology. The image is split into its R, G and B components; the R, G and B components of each low-resolution image sequence are assembled into the vectors l_R, l_G and l_B; the space-time transformation technique is used to construct the contribution matrices A_R, A_G and A_B of the low-resolution space-time points corresponding to each high-resolution space-time point; the linear system Ah = l over the high-resolution elements is solved with the conjugate gradient algorithm to obtain the vectors h_R, h_G and h_B of the high temporal-resolution image sequence; from the values of h_R, h_G and h_B, the spatial super-resolution is reconstructed by iterative back-projection, giving X_R, X_G and X_B for the R, G and B components of the super-resolution image sequence; X_R, X_G and X_B are then combined to obtain the super-resolution image. The image reconstruction method of the invention effectively addresses the low resolution, low accuracy and heavy computation of existing image reconstruction, yielding images of good quality; it is simple to operate, low in cost and highly robust.
Description
Technical Field
The invention relates to the technical field of image reconstruction, and in particular to a super-resolution image reconstruction method based on space-time transformation technology.
Background
Pixel resolution is a direct factor in image quality; higher resolution makes further processing and synthesis of an image easier. Super-resolution image reconstruction combines several low-resolution images to rebuild a single image of much higher resolution. Here "super" means that the reconstruction can go beyond the sampling limits of conventional sensing devices. The resolution of a digital image does not merely denote its pixel count; it also reflects how well fine detail can be resolved, and is therefore a key measure of image detail. Image interpolation can enlarge or shrink an image by an arbitrary factor, but it does not raise or lower the image's true resolving power. Super-resolution reconstruction, by contrast, uses the relative motion among several low-resolution images of the same scene to combine them into a single image of very high resolution. The greatest advantage of this approach is that it lowers cost while still making use of existing low-resolution images.
In recent years, super-resolution image reconstruction has become a research hotspot in video surveillance, military and medical applications. Noting that conventional methods easily lose detail during reconstruction, or introduce edge distortion and noise when details are enhanced, Zhan Yuli, Chi Jing et al. proposed super-resolution reconstruction based on a combination of cross-scale and feature methods. A nearest-neighbour scheme built on the image's cross-scale features first establishes a mapping between pixel and gradient characteristics of high- and low-resolution images; the mapping is then used to reconstruct the initial input image, and its high-frequency data are recovered by singular-value thresholding; finally, gradient-feature mapping superimposes the high-frequency data block by block and fuses the result into the high-resolution image to obtain the final output. The method effectively enhances image detail, raises resolution and improves visual quality, but its workflow is complicated and costly.
Xu Jun and Liu Hui found that conventional image synthesis in the medical field suffers from low resolution, which seriously affects the accuracy of clinical diagnosis. They therefore proposed reconstructing medical images by non-local autoregressive learning: exploiting the non-local character of medical images, a new model is built from an autoregressive function and a classification dictionary obtained by clustering, yielding the reconstructed high-resolution image. The method greatly improves the resolution of medical images, but its computation remains cumbersome and time-consuming.
Addressing the slow speed and poor image quality of current resolution reconstruction, Yang Biao and Di Miao proposed reconstructing the resolution of two or more images by block-symmetric stacking. The low-resolution image sequence is first registered with ORB, the registered images are reconstructed with PSyCo, and the reconstructed images are fused by taking the per-pixel grey-level maximum to obtain an ultra-high-resolution image. The reconstructed images have high resolution, but the reconstruction accuracy is low, which seriously degrades image quality.
Summary of the Invention
In view of the problems above, the present invention provides a super-resolution image reconstruction method based on space-time transformation technology that effectively addresses the low resolution, low accuracy and heavy computation of existing image reconstruction and yields images of excellent quality.
To achieve the above object, the technical solution adopted by the present invention is as follows:
A super-resolution image reconstruction method based on space-time transformation technology, characterised in that it comprises the following steps:
S1: split the image into its three components R, G and B;
S2: assemble the R, G and B components of each low-resolution image sequence into the vectors l_R, l_G and l_B, where l_R, l_G and l_B collect all low-resolution observation values of the corresponding component;
S3: use the space-time transformation technique to construct the contribution matrices A_R, A_G and A_B of the low-resolution space-time points corresponding to each high-resolution space-time point;
S4: solve the linear system Ah = l over the high-resolution elements to obtain the vectors h_R, h_G and h_B of the high temporal-resolution image sequence, where h_R, h_G and h_B are the component vectors of all unknown high-resolution values in the reconstructed image frame sequence X;
S5: reconstruct the spatial super-resolution from the values of h_R, h_G and h_B obtained in step S4, giving X_R, X_G and X_B for the R, G and B components of the super-resolution image sequence;
S6: combine the R, G and B components X_R, X_G and X_B to obtain the final super-resolution image.
Further, the specific operation of step S3 comprises the following steps:
S31: register the low-resolution image sequences to obtain N low-resolution image sequences, and apply space-time down-sampling to the observation models of the N low-resolution image sequences to obtain N space-time-transformation-based sampling matrices D_1, D_2, ..., D_N;
S32: find a coordinate space in which the coordinates of points in object space do not change as the field of view moves, and obtain the space-time coordinate transformation matrices T_1, T_2, ..., T_N of the N low-resolution images;
S33: obtain the camera point-spread functions H_1, H_2, ..., H_N of the N low-resolution images, and the temporal blur matrices M_1, M_2, ..., M_N corresponding to the low-resolution image sequences;
S34: establish the observation model of the several images;
S35: from the observation model, the relation between the i-th low-resolution image sequence and the predicted high-resolution sequence X is Y_i = D_i M_i H_i T_i X + n_i, 1 ≤ i ≤ N, where n_i denotes the observation noise of the i-th low-resolution image; Y_i corresponds to the contribution matrix A of the low-resolution space-time points associated with each high-resolution space-time point;
S36: the R, G and B components of the contribution matrix A are the contribution matrices A_R, A_G and A_B of the low-resolution space-time points corresponding to each high-resolution space-time point.
Further, in step S4 the conjugate gradient algorithm is used to obtain the vectors h_R, h_G and h_B of the high temporal-resolution image sequence.
Further, obtaining the vectors h_R, h_G and h_B of the high temporal-resolution image sequence with the conjugate gradient algorithm comprises the following steps:
S41: from the linear formula Ah = l of the high-resolution elements, the least-squares image-sequence model min E(h) = min{||Ah - l||^2} is obtained;
S42: regularise the least-squares image-sequence model to obtain the image-sequence super-resolution reconstruction equation min E(h) = min{||Ah - l||^2 + α||WCh||^2}, where W is the diagonal weight matrix recording the desired regularisation at each time point, α is the global regularisation factor, and C is the matrix recording the second-order space-time derivatives, chosen as the Laplace operator;
S43: initialise β = 0, h_0 = 0, b = A^T l, r = b, p = b;
S44: iterate for k = 1, 2, ..., where the search direction is p = r + βp and q = (A^T A + αC^T W^T W C)p; the search step length is α = r^T r / (p^T p); the gradient is updated as r_0 = r, r = r_0 - αq, and β is then recomputed from the updated and previous residuals;
S45: the actual search is then h_k = h_{k-1} + αp, where h = h_R, h_G, h_B.
Further, step S5 performs the spatial super-resolution reconstruction by iterative back-projection.
Further, the iterative back-projection update used for the spatial super-resolution reconstruction obtains the super-resolution frame of iteration m+1 from the frame of iteration m by back-projecting, for each of the p frames of the low-resolution sequence, the residual between the observed low-resolution frame and the low-resolution frame predicted from the current estimate through the low-resolution observation model, scaled by the gradient step λ, together with a term built from the Laplace operator Δ of the second-order differential; here m denotes the iteration index, p the number of frames in the low-resolution image sequence, and f the reference frame.
The beneficial effects of the present invention are as follows:
The super-resolution image reconstruction method based on space-time transformation technology of the present invention reconstructs a high temporal-resolution image sequence by registration, then applies iterative back-projection to the resulting sequence for space-time super-resolution reconstruction, finally yielding the super-resolution image. Simulation experiments verify that the method is computationally simple, reconstructs quickly, raises image resolution, and effectively suppresses deformation, noise and blur during reconstruction.
Brief Description of the Drawings
Fig. 1 is the observation model of several low-resolution images in the present invention;
Fig. 2 shows the reconstruction of the foreman regular image sequence in the simulation experiments of the present invention;
Fig. 3 shows the image reconstruction based on the real text sequence in the simulation experiments of the present invention;
Fig. 4 compares the iterative calculations on the foreman sequence in the simulation experiments of the present invention;
Fig. 5 compares the iterative calculations based on the real text sequence in the simulation experiments of the present invention.
Detailed Description
To enable those skilled in the art to better understand the technical solution of the present invention, the technical solution is further described below with reference to the accompanying drawings and embodiments.
A super-resolution image reconstruction method based on space-time transformation technology comprises the following steps:
S1: split the image into its three components R, G and B;
S2: assemble the R, G and B components of each low-resolution image sequence into the vectors l_R, l_G and l_B, where l_R, l_G and l_B collect all low-resolution observation values of the corresponding component;
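As an illustration of steps S1 and S2, the following minimal Python sketch (not part of the patent) splits a list of RGB frames into the per-channel observation vectors l_R, l_G and l_B; the frame shapes, data types and use of NumPy are assumptions made only for the example.

```python
import numpy as np

def stack_component_vectors(frames):
    """Split RGB frames (each an H x W x 3 array) into the three per-channel
    observation vectors l_R, l_G, l_B of steps S1-S2: every pixel of every
    low-resolution frame is concatenated into one long vector per channel."""
    l_R = np.concatenate([f[:, :, 0].ravel() for f in frames]).astype(np.float64)
    l_G = np.concatenate([f[:, :, 1].ravel() for f in frames]).astype(np.float64)
    l_B = np.concatenate([f[:, :, 2].ravel() for f in frames]).astype(np.float64)
    return l_R, l_G, l_B

# Example: four toy 8x8 low-resolution frames
frames = [np.random.rand(8, 8, 3) for _ in range(4)]
l_R, l_G, l_B = stack_component_vectors(frames)
print(l_R.shape)  # (4 * 8 * 8,) = (256,)
```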
S3: use the space-time transformation technique to construct the contribution matrices A_R, A_G and A_B of the low-resolution space-time points corresponding to each high-resolution space-time point;
For spatial degradation functions that are hard to characterise, such as linear and radial motion, one can try to find a coordinate space in which the coordinates of points in object space do not change as the field of view moves. Image restoration can then be carried out in that space with any of the usual restoration methods, after which the result is transformed back to the original image space; this is the basic process of the space-time transformation of an image.
Linear motion changes the spatial image, and the degree of change can be obtained from the field curvature and the vertical and horizontal differences of the system. If the image takes a suitable form, it can be converted by a spatial transformation in which P denotes the initial image, (u, v) and (x, y) the coordinates of object space and image space, and α(x, y), β(u, v), c_n(x, y), b_n(u, v), c_I2(x, y), b_I2(u, v) the coordinates of the corresponding invertible function points, with α(x, y) ≠ 0 and β(u, v) ≠ 0 over the whole space; a Fourier transform can then be applied to P_I(·). The space-time coordinate transformation is thus a direct and simple route to super-resolution image reconstruction, and lays the foundation for improving the accuracy of subsequent reconstruction.
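A rough sketch of the registration idea behind the space-time coordinate transform T_i is given below: each frame is resampled into a common, motion-free reference coordinate system. The global rotation-plus-shift model and the use of scipy.ndimage.affine_transform are illustrative assumptions; the patent does not prescribe a particular warp model.

```python
import numpy as np
from scipy.ndimage import affine_transform

def warp_to_reference(frame, rotation_deg=0.0, shift=(0.0, 0.0)):
    """Resample a single-channel frame into the common reference coordinate
    system (the role of T_i): here a global rotation plus translation stands
    in for the per-frame space-time coordinate transform."""
    theta = np.deg2rad(rotation_deg)
    # Inverse mapping matrix: output coordinates -> input coordinates
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return affine_transform(frame, rot, offset=shift, order=1, mode='nearest')

frame = np.random.rand(64, 64)
registered = warp_to_reference(frame, rotation_deg=2.0, shift=(1.5, -0.5))
print(registered.shape)  # (64, 64)
```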
Specifically, in the present invention, constructing the contribution matrices A_R, A_G and A_B of the low-resolution space-time points corresponding to each high-resolution space-time point with the space-time transformation technique comprises the following steps:
S31: register the low-resolution image sequences to obtain N low-resolution image sequences, and apply space-time down-sampling to the observation models of the N low-resolution image sequences to obtain N space-time-transformation-based sampling matrices D_1, D_2, ..., D_N;
S32: find a coordinate space in which the coordinates of points in object space do not change as the field of view moves, and obtain the space-time coordinate transformation matrices T_1, T_2, ..., T_N of the N low-resolution images;
S33: obtain the camera point-spread functions H_1, H_2, ..., H_N of the N low-resolution images, and the temporal blur matrices M_1, M_2, ..., M_N corresponding to the low-resolution image sequences;
S34: establish the observation model of the several images, as shown in Fig. 1, where n_i denotes the observation noise of the i-th low-resolution image;
S35: from the observation model, the relation between the i-th low-resolution image sequence and the predicted high-resolution sequence X is Y_i = D_i M_i H_i T_i X + n_i, 1 ≤ i ≤ N, where n_i denotes the observation noise of the i-th low-resolution image; Y_i corresponds to the contribution matrix A of the low-resolution space-time points associated with each high-resolution space-time point;
S36: the R, G and B components of the contribution matrix A are the contribution matrices A_R, A_G and A_B of the low-resolution space-time points corresponding to each high-resolution space-time point.
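The sketch below shows one way the observation model Y_i = D_i M_i H_i T_i X + n_i of steps S31-S35 might be applied in a matrix-free form. The Gaussian point-spread function, the window-average temporal blur and the decimation factors are assumptions chosen for the example; T_i is omitted by assuming the frames are already registered.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def forward_model(X, spatial_sigma=1.0, t_window=2, s_factor=2):
    """Apply one camera's observation model Y_i = D_i M_i H_i T_i X
    (T_i omitted: frames assumed pre-registered) to a high-resolution
    sequence X of shape (T, H, W). Returns the simulated LR sequence."""
    T = X.shape[0]
    # M_i: temporal blur, averaging consecutive frames over a short window
    blurred_t = np.stack([X[t:t + t_window].mean(axis=0)
                          for t in range(0, T - t_window + 1)])
    # H_i: camera point-spread function, modelled here as a Gaussian blur
    blurred_s = np.stack([gaussian_filter(f, spatial_sigma) for f in blurred_t])
    # D_i: space-time down-sampling (every t_window-th frame, every s_factor-th pixel)
    return blurred_s[::t_window, ::s_factor, ::s_factor]

X = np.random.rand(8, 32, 32)   # toy high-resolution sequence
Y = forward_model(X)
print(Y.shape)                  # (4, 16, 16) with the defaults
```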
S4: solve the linear system Ah = l over the high-resolution elements to obtain the vectors h_R, h_G and h_B of the high temporal-resolution image sequence, where h_R, h_G and h_B are the component vectors of all unknown high-resolution values in the reconstructed image frame sequence X;
Specifically, the conjugate gradient algorithm is used to obtain the vectors h_R, h_G and h_B of the high temporal-resolution image sequence.
S41: from the linear formula Ah = l of the high-resolution elements, the least-squares image-sequence model min E(h) = min{||Ah - l||^2} is obtained;
S42: regularise the least-squares image-sequence model to obtain the image-sequence super-resolution reconstruction equation min E(h) = min{||Ah - l||^2 + α||WCh||^2}, where W is the diagonal weight matrix recording the desired regularisation at each time point, α is the global regularisation factor, and C is the matrix recording the second-order space-time derivatives, chosen as the Laplace operator;
S43: initialise β = 0, h_0 = 0, b = A^T l, r = b, p = b;
S44: iterate for k = 1, 2, ..., where the search direction is p = r + βp and q = (A^T A + αC^T W^T W C)p; the search step length is α = r^T r / (p^T p); the gradient is updated as r_0 = r, r = r_0 - αq, and β is then recomputed from the updated and previous residuals;
S45: the actual search is then h_k = h_{k-1} + αp, where h = h_R, h_G, h_B.
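A minimal sketch of the regularised conjugate-gradient solve of steps S41-S45 follows. It solves the normal equations (A^T A + αC^T W^T W C)h = A^T l; the step length and conjugacy update use the standard conjugate-gradient formulas (r^T r / p^T q and the ratio of successive residual norms), and the toy matrices at the end are purely illustrative.

```python
import numpy as np

def cg_super_resolution(A, l, C, W, alpha=0.01, n_iter=50, tol=1e-10):
    """Solve min ||A h - l||^2 + alpha ||W C h||^2 by conjugate gradients
    on the normal equations (A^T A + alpha C^T W^T W C) h = A^T l.
    A: contribution matrix, l: stacked low-resolution observations,
    C: regularisation operator (e.g. a Laplacian), W: diagonal weights."""
    def apply_system(p):
        return A.T @ (A @ p) + alpha * (C.T @ (W.T @ (W @ (C @ p))))

    h = np.zeros(A.shape[1])
    b = A.T @ l
    r, p = b.copy(), b.copy()
    rs_old = r @ r
    for _ in range(n_iter):
        q = apply_system(p)
        step = rs_old / (p @ q)          # standard CG step length
        h += step * p
        r -= step * q
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p    # conjugacy update from residual norms
        rs_old = rs_new
    return h

# Toy example: random over-determined system with identity regulariser
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)); l = rng.standard_normal(40)
C = np.eye(20); W = np.eye(20)
h = cg_super_resolution(A, l, C, W)
print(np.linalg.norm(A @ h - l))
```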
S5: reconstruct the spatial super-resolution from the values of h_R, h_G and h_B obtained in step S4, giving X_R, X_G and X_B for the R, G and B components of the super-resolution image sequence;
Specifically, the spatial super-resolution reconstruction is carried out by iterative back-projection.
The iterative back-projection method for spatial super-resolution reconstruction obtains the super-resolution frame of iteration m+1 from the frame of iteration m by back-projecting, for each of the p frames of the low-resolution sequence, the residual between the observed low-resolution frame and the low-resolution frame predicted from the current estimate through the low-resolution observation model, scaled by the gradient step λ, together with a term built from the Laplace operator Δ of the second-order differential; m denotes the iteration index, p the number of frames in the low-resolution image sequence, and f the reference frame.
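The following sketch illustrates the iterative back-projection of step S5 for a single channel. The Gaussian point-spread function, the bilinear up-sampling of the residual, the Laplacian damping weight and the iteration count are assumptions chosen for the example, not values specified by the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, zoom

def iterative_back_projection(lr_frames, scale=2, sigma=1.0, lam=0.1, n_iter=30):
    """Simplified single-channel iterative back-projection for step S5.
    lr_frames: registered low-resolution frames, each of shape (H, W).
    Returns a (scale*H, scale*W) super-resolution estimate."""
    # Initial estimate: interpolate the reference (first) frame
    x = zoom(lr_frames[0], scale, order=3)
    for _ in range(n_iter):
        correction = np.zeros_like(x)
        for y in lr_frames:
            # Forward model: blur with the assumed PSF, then decimate
            simulated = gaussian_filter(x, sigma)[::scale, ::scale]
            # Back-project the residual: upsample the error and re-blur it
            correction += gaussian_filter(zoom(y - simulated, scale, order=1), sigma)
        # Gradient step on the data term plus a mild Laplacian smoothing term
        x = x + lam * correction / len(lr_frames) - 0.02 * laplace(laplace(x))
    return x

lr = [np.random.rand(16, 16) for _ in range(4)]   # toy registered LR frames
sr = iterative_back_projection(lr)
print(sr.shape)                                    # (32, 32)
```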
S6: combine the R, G and B component matrices X_R, X_G and X_B to obtain the final super-resolution image.
Simulation experiments:
To verify the effectiveness of the super-resolution image reconstruction method based on space-time transformation technology of the present invention, two groups of simulation experiments were carried out. The first group uses the CIF-format foreman regular image sequence, in which the subject moves strongly, the face is partly deformed and displaced, and the scene changes; the second group uses a text image sequence captured in practice, whose scenes contain Chinese characters, table lines and digits, with the camera shaking strongly during capture.
In the first group of simulation experiments, frame 33 of the foreman sequence is set as the middle reference frame. Sampling at a 3:1 rate in time gives six high-resolution initial frames, each of which is blurred by convolution with a 3×3 Gaussian kernel; a further 2:1 down-sampling then yields six low-resolution frames. Finally, the reference frame is bilinearly interpolated and used as the initial prediction for super-resolution image reconstruction.
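A sketch of how this degradation pipeline might be reproduced is shown below; the Gaussian sigma, the single-channel simplification and the toy video dimensions are assumptions, while the 3:1 temporal pick, 2:1 decimation and bilinear initial guess follow the description above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_test_sequence(video, ref_index=33, n_frames=6, t_step=3, s_factor=2):
    """Pick frames around a reference at a 3:1 temporal rate, blur each with a
    small Gaussian (standing in for the 3x3 kernel), down-sample 2:1, and
    bilinearly interpolate the reference frame as the initial SR guess.
    video: array of shape (T, H, W), single channel for simplicity."""
    picked = video[ref_index - (n_frames // 2) * t_step:
                   ref_index + (n_frames - n_frames // 2) * t_step:t_step]
    lr = np.stack([gaussian_filter(f, sigma=1.0)[::s_factor, ::s_factor]
                   for f in picked])
    init_guess = zoom(lr[n_frames // 2], s_factor, order=1)  # bilinear
    return lr, init_guess

video = np.random.rand(80, 72, 88)       # toy stand-in for the foreman clip
lr, init = build_test_sequence(video)
print(lr.shape, init.shape)              # (6, 36, 44) (72, 88)
```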
In the second group of simulation experiments the sequence is irregular, so any frame may be taken as the middle reference frame, and six low-resolution frames are obtained in the same way as in the first group.
The images are reconstructed with the three-dimensional wheat-grain image reconstruction method based on z-axis weights (Zhang Hongtao, Chang Yan, Tan Lian, et al. Acta Optica Sinica, 2019, 39(3): 127-135), hereinafter referred to as method one, and with the super-resolution image reconstruction method based on space-time transformation technology of the present invention. The results of the first group of simulation experiments are shown in Fig. 2, where (a) is the initial high-resolution reference image, (b) the low-resolution reference image, (c) the image reconstructed by method one, and (d) the image reconstructed by the method of the present invention.
The results of the second group of simulation experiments are shown in Fig. 3, where (a) is the initial high-resolution reference image, (b) the low-resolution reference image, (c) the image reconstructed by method one, and (d) the image reconstructed by the method of the present invention.
In both groups of simulation experiments the two methods use the same number of iterations, and the parameters of each method are set so that the quality of the reconstructed image is close to optimal. The parameters that need strict control are: for method one, the corrected-residual threshold δ0 is set to 4 in the first group and 3 in the second, with the relaxation parameter set to 2 in both; for the image reconstruction method of the present invention, the inverse-proportion parameter in the corrected residual function is A = 40 in the first group and A = 20 in the second.
As Figs. 2 and 3 show, the low-resolution images in both groups contain considerable noise, and the strong motion causes large errors in frame motion prediction. Method one cannot cope with these problems, so its results lose some detail, especially on the table lines, and remain very noisy (see parts (c) of Fig. 2 and Fig. 3).
The super-resolution image reconstruction method based on space-time transformation technology of the present invention, by contrast, effectively limits noise amplification and reduces the prediction errors caused by strong motion. As parts (d) of Fig. 2 and Fig. 3 show, the image details are clearer, the image edges are markedly better than in the results of method one, and the noise is well controlled, so the resulting image quality is excellent.
Table 1 below lists the quality indices obtained by method one and by the image reconstruction method of the present invention during the iterations of the two groups of simulation experiments; Figs. 4 and 5 plot how these indices change with the number of iterations for method one and for the method of the present invention, respectively.
Table 1 Quality indices of the images reconstructed by the different methods
The curve comparison shows that, as the number of iterations increases, both methods quickly approach stable signal-to-noise ratio and mean-square-error values, but the method of the present invention attains a somewhat higher signal-to-noise ratio and a lower mean square error. It improves the signal-to-noise ratio of the reconstructed image and reduces the mean square error, so the reconstructed image looks excellent, confirming that the super-resolution image reconstruction method based on space-time transformation technology proposed here is highly effective.
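For reference, a small sketch of the two quality indices tracked above (mean square error and a PSNR-style signal-to-noise ratio) is given below; the peak value of 255 and the exact signal-to-noise definition are assumptions, since the text does not state them.

```python
import numpy as np

def mse_psnr(reference, reconstructed, peak=255.0):
    """Mean square error and peak signal-to-noise ratio between a ground-truth
    frame and a reconstructed frame, the two indices compared in Table 1."""
    err = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / err) if err > 0 else np.inf
    return err, psnr

ref = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
rec = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(mse_psnr(ref, rec))
```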
The basic principles, main features and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited to the embodiments above; the embodiments and the description only illustrate the principle of the invention, and various changes and improvements may be made without departing from its spirit and scope, all of which fall within the claimed scope of the invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010961932.8A CN112184549B (en) | 2020-09-14 | 2020-09-14 | Super-resolution image reconstruction method based on space-time transformation technology |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010961932.8A CN112184549B (en) | 2020-09-14 | 2020-09-14 | Super-resolution image reconstruction method based on space-time transformation technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112184549A CN112184549A (en) | 2021-01-05 |
CN112184549B true CN112184549B (en) | 2023-06-23 |
Family
ID=73920953
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010961932.8A Active CN112184549B (en) | 2020-09-14 | 2020-09-14 | Super-resolution image reconstruction method based on space-time transformation technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112184549B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114612297A (en) * | 2022-01-24 | 2022-06-10 | 北京工业大学 | Hyperspectral image super-resolution reconstruction method and device |
CN114897697B (en) * | 2022-05-18 | 2025-06-17 | 北京航空航天大学 | A super-resolution reconstruction method for camera imaging models |
CN115128789B (en) * | 2022-07-07 | 2023-06-30 | 中国科学院光电技术研究所 | Super-diffraction structure illumination microscopic imaging system and method based on hyperbolic metamaterial |
CN115994858B (en) * | 2023-03-24 | 2023-06-06 | 广东海洋大学 | A super-resolution image reconstruction method and system |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441765A (en) * | 2008-11-19 | 2009-05-27 | 西安电子科技大学 | Self-adapting regular super resolution image reconstruction method for maintaining edge clear |
CN101644773A (en) * | 2009-03-20 | 2010-02-10 | 中国科学院声学研究所 | Real-time frequency domain super-resolution direction estimation method and device |
CN102073866A (en) * | 2010-12-27 | 2011-05-25 | 清华大学 | Video super resolution method by utilizing space-time Markov random field model |
CN103400346A (en) * | 2013-07-18 | 2013-11-20 | 天津大学 | Video super resolution method for self-adaption-based superpixel-oriented autoregression model |
CN103440676A (en) * | 2013-08-13 | 2013-12-11 | 南方医科大学 | Method for reconstruction of super-resolution coronary sagittal plane image of lung 4D-CT image based on motion estimation |
CN106157249A (en) * | 2016-08-01 | 2016-11-23 | 西安电子科技大学 | Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood |
CN109658361A (en) * | 2018-12-27 | 2019-04-19 | 辽宁工程技术大学 | A kind of moving scene super resolution ratio reconstruction method for taking motion estimation error into account |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102800071B (en) * | 2012-06-20 | 2015-05-20 | 南京航空航天大学 | Method for reconstructing super resolution of sequence image POCS |
CN104376547A (en) * | 2014-11-04 | 2015-02-25 | 中国航天科工集团第三研究院第八三五七研究所 | Motion blurred image restoration method |
DE102017123969B4 (en) * | 2017-10-16 | 2019-11-28 | Conti Temic Microelectronic Gmbh | Method for the classification of planar structures |
CN108280804B (en) * | 2018-01-25 | 2021-03-16 | 湖北大学 | Multi-frame image super-resolution reconstruction method |
CN109255822B (en) * | 2018-07-13 | 2023-02-24 | 中国人民解放军战略支援部队航天工程大学 | Multi-scale coding and multi-constraint compression sensing reconstruction method for resolution ratio between times out |
CN110060209B (en) * | 2019-04-28 | 2021-09-24 | 北京理工大学 | A MAP-MRF Super-Resolution Image Reconstruction Method Based on Attitude Information Constraints |
CN110458756A (en) * | 2019-06-25 | 2019-11-15 | 中南大学 | Fuzzy video super-resolution method and system based on deep learning |
CN110634105B (en) * | 2019-09-24 | 2023-06-20 | 南京工程学院 | A video signal processing method with high spatio-temporal resolution combining optical flow method and deep network |
CN111583330B (en) * | 2020-04-13 | 2023-07-04 | 中国地质大学(武汉) | Multi-scale space-time Markov remote sensing image sub-pixel positioning method and system |
- 2020-09-14 CN CN202010961932.8A patent/CN112184549B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101441765A (en) * | 2008-11-19 | 2009-05-27 | 西安电子科技大学 | Self-adapting regular super resolution image reconstruction method for maintaining edge clear |
CN101644773A (en) * | 2009-03-20 | 2010-02-10 | 中国科学院声学研究所 | Real-time frequency domain super-resolution direction estimation method and device |
CN102073866A (en) * | 2010-12-27 | 2011-05-25 | 清华大学 | Video super resolution method by utilizing space-time Markov random field model |
CN103400346A (en) * | 2013-07-18 | 2013-11-20 | 天津大学 | Video super resolution method for self-adaption-based superpixel-oriented autoregression model |
CN103440676A (en) * | 2013-08-13 | 2013-12-11 | 南方医科大学 | Method for reconstruction of super-resolution coronary sagittal plane image of lung 4D-CT image based on motion estimation |
CN106157249A (en) * | 2016-08-01 | 2016-11-23 | 西安电子科技大学 | Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood |
CN109658361A (en) * | 2018-12-27 | 2019-04-19 | 辽宁工程技术大学 | A kind of moving scene super resolution ratio reconstruction method for taking motion estimation error into account |
Also Published As
Publication number | Publication date |
---|---|
CN112184549A (en) | 2021-01-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112184549B (en) | Super-resolution image reconstruction method based on space-time transformation technology | |
Lyu et al. | MRI super-resolution with ensemble learning and complementary priors | |
Huang et al. | Robust single-image super-resolution based on adaptive edge-preserving smoothing regularization | |
CN103136734B (en) | The suppressing method of edge Halo effect during a kind of convex set projection super-resolution image reconstruction | |
Zhang et al. | Single image super-resolution with multiscale similarity learning | |
CN102902961B (en) | Face super-resolution processing method based on K neighbor sparse coding average value constraint | |
CN102800071B (en) | Method for reconstructing super resolution of sequence image POCS | |
CN107958444A (en) | A kind of face super-resolution reconstruction method based on deep learning | |
CN110443768A (en) | Single-frame image super-resolution reconstruction method based on Multiple Differential consistency constraint and symmetrical redundant network | |
CN102231204A (en) | Sequence image self-adaptive regular super resolution reconstruction method | |
CN101477684A (en) | Process for reconstructing human face image super-resolution by position image block | |
CN113379602B (en) | Light field super-resolution enhancement method using zero sample learning | |
CN107292819A (en) | A kind of infrared image super resolution ratio reconstruction method protected based on edge details | |
CN103020898A (en) | Sequence iris image super-resolution reconstruction method | |
CN113421186A (en) | Apparatus and method for unsupervised video super-resolution using a generation countermeasure network | |
Shen et al. | Projection onto Convex Sets Method in Space-frequency Domain for Super Resolution. | |
Lu et al. | Structure-texture parallel embedding for remote sensing image super-resolution | |
CN103914807B (en) | Non-locality image super-resolution method and system for zoom scale compensation | |
Liu et al. | Robust multi-frame super-resolution with adaptive norm choice and difference curvature based BTV regularization | |
Zhang et al. | Deep residual network based medical image reconstruction | |
CN113793269B (en) | Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning | |
Okuhata et al. | Implementation of super-resolution scaler for Full HD and 4K video | |
CN114757826A (en) | A multi-feature-based POCS image super-resolution reconstruction method | |
Xing et al. | Rigid regression for facial image interpolation with local structure prior | |
Han et al. | Dual discriminators generative adversarial networks for unsupervised infrared super-resolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |