WO2019174068A1 - Integrated image restoration and matching method based on a distance-weighted sparse representation prior - Google Patents
Integrated image restoration and matching method based on a distance-weighted sparse representation prior
- Publication number
- WO2019174068A1 (PCT/CN2018/080754; priority application CN2018080754W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- estimation
- initial
- sparse
- distance
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/28—Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/7715—Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/772—Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
Definitions
- The invention belongs to the technical field of pattern matching, and more particularly relates to an integrated image restoration and matching method based on a distance-weighted sparse representation prior.
- Existing methods for matching and localizing blurred images fall mainly into two categories: (1) for mildly degraded images, matching or sparse representation is applied directly for localization; (2) image restoration is performed first to obtain better image quality, and the restored image is then used for localization.
- These methods have many drawbacks. For the first category, if the image quality is severely degraded, matching and localization accuracy drops sharply. For the second category, many image restoration methods are designed only to improve human visual perception rather than machine perception and therefore cannot improve localization accuracy. Worse, when the degradation model is unknown, generic restoration schemes such as deblurring do not perform well on real images.
- The present invention provides an integrated image restoration and matching method based on a distance-weighted sparse representation prior, thereby solving the technical problem that the prior art suffers from low image quality, which leads to low localization accuracy.
- The present invention provides an integrated image restoration and matching method based on a distance-weighted sparse representation prior, comprising:
- step (1): intercepting the reference image with step length a to obtain a dictionary set, obtaining the initialized sparse representation coefficient vector of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm, and restoring the degraded image with that vector and the dictionary set to obtain the initial sharp image of the degraded image;
- step (2): estimating the blur kernel using the constraints imposed on it by the degraded image and the initial sharp image together with gradient-operator regularization, to obtain a blur kernel estimate;
- step (3): estimating the initial sharp image using the blur kernel estimate and the constraint of the dictionary D on the initial sharp image, to obtain a sharp image estimate;
- step (4): applying the distance-weighted sparse representation algorithm to obtain the sparse representation coefficient vector estimate of the sharp image estimate over the dictionary set;
- step (5): updating the degraded image in step (2) with the sharp image estimate obtained in step (3), then iterating steps (2)-(4) to obtain the target image and the target sparse representation coefficient vector, and obtaining the initial localization of the target image in the reference image from the maximum value in the target sparse representation coefficient vector;
- step (6): taking the initial localization as the center of a new reference image whose height and width are those of the target image plus the step length a respectively, cropping the new reference image from the reference image, using it as the reference image of step (1), and repeating steps (1)-(5) to obtain the restored image of the target image and the localization result of the target image in the reference image.
- step (1) includes:
- (1-1) intercepting the reference image with step length a to obtain the dictionary set D = [i_1, i_2, ..., i_m], where i_m is the m-th patch obtained by intercepting the reference image with step length a;
- (1-2) obtaining the initial sparse representation coefficient vector α of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm;
- (1-3) restoring the degraded image with the initial sparse representation coefficient vector and the dictionary set to obtain the initial sharp image, x = Dα, where x is the matrix describing the initial sharp image.
- the blur kernel is estimated as (the formula image is omitted in the source; the following form is reconstructed from the symbol definitions): \hat{k} = \arg\min_k \|x \otimes k - y\|_2^2 + \gamma \|k\|_2^2 + \nu \|\nabla k\|_2^2, where x is the matrix describing the initial sharp image, y is the matrix describing the degraded image, γ is the weight of the regularization term on k, ν is the weight of the gradient-operator regularization term, and k is the blur kernel.
- the sharp image is estimated as (likewise reconstructed): \hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|e_l \otimes x\|_s^s, where \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight of the dictionary-constraint term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the weight of the sparse prior constraint term of the reference image, e_l is a first-order filter, s is the sparse index, and x is the matrix describing the initial sharp image.
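The distance-weighted sparse coding used in step (1) is not spelled out in this text. One common way to realize such a prior is a weighted l1 problem whose per-atom penalties grow with the distance between the input and each dictionary atom, solvable by ISTA; the weighting rule and all parameter values below are assumptions for illustration, not the patented formulation.

```python
import numpy as np

def dw_sparse_code(y, D, lam=0.05, n_iter=500):
    """ISTA for  min_a 0.5*||y - D a||^2 + lam * sum_i w_i |a_i|,
    where w_i grows with the distance between y and atom d_i, so atoms far
    from the input (likely wrong matches) are penalized more heavily."""
    w = np.linalg.norm(D - y[:, None], axis=0)
    w /= w.max() + 1e-12                      # assumed weighting rule
    t = 1.0 / np.linalg.norm(D, 2) ** 2       # step size 1/L with L = ||D||_2^2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - t * (D.T @ (D @ a - y))       # gradient step on the data term
        a = np.sign(g) * np.maximum(np.abs(g) - t * lam * w, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 20))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary atoms
y = D[:, 7] + 0.01 * rng.standard_normal(32)  # input essentially equals atom 7
a = dw_sparse_code(y, D)
print(int(np.argmax(np.abs(a))))              # → 7: the matching atom dominates
```

The index of the largest coefficient is exactly what step (5) reads off to localize the target in the reference image.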
- step (3) further includes: introducing auxiliary variables u_l (l ∈ 1, 2, ..., L), where L is the total number of auxiliary variables, establishing an energy equation, decomposing the energy equation into an x-subproblem and a u-subproblem, and solving each subproblem for its optimal solution to obtain the sharp image estimate.
- the energy equation is (the formula image is omitted in the source; the following form is reconstructed from the symbol definitions): E(x, u) = \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|u_l\|_s^s + \beta \sum_l \|u_l - e_l \otimes x\|_2^2, where E(x, u) denotes the energy equation, x is the matrix describing the initial sharp image, \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight coefficient of the term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the coefficient of the term \sum_l \|u_l\|_s^s, s is the sparse index, β is the weight of the regularization term \|u_l - e_l \otimes x\|_2^2, and e_l is a first-order filter.
- the x-subproblem is: \hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \beta \sum_l \|u_l - e_l \otimes x\|_2^2, where x is the matrix describing the initial sharp image and u_l is the auxiliary variable.
- the u-subproblem is, for each l: \hat{u}_l = \arg\min_{u_l} \tau \|u_l\|_s^s + \beta \|u_l - e_l \otimes x\|_2^2.
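The alternation between the x-subproblem and the u-subproblem above is the classic half-quadratic splitting scheme: the u-subproblem has a closed-form shrinkage solution and the x-subproblem is a linear solve. A minimal 1-D sketch, with s = 1, the dictionary term dropped, and a hand-picked β schedule (all simplifications not taken from the source), is:

```python
import numpy as np

def hqs_deblur(y, K, E, tau=0.05, betas=(1.0, 4.0, 16.0, 64.0), inner=10):
    """Half-quadratic splitting for  min_x ||K x - y||^2 + tau*||E x||_1
    using the auxiliary variable u ~ E x. The eta*||x - D a||^2 dictionary
    term is omitted to keep the toy small; it would only add eta*I and
    eta*D a to the x-subproblem's normal equations."""
    x = y.copy()
    for beta in betas:                  # continuation on the penalty weight
        for _ in range(inner):
            # u-subproblem: min_u tau*||u||_1 + beta*||u - E x||^2 -> shrinkage
            v = E @ x
            u = np.sign(v) * np.maximum(np.abs(v) - tau / (2 * beta), 0.0)
            # x-subproblem: min_x ||K x - y||^2 + beta*||u - E x||^2 -> solve
            x = np.linalg.solve(K.T @ K + beta * (E.T @ E),
                                K.T @ y + beta * (E.T @ u))
    return x

n = 64
x_true = np.zeros(n); x_true[20:28] = 1.0           # piecewise-constant signal
K = sum(np.eye(n, k=s) for s in (-1, 0, 1)) / 3.0   # 3-tap box blur
E = np.eye(n) - np.eye(n, k=1)                      # first-order difference e_l
y = K @ x_true                                      # blurred observation
x = hqs_deblur(y, K, E)
print(round(float(np.linalg.norm(y - x_true)), 3),
      round(float(np.linalg.norm(x - x_true)), 3))
```

Increasing β over the outer loop ties u ever more tightly to e_l ⊗ x, which is why this splitting converges quickly in practice.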
- the new reference image is used as the reference image in step (1), and the reference image is intercepted with step length b to obtain the dictionary set, where b < a.
- the present invention performs A iterations to obtain the target image and the target sparse representation coefficient vector; at that point the target image is already a good restoration of the degraded image, and the initial localization of the target image in the reference image is obtained from the maximum value in the target sparse representation coefficient vector.
- the search range of the target image is thereby narrowed to a range smaller than the reference image, and the above algorithm is finally applied again within this range to match the target image precisely, yielding the final restored image and localization result. This effectively solves the problems of noise sensitivity and of poor restoration quality severely harming the matching task in existing blurred-image restoration and matching methods, and makes the method applicable to visual navigation systems.
- the present invention provides a method of joint image restoration and matching.
- during the iterations, the image restoration and matching tasks promote each other, which corrects possible initial mislocalizations and continuously increases the confidence of a correct localization.
- distance-weighting information is incorporated into the sparse representation model, so that the weights between data of different classes become smaller, which increases discriminability.
- an energy equation is established and decomposed into an x-subproblem and a u-subproblem, and the optimal solution of each subproblem is obtained, thereby yielding the sharp image estimate.
- this further improves the convergence speed of the algorithm and makes the formulation more concise and clear; the present invention is therefore particularly suitable for the field of blurred-image matching and localization.
- FIG. 1 is a flowchart of an image restoration and matching integration method according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of the algorithm for integrated image restoration and matching based on a distance-weighted sparse representation prior according to an embodiment of the present invention;
- Figure 3 (a) is a reference image provided by an embodiment of the present invention.
- FIG. 3(b) is the blurred real-time image to be located according to an embodiment of the present invention.
- FIG. 3(c) is a diagram showing a result of positioning of a direct matching algorithm according to an embodiment of the present invention.
- FIG. 3(d) is a result of positioning of a DSRC algorithm according to an embodiment of the present invention.
- FIG. 3(e) is a result of positioning of an SRC algorithm according to an embodiment of the present invention.
- FIG. 3(f) is a result of positioning of a JRL-SR algorithm according to an embodiment of the present invention.
- FIG. 3(g) is a preliminary positioning result diagram of an image restoration and matching integration method according to an embodiment of the present invention.
- FIG. 3(h) is the final localization result diagram of the integrated image restoration and matching method according to an embodiment of the present invention.
- as shown in FIG. 1, an integrated image restoration and matching method based on a distance-weighted sparse representation prior includes:
- step (1): intercepting the reference image with step length a to obtain a dictionary set, obtaining the initialized sparse representation coefficient vector of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm, and restoring the degraded image with that vector and the dictionary set to obtain the initial sharp image of the degraded image;
- step (2): estimating the blur kernel using the constraints imposed on it by the degraded image and the initial sharp image together with gradient-operator regularization, to obtain a blur kernel estimate;
- step (3): estimating the initial sharp image using the blur kernel estimate and the constraint of the dictionary D on the initial sharp image, to obtain a sharp image estimate;
- step (4): applying the distance-weighted sparse representation algorithm to obtain the sparse representation coefficient vector estimate of the sharp image estimate over the dictionary set;
- step (5): updating the degraded image in step (2) with the sharp image estimate obtained in step (3), then iterating steps (2)-(4) to obtain the target image and the target sparse representation coefficient vector, and obtaining the initial localization of the target image in the reference image from the maximum value in the target sparse representation coefficient vector;
- step (6): taking the initial localization as the center of a new reference image whose height and width are those of the target image plus the step length a respectively, cropping the new reference image from the reference image, using it as the reference image of step (1), intercepting the reference image with step length b to obtain the dictionary set, where b < a, and repeating steps (1)-(5) to obtain the restored image of the target image and the localization result of the target image in the reference image.
- after the A iterations, the initial localization accuracy of the target image in the reference image is measured and the localization success rate is computed.
- the initial localization accuracy is measured by the positional deviation, that is, the pixel difference between the actual position of the target image in the reference image and the initially localized position.
- the localization success rate is computed statistically as the proportion of all target images to be located whose initial positional deviation is within the specified tolerance: all target images with an initial deviation of at most 5 pixels are considered successfully localized, and those with a deviation greater than 5 pixels are considered localization failures. If, after A iterations, the success rate is at least 85%, A is taken as the optimal number of iterations at the current reference-image scale, and the iteration can proceed to the next, finer reference-image scale.
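Under this criterion the success rate can be computed as below. The per-axis maximum is an assumed deviation metric, since the text only speaks of a "pixel difference" between the true and estimated positions, and the sample coordinates are made up for illustration.

```python
def positioning_accuracy(true_pos, est_pos, tol=5):
    """Fraction of targets whose initial localization deviates from the true
    position by at most `tol` pixels. The per-axis maximum is an assumed
    deviation metric; the source only says 'pixel difference'."""
    hits = sum(1 for (ti, tj), (ei, ej) in zip(true_pos, est_pos)
               if max(abs(ti - ei), abs(tj - ej)) <= tol)
    return hits / len(true_pos)

true_pos = [(24, 41), (10, 10), (50, 8), (30, 30)]   # made-up ground truth
est_pos  = [(24, 41), (12, 14), (40, 8), (33, 28)]   # made-up initial fixes
print(positioning_accuracy(true_pos, est_pos))       # → 0.75 (one 10-px miss)
```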
- step (1) includes:
- (1-1) intercepting the reference image with step length a to obtain the dictionary set D = [i_1, i_2, ..., i_m], where i_m is the m-th patch obtained by intercepting the reference image with step length a;
- (1-2) obtaining the initial sparse representation coefficient vector α of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm;
- (1-3) restoring the degraded image with the initial sparse representation coefficient vector and the dictionary set to obtain the initial sharp image, x = Dα, where x is the matrix describing the initial sharp image.
- the embodiment of the present invention estimates the blur kernel as (the formula image is omitted in the source; the following form is reconstructed from the symbol definitions): \hat{k} = \arg\min_k \|x \otimes k - y\|_2^2 + \gamma \|k\|_2^2 + \nu \|\nabla k\|_2^2, where x is the matrix describing the initial sharp image, y is the matrix describing the degraded image, γ is the weight of the regularization term on k, ν is the weight of the gradient-operator regularization term, and k is the blur kernel.
- the embodiment of the invention obtains the sharp image estimate as (likewise reconstructed): \hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|e_l \otimes x\|_s^s, where \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight of the dictionary-constraint term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the weight of the sparse prior constraint term of the reference image, e_l is a first-order filter, s is the sparse index, and x is the matrix describing the initial sharp image.
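The blur-kernel estimation step above can be illustrated with a generic per-frequency ridge regression, which solves min_k ‖k ⊗ x − y‖² + γ‖k‖² in closed form under circular boundary conditions. This is a stand-in consistent with the stated constraints (degraded image, sharp image, and a regularizer on k), not the patented estimator, whose formula images are not reproduced in this text.

```python
import numpy as np

def estimate_kernel(x, y, gamma=1e-6):
    """Closed-form ridge estimate of k in y = k (*) x (circular convolution):
    per frequency, K = conj(X) Y / (|X|^2 + gamma). Noisy data would need a
    larger gamma (plus support/nonnegativity constraints on k)."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.real(np.fft.ifft2(np.conj(X) * Y / (np.abs(X) ** 2 + gamma)))

rng = np.random.default_rng(2)
x = rng.standard_normal((64, 64))                 # stand-in sharp image
k_true = np.zeros((64, 64))
k_true[0, 0], k_true[0, 1], k_true[1, 0] = 0.5, 0.25, 0.25  # small blur kernel
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k_true)))  # y = k (*) x
k_est = estimate_kernel(x, y)
print(round(float(np.abs(k_est - k_true).max()), 4))  # → 0.0 on this noiseless toy
```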
- the step (3) further includes: introducing auxiliary variables u_l (l ∈ 1, 2, ..., L), where L is the total number of auxiliary variables, establishing an energy equation, decomposing the energy equation into an x-subproblem and a u-subproblem, and solving each subproblem for its optimal solution to obtain the sharp image estimate.
- the energy equation is (the formula image is omitted in the source; the following form is reconstructed from the symbol definitions): E(x, u) = \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|u_l\|_s^s + \beta \sum_l \|u_l - e_l \otimes x\|_2^2, where E(x, u) denotes the energy equation, x is the matrix describing the initial sharp image, \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight coefficient of the term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the coefficient of the term \sum_l \|u_l\|_s^s, s is the sparse index, β is the weight of the regularization term \|u_l - e_l \otimes x\|_2^2, and e_l is a first-order filter.
- the x-subproblem is: \hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \beta \sum_l \|u_l - e_l \otimes x\|_2^2, where x is the matrix describing the initial sharp image and u_l is the auxiliary variable.
- the u-subproblem is, for each l: \hat{u}_l = \arg\min_{u_l} \tau \|u_l\|_s^s + \beta \|u_l - e_l \otimes x\|_2^2.
- a is 5 and b is 2.
- Figures 3(a)-3(h) compare the localization performance of the invention with other algorithms on some test images: FIG. 3(a) shows the reference image, FIG. 3(b) the blurred real-time image to be located, FIG. 3(c) the localization result of the direct matching algorithm, FIG. 3(d) that of the DSRC algorithm, FIG. 3(e) that of the SRC algorithm, FIG. 3(f) that of the JRL-SR algorithm, FIG. 3(g) that of the JRL-DSR1 algorithm proposed by the present invention (preliminary localization), and FIG. 3(h) that of the JRL-DSR2 algorithm proposed by the present invention (precise localization refined from the preliminary result).
- As can be seen from Figures 3(a)-3(h), when the target image is severely blurred, the proposed algorithm can, by combining the restoration and matching tasks, correct possible initial mislocalizations and continuously increase the confidence of a correct localization; the experimental results also show that this integrated method outperforms handling deblurring and localization separately.
Abstract
The invention discloses an integrated image restoration and matching method based on a distance-weighted sparse representation prior, comprising: estimating the blur kernel using the constraints imposed on it by the degraded image and the initial sharp image together with gradient-operator regularization, to obtain a blur kernel estimate; estimating the initial sharp image using the blur kernel estimate and the constraint of the dictionary D on the initial sharp image, to obtain a sharp image estimate; applying the distance-weighted sparse representation algorithm to obtain the sparse representation coefficient vector estimate of the sharp image estimate over the dictionary set; iterating the above steps A times to obtain the target image and its initial localization in the reference image; and, after narrowing the range of the reference image, applying the above method to obtain the restored image and the localization result of the target image. The invention effectively solves the problems of noise sensitivity and of poor restoration quality severely harming the matching task in existing blurred-image restoration and matching methods, and is applicable to visual navigation systems.
Description
The invention belongs to the technical field of pattern matching, and more particularly relates to an integrated image restoration and matching method based on a distance-weighted sparse representation prior.
In a visual navigation system, ground-scene images acquired in real time must be compared with reference images stored on the onboard computer to determine the position of a high-speed aircraft. Because image matching is highly accurate, this precise position information can be used to improve the positioning accuracy of the navigation system. However, the acquired images are usually degraded, for example low-resolution and blurred, which poses a great challenge to matching and localization. Correction and matching of blurred images has therefore become a key technology in high-speed aircraft navigation systems.
Existing methods for matching and localizing blurred images fall mainly into two categories: (1) for mildly degraded images, matching or sparse representation is applied directly for localization; (2) image restoration is performed first to obtain better image quality, and the restored image is then used for localization. However, these methods have many drawbacks. For the first category, if the image quality is severely degraded, matching and localization accuracy drops sharply; for the second, many restoration methods are designed only to improve human visual perception rather than machine perception and therefore cannot improve localization accuracy. Worse, when the degradation model is unknown, generic restoration schemes such as deblurring do not perform well on real images.
It can thus be seen that the prior art suffers from low image quality, which leads to low localization accuracy.
SUMMARY OF THE INVENTION
In view of the above defects or improvement needs of the prior art, the present invention provides an integrated image restoration and matching method based on a distance-weighted sparse representation prior, thereby solving the technical problem that the prior art suffers from low image quality, which leads to low localization accuracy.
To achieve the above object, the present invention provides an integrated image restoration and matching method based on a distance-weighted sparse representation prior, comprising:
(1) intercepting the reference image with step length a to obtain a dictionary set, obtaining the initialized sparse representation coefficient vector of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm, and restoring the degraded image with the initialized sparse representation coefficient vector and the dictionary set to obtain the initial sharp image of the degraded image;
(2) estimating the blur kernel using the constraints imposed on it by the degraded image and the initial sharp image together with gradient-operator regularization, to obtain a blur kernel estimate;
(3) estimating the initial sharp image using the blur kernel estimate and the constraint of the dictionary D on the initial sharp image, to obtain a sharp image estimate;
(4) applying the distance-weighted sparse representation algorithm to obtain the sparse representation coefficient vector estimate of the sharp image estimate over the dictionary set;
(5) updating the degraded image in step (2) with the sharp image estimate obtained in step (3), then iterating steps (2)-(4) A times to obtain the target image and the target sparse representation coefficient vector, and obtaining the initial localization of the target image in the reference image from the maximum value in the target sparse representation coefficient vector;
(6) taking the initial localization as the center of a new reference image whose height and width are those of the target image plus the step length a respectively, cropping the new reference image from the reference image, using the new reference image as the reference image of step (1), and repeating steps (1)-(5) to obtain the restored image of the target image and the localization result of the target image in the reference image.
Further, step (1) comprises:
(1-1) intercepting the reference image with step length a to obtain the dictionary set D = [i_1, i_2, ..., i_m], where i_m is the m-th patch obtained by intercepting the reference image with step length a;
(1-2) obtaining the initialized sparse representation coefficient vector α of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm;
(1-3) restoring the degraded image with the initialized sparse representation coefficient vector and the dictionary set to obtain the initial sharp image of the degraded image, x = Dα, where x is the matrix describing the initial sharp image.
Further, the blur kernel estimate is (the formula image is omitted in the source; the following form is reconstructed from the symbol definitions):
\hat{k} = \arg\min_k \|x \otimes k - y\|_2^2 + \gamma \|k\|_2^2 + \nu \|\nabla k\|_2^2,
where x is the matrix describing the initial sharp image, y is the matrix describing the degraded image, γ is the weight of the regularization term on k, ν is the weight of the gradient-operator regularization term, and k is the blur kernel.
Further, the sharp image estimate is (likewise reconstructed):
\hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|e_l \otimes x\|_s^s,
where \hat{x} is the matrix describing the sharp image estimate, \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight of the term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the weight of the sparse prior constraint term of the reference image, e_l is a first-order filter, s is the sparse index, and x is the matrix describing the initial sharp image.
Further, step (3) also comprises: introducing auxiliary variables u_l (l ∈ 1, 2, ..., L), where L is the total number of auxiliary variables, establishing an energy equation, decomposing the energy equation into an x-subproblem and a u-subproblem, and solving the x-subproblem and the u-subproblem for their respective optimal solutions to obtain the sharp image estimate.
Further, the energy equation is (the formula image is omitted in the source; the following form is reconstructed from the symbol definitions):
E(x, u) = \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|u_l\|_s^s + \beta \sum_l \|u_l - e_l \otimes x\|_2^2,
where E(x, u) denotes the energy equation, x is the matrix describing the initial sharp image, \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight coefficient of the term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the coefficient of the term \sum_l \|u_l\|_s^s, s is the sparse index, β is the weight of the regularization term \|u_l - e_l \otimes x\|_2^2, and e_l is a first-order filter.
Further, the x-subproblem is: \hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \beta \sum_l \|u_l - e_l \otimes x\|_2^2.
Further, the u-subproblem is, for each l: \hat{u}_l = \arg\min_{u_l} \tau \|u_l\|_s^s + \beta \|u_l - e_l \otimes x\|_2^2.
Further, when the new reference image is used as the reference image of step (1) in step (6) and steps (1)-(5) are repeated, the reference image is intercepted with step length b to obtain the dictionary set, where b < a.
In general, compared with the prior art, the above technical solutions conceived by the present invention can achieve the following beneficial effects:
1. The invention performs A iterations to obtain the target image and the target sparse representation coefficient vector; at this point the target image is already a good restoration of the degraded image, and the initial localization of the target image in the reference image is obtained from the maximum value in the target sparse representation coefficient vector. The search range of the target image is thereby narrowed to a range smaller than the reference image, and the above algorithm is finally applied again within this range to match the target image precisely and obtain the final restored image and localization result. This effectively solves the problems of noise sensitivity and of poor restoration quality severely harming the matching task in existing blurred-image restoration and matching methods, and makes the method applicable to visual navigation systems.
2. The invention provides a method of joint image restoration and matching: during the iterations, the restoration and matching tasks promote each other, which can correct possible initial mislocalizations and continuously increase the confidence of a correct localization. Distance-weighting information is incorporated into the sparse representation model so that the weights between data of different classes become smaller, increasing discriminability. By introducing auxiliary variables, establishing an energy equation, decomposing it into an x-subproblem and a u-subproblem, and solving each for its optimal solution to obtain the sharp image estimate, the convergence speed of the algorithm is further improved and the formulation becomes more concise and clear. The invention is therefore particularly suitable for the field of blurred-image matching and localization.
FIG. 1 is a flowchart of the integrated image restoration and matching method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the integrated image restoration and matching algorithm based on a distance-weighted sparse representation prior provided by an embodiment of the present invention;
FIG. 3(a) is a reference image provided by an embodiment of the present invention;
FIG. 3(b) is the blurred real-time image to be located provided by an embodiment of the present invention;
FIG. 3(c) is the localization result of the direct matching algorithm provided by an embodiment of the present invention;
FIG. 3(d) is the localization result of the DSRC algorithm provided by an embodiment of the present invention;
FIG. 3(e) is the localization result of the SRC algorithm provided by an embodiment of the present invention;
FIG. 3(f) is the localization result of the JRL-SR algorithm provided by an embodiment of the present invention;
FIG. 3(g) is the preliminary localization result of the integrated image restoration and matching method provided by an embodiment of the present invention;
FIG. 3(h) is the final localization result of the integrated image restoration and matching method provided by an embodiment of the present invention.
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it. Moreover, the technical features involved in the various embodiments described below may be combined with each other as long as they do not conflict.
As shown in FIG. 1, an integrated image restoration and matching method based on a distance-weighted sparse representation prior comprises:
(1) intercepting the reference image with step length a to obtain a dictionary set, obtaining the initialized sparse representation coefficient vector of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm, and restoring the degraded image with the initialized sparse representation coefficient vector and the dictionary set to obtain the initial sharp image of the degraded image;
(2) estimating the blur kernel using the constraints imposed on it by the degraded image and the initial sharp image together with gradient-operator regularization, to obtain a blur kernel estimate;
(3) estimating the initial sharp image using the blur kernel estimate and the constraint of the dictionary D on the initial sharp image, to obtain a sharp image estimate;
(4) applying the distance-weighted sparse representation algorithm to obtain the sparse representation coefficient vector estimate of the sharp image estimate over the dictionary set;
(5) updating the degraded image in step (2) with the sharp image estimate obtained in step (3), then iterating steps (2)-(4) A times to obtain the target image and the target sparse representation coefficient vector, and obtaining the initial localization of the target image in the reference image from the maximum value in the target sparse representation coefficient vector;
(6) taking the initial localization as the center of a new reference image whose height and width are those of the target image plus the step length a respectively, cropping the new reference image from the reference image, using the new reference image as the reference image of step (1), intercepting the reference image with step length b to obtain the dictionary set, where b < a, and repeating steps (1)-(5) to obtain the restored image of the target image and the localization result of the target image in the reference image.
After the A iterations, the initial localization accuracy of the target image in the reference image is measured and the localization success rate is computed. The initial localization accuracy is measured by the positional deviation, i.e., the pixel difference between the actual position of the target image in the reference image and the initially localized position. The localization success rate is computed statistically as the proportion of all target images to be located whose initial positional deviation is within the specified tolerance: all target images with an initial deviation of at most 5 pixels are considered successfully localized, and those with a deviation greater than 5 are considered localization failures.
If after A iterations the localization success rate is at least 85%, A is considered the optimal number of iterations at the current reference-image scale, and the iteration can proceed to the next, finer reference-image scale.
As shown in FIG. 2, step (1) comprises:
(1-1) intercepting the reference image with step length a to obtain the dictionary set D = [i_1, i_2, ..., i_m], where i_m is the m-th patch obtained by intercepting the reference image with step length a;
(1-2) obtaining the initialized sparse representation coefficient vector α of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm;
(1-3) restoring the degraded image with the initialized sparse representation coefficient vector and the dictionary set to obtain the initial sharp image of the degraded image, x = Dα, where x is the matrix describing the initial sharp image.
Preferably, in the embodiment of the present invention, the blur kernel estimate is (the formula image is omitted in the source; the following form is reconstructed from the symbol definitions):
\hat{k} = \arg\min_k \|x \otimes k - y\|_2^2 + \gamma \|k\|_2^2 + \nu \|\nabla k\|_2^2,
where x is the matrix describing the initial sharp image, y is the matrix describing the degraded image, γ is the weight of the regularization term on k, ν is the weight of the gradient-operator regularization term, and k is the blur kernel.
Preferably, in the embodiment of the present invention, the sharp image estimate is (likewise reconstructed):
\hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|e_l \otimes x\|_s^s,
where \hat{x} is the matrix describing the sharp image estimate, \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight of the term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the weight of the sparse prior constraint term of the reference image, e_l is a first-order filter, s is the sparse index, and x is the matrix describing the initial sharp image.
Preferably, in the embodiment of the present invention, step (3) also comprises: introducing auxiliary variables u_l (l ∈ 1, 2, ..., L), where L is the total number of auxiliary variables, establishing an energy equation, decomposing the energy equation into an x-subproblem and a u-subproblem, and solving the x-subproblem and the u-subproblem for their respective optimal solutions to obtain the sharp image estimate. The energy equation is:
E(x, u) = \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \tau \sum_l \|u_l\|_s^s + \beta \sum_l \|u_l - e_l \otimes x\|_2^2,
where E(x, u) denotes the energy equation, x is the matrix describing the initial sharp image, \hat{k} is the blur kernel estimate, y is the matrix describing the degraded image, η is the weight coefficient of the term \|x - D\hat{\alpha}\|_2^2, D is the dictionary set, \hat{\alpha} is the sparse representation coefficient vector estimate, τ is the coefficient of the term \sum_l \|u_l\|_s^s, s is the sparse index, β is the weight of the regularization term \|u_l - e_l \otimes x\|_2^2, and e_l is a first-order filter.
The x-subproblem is: \hat{x} = \arg\min_x \|\hat{k} \otimes x - y\|_2^2 + \eta \|x - D\hat{\alpha}\|_2^2 + \beta \sum_l \|u_l - e_l \otimes x\|_2^2.
The u-subproblem is, for each l: \hat{u}_l = \arg\min_{u_l} \tau \|u_l\|_s^s + \beta \|u_l - e_l \otimes x\|_2^2.
In the embodiment of the present invention, a is 5 and b is 2. Figures 3(a)-3(h) compare the localization performance of the invention with other algorithms on some test images, where FIG. 3(a) shows the reference image, FIG. 3(b) the blurred real-time image to be located, FIG. 3(c) the localization result of the direct matching algorithm, FIG. 3(d) that of the DSRC algorithm, FIG. 3(e) that of the SRC algorithm, FIG. 3(f) that of the JRL-SR algorithm, FIG. 3(g) that of the JRL-DSR1 algorithm proposed by the present invention (preliminary localization), and FIG. 3(h) that of the JRL-DSR2 algorithm proposed by the present invention (precise localization refined from the preliminary result). As can be seen from Figures 3(a)-3(h), when the target image is severely blurred, the proposed algorithm can, by combining the restoration and matching tasks, correct possible initial mislocalizations and continuously increase the confidence of a correct localization; the experimental results also show that this integrated method outperforms handling deblurring and localization separately.
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, and improvement made within the spirit and principles of the present invention shall be included within its scope of protection.
Claims (9)
- An integrated image restoration and matching method based on a distance-weighted sparse representation prior, characterized by comprising: (1) intercepting the reference image with step length a to obtain a dictionary set, obtaining the initialized sparse representation coefficient vector of the degraded image over the dictionary set using the distance-weighted sparse representation algorithm, and restoring the degraded image with the initialized sparse representation coefficient vector and the dictionary set to obtain the initial sharp image of the degraded image; (2) estimating the blur kernel using the constraints imposed on it by the degraded image and the initial sharp image together with gradient-operator regularization, to obtain a blur kernel estimate; (3) estimating the initial sharp image using the blur kernel estimate and the constraint of the dictionary D on the initial sharp image, to obtain a sharp image estimate; (4) applying the distance-weighted sparse representation algorithm to obtain the sparse representation coefficient vector estimate of the sharp image estimate over the dictionary set; (5) updating the degraded image in step (2) with the sharp image estimate obtained in step (3), then iterating steps (2)-(4) A times to obtain the target image and the target sparse representation coefficient vector, and obtaining the initial localization of the target image in the reference image from the maximum value in the target sparse representation coefficient vector; (6) taking the initial localization as the center of a new reference image whose height and width are those of the target image plus the step length a respectively, cropping the new reference image from the reference image, using the new reference image as the reference image of step (1), and repeating steps (1)-(5) to obtain the restored image of the target image and the localization result of the target image in the reference image.
- The integrated image restoration and matching method based on a distance-weighted sparse representation prior according to claim 1 or 2, characterized in that step (3) further comprises: introducing auxiliary variables u_l (l ∈ 1, 2, ..., L), where L is the total number of auxiliary variables, establishing an energy equation, decomposing the energy equation into an x-subproblem and a u-subproblem, and solving the x-subproblem and the u-subproblem for their respective optimal solutions to obtain the sharp image estimate.
- The integrated image restoration and matching method based on a distance-weighted sparse representation prior according to claim 1 or 2, characterized in that, in step (6), when the new reference image is used as the reference image of step (1) and steps (1)-(5) are repeated, the reference image is intercepted with step length b to obtain the dictionary set, where b < a.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18865347.1A EP3567545A4 (en) | 2018-03-15 | 2018-03-28 | IMAGE RESTORATION AND MATCHING INTEGRATION METHOD BASED ON DISTANCE-WEIGHTED PRIMARY PARCIMONIOUS PRIORITY REPRESENTATION |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810217553.0 | 2018-03-15 | ||
CN201810217553.0A CN108520497B (zh) | 2018-03-15 | 2018-03-15 | 基于距离加权稀疏表达先验的图像复原与匹配一体化方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019174068A1 true WO2019174068A1 (zh) | 2019-09-19 |
Family
ID=63433960
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/080754 WO2019174068A1 (zh) | 2018-03-15 | 2018-03-28 | 基于距离加权稀疏表达先验的图像复原与匹配一体化方法 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3567545A4 (zh) |
CN (1) | CN108520497B (zh) |
WO (1) | WO2019174068A1 (zh) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110689503A (zh) * | 2019-10-09 | 2020-01-14 | 深圳大学 | 一种壁画图像修复方法、系统及存储介质 |
CN111027567A (zh) * | 2019-10-30 | 2020-04-17 | 四川轻化工大学 | 一种基于算法学习的边缘提取方法 |
CN113487491A (zh) * | 2021-05-26 | 2021-10-08 | 辽宁工程技术大学 | 一种基于稀疏性与非局部均值自相似性的图像复原方法 |
CN114071353A (zh) * | 2021-11-04 | 2022-02-18 | 中国人民解放军陆军工程大学 | 结合聚类算法的压缩感知无源被动式目标定位方法 |
CN114897734A (zh) * | 2022-05-18 | 2022-08-12 | 北京化工大学 | 一种基于梯度方向先验的被测目标图像复原方法 |
CN115619659A (zh) * | 2022-09-22 | 2023-01-17 | 北方夜视科技(南京)研究院有限公司 | 基于正则化高斯场模型的低照度图像增强方法与系统 |
CN116167948A (zh) * | 2023-04-21 | 2023-05-26 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | 一种基于空变点扩散函数的光声图像复原方法及系统 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109903233B (zh) * | 2019-01-10 | 2021-08-03 | 华中科技大学 | 一种基于线性特征的联合图像复原和匹配方法及系统 |
CN109949234B (zh) * | 2019-02-25 | 2020-10-02 | 华中科技大学 | 基于深度网络的视频复原模型训练方法及视频复原方法 |
CN110176029B (zh) * | 2019-04-29 | 2021-03-26 | 华中科技大学 | 基于层级稀疏表示的图像复原与匹配一体化方法及系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7359576B1 (en) * | 2004-02-27 | 2008-04-15 | Adobe Systems Incorporated | Using difference kernels for image filtering |
US20130242059A1 (en) * | 2010-12-27 | 2013-09-19 | 3Dmedia Corporation | Primary and auxiliary image capture devices for image processing and related methods |
CN103761710A (zh) * | 2014-01-08 | 2014-04-30 | 西安电子科技大学 | 基于边缘自适应的高效图像盲去模糊方法 |
CN104091350A (zh) * | 2014-06-20 | 2014-10-08 | 华南理工大学 | 一种利用运动模糊信息的物体跟踪方法 |
CN105957024A (zh) * | 2016-04-20 | 2016-09-21 | 西安电子科技大学 | 基于图像块先验与稀疏范数的盲去模糊方法 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101046387A (zh) * | 2006-08-07 | 2007-10-03 | 南京航空航天大学 | 利用景象匹配提高导航系统精度的方法及组合导航仿真系统 |
US8699790B2 (en) * | 2011-11-18 | 2014-04-15 | Mitsubishi Electric Research Laboratories, Inc. | Method for pan-sharpening panchromatic and multispectral images using wavelet dictionaries |
CN103607558A (zh) * | 2013-11-04 | 2014-02-26 | 深圳市中瀛鑫科技股份有限公司 | 一种视频监控系统及其目标匹配方法和装置 |
CN104751420B (zh) * | 2015-03-06 | 2017-12-26 | 湖南大学 | 一种基于稀疏表示和多目标优化的盲复原方法 |
CN106791273B (zh) * | 2016-12-07 | 2019-08-20 | 重庆大学 | 一种结合帧间信息的视频盲复原方法 |
CN107451961B (zh) * | 2017-06-27 | 2020-11-17 | 重庆邮电大学 | 多幅模糊噪声图像下清晰图像的恢复方法 |
CN107341479A (zh) * | 2017-07-11 | 2017-11-10 | 安徽大学 | 一种基于加权稀疏协作模型的目标跟踪方法 |
- 2018-03-15: CN application CN201810217553.0A filed; patent CN108520497B granted, not in force (Expired - Fee Related)
- 2018-03-28: PCT application PCT/CN2018/080754 filed (published as WO2019174068A1)
- 2018-03-28: EP application EP18865347.1A filed (EP3567545A4), withdrawn
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7359576B1 (en) * | 2004-02-27 | 2008-04-15 | Adobe Systems Incorporated | Using difference kernels for image filtering |
US20130242059A1 (en) * | 2010-12-27 | 2013-09-19 | 3Dmedia Corporation | Primary and auxiliary image capture devices for image processing and related methods |
CN103761710A (zh) * | 2014-01-08 | 2014-04-30 | 西安电子科技大学 | 基于边缘自适应的高效图像盲去模糊方法 |
CN104091350A (zh) * | 2014-06-20 | 2014-10-08 | 华南理工大学 | 一种利用运动模糊信息的物体跟踪方法 |
CN105957024A (zh) * | 2016-04-20 | 2016-09-21 | 西安电子科技大学 | 基于图像块先验与稀疏范数的盲去模糊方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3567545A4 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110689503A (zh) * | 2019-10-09 | 2020-01-14 | 深圳大学 | 一种壁画图像修复方法、系统及存储介质 |
CN111027567A (zh) * | 2019-10-30 | 2020-04-17 | 四川轻化工大学 | 一种基于算法学习的边缘提取方法 |
CN113487491A (zh) * | 2021-05-26 | 2021-10-08 | 辽宁工程技术大学 | 一种基于稀疏性与非局部均值自相似性的图像复原方法 |
CN113487491B (zh) * | 2021-05-26 | 2024-04-26 | 辽宁工程技术大学 | 一种基于稀疏性与非局部均值自相似性的图像复原方法 |
CN114071353A (zh) * | 2021-11-04 | 2022-02-18 | 中国人民解放军陆军工程大学 | 结合聚类算法的压缩感知无源被动式目标定位方法 |
CN114071353B (zh) * | 2021-11-04 | 2024-02-09 | 中国人民解放军陆军工程大学 | 结合聚类算法的压缩感知无源被动式目标定位方法 |
CN114897734A (zh) * | 2022-05-18 | 2022-08-12 | 北京化工大学 | 一种基于梯度方向先验的被测目标图像复原方法 |
CN114897734B (zh) * | 2022-05-18 | 2024-05-28 | 北京化工大学 | 一种基于梯度方向先验的被测目标图像复原方法 |
CN115619659A (zh) * | 2022-09-22 | 2023-01-17 | 北方夜视科技(南京)研究院有限公司 | 基于正则化高斯场模型的低照度图像增强方法与系统 |
CN115619659B (zh) * | 2022-09-22 | 2024-01-23 | 北方夜视科技(南京)研究院有限公司 | 基于正则化高斯场模型的低照度图像增强方法与系统 |
CN116167948A (zh) * | 2023-04-21 | 2023-05-26 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | 一种基于空变点扩散函数的光声图像复原方法及系统 |
CN116167948B (zh) * | 2023-04-21 | 2023-07-18 | 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) | 一种基于空变点扩散函数的光声图像复原方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
EP3567545A4 (en) | 2019-12-18 |
CN108520497A (zh) | 2018-09-11 |
EP3567545A1 (en) | 2019-11-13 |
CN108520497B (zh) | 2020-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019174068A1 (zh) | 基于距离加权稀疏表达先验的图像复原与匹配一体化方法 | |
Bai et al. | Adaptive dilated network with self-correction supervision for counting | |
EP3457357A1 (en) | Methods and systems for surface fitting based change detection in 3d point-cloud | |
CN109974743B (zh) | 一种基于gms特征匹配及滑动窗口位姿图优化的视觉里程计 | |
CN108765327B (zh) | 一种基于景深和稀疏编码的图像去雨方法 | |
US9846974B2 (en) | Absolute rotation estimation including outlier detection via low-rank and sparse matrix decomposition | |
CN110796616B (zh) | 基于范数约束和自适应加权梯度的湍流退化图像恢复方法 | |
WO2022218396A1 (zh) | 图像处理方法、装置和计算机可读存储介质 | |
CN111160229B (zh) | 基于ssd网络的视频目标检测方法及装置 | |
CN110598636A (zh) | 一种基于特征迁移的舰船目标识别方法 | |
CN112837331A (zh) | 一种基于自适应形态重建模糊三维sar图像目标提取方法 | |
CN110276788B (zh) | 用于红外成像式导引头目标跟踪的方法和装置 | |
CN113421210B (zh) | 一种基于双目立体视觉的表面点云重建方法 | |
JP7294275B2 (ja) | 画像処理装置、画像処理プログラムおよび画像処理方法 | |
CN113888603A (zh) | 基于光流跟踪和特征匹配的回环检测及视觉slam方法 | |
CN110211148B (zh) | 一种基于目标状态预估的水下图像预分割方法 | |
CN109902720B (zh) | 基于子空间分解进行深度特征估计的图像分类识别方法 | |
CN115239776B (zh) | 点云的配准方法、装置、设备和介质 | |
US11756319B2 (en) | Shift invariant loss for deep learning based image segmentation | |
CN113807206B (zh) | 一种基于去噪任务辅助的sar图像目标识别方法 | |
CN115082519A (zh) | 一种基于背景感知相关滤波的飞机跟踪方法、存储介质和电子设备 | |
CN115035326A (zh) | 一种雷达图像与光学图像精确匹配方法 | |
CN110060258B (zh) | 基于高斯混合模型聚类的视网膜sd-oct图像分割方法和装置 | |
Wu et al. | Robust variational optical flow algorithm based on rolling guided filtering | |
CN117575966B (zh) | 一种用于无人机高空悬停拍摄场景的视频稳像方法 |
Legal Events
- ENP — Entry into the national phase. Ref document number: 2018865347; Country of ref document: EP; Effective date: 20190417
- NENP — Non-entry into the national phase. Ref country code: DE