CN109785370A - Weak-texture image registration method based on a spatial time-series model

Weak-texture image registration method based on a spatial time-series model

Info

Publication number
CN109785370A
CN109785370A (application CN201811517008.XA)
Authority
CN
China
Prior art keywords
image
registration
feature
reference picture
model
Prior art date
Legal status
Granted
Application number
CN201811517008.XA
Other languages
Chinese (zh)
Other versions
CN109785370B (en)
Inventor
郝飞 (Hao Fei)
朱松青 (Zhu Songqing)
陈茹雯 (Chen Ruwen)
高海涛 (Gao Haitao)
许有熊 (Xu Youxiong)
韩亚丽 (Han Yali)
胡运涛 (Hu Yuntao)
Current Assignee
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201811517008.XA priority Critical patent/CN109785370B/en
Publication of CN109785370A publication Critical patent/CN109785370A/en
Application granted granted Critical
Publication of CN109785370B publication Critical patent/CN109785370B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Abstract

The present invention proposes a weak-texture image registration method based on a spatial time-series model, comprising the following steps: acquire a reference image and an image to be registered; detect feature points of the reference image and of the image to be registered with the spatial time-series model method; determine the orientation of each feature point and construct a compound feature-point descriptor containing gradient-direction information; match feature points between the reference image and the image to be registered by feature search; reject mismatched pairs to obtain the optimal matching points and the transformation model; and transform the image to be registered with the transformation model to obtain the registration result. Compared with region-based methods, the invention replaces direct gray-level operations with image features and therefore has the efficiency and accuracy advantages of feature-based methods. Compared with existing feature-based methods, the invention reduces the dependence of the algorithm on salient features: no auxiliary features need to be added manually, which avoids scratching the part surface, improves the robustness of the algorithm, and enables automatic registration.

Description

Weak-texture image registration method based on a spatial time-series model
Technical field
The present invention relates to the field of image registration, and in particular to a weak-texture image registration method based on a spatial time-series model.
Background technique
Image registration is the process of aligning two or more images of the same scene acquired at different times, from different viewpoints, or with different sensors; it brings the reference image and the image to be registered into geometric correspondence. Image registration is used in a very wide range of applications, including machine vision, three-dimensional reconstruction, remote-sensing image processing, target classification and retrieval, and image understanding and fusion. Image registration methods can be divided into region-based methods, line-feature or region-feature based methods, and point-feature based methods.
Region-based registration methods compute the similarity of two images directly from their gray-level information. The procedure is as follows: first select a similarity metric suited to the characteristics of the images; then search the parameter space of the chosen geometric transformation model according to some search strategy, looking for the geometric transformation parameters that maximize the similarity. These methods register images by searching the image to be registered for the region most similar to the reference image; no feature extraction is involved, and image blocks are used as the registration unit during matching. The choice of similarity metric is the key factor determining the accuracy and efficiency of template registration.
Yin Sumin of Jiangsu University proposed a template-matching detection algorithm for on-line automatic inspection of engine cylinder blocks; defect detection in specific small regions of the cylinder block was realized experimentally with a sequential similarity algorithm improved by a coarse-to-fine search strategy (Yin Sumin, Bao Hongli, Ji Binbin, et al. Engine cylinder block defect detection based on small-region template matching [J]. Sensors and Microsystems, 2012(06): 143-145.). Wu Xiaojun et al. of the Shenzhen Academy of Aerospace Technology proposed a fast, high-precision geometric-template registration method that copes with displacement, rotation, scaling and partial occlusion of the target image and can be applied to machine-vision calibration and recognition (Wu Xiaojun. A fast high-precision geometric template matching method with rotation and scaling for machine vision: China, CN2016102093086 [P]. 2016-09-07.). Wang Monan et al. of Harbin University of Science and Technology built a medical image atlas library; because CT gray-level images have clear resolution, the best atlas image labels are fused into a single image used as the registration template according to the labels of the target image and the atlas labels in the library, combining the gray-level and gradient information of the image labels, which effectively improves registration accuracy (Wang Monan, Li Pengcheng, Jing Juntong. A medical image registration algorithm based on multi-atlas label fusion: China, CN107093190 [P]. 2017-08-25.).
Region-based registration does not rely on feature extraction but operates globally on the whole image; its principle is simple and it is easy to implement. In recent years, improvements to this class of algorithms have concentrated on the construction of the similarity measure and the optimization of the search strategy. Improving the similarity measure alone, however, cannot solve the high time complexity caused by the large amount of information processed. Improved search strategies progressively narrow the search range and raise efficiency to some extent. But unlike medical images, whose gray levels are clear and well delimited, applications such as large-part measurement often involve large areas of weak texture. In a weak-texture region the gray values are nearly identical everywhere, so coarse registration produces many mismatches and the algorithm fails. In addition, image noise is unavoidable and alters the image gray levels, degrading registration accuracy.
Point-feature registration methods generally take corners or feature points as the basic unit of image matching and estimate the geometric transformation model and its parameters from the correspondences between the reference image and the image to be registered. A suitable feature-point type is chosen first and feature points are detected; the feature points of the two images are then matched; a geometric transformation model is selected and its parameters are estimated; finally the registration is completed by transforming the image. The way local image features are described is the key to the accuracy and efficiency of point-feature registration.
Liu Bo et al. of Harbin University of Science and Technology proposed an improved SIFT algorithm based on a pyramid structure and feature-point neighborhoods: the original image from the imaging system is denoised and its gray-level information reduced to remove irrelevant information and recover the useful information, so that the number of feature points obtained after feature extraction is greatly reduced and the registration speed improved (Liu Bo, Pang Ying. An improved image registration algorithm based on SIFT features: China, CN107886530A [P]. 2018-04-06.). Sang Hongshi et al. of Huazhong University of Science and Technology extract a certain number of feature points in a first detection pass, generate a mask image from the feature-point coordinates and their neighborhoods, perform a second feature detection with the mask image, and merge the feature points detected in the two passes as the final detection result. By exploiting the positional relations between feature points, a certain number of feature points are still retained in regions of weak feature strength (Sang Hongshi, Dong Tong, Hu Peng. An image registration method based on two-pass feature detection: China, CN108182700A [P]. 2018-06-19.). Zheng Hong et al. of Wuhan University proposed a multi-dimensional feature registration method for workpieces: ORB feature points of a template workpiece picture and of the workpiece picture to be positioned are obtained and combined with a multi-dimensional registration algorithm constrained by k-means clustering, improving the robustness and positioning accuracy of the matching algorithm (Zheng Hong, Zheng Chaohui. A multi-dimensional feature registration method for workpiece positioning: China, CN108502533A. 2018-09-11.).
Point-feature registration is very widely used. Point features are rich in image structure information, and when registering images that contain salient point features the above methods achieve high robustness and registration accuracy. In large-part vision measurement, however, the local image usually contains only the more microscopic "traces" left by machining, and no salient point features exist. If auxiliary means such as manual marking are adopted, the workpiece surface may be scratched, which is also unfavourable for on-line inspection.
Line-feature or region-feature registration methods generally use discontinuities of the local image gray level to determine the edge-line features of the image, which involves the first or second derivative of the image gray-level function. First, image preprocessing is performed as needed; image filtering and enhancement strike a compromise between enhancing edges and suppressing noise. Edge features are then detected by exploiting the sharp gray-level changes at image edges. Finally, the geometric transformation model is recovered from the correspondence between the edge features of the two images, completing line-feature based registration. Region-feature registration additionally relies on the edge detection result, using features such as the centroid or center of gravity of the closed region enclosed by the edge contour as the basis for registration.
Wang Shuan et al. of Xidian University extract edges of SAR images with the Canny operator, fit straight lines to the edges to obtain the rotation parameter between the images, then apply a Fourier transform to the rotated image to be registered and compute the horizontal and vertical translations between the images to complete SAR image registration (Wang Shuan, Jiao Licheng, Zhang Nan, Liu Kun, Ma Wenping, Ma Jingjing, Zhang Tao, Liu Chuan. A SAR image registration method based on straight lines and FFT: China, CN103839262A. 2014-06-04.). Zhang Yanning of Northwestern Polytechnical University extracts salient regions in non-homogeneous images and describes them with Zernike rotation-invariant moments, improving the accuracy of non-homogeneous image registration (Zhang Yanning et al. A visible-infrared image registration method based on salient-region features and edge degree: China, CN106447704A. 2017-02-22.). Li Huiqi et al. of Beijing Institute of Technology extract the contour information of the head from X-ray images and colour photographs with the Canny algorithm, select reference feature points to assist coarse registration, and then use the coherent point drift algorithm for curve matching to obtain a fused image in which the X-ray image and the colour photograph are superimposed (Li Huiqi, Wang Shumeng. An X-ray film and colour photograph registration method based on contour features: China, CN107240128A. 2017-10-10.).
Line/region-feature registration methods first perform edge detection and then select, by means of restrictive conditions, the line segments that satisfy the requirements as local features. Natural images usually contain fairly salient line or region features, so these methods achieve high registration accuracy and robustness. However, the machining "traces" contained in part images are usually faint, the gray-level differences are small, and even with preprocessing such as image enhancement the edges remain difficult to detect. Moreover, the machining "traces" are markedly periodic, so constructing the feature-point descriptor from the edge information alone may make registration impossible.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides a weak-texture image registration method based on a spatial time-series model, which detects image feature points with a spatial time-series model and solves the problem of feature extraction for images without salient features.
To solve the above problems, the technical solution adopted by the present invention is as follows:
A weak-texture image registration method based on a spatial time-series model, comprising the following steps:
S1: acquire a weak-texture image without salient point or line features as the reference image, and acquire another weak-texture image having a certain degree of overlap with the reference image as the image to be registered;
S2: detect feature points of the reference image and of the image to be registered with the spatial time-series model method;
S3: determine the orientation of each feature point and construct a compound feature-point descriptor containing gradient-direction information;
S4: match feature points between the reference image and the image to be registered by feature search;
S5: reject mismatched pairs to obtain the optimal matching points and the transformation model;
S6: transform the image to be registered with the transformation model to obtain the registration result.
Preferably, step S2 comprises the following specific steps:
S21: select each pixel of the reference image in turn;
S22: take an image block around the selected pixel from the reference image, establish a spatial time-series model, and solve the model parameters;
S23: test the applicability of the established model with the AIC criterion;
S24: construct the feature-point descriptor of the selected pixel from the model type parameters and the applicability test result.
Preferably, step S2 further comprises a step S25 of performing significance detection on the feature points; step S25 is as follows:
divide the reference image or the image to be registered into several image blocks;
for each image block, successively compute the Euclidean distance between the feature-point descriptor of the selected pixel and the feature-point descriptors of all other pixels in the current block, weight the Euclidean distances, and sum them to obtain a weighted sum;
when the weighted sum is greater than a threshold, the selected pixel is judged to be a significant feature point and is retained for image registration; otherwise, the selected pixel is not a significant feature point and is discarded and not used for image registration.
Preferably, step S3 comprises the following specific steps:
compute the gradient magnitude and direction of the selected feature point, collect with a histogram the gradient directions and magnitudes of the pixels in the neighborhood of the selected feature point, take the direction of the histogram peak as the principal direction of the feature point, and construct the compound feature-point descriptor from the principal direction and the feature-point descriptor constructed in step S2.
Preferably, in step S4, the feature search is carried out with a KD-tree, a fast nearest-neighbor algorithm or a K-nearest-neighbor algorithm to find the correspondence between the feature points of the reference image and those of the image to be registered.
Preferably, in step S5, the feature points obtained in S4 are purified with the random sample consensus algorithm to reject mismatched pairs.
Compared with the prior art, the effects of the present invention are as follows:
Compared with region-based methods, the present invention replaces direct gray-level operations with image features and therefore has the efficiency and accuracy advantages of feature-based methods; compared with existing feature-based methods, the present invention reduces the dependence of the algorithm on salient features, requires no manually added auxiliary features, avoids scratching the part, improves the robustness of the algorithm, and enables automatic registration.
Brief description of the drawings
Fig. 1 is the prediction window of the spatial time-series model of the present invention.
Fig. 2 is the flowchart of image registration with the spatial time-series model of the present invention.
Fig. 3 is the flowchart of feature-point detection with the spatial time-series model of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. The described embodiments are, of course, only some of the embodiments of the present invention, not all of them.
Referring to Fig. 1, Fig. 1 is the prediction window of the spatial time-series model of the present invention. In the figure: 1, the reference (or to-be-predicted) gray-level image, of size m × n pixels; 2, the pixel to be predicted; 3, the known (or historical) pixels.
With reference to Fig. 2 and Fig. 3, the weak-texture image registration method based on a spatial time-series model proposed by the present invention comprises the following steps:
(1) Acquire the reference image and the image to be registered
Acquire a weak-texture image without salient point or line features as the reference image, and acquire another weak-texture image having a certain degree of overlap with the reference image as the image to be registered; both the reference image and the image to be registered are gray-level images.
(2) Detect the feature points of the reference image and of the image to be registered with the spatial time-series model method
(1) Construct the feature-point descriptor
The spatial time-series model G(r; w; w1, w2, ..., wr) has various types, which depend on the type parameters of the model, namely the model order r, the regression-window type w and the regression-window sizes wi (i = 1, 2, ..., r). The order r is a positive integer; r = 1 means the spatial time-series model is linear, and r > 1 means it is nonlinear. The value of w is 1, 2, 3 or 4, indicating that the regression window adopted is of the non-causal, semi-causal, causal or strongly causal type, respectively. In Fig. 1, the black dot is the pixel to be predicted and the grey dots are the known (or historical) pixels; from left to right, the regions enclosed by the grey and black pixels are non-causal, semi-causal, causal and strongly causal regression windows of size 2. Different types of spatial time-series model have different numbers of modelling parameters, and a column vector α denotes the modelling parameters of the model G(r; w; w1, w2, ..., wr). To test the applicability of the established model, a model test is carried out with the AIC criterion, and the test result is written AIC(r).
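By way of illustration only, the following Python sketch enumerates the four candidate model types used in this embodiment and builds the non-causal regression neighbourhood around a pixel; the function and variable names are assumptions made for this sketch and are not part of the patent.

```python
import numpy as np

def noncausal_neighbors(h, l, size=1):
    """Coordinates of the non-causal regression window of the given size
    around (h, l), excluding (h, l) itself (the 8-neighbourhood N8 for size=1)."""
    return [(h + dh, l + dl)
            for dh in range(-size, size + 1)
            for dl in range(-size, size + 1)
            if (dh, dl) != (0, 0)]

# Candidate model types used below, written as (order r, window type w, window sizes);
# w = 1 denotes the non-causal window type.
CANDIDATE_MODELS = [(1, 1, (1,)), (1, 1, (2,)), (2, 1, (1, 1)), (2, 1, (2, 1))]
```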
Select any pixel (u, v) of the reference image (or of the image to be registered); its gray value is f(u, v).
Select the model G(1; 1; 1), i.e. the order of the model is 1, the regression-window type is the non-causal type, and the regression-window size is 1.
Crop an image block B1(u, v) = {(s, t) ∈ Z²: u−1 ≤ s ≤ u+4, v−1 ≤ t ≤ v+2} from the original image;
Define the plane point set B2(p, q) = {(p, q) ∈ Z²: 1 ≤ p ≤ 6, 1 ≤ q ≤ 4}; B2(p, q) thus indexes the pixel coordinates of the image block B1(u, v), and f(p, q) denotes the gray value of a pixel. Establish the spatial time-series model:
f(h, l) = a(h, l)α + ε(h, l)
where h ∈ Z and 2 ≤ h ≤ 5, l ∈ Z and 2 ≤ l ≤ 3; α is the modelling parameter of the G(1; 1; 1) model, an 8-dimensional column vector; a(h, l) is an 8-dimensional row vector whose elements are f(h′, l′), (h′, l′) ∈ N8(h, l).
Letting h and l take their different values yields 8 equations in total; these 8 equations are written in the matrix form Aα = b, where A = {a(h, l)} and b = (f(h, l))ᵀ.
The system has 8 unknown parameters, so the 8 simultaneous equations can be solved by the least-squares method to obtain the modelling parameters α. Taking α as the iterative initial value, the model parameters are updated with a GM (generalized M-estimator) estimate α̂; from the iterative update mechanism of GM estimation it is known that α̂ has better noise robustness than α.
Compute the variance of the modelling residuals ε̂(h, l) = f(h, l) − a(h, l)α̂, and test the applicability of the model with the AIC criterion, obtaining the test result AIC(r1), where r1 is the model order.
From the modelling parameters and the model applicability test result, construct a 9-dimensional feature vector f1 (the 8 modelling parameters together with the AIC test result).
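As a rough illustration of the G(1; 1; 1) step, the sketch below fits the 8 regression equations of one pixel by ordinary least squares and appends an applicability score to form the 9-dimensional vector f1. The GM-estimator update is omitted, and the AIC expression used (N·ln σ̂² + 2r) is a standard stand-in rather than the formula of the patent; all names are hypothetical.

```python
import numpy as np

def fit_g111(img, u, v):
    """Fit the G(1;1;1) model around pixel (u, v): each of the 8 interior pixels
    of the 6x4 block B1(u, v) is regressed on its 8 non-causal neighbours."""
    block = img[u - 1:u + 5, v - 1:v + 3].astype(float)   # B1(u, v), 6 x 4 pixels
    rows, rhs = [], []
    for h in range(1, 5):                                  # interior pixels (0-based)
        for l in range(1, 3):
            neigh = [block[h + dh, l + dl]                 # a(h, l): the 8 neighbours
                     for dh in (-1, 0, 1) for dl in (-1, 0, 1)
                     if (dh, dl) != (0, 0)]
            rows.append(neigh)
            rhs.append(block[h, l])                        # f(h, l)
    A, b = np.asarray(rows), np.asarray(rhs)               # 8 equations, 8 unknowns
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)          # least-squares estimate of alpha
    resid = b - A @ alpha
    sigma2 = np.mean(resid ** 2) + 1e-12                   # residual variance
    aic = len(b) * np.log(sigma2) + 2 * 1                  # stand-in for AIC(r1), r1 = 1
    return np.concatenate([alpha, [aic]])                  # 9-dimensional vector f1
```

The 25-, 45- and 61-dimensional vectors f2, f3 and f4 would be obtained in the same way from the larger blocks and regressor layouts described below.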
Select the model G(1; 1; 2), i.e. the order of the model is 1, the regression-window type is the non-causal type, and the regression-window size is 2.
Crop an image block B1(u, v) = {(s, t) ∈ Z²: u−2 ≤ s ≤ u+7, v−2 ≤ t ≤ v+5} from the original image;
Define the plane point set B2(p, q) = {(p, q) ∈ Z²: 1 ≤ p ≤ 10, 1 ≤ q ≤ 8}; B2(p, q) indexes the pixel coordinates of the image block B1(u, v), and f(p, q) denotes the gray value of a pixel. Establish the spatial time-series model:
Aα + ε = b
where A = {a(h, l)}, α is the modelling parameter (a 24-dimensional vector), b = (f(h, l))ᵀ, h, l ∈ Z with 2 ≤ h ≤ 7 and 2 ≤ l ≤ 5; a(h, l) is a 24-dimensional row vector whose elements are f(h′, l′), (h′, l′) ∈ N24(h, l).
The least-squares method solves the equations for the parameter α, and GM estimation updates the model parameters to α̂.
Compute the variance of the residuals and test the model applicability with the AIC criterion, the result being AIC(r2), where r2 is the model order.
From the modelling parameters and the model applicability test result, construct a 25-dimensional feature vector f2.
Select the model G(2; 1; 1, 1), i.e. the order of the model is 2, the regression-window type is the non-causal type, and the regression-window size is 1.
Crop an image block B1(u, v) = {(s, t) ∈ Z²: u−1 ≤ s ≤ u+7, v−1 ≤ t ≤ v+7} from the original image;
Define the plane point set B2(p, q) = {(p, q) ∈ Z²: 1 ≤ p ≤ 9, 1 ≤ q ≤ 9}; B2(p, q) indexes the pixel coordinates of the image block B1(u, v), and f(p, q) denotes the gray value of a pixel. Establish the spatial time-series model:
Aα + ε = b
where A = {a(h, l), c(h, l)}, α is the modelling parameter (a 44-dimensional vector), b = (f(h, l))ᵀ, h, l ∈ Z with 2 ≤ h ≤ 8 and 2 ≤ l ≤ 8. Here a(h, l) is an 8-dimensional row vector whose elements are f(h′, l′), (h′, l′) ∈ N8(h, l); c(h, l) is a 36-dimensional row vector obtained by multiplying the 8 elements of a(h, l) pairwise and removing duplicate terms.
The least-squares method solves the equations for the parameter α, and GM estimation updates the model parameters to α̂.
Compute the variance of the residuals and test the model applicability with the AIC criterion, the result being AIC(r3), where r3 is the model order.
From the modelling parameters and the model applicability test result, construct a 45-dimensional feature vector f3.
Select the model G(2; 1; 2, 1), i.e. the order of the model is 2, the regression-window type is the non-causal type, the size of the linear regression window is 2, and the size of the nonlinear regression window is 1.
Crop an image block B1(u, v) = {(s, t) ∈ Z²: u−2 ≤ s ≤ u+9, v−2 ≤ t ≤ v+9} from the original image.
Define the plane point set B2(p, q) = {(p, q) ∈ Z²: 1 ≤ p ≤ 12, 1 ≤ q ≤ 12}; B2(p, q) indexes the pixel coordinates of the image block B1(u, v), and f(p, q) denotes the gray value of a pixel. Establish the spatial time-series model:
Aα + ε = b
where A = {a(h, l), c(h, l)}, α is the modelling parameter (a 60-dimensional vector), b = (f(h, l))ᵀ, h, l ∈ Z with 3 ≤ h ≤ 10 and 3 ≤ l ≤ 10; a(h, l) is the same as in the G(1; 1; 2) model, and c(h, l) is the same as in the G(2; 1; 1, 1) model.
The least-squares method solves the equations for the parameter α, and GM estimation updates the model parameters to α̂.
Compute the variance of the residuals and test the model applicability with the AIC criterion, the result being AIC(r4), where r4 is the model order.
From the modelling parameters and the model applicability test result, construct a 61-dimensional feature vector f4.
Construct the feature-point descriptor F(u, v) = (f1, f2, f3, f4); F(u, v) is a 140-dimensional row vector.
Change the value of u or v, reselect a pixel, and repeat the above modelling process to construct the feature-point descriptor of the new pixel, and so on until the whole image has been traversed and a feature-point descriptor has been defined for every pixel.
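A minimal sketch of this per-pixel traversal, assuming per-model fitting functions written in the same style as fit_g111 above (their names are hypothetical); it concatenates the four model-based vectors into the 140-dimensional descriptor F(u, v) for every pixel far enough from the image border.

```python
import numpy as np

def describe_image(img, fit_fns, margin=10):
    """Build a descriptor for every pixel whose modelling windows fit inside the
    image; fit_fns is e.g. [fit_g111, fit_g112, fit_g211, fit_g221] and their
    concatenated outputs form the 140-dimensional F(u, v)."""
    H, W = img.shape
    descriptors = {}
    for u in range(margin, H - margin):
        for v in range(margin, W - margin):
            descriptors[(u, v)] = np.concatenate([fn(img, u, v) for fn in fit_fns])
    return descriptors
```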
(2) Detect the significance of the feature points
Divide the reference image (or the image to be registered) into several image blocks, the k-th block having size mk × nk. The significance of the feature points in each image block is tested as follows:
Select any two feature points in the image block and compute the Euclidean distance between them; collecting all such results forms a column vector dk, whose dimension is C(mk × nk, 2).
Delete the repeated elements of dk (keeping only one of each) and sort dk in descending order of its elements; the dimension of the updated vector is denoted ε.
Take an elementary function y = f(x) as the weight function, required to be monotonically decreasing on its domain and to satisfy f(x) ≥ 1. An elementary function meeting these conditions is y = a^(γ(x−ε)), where 0 < a < 1 and 0 < γ < 1.
Select any pixel (p, q) in the image block; its feature-point descriptor vector is F(p, q). Select another pixel (p′, q′) in the image block, whose feature-point descriptor vector is F(p′, q′). Compute the distance d between F(p, q) and F(p′, q′), find the position x of d in the sorted distance vector, substitute x into the function y = f(x), take the function value y as the weight, and compute the weighted distance.
Change the pixel (p′, q′), recompute d, find its position in the sorted distance vector again, compute a new weight, and compute the new weighted distance. Repeat this mk × nk − 1 times until the whole image block has been traversed, obtaining mk × nk − 1 weighted distances in total. Sum these weighted distances; if the sum is greater than the threshold, pixel (p, q) is judged to be a significant feature point and will be used for image registration; otherwise it is discarded.
Change the value of p or q and repeat the above process mk × nk times to complete the significance test of all feature points in the image block.
Change the value of k and repeat the above process to complete the significance test of all feature points in the whole image.
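A minimal sketch of the significance test for one image block, assuming the descriptors of the block are given as a dictionary; the values of a, γ and the threshold are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def significant_points(block_desc, a=0.5, gamma=0.5, threshold=100.0):
    """block_desc: {(p, q): descriptor vector} for the pixels of one image block.
    Returns the pixels judged to be significant feature points."""
    pts = list(block_desc.keys())
    # All pairwise descriptor distances in the block form the vector d_k.
    d_k = [np.linalg.norm(block_desc[i] - block_desc[j])
           for n, i in enumerate(pts) for j in pts[n + 1:]]
    d_sorted = np.array(sorted(set(d_k), reverse=True))   # deduplicate, sort descending
    eps = len(d_sorted)                                    # dimension of the sorted vector
    weight = lambda x: a ** (gamma * (x - eps))            # y = a^(gamma*(x - eps)) >= 1
    significant = []
    for p in pts:
        total = 0.0
        for q in pts:
            if q == p:
                continue
            d = np.linalg.norm(block_desc[p] - block_desc[q])
            x = int(np.argmin(np.abs(d_sorted - d))) + 1   # position of d (1-based)
            total += weight(x) * d                         # weighted distance
        if total > threshold:                              # significant feature point
            significant.append(p)
    return significant
```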
(3) Construct the compound feature-point descriptor
An orientation is assigned to each feature point so that rotated images can be registered. First, compute the gradient magnitude and direction of the feature point whose pixel coordinate is (u, v) from the gray-level differences of its neighbouring pixels.
Centred on (u, v) and with radius W/2 (W being the window size used when modelling with the spatial time-series model), collect the gradient directions and magnitudes of the pixels in the neighborhood of the selected feature point with a histogram; the direction of the histogram peak represents the principal direction β of the key point. Using the principal-direction information and the feature-point descriptor constructed above, construct the compound feature-point descriptor.
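For illustration, a sketch of the orientation assignment with standard finite-difference gradients and a 36-bin orientation histogram; the exact gradient formulas, the window size W and the way the principal direction is combined with the 140-dimensional descriptor are not reproduced from the patent, so the choices below (including simply appending β) are assumptions.

```python
import numpy as np

def principal_direction(img, u, v, W=12, nbins=36):
    """Estimate the principal direction beta of the feature point at (u, v) from
    a magnitude-weighted gradient-orientation histogram over a neighbourhood of
    radius W/2."""
    r = W // 2
    patch = img[u - r - 1:u + r + 2, v - r - 1:v + r + 2].astype(float)
    gy, gx = np.gradient(patch)                            # gray-level differences
    mag = np.hypot(gx, gy)[1:-1, 1:-1]
    ang = (np.degrees(np.arctan2(gy, gx)) % 360.0)[1:-1, 1:-1]
    hist, edges = np.histogram(ang, bins=nbins, range=(0, 360), weights=mag)
    return edges[np.argmax(hist)]                          # direction of the histogram peak

def compound_descriptor(F_uv, beta):
    """One possible compound descriptor: the 140-dim vector with beta appended."""
    return np.concatenate([F_uv, [beta]])
```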
(4) Feature search and matching
Through feature search and matching, the correspondence between the feature points of the reference image and those of the image to be registered is found. The feature-point descriptor (a vector) has a very high dimension; to improve search efficiency and reduce the memory-space complexity, fast algorithms such as the KD-tree, fast nearest-neighbor algorithms or the K-nearest-neighbor algorithm can be used.
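A sketch of the feature search with a KD-tree (SciPy's cKDTree); the Lowe-style ratio test is an added heuristic for rejecting ambiguous matches and is not prescribed by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_features(ref_pts, ref_desc, tgt_pts, tgt_desc, ratio=0.8):
    """Match descriptors of the image to be registered (tgt) against those of the
    reference image (ref) and return the matched coordinate pairs."""
    tree = cKDTree(np.asarray(ref_desc))
    dists, idx = tree.query(np.asarray(tgt_desc), k=2)     # two nearest neighbours each
    matches = []
    for i in range(len(tgt_desc)):
        d1, d2 = dists[i]
        if d1 < ratio * d2:                                # keep unambiguous matches only
            matches.append((ref_pts[idx[i][0]], tgt_pts[i]))
    return matches
```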
(5) Reject mismatched pairs and obtain the optimal matching points and the transformation model
Using the feature-point pairs obtained in the previous step (at least 8 pairs), solve the parameters of the geometric transformation model and realize the registration of the reference image and the image to be registered. Since mismatches may exist among the coarsely matched feature-point pairs, a purification step can be applied with an appropriate algorithm such as random sample consensus (RANSAC).
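A sketch of the purification and model-estimation step using OpenCV's RANSAC homography estimator; a projective model is used here as one possible geometric transformation model, and the RANSAC threshold is an illustrative value.

```python
import numpy as np
import cv2

def estimate_transform(matches, ransac_thresh=3.0):
    """matches: list of ((row, col) in reference, (row, col) in image to be registered).
    Rejects mismatches with RANSAC and returns the transformation plus the inliers."""
    # OpenCV expects (x, y) = (col, row) point coordinates.
    ref_xy = np.float32([(c, r) for (r, c), _ in matches]).reshape(-1, 1, 2)
    tgt_xy = np.float32([(c, r) for _, (r, c) in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(tgt_xy, ref_xy, cv2.RANSAC, ransac_thresh)
    inliers = [m for m, ok in zip(matches, mask.ravel()) if ok]
    return H, inliers
```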
(6) Transform the image to be registered with the transformation model to obtain the registration result.
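Finally, a sketch of the warping step that produces the registration result, assuming the projective model estimated above.

```python
import cv2

def register(img_to_register, H, ref_shape):
    """Warp the image to be registered into the reference frame with the estimated
    transformation model to obtain the registration result."""
    h, w = ref_shape[:2]
    return cv2.warpPerspective(img_to_register, H, (w, h))
```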
The foregoing is merely a description of the preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent replacements and improvements made within the spirit and principles of the invention shall fall within the scope of protection of the invention.

Claims (6)

1. A weak-texture image registration method based on a spatial time-series model, characterized in that the method comprises the following steps:
S1: acquiring a weak-texture image without salient point or line features as a reference image, and acquiring another weak-texture image having a certain degree of overlap with the reference image as an image to be registered;
S2: detecting feature points of the reference image and of the image to be registered with the spatial time-series model method;
S3: determining the orientation of each feature point, and constructing a compound feature-point descriptor containing gradient-direction information;
S4: matching feature points between the reference image and the image to be registered by feature search;
S5: rejecting mismatched pairs to obtain optimal matching points and a transformation model;
S6: transforming the image to be registered with the transformation model to obtain the registration result.
2. The weak-texture image registration method based on a spatial time-series model according to claim 1, characterized in that step S2 comprises the following specific steps:
S21: selecting each pixel of the reference image in turn;
S22: taking an image block around the selected pixel from the reference image, establishing a spatial time-series model, and solving the model parameters;
S23: testing the applicability of the established model with the AIC criterion;
S24: constructing the feature-point descriptor of the selected pixel from the model type parameters and the applicability test result.
3. The weak-texture image registration method based on a spatial time-series model according to claim 1, characterized in that step S2 further comprises a step S25 of performing significance detection on the feature points, step S25 comprising the following specific steps:
dividing the reference image or the image to be registered into several image blocks;
for each image block, successively computing the Euclidean distance between the feature-point descriptor of the selected pixel and the feature-point descriptors of all other pixels in the current block, weighting the Euclidean distances, and summing them to obtain a weighted sum;
when the weighted sum is greater than a threshold, judging the selected pixel to be a significant feature point and retaining it for image registration; otherwise, the selected pixel is not a significant feature point and is discarded and not used for image registration.
4. The weak-texture image registration method based on a spatial time-series model according to claim 1, characterized in that step S3 comprises the following specific steps:
computing the gradient magnitude and direction of the selected feature point, collecting with a histogram the gradient directions and magnitudes of the pixels in the neighborhood of the selected feature point, taking the direction of the histogram peak as the principal direction of the feature point, and constructing the compound feature-point descriptor from the principal direction and the feature-point descriptor constructed in step S2.
5. The weak-texture image registration method based on a spatial time-series model according to claim 1, characterized in that, in step S4, the feature search is carried out with a KD-tree, a fast nearest-neighbor algorithm or a K-nearest-neighbor algorithm to find the correspondence between the feature points of the reference image and those of the image to be registered.
6. The weak-texture image registration method based on a spatial time-series model according to claim 1, characterized in that, in step S5, the feature points obtained in S4 are purified with the random sample consensus algorithm to reject mismatched pairs.
CN201811517008.XA 2018-12-12 2018-12-12 Weak texture image registration method based on space time sequence model Active CN109785370B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811517008.XA CN109785370B (en) 2018-12-12 2018-12-12 Weak texture image registration method based on space time sequence model

Publications (2)

Publication Number Publication Date
CN109785370A true CN109785370A (en) 2019-05-21
CN109785370B CN109785370B (en) 2023-09-15

Family

ID=66496142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811517008.XA Active CN109785370B (en) 2018-12-12 2018-12-12 Weak texture image registration method based on space time sequence model

Country Status (1)

Country Link
CN (1) CN109785370B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287972A (en) * 2019-06-13 2019-09-27 南京航空航天大学 A kind of animal painting contours extract and matching process
CN112288784A (en) * 2020-10-09 2021-01-29 武汉大学 Descriptor neighborhood self-adaptive weak texture remote sensing image registration method
CN112308887A (en) * 2020-09-30 2021-02-02 西北工业大学 Real-time registration method for multi-source image sequence
CN112927272A (en) * 2021-03-30 2021-06-08 南京工程学院 Image fusion method based on space general autoregressive model
CN117437523A (en) * 2023-12-21 2024-01-23 西安电子科技大学 Weak trace detection method combining SAR CCD and global information capture

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102740082A (en) * 2011-04-01 2012-10-17 佳能株式会社 Image processing apparatus and control method thereof
CN103020945A (en) * 2011-09-21 2013-04-03 中国科学院电子学研究所 Remote sensing image registration method of multi-source sensor
CN106878687A (en) * 2017-04-12 2017-06-20 吉林大学 A kind of vehicle environment identifying system and omni-directional visual module based on multisensor

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287972A (en) * 2019-06-13 2019-09-27 南京航空航天大学 A kind of animal painting contours extract and matching process
CN112308887A (en) * 2020-09-30 2021-02-02 西北工业大学 Real-time registration method for multi-source image sequence
CN112308887B (en) * 2020-09-30 2024-03-22 西北工业大学 Multi-source image sequence real-time registration method
CN112288784A (en) * 2020-10-09 2021-01-29 武汉大学 Descriptor neighborhood self-adaptive weak texture remote sensing image registration method
CN112288784B (en) * 2020-10-09 2022-03-04 武汉大学 Descriptor neighborhood self-adaptive weak texture remote sensing image registration method
CN112927272A (en) * 2021-03-30 2021-06-08 南京工程学院 Image fusion method based on space general autoregressive model
CN112927272B (en) * 2021-03-30 2023-12-12 南京工程学院 Image fusion method based on space general autoregressive model
CN117437523A (en) * 2023-12-21 2024-01-23 西安电子科技大学 Weak trace detection method combining SAR CCD and global information capture
CN117437523B (en) * 2023-12-21 2024-03-19 西安电子科技大学 Weak trace detection method combining SAR CCD and global information capture

Also Published As

Publication number Publication date
CN109785370B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN109785370A (en) A kind of weak texture image method for registering based on space time series model
CN107063228B (en) Target attitude calculation method based on binocular vision
Armangué et al. Overall view regarding fundamental matrix estimation
JP6216508B2 (en) Method for recognition and pose determination of 3D objects in 3D scenes
CN109903313B (en) Real-time pose tracking method based on target three-dimensional model
CN106960449B (en) Heterogeneous registration method based on multi-feature constraint
JP5385105B2 (en) Image search method and system
CN104123554B (en) SIFT image characteristic extracting methods based on MMTD
JP2016197287A (en) Information processing apparatus, information processing method, and program
CN107492107B (en) Object identification and reconstruction method based on plane and space information fusion
CN111145232A (en) Three-dimensional point cloud automatic registration method based on characteristic information change degree
CN110223355B (en) Feature mark point matching method based on dual epipolar constraint
Belhaoua et al. Error evaluation in a stereovision-based 3D reconstruction system
CN112017223A (en) Heterologous image registration method based on improved SIFT-Delaunay
WO2015035462A1 (en) Point feature based 2d-3d registration
CN112734816B (en) Heterologous image registration method based on CSS-Delaunay
CN110487254B (en) Rapid underwater target size measuring method for ROV
Ziqiang et al. Research of the algorithm calculating the length of bridge crack based on stereo vision
Wan et al. A performance comparison of feature detectors for planetary rover mapping and localization
Jian-dong et al. 3D curve structure reconstruction from a sparse set of unordered images
Hui et al. Surface measurement based on instantaneous random illumination
CN112529960A (en) Target object positioning method and device, processor and electronic device
Kanuki et al. Automatic compensation of radial distortion by minimizing entropy of histogram of oriented gradients
CN114877826B (en) Binocular stereo matching three-dimensional measurement method, system and storage medium
Carrasco et al. Automated visual inspection using trifocal analysis in an uncalibrated sequence of images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant