CN103778626A - Quick image registration method based on visual remarkable area - Google Patents

Quick image registration method based on visually salient regions

Info

Publication number
CN103778626A
CN103778626A (application CN201310752016.3A); granted publication CN103778626B
Authority
CN
China
Prior art keywords
image
parameter
triangle
initial
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310752016.3A
Other languages
Chinese (zh)
Other versions
CN103778626B (en)
Inventor
陈禾
马龙
毕福昆
章学静
陈亮
龙腾
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201310752016.3A priority Critical patent/CN103778626B/en
Publication of CN103778626A publication Critical patent/CN103778626A/en
Application granted granted Critical
Publication of CN103778626B publication Critical patent/CN103778626B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a fast image registration method based on visually salient regions, belonging to the field of image processing. The method registers an image A to be matched against a reference image B. Salient regions are extracted from A and B separately. The centroids of every three salient regions of A form the vertices of a first feature triangle; second feature triangles are obtained from B in the same way. A first and a second feature triangle that are similar form a similar-triangle pair, and the pair (a, b) with maximal similarity is selected. Based on (a, b), an affine transformation model from A to B is established, whose initial matching parameters comprise the horizontal translation, the vertical translation, the rotation angle, and the scale parameter. Starting from the initial matching parameters, a Powell search with a set step length produces candidate search values. Each search value is used to run a registration test between A and B, and the search value corresponding to the best registration test is taken as the optimal matching parameters. A is then affine-transformed with the optimal matching parameters to obtain the final registration result.

Description

Fast image registration method based on visually salient regions
Technical field
The invention belongs to the field of image processing, and specifically relates to a method that uses visually salient regions to assist fast image registration.
Background technology
Multi-source image registration is the process of determining optimal inter-image transformation parameters according to some similarity measure, transforming two or more images of the same scene acquired by different sensors, from different viewpoints, or at different times into the same coordinate system, and obtaining an optimal match at the pixel level. By this definition, registration covers transformation relations on two levels: spatial position and pixel value. Because registration gives the interpretation of a scene unified support from multiple data sources, it is widely used in medical image processing, remote-sensing image analysis, target recognition, object-change detection, and related fields.
Current image registration methods fall roughly into two classes:
(1) Feature-based registration extracts salient features of the images as reference information, establishes a geometric transformation between the two images, and performs registration. By extracting features that remain stable across images, this class overcomes the gray-level inconsistency between different sensors, but it places high demands on the reliability and robustness of feature extraction. Because only a small amount of feature information is processed, the computational load is greatly reduced, but registration accuracy is limited.
(2) Gray-level/region-based registration builds a similarity-measure function of the spatial transformation parameters from the information shared between images, and finds the spatial transformation by using an optimization algorithm to search for the extremum of that function. This class exploits all available image information and is widely used for registering same-source images. Its good accuracy and robustness come at the cost of a large computational load and poor real-time performance. Existing remedies include: (i) registration based on edges and cross-correlation, which extracts the edges of both images with an edge operator, uses the cross-correlation coefficient of the binary edge images as the similarity function, and takes the parameters maximizing it as the result — fast, but because only binary images are used, too little image information is exploited and mismatches occur easily; (ii) fast registration based on shape matching, which matches target shapes in the two images and uses the matching result as the initial value of an optimization search to improve efficiency — although this combines the feature-based and gray-level approaches, the extracted features characterize the outlines of well-defined targets and the extraction process requires many empirically set values, so the algorithm adapts poorly to more general natural scenes containing many similar shapes.
In summary, gray-level methods offer high registration accuracy and robustness but poor real-time performance; feature-based methods offer good real-time performance and cross-source adaptability but weaker accuracy and robustness. The two classes therefore have distinct, complementary strengths. Most current methods belong strictly to one class and expose that class's defects along with its advantages, so they cannot achieve automatic registration that is simultaneously accurate, real-time, and widely applicable.
Summary of the invention
In view of this, the present invention proposes a target-level fast image registration method combining gray-level and feature-based registration, which improves the real-time performance of traditional gray-level methods and the feature robustness of feature-based methods.
To achieve the above object, the technical scheme of the present invention is a method for matching an image A to be registered against a reference image B, comprising the steps of:
Step 1: performing visually salient region extraction and threshold segmentation on the image A to be registered and on the reference image B separately, obtaining a first salient-region sequence and a second salient-region sequence;
Step 2: obtaining the centroids of all first salient regions, called first centroids, every three of which serve as the vertices of a first feature triangle; and obtaining the centroids of all second salient regions, called second centroids, every three of which determine a second feature triangle;
Step 3: if a first feature triangle and a second feature triangle are essentially similar under an affine transformation, the two form a similar-triangle pair; finding the pair (a, b) with maximal similarity, where a is a first feature triangle and b is a second feature triangle;
Step 4: based on (a, b), establishing the affine transformation model from the image A to be registered to the reference image B, which comprises three mapping modes — translation, rotation, and scale change — with parameters comprising the horizontal and vertical translation amounts, the rotation angle, and the scale parameter; taking these parameters as the initial matching parameters.
The scale parameter is the ratio of the side lengths of a and b; the rotation angle is the angle formed by corresponding sides of a and b; the translation is the relative translation between a and b.
Step 5: using Powell's search method, starting from the initial matching parameters with a set step length, taking each search value as a test matching parameter; affine-transforming the image A to be registered with the test matching parameter obtained at each step, performing repeated registration tests against the reference image B, and selecting the test matching parameter corresponding to the best registration test as the optimal matching parameters.
The image A to be registered is then affine-transformed with the optimal matching parameters to obtain the final registration result.
Further, in step 3, for an obtained similar-triangle pair (a′, b′), the similarity Likelihood of (a′, b′) is computed from the side lengths by a formula [given as an image in the original and not reproduced here], where l_{a′,1}, l_{a′,2}, l_{a′,3} are the three sides of a′ and l_{b′,1}, l_{b′,2}, l_{b′,3} are the three sides of b′.
More preferably, the affine transformation model in step 4 is:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = s \begin{bmatrix} \cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}$$

where s is the scale parameter, computed [by a formula given as an image in the original and not reproduced here] from the three sides l_{a,1}, l_{a,2}, l_{a,3} of a and the three sides l_{b,1}, l_{b,2}, l_{b,3} of b;
γ is the rotation angle, taking the value γ₁ or γ₂, where γ₂ = π − γ₁ and 0 ≤ γ₁ ≤ π; γ_a is the inclination of the longest side of the first feature triangle a with respect to the horizontal axis of the first image coordinate system, and γ_b is the inclination of the longest side of the second feature triangle b with respect to the horizontal axis of the second image coordinate system;
Δx and Δy are the translation parameters: the horizontal translation Δx = x_a − x_b and the vertical translation Δy = y_a − y_b, where (x_a, y_a) is the centroid pixel coordinate in a and (x_b, y_b) is the centroid pixel coordinate in b; (x, y) is a point in the image, and (x′, y′) is its coordinate after applying this affine transformation model.
When determining the initial matching parameters in step 4, they comprise the initial horizontal translation, initial vertical translation, initial rotation angle, and initial scale parameter: s serves as the initial scale parameter, Δx as the initial horizontal translation, and Δy as the initial vertical translation. The initial rotation angle is determined as follows:
When γ = γ₁, the model is the γ₁-type affine transformation model. The first salient region at the vertex corresponding to the shortest side of the first feature triangle a in image A is transformed by the γ₁-type model, and the second-order invariant moment φ_{a,2} and third-order invariant moment φ_{a,3} of this transformed region are computed; meanwhile, the second-order invariant moment φ_{b,2} and third-order invariant moment φ_{b,3} are computed for the second salient region at the vertex corresponding to the shortest side of the second feature triangle b in reference image B. The γ₁-type divergence measure is then
SSIM₁ = (φ_{a,2} − φ_{b,2})² + (φ_{a,3} − φ_{b,3})².
When γ = γ₂, the model is the γ₂-type affine transformation model, and the γ₂-type divergence measure SSIM₂ is obtained in the same way.
If SSIM₁ ≥ SSIM₂, set γ_m = γ₂; otherwise γ_m = γ₁.
γ_m is taken as the initial rotation angle.
Further, if SSIM₁ and SSIM₂ are both greater than 1, no initial matching parameters are produced in step 4; the method returns to step 1 and resets the segmentation threshold of the thresholding method.
Further, when selecting in step 5 the test matching parameter corresponding to the best of the repeated registration tests as the optimal matching parameters, the best result among the repeated registration tests is judged as follows:
Step 501: for the result of affine-transforming the image A to be registered with a test matching parameter, compute the joint normalized cross-correlation coefficient CNCC:

$$\mathrm{CNCC} = \prod_{k=1}^{3} \frac{\sum_{i,j}\bigl(w_{a,k}(x_i,y_j)-\overline{w_{a,k}}\bigr)\bigl(w_{b,k}(x_i,y_j)-\overline{w_{b,k}}\bigr)}{\sqrt{\sum_{i,j}\bigl(w_{a,k}(x_i,y_j)-\overline{w_{a,k}}\bigr)^2 \sum_{i,j}\bigl(w_{b,k}(x_i,y_j)-\overline{w_{b,k}}\bigr)^2}}$$

where w_{a,k}(x_i, y_j) is the pixel value at (x_i, y_j) within the window given by the minimum bounding rectangle of the first salient region at the vertex corresponding to side l_{a,k} of the first feature triangle, with i and j ranging over all pixels in that window; \(\overline{w_{a,k}}\) is the mean pixel value in that window; k = 1, 2, 3;
w_{b,k}(x_i, y_j) is the pixel value at (x_i, y_j) in reference image B, with i and j ranging as above; \(\overline{w_{b,k}}\) is the mean pixel value in the corresponding region of B.
Step 502: compare the CNCC value of the current search value with that of the previous search value; when the two differ by less than a set threshold, take the current search value as the optimal matching parameters.
More preferably, the image A to be registered and the reference image B are images from the same source.
More preferably, the visual saliency model is the pulse cosine transform saliency model.
Beneficial effects:
1. By combining gray-level and feature-based registration, the present invention uses the fast matching result based on inter-region structural features as the initial parameter value for gray-level registration, so that the latter searches for the accurate registration near this initial value and obtains a more accurate final result. This prevents the search algorithm from falling into local optima and delivers high real-time performance while guaranteeing registration accuracy.
2. Compared with existing fast registration algorithms based on shape matching, the present invention adapts more widely: by using salient-region features modeled on human vision, it effectively locks the regions of interest onto the salient parts of the scene, which are usually also the parts with higher correlation. The invention therefore has better general applicability than the prior art.
3. The present invention replaces existing local descriptors and structural features with affine-invariant features — the similarity ratio and the second- and third-order invariant moments — greatly reducing the computational load of the feature-matching process and benefiting both the real-time performance and the robustness of the registration algorithm.
4. The present invention uses the joint normalized cross-correlation as the similarity measure: instead of computing a correlation measure over the whole images, it computes only the joint cross-correlation of a limited number of salient regions, reducing computational complexity while guaranteeing sufficient similarity.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the image registration method based on visually salient regions;
Fig. 2 is a schematic diagram of the supplementary-angle relation among the transformation parameters.
Detailed description
The present invention is described below in conjunction with the accompanying drawings and an embodiment.
Embodiment 1:
The present invention provides an image registration method based on visually salient regions which, as shown in Fig. 1, matches an image A to be registered against a reference image B and comprises the steps of:
Step 1, visually salient region extraction:
The image A to be registered and the reference image B are each processed with a visual saliency model, yielding a first saliency map corresponding to A and a second saliency map corresponding to B. A thresholding method with a segmentation threshold then segments the two saliency maps, obtaining the first salient-region sequence L₁, L₂, …, L_M from the first map and the second salient-region sequence Y₁, Y₂, …, Y_N from the second.
Many saliency-extraction methods exist, for example the Itti model; the pulse cosine transform saliency model has the advantage of lower computational complexity, so this embodiment uses it. Taking the image A to be registered as an example, the first salient-region sequence L₁, L₂, …, L_M is produced as follows:
The image A to be registered is transformed by the two-dimensional discrete cosine transform to obtain A′:

$$A'_{pq} = \mathrm{DCT}(A) = \alpha_p \alpha_q \sum_{m=0}^{P-1} \sum_{n=0}^{Q-1} A_{mn} \cos\frac{\pi(2m+1)p}{2P} \cos\frac{\pi(2n+1)q}{2Q}, \quad 0 \le p \le P-1,\; 0 \le q \le Q-1$$

where A is a P × Q image, A′_{pq} is the gray value at row p, column q of A′, A_{mn} is the gray value at row m, column n of A, and

$$\alpha_p = \begin{cases} \sqrt{1/P}, & p = 0 \\ \sqrt{2/P}, & 1 \le p \le P-1 \end{cases} \qquad \alpha_q = \begin{cases} \sqrt{1/Q}, & q = 0 \\ \sqrt{2/Q}, & 1 \le q \le Q-1 \end{cases}$$

The sign-function response of A′ gives P: P_{pq} = sign(A′_{pq}).
The absolute value F of the inverse discrete cosine transform of P is computed:

$$F_{m'n'} = \bigl|\mathrm{IDCT}(P)\bigr| = \Bigl| \sum_{p=0}^{P-1} \sum_{q=0}^{Q-1} \alpha_p \alpha_q P_{pq} \cos\frac{\pi(2m'+1)p}{2P} \cos\frac{\pi(2n'+1)q}{2Q} \Bigr|, \quad 0 \le m' \le P-1,\; 0 \le n' \le Q-1$$

where F_{m′n′} is the gray value at row m′, column n′ of F. Gaussian filtering of the square of F gives the saliency map SM:

$$SM = \mathrm{Gaussian}(F^2) = G \otimes F^2$$

with, for example,

$$G = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} \quad\text{or}\quad G = \frac{1}{256}\begin{bmatrix} 1 & 4 & 6 & 4 & 1 \\ 4 & 16 & 24 & 16 & 4 \\ 6 & 24 & 36 & 24 & 6 \\ 4 & 16 & 24 & 16 & 4 \\ 1 & 4 & 6 & 4 & 1 \end{bmatrix}$$

A global thresholding method then segments the saliency map into salient regions.
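The pulse cosine transform saliency computation above can be sketched as follows. This is an illustrative implementation, not part of the patent disclosure: the use of SciPy, the function name `pct_saliency`, and the mean-relative segmentation threshold `thresh_ratio` are all assumptions.

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import convolve

def pct_saliency(img, thresh_ratio=3.0):
    """Pulse cosine transform saliency: sign of the 2-D DCT,
    inverse-transformed, squared, and Gaussian-smoothed."""
    A = np.asarray(img, dtype=np.float64)
    # A' = DCT(A): orthonormal 2-D DCT applied separably along both axes
    C = dct(dct(A, axis=0, norm='ortho'), axis=1, norm='ortho')
    # P = sign(A'): keep only the sign pattern of the transform
    P = np.sign(C)
    # F = |IDCT(P)|
    F = np.abs(idct(idct(P, axis=0, norm='ortho'), axis=1, norm='ortho'))
    # SM = G (*) F^2 with the 3x3 binomial (Gaussian-like) kernel
    G = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
    SM = convolve(F ** 2, G, mode='nearest')
    # Global threshold (assumed here as a multiple of the mean saliency)
    mask = SM > thresh_ratio * SM.mean()
    return SM, mask
```

In practice the binary `mask` would then be split into connected components to obtain the salient-region sequence L₁, …, L_M.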
Step 2, feature triangle extraction:
A first image coordinate system is established on the first saliency map. For each salient region L_i (1 ≤ i ≤ M) in the first salient-region sequence, its centroid coordinate (x_i, y_i) is extracted. Every group of three of these M centroids serves as the vertices of one triangle, yielding

$$C_M^3 = \frac{M(M-1)(M-2)}{6}$$

triangles in total, the first feature triangles.
A second image coordinate system is established on the second saliency map. For each salient region Y_j (1 ≤ j ≤ N) in the second salient-region sequence, its centroid coordinate (x_j, y_j) is extracted. Every group of three of these N centroids serves as the vertices of one triangle, yielding

$$C_N^3 = \frac{N(N-1)(N-2)}{6}$$

triangles in total, the second feature triangles.
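The centroid extraction and triangle enumeration of step 2 can be sketched as below. This is an illustrative sketch only; the function names and the use of `scipy.ndimage` for connected-component labeling are assumptions, not the patent's implementation.

```python
import numpy as np
from itertools import combinations
from scipy import ndimage

def region_centroids(mask):
    """Centroid (x, y) of each connected salient region in a binary mask."""
    labels, n = ndimage.label(mask)
    # center_of_mass returns (row, col) = (y, x); swap to (x, y)
    coms = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return [(c, r) for r, c in coms]

def feature_triangles(centroids):
    """Every group of three centroids spans one candidate feature triangle:
    C(M, 3) = M(M-1)(M-2)/6 triangles for M centroids."""
    return list(combinations(centroids, 3))
```

For example, M = 5 centroids yield C(5, 3) = 10 candidate feature triangles.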
Step 3, feature triangle matching, obtaining the triangle pair of maximal similarity:
If a first feature triangle and a second feature triangle are essentially similar under the affine transformation relation, the two form a similar-triangle pair; the pair (a, b) with maximal similarity is obtained, where a is a first feature triangle and b is a second feature triangle.
Many existing methods compute the similarity of similar triangles; to save computation, this embodiment computes the similarity of (a, b) with a formula [given as an image in the original and not reproduced here], where l_{a,1}, l_{a,2}, l_{a,3} are the three sides of the first feature triangle a with l_{a,1} ≤ l_{a,2} ≤ l_{a,3}, and l_{b,1}, l_{b,2}, l_{b,3} are the three sides of the second feature triangle b with l_{b,1} ≤ l_{b,2} ≤ l_{b,3}.
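Since the patent's exact similarity formula survives only as an unreproduced image, the sketch below uses an assumed side-ratio consistency score over the sorted side lengths — one common way to score triangle similarity, not necessarily the patent's formula.

```python
import numpy as np

def sorted_sides(tri):
    """Side lengths of a triangle given as three (x, y) vertices, ascending."""
    p = np.asarray(tri, dtype=float)
    d = [np.linalg.norm(p[i] - p[j]) for i, j in ((0, 1), (1, 2), (0, 2))]
    return sorted(d)

def similarity(tri_a, tri_b):
    """Assumed side-ratio consistency score (NOT the patent's own formula):
    1.0 when the sorted side ratios l_{a,k}/l_{b,k} agree exactly,
    decreasing toward 0 as they diverge."""
    r = np.array(sorted_sides(tri_a)) / np.array(sorted_sides(tri_b))
    return float(r.min() / r.max())  # in (0, 1]; 1 means similar triangles
```

A 3-4-5 triangle and its uniformly scaled copy score exactly 1.0, since all three sorted side ratios coincide.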
Step 4: based on (a, b), the affine transformation model from the image A to be registered to the reference image B is established. The model comprises three mapping modes — translation, rotation, and scale change — with parameters comprising the horizontal and vertical translation amounts, the rotation angle, and the scale parameter; these parameters serve as the initial matching parameters.
The scale parameter is the ratio of the side lengths of a and b; the rotation angle is the angle formed by corresponding sides of a and b; the translation is the relative translation between a and b.
Any model obeying the transformation law of an affine transformation may be used: for a point (x, y) in the image A to be registered, the scale parameter multiplies the coordinates, the rotation angle enters through a rotation matrix, and the translation is added.
In this embodiment, with (x′, y′) denoting the coordinate of (x, y) after the affine transformation, the model is

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = s \begin{bmatrix} \cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix}$$

where s is the scale parameter [formula given as an image in the original and not reproduced here, computed from the side lengths of a and b].
γ is the rotation angle, specifically the angle formed by corresponding sides of a and b. As shown in Fig. 2, the rotation angle may be the supplement of the actual matching angle, so γ may take the value γ₁ or γ₂. Because the angle between corresponding sides of a and b cannot be obtained directly — only the tangent tan γ_a of the inclination γ_a of the longest side of a with respect to the horizontal axis of the first image coordinate system, and the tangent tan γ_b of the inclination γ_b of the longest side of b with respect to the horizontal axis of the second image coordinate system — γ₁ and γ₂ are computed as

$$\gamma_1 = \arctan\Bigl(\frac{\tan\gamma_b - \tan\gamma_a}{1 + \tan\gamma_b \tan\gamma_a}\Bigr), \quad 0 \le \gamma_1 \le \pi, \qquad \gamma_2 = \pi - \gamma_1$$

Δx and Δy are the translation parameters: the horizontal translation Δx = x_a − x_b and the vertical translation Δy = y_a − y_b, where (x_a, y_a) is the centroid pixel coordinate in a and (x_b, y_b) is the centroid pixel coordinate in b.
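The affine model and the two rotation-angle candidates above can be sketched as follows; the function names are illustrative assumptions, but the matrix form and the γ₁/γ₂ formulas follow the equations in this step.

```python
import numpy as np

def affine_transform(pts, s, gamma, dx, dy):
    """Apply x' = s * R(gamma) * x + t with
    R(gamma) = [[cos g, sin g], [-sin g, cos g]]."""
    R = np.array([[np.cos(gamma),  np.sin(gamma)],
                  [-np.sin(gamma), np.cos(gamma)]])
    return s * np.asarray(pts, dtype=float) @ R.T + np.array([dx, dy])

def rotation_candidates(tan_ga, tan_gb):
    """gamma_1 = arctan((tan gb - tan ga) / (1 + tan gb * tan ga)),
    folded into [0, pi]; gamma_2 = pi - gamma_1 covers the
    supplementary-angle ambiguity."""
    g1 = np.arctan((tan_gb - tan_ga) / (1.0 + tan_gb * tan_ga))
    if g1 < 0:
        g1 += np.pi  # keep 0 <= gamma_1 <= pi
    return g1, np.pi - g1
```

With γ = 0, s = 1, and zero translation, the transform is the identity, which is a quick sanity check on the sign conventions.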
In this step, the initial matching parameters comprise the initial horizontal translation, initial vertical translation, initial rotation angle, and initial scale parameter: s serves as the initial scale parameter, Δx as the initial horizontal translation, and Δy as the initial vertical translation.
Considering that a difference in rotation direction may produce different rotation angles, the initial rotation angle is determined as follows. When γ = γ₁, the model is the γ₁-type affine transformation model. The first salient region at the vertex corresponding to the shortest side of the first feature triangle a in image A is transformed by the γ₁-type model, and the second-order invariant moment φ_{a,2} and third-order invariant moment φ_{a,3} of this transformed region are computed; meanwhile, the second-order invariant moment φ_{b,2} and third-order invariant moment φ_{b,3} are computed for the second salient region at the vertex corresponding to the shortest side of the second feature triangle b in reference image B. The γ₁-type divergence measure is then
SSIM₁ = (φ_{a,2} − φ_{b,2})² + (φ_{a,3} − φ_{b,3})².
Likewise, when γ = γ₂ the model is the γ₂-type affine transformation model, and the γ₂-type divergence measure SSIM₂ is obtained.
If SSIM₁ ≥ SSIM₂, set γ_m = γ₂; otherwise γ_m = γ₁. The initial matching parameters are then [Δx, Δy, γ_m, s].
A divergence measure exceeding 1 indicates a large discrepancy. In this embodiment, when the discrepancy is large, both γ₁ and γ₂ may be unsuitable, and the segmentation threshold must be reset to produce new values of γ₁ and γ₂. Therefore, if SSIM₁ and SSIM₂ are both greater than 1, no initial matching parameters are produced in step 4; the segmentation threshold of the thresholding method in step 1 is reset, steps 1–4 are repeated, and the initial matching parameters [Δx, Δy, γ_m, s] are finally determined.
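The second- and third-order invariant moments and the divergence measure SSIM can be sketched as below, assuming they are Hu's second and third moment invariants (the patent names them φ₂ and φ₃ but does not reproduce their formulas, so this identification is an assumption).

```python
import numpy as np

def hu_phi2_phi3(region):
    """Second and third Hu invariant moments of a gray/binary region
    (assumed to match the patent's phi_2, phi_3)."""
    region = np.asarray(region, dtype=float)
    y, x = np.mgrid[:region.shape[0], :region.shape[1]]
    m00 = region.sum()
    xc, yc = (x * region).sum() / m00, (y * region).sum() / m00

    def eta(p, q):  # normalized central moment eta_pq
        mu = (((x - xc) ** p) * ((y - yc) ** q) * region).sum()
        return mu / m00 ** (1 + (p + q) / 2.0)

    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return phi2, phi3

def divergence(region_a, region_b):
    """SSIM_i = (phi_a2 - phi_b2)^2 + (phi_a3 - phi_b3)^2."""
    pa2, pa3 = hu_phi2_phi3(region_a)
    pb2, pb3 = hu_phi2_phi3(region_b)
    return (pa2 - pb2) ** 2 + (pa3 - pb3) ** 2
</```

A region compared with itself has zero divergence, consistent with SSIM being a discrepancy (not a similarity) measure.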
Step 5: using Powell's search method, starting from the initial matching parameters with a set step length, each search value is taken as a test matching parameter. The image A to be registered is affine-transformed with the test matching parameter obtained at each step, performing repeated registration tests against the reference image B; the test matching parameter corresponding to the best of these registration tests is selected as the optimal matching parameters.
The image A to be registered is then affine-transformed with the optimal matching parameters to obtain the final registration result.
In this embodiment, the best result among the repeated registration tests is judged as follows:
Step 501: for the result of affine-transforming the image A to be registered with a test matching parameter, compute the joint normalized cross-correlation coefficient CNCC:

$$\mathrm{CNCC} = \prod_{k=1}^{3} \frac{\sum_{i,j}\bigl(w_{a,k}(x_i,y_j)-\overline{w_{a,k}}\bigr)\bigl(w_{b,k}(x_i,y_j)-\overline{w_{b,k}}\bigr)}{\sqrt{\sum_{i,j}\bigl(w_{a,k}(x_i,y_j)-\overline{w_{a,k}}\bigr)^2 \sum_{i,j}\bigl(w_{b,k}(x_i,y_j)-\overline{w_{b,k}}\bigr)^2}}$$

where w_{a,k}(x_i, y_j) is the pixel value at (x_i, y_j) within the window given by the minimum bounding rectangle of the first salient region at the vertex corresponding to side l_{a,k} of the first feature triangle a, with i and j ranging over all pixels in that window; \(\overline{w_{a,k}}\) is the mean pixel value in that window; k = 1, 2, 3.
w_{b,k}(x_i, y_j) is the pixel value at (x_i, y_j) in reference image B, with i and j ranging as above, so that w_{b,k} picks out the pixels of the region of B corresponding to the window; \(\overline{w_{b,k}}\) is the mean pixel value in that region.
Step 502: compute the CNCC value for the current search value and compare it with the CNCC value of the previous search value; when the two differ by less than a set threshold, take the current search value as the optimal matching parameters, and affine-transform the image A to be registered with them to obtain the final registration result.
The set threshold is determined by repeated testing: different threshold values are tried, repeating step 502 each time, until the threshold whose final registration result is best among the tested values is found; that value is adopted as the set threshold.
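The CNCC measure and the Powell refinement of step 5 can be sketched as follows. The use of `scipy.optimize.minimize(method='Powell')` and the tolerance values are illustrative assumptions; the patent only specifies Powell's search method, not a particular library.

```python
import numpy as np
from scipy.optimize import minimize

def ncc(wa, wb):
    """Normalized cross-correlation of two equal-size windows."""
    a, b = wa - wa.mean(), wb - wb.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

def cncc(windows_a, windows_b):
    """Joint NCC: product of the per-window NCCs over the three
    salient-region windows (k = 1, 2, 3)."""
    return float(np.prod([ncc(a, b) for a, b in zip(windows_a, windows_b)]))

def refine(initial, cost):
    """Powell search around the initial [dx, dy, gamma, s] estimate.
    `cost` should return -CNCC for candidate parameters, so that
    minimization maximizes the joint correlation."""
    res = minimize(cost, np.asarray(initial, dtype=float), method='Powell',
                   options={'xtol': 1e-3, 'ftol': 1e-4})
    return res.x
```

A window correlated with itself gives NCC = 1, so CNCC of three identical window pairs is 1, the ideal registration score.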
To achieve a better registration effect, the image A to be registered and the reference image B in this embodiment are images from the same source.
In summary, the above are only preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (7)

1. the method for registering images based on visual salient region, is characterized in that, the method, for image A to be joined is mated with reference picture B, comprises the steps:
Step 1, treat and join image A and reference picture B carries out respectively visual salient region extraction and Threshold segmentation, the corresponding sequence of the first marking area and the sequence of the second marking area of obtaining;
Step 2, obtaining the barycenter of all the first marking areas, is the first barycenter, and every three the first barycenter are as a First Characteristic vertex of a triangle; Obtaining the barycenter of all the second marking areas, is the second barycenter, and every three the second barycenter mark a Second Characteristic triangle;
If First Characteristic triangle of step 3 is corresponding similar in affine transformation relationship with a Second Characteristic triangular basis, the two composition similar triangles pair; The similar triangles that find similarity maximum are to (a, b); Wherein a is First Characteristic triangle, and b is Second Characteristic triangle;
Step 4, based on (a, b) setting up the affine Transform Model to reference picture B conversion by image A to be joined, this affine Transform Model comprises translation, rotation and three kinds of mapping modes of change of scale, and the parameter relating to comprises horizontal translation amount and vertical translation amount, the anglec of rotation and scale parameter; Take the described parameter relating to as initial matching parameter;
Described scale parameter is the length of side ratio of a and b; The described anglec of rotation is the angle that the corresponding sides of a and b form; Described translational movement is the relative translation amount of a and b;
The searching method of step 5, employing Powell, start to set the search of step-length from initial matching parameter, obtain search value as test matching parameter, the test matching parameter that image A to be joined is obtained according to each stepping is carried out affined transformation, realize and the repeatedly registration test of reference picture B, repeatedly the registration of result the best is tested corresponding test matching parameter as optimum matching parameters in registration test described in selection;
Apply an affine transformation to the image to be registered with this optimal matching parameter to obtain the final registration result.
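Step 5's Powell search from the initial matching parameters can be sketched with `scipy.optimize.minimize(method='Powell')`. For brevity this toy version refines only the two translation parameters and uses 1 − NCC as the registration cost; the function and variable names are illustrative, not from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def refine_translation(img_a, img_b, init):
    """Powell direction-set search from the initial matching parameters,
    minimizing 1 - NCC (here over the two translation parameters only)."""
    def cost(p):
        dx, dy = p
        warped = nd_shift(img_a, (dy, dx), order=1, mode='constant')
        return 1.0 - ncc(warped, img_b)
    res = minimize(cost, x0=np.asarray(init, dtype=float),
                   method='Powell', options={'xtol': 1e-3, 'ftol': 1e-6})
    return res.x

# synthetic check: B is A shifted 3 px right and 2 px up
rng = np.random.default_rng(0)
A = gaussian_filter(rng.random((64, 64)), 2.0)  # smooth so the cost is well-behaved
B = nd_shift(A, (-2.0, 3.0), order=1, mode='constant')
dx, dy = refine_translation(A, B, init=(2.5, -1.5))
```

Because Powell's method is derivative-free, the cost only needs to be evaluable, which is why a resampling-based NCC cost works directly here.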
2. The image registration method based on visual salient regions as claimed in claim 1, characterized in that, for a similar-triangle pair (a', b') obtained in step 3, the similarity Likelihood of (a', b') is computed as:
(formula image FDA0000451386990000011)
where l_{a',3}, l_{a',2}, l_{a',1} are the three sides of a', and l_{b',3}, l_{b',2}, l_{b',1} are the three sides of b'.
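The exact Likelihood formula is given only as an image in the source. As an illustration, the sketch below scores triangle similarity from the three corresponding side-length ratios; this is a hypothetical stand-in with the same inputs (the three sides of each triangle), not the patent's formula. Equal ratios give a score of 1.0.

```python
import numpy as np

def side_lengths(tri):
    """Side lengths of a triangle given as three (x, y) vertices, longest first."""
    p = np.asarray(tri, dtype=float)
    d = [np.linalg.norm(p[i] - p[(i + 1) % 3]) for i in range(3)]
    return np.sort(d)[::-1]

def similarity(tri_a, tri_b):
    """Hypothetical side-ratio score: 1.0 for exactly similar triangles,
    smaller as the three corresponding side ratios diverge."""
    r = side_lengths(tri_a) / side_lengths(tri_b)
    return float(r.min() / r.max())

a = [(0, 0), (4, 0), (0, 3)]   # 3-4-5 right triangle
b = [(0, 0), (8, 0), (0, 6)]   # the same triangle scaled by 2
c = [(0, 0), (8, 0), (0, 3)]   # a differently shaped triangle
```

A pair whose score is maximal over all candidate triangle pairs would be retained as the seed match (a, b) of step 3.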
3. The image registration method based on visual salient regions as claimed in claim 1 or 2, characterized in that the affine transformation model in step 4 is:
(formula image FDA0000451386990000021)
where s is the scale parameter:
(formula image FDA0000451386990000022)
l_{a,3}, l_{a,2}, l_{a,1} are the three sides of a; l_{b,3}, l_{b,2}, l_{b,1} are the three sides of b;
γ is the rotation angle, whose value is γ_1 or γ_2, where
(formula image FDA0000451386990000023)
γ_2 = π − γ_1, 0 ≤ γ_1 ≤ π; γ_a is the inclination of the longest side of the first feature triangle a relative to the horizontal axis of the first image coordinate system, and γ_b is the inclination of the longest side of the second feature triangle b relative to the horizontal axis of the second image coordinate system;
Δx and Δy are the translation parameters: the horizontal translation amount Δx = x_a − x_b and the vertical translation amount Δy = y_a − y_b, where (x_a, y_a) is the coordinate of the centroid pixel in a and (x_b, y_b) is the coordinate of the centroid pixel in b; (x, y) is a point in the image and (x', y') is the coordinate of (x, y) after affine transformation with this model;
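The four parameters s, γ, Δx and Δy define a similarity-type affine transform. The model's exact matrix form appears only as an image in the source; the sketch below assumes the conventional scale-rotate-translate form with those same four parameters, and the sign convention of the rotation is likewise an assumption.

```python
import numpy as np

def apply_model(points, s, gamma, dx, dy):
    """Scale by s, rotate by gamma, then translate by (dx, dy).
    Standard similarity-transform form, assumed to match the patent's model."""
    pts = np.asarray(points, dtype=float)
    c, sn = np.cos(gamma), np.sin(gamma)
    R = np.array([[c, -sn],
                  [sn,  c]])          # counter-clockwise rotation
    return s * pts @ R.T + np.array([dx, dy])

# (1, 0) scaled by 2, rotated 90 degrees, shifted by (1, 1) -> (1, 3)
out = apply_model([(1.0, 0.0)], s=2.0, gamma=np.pi / 2, dx=1.0, dy=1.0)
```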
When determining the initial matching parameters in step 4, these comprise the initial horizontal translation amount, the initial vertical translation amount, the initial rotation angle and the initial scale parameter: take s as the initial scale parameter, Δx as the initial horizontal translation amount and Δy as the initial vertical translation amount. The initial rotation angle is determined as follows:
When γ takes the value γ_1, the model is the γ_1-type affine transformation model. Transform the first salient region located at the vertex opposite the shortest side of the first feature triangle a in the image A to be registered according to the γ_1-type model, and compute the second-order invariant moment φ_{a,2} and the third-order invariant moment φ_{a,3} of this transformed region; likewise compute the second-order invariant moment φ_{b,2} and the third-order invariant moment φ_{b,3} of the second salient region located at the vertex opposite the shortest side of the second feature triangle b in the reference image B. Then compute the γ_1-type divergence measure SSIM_1 = (φ_{a,2} − φ_{b,2})² + (φ_{a,3} − φ_{b,3})²;
When γ takes the value γ_2, the model is the γ_2-type affine transformation model; proceeding in the same way yields the γ_2-type divergence measure SSIM_2;
If SSIM_1 ≥ SSIM_2, set γ_m = γ_2; otherwise set γ_m = γ_1;
Take γ_m as the initial rotation angle.
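Claim 3 disambiguates the two candidate rotation angles by comparing second- and third-order invariant moments of the vertex regions. A minimal sketch in Python, assuming the invariants are Hu's φ2 and φ3 (the patent does not name them), applied to binary regions:

```python
import numpy as np

def phi23(region):
    """Second- and third-order moment invariants of a binary region,
    assumed here to be Hu's phi2 and phi3."""
    ys, xs = np.nonzero(region)
    m00 = float(len(xs))
    xbar, ybar = xs.mean(), ys.mean()
    def eta(p, q):  # normalized central moment
        mu = (((xs - xbar) ** p) * ((ys - ybar) ** q)).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    return phi2, phi3

def ssim_divergence(region_a, region_b):
    """SSIM_i = (phi_a2 - phi_b2)^2 + (phi_a3 - phi_b3)^2, as in claim 3."""
    a2, a3 = phi23(region_a)
    b2, b3 = phi23(region_b)
    return (a2 - b2) ** 2 + (a3 - b3) ** 2

rect = np.zeros((32, 32)); rect[8:20, 10:26] = 1     # 12 x 16 block
scaled = np.zeros((48, 48)); scaled[2:26, 3:35] = 1  # 24 x 32 block: same shape
elong = np.zeros((32, 32)); elong[14:18, 2:30] = 1   # 4 x 28 block: different shape
```

Because the η normalization makes φ2 and φ3 scale-invariant, the divergence is near zero for same-shape regions of different size; the candidate angle whose transformed region diverges less from the reference region is kept as the initial rotation angle.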
4. The image registration method based on visual salient regions as claimed in claim 3, characterized in that if SSIM_1 and SSIM_2 in step 4 are both greater than 1, no initial matching parameters are produced in step 4; instead return to step 1 and reset the segmentation threshold of the thresholding method.
5. The image registration method based on visual salient regions as claimed in claim 1, 2 or 4, characterized in that, when selecting in step 5 the test matching parameter of the best of the repeated registration tests as the optimal matching parameter, the best registration test among the repeated tests is determined as follows:
Step 501: for the result of affine-transforming the image A to be registered according to a test matching parameter, compute the normalized cross-correlation coefficient CNCC:
(formula image FDA0000451386990000031)
where w_{a,k}(x_i, y_j) is the pixel value of pixel (x_i, y_j) within the window given by the minimum bounding rectangle of the first salient region at the vertex corresponding to side l_{a,k} of the first feature triangle, with i and j ranging over all pixels of this window;
(symbol image FDA0000451386990000032) is the mean pixel value within this window; k = 1, 2, 3;
w_{b,k}(x_i, y_j) is the pixel value of pixel (x_i, y_j) in the reference image B, with i and j taking the same values as above, and the corresponding mean is the mean pixel value within that region;
Step 502: compare the CNCC value of the current search value with the CNCC value of the previous search value; when the two differ by less than a set threshold, take the current search value as the optimal matching parameter.
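Step 501's window-based coefficient can be sketched as follows. The exact combination of the three vertex-window terms is given only as an image in the source, so this sketch simply averages the three windowed normalized cross-correlation values; that averaging, and the window coordinates, are assumptions for illustration.

```python
import numpy as np

def windowed_ncc(wa, wb):
    """Zero-mean normalized cross-correlation of two equal-size windows."""
    a = wa - wa.mean()
    b = wb - wb.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def cncc(img_a, img_b, windows):
    """Combined coefficient over the k = 1, 2, 3 vertex windows, taken here
    as the mean of the three windowed coefficients (an assumption)."""
    vals = [windowed_ncc(img_a[r0:r1, c0:c1], img_b[r0:r1, c0:c1])
            for (r0, r1, c0, c1) in windows]
    return sum(vals) / len(vals)

rng = np.random.default_rng(1)
img = rng.random((50, 50))
wins = [(0, 10, 0, 10), (20, 30, 20, 30), (35, 45, 5, 15)]  # three vertex windows
perfect = cncc(img, img.copy(), wins)          # identical images
noisy = cncc(img, rng.random((50, 50)), wins)  # unrelated images
```

Restricting the correlation to the three salient-region windows rather than the whole frame is what keeps the per-iteration cost of the Powell search low.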
6. The image registration method based on visual salient regions as claimed in claim 1, 2 or 4, characterized in that the image A to be registered and the reference image B are images from the same source.
7. The image registration method based on visual salient regions as claimed in claim 1, 2 or 4, characterized in that the visual saliency model is the pulse cosine transform saliency model.
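Claim 7 names the pulse cosine transform (PCT) saliency model. A compact sketch of the usual PCT recipe (keep only the signs of the DCT coefficients, invert, square, smooth) follows; the smoothing scale, normalization, and DC handling are illustrative choices, not taken from the patent.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def pct_saliency(gray):
    """Pulse cosine transform saliency map: keep only the signs ('pulses')
    of the DCT coefficients, invert, square, and smooth."""
    C = dctn(np.asarray(gray, dtype=float), norm='ortho')
    F = idctn(np.sign(C), norm='ortho')   # reconstruction from coefficient signs
    sal = gaussian_filter(F * F, sigma=3.0)
    return sal / sal.max()                # scale the map to [0, 1]

# a bright square on a flat background
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sal = pct_saliency(img)
```

Thresholding such a map and labeling its connected components would yield the salient-region sequences of step 1.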
CN201310752016.3A 2013-12-31 2013-12-31 Fast image registration method based on visual salient regions Active CN103778626B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310752016.3A CN103778626B (en) Fast image registration method based on visual salient regions

Publications (2)

Publication Number Publication Date
CN103778626A true CN103778626A (en) 2014-05-07
CN103778626B CN103778626B (en) 2016-09-07

Family

ID=50570823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310752016.3A Active CN103778626B (en) 2013-12-31 2013-12-31 Fast image registration method based on visual salient regions

Country Status (1)

Country Link
CN (1) CN103778626B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070237426A1 (en) * 2006-04-04 2007-10-11 Microsoft Corporation Generating search results based on duplicate image detection
CN102136142A (en) * 2011-03-16 2011-07-27 Inner Mongolia University of Science and Technology Non-rigid medical image registration method based on adaptive triangular meshes
CN103247038A (en) * 2013-04-12 2013-08-14 University of Science and Technology Beijing Overall image information synthesis method driven by a visual cognition model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIAN ZHENG, ET AL.: "Salient Feature Region: A New Method for Retinal Image Registration", IEEE Transactions on Information Technology in Biomedicine *
CHEN Guangju et al.: "Fast image registration method based on local salient features", Application Research of Computers *
CHEN Shuo et al.: "Fast scene registration method based on visual saliency features", Journal of Image and Graphics *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955678A (en) * 2014-05-13 2014-07-30 Shenzhen Coship Electronics Co., Ltd. Image recognition method and device
CN104574372A (en) * 2014-12-21 2015-04-29 Tianjin University Image registration method based on similar feature triangles
CN104766323A (en) * 2015-04-07 2015-07-08 Beihang University Point matching method for remote sensing images
CN104766323B (en) * 2015-04-07 2018-03-06 Beihang University Point matching method for remote sensing images
WO2017067127A1 (en) * 2015-10-19 2017-04-27 Shanghai United Imaging Healthcare Co., Ltd. System and method for image registration in medical imaging system
US10043280B2 (en) 2015-10-19 2018-08-07 Shanghai United Imaging Healthcare Co., Ltd. Method and system for image segmentation
US9760983B2 (en) 2015-10-19 2017-09-12 Shanghai United Imaging Healthcare Co., Ltd. System and method for image registration in medical imaging system
GB2549618A (en) * 2015-10-19 2017-10-25 Shanghai United Imaging Healthcare Co Ltd System and method for image registration in medical imaging system
GB2549618B (en) * 2015-10-19 2020-07-01 Shanghai United Imaging Healthcare Co Ltd System and method for image registration in medical imaging system
US10275879B2 (en) 2015-10-19 2019-04-30 Shanghai United Imaging Healthcare Co., Ltd. System and method for image registration in medical imaging system
CN105869145B (en) * 2016-03-22 2018-12-14 Wuhan Institute of Technology Multistep registration method for nuclear magnetic resonance images based on k-t acceleration
CN105869145A (en) * 2016-03-22 2016-08-17 Wuhan Institute of Technology Multistep registration method for nuclear magnetic resonance images based on k-t acceleration
CN106815832A (en) * 2016-12-20 2017-06-09 Huazhong University of Science and Technology Automatic steel stencil image registration method and system for surface mount technology
CN106815832B (en) * 2016-12-20 2019-05-21 Huazhong University of Science and Technology Automatic steel stencil image registration method and system for surface mount technology
CN107993258A (en) * 2017-11-23 2018-05-04 Zhejiang Dahua Technology Co., Ltd. Image registration method and device
CN107993258B (en) * 2017-11-23 2021-02-02 Zhejiang Dahua Technology Co., Ltd. Image registration method and device
CN110503678A (en) * 2019-08-28 2019-11-26 Xu Yansheng Infrared and optical heterologous registration method for navigation equipment based on topological structure constraints
CN112085709A (en) * 2020-08-19 2020-12-15 Zhejiang Huaray Technology Co., Ltd. Image comparison method and device
CN112085709B (en) * 2020-08-19 2024-03-22 Zhejiang Huaray Technology Co., Ltd. Image comparison method and device

Also Published As

Publication number Publication date
CN103778626B (en) 2016-09-07

Similar Documents

Publication Publication Date Title
CN103778626B (en) Fast image registration method based on visual salient regions
Fan et al. Pothole detection based on disparity transformation and road surface modeling
US11244197B2 (en) Fast and robust multimodal remote sensing image matching method and system
Cohen et al. Discovering and exploiting 3d symmetries in structure from motion
CN104200461B (en) Remote sensing image registration method based on mutual-information block selection and SIFT features
CN109544599B (en) Three-dimensional point cloud registration method based on camera pose estimation
CN105205858A (en) Indoor scene three-dimensional reconstruction method based on single depth vision sensor
CN102129573A (en) SAR (Synthetic Aperture Radar) image segmentation method based on dictionary learning and sparse representation
US8571303B2 (en) Stereo matching processing system, stereo matching processing method and recording medium
US20110153206A1 (en) Systems and methods for matching scenes using mutual relations between features
CN103198475B (en) All-focus synthetic aperture perspective imaging method based on multilevel iterative visualization optimization
CN102982334B (en) Sparse disparity acquisition method based on target edge features and gray-level similarity
CN104835144A (en) Solving camera intrinsic parameters using the image of a sphere center and orthogonality
US20150199573A1 (en) Global Scene Descriptors for Matching Manhattan Scenes using Edge Maps Associated with Vanishing Points
CN104834931A (en) Improved SIFT algorithm based on wavelet transformation
CN104408711A (en) Multi-scale region fusion-based salient region detection method
CN106340010A (en) Corner detection method based on second-order contour difference
CN108053445A (en) RGB-D camera motion estimation method based on feature fusion
CN104318559A (en) Fast feature point detection method for video image matching
CN107025647A (en) Distorted image forensics method and device
CN104794476B (en) Method for extracting personnel trajectories
O'Byrne et al. A comparison of image based 3D recovery methods for underwater inspections
Chetouani A 3D mesh quality metric based on features fusion
Wendt A concept for feature based data registration by simultaneous consideration of laser scanner data and photogrammetric images
CN109242854A (en) Image saliency detection method based on FLIC superpixel segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant