CN101226636B - Method for matching image of rigid body transformation relation - Google Patents

Method for matching image of rigid body transformation relation

Info

Publication number
CN101226636B
CN101226636B CN2008100574640A CN200810057464A
Authority
CN
China
Prior art keywords
image
template
weights
template block
quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008100574640A
Other languages
Chinese (zh)
Other versions
CN101226636A (en)
Inventor
曾庆业
王杰
唐娉
马建伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Remote Sensing Applications of CAS
Original Assignee
Institute of Remote Sensing Applications of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Remote Sensing Applications of CAS filed Critical Institute of Remote Sensing Applications of CAS
Priority to CN2008100574640A priority Critical patent/CN101226636B/en
Publication of CN101226636A publication Critical patent/CN101226636A/en
Application granted granted Critical
Publication of CN101226636B publication Critical patent/CN101226636B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed is an image matching method for images related by a rigid body transformation. The method comprises the following steps: regularly selecting a plurality of small template blocks (or feature points) at fixed positions in the current image; judging their quality and rejecting unsuitable template blocks (or assigning weights by quality); finding the best-matching position of each one in the previous image; assigning a weight to each matched coordinate-point pair according to known geometric constraints; and then fitting the rigid body transformation relating the current image to the previous image. The method effectively improves the robustness of the matching result while adding only a small amount of computation. Applied to image matching in real-time wide field-of-view ultrasound imaging, it yields high-quality imaging results.

Description

A matching method for images related by a rigid body transformation relation
Technical field: The present invention relates to image processing technology, and specifically to techniques for matching images that satisfy a rigid body transformation relation.
Background technology: In some image matching applications, the transformation between adjacent images satisfies a rigid body transformation relation. In ultrasound scanning diagnosis, the probe moves at a roughly uniform speed, the displacement between two adjacent frames is small, and the deformation at the target location is slight. When a wide field-of-view image is generated from the image sequence produced during scanning, the error accumulation effect of the matching-and-stitching process must be considered, so a rigid body transformation model is generally used in the matching; that is, adjacent frames are assumed to be related only by rotation and translation. In such applications, applying an affine or polynomial model to two adjacent images can yield a better fit between them, but as frame after frame is matched, the accumulation of distortion and error makes later images increasingly distorted, which then requires a global optimization method. In applications that match and stitch images in real time, however, global optimization cannot readily be used. One solution is to use the rigid body transformation model when matching two adjacent images.
The rigid body transformation model of an image referred to here is a model containing only rotation and translation; that is, the transformation of the current image relative to the previous image is expressed as:
$$\begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} X' \\ Y' \end{bmatrix} + \begin{bmatrix} X'_0 \\ Y'_0 \end{bmatrix}$$
where (X', Y') denotes a coordinate in the current image, (X, Y) denotes the corresponding coordinate in the previous image, θ is the rotation angle, and (X'_0, Y'_0) is the translation. This transformation model has an important property: the relative positional relations between points of the current image are not changed by the transformation; for example, the distance between any two points remains the same.
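To make this distance-preserving property concrete, the following is a minimal Python sketch (not part of the patent; the point coordinates and transform parameters are illustrative assumptions) that applies a rotation-plus-translation to two points and checks that the distance between them is unchanged.

```python
# Minimal sketch: a rigid transform (rotation + translation) preserves distances.
import math

def rigid_transform(point, theta, tx, ty):
    """Apply X = R(theta) * X' + T to a point (x, y) of the current image."""
    x, y = point
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

a, b = (10.0, 5.0), (40.0, 25.0)                 # two points in the current image
theta, tx, ty = math.radians(3.0), 12.0, -7.0    # illustrative rotation and translation
A, B = rigid_transform(a, theta, tx, ty), rigid_transform(b, theta, tx, ty)
print(round(dist(a, b), 6), round(dist(A, B), 6))   # both printed distances are equal
```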
Image matching methods follow two main approaches, template-based matching and feature-based matching, each with its own advantages. Both aim to obtain the coordinate correspondences of a number of "points" between two adjacent images. These point correspondences are then used, under a chosen model, to compute the transformation between the two whole images. When computing the overall correspondence from multiple sample points, least squares fitting is commonly used. A drawback of least squares is that when an individual sample point deviates far from the whole, it pulls the overall fit off course, and in image matching a small number of templates (or feature points) are often mismatched. Fig. 1 illustrates fitting a straight line by least squares: 1 is the expected result and 2 is the fitted result, where a single point far from the rest causes a large deviation in the result. The usual remedies are iterative methods such as M-estimation and random sample consensus. On the one hand, these methods are computationally expensive, which limits their use in fields that require real-time computation; on the other hand, they are general-purpose methods applicable under any model and do not exploit the characteristics of the rigid body transformation model to improve robustness.
Summary of the invention: The present invention takes full account of the characteristics of the correspondence between two adjacent images under the rigid body transformation model, namely that the relative positional relations of point coordinates in an image remain unchanged under a rigid body transformation, to improve the robustness of the matching result without significantly increasing the amount of computation. The invention has a low implementation cost and low computational complexity, so the method can be used in fields that require low computation and high robustness, such as real-time wide field-of-view imaging in ultrasound devices.
The basic idea of the invention is as follows: regularly select suitable templates (or feature points) in the current image and find the corresponding matches in the previous image, obtaining position correspondences; then use the geometric constraints among the selected templates (or feature points) to judge how far each matched position deviates; assign a different weight to each corresponding "point" pair according to its deviation; and fit the transformation of the current image relative to the previous image from these weighted position correspondences.
The technical scheme that realizes this idea exploits the characteristics of the rigid body transformation to provide a robust image matching method that adds only a relatively small amount of computation. The concrete steps are as follows:
A. Regularly select a plurality of small template blocks (or feature points) at fixed positions in the current image, and reject those of poor quality (or assign weights by quality);
B. For each valid template block (or feature point), find the best-matching position in the previous image;
C. According to known geometric constraints, assign a weight to the matching result of each valid template block (or feature point);
D. From the position correspondences and weights, fit the transformation of the current image relative to the previous image.
The method is characterized in that the template blocks (or feature points) in step A are selected regularly at fixed positions. The geometric constraint in step C specifically means: in the matching result, the distance between the matched position of the current template and the matched position of another template in the same row or column of the image should deviate little from the distance these templates are supposed to have. In other words, the positional relations given in step A serve as geometric constraints, so that "point" pairs whose matching results deviate far from the true positional relations carry less weight in the fitting, which guarantees the robustness of the computed rigid body transformation between the images. With this scheme, the robustness of the matching result can be effectively improved while adding only a relatively small amount of computation.
Description of drawings: Fig. 1 is a schematic diagram of the deviation that occurs when fitting a straight line by least squares.
Fig. 2 is a schematic diagram of one way of selecting templates when template matching is used.
Fig. 3 is a schematic flowchart of a specific embodiment.
Embodiment: An embodiment of the present invention is now described with reference to the accompanying drawings.
Template matching is used here as a concrete example to explain the implementation of the invention in detail.
When matching two images, the template matching process is as follows: first select a number of small template blocks on the current image, search for each of them on the previous image to find its best-matching position, and then compute the transformation between the two images from the coordinates of these small template blocks and the coordinates of their matched positions. The flow of an embodiment of the invention is shown in Fig. 3; the concrete steps are as follows:
A. Regularly select a plurality of small template blocks in the region to be matched in the current image, judge the quality of each template block, and reject blocks of poor quality (or assign weights by quality);
B. Search the previous image for each valid template block and find its best-matching position;
C. According to known geometric constraints, assign a weight to the matching result of each valid template;
D. From the position correspondences and weights of the valid template blocks, fit the transformation of the current image relative to the previous image.
The selection of matching template blocks in step A of this embodiment is shown in Fig. 2, where 3 denotes the current image and 4 denotes the small image blocks selected on it. The positions and number of selected blocks are determined by the actual situation; they may (but need not) be taken as several equally spaced rows and columns in the region to be matched, which is convenient for computation and gives better stability. These small image blocks are generally rectangular, but other shapes may be used. In this embodiment the width and height of each template block may be, for example, 16×16, 32×32, 48×48, or 64×64 pixels.
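As an illustration of selecting blocks on a regular grid, here is a minimal Python sketch; the grid dimensions, block size, and the select_template_blocks interface are assumptions for illustration and are not prescribed by the patent.

```python
# Minimal sketch: select small template blocks on a regular grid of rows and
# columns inside the region to be matched of the current image.
import numpy as np

def select_template_blocks(image, region, rows=3, cols=5, block_size=32):
    """Return (top, left, block) tuples for equally spaced blocks in `region`,
    where `region` is (top, left, height, width) in pixel coordinates."""
    r0, c0, h, w = region
    blocks = []
    for i in range(rows):
        for j in range(cols):
            top = r0 + int((i + 0.5) * h / rows) - block_size // 2
            left = c0 + int((j + 0.5) * w / cols) - block_size // 2
            block = image[top:top + block_size, left:left + block_size]
            if block.shape == (block_size, block_size):
                blocks.append((top, left, block))
    return blocks

# Example: a 3x5 grid of 32x32 blocks inside a 480x640 current image
current = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
blocks = select_template_blocks(current, region=(40, 40, 400, 560))
print(len(blocks))   # 15 candidate template blocks
```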
Each selected template block undergoes a quality check; blocks of poor quality (for example, blocks that are too uniform, or whose pixels are all close to black or all close to white) are rejected in advance to reduce mismatches and improve matching robustness. Alternatively, poor-quality blocks may be assigned small weights and good-quality blocks large weights, in which case the weight used in the overall fitting is the product of the quality weight and the matching-result weight of step C.
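A possible quality check is sketched below in Python; the thresholds and the contrast-based quality weight are illustrative assumptions, since the patent only requires rejecting (or down-weighting) blocks that are too uniform or nearly all black or white.

```python
# Minimal sketch: reject template blocks that are too uniform or almost
# entirely black/white, which are prone to mismatches.
import numpy as np

def block_quality_ok(block, min_std=8.0, dark=16, bright=239, frac=0.95):
    """Return False for blocks that are too uniform or nearly all black/white."""
    b = block.astype(np.float64)
    if b.std() < min_std:            # too uniform
        return False
    if (b <= dark).mean() > frac:    # nearly all black
        return False
    if (b >= bright).mean() > frac:  # nearly all white
        return False
    return True

# Alternative: assign a quality weight that grows with the block's contrast
# instead of rejecting outright (an illustrative choice of weighting).
def quality_weight(block, scale=20.0):
    return 1.0 - np.exp(-block.astype(np.float64).std() / scale)
```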
For each selected valid template block, search near its position in the previous image for the matching position. In this embodiment, the sum of absolute differences (SAD) between the pixel values of the current template and the corresponding positions in the previous image is computed; the sum of absolute differences is expressed as
$$\mathrm{SAD} = \sum_i \left| f_i - g_i \right|$$
where f_i denotes the gray value of the i-th pixel of the current template and g_i denotes the gray value of the pixel at the corresponding position in the previous image. Over the whole search range, the position in the previous image with the smallest SAD value is taken as the best-matching position of the current template. Besides the method used in this embodiment, correlation-coefficient matching may also (but need not) be used, with essentially the same result.
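The SAD search described above could be implemented as in the following Python sketch; the search radius and function interfaces are illustrative assumptions.

```python
# Minimal sketch: exhaustive SAD search for one template block in a
# neighborhood of its position in the previous image.
import numpy as np

def sad(template, patch):
    """Sum of absolute differences between two equally sized gray patches."""
    return np.abs(template.astype(np.int32) - patch.astype(np.int32)).sum()

def match_template(template, prev_image, top, left, radius=16):
    """Return the (top, left) in `prev_image` with the smallest SAD, searching
    +/- `radius` pixels around the template's position in the current image."""
    th, tw = template.shape
    best, best_pos = None, (top, left)
    for dt in range(-radius, radius + 1):
        for dl in range(-radius, radius + 1):
            t, l = top + dt, left + dl
            if t < 0 or l < 0 or t + th > prev_image.shape[0] or l + tw > prev_image.shape[1]:
                continue
            s = sad(template, prev_image[t:t + th, l:l + tw])
            if best is None or s < best:
                best, best_pos = s, (t, l)
    return best_pos, best
```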
Each valid template block thus finds, in the previous image, a position regarded as its match. The coordinates of each valid template in the current image, together with the coordinates of its matched position in the previous image, form a corresponding point pair. The transformation of the current image relative to the previous image is fitted from these point pairs. The relation between the two images satisfies the rigid body transformation model, so the coordinate transformation of each point pair is the same as the transformation of the whole image and is expressed as:
$$\begin{bmatrix} X_i \\ Y_i \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} X'_i \\ Y'_i \end{bmatrix} + \begin{bmatrix} X'_0 \\ Y'_0 \end{bmatrix}$$
where (X'_i, Y'_i) denotes the coordinates of a valid template in the current image, (X_i, Y_i) denotes the coordinates of its matched position in the previous image, θ is the rotation angle, and (X'_0, Y'_0) is the translation. The final goal is to obtain the rotation angle and the translation from the known coordinate pairs.
The matching results of the valid templates sometimes contain a few erroneous matches, which reduce the robustness of the fitting result. The present invention uses the geometric relations that are known when the valid templates are selected to assign a weight to the matching result of each valid template, and each matched point pair participates in the fitting according to its weight. In this embodiment, the distance between any two valid templates is known. Suppose the positions of two valid templates in the current image are a and b, and their correct matched positions in the previous image are A and B. Because the two images are related by a rigid body transformation, the distance between A and B should equal the distance between a and b; the farther the actual distance departs from the expected one, the more likely the match is wrong, and the smaller its weight in the fitting should be. In this embodiment the weight of each point pair is (but need not be) defined as a sum of Gaussian functions:
$$W_I = \sum_J e^{-\bigl(\mathrm{diff\_dist}(I, J)\bigr)^2 / \sigma^2}$$
where the points J in the summation are the points in the same row or the same column as I, diff_dist(I, J) is the difference between the distance from I to J and the distance between their corresponding valid templates i and j, and σ determines the width of the Gaussian and adjusts how strongly the weights are differentiated. The weight calculation does not require high precision; it only needs to distinguish the importance of each small template block's search result, so the computation can be accelerated by discretization and a precomputed lookup table.
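The weight computation could look like the following Python sketch; the value of σ and the row/column bookkeeping are illustrative assumptions.

```python
# Minimal sketch: assign each matched point pair a weight equal to a sum of
# Gaussians of its distance deviations with respect to the other templates in
# the same row or column, following the formula above.
import math

def pair_weights(template_pos, matched_pos, rows, cols, sigma=4.0):
    """template_pos / matched_pos: lists of (x, y); rows / cols: row and
    column index of each template, used to find 'same row or column' points."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    weights = []
    for I in range(len(template_pos)):
        w = 0.0
        for J in range(len(template_pos)):
            if J == I or (rows[J] != rows[I] and cols[J] != cols[I]):
                continue
            diff = dist(matched_pos[I], matched_pos[J]) - dist(template_pos[I], template_pos[J])
            w += math.exp(-(diff ** 2) / sigma ** 2)
        weights.append(w)
    return weights
```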
Using the point pairs described above and the weight of each matched point pair, weighted least squares is used to fit the rigid body transformation between the current image and the previous image.
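One standard closed-form way to carry out this weighted least-squares fit of a 2-D rigid transform is sketched below in Python; the patent only states that weighted least squares is used, so treat this particular derivation as an illustrative implementation.

```python
# Minimal sketch: fit the rotation angle and translation of the rigid transform
# from weighted point pairs, mapping current-image points to previous-image points.
import math

def fit_rigid_weighted(current_pts, previous_pts, weights):
    """Return (theta, tx, ty) minimizing sum_i w_i * |R(theta)*p_i + T - q_i|^2,
    where p_i are points in the current image and q_i their matches in the
    previous image."""
    wsum = sum(weights)
    pcx = sum(w * p[0] for w, p in zip(weights, current_pts)) / wsum
    pcy = sum(w * p[1] for w, p in zip(weights, current_pts)) / wsum
    qcx = sum(w * q[0] for w, q in zip(weights, previous_pts)) / wsum
    qcy = sum(w * q[1] for w, q in zip(weights, previous_pts)) / wsum

    num = den = 0.0
    for w, p, q in zip(weights, current_pts, previous_pts):
        px, py = p[0] - pcx, p[1] - pcy      # centered current-image point
        qx, qy = q[0] - qcx, q[1] - qcy      # centered previous-image point
        num += w * (px * qy - py * qx)
        den += w * (px * qx + py * qy)
    theta = math.atan2(num, den)

    tx = qcx - (math.cos(theta) * pcx - math.sin(theta) * pcy)
    ty = qcy - (math.sin(theta) * pcx + math.cos(theta) * pcy)
    return theta, tx, ty
```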
An embodiment of the invention was implemented on a PC platform. Experiments verified that, at the cost of only a small amount of additional computation, the robustness of the matching result is effectively improved. The method was also applied to image matching in real-time wide field-of-view ultrasound imaging and produced high-quality imaging results.

Claims (2)

1. An image matching method applied to the rigid body transformation relation model, comprising the steps of:
A. selecting template blocks in several equally spaced rows and columns in the region to be matched in the current image; after the template blocks are selected, judging their quality and rejecting template blocks that are too uniform or whose pixels are all close to black or all close to white;
B. for each valid template block, finding the best-matching position in the previous image;
C. according to known geometric constraints, assigning a weight to the matching result of each valid template block;
D. fitting the transformation of the current image relative to the previous image from the position correspondences and weights; characterized in that:
the valid template blocks in steps B and C are the template blocks retained after the quality judgment in step A;
in step C, the positional relations given in step A serve as geometric constraints, so that "point" pairs whose matching results deviate far from the true positional relations carry less weight in the fitting, which guarantees the robustness of the computed rigid body transformation between the images; the geometric constraint specifically refers to the following: for two template blocks in the same row or the same column of the current frame image, compute the correct matched positions of these two template blocks in the previous frame; the deviation between the distance of these matched positions and the distance between the two template blocks in the current frame is the geometric constraint.
2. The image matching method according to claim 1, characterized in that:
after the template blocks are selected in step A, a quality judgment is performed and each template block is assigned a weight according to its quality; the weight used in step D is the product of the quality weight and the matching-result weight of step C.
CN2008100574640A 2008-02-02 2008-02-02 Method for matching image of rigid body transformation relation Expired - Fee Related CN101226636B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100574640A CN101226636B (en) 2008-02-02 2008-02-02 Method for matching image of rigid body transformation relation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008100574640A CN101226636B (en) 2008-02-02 2008-02-02 Method for matching image of rigid body transformation relation

Publications (2)

Publication Number Publication Date
CN101226636A CN101226636A (en) 2008-07-23
CN101226636B true CN101226636B (en) 2010-06-02

Family

ID=39858616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100574640A Expired - Fee Related CN101226636B (en) 2008-02-02 2008-02-02 Method for matching image of rigid body transformation relation

Country Status (1)

Country Link
CN (1) CN101226636B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5311465B2 (en) * 2008-11-25 2013-10-09 Necシステムテクノロジー株式会社 Stereo matching processing system, stereo matching processing method, and program
CN101819680B (en) * 2010-05-12 2011-08-31 上海交通大学 Detection method of picture matching point pair
CN102194030B (en) * 2011-05-19 2012-11-07 南京医科大学附属口腔医院 Implant denture individual abutment design method based on healing abutment dental model
CN109166149B (en) * 2018-08-13 2021-04-02 武汉大学 Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1950849A (en) * 2004-05-06 2007-04-18 Koninklijke Philips Electronics N.V. Pharmacokinetic image registration

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1950849A (en) * 2004-05-06 2007-04-18 Koninklijke Philips Electronics N.V. Pharmacokinetic image registration

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wang Jun et al. Research progress of image matching algorithms. Journal of Atmospheric and Environmental Optics, 2007, 2(1): 11-15. *
Shao Bin et al. Image registration in a PC-based real-time ultrasound panoramic imaging system. Computer Engineering and Applications, 2007, 43(28): 206-209. *

Also Published As

Publication number Publication date
CN101226636A (en) 2008-07-23

Similar Documents

Publication Publication Date Title
Garnett et al. 3d-lanenet: end-to-end 3d multiple lane detection
CN107909640B (en) Face relighting method and device based on deep learning
CN102034101B (en) Method for quickly positioning circular mark in PCB visual detection
CN103065323B (en) Subsection space aligning method based on homography transformational matrix
CN101226636B (en) Method for matching image of rigid body transformation relation
CN105091782A (en) Multilane laser light plane calibration method based on binocular vision
CN104021547A (en) Three dimensional matching method for lung CT
US20230134125A1 (en) Alignment validation in vehicle-based sensors
CN108573242A (en) A kind of method for detecting lane lines and device
CN104463845A (en) Method and system for selecting registration points of line heating features
Yang et al. Probabilistic multi-view fusion of active stereo depth maps for robotic bin-picking
Fanani et al. Multimodal scale estimation for monocular visual odometry
CN103868473B (en) A kind of high light body surface phase place quick recovery method based on recurrence method
CN103198465A (en) Rotation error correcting method of CT (Computerized Tomography) scanned images
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN104359417A (en) Elliptical speckle generation method for large-viewing-field large-dip-angle measurement
CN114693659B (en) Copper pipe surface cleaning effect evaluation method and system based on image processing
CN101254120B (en) Real time ultrasonic wild eyeshot imaging method
CN101334894A (en) Video camera parameter calibration method by adopting single circle as marker
CN115187612A (en) Plane area measuring method, device and system based on machine vision
CN102789644B (en) Novel camera calibration method based on two crossed straight lines
CN105046691A (en) Method for camera self-calibration based on orthogonal vanishing points
CN102637094A (en) Correction information calculation method and system applied to optical touch device
CN104200460A (en) Image registration method based on images characteristics and mutual information
CN109445229B (en) Method for obtaining focal length of zoom camera with first-order radial distortion

Legal Events

Date Code Title Description
C57 Notification of unclear or unknown address
DD01 Delivery of document by public notice

Addressee: Tang Pin

Document name: Notification of Passing Preliminary Examination of the Application for Invention

C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100602

Termination date: 20110202