CN103512559A - Shot monocular video pose measurement method and target pattern - Google Patents


Publication number
CN103512559A
CN103512559A (application CN201310464872.9A)
Authority
CN
China
Prior art keywords: bullet, subregion, target pattern, alpha, coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310464872.9A
Other languages
Chinese (zh)
Other versions
CN103512559B (en)
Inventor
谌德荣
李蒙
王长元
宫久路
周广铭
蒋玉萍
高翔霄
杨晓乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Beijing Institute of Astronautical Systems Engineering
Original Assignee
Beijing Institute of Technology BIT
Beijing Institute of Astronautical Systems Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT and Beijing Institute of Astronautical Systems Engineering
Priority to CN201310464872.9A
Publication of CN103512559A
Application granted
Publication of CN103512559B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
    • G01C11/36: Videogrammetry, i.e. electronic processing of video signals from a single source or from different sources to give parallax or range information

Abstract

The invention relates to a target pattern for a projectile and a monocular-video pose measurement method that uses the pattern. The target pattern is a belt-shaped region around the outer circumferential surface of the projectile, divided equally along the projectile axis into 6 subregions whose colors alternate; a code region is arranged inside each subregion, and the 4 vertices of each subregion serve as feature points. When the target pattern is used for pose measurement, the projectile must lie within the field of view of a camera. A unique three-bit code is assigned to each subregion; subregions are identified from their codes, and feature points are identified from their positions within the subregions. To compute the projectile pose, the image coordinates of the feature points in the camera image are determined first; then, using the known coordinates of the corresponding feature points in the projectile coordinate system, a system of equations is established through coordinate transformation, yielding the projectile pose. The method is simple and has a wide range of application.

Description

Projectile monocular-video pose measurement method and target pattern
Technical field
The present invention relates to a target pattern for a flying projectile, and to a monocular-video pose measurement method for a projectile that uses this target pattern.
Background technology
During the development and testing of shells and projectiles, the pose parameters of the projectile, such as its landing attitude, are often required. However, a projectile usually carries no precise pose-measurement equipment, so measuring the projectile's pose is a difficult problem.
" determining the axis collimation method of the extraterrestrial target angle of pitch and crab angle with light altimetric image " (National University of Defense technology's journal of Yu Qifeng etc., 22 volumes (2) in 2000: 15-19 page), axis collimation method has been proposed, the method is first utilized the equation of bullet contour feature matching bullet, according to equation, obtain again the angle of pitch and the crab angle of bullet, but the method cannot be obtained the roll angle of bullet, due to what adopt, be binocular video measurement scheme simultaneously, measurement range is less.Conventionally, because bullet smooth surface, texture are single, the six-dimensional pose parameter that the bullet edge that video camera is taken can not provide enough characteristic informations to resolve bullet, thereby need to be at bullet surface design target pattern.Target pattern designs in the plane conventionally at present, for example, in the scheme that document " the no-manned plane three-dimensional position and orientation estimation method based on plane target following " proposes, designed on the ground the target pattern of a plurality of black and white block features, realize unmanned plane independent landing, (referring to: Iv ' an F, 3D pose estimation based on planar object tracking for UAVs control2010IEEE International Conference on Robotics and Automation, Alaska, USA:IEEE, May, 2010:35-41)." the object pose method of measuring based on monocular vision " of Zhao Rui, employing luminophor cursor has been proposed, utilize position and orientation measurement that single camera shooting realized Three dimensional Targets (referring to the object pose method of measuring based on monocular vision, HeFei University of Technology's academic dissertation, 2005).And the target design on relevant bullet surface and relevant measuring method have not been reported.
The design of the target pattern strongly affects the accuracy of projectile pose measurement. Designing a target pattern on the curved surface of a projectile has three main difficulties: first, the projectile is a body of revolution, so it must be guaranteed that the camera can always photograph enough feature points of the target pattern as the projectile rotates; second, the projectile body is a three-dimensional curved surface, yet the target pattern must provide feature points satisfying certain constraints (such as coplanarity); third, the target pattern must adapt to the projectile's shape.
"Projectile" in the present invention refers to the common projectile shape: roughly conical at the tip, roughly cylindrical in the middle and lower part, a body of revolution overall, and without fins at the mid-body. This structure is the most common among projectiles.
For a monocular pose-measurement system, according to Hu Zhanyi et al., "Some discussions on the P4P problem" (see Acta Automatica Sinica, 2001, 27(6): 770-776), if the image coordinates of 4 coplanar feature points on the measured target are known, a unique solution of the object pose parameters can be obtained. It must therefore be guaranteed that the camera can always photograph 4 coplanar feature points of the target pattern on the projectile.
Summary of the invention
To overcome the inability of existing planar targets to fit the curved surface of a projectile, and to improve the accuracy of projectile pose measurement, the object of the present invention is to design a target pattern on the projectile surface that always presents 4 or more coplanar feature points in the camera's field of view, and to propose a corresponding projectile monocular-video pose measurement method.
The technical solution adopted by the present invention is:
A target pattern, and a projectile monocular-video pose measurement method using this pattern.
The target pattern on the projectile is a belt-shaped region around the outer circumferential surface of the projectile, divided into 6 subregions along the projectile axis. The subregions are set in two colors, with adjacent subregions differing in color; both colors contrast strongly with the projectile surface color.
The sides of each subregion are coplanar with the projectile axis, and the top and bottom edges of each subregion lie on circles of cross-sections perpendicular to the projectile axis.
Inside each subregion, regions of comparatively small area are set as code regions; the color of a code region contrasts strongly with the color of its subregion. A code region may be square, circular, or any other easily identified shape. Subregions of the same color contain different numbers of code regions. The 4 vertices of each subregion serve as the feature points of the target pattern.
A projectile monocular-video pose measurement method using the described target pattern comprises the following steps: a camera is mounted on an adjustable base, facing the projectile bearing the target pattern, such that the projectile lies within the camera's field of view, the angle between the camera optical axis and the projectile axis lies in the range 30°-150°, and the camera can photograph the side of the projectile.
The camera resolution is at least 640 x 480 pixels, and its field of view is 25°-45° (corresponding to an equivalent focal length of 43.5 mm to 81.2 mm).
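The quoted field-of-view and equivalent-focal-length figures are consistent under the standard 35 mm-equivalent relation, assuming the field of view is measured across a 36 mm horizontal full-frame reference width (an assumption, not stated in the patent); a quick check:

```python
import math

def equiv_focal_length_mm(fov_deg, sensor_width_mm=36.0):
    """35 mm-equivalent focal length for a given horizontal field of view."""
    return sensor_width_mm / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

for fov in (25.0, 30.0, 45.0):
    print(f"FOV {fov:4.1f} deg -> f = {equiv_focal_length_mm(fov):.1f} mm")
    # 25 deg -> 81.2 mm, 30 deg -> 67.2 mm, 45 deg -> 43.5 mm
```

The 30° value matches the 67.2 mm quoted for the embodiment camera later in the description, which supports the 36 mm-width assumption.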
The surface of the projectile bears the target pattern of the invention described above, with the 4 vertices of each subregion as the feature points of the target pattern, so that the target pattern always presents 4 or more coplanar feature points in the camera's field of view.
Images of the projectile are taken, and the image data acquired by the camera are sent to a processor, which solves for the pose parameters of the projectile.
A unique identification code is assigned to each subregion according to its color and the number of code regions it contains.
A three-bit code may be used: the first bit represents the subregion color, and the second and third bits represent the number of code regions contained in the subregion. Subregions are identified from this code, and feature points are then identified from their distribution positions within the subregion.
To solve for the projectile pose, the image coordinates of the feature points of the target pattern are first extracted and identified from the camera image; then, combined with the coordinates of the corresponding feature points in the projectile coordinate system, a system of equations is established through coordinate transformation and solved jointly to obtain the relative pose of the projectile.
The projectile coordinate system, camera coordinate system and image coordinate system are established, and the coordinates of the feature points in the projectile coordinate system are determined. The coordinates of the feature points in the image coordinate system are extracted as follows:
(1) detect the projectile imaging region by background subtraction;
(2) within the projectile imaging region, segment the target pattern by thresholding, perform connected-component analysis on the segmented image, take the connected component with the largest pixel count as a subregion of the target pattern, and construct the subregion code from the threshold used in segmentation and the number of small connected regions contained in this component, thereby identifying the subregion;
(3) extract the contour of this connected component; use the Hough transform to extract from the subregion contour the straight-line points formed by points on the subregion sides, and fit two straight lines; separate from the subregion contour the curve points on the top and bottom edges, and fit elliptic curves; compute the intersections of the lines with the elliptic curves to obtain the subregion vertices as the coordinates of the target-pattern feature points in the image coordinate system;
(4) after the subregion is identified and the feature-point image coordinates are obtained, identify each feature point from its distribution position within the subregion.
The coordinates of the feature points in the projectile coordinate system are known quantities. Using the coordinates of the feature points in the projectile coordinate system and the corresponding coordinates in the image coordinate system, the pose of the projectile relative to the camera is computed through coordinate transformation.
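The patent does not spell out the solving procedure here beyond "coordinate transformation" and the cited P4P result. One standard way to recover pose from 4 coplanar correspondences is a planar-homography decomposition; the sketch below assumes a hypothetical calibrated intrinsic matrix K and expresses the 4 points in their common plane's own frame (Z = 0). It is an illustration of the general technique, not the patent's specific method:

```python
import numpy as np

# Hypothetical intrinsics and four coplanar target points (metres) in the
# plane's own frame; in the real system the target geometry is known by design.
K = np.array([[800.0, 0.0, 384.0],
              [0.0, 800.0, 288.0],
              [0.0, 0.0, 1.0]])
pts_plane = np.array([[0.0, 0.0], [0.10, 0.0], [0.10, 0.05], [0.0, 0.05]])

def project(R, t, pts):
    """Project plane points (Z = 0 in the plane frame) with pose (R, t)."""
    P = (R @ np.c_[pts, np.zeros(len(pts))].T).T + t
    uv = (K @ P.T).T
    return uv[:, :2] / uv[:, 2:3]

def pose_from_coplanar(pts, uv):
    """Recover (R, t) from >= 4 coplanar point correspondences via DLT."""
    A = []
    for (X, Y), (u, v) in zip(pts, uv):
        A += [[X, Y, 1, 0, 0, 0, -u*X, -u*Y, -u],
              [0, 0, 0, X, Y, 1, -v*X, -v*Y, -v]]
    H = np.linalg.svd(np.array(A))[2][-1].reshape(3, 3)  # homography, up to scale
    M = np.linalg.inv(K) @ H                             # ~ [r1 r2 t] up to scale
    M /= np.linalg.norm(M[:, 0]) * np.sign(M[2, 2])      # ||r1|| = 1, t_z > 0
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    return np.c_[r1, r2, np.cross(r1, r2)], t
```

With noisy detections the recovered rotation should be re-orthonormalized (e.g. via SVD); robust alternatives such as OpenCV's `solvePnP` exist for the same problem.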
Since the target is designed on the side of the projectile, the camera must be placed so that it can photograph the projectile's side. In scenes where the projectile pose varies widely, for example when the projectile may tumble, two or more cameras with corresponding image-processing units are needed, each forming a monocular video measurement system; the camera positions and orientations are adjusted so that at least one camera can always photograph the side of the projectile throughout the attitude change.
Compared with traditional projectile pose-measurement methods such as gyroscopes, the method of the invention does not touch the projectile and its measurement error does not accumulate. Because a monocular video measurement system is used, the system structure is simpler than that of the prior-art axis-collimation method, the measurement range is large, and the roll angle of the projectile can be measured. With the target pattern of the invention, the projectile surface is divided into 6 subregions, and the feature points in every subregion satisfy the coplanarity constraint, overcoming the inability of existing planar targets to fit the projectile's curved surface. The target pattern of the invention can be applied to all kinds of projectiles and missiles with a rotationally symmetric structure, is not limited by caliber or other dimensions, and thus has a very wide range of application.
Other features and advantages of the present invention will become more apparent from the detailed description of specific embodiments below, taken in conjunction with the accompanying drawings.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the system for the projectile monocular-video pose measurement method of the invention;
Fig. 2 is a schematic diagram of the monocular projectile imaging model of the invention;
Fig. 3 is a schematic diagram of the monocular imaging model of a projectile cross-section of the invention;
Fig. 4 is a schematic diagram of the projectile subregion division of the invention;
Fig. 5 is a schematic diagram of the coplanar design of the projectile feature points of the invention;
Fig. 6 shows the feature-point distribution and subregion code design of the target pattern of the invention;
Fig. 7 shows the target pattern of the invention;
Fig. 8 shows the target pattern when the angle between the projectile axis and the camera optical axis is 30°;
Fig. 9 defines the coordinate systems of the projectile monocular-video pose measurement system of the invention;
Fig. 10 is the general coordinate transformation diagram of the invention;
Fig. 11 is the ideal coordinate transformation diagram of the invention.
Embodiment
The target pattern of the projectile of the invention and a projectile monocular-video pose measurement method are described in detail below with reference to the accompanying drawings and a typical embodiment.
Referring to Figs. 6, 7 and 8, the target pattern of a projectile is designed as a belt-shaped region around the outer circumferential surface of the projectile, divided into 6 subregions 5 along the projectile axis. Adjacent subregions alternate between two colors, and both colors contrast strongly with the projectile surface color. The target pattern lies on the projectile's outer surface and is placed, as far as possible, on the part of the projectile cross-section with the larger diameter.
The 6 subregions 5 are set in two colors that contrast strongly with the projectile's own surface color. As shown in Fig. 7, the projectile in this embodiment is grey, so black and white are chosen for the target pattern. Colors such as red, black or blue may also be used as the case requires.
The sides 8 of each subregion are coplanar with the projectile axis, and the top edge 9 and bottom edge 10 of each subregion lie on circles of cross-sections perpendicular to the projectile axis.
To identify each subregion 5, 0 to 2 code regions 7 are set inside the subregions. A code region 7 is a region of comparatively small area whose color contrasts strongly with that of its subregion 5. In this embodiment, the code regions inside black subregions are white, and those inside white subregions are black.
A code region 7 may be square or circular, or any other easily identified shape.
Combining the number of code regions 7 with the color of the containing subregion 5, a unique three-bit code is designed for each subregion 5. The feature points 6 of the target pattern are the 4 vertices of the subregions.
The sides of each subregion are coplanar with the projectile axis, and the top and bottom edges of each subregion lie on circles of cross-sections perpendicular to the projectile axis.
Inside each subregion, regions of comparatively small area are set as code regions, whose color contrasts strongly with that of the containing subregion; a code region may be square, circular, or any other easily identified shape. Subregions of the same color contain different numbers of code regions; in the present embodiment the number of code regions is 0 to 2.
The 4 vertices of each subregion serve as the feature points of the target pattern.
Fig. 1 is a schematic diagram of the system for the projectile monocular-video pose measurement method applying the above target pattern. The system comprises a camera 1 and a projectile 3 bearing the target pattern; the camera 1 is mounted on an adjustable base 2. The base is adjusted so that the camera 1 faces the projectile 3 and the projectile 3 lies within the camera's field of view. The image data acquired by the camera are sent to a processor, which solves for the pose parameters of the projectile.
The camera resolution in this embodiment is 768 x 576 pixels, the field of view is 30°, and the equivalent focal length is 67.2 mm.
How the partial-occlusion problem of the target pattern is solved is described first.
As the projectile pose changes, the camera can photograph only part of the projectile surface. To guarantee that the image of the target pattern always contains enough feature information, the target pattern divides the projectile surface into a number of subregions such that, at any time during the projectile's motion, at least one complete subregion is photographed by the camera; with a number of feature points designed inside each subregion, the image of the target pattern then always contains enough feature information. The subregion division is described in detail with reference to Fig. 2.
Referring to Fig. 2, a monocular projectile imaging model is established to analyze the number of subregions. Let θ denote the angle between the camera optical axis o_c z_c and the projectile axis, l_p the projectile length, and l_I the imaged length of the projectile. Denoting by l'_p the projected length of the projectile on the image plane 4, these are related by
l'_p = l_p sinθ,  l_I ∝ l'_p   (1)
When θ ∈ (0°, 30°) ∪ (150°, 180°), l'_p < 0.5 l_p, and l_I is less than half its value at θ = 90°; the deformation of the projectile image is then large, the imaged target pattern on the projectile surface is also strongly deformed, and feature-point extraction becomes difficult. The target is therefore designed only for the range θ ∈ [30°, 150°], within which the imaged target structure remains clear (the imaging deformation of the target pattern is relatively small).
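Relation (1) is easy to check numerically: the projected-length ratio l'_p / l_p equals sin θ, and the chosen working range [30°, 150°] is exactly where the ratio stays at or above one half. A minimal check:

```python
import math

# Relation (1): l'_p / l_p = sin(theta). Outside [30, 150] degrees the
# projected length drops below half the projectile length.
for theta_deg in (10, 30, 90, 150, 170):
    ratio = math.sin(math.radians(theta_deg))
    print(f"theta = {theta_deg:3d} deg -> l'_p / l_p = {ratio:.3f}")
```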
In Fig. 2, cutting the projectile with the plane x_c o_c z_c gives a projectile cross-section that is approximately an ellipse. Referring to Fig. 3, a monocular vision model of this projectile cross-section is established.
Let L denote the distance from the camera optical center o_c to the ellipse center o_o. Point C is the intersection of the camera optical axis o_c z_c with the ellipse; arcs AC and BC are two identical arcs. Lines o_c A and o_c B are tangent to the ellipse at points A and B respectively, and the arc between these two points can be photographed by the camera. The central angle subtended by this arc is φ; any arc within the arc subtending φ can be photographed by the camera.
The ellipse is divided into segments of equal central angle. The number of segments must be chosen so that, no matter how the projectile rotates about its axis, one complete segment always falls inside the visible arc subtending φ. Each segment subtends a central angle of 360°/n, and the following condition must hold:
2 · (360°/n) ≤ φ   (2)
Under this condition, at least one complete segment always lies within the arc subtending φ. If the condition is not satisfied, there exist, for example, arcs FC and EC on either side of C whose central angles satisfy
∠F o_o C = ∠E o_o C > 0.5 φ   (3)
so that neither arc FC nor arc EC falls completely within the imaging window.
φ is now computed to determine the number of ellipse divisions. In Fig. 2, the semi-minor axis a and semi-major axis b of the ellipse are
a = R,  b = R/sinθ   (4)
where R is the projectile radius. Referring to Fig. 3, the coordinates (x_b, z_b) of the tangent point B satisfy
x_b²/a² + (z_b − L)²/b² = 1,  x_b = k z_b   (5)
where k is the slope of the tangent line o_c B. Since the tangent has only one intersection with the ellipse, solving gives the coordinates of B:
k = a/√(L² − b²),  x_b = a√(L² − b²)/L
and hence
φ = 2 arctan(x_b/(L − z_b)) = 2 arctan(a√(L² − b²)/b²) = 2 arctan(sinθ · √(L² sin²θ − R²)/R)
The larger sinθ is, the larger φ is. sinθ is smallest when θ is 30° or 150°, so
φ_min = 2 arctan(√(L² − 4R²)/(4R))
In a practical monocular video measurement system, L ≥ 10R usually holds, and then φ > 135.5°. To satisfy formula (2), the number of ellipse segments must be n ≥ 6. In Fig. 3, take a point D on one of the arc segments and draw the tangent to the ellipse at D, meeting o_c z_c at a point G. Then
l'_D ≈ l_D sin∠CGD,  l_I ∝ l'_D   (6)
where l_D denotes the arc at point D subtending a central angle of 1°, here approximated by a straight segment, and l'_D denotes the projected length of this arc on the image plane.
Taking θ = 30° and n = 6, l'_D ≈ 0.27 l_D: for arcs of equal length, the imaged length at point D is about one quarter of that at point C. The larger n is, the smaller the imaging deformation of the arc at D, and the easier it is to extract the feature points designed on the projectile. However, as n increases, the number of target blocks grows, the design complexity of the target pattern rises, and the distances between feature points shrink, which reduces the system's resolution of the attitude angles. Therefore n = 6 is chosen (see Fig. 4): the ellipse is divided into 6 arc segments, every cross-section of the projectile is correspondingly divided into 6 arcs by central angle, and the projectile surface is thus divided into 6 subregions 5'.
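The visible-arc formula and the segment-count condition (2) can be checked numerically; a sketch under the stated lower bound L = 10R (function and variable names are illustrative):

```python
import math

def visible_arc_deg(L, R, theta_deg):
    """Central angle phi of the ellipse arc visible to the camera."""
    s = math.sin(math.radians(theta_deg))
    return math.degrees(2 * math.atan(s * math.sqrt((L * s) ** 2 - R ** 2) / R))

L, R = 10.0, 1.0
phi_min = visible_arc_deg(L, R, 30.0)   # worst case: theta = 30 deg
print(f"phi_min = {phi_min:.1f} deg")   # ~135.6 deg, matching the text

# Condition (2): a full segment is always visible iff 2 * (360/n) <= phi.
n = next(n for n in range(3, 20) if 2 * 360.0 / n <= phi_min)
print(f"smallest admissible n = {n}")   # -> 6
```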
In Fig. 4, no matter how the projectile rotates about its axis, one complete subregion 5' always appears in the camera's field of view. With a number of feature points designed inside each subregion 5', the image of the target pattern always contains enough feature information.
Next, 4 feature points are designed in each subregion, and these 4 points must be coplanar. How coplanarity of the feature points on the target pattern is guaranteed is described below:
Referring to Fig. 5, to guarantee coplanarity, the feature points are placed on two circular arcs of cross-sections perpendicular to the projectile axis. Planes ABC and A'B'C' are perpendicular to the axis AA'. To guarantee that feature points B, C, B', C' are coplanar, lines BB' and CC' must each be coplanar with the axis AA'. The proof is as follows:
If the two cross-section circles have the same radius, then BB' ∥ AA' and CC' ∥ AA', so the marker points B, C, B', C' are coplanar.
If the radii of the two cross-section circles differ, then since lines BB' and CC' are each coplanar with the projectile axis AA', the extension of BB' meets AA' at a point D, and the extension of CC' meets AA' at a point D'. By triangle similarity,
A'B'/AB = A'D/AD,  A'C'/AC = A'D'/AD'   (7)
Since
AB = AC = R₁,  A'B' = A'C' = R₂   (8)
where R₁ and R₂ are the radii of the two cross-section circles, it follows that
A'D/AD = A'D'/AD',  i.e.  A'D/(AA' + A'D) = A'D'/(AA' + A'D')   (9)
hence
A'D' = A'D   (10)
That is, D and D' coincide, and lines BB' and CC' intersect at D (= D'). Two intersecting lines are necessarily coplanar, so feature points B, C, B', C' are coplanar. This completes the proof.
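The proof can be spot-checked numerically: place B and C on one cross-section circle and B', C' at the same azimuths on a second circle of different radius (so BB' and CC' are each coplanar with the axis), then verify that the scalar triple product of the spanning vectors vanishes. Radii, spacing and angles below are arbitrary assumed values:

```python
import math

def triple_product(u, v, w):
    """Scalar triple product u . (v x w); zero iff the four points are coplanar."""
    cx = (v[1]*w[2] - v[2]*w[1], v[2]*w[0] - v[0]*w[2], v[0]*w[1] - v[1]*w[0])
    return sum(ui * ci for ui, ci in zip(u, cx))

R1, R2, h = 1.0, 0.8, 0.5                      # two section radii and spacing
a1, a2 = math.radians(20), math.radians(80)    # azimuths of B and C

B  = (R1*math.cos(a1), R1*math.sin(a1), 0.0)
C  = (R1*math.cos(a2), R1*math.sin(a2), 0.0)
Bp = (R2*math.cos(a1), R2*math.sin(a1), h)     # B' at the same azimuth as B
Cp = (R2*math.cos(a2), R2*math.sin(a2), h)     # C' at the same azimuth as C

vecs = [tuple(p[i] - B[i] for i in range(3)) for p in (C, Bp, Cp)]
print(abs(triple_product(*vecs)))              # ~0: B, C, B', C' are coplanar
```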
Referring to Fig. 6: since the sides of the belt-shaped subregions 5' in Fig. 4 are coplanar with the projectile axis, by the above analysis the projectile is cut with two cross-sections perpendicular to the axis, only the middle parts of the 6 subregions between the two cross-sections are retained, the remaining parts are restored to the projectile's original color (no target pattern), and the vertices of these subregions serve as the feature points 6 of the target pattern.
The retained subregions should be placed, as far as possible, on the part of the projectile with the larger diameter, and the target pattern should be sufficiently wide along the projectile axis.
With the partial-occlusion problem and the feature-point placement solved, the overall configuration of the target pattern is obtained:
To make the feature points of the target pattern satisfy the coplanarity constraint, the subregions 5' of Fig. 4 are each divided into three parts by two cross-sections perpendicular to the projectile axis, and the middle parts are retained to form the belt-shaped target pattern around the projectile's outer circumference. The target pattern thus surrounds the projectile's outer surface, divided into 6 subregions 5, with the 4 vertices of each subregion 5 as feature points 6. The sides 8 of each subregion 5 are coplanar with the projectile axis, and the top edge 9 and bottom edge 10 of each subregion 5 lie on cross-section circles perpendicular to the projectile's longitudinal axis. With the target pattern of the invention, no matter how the projectile rotates, one complete subregion 5 always appears in the camera's field of view.
A unique three-bit code is assigned to each subregion according to the number of its code regions and its color.
The code design of the target pattern is now described in detail:
To complete feature-point identification, every feature point of the target pattern must be uniquely identifiable. Referring to Fig. 7, a unique code is designed for each subregion according to its color and the number of code regions 7 it contains. The subregion codes of this specific embodiment are listed below.

  Subregion   Color   Code regions   Code
  one         black   0              000
  two         white   2              110
  three       black   1              001
  four        white   0              100
  five        black   2              010
  six         white   1              101

In adjacent order the subregions are numbered subregion one, subregion two, ..., subregion six. Subregions one, three and five are black; subregions two, four and six are white. Subregions one and four contain no code region, subregions three and six contain 1 code region, and subregions two and five contain 2 code regions. The code is designed as follows: the first bit represents the subregion color, 0 for black and 1 for white; the second and third bits represent the number of code regions: 00 for none, 01 for 1 code region, and 10 for 2 code regions 7.
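The six codes and the subregion lookup can be written out directly; a small sketch of the scheme (first bit: color, last two bits: code-region count):

```python
# Subregions one..six as (color, number of code regions), in adjacent order.
SUBREGIONS = [("black", 0), ("white", 2), ("black", 1),
              ("white", 0), ("black", 2), ("white", 1)]

def encode(color, n_regions):
    """Color bit (black=0, white=1) followed by the 2-bit region count."""
    return ("0" if color == "black" else "1") + format(n_regions, "02b")

CODEBOOK = {encode(c, n): i + 1 for i, (c, n) in enumerate(SUBREGIONS)}

def identify(color, n_regions):
    """Recover the subregion number from observed color and code-region count."""
    return CODEBOOK[encode(color, n_regions)]

assert len(CODEBOOK) == 6       # all six codes are distinct
print(identify("white", 2))     # -> 2
print(sorted(CODEBOOK))         # ['000', '001', '010', '100', '101', '110']
```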
A subregion is first identified from its color and the number of code regions it contains. After the subregion is identified, each feature point is identified from its position within the subregion: with the projectile tip taken as pointing upward, a feature point is distinguished according to whether it lies at the lower left or the upper right of the subregion.
Referring to Fig. 8, when the angle between the projectile axis and the camera optical axis is 30°, the deformation of the projectile image is the largest within the design range. Even at this maximum deformation, the structure of the target pattern in Fig. 8 remains clear, its parts do not merge, and the feature points can be extracted.
As shown in Fig. 8, the image contains a complete subregion one with 4 coplanar feature points; feature points 11-14 are denoted P0-P3 respectively, and the pose parameters of the projectile can be solved from these points.
Taking the target image of Fig. 8 as an example, the projectile monocular-video pose measurement method using the target pattern of the invention is described in detail below, with the pose-solving process explained step by step. In what follows, the feature-point extraction process is the inventive point of this application; the pose-solving principles and formulas used are well known in the art, and this application solves the projectile pose by combining them with the target pattern.
Referring to Fig. 9, the coordinate systems of the system are defined first. Taking subregion one as an example, the coordinate values of the 4 feature points of any subregion in each coordinate system are:
(1) The projectile coordinate system O-XYZ is established, with its origin at the center of the projectile base in this example. Let the i-th feature point P_i of subregion one have coordinates W_i = (X_i, Y_i, Z_i)^T, i = 0, 1, 2, 3, in this system; since the design parameters of the target are all known, these coordinates are known quantities.
(2) The camera coordinate system O_c-X_c Y_c Z_c is established, with its origin at the camera optical center. The coordinates of the i-th feature point P_i of subregion one in this system are unknown quantities.
(3) The image pixel coordinate system is established, with its origin at the upper-left corner of the image plane and the u and v axes along the horizontal and vertical axes of the image. Let the i-th feature point P_i of subregion one have coordinates (u_i, v_i)^T, i = 0, 1, 2, 3, in this system; these coordinates are obtained by extraction. The image coordinate system o-xy has its origin at the intersection of the optical axis with the image plane. Let the i-th feature point P_i of subregion one have coordinates (x_i, y_i)^T, i = 0, 1, 2, 3, in the image coordinate system.
When solving, first extract the coordinates $(u_i,v_i)^T$, $i=0,1,2,3$, of the feature points in the image pixel coordinate system. The extraction steps are as follows:
(1) Detect the bullet imaging region by background subtraction. The camera is stationary during measurement, so a background image is captured first; each subsequent measurement image is subtracted from the background image, and the nonzero region of the difference image is the bullet imaging region;
(2) Within the bullet imaging region, segment the image with thresholds chosen according to the gray values of the target. Different thresholds are used for the black and white regions: a smaller threshold, e.g. T1 = 20, segments the black regions, and a larger threshold, e.g. T2 = 200, segments the white regions. Connected-component analysis is then performed on the segmented image; the connected component with the largest pixel count is taken as a subregion of the target pattern, and the subregion code is constructed from the threshold used in segmentation and the number of small connected regions contained in this component, so as to identify the subregion;
(3) Extract the contour of this connected component with the morphological operation of dilating the image and subtracting the pre-dilation image. From the contour, separate the straight-line points (the points on the two sides of the subregion) with the Hough transform and fit two straight lines; straight-line fitting is a known technique in the art. Separate the curve points (the points on the top and bottom edges of the subregion) from the contour and fit two elliptic curves. Then compute the intersections of the two lines with the two ellipses; of the four intersections, exclude the two that are obviously wrong to obtain the subregion vertex coordinates. Curve fitting is likewise a known technique in the art;
(4) Obtain the code of the subregion from its color and the number of coding regions it contains, and identify the subregion from the code. Then identify the feature points from their distribution positions on the subregion: for example, taking the bullet nose as up, a feature point is identified by whether it lies at the lower left or the lower right of the subregion.
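Steps (1) and (2) amount to background differencing followed by double thresholding. A minimal numpy sketch (the function name, the example thresholds T1 = 20 and T2 = 200 from above, and 8-bit grayscale frames are assumptions for illustration):

```python
import numpy as np

def segment_pattern(frame, background, t1=20, t2=200):
    """Step (1): background subtraction isolates the bullet imaging region.
    Step (2): two thresholds split the pattern into black and white masks."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    moving = diff != 0                  # nonzero difference = bullet region
    black = moving & (frame < t1)       # dark subregions (threshold T1)
    white = moving & (frame > t2)       # bright subregions (threshold T2)
    return moving, black, white
```

Connected-component labeling of the `black` and `white` masks (e.g. with `scipy.ndimage.label`) then yields the largest component as the candidate subregion.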
Through the above steps the coordinates $(u_i,v_i)^T$, $i=0,1,2,3$, of the feature points of subregion one in the image pixel coordinate system are obtained, and their corresponding bullet-frame coordinates $W_i=(X_i,Y_i,Z_i)^T$, $i=0,1,2,3$, are known quantities. The pose-solving algorithm uses the bullet coordinates of these 4 coplanar feature points and their corresponding image pixel coordinates to solve the pose of the bullet relative to the camera. The solving process is a known technique in the art; the concrete procedure is as follows:
Formula (11) converts a feature point's pixel coordinates $(u_i,v_i)^T$ in the image pixel coordinate system into image coordinates $(x_i,y_i)^T$, where $dx$ and $dy$ are the pixel pitches along the $x$ and $y$ axes and $(u_0,v_0)$ is the principal-point coordinate of the camera. These are camera intrinsic parameters, obtained by camera calibration, which is a technique well known in the art.
$$x_i=(u_i-u_0)\,dx,\qquad y_i=(v_i-v_0)\,dy \qquad (11)$$
According to the perspective projection model, the relation between a feature point's image coordinates and its camera coordinates is:
$$x_i=f\frac{X_i^c}{Z_i^c},\qquad y_i=f\frac{Y_i^c}{Z_i^c} \qquad (12)$$
where $f$ is the focal length of the camera, also obtained by camera calibration.
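Formulas (11) and (12) translate directly into code (a sketch; the numeric values used in testing are arbitrary illustrative data, not calibration results from the text):

```python
def pixel_to_image(u, v, u0, v0, dx, dy):
    """Formula (11): pixel coordinates (u, v) -> image coordinates (x, y)."""
    return (u - u0) * dx, (v - v0) * dy

def perspective(Xc, Yc, Zc, f):
    """Formula (12): camera-frame point (Xc, Yc, Zc) -> image coordinates."""
    return f * Xc / Zc, f * Yc / Zc
```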
According to the coordinate-system transformation relation,
$$W_i^c=M\,W_i$$
where $M$ represents the coordinate transformation between the camera coordinate system and the bullet coordinate system:
$$\begin{pmatrix}X_i^c\\Y_i^c\\Z_i^c\\1\end{pmatrix}=\begin{pmatrix}r_{11}&r_{12}&r_{13}&t_x\\r_{21}&r_{22}&r_{23}&t_y\\r_{31}&r_{32}&r_{33}&t_z\\0&0&0&1\end{pmatrix}\begin{pmatrix}X_i\\Y_i\\Z_i\\1\end{pmatrix} \qquad (13)$$
The pose parameters $R$, $t$ of the bullet in this formula are defined as
$$R=\begin{pmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{pmatrix},\qquad t=\begin{pmatrix}t_x\\t_y\\t_z\end{pmatrix}$$
where $R$ and $t$ are the rotation matrix and the translation vector, respectively. The relation between a feature point's image coordinates and its bullet coordinates is
$$\frac{x_i}{f}=\frac{r_{11}X_i+r_{12}Y_i+r_{13}Z_i+t_x}{r_{31}X_i+r_{32}Y_i+r_{33}Z_i+t_z},\qquad \frac{y_i}{f}=\frac{r_{21}X_i+r_{22}Y_i+r_{23}Z_i+t_y}{r_{31}X_i+r_{32}Y_i+r_{33}Z_i+t_z} \qquad (14)$$
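Chaining formulas (12)-(14): given a pose $(R,t)$, a bullet-frame feature point maps to the image plane as follows (a sketch; the identity rotation and the translation used for testing are arbitrary illustrative values):

```python
import numpy as np

def image_coords(R, t, W, f):
    """Formula (14): project the bullet-frame point W under the pose (R, t).
    The camera-frame point of formula (13) is R @ W + t; formula (12) then
    divides by depth."""
    Xc, Yc, Zc = R @ W + t
    return f * Xc / Zc, f * Yc / Zc
```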
The matrix $M$ can be decomposed as
$$M=S^{-1}CT \qquad (15)$$
so that solving for $M$ is converted into solving for the matrices $S^{-1}$, $C$ and $T$ separately. The three matrices are defined as follows:
(1) Referring to Figure 10, the bullet coordinate system is transformed so that the 4 feature points $P_0$, $P_1$, $P_2$, $P_3$ of subregion one take the following positions in the new system: $P_0$ is the origin, $P_1$ lies on the positive $X$-axis, and $P_2$ lies in the first quadrant of the $XY$ plane. This transformation is denoted $T$, and the new system is called the conventional coordinate system.
(2) Referring to Figure 11, the camera coordinate system is transformed so that the image point $p_0$ of $P_0$ becomes the origin of the image coordinate system and the image point $p_1$ of $P_1$ lies on the positive $x$-axis of the image coordinate system. This transformation is denoted $S$; the new camera coordinate system is called the ideal coordinate system, and the resulting image is the ideal image. The position of the camera optical center is unchanged, the origin of the conventional coordinate system projects exactly to the image origin, and the $x$-axis of the image coordinate system is parallel to the $X$-axis of the conventional coordinate system.
(3) The relative transformation between the conventional coordinate system and the ideal coordinate system is represented by the matrix $C$, which is then solved for.
Each transformation matrix is now solved using known techniques:
(1) Solving for $T$
Referring to Figure 10, following the idea above, the bullet coordinate system is turned into the conventional coordinate system through translation and rotation transformations that move the 4 feature points to the positions the conventional coordinate system prescribes.
A translation first moves $P_0$ to the new origin; this transformation is denoted $T_o$, and $W_i^o=(X_i^o,Y_i^o,Z_i^o,1)^T=T_oW_i$ are the coordinates after translation:
$$T_o=\begin{pmatrix}1&0&0&-X_0\\0&1&0&-Y_0\\0&0&1&-Z_0\\0&0&0&1\end{pmatrix}$$
The translated bullet coordinate system is then rotated by an angle $\theta$ about the $Z$-axis so that $P_1$ lies on the positive $X$-axis of the new system; the rotation is denoted $R_\theta$, with $W_i^\theta=(X_i^\theta,Y_i^\theta,Z_i^\theta,1)^T=R_\theta W_i^o$:
$$R_\theta=\begin{pmatrix}\cos\theta&\sin\theta&0&0\\-\sin\theta&\cos\theta&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}$$
To guarantee that $P_1$ lies on the positive $X$-axis, $Y_1^\theta=0$ must hold, which gives
$$\theta=\tan^{-1}\!\left(\frac{Y_1^o}{X_1^o}\right) \qquad (15)$$
If $X_1^o<0$, set $\theta=\theta+\pi$ so that $X_1^\theta>0$; if $X_1^o=Y_1^o=0$, set $\theta=0$.
Similarly, the coordinate system is then rotated by an angle $\beta$ about the $Y$-axis so that the $Z$ coordinate of $P_1$ becomes 0. The rotation is denoted $R_\beta$, with $W_i^\beta=(X_i^\beta,Y_i^\beta,Z_i^\beta,1)^T=R_\beta W_i^\theta$:
$$R_\beta=\begin{pmatrix}\cos\beta&0&-\sin\beta&0\\0&1&0&0\\\sin\beta&0&\cos\beta&0\\0&0&0&1\end{pmatrix}$$
For $P_1$ to lie in the $XY$ plane, $Z_1^\beta=X_1^\theta\sin\beta+Z_1^\theta\cos\beta=0$ must hold, which gives
$$\beta=\tan^{-1}\!\left(\frac{-Z_1^\theta}{X_1^\theta}\right) \qquad (16)$$
Similarly, the coordinate system is then rotated by an angle $\alpha$ about the $X$-axis so that $P_2$ lies in the first quadrant of the $XY$ plane. The rotation is denoted $R_\alpha$, with $W_i^\alpha=(X_i^\alpha,Y_i^\alpha,Z_i^\alpha,1)^T=R_\alpha W_i^\beta$:
$$R_\alpha=\begin{pmatrix}1&0&0&0\\0&\cos\alpha&\sin\alpha&0\\0&-\sin\alpha&\cos\alpha&0\\0&0&0&1\end{pmatrix}$$
For $P_2$ to lie in the first quadrant of the $XY$ plane, $Z_2^\alpha=-Y_2^\beta\sin\alpha+Z_2^\beta\cos\alpha=0$ must hold, which gives
$$\alpha=\tan^{-1}\!\left(\frac{Z_2^\beta}{Y_2^\beta}\right) \qquad (17)$$
If $Y_2^\beta<0$, set $\alpha=\alpha+\pi$ to guarantee that $P_2$ lies in the first quadrant of the $XY$ plane.
Replacing the variables in formulas (15)-(17) with $X_i$, $Y_i$, $Z_i$ gives
$$\theta=\tan^{-1}\frac{Y_1-Y_0}{X_1-X_0},\qquad \beta=\tan^{-1}\frac{-(Z_1-Z_0)}{\sqrt{(X_1-X_0)^2+(Y_1-Y_0)^2}},$$
$$\alpha=\tan^{-1}\frac{[(X_2-X_0)\cos\theta+(Y_2-Y_0)\sin\theta]\sin\beta+(Z_2-Z_0)\cos\beta}{-(X_2-X_0)\sin\theta+(Y_2-Y_0)\cos\theta} \qquad (18)$$
where, if $X_1<X_0$, $\theta=\theta+\pi$; if $X_1=X_0$ and $Y_1=Y_0$, $\theta=0$; and if $-(X_2-X_0)\sin\theta+(Y_2-Y_0)\cos\theta<0$, $\alpha=\alpha+\pi$.
The matrix $T$ is given by formula (19); after this transformation, $P_0$ is the origin of the new target coordinate system, $P_1$ lies on the positive $X$-axis, and $P_2$ lies in the $XY$ plane.
$$T=\begin{pmatrix}T_{11}&T_{12}&T_{13}&T_{14}\\T_{21}&T_{22}&T_{23}&T_{24}\\T_{31}&T_{32}&T_{33}&T_{34}\\0&0&0&1\end{pmatrix} \qquad (19)$$
where:
$$T_{11}=\cos\beta\cos\theta,\quad T_{12}=\cos\beta\sin\theta,\quad T_{13}=-\sin\beta,$$
$$T_{21}=-\cos\alpha\sin\theta+\sin\alpha\sin\beta\cos\theta,\quad T_{22}=\cos\alpha\cos\theta+\sin\alpha\sin\beta\sin\theta,\quad T_{23}=\sin\alpha\cos\beta,$$
$$T_{31}=\sin\alpha\sin\theta+\cos\alpha\sin\beta\cos\theta,\quad T_{32}=-\sin\alpha\cos\theta+\cos\alpha\sin\beta\sin\theta,\quad T_{33}=\cos\alpha\cos\beta,$$
$$T_{i4}=-X_0T_{i1}-Y_0T_{i2}-Z_0T_{i3},\quad i=1,2,3$$
Thus:
$$(X_i^\alpha,Y_i^\alpha,Z_i^\alpha,1)^T=T\cdot(X_i,Y_i,Z_i,1)^T,\quad i=0,1,2,3$$
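The construction of T in formulas (15)-(19) can be sketched as follows (the function name is an assumption; `arctan2` is used so that the quadrant corrections stated after formulas (15), (17) and (18) are folded in automatically):

```python
import numpy as np

def conventional_transform(W):
    """Formulas (15)-(19): build T so that P0 maps to the origin, P1 to the
    positive X-axis, and P2 into the XY plane with positive Y.
    W is a (4, 3) array holding the bullet-frame points P0..P3 as rows."""
    P0, P1, P2 = W[0], W[1], W[2]
    # theta, beta, alpha of formula (18); arctan2 folds in the +pi corrections
    theta = np.arctan2(P1[1] - P0[1], P1[0] - P0[0])
    beta = np.arctan2(-(P1[2] - P0[2]),
                      np.hypot(P1[0] - P0[0], P1[1] - P0[1]))
    x2 = (P2[0] - P0[0]) * np.cos(theta) + (P2[1] - P0[1]) * np.sin(theta)
    y2 = -(P2[0] - P0[0]) * np.sin(theta) + (P2[1] - P0[1]) * np.cos(theta)
    z2 = x2 * np.sin(beta) + (P2[2] - P0[2]) * np.cos(beta)
    alpha = np.arctan2(z2, y2)
    Rz = np.array([[np.cos(theta), np.sin(theta), 0],
                   [-np.sin(theta), np.cos(theta), 0],
                   [0, 0, 1]])
    Ry = np.array([[np.cos(beta), 0, -np.sin(beta)],
                   [0, 1, 0],
                   [np.sin(beta), 0, np.cos(beta)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), np.sin(alpha)],
                   [0, -np.sin(alpha), np.cos(alpha)]])
    R = Rx @ Ry @ Rz
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = -R @ P0      # T_i4 = -X0*Ti1 - Y0*Ti2 - Z0*Ti3
    return T
```

Applying the returned T to P0..P3 places P0 at the origin, P1 on the positive X-axis, and P2 in the XY plane with positive Y, as required.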
(2) Solving for $S$
Referring to Figure 11, the transformation from the original camera coordinate system to the ideal coordinate system is $S$: the original camera coordinate system is rotated so that the image point $p_0$ of $P_0$ becomes the origin of the image coordinate system and the image point $p_1$ of $P_1$ lies on the positive $x$-axis of the image coordinate system.
Applying the $T$ transformation to the bullet does not change the image coordinates of the feature points, so a feature point images identically in the bullet coordinate system and the conventional coordinate system. Therefore
$$x_i=x_i^\alpha=f\frac{X_i^c}{Z_i^c},\qquad y_i=y_i^\alpha=f\frac{Y_i^c}{Z_i^c} \qquad (20)$$
In formula (20), $f$ is the focal length of the camera, and the superscript $\alpha$ denotes the image coordinates of a feature point of the conventional coordinate system.
First rotate by an angle $\phi$ about the camera $Y_c$ axis so that $p_0$ lies on the $y$-axis of the image coordinate system. The rotation is denoted $R_\phi$, with $W_i^\phi=(X_i^\phi,Y_i^\phi,Z_i^\phi,1)^T=R_\phi W_i^c$, where
$$R_\phi=\begin{pmatrix}\cos\phi&0&-\sin\phi&0\\0&1&0&0\\\sin\phi&0&\cos\phi&0\\0&0&0&1\end{pmatrix}$$
The feature points still satisfy the perspective projection relation in the new coordinate system:
$$x_i^\phi=f\frac{X_i^\phi}{Z_i^\phi},\qquad y_i^\phi=f\frac{Y_i^\phi}{Z_i^\phi}$$
For $p_0$ to lie on the $y$-axis of the image coordinate system, $x_0^\phi=0$, i.e. $X_0^\phi=0$, is required; from the rotation matrix $R_\phi$, $X_0^c\cos\phi-Z_0^c\sin\phi=0$, so
$$\phi=\tan^{-1}\!\left(\frac{X_0^c}{Z_0^c}\right)=\tan^{-1}\!\left(\frac{x_0^\alpha}{f}\right),\quad 0\le|\phi|\le\frac{\pi}{2} \qquad (21)$$
Similarly, rotate by an angle $\omega$ about the camera $X_c$ axis so that $p_0$ lies at the origin of the image coordinate system. The rotation is denoted $R_\omega$, with $W_i^\omega=(X_i^\omega,Y_i^\omega,Z_i^\omega,1)^T=R_\omega W_i^\phi$:
$$R_\omega=\begin{pmatrix}1&0&0&0\\0&\cos\omega&\sin\omega&0\\0&-\sin\omega&\cos\omega&0\\0&0&0&1\end{pmatrix}$$
The feature points still satisfy the perspective projection relation in the new coordinate system:
$$x_i^\omega=f\frac{X_i^\omega}{Z_i^\omega},\qquad y_i^\omega=f\frac{Y_i^\omega}{Z_i^\omega}$$
For $p_0$ to lie at the origin of the image coordinate system, $y_0^\omega=0$, i.e. $Y_0^\omega=Y_0^\phi\cos\omega+Z_0^\phi\sin\omega=0$, is required, so
$$\omega=\tan^{-1}\!\left(\frac{-Y_0^\phi}{Z_0^\phi}\right)=\tan^{-1}\!\left(\frac{-y_0^\alpha\cos\phi}{f}\right),\quad 0\le|\omega|\le\frac{\pi}{2} \qquad (22)$$
Similarly, rotate by an angle $\rho$ about the camera $Z_c$ axis so that $p_1$ lies on the positive $x$-axis of the image coordinate system. The rotation is denoted $R_\rho$, with $W_i^\rho=(X_i^\rho,Y_i^\rho,Z_i^\rho,1)^T=R_\rho W_i^\omega$:
$$R_\rho=\begin{pmatrix}\cos\rho&\sin\rho&0&0\\-\sin\rho&\cos\rho&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}$$
The feature points still satisfy the perspective projection relation in the new coordinate system:
$$x_i^\rho=f\frac{X_i^\rho}{Z_i^\rho},\qquad y_i^\rho=f\frac{Y_i^\rho}{Z_i^\rho}$$
For $p_1$ to lie on the positive $x$-axis of the image coordinate system, $y_1^\rho=0$, i.e. $Y_1^\rho=-X_1^\omega\sin\rho+Y_1^\omega\cos\rho=0$, is required, so
$$\rho=\tan^{-1}\!\left(\frac{Y_1^\omega}{X_1^\omega}\right)=\tan^{-1}\!\left(\frac{y_1^\alpha\cos\omega+x_1^\alpha\sin\phi\sin\omega+f\cos\phi\sin\omega}{x_1^\alpha\cos\phi-f\sin\phi}\right) \qquad (23)$$
Summarizing formulas (21)-(23), the three rotation angles are:
$$\phi=\tan^{-1}\!\left(\frac{x_0^\alpha}{f}\right),\ 0\le|\phi|\le\frac{\pi}{2};\qquad \omega=\tan^{-1}\!\left(\frac{-y_0^\alpha\cos\phi}{f}\right),\ 0\le|\omega|\le\frac{\pi}{2};$$
$$\rho=\tan^{-1}\!\left(\frac{y_1^\alpha\cos\omega+x_1^\alpha\sin\phi\sin\omega+f\cos\phi\sin\omega}{x_1^\alpha\cos\phi-f\sin\phi}\right) \qquad (24)$$
In formula (24), $(x_i^\alpha,y_i^\alpha)$ are the image coordinates of the feature points of the conventional coordinate system in the original image; in fact they equal the image coordinates before the transformation.
The above transformations turn the camera coordinate system into the ideal coordinate system; the rotation matrix between the two systems is $S=R_\rho R_\omega R_\phi$. After the $S$ transformation, the coordinates of each feature point in the ideal image are:
$$x_i^\rho=f\frac{X_i^\rho}{Z_i^\rho}=f\frac{S_{11}X_i^c+S_{12}Y_i^c+S_{13}Z_i^c}{S_{31}X_i^c+S_{32}Y_i^c+S_{33}Z_i^c}=f\frac{S_{11}x_i^\alpha+S_{12}y_i^\alpha+S_{13}f}{S_{31}x_i^\alpha+S_{32}y_i^\alpha+S_{33}f},$$
$$y_i^\rho=f\frac{Y_i^\rho}{Z_i^\rho}=f\frac{S_{21}X_i^c+S_{22}Y_i^c+S_{23}Z_i^c}{S_{31}X_i^c+S_{32}Y_i^c+S_{33}Z_i^c}=f\frac{S_{21}x_i^\alpha+S_{22}y_i^\alpha+S_{23}f}{S_{31}x_i^\alpha+S_{32}y_i^\alpha+S_{33}f} \qquad (25)$$
To finally solve for $M$, the matrix $S^{-1}$ is obtained as
$$S^{-1}=\begin{pmatrix}S_{11}&S_{12}&S_{13}&0\\S_{21}&S_{22}&S_{23}&0\\S_{31}&S_{32}&S_{33}&0\\0&0&0&1\end{pmatrix} \qquad (26)$$
where:
$$S_{11}=\cos\phi\cos\rho+\sin\phi\sin\omega\sin\rho,\quad S_{12}=-\cos\phi\sin\rho+\sin\phi\sin\omega\cos\rho,\quad S_{13}=\sin\phi\cos\omega,$$
$$S_{21}=\cos\omega\sin\rho,\quad S_{22}=\cos\omega\cos\rho,\quad S_{23}=-\sin\omega,$$
$$S_{31}=-\sin\phi\cos\rho+\cos\phi\sin\omega\sin\rho,\quad S_{32}=\sin\phi\sin\rho+\cos\phi\sin\omega\cos\rho,\quad S_{33}=\cos\phi\cos\omega$$
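A sketch of formula (24) and the resulting rotation S (the function name is an assumption; `arctan2` is used for the ρ branch so that p1 lands on the positive rather than the negative x-axis, the branch choice the text makes implicitly):

```python
import numpy as np

def ideal_rotation(x0a, y0a, x1a, y1a, f):
    """Formula (24): rotation S = R_rho @ R_omega @ R_phi that sends the image
    of P0 to the image origin and the image of P1 onto the positive x-axis."""
    phi = np.arctan(x0a / f)
    omega = np.arctan(-y0a * np.cos(phi) / f)
    rho = np.arctan2(y1a * np.cos(omega) + x1a * np.sin(phi) * np.sin(omega)
                     + f * np.cos(phi) * np.sin(omega),
                     x1a * np.cos(phi) - f * np.sin(phi))
    R_phi = np.array([[np.cos(phi), 0, -np.sin(phi)],
                      [0, 1, 0],
                      [np.sin(phi), 0, np.cos(phi)]])
    R_omega = np.array([[1, 0, 0],
                        [0, np.cos(omega), np.sin(omega)],
                        [0, -np.sin(omega), np.cos(omega)]])
    R_rho = np.array([[np.cos(rho), np.sin(rho), 0],
                      [-np.sin(rho), np.cos(rho), 0],
                      [0, 0, 1]])
    return R_rho @ R_omega @ R_phi
```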
(3) Solving for $C$
The matrix $C$ represents the relative transformation between the conventional coordinate system and the ideal coordinate system, $W_i^\rho=C\,W_i^\alpha$. According to the perspective projection model,
$$x_i^\rho=f\frac{X_i^\rho}{Z_i^\rho}=f\frac{C_{11}X_i^\alpha+C_{12}Y_i^\alpha+C_{13}Z_i^\alpha+C_{14}}{C_{31}X_i^\alpha+C_{32}Y_i^\alpha+C_{33}Z_i^\alpha+C_{34}},\qquad y_i^\rho=f\frac{Y_i^\rho}{Z_i^\rho}=f\frac{C_{21}X_i^\alpha+C_{22}Y_i^\alpha+C_{23}Z_i^\alpha+C_{24}}{C_{31}X_i^\alpha+C_{32}Y_i^\alpha+C_{33}Z_i^\alpha+C_{34}}$$
After the $T$ and $S$ transformations, the following relations hold:
$$X_0^\alpha=Y_0^\alpha=Z_0^\alpha=Y_1^\alpha=Z_1^\alpha=Z_2^\alpha=0,\qquad x_0^\rho=y_0^\rho=y_1^\rho=0 \qquad (27)$$
From $x_0^\rho=0$ it follows that $X_0^\rho=0$, so
$$C_{11}X_0^\alpha+C_{12}Y_0^\alpha+C_{13}Z_0^\alpha+C_{14}=0$$
Since $X_0^\alpha$, $Y_0^\alpha$, $Z_0^\alpha$ all equal 0, $C_{14}=0$.
Similarly, from $y_0^\rho=0$, $C_{24}=0$.
$$x_1^\rho=f\frac{X_1^\rho}{Z_1^\rho}=f\frac{C_{11}X_1^\alpha+C_{12}Y_1^\alpha+C_{13}Z_1^\alpha+C_{14}}{C_{31}X_1^\alpha+C_{32}Y_1^\alpha+C_{33}Z_1^\alpha+C_{34}}=f\frac{C_{11}X_1^\alpha+C_{14}}{C_{31}X_1^\alpha+C_{34}}$$
Rearranging gives:
$$fX_1^\alpha C_{11}-x_1^\rho X_1^\alpha C_{31}-x_1^\rho C_{34}=0 \qquad (28)$$
From $y_1^\rho=0$ it follows that $Y_1^\rho=0$, so
$$C_{21}X_1^\alpha+C_{22}Y_1^\alpha+C_{23}Z_1^\alpha+C_{24}=0$$
Since $Y_1^\alpha$, $Z_1^\alpha$ and $C_{24}$ all equal 0, $C_{21}=0$.
$$x_2^\rho=f\frac{X_2^\rho}{Z_2^\rho}=f\frac{C_{11}X_2^\alpha+C_{12}Y_2^\alpha+C_{13}Z_2^\alpha+C_{14}}{C_{31}X_2^\alpha+C_{32}Y_2^\alpha+C_{33}Z_2^\alpha+C_{34}}=f\frac{C_{11}X_2^\alpha+C_{12}Y_2^\alpha}{C_{31}X_2^\alpha+C_{32}Y_2^\alpha+C_{34}}$$
$$fX_2^\alpha C_{11}+fY_2^\alpha C_{12}-x_2^\rho X_2^\alpha C_{31}-x_2^\rho Y_2^\alpha C_{32}-x_2^\rho C_{34}=0 \qquad (29)$$
Similarly, from $y_2^\rho=fY_2^\rho/Z_2^\rho$ the equation
$$fY_2^\alpha C_{22}-y_2^\rho X_2^\alpha C_{31}-y_2^\rho Y_2^\alpha C_{32}-y_2^\rho C_{34}=0 \qquad (30)$$
can be constructed.
From the properties of a unit orthogonal matrix:
$$C_{11}^2+C_{31}^2=1,\qquad C_{12}^2+C_{22}^2+C_{32}^2=1,\qquad C_{11}C_{12}+C_{32}C_{31}=0 \qquad (31)$$
A closed-form solution for $C$ cannot be obtained from three points alone; the information of the fourth point $P_3$ must be introduced. Since $P_0$, $P_1$, $P_2$, $P_3$ are coplanar, $Z_3^\alpha=0$, and two more equations can be constructed.
$$x_3^\rho=f\frac{X_3^\rho}{Z_3^\rho}=f\frac{C_{11}X_3^\alpha+C_{12}Y_3^\alpha+C_{13}Z_3^\alpha+C_{14}}{C_{31}X_3^\alpha+C_{32}Y_3^\alpha+C_{33}Z_3^\alpha+C_{34}}=f\frac{C_{11}X_3^\alpha+C_{12}Y_3^\alpha}{C_{31}X_3^\alpha+C_{32}Y_3^\alpha+C_{34}}$$
$$fX_3^\alpha C_{11}+fY_3^\alpha C_{12}-x_3^\rho X_3^\alpha C_{31}-x_3^\rho Y_3^\alpha C_{32}-x_3^\rho C_{34}=0 \qquad (32)$$
Similarly, from $y_3^\rho$ the equation
$$fY_3^\alpha C_{22}-y_3^\rho X_3^\alpha C_{31}-y_3^\rho Y_3^\alpha C_{32}-y_3^\rho C_{34}=0 \qquad (33)$$
can be constructed.
Solving formulas (28)-(33) simultaneously gives the following results:
$$C_{14}=C_{24}=C_{21}=0 \qquad (34)$$
$$r=\frac{fX_1^\alpha B_3-x_1^\rho B_1}{x_1^\rho(X_1^\alpha B_3-B_2)},\qquad C_{11}=\pm\frac{1}{\sqrt{1+r^2}},\qquad C_{31}=rC_{11} \qquad (35)$$
$$C_{34}=\frac{fX_1^\alpha C_{11}-x_1^\rho X_1^\alpha C_{31}}{x_1^\rho}=\frac{C_{11}(fX_1^\alpha-rx_1^\rho X_1^\alpha)}{x_1^\rho} \qquad (36)$$
where:
$$B_1=\frac{fY_2^\alpha Y_3^\alpha(y_2^\rho-y_3^\rho)}{X_2^\alpha Y_3^\alpha-X_3^\alpha Y_2^\alpha},\qquad B_2=\frac{Y_2^\alpha Y_3^\alpha(y_2^\rho x_3^\rho-y_3^\rho x_2^\rho)}{X_2^\alpha Y_3^\alpha-X_3^\alpha Y_2^\alpha},\qquad B_3=\frac{Y_2^\alpha Y_3^\alpha(y_2^\rho x_3^\rho-y_3^\rho x_2^\rho)}{Y_3^\alpha-Y_2^\alpha}$$
Since $C_{34}$ represents the distance from the camera optical center to the origin of the conventional coordinate system, $C_{34}>0$. The sign of $C_{11}$ is therefore chosen as follows: if $(fX_1^\alpha-rx_1^\rho X_1^\alpha)/x_1^\rho>0$, $C_{11}>0$; if $(fX_1^\alpha-rx_1^\rho X_1^\alpha)/x_1^\rho<0$, $C_{11}<0$. At the same time:
$$C_{32}=\frac{(Y_2^\alpha X_3^\alpha y_3^\rho-X_2^\alpha Y_3^\alpha y_2^\rho)C_{31}+(Y_2^\alpha y_3^\rho-Y_3^\alpha y_2^\rho)C_{34}}{Y_2^\alpha Y_3^\alpha(y_2^\rho-y_3^\rho)},$$
$$C_{22}=\frac{X_3^\alpha y_3^\rho C_{31}+Y_3^\alpha y_3^\rho C_{32}+y_3^\rho C_{34}}{fY_3^\alpha},\qquad C_{12}=\frac{X_2^\alpha x_2^\rho C_{31}+Y_2^\alpha x_2^\rho C_{32}+x_2^\rho C_{34}-fX_2^\alpha C_{11}}{fY_2^\alpha} \qquad (37)$$
Since $C$ is a unit orthogonal matrix, the following equations hold:
$$C_{13}=C_{21}C_{32}-C_{31}C_{22},\qquad C_{23}=C_{12}C_{31}-C_{11}C_{32},\qquad C_{33}=C_{11}C_{22}-C_{12}C_{21} \qquad (38)$$
At this point all elements of $C$ are determined.
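Relation (38) is the statement that, for a proper rotation matrix, the third column is the cross product of the first two; a quick numerical check against an arbitrary rotation matrix (the angles below are arbitrary test values):

```python
import numpy as np

# Any proper rotation serves to check formula (38): with 1-based indices,
# C13 = C21*C32 - C31*C22, C23 = C12*C31 - C11*C32, C33 = C11*C22 - C12*C21.
a, b = 0.3, 0.5
Rz = np.array([[np.cos(a), np.sin(a), 0],
               [-np.sin(a), np.cos(a), 0],
               [0, 0, 1]])
Ry = np.array([[np.cos(b), 0, -np.sin(b)],
               [0, 1, 0],
               [np.sin(b), 0, np.cos(b)]])
C = Rz @ Ry
col3 = np.array([C[1, 0] * C[2, 1] - C[2, 0] * C[1, 1],
                 C[0, 1] * C[2, 0] - C[0, 0] * C[2, 1],
                 C[0, 0] * C[1, 1] - C[0, 1] * C[1, 0]])
```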
Thus all three transformation matrices $T$, $S$, $C$ are determined; $M$ is obtained from formula (15), and the rotation matrix $R$ and translation vector $t$ are determined as well. $M$ represents the pose transformation between the bullet and the camera coordinate system, and its rotation part $R$ should be orthogonal. However, errors in the feature-point coordinates during the solving process easily cause $M$ to violate orthogonality, so the rotation part $R$ of the matrix is further optimized on this basis.
Since $W_i^c=MW_i$, with
$$M=\begin{pmatrix}m_1&m_2&m_3&m_{10}\\m_4&m_5&m_6&m_{11}\\m_7&m_8&m_9&m_{12}\\0&0&0&1\end{pmatrix}$$
rearranging gives:
$$\begin{pmatrix}X_1^c-m_{10}&X_2^c-m_{10}&X_3^c-m_{10}&X_4^c-m_{10}\\Y_1^c-m_{11}&Y_2^c-m_{11}&Y_3^c-m_{11}&Y_4^c-m_{11}\\Z_1^c-m_{12}&Z_2^c-m_{12}&Z_3^c-m_{12}&Z_4^c-m_{12}\end{pmatrix}=R\begin{pmatrix}X_1&X_2&X_3&X_4\\Y_1&Y_2&Y_3&Y_4\\Z_1&Z_2&Z_3&Z_4\end{pmatrix}$$
This is abbreviated as $D=RE$, and the optimization minimizes $\|RE-D\|$, where $E=[E_1\ E_2\ E_3\ E_4]$ and $D=[D_1\ D_2\ D_3\ D_4]$, under the constraint that $R$ is an orthogonal matrix.
Define the symmetric $4\times4$ matrix $B$:
$$B=\sum_{i=1}^{4}B_i^TB_i$$
where
$$B_i=\begin{pmatrix}0&(E_i-D_i)^T\\(D_i-E_i)&[D_i+E_i]_\times\end{pmatrix},\qquad [(x,y,z)]_\times=\begin{pmatrix}0&-z&y\\z&0&-x\\-y&x&0\end{pmatrix}$$
Let $r=(r_0,r_1,r_2,r_3)^T$ denote the eigenvector of $B$ corresponding to its minimal eigenvalue. The solution of the rotation matrix $R$ is
$$R=\begin{pmatrix}r_0^2+r_1^2-r_2^2-r_3^2&2(r_1r_2-r_0r_3)&2(r_1r_3+r_0r_2)\\2(r_1r_2+r_0r_3)&r_0^2-r_1^2+r_2^2-r_3^2&2(r_2r_3-r_0r_1)\\2(r_1r_3-r_0r_2)&2(r_2r_3+r_0r_1)&r_0^2-r_1^2-r_2^2+r_3^2\end{pmatrix}$$
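This quaternion-based eigenvector step can be sketched with numpy (the function name is an assumption; note that the skew term uses the sum $D_i+E_i$, consistent with the quaternion identity behind this construction):

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def fit_rotation(E, D):
    """Find the orthogonal R minimizing ||R E - D||: columns of E are the
    bullet-frame points, columns of D the camera-frame points with the
    translation (m10, m11, m12) already subtracted."""
    B = np.zeros((4, 4))
    for Ei, Di in zip(E.T, D.T):
        Bi = np.zeros((4, 4))
        Bi[0, 1:] = Ei - Di
        Bi[1:, 0] = Di - Ei
        Bi[1:, 1:] = skew(Di + Ei)      # skew of the sum D_i + E_i
        B += Bi.T @ Bi
    w, V = np.linalg.eigh(B)            # eigenvalues in ascending order
    r0, r1, r2, r3 = V[:, 0]            # eigenvector of the minimal eigenvalue
    return np.array([
        [r0**2 + r1**2 - r2**2 - r3**2, 2*(r1*r2 - r0*r3), 2*(r1*r3 + r0*r2)],
        [2*(r1*r2 + r0*r3), r0**2 - r1**2 + r2**2 - r3**2, 2*(r2*r3 - r0*r1)],
        [2*(r1*r3 - r0*r2), 2*(r2*r3 + r0*r1), r0**2 - r1**2 - r2**2 + r3**2]])
```

With exact data the minimal eigenvalue is zero and the original rotation is recovered; with noisy feature points the eigenvector gives the orthogonal matrix closest to the data in the least-squares sense.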
Next, the bundle adjustment iteration method is used to further refine the results for the rotation matrix $R$ and the translation vector $t$, yielding a more accurate solution. Bundle adjustment is a technique well known in the art and is not detailed here.
It should be appreciated that the above description is one particular embodiment of the present invention; the invention is not limited to the specific structures illustrated or described above, and the claims cover all variations within the spirit and scope of the invention.

Claims (5)

1. A bullet target pattern, characterized in that the target pattern is arranged as a belt-like region around the outer circumferential surface of the bullet and is divided into 6 subregions along the bullet axis; the subregions are set in two colors with adjacent subregions differing in color, and both colors contrast clearly with the color of the bullet surface;
The sides of each subregion are coplanar with the bullet axis, and the top and bottom edges of each subregion lie on circles of cross-sections perpendicular to the bullet axis;
Inside each subregion, regions of comparatively small area are provided as coding regions, whose color contrasts clearly with the color of the subregion in which they lie; the coding regions may be square, circular, or any other shape that is easy to recognize;
And subregions of the same color contain different numbers of coding regions;
The 4 vertices of each subregion serve as the feature points of the target pattern.
2. The bullet target pattern according to claim 1, characterized in that the subregions and the coding regions are colored black and white respectively, and each subregion contains 0 to 2 coding regions.
3. A bullet monocular video pose measurement method applying the target pattern according to claim 1 or 2, characterized by comprising the following steps:
A camera is mounted on an adjustable base and directed toward the bullet bearing the target pattern, with the angle between the camera optical axis and the bullet axis in the range of 30° to 150°, so that the camera can photograph the side of the bullet and capture 4 or more coplanar feature points of the target pattern;
The resolution of the camera is 640 × 480 pixels or more, and its field angle is 25° to 45°;
The camera photographs images of the bullet and sends the acquired image data to a processor for solving the pose parameters of the bullet;
A unique recognition code is set for each subregion according to the color of the subregion of the target pattern and the number of coding regions contained in the subregion;
The subregion is identified according to the code, and the feature points are then identified according to their distribution positions on the subregion;
When solving the bullet pose, the image coordinates of the feature points of the target pattern are first extracted and identified from the camera image; then, combined with the coordinates of the corresponding feature points in the bullet coordinate system, coordinate system transformations are performed, systems of equations are established, and the relative pose of the bullet is obtained by solving them jointly;
A bullet coordinate system, a camera coordinate system and an image coordinate system are established, and the coordinates of the feature points in the bullet coordinate system are determined; the coordinates of the feature points in the image coordinate system are extracted by the following steps:
(1) the bullet imaging region is detected by background subtraction;
(2) in the bullet imaging region, the image of the target pattern is segmented by thresholding, connected-component analysis is performed on the segmented image, the connected component with the largest pixel count is taken as a subregion of the target pattern, and the subregion code is constructed from the threshold used in segmentation and the number of small connected regions contained in this component, so as to identify the subregion;
(3) the contour of this connected component is extracted; the straight-line points formed by the points on the sides of the subregion are extracted from the contour with the Hough transform and two straight lines are fitted; the curve points formed by the points on the top and bottom edges of the subregion are separated from the contour and elliptic curves are fitted; the intersections of the straight lines with the elliptic curves are computed to obtain the subregion vertices as the coordinates of the target pattern feature points in the image coordinate system;
(4) after the subregion is identified and the feature-point image coordinates are obtained, the feature points are identified according to their distribution positions on the subregion;
The corresponding coordinates of the feature points in the bullet coordinate system are known quantities; using the coordinates of the feature points in the bullet coordinate system and the coordinates in the corresponding image coordinate system, the pose of the bullet relative to the camera is calculated through coordinate system transformations.
4. The bullet monocular video pose measurement method according to claim 3, characterized in that a unique three-bit binary code is set for each subregion; the first bit of the code represents the subregion color, and the second and third bits represent the number of coding regions contained in the subregion.
5. The bullet monocular video pose measurement method according to claim 3 or 4, characterized in that, in situations where the pose of the bullet varies over a large range, two or more cameras and corresponding image processing units are provided, forming two or more monocular video measurement systems; and the positions and orientations of the cameras are adjusted so that at least one camera can photograph the side of the bullet throughout the attitude change process of the bullet.
CN201310464872.9A 2013-10-08 A kind of shot monocular video pose measurement method and target pattern Active CN103512559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310464872.9A CN103512559B (en) 2013-10-08 A kind of shot monocular video pose measurement method and target pattern


Publications (2)

Publication Number Publication Date
CN103512559A true CN103512559A (en) 2014-01-15
CN103512559B CN103512559B (en) 2016-11-30


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103925909A (en) * 2014-03-28 2014-07-16 中国科学院长春光学精密机械与物理研究所 Method and device for measuring position of chamber-opening point by two high-speed cameras
CN104517291A (en) * 2014-12-15 2015-04-15 大连理工大学 Pose measuring method based on coaxial circle characteristics of target
CN104865594A (en) * 2015-04-28 2015-08-26 中国白城兵器试验中心 Acoustic vibration wave bullet landing point detecting device
CN105444982A (en) * 2015-11-24 2016-03-30 中国空气动力研究与发展中心高速空气动力研究所 Monocular video measurement method for external store separating locus wind tunnel test
CN107421509A (en) * 2017-07-10 2017-12-01 同济大学 A kind of high-speed video measuring method of reticulated shell type Approaches for Progressive Collapse of Structures
CN108090931A (en) * 2017-12-13 2018-05-29 中国科学院光电技术研究所 It is a kind of that jamproof marker identification and pose measuring method are blocked based on circle and the anti-of cross characteristics combination
CN109883536A (en) * 2019-01-29 2019-06-14 北京理工大学 A kind of three wave point continuous capturing method of shock wave
CN112179210A (en) * 2020-08-31 2021-01-05 河北汉光重工有限责任公司 Method for correcting shot hit deviation of naval gun

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7392153B2 (en) * 2004-08-25 2008-06-24 Microsoft Corporation Relative range camera calibration
CN101261738A (en) * 2008-03-28 2008-09-10 北京航空航天大学 A camera marking method based on double 1-dimension drone
CN101608920A (en) * 2008-06-18 2009-12-23 中国科学院国家天文台 A kind of combined type spatial pose precisely and dynamically measuring device and method
CN102692214A (en) * 2012-06-11 2012-09-26 北京航空航天大学 Narrow space binocular vision measuring and positioning device and method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WEI SUN: "Binocular Vision-based Position Determination Algorithm and System", 2012 International Conference on Computer Distributed Control and Intelligent Environmental Monitoring *
LI Meng et al.: "Curved-surface markers for cone pose measurement", Journal of Image and Graphics *
RUAN Lifeng et al.: "A 3D pose measurement method based on marker point recognition", Journal of Computer Applications *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103925909A (en) * 2014-03-28 2014-07-16 中国科学院长春光学精密机械与物理研究所 Method and device for measuring position of chamber-opening point by two high-speed cameras
CN103925909B (en) * 2014-03-28 2016-06-15 中国科学院长春光学精密机械与物理研究所 Method and device for measuring position of chamber-opening point by two high-speed cameras
CN104517291B (en) * 2014-12-15 2017-08-01 大连理工大学 Pose measuring method based on target coaxial circles feature
CN104517291A (en) * 2014-12-15 2015-04-15 大连理工大学 Pose measuring method based on coaxial circle characteristics of target
CN104865594A (en) * 2015-04-28 2015-08-26 中国白城兵器试验中心 Acoustic vibration wave bullet landing point detecting device
CN104865594B (en) * 2015-04-28 2017-06-20 中国白城兵器试验中心 Acoustic shock wave bullet landing point detection device
CN105444982A (en) * 2015-11-24 2016-03-30 中国空气动力研究与发展中心高速空气动力研究所 Monocular video measurement method for external store separating locus wind tunnel test
CN105444982B (en) * 2015-11-24 2017-12-19 中国空气动力研究与发展中心高速空气动力研究所 Monocular video measurement method for external store separation trajectory wind tunnel tests
CN107421509A (en) * 2017-07-10 2017-12-01 同济大学 High-speed video measurement method for progressive collapse of reticulated shell structures
CN107421509B (en) * 2017-07-10 2019-08-02 同济大学 High-speed video measurement method for progressive collapse of reticulated shell structures
CN108090931A (en) * 2017-12-13 2018-05-29 中国科学院光电技术研究所 Anti-occlusion, interference-resistant marker identification and pose measurement method based on combined circle and cross features
CN109883536A (en) * 2019-01-29 2019-06-14 北京理工大学 Continuous capture method for the shock wave triple point
CN112179210A (en) * 2020-08-31 2021-01-05 河北汉光重工有限责任公司 Method for correcting shot hit deviation of naval gun
CN112179210B (en) * 2020-08-31 2022-09-02 河北汉光重工有限责任公司 Method for correcting shot hit deviation of naval gun

Similar Documents

Publication Publication Date Title
CN103745458B (en) A kind of space target rotating axle based on binocular light flow of robust and mass center estimation method
Lemmens A survey on stereo matching techniques
Bazin et al. Motion estimation by decoupling rotation and translation in catadioptric vision
CN109711321B (en) Structure-adaptive wide baseline image view angle invariant linear feature matching method
EP2887315B1 (en) Camera calibration device, method for implementing calibration, program and camera for movable body
US10043279B1 (en) Robust detection and classification of body parts in a depth map
CN107424196B (en) Stereo matching method, device and system based on weak calibration multi-view camera
CN103400366A (en) Method for acquiring dynamic scene depth based on fringe structure light
CN103093479A (en) Target positioning method based on binocular vision
CN102971604A (en) System and related method for determining vehicle wheel alignment
CN102506758A (en) Object surface three-dimensional morphology multi-sensor flexible dynamic vision measurement system and method
CN111145232A (en) Three-dimensional point cloud automatic registration method based on characteristic information change degree
CN110415304B (en) Vision calibration method and system
Lim et al. Stereo correspondence: A hierarchical approach
CN105184786A (en) Floating-point-based triangle characteristic description method
Boroson et al. 3D keypoint repeatability for heterogeneous multi-robot SLAM
Holz et al. Registration of non-uniform density 3D point clouds using approximate surface reconstruction
CN103400377B (en) A kind of three-dimensional circular target based on binocular stereo vision detects and determination methods
Budge et al. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation
CN103512559A (en) Shot monocular video pose measurement method and target pattern
CN103512559B (en) Shot monocular video pose measurement method and target pattern
Abdellali et al. A direct least-squares solution to multi-view absolute and relative pose from 2D-3D perspective line pairs
CN111986248B (en) Multi-vision sensing method and device and automatic driving automobile
Ly et al. Scale invariant line matching on the sphere
CN109325974B (en) Method for obtaining length of three-dimensional curve by adopting image processing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant