CN102818561A - Method for measuring motion parameters of projectile in shooting range based on digital slit shooting technology


Info

Publication number
CN102818561A
CN102818561A (application CN201210235646A)
Authority
CN
China
Prior art keywords
prime
bullet
camera
imaging
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102356469A
Other languages
Chinese (zh)
Inventor
文贡坚 (Wen Gongjian)
赵竹新 (Zhao Zhuxin)
回丙伟 (Hui Bingwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN2012102356469A
Publication of CN102818561A
Legal status: Pending

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method for measuring the motion parameters of a projectile in a shooting range based on digital slit photography. The technical scheme comprises three steps: first, arranging a stereo digital slit measuring system; second, determining the imaging model of the stereo line-scan camera; and finally, measuring the motion parameters of the projectile. The second step comprises: (1) determining the interior and exterior orientation elements of the front camera and the bottom camera respectively; and (2) determining the imaging model expressions of the front camera and the bottom camera respectively. The third step comprises: (1) selecting feature points and calculating an initial estimate of the projectile motion parameters; and (2) constructing a motion-parameter optimization model based on a three-dimensional digital model of the projectile and calculating the optimized estimate of the motion parameters. The method achieves high-precision measurement of range motion parameters such as projectile velocity, attitude, and angle of attack, and realizes the digitization and automation of range slit photography.

Description

Method for measuring projectile motion parameters in a shooting range based on digital slit photography
Technical field
The present invention relates to the fields of photogrammetry and image processing, and in particular to a method for measuring the motion parameters of a projectile in a shooting range based on digital slit photography.
Background art
Range measurement of gun-launched weapons (shells, missiles, etc.) provides an important basis for evaluating and improving weapon performance. The attitude, velocity, angle of attack, and similar parameters of a projectile after it leaves the muzzle are the principal factors affecting its range and firing accuracy, so accurate measurement of these parameters is a main task of range measurement.
Compared with electromagnetic measurement, optical measurement offers high accuracy, intuitive and effective data, and strong resistance to interference, and therefore holds an irreplaceable position in the measurement and testing of high-speed range targets such as projectiles. Two optical approaches are currently used to analyze projectile motion. The first is frame imaging: high-speed imaging with a frame camera yields a sequence of images of the moving projectile, and parameters such as velocity are calculated from the distance the projectile travels between frames. The second is slit imaging: the scanning exposure of a slit camera produces a slit image of the moving projectile that simultaneously records temporal and spatial information, from which the motion parameters are calculated. Frame imaging was developed earlier, but its imaging principle makes it difficult for such cameras to measure the velocity direction and the angle of attack, which greatly weakens its value for projectile parameter measurement. Slit imaging, while recording the velocity and attitude of the projectile, also reflects well the variation of the angle of attack after the projectile leaves the muzzle, and has therefore become an important means of measuring and testing high-speed targets such as projectiles in conventional ranges.
At present, measuring projectile motion parameters by slit imaging faces several technical bottlenecks that are hard to overcome. First, slit photography requires the projectile velocity and the scanning velocity of the slit camera to satisfy the trajectory-synchronization condition; since an accurate projectile velocity cannot be obtained in advance, this condition is unreachable in actual imaging. Second, to obtain three-dimensional information about the motion parameters, stereo orthogonal intersection measurement is often adopted, but current equipment layout methods are backward and inconvenient to operate, and the accuracy of the spatial relationships is hard to guarantee. Third, only a limited number of feature points on the projectile image are used to calculate the motion parameters, and the uncertainty in selecting these points can cause large measurement errors.
With the development of digital technology, attempts have been made to replace the traditional film-based slit camera with a line-scan camera that uses a linear-array sensor as its imaging device. In the present invention, the technique of acquiring projectile images with a line-scan camera and measuring projectile motion parameters from them is called digital slit photography. Besides the difficulties inherent in traditional slit photography, research on this technique faces a further problem: because of the limited scanning frequency of line-scan cameras, the trajectory-synchronization condition is far out of reach.
To resolve these difficulties and realize the digitization of slit photography, the present invention proposes a method for measuring projectile motion parameters in a shooting range based on digital slit photography.
Summary of the invention
The technical problem to be solved by the present invention is the high-precision measurement of range projectile motion parameters such as velocity, attitude, and angle of attack. The purpose of the invention is to provide a method for measuring projectile motion parameters in a shooting range based on digital slit photography; through the stereo digital slit measuring system designed here and a motion-parameter measurement method based on a three-dimensional digital model of the projectile, the digitization and automation of range slit photography are realized.
The technical scheme of the present invention is described with reference to Fig. 1:
First step: arrange the stereo digital slit measuring system.
A mirror assembly made from a rectangular flat mirror is placed under the trajectory near the muzzle, 2-3 m from where the projectile leaves the bore, with its mirror surface at 45° to the ground. At the same time, a line-scan camera is placed 5-10 m from the mirror assembly, positioned so that its principal optical axis strikes the assembly's flat mirror at roughly 45°. The focal length of the line-scan camera is adjusted so that the projectile flying above the mirror assembly can be observed both by the placed line-scan camera (hereinafter the front camera) and in its mirror image (hereinafter the bottom camera). The front camera and the bottom camera together form the stereo line-scan camera of the stereo digital slit measuring system. A single image acquired by the stereo line-scan camera contains both the front-camera image and the bottom-camera image, recording the imaging of the projectile in two directions; it is called here the stereo line-scan image of the projectile.
The layout of the stereo digital slit measuring system is shown in Fig. 2.
Second step: determine the imaging model of the stereo line-scan camera.
To describe the spatial relationships of the stereo digital slit measuring system, three coordinate systems are defined:
Range coordinate system o_p-x_p y_p z_p: a reference point in the range is taken as the origin o_p; the direction of the projectile trajectory is the positive x_p axis; the direction perpendicular to the range ground and pointing upward is the positive z_p axis; the o_p-x_p y_p plane and the z_p axis form a right-handed system. The range coordinate system is the measurement coordinate system: the exterior orientation elements of the front and bottom cameras and the motion parameters of the projectile are all defined in it.
Front-camera image-space coordinate system o_f-x_f y_f z_f: the projection centre of the front camera is the origin o_f; the direction along the projectile flight and perpendicular to the front camera's linear-array sensor is the positive x_f axis; the downward direction along the linear-array sensor is the positive y_f axis; the direction of the principal optical axis toward the target is the positive z_f axis; the o_f-x_f y_f plane and the z_f axis form a right-handed system, as shown in Fig. 2.
Bottom-camera image-space coordinate system o_b-x_b y_b z_b: by the principle of mirror symmetry, the bottom-camera image-space coordinate system is the mirror image of the front-camera image-space coordinate system.
The imaging model of the stereo line-scan camera comprises the imaging model of the front camera and that of the bottom camera. The concrete steps for determining it are as follows:
Step 1: determine the interior and exterior orientation elements of the front camera and the bottom camera respectively.
The front camera is calibrated to obtain its interior and exterior orientation elements. The interior orientation elements of the front camera comprise its focal length f and principal-point offset y_0; the exterior orientation elements comprise its position parameters (the coordinates o_f(X_S, Y_S, Z_S) of the projection centre) and its attitude parameters (the rotation angles φ_f, ω_f, κ_f between the axes of the front-camera image-space coordinate system and the range coordinate system). The position coordinates of the mirror assembly in the range coordinate system are measured, and from its placement position and attitude the equation of the plane containing its flat mirror is determined in the range coordinate system.
The interior orientation elements of the bottom camera are identical to those of the front camera. From the mirror-symmetry relation between the front and bottom cameras, the exterior orientation elements of the bottom camera are calculated, comprising its position parameters (the coordinates o_b(X′_S, Y′_S, Z′_S) of the bottom-camera projection centre) and its attitude parameters (the rotation angles φ_b, ω_b, κ_b between the axes of the bottom-camera image-space coordinate system and the range coordinate system).
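The mirror-symmetry computation described above (deriving the bottom camera's exterior orientation from the calibrated front camera and the mirror plane) can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the function names and the row-vector convention for the rotation matrix R are assumptions:

```python
import numpy as np

def reflect_point(p, q, n):
    """Reflect point p across the plane through q with unit normal n."""
    n = n / np.linalg.norm(n)
    return p - 2.0 * np.dot(p - q, n) * n

def mirror_camera(center, R, q, n):
    """Derive the virtual 'bottom' camera from the front camera.
    center: front-camera projection centre in range coordinates;
    R: 3x3 matrix whose rows are the front-camera axis directions;
    q, n: a point on the flat mirror's plane and its normal."""
    n = n / np.linalg.norm(n)
    H = np.eye(3) - 2.0 * np.outer(n, n)    # Householder reflection matrix
    center_b = reflect_point(center, q, n)  # mirrored projection centre
    R_b = R @ H                             # each axis direction reflected
    return center_b, R_b
```

Reflecting twice across the same plane returns the original camera, which is a quick sanity check on the symmetry relation.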
Step 2: determine the imaging model expressions of the front camera and the bottom camera respectively.
Suppose P(X, Y, Z) is an arbitrary point in the range coordinate system lying on the o_f-y_f z_f plane. The imaging geometry between P(X, Y, Z) and its front-camera image point p(x, y), i.e. the imaging model of the front camera, is expressed as:
x = 0 = a_1(X − X_S) + b_1(Y − Y_S) + c_1(Z − Z_S)
y − y_0 = −f · [a_2(X − X_S) + b_2(Y − Y_S) + c_2(Z − Z_S)] / [a_3(X − X_S) + b_3(Y − Y_S) + c_3(Z − Z_S)]
where a_i, b_i, c_i (i = 1, 2, 3) are the nine rotation-matrix elements determined by the front-camera attitude parameters φ_f, ω_f, κ_f.
Similarly, the imaging geometry between P(X, Y, Z) and its bottom-camera image point p′(x, y), i.e. the imaging model of the bottom camera, is expressed as:
x = 0 = a′_1(X − X′_S) + b′_1(Y − Y′_S) + c′_1(Z − Z′_S)
y − y_0 = −f · [a′_2(X − X′_S) + b′_2(Y − Y′_S) + c′_2(Z − Z′_S)] / [a′_3(X − X′_S) + b′_3(Y − Y′_S) + c′_3(Z − Z′_S)]
where a′_i, b′_i, c′_i (i = 1, 2, 3) are the nine rotation-matrix elements determined by the bottom-camera attitude parameters φ_b, ω_b, κ_b, expressed in the same way as for the front camera.
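The two imaging expressions above can be sketched directly in code (an illustration, not the patent's implementation; R collects the nine elements a_i, b_i, c_i as rows):

```python
import numpy as np

def linescan_project(P, S, R, f, y0):
    """Line-scan ('slit') camera model of the text.
    P: object point in range coordinates; S: projection centre;
    R: rows (a1 b1 c1), (a2 b2 c2), (a3 b3 c3); f: focal length;
    y0: principal-point offset. Returns (scan_residual, y): the point
    actually images (x = 0) only when scan_residual == 0, and y is the
    image coordinate along the sensor line."""
    d = R @ (np.asarray(P, float) - np.asarray(S, float))
    scan_residual = d[0]        # a1(X-Xs) + b1(Y-Ys) + c1(Z-Zs)
    y = y0 - f * d[1] / d[2]    # y - y0 = -f * (num / den)
    return scan_residual, y
```

A point off the scan plane yields a nonzero residual, which is exactly the constraint that fixes the imaging instants t_1, t_2, t_3 later in the text.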
Third step: measure the motion parameters of the projectile.
When measuring the projectile motion parameters, the invention first selects feature points in the stereo line-scan image of the projectile and calculates an initial estimate of the motion parameters; it then refines this through the optimization model based on the three-dimensional digital model of the projectile, obtaining the optimized estimate of the motion parameters. The process divides into two steps:
Step 1: select feature points and calculate the initial estimate of the projectile motion parameters.
The image points a, a′ of the nose A and b, b′ of the tail B of the moving projectile in the stereo line-scan image are chosen as feature points (see Fig. 2). Let the velocity vector of the projectile be V(V_x, V_y, V_z). Taking the instant at which the front camera images the projectile nose as the initial time, let the spatial coordinates of the nose at this moment be A(X_a, Y_a, Z_a) and the axis vector of the projectile be L(L_x, L_y, L_z); the spatial coordinates of the tail at this moment are then B(X_a − L_x, Y_a − L_y, Z_a − L_z). The velocity, attitude, and position of the projectile can thus be described by the nine parameters V_x, V_y, V_z, X_a, Y_a, Z_a, L_x, L_y, L_z. Equations in these nine parameters are constructed from the imaging relations of the nose and tail in the stereo line-scan image.
Taking the moment at which the front camera images the projectile nose as time 0, the front-camera imaging model gives:
a_1(X_a − X_S) + b_1(Y_a − Y_S) + c_1(Z_a − Z_S) = 0 (equation 1)
y_a − y_0 = −f · [a_2(X_a − X_S) + b_2(Y_a − Y_S) + c_2(Z_a − Z_S)] / [a_3(X_a − X_S) + b_3(Y_a − Y_S) + c_3(Z_a − Z_S)] (equation 2)
where y_a is the coordinate of the nose image point in the y_f direction at this moment.
Let the moment at which the bottom camera images the projectile nose be t_1. The bottom-camera imaging model gives:
a′_1(X_a + V_x t_1 − X′_S) + b′_1(Y_a + V_y t_1 − Y′_S) + c′_1(Z_a + V_z t_1 − Z′_S) = 0 (equation 3)
y_a′ − y_0 = −f · [a′_2(X_a + V_x t_1 − X′_S) + b′_2(Y_a + V_y t_1 − Y′_S) + c′_2(Z_a + V_z t_1 − Z′_S)] / [a′_3(X_a + V_x t_1 − X′_S) + b′_3(Y_a + V_y t_1 − Y′_S) + c′_3(Z_a + V_z t_1 − Z′_S)] (equation 4)
where y_a′ is the coordinate of the nose image point in the y_b direction at this moment.
Let the moment at which the front camera images the projectile tail be t_2. The front-camera imaging model gives:
a_1(X_a − L_x + V_x t_2 − X_S) + b_1(Y_a − L_y + V_y t_2 − Y_S) + c_1(Z_a − L_z + V_z t_2 − Z_S) = 0 (equation 5)
y_b − y_0 = −f · [a_2(X_a − L_x + V_x t_2 − X_S) + b_2(Y_a − L_y + V_y t_2 − Y_S) + c_2(Z_a − L_z + V_z t_2 − Z_S)] / [a_3(X_a − L_x + V_x t_2 − X_S) + b_3(Y_a − L_y + V_y t_2 − Y_S) + c_3(Z_a − L_z + V_z t_2 − Z_S)] (equation 6)
where y_b is the coordinate of the tail image point in the y_f direction at this moment.
Let the moment at which the bottom camera images the projectile tail be t_3. The bottom-camera imaging model gives:
a′_1(X_a − L_x + V_x t_3 − X′_S) + b′_1(Y_a − L_y + V_y t_3 − Y′_S) + c′_1(Z_a − L_z + V_z t_3 − Z′_S) = 0 (equation 7)
y_b′ − y_0 = −f · [a′_2(X_a − L_x + V_x t_3 − X′_S) + b′_2(Y_a − L_y + V_y t_3 − Y′_S) + c′_2(Z_a − L_z + V_z t_3 − Z′_S)] / [a′_3(X_a − L_x + V_x t_3 − X′_S) + b′_3(Y_a − L_y + V_y t_3 − Y′_S) + c′_3(Z_a − L_z + V_z t_3 − Z′_S)] (equation 8)
where y_b′ is the coordinate of the tail image point in the y_b direction at this moment.
By counting the number of pixel rows separating the nose and tail image points along the x_f (respectively x_b) direction and combining this with the scanning frequency of the front camera, t_1, t_2, and t_3 are obtained. In this way, the four image points formed by the nose and tail in the stereo line-scan image yield eight equations in the projectile motion parameters.
In addition, the length L of the projectile is known, so:
L_x² + L_y² + L_z² = L² (equation 9)
In summary, nine equations in the nine projectile motion parameters are obtained. The first eight are linear; the last is nonlinear. Since L_x and V_x are both positive, these conditions serve as constraints that make the solution unique. Solving the nine equations yields the initial estimate of the projectile motion parameters.
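Numerically, the eight linear equations leave a one-parameter family of solutions, and the quadratic length constraint selects the member with L_x > 0. The sketch below illustrates that procedure under the assumption that the 8 × 9 coefficient matrix A and right-hand side b have already been assembled from equations 1-8 (the patent does not prescribe a particular solver):

```python
import numpy as np

def initial_estimate(A, b, L, idx=(6, 7, 8)):
    """Solve the 8 linear imaging equations A x = b, where
    x = [Xa, Ya, Za, Vx, Vy, Vz, Lx, Ly, Lz], together with the
    quadratic length constraint Lx^2 + Ly^2 + Lz^2 = L^2. The linear
    system is rank 8, so its solutions form a line x0 + t*n; the
    constraint gives a quadratic in t, and a root with Lx > 0 is kept."""
    x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # particular solution
    _, _, Vt = np.linalg.svd(A)
    n = Vt[-1]                                   # null-space direction
    la, na = x0[list(idx)], n[list(idx)]
    # (la + t*na).(la + t*na) = L^2  ->  qa*t^2 + qb*t + qc = 0
    qa = na @ na
    qb = 2.0 * la @ na
    qc = la @ la - L**2
    roots = np.roots([qa, qb, qc])
    for t in np.real(roots[np.isreal(roots)]):
        x = x0 + t * n
        if x[idx[0]] > 0:                        # Lx > 0 selects the root
            return x
    return x0                                    # fallback: no admissible root
```

The V_x > 0 constraint can be checked the same way on index 3 if both quadratic roots give L_x > 0.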
Step 2: construct the motion-parameter optimization model based on the three-dimensional digital model of the projectile, and calculate the optimized estimate of the motion parameters.
From the known shape data of the projectile, a three-dimensional digital model is built with the 3D MAX software. According to the imaging model of the stereo line-scan camera, the theoretical simulated imaging of this digital model under the initial estimate of the motion parameters is generated by simulation. By comparing the edge-gradient information of the theoretical simulated image with that of the projectile in the measured image taken by the stereo line-scan camera, an optimization model taking the projectile motion parameters as input is constructed. By repeatedly revising the input motion parameters (the initial input is the initial estimate), the theoretical simulated image of the digital model is brought into optimal registration with the projectile in the measured image, and the input values at that point are taken as the optimized estimate of the projectile motion parameters.
Beneficial effects of the invention: determining the imaging model of the stereo line-scan camera by line-scan camera calibration both guarantees the accuracy of the spatial relationships of the stereo digital slit measuring system and overcomes the inability of the line-scan camera, with its low scanning frequency, to achieve trajectory synchronization. Calculating the motion parameters through the optimization model based on the three-dimensional digital model reduces the instability of classical methods that rely on a limited number of feature points, and improves the precision of projectile motion-parameter measurement.
Description of drawings
Fig. 1 is a schematic flow chart of the method for measuring projectile motion parameters in a shooting range based on digital slit photography provided by the invention;
Fig. 2 is a schematic diagram of the stereo digital slit measuring system of the invention;
Fig. 3 is a schematic diagram of the construction of the motion-parameter optimization model;
Fig. 4 shows the three-dimensional digital model of the projectile and its constituent triangular facets;
Fig. 5 is a schematic diagram of the theoretical simulated imaging of the three-dimensional digital model;
Fig. 6 is a schematic diagram of the theoretical simulated imaging at time t_n.
Embodiment
The present invention is further described below with reference to the drawings.
Fig. 1 is a schematic diagram of a specific embodiment of the method for measuring projectile motion parameters in a shooting range based on digital slit photography provided by the invention. As shown in the figure, the technical scheme comprises three steps: first, arrange the stereo digital slit measuring system; second, determine the imaging model of the stereo line-scan camera; third, measure the motion parameters of the projectile.
Two further points require explanation:
First point: calibration and imaging model of the linear-array sensor.
The invention uses a line-scan camera as the photogrammetric instrument. The line-scan camera uses a linear-array sensor as its image-acquisition component. A linear-array sensor can be regarded as a special case of an area-array sensor, so its imaging model can be extended from that of the area-array sensor; the imaging model contains the interior and exterior orientation elements of the sensor, which are obtained by calibrating the linear-array sensor. For the calibration and imaging model of linear-array sensors, see C. A. Luna, M. Mazo, et al., Calibration of Line-Scan Cameras, IEEE Transactions on Instrumentation and Measurement, 2010, 59(8), pp. 2185-2190.
Second point: solving the optimization model.
Step 2 of the third step of the invention constructs the motion-parameter optimization model based on the three-dimensional digital model of the projectile and calculates the optimized estimate of the motion parameters. With reference to Fig. 3, the detailed process is:
Step (1): theoretical simulated imaging.
The theoretical simulated image of the projectile's three-dimensional digital model is generated from the imaging model of the stereo line-scan camera, the digital model, and the initial estimate of the motion parameters, as follows:
1) From the known outline geometry of the projectile, the three-dimensional digital model is built in the 3D MAX modelling software, as shown on the left of Fig. 4.
2) The model data are saved and exported in 3DS file format. A model built in 3D MAX is stitched together from many triangular facets, as shown on the right of Fig. 4. All information on the three-dimensional structure of the projectile target, such as the target name, vertex coordinates, mapping points, polygon list, and surface materials, is contained in the 3DS file. A 3DS reader program is written to read the lists of all vertices, line segments, and the triangular facets they form.
3) Since a 3D MAX model is stitched from many small triangles, the intersection of the view plane with the digital model (taking the bottom camera as an example) is the set of intersection points of the cutting plane with the edges of those small triangles. As shown in Fig. 5, ΔABC is a small triangle cut by the plane; the intersections of the plane with edges AB and BC are P1 and P2 respectively.
At time t_n, the intersection points of the plane x_f = 0 with all the triangles of the digital model are P_i (i = 1, 2, 3, ...). Central projection maps them to image points whose y_f coordinates form the set [p1.y, p2.y, p3.y, ...]. After comparison, the maximum of the set is denoted nymax and the minimum nymin (see Fig. 6); connecting the maximum and minimum points yields the imaging band l_n, the simulated image for this exposure.
The shooting process of the stereo line-scan camera is simulated: each exposure yields one imaging band of the digital model, and once the digital model has passed completely through the imaging region, the theoretical simulated image of the whole projectile model is obtained.
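Sub-step 3) above — cutting the triangle mesh with the scan plane and taking the extremes of the section — can be sketched as follows. This is a simplified illustration (the central projection to image coordinates is omitted, so the y extremes of the section stand in for nymin and nymax):

```python
def slice_band(triangles, x_plane):
    """Intersect the scan plane x = x_plane with a triangle mesh and
    return the extent (ymin, ymax) of the cut: one simulated scan
    line l_n of the theoretical slit image."""
    ys = []
    for tri in triangles:                  # tri: three (x, y, z) vertices
        for i in range(3):
            p, q = tri[i], tri[(i + 1) % 3]
            dx = q[0] - p[0]
            if dx == 0:
                continue                   # edge parallel to the plane
            t = (x_plane - p[0]) / dx
            if 0.0 <= t <= 1.0:            # edge crosses the scan plane
                ys.append(p[1] + t * (q[1] - p[1]))
    if not ys:
        return None                        # model not in the scan plane
    return min(ys), max(ys)
```

Sweeping x_plane (or, equivalently, moving the model by V per scan period) and stacking the bands reproduces the full simulated image.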
Step (2): construct the optimization model.
The detailed process is:
1) Obtain the gradient maps of the actual and simulated projectile images.
Let (x_i, y_j), i = 1...N, j = 1...M be the N × M pixels of the actual projectile image I. Since the stereo line-scan image is composed of one-dimensional scan lines, its gradient map is obtained by taking the gradient magnitude along the one-dimensional direction. Denoting the gradient map by G:
G(x_i, y_j) = |I(x_i, y_{j+1}) − I(x_i, y_j)|, i = 1...N, j = 1...M−1
Let (x′_i, y′_j), i = 1...N, j = 1...M be the N × M pixels of the simulated image I′ of the three-dimensional digital model. Similarly, denoting its gradient map by G′:
G′(x′_i, y′_j) = |I′(x′_i, y′_{j+1}) − I′(x′_i, y′_j)|, i = 1...N, j = 1...M−1
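Both gradient maps are plain one-dimensional differences along the scan line; a minimal sketch (the helper name is an assumption):

```python
import numpy as np

def line_gradient(img):
    """One-dimensional gradient magnitude along the scan-line direction,
    as used for both G (measured image I) and G' (simulated image I'):
    G(x_i, y_j) = |I(x_i, y_{j+1}) - I(x_i, y_j)|."""
    img = np.asarray(img, dtype=float)
    return np.abs(np.diff(img, axis=1))    # shape N x (M - 1)
```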
2) Construct the optimization model for solving the motion parameters.
From the gradient maps, an energy function characterizing the match between the simulated and actual images is established:
ε = Σ_{i=1..N} Σ_{j=1..M−1} [G′(x′_i, y′_j) − G(x_i, y_j)]²
The optimization model built from the energy function is:
min ε = min Σ_{i=1..N} Σ_{j=1..M−1} [G′(x′_i, y′_j) − G(x_i, y_j)]²
The optimization model reduces the energy by varying the motion parameters of the digital model (written as the vector X = [X_a, Y_a, Z_a, V_x, V_y, V_z, L_x, L_y, L_z]). When the motion parameters minimize the energy, they are the solution of the model, i.e. the optimized estimate of the projectile motion parameters.
Step (3): solve the optimization model.
The Powell algorithm is used to solve the optimization model. The detailed process is:
1) Take the initial estimate of the motion parameters obtained from the feature points as the initial value X^(0) of the optimization model. Set a convergence threshold δ (its choice depends on the required measurement precision and on the acceptable amount of computation in the application; determining it is common knowledge to those skilled in the art). Set k = 1 and choose 9 linearly independent directions, usually the unit vectors:
d^(1,1), d^(1,2), ..., d^(1,9);
2) Set X^(k,0) = X^(k−1). Starting from X^(k,0), perform line searches on the energy function ε successively along the 9 direction vectors, obtaining X^(k,1), X^(k,2), ..., X^(k,9).
3) Starting from X^(k,9), perform a line search on ε along the direction d^(k,10) = X^(k,9) − X^(k,0), obtaining X^(k).
4) If ||X^(k) − X^(k−1)|| < δ, stop the iteration and output X^(k); otherwise set d^(k+1,j) = d^(k,j+1), j = 1, ..., 9, set k = k + 1, and return to 2).
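The four steps above are the classical Powell direction-set iteration. A minimal sketch using SciPy's implementation on a stand-in energy (evaluating the real ε requires rendering the digital model, so a quadratic bowl centred on known parameters is substituted purely for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the energy eps(X); the true version would simulate the
# slit image under parameters X and compare gradient maps.
def energy(X, X_true):
    return float(np.sum((X - X_true) ** 2))

# X = [Xa, Ya, Za, Vx, Vy, Vz, Lx, Ly, Lz] (values here are invented)
X_true = np.array([10.0, 0.5, -2.0, 300.0, 1.0, -3.0, 0.15, 0.01, 0.02])
X0 = X_true + 0.1                          # plays the initial estimate
res = minimize(energy, X0, args=(X_true,), method="Powell",
               options={"xtol": 1e-8, "ftol": 1e-10})
```

Powell's method is derivative-free, which suits this energy: ε is computed through a rendering step and has no convenient analytic gradient.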
The values of X_a, Y_a, Z_a, V_x, V_y, V_z, L_x, L_y, L_z reached by continually revising the initial estimate are the result of the optimization. Once the position, attitude, and velocity vector parameters of the projectile have been obtained, the magnitudes of its speed and angle of attack follow. Let the speed be v and the angle of attack be φ; then:
v = √(V_x² + V_y² + V_z²)
cos φ = (V_x L_x + V_y L_y + V_z L_z) / (v · L)
The projectile motion parameters in the present invention are obtained by solving the optimization model. For the Powell method, see Chen Baolin, Optimization Theory and Algorithms, Tsinghua University Press, 1989, pp. 420-422.

Claims (1)

1. the target range bullet movement parameter measurement method based on digital slit camera technique is characterized in that comprising the steps:
The first step, arrange three-dimensional digit slit measuring system:
After 2-3 rice bullet goes out thorax near the cannon mouth trajectory under place the double image device; The minute surface of double image device becomes 45o to place with ground; Place a linear array camera in position simultaneously, can both observe the bullet of the double image device top of flying in feasible linear array camera of placing and the double image device in the visual field of the picture of linear array camera apart from double image device 5-10 rice; The linear array camera is called preceding camera, the picture of linear array camera is called end camera, preceding camera and end camera have been formed the three-dimensional linear array camera in the three-dimensional digit slit measuring system, and the image that three-dimensional linear array camera obtains is called the three-dimensional linear array images of bullet;
Step 2: determining the imaging model of the stereo linear-array camera:
Define the following three coordinate systems:
Shooting-range coordinate system o_p-x_p y_p z_p: a reference point on the range is taken as the origin o_p; the direction of the projectile's ballistic path is the positive x_p axis; the upward direction perpendicular to the range ground is the positive z_p axis; the o_p-x_p y_p plane and the z_p axis form a right-handed system;
Front-camera image-space coordinate system o_f-x_f y_f z_f: the perspective center of the front camera is the origin o_f; the direction along the projectile flight path and perpendicular to the front camera's linear-array sensor is the positive x_f axis; the downward direction along the linear-array sensor is the positive y_f axis; the direction of the principal optical axis toward the target is the positive z_f axis; the o_f-x_f y_f plane and the z_f axis form a right-handed system;
Bottom-camera image-space coordinate system o_b-x_b y_b z_b: by the principle of mirror symmetry, the bottom-camera image-space coordinate system is the mirror image of the front-camera image-space coordinate system;
The specific steps for determining the imaging model of the stereo linear-array camera are as follows:
Sub-step 1: determining the interior and exterior orientation elements of the front camera and the bottom camera respectively;
Calibrate the front camera to obtain its interior and exterior orientation elements. The interior orientation elements of the front camera comprise the focal length f and the principal-point offset y_0; the exterior orientation elements comprise the position parameters, expressed by the perspective-center coordinates o_f(X_S, Y_S, Z_S), and the attitude parameters, expressed by the rotation angles φ_f, ω_f, κ_f between the coordinate axes of the front-camera image-space coordinate system and those of the shooting-range coordinate system. Measure the position of the mirror device in the shooting-range coordinate system, and, from the placement position and attitude of the device, determine the equation of the plane of its flat mirror in the shooting-range coordinate system;
The interior orientation elements of the bottom camera are identical to those of the front camera. From the mirror-symmetry relation between the front camera and the bottom camera, compute the exterior orientation elements of the bottom camera, comprising the position parameters, expressed by the perspective-center coordinates o_b(X'_S, Y'_S, Z'_S), and the attitude parameters, expressed by the rotation angles φ_b, ω_b, κ_b between the coordinate axes of the bottom-camera image-space coordinate system and those of the shooting-range coordinate system;
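The mirror-symmetry computation of the bottom camera's exterior orientation can be sketched as follows: the bottom camera's perspective center is the reflection of the front camera's center across the flat mirror's plane, and its axes are the reflections of the front camera's axes. This is an illustrative sketch, not the patent's implementation; `mirror_camera`, its arguments, and the row-vector axis convention are assumptions.

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect point p across the plane n.x = d (n need not be unit length)."""
    n = np.asarray(n, float)
    norm = np.linalg.norm(n)
    n, d = n / norm, d / norm
    p = np.asarray(p, float)
    return p - 2.0 * (np.dot(n, p) - d) * n

def mirror_camera(center, axes, n, d):
    """Mirror a camera pose across the mirror plane n.x = d.

    center : perspective center in shooting-range coordinates
    axes   : 3x3 matrix whose rows are the camera's x, y, z axes
             expressed in shooting-range coordinates
    Returns the virtual (bottom) camera's center and axis matrix.
    """
    n = np.asarray(n, float)
    norm = np.linalg.norm(n)
    n, d = n / norm, d / norm
    H = np.eye(3) - 2.0 * np.outer(n, n)   # Householder reflection matrix
    return reflect_point(center, n, d), np.asarray(axes, float) @ H
```

The angles φ_b, ω_b, κ_b then follow by decomposing the mirrored axis matrix; note that the matrix has determinant -1, so the virtual camera is left-handed unless one image axis is flipped, consistent with the mirror-symmetric image-space coordinate system defined above.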
Sub-step 2: determining the imaging-model expressions of the front camera and the bottom camera respectively;
Let P(X, Y, Z) be any point in the shooting-range coordinate system that falls on the o_f-y_f z_f plane. The imaging geometry between P(X, Y, Z) and its front-camera image point p(x, y), i.e. the imaging model of the front camera, is expressed as:

x = 0:  a_1(X - X_S) + b_1(Y - Y_S) + c_1(Z - Z_S) = 0
y - y_0 = -f · [a_2(X - X_S) + b_2(Y - Y_S) + c_2(Z - Z_S)] / [a_3(X - X_S) + b_3(Y - Y_S) + c_3(Z - Z_S)]

where a_i, b_i, c_i, i = 1, 2, 3, are the nine elements of the rotation matrix determined by the front-camera attitude parameters φ_f, ω_f, κ_f;
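A numeric sketch of this front-camera model follows. The patent's explicit rotation matrix survives only as an image placeholder, so the common photogrammetric φ-ω-κ rotation sequence is assumed here, and all function names are illustrative:

```python
import numpy as np

def rotation_rows(phi, omega, kappa):
    """3x3 matrix whose rows are (a_1, b_1, c_1), (a_2, b_2, c_2),
    (a_3, b_3, c_3), built from an assumed phi-omega-kappa sequence."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_phi = np.array([[cp, 0.0, -sp], [0.0, 1.0, 0.0], [sp, 0.0, cp]])
    R_omega = np.array([[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]])
    R_kappa = np.array([[ck, -sk, 0.0], [sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return (R_phi @ R_omega @ R_kappa).T

def front_camera_image(P, S, A, f, y0):
    """Line-scan imaging model: A is the rotation-row matrix above and
    S = (X_S, Y_S, Z_S) the perspective center.  Returns the scan-plane
    residual a_1(X-X_S) + b_1(Y-Y_S) + c_1(Z-Z_S), which is zero exactly
    when P lies in the o_f-y_f z_f plane (i.e. x = 0), together with the
    image coordinate y."""
    u = A @ (np.asarray(P, float) - np.asarray(S, float))
    return u[0], y0 - f * u[1] / u[2]
```

The point is imaged only at the instant the scan-plane residual vanishes, which is what makes the per-feature timing of Step 3 possible.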
Similarly, the imaging geometry between the point P(X, Y, Z) and its bottom-camera image point p'(x, y), i.e. the imaging model of the bottom camera, is expressed as:

x = 0:  a'_1(X - X'_S) + b'_1(Y - Y'_S) + c'_1(Z - Z'_S) = 0
y - y_0 = -f · [a'_2(X - X'_S) + b'_2(Y - Y'_S) + c'_2(Z - Z'_S)] / [a'_3(X - X'_S) + b'_3(Y - Y'_S) + c'_3(Z - Z'_S)]

where a'_i, b'_i, c'_i, i = 1, 2, 3, are the nine elements determined by the bottom-camera attitude parameters φ_b, ω_b, κ_b, expressed in the same way as for the front camera;
Step 3: measuring the motion parameters of the projectile:
Sub-step 1: selecting feature points and computing the initial estimate of the projectile motion parameters:
Select as feature points the image points a and a' of the tip and b and b' of the tail of the moving projectile in the stereo linear-array image. Let the velocity vector of the projectile be V(V_x, V_y, V_z). Take the instant at which the front camera images the projectile tip as the initial time, and let the space coordinates of the tip at this instant be A(X_a, Y_a, Z_a) and the axis vector of the projectile be L(L_x, L_y, L_z); the space coordinates of the tail at this instant are then B(X_a - L_x, Y_a - L_y, Z_a - L_z). The velocity, attitude and position of the projectile are thus described by the nine parameters V_x, V_y, V_z, X_a, Y_a, Z_a, L_x, L_y, L_z. Equations in these nine parameters are constructed from the imaging relations of the projectile tip and tail in the stereo linear-array image;
Assume the front camera images the projectile tip at time 0. From the front-camera imaging model:

a_1(X_a - X_S) + b_1(Y_a - Y_S) + c_1(Z_a - Z_S) = 0    (equation 1)
y_a - y_0 = -f · [a_2(X_a - X_S) + b_2(Y_a - Y_S) + c_2(Z_a - Z_S)] / [a_3(X_a - X_S) + b_3(Y_a - Y_S) + c_3(Z_a - Z_S)]    (equation 2)

where y_a is the coordinate in the y_f direction of the tip's image point at this instant;
Assume the bottom camera images the projectile tip at time t_1. From the bottom-camera imaging model:

a'_1(X_a + V_x t_1 - X'_S) + b'_1(Y_a + V_y t_1 - Y'_S) + c'_1(Z_a + V_z t_1 - Z'_S) = 0    (equation 3)
y_a' - y_0 = -f · [a'_2(X_a + V_x t_1 - X'_S) + b'_2(Y_a + V_y t_1 - Y'_S) + c'_2(Z_a + V_z t_1 - Z'_S)] / [a'_3(X_a + V_x t_1 - X'_S) + b'_3(Y_a + V_y t_1 - Y'_S) + c'_3(Z_a + V_z t_1 - Z'_S)]    (equation 4)

where y_a' is the coordinate in the y_b direction of the tip's image point at this instant;
Assume the front camera images the projectile tail at time t_2. From the front-camera imaging model:

a_1(X_a - L_x + V_x t_2 - X_S) + b_1(Y_a - L_y + V_y t_2 - Y_S) + c_1(Z_a - L_z + V_z t_2 - Z_S) = 0    (equation 5)
y_b - y_0 = -f · [a_2(X_a - L_x + V_x t_2 - X_S) + b_2(Y_a - L_y + V_y t_2 - Y_S) + c_2(Z_a - L_z + V_z t_2 - Z_S)] / [a_3(X_a - L_x + V_x t_2 - X_S) + b_3(Y_a - L_y + V_y t_2 - Y_S) + c_3(Z_a - L_z + V_z t_2 - Z_S)]    (equation 6)

where y_b is the coordinate in the y_f direction of the tail's image point at this instant;
Assume the bottom camera images the projectile tail at time t_3. From the bottom-camera imaging model:

a'_1(X_a - L_x + V_x t_3 - X'_S) + b'_1(Y_a - L_y + V_y t_3 - Y'_S) + c'_1(Z_a - L_z + V_z t_3 - Z'_S) = 0    (equation 7)
y_b' - y_0 = -f · [a'_2(X_a - L_x + V_x t_3 - X'_S) + b'_2(Y_a - L_y + V_y t_3 - Y'_S) + c'_2(Z_a - L_z + V_z t_3 - Z'_S)] / [a'_3(X_a - L_x + V_x t_3 - X'_S) + b'_3(Y_a - L_y + V_y t_3 - Y'_S) + c'_3(Z_a - L_z + V_z t_3 - Z'_S)]    (equation 8)

where y_b' is the coordinate in the y_b direction of the tail's image point at this instant;
The times t_1, t_2, t_3 are obtained by comparing the scan-line (pixel) offsets of the tip and tail image points in the x_f (x_b) direction, in combination with the scan frequency of the front camera. The four image points formed by the tip and the tail on the stereo linear-array image thus yield eight equations in the projectile motion parameters;
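Recovering t_1, t_2, t_3 from the image reduces to dividing scan-line offsets by the scan frequency; a minimal helper (names are illustrative, not from the patent):

```python
def scan_line_time(line_index, line_index_ref, scan_rate_hz):
    """Time between two scan lines of a line-scan image: the difference
    of their line indices in the x_f (x_b) direction divided by the
    camera's scan frequency in lines per second."""
    return (line_index - line_index_ref) / float(scan_rate_hz)

# e.g. the tip imaged by the bottom camera 500 scan lines after the
# front camera, at an assumed scan rate of 50 kHz:
t1 = scan_line_time(1500, 1000, 50_000.0)   # 0.01 s
```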
In addition, the length L of the projectile is known, which gives:

L_x² + L_y² + L_z² = L²    (equation 9)

In total, therefore, nine equations in the nine motion parameters of the projectile are obtained; solving these nine equations yields the initial estimate of the projectile motion parameters;
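The claim leaves the solver unspecified; one generic way to solve the nine equations (1)-(9) is a Newton iteration with a finite-difference Jacobian, sketched below and demonstrated on a small system of the same quadratic-plus-linear flavor (the demonstration system is illustrative, not the patent's):

```python
import numpy as np

def newton_solve(F, x0, tol=1e-10, max_iter=50):
    """Solve F(x) = 0 by Newton's method with a forward-difference
    Jacobian; F maps an n-vector to an n-vector of residuals."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        r = np.asarray(F(x), float)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        h = 1e-7
        for j in range(x.size):
            xh = x.copy()
            xh[j] += h
            J[:, j] = (np.asarray(F(xh), float) - r) / h
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# demonstration: a quadratic constraint (like equation 9) plus a linear
# one, with solution (4, 3)
sol = newton_solve(lambda v: [v[0]**2 + v[1]**2 - 25.0, v[0] - v[1] - 1.0],
                   [3.0, 2.0])
```

For the actual system, F would stack the residuals of equations (1)-(9) as functions of the nine unknowns (V_x, V_y, V_z, X_a, Y_a, Z_a, L_x, L_y, L_z).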
Sub-step 2: constructing a motion-parameter optimization model based on the three-dimensional digital model of the projectile, and computing the optimized estimate of the projectile motion parameters;
Using the known shape data of the projectile, build a three-dimensional digital model of the projectile with 3D MAX software. From the imaging model of the stereo linear-array camera, simulate the theoretical imaging result of the digital model under the initial estimate of the projectile motion parameters. By comparing the edge-gradient information of the theoretical imaging result with the edge-gradient information of the projectile image in the measured image taken by the stereo linear-array camera, construct an optimization model that takes the projectile motion parameters as its input. Iteratively adjust the motion-parameter input of the optimization model until the theoretical imaging result of the projectile's digital model optimally matches the projectile image in the measured image, and take the motion-parameter input at that point as the optimized estimate of the projectile motion parameters.
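The edge-gradient comparison of this sub-step can be sketched as a scalar cost over the motion parameters; here `render` stands in for the 3D-MAX-based simulated imaging (not shown), and a derivative-free optimizer would then minimize `matching_cost` over the nine parameters:

```python
import numpy as np

def edge_gradient(img):
    """Finite-difference edge-gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(np.asarray(img, float))
    return np.hypot(gx, gy)

def matching_cost(params, render, measured):
    """Sum-of-squares discrepancy between the edge gradients of the
    simulated image of the projectile's 3-D digital model under the
    candidate motion parameters and those of the measured slit image."""
    return float(np.sum((edge_gradient(render(params)) -
                         edge_gradient(measured)) ** 2))
```

A perfect match gives cost 0; repeatedly adjusting the parameter input to drive the cost down is the "optimization matching" described above.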
CN2012102356469A 2012-07-09 2012-07-09 Method for measuring motion parameters of projectile in shooting range based on digital slit shooting technology Pending CN102818561A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102356469A CN102818561A (en) 2012-07-09 2012-07-09 Method for measuring motion parameters of projectile in shooting range based on digital slit shooting technology


Publications (1)

Publication Number Publication Date
CN102818561A true CN102818561A (en) 2012-12-12

Family ID: 47302830



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6101455A (en) * 1998-05-14 2000-08-08 Davis; Michael S. Automatic calibration of cameras and structured light sources
CN101074869A (en) * 2007-04-27 2007-11-21 东南大学 Method for measuring three-dimensional contour based on phase method
CN102252653A (en) * 2011-06-27 2011-11-23 合肥工业大学 Position and attitude measurement method based on time of flight (TOF) scanning-free three-dimensional imaging


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Zhuxin et al., "Estimating projectile velocity and angle of attack using digital slit photography", Journal of National University of Defense Technology, vol. 34, no. 1, 29 February 2012 (2012-02-29), pages 144-148 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106643306A (en) * 2016-12-30 2017-05-10 中国科学院长春光学精密机械与物理研究所 High-speed imaging method and system for light screen target trajectory measuring system
CN106643306B (en) * 2016-12-30 2018-07-06 中国科学院长春光学精密机械与物理研究所 A kind of high speed imaging method and its system for light curtain target trajectory measurement system
CN108896017A (en) * 2018-05-09 2018-11-27 西安工业大学 A kind of closely fried Fragment Group location parameter measurement of bullet and calculation method
CN108896017B (en) * 2018-05-09 2022-04-15 西安工业大学 Method for measuring and calculating position parameters of projectile near-explosive fragment groups
CN114018108A (en) * 2021-09-17 2022-02-08 中国人民解放军63875部队 Single-station attitude processing method based on matching of linear vector direction and image length
CN114018108B (en) * 2021-09-17 2023-03-28 中国人民解放军63875部队 Single-station attitude processing method based on matching of linear vector direction and image length


Legal Events

C06 / PB01: Publication (application publication date: 2012-12-12)
C10 / SE01: Entry into force of request for substantive examination
C02 / WD01: Invention patent application deemed withdrawn after publication (Patent Law 2001)