CN115690211A - Air explosion point three-dimensional coordinate detection device and measurement method

Publication number
CN115690211A
Authority
CN
China
Prior art keywords
image
explosion
coordinate
point
target
Legal status: Pending
Application number
CN202211316447.0A
Other languages
Chinese (zh)
Inventor
王泽民
刘敏
王守民
雷志勇
李静
史志军
闫克丁
雷秉山
Current Assignee
Xian Technological University
Original Assignee
Xian Technological University
Application filed by Xian Technological University
Priority to CN202211316447.0A
Publication of CN115690211A

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a device and method for detecting the three-dimensional coordinates of an air explosion point. The device comprises an explosion point image processing system and at least two high-speed cameras. At least one high-speed camera is arranged on each side of the safe area of the terminal trajectory so that the expected explosion range of the shell lies within the intersecting detection fields of view of the high-speed cameras on both sides of the trajectory; a test marker post is placed at the theoretical explosion point and its coordinates are measured; a sky-screen target trigger device is placed in the safe area below the expected trajectory, 300-500 meters in front of the center position of the high-speed cameras on both sides of the trajectory; and the explosion point image processing system and a Beidou timing device are arranged in a safe area away from the explosion point. The high-speed cameras are connected to the sky-screen target trigger device, the explosion point image processing system and the Beidou timing device respectively, and the Beidou timing device is connected to the explosion point image processing system. The invention overcomes the limitation that instantaneous three-dimensional coordinate measurement of an air explosion point is constrained by the frame rate of a high-speed camera, and improves the accuracy of explosion point coordinate measurement.

Description

Air explosion point three-dimensional coordinate detection device and measurement method
Technical Field
The invention relates to the technical fields of image processing and target detection, and in particular to a device and method for detecting the three-dimensional coordinates of an air explosion point.
Background
In shooting range tests, measurement of the spatial coordinates of explosion points is one of the most important test items of a conventional range and is of great significance for evaluating the damage performance of a weapon system. Three methods are commonly used to measure the spatial coordinates of explosion points: photoelectric measurement, acoustic sensor measurement, and image measurement. Photoelectric measurement mainly uses photoelectric theodolites to measure explosion point coordinates; it was developed early, is widely applied and highly automated, and the explosion point theodolite is currently the main near-ground explosion point measuring equipment of a range. In practice, however, the focal length is usually fixed and the lens and camera are packaged as one unit that cannot be exchanged, which limits the range of application for different test requirements; moreover, because of its low frame rate, the first explosion image frame captured by a traditional explosion point theodolite already shows a large fireball, which degrades the extraction accuracy of the explosion point pixel coordinates. Acoustic sensor measurement acquires the shock wave radiated in all directions at the moment of explosion and, combined with a sky-screen target and an explosion flame detector, uses multi-sensor information fusion to determine the flight direction of the shell and the three-dimensional coordinates of the explosion point. The array layout is flexible, the detection range is wide and long, it is not affected by visibility or by occlusion of the observation field of view, and it can be used around the clock; however, the explosion sound waves are strongly affected by the terrain and environment of the impact area, the acoustic signals easily adhere and become confused, and the positioning error is large. The image measurement method uses high-speed cameras to track and photograph the measured target; it has the advantages of convenient station layout and flexible lens replacement and has been increasingly applied in range testing in recent years. Because the explosion flame of a shell expands rapidly at the moment of detonation and produces smoke, and the flame lasts only a few milliseconds, the process is fast and transient, and the flame image at the first instant of detonation often cannot be captured owing to the frame rate limit of the high-speed camera.
Disclosure of Invention
The invention provides a device and method for detecting the three-dimensional coordinates of an air explosion point, which overcome the limitation that instantaneous three-dimensional coordinate measurement of an air explosion point is constrained by the frame rate of a high-speed camera and improve the accuracy of explosion point coordinate measurement.
The invention is realized by the following technical scheme:
An instantaneous three-dimensional coordinate detection device for an air explosion point comprises a sky-screen target trigger device, test marker posts, an explosion point image processing system, a Beidou timing device and at least two high-speed cameras. At the station positions in the safe area of the terminal trajectory, at least one high-speed camera is erected on a tripod on each side of the trajectory and protected by rock mass, so that the expected explosion range of the shell lies within the intersecting detection fields of view of the high-speed cameras on both sides of the terminal trajectory; the sky-screen target trigger device is arranged in the safe area below the expected trajectory, 300-500 meters in front of the center position of the high-speed cameras on both sides of the terminal trajectory; a test marker post is placed at the theoretical explosion point and its coordinates are measured; the explosion point image processing system and the Beidou timing device are arranged in a safe area away from the explosion point and protected by rock mass. The high-speed cameras are connected to the sky-screen target trigger device, the explosion point image processing system and the Beidou timing device respectively, and the Beidou timing device is connected to the explosion point image processing system.
At least two groups of test marker posts are used.
The high-speed cameras synchronously photograph sequence images of the target shell in the terminal trajectory from several angles at close range. The sky-screen target trigger device provides a common trigger signal to the high-speed cameras on both sides of the trajectory, ensuring that the cameras receive an accurate, synchronized start-of-recording signal when the shell flies over the device, so that the cameras capture image information before and after the explosion and the amount of stored image data is reduced. The test marker posts provide known coordinate points for the high-speed cameras on both sides of the trajectory before the test, for calibrating the intrinsic and extrinsic camera parameters and for the spatial coordinate calculation after the test. The explosion point image processing system splits the target shell sequence images captured by the high-speed cameras into frames to obtain the shell image of the last frame before detonation and the flare image of the first frame after detonation, uses an image frame interpolation algorithm combined with a shell explosion fireball expansion model to obtain the flare image at the first instant of detonation, and uses this flare image to calculate the three-dimensional spatial coordinates of the explosion point and the explosion time at the first instant of detonation.
A method for measuring the instantaneous three-dimensional coordinates of an air explosion point using the above detection device comprises the following steps:
Step 1): the explosion point image processing system takes the detected and identified shell image of the last frame before detonation, the shell flare image of the first frame after detonation, and the corresponding frame times T_{n-1} and T_{n1}, applies the frame-mixing interpolation algorithm (an image frame interpolation algorithm) combined with the shell explosion fireball expansion model to compute every pixel of the interpolated frame, and generates the intermediate frame image Gd_0(u_e, v_e, t_0), i.e. the flare image at the first instant of detonation;
Step 2): the intermediate frame image Gd_0(u_e, v_e, t_0) generated in step 1) is analysed; two cases exist: 1) after frame mixing, the shell and the flare image at the first instant of detonation do not overlap or only partially overlap, and the explosion point coordinate of the intermediate frame image is the midpoint between the shell coordinate and the center coordinate of the flare image at the first instant of detonation; 2) after frame mixing, the shell target is completely covered by the flare image, and the explosion point coordinate of the intermediate frame image is the center coordinate of the flare image at the first instant of detonation;
according to the analysis result, a moment-based barycentric coordinate extraction algorithm is applied to the flare image at the first instant of detonation obtained in step 1) to compute the explosion point coordinates (u_{p0}, v_{p0}) of the intermediate frame image Gd_0(u_e, v_e, t_0), and the frame time of the corresponding interpolated frame is calculated;
Step 3): the intrinsic and extrinsic calibration parameters of the cameras are solved from the test marker post images to obtain the system calibration data; an image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system are established and, using the conversion relations between these coordinate systems, the spatial explosion point coordinate model is derived; the explosion point coordinates (u_{p0}, v_{p0}) of the intermediate frame image Gd_0(u_e, v_e, t_0) obtained in step 2) are substituted into the spatial explosion point coordinate model to solve the spatial three-dimensional coordinates of the explosion point and the explosion time.
The method further comprises the following steps before step 1):
Step 4): arranging the sky-screen target trigger device, the test marker posts, the explosion point image processing system, the Beidou timing device and at least two high-speed cameras on both sides of the terminal trajectory within the safe area of the terminal trajectory; installing all devices to form the instantaneous three-dimensional coordinate detection device for the air explosion point;
Step 5): when the target shell passes through the detection area of the sky-screen target trigger device, the trigger device outputs a trigger signal; the high-speed cameras on both sides of the trajectory are started to acquire real-time, continuous, high-frame-rate video images of the target shell; the acquired images are buffered to obtain the target shell sequence images, which are sent to the explosion point image processing system;
Step 6): the explosion point image processing system receives the target shell sequence images sent by the high-speed cameras, extracts the images of the explosion region from the sequence by video framing, and then detects and identifies, by background subtraction and morphological filtering, the shell image of the last frame before detonation, the shell flare image of the first frame after detonation, and the corresponding frame times T_{n-1} and T_{n1}.
The step 4) is specifically as follows:
At the station positions in the safe area of the terminal trajectory, at least one high-speed camera is erected on a tripod on each side of the trajectory and protected by rock mass, so that the expected explosion range of the shell lies within the intersecting detection fields of view of the high-speed cameras on both sides of the terminal trajectory; the sky-screen target trigger device is arranged in the safe area below the expected trajectory, 300-500 meters in front of the center position of the high-speed cameras on both sides of the trajectory; at least two groups of test marker posts are placed at the theoretical explosion point and their coordinates are measured; the explosion point image processing system and the Beidou timing device are arranged in a safe area away from the explosion point and protected by rock mass; the high-speed cameras are connected to the sky-screen target trigger device, the explosion point image processing system and the Beidou timing device respectively, and the Beidou timing device is connected to the explosion point image processing system;
Step 5) is specifically as follows: when the target shell passes through the light curtain formed by the sky-screen target optical lens in the detection area of the sky-screen target trigger device, it blocks part of the light, so the luminous flux reaching the photosensitive surface of the photoelectric sensor behind the optical lens changes; the changed flux is processed by an analog circuit performing extraction, amplification, noise filtering and level conversion, and finally a TTL level signal of fixed pulse width is output and sent simultaneously to the high-speed cameras on both sides of the trajectory; the cameras are started to acquire real-time, continuous, high-frame-rate video images of the target shell; the acquired images are buffered to obtain the target shell sequence images, which are sent to the explosion point image processing system;
the step 6) is specifically as follows:
From the target shell sequence images obtained in step 5), the start and end times of the video covering the shell before and after detonation are extracted, and video framing is applied to export each frame as a time-stamped JPEG image I(u_e, v_e, t); the frame immediately before the shell enters the field of view is taken as the background image M_bg(u_e, v_e, t_0); a background difference algorithm is applied to the time-stamped target shell sequence images to extract the shell image of the last frame before detonation, M_{n-1}(u_e, v_e, t_{n-1}), and the flare image of the first frame after detonation, M_{n1}(u_e, v_e, t_{n1}), where the background difference algorithm is:
$$M_{n-1}(u_e, v_e, t_{n-1}) = I(u_e, v_e, t_{n-1}) - M_{bg}(u_e, v_e, t_0)$$

$$M_{n1}(u_e, v_e, t_{n1}) = I(u_e, v_e, t_{n1}) - M_{bg}(u_e, v_e, t_0)$$
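A minimal sketch of the background difference step above, assuming the frames have been loaded as 8-bit grayscale numpy arrays (the patent does not specify an implementation; the variable names mirror the notation M_bg, M_{n-1}, M_{n1} used here):

```python
import numpy as np

def background_difference(frame: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Subtract the background frame and clip negative values to zero."""
    diff = frame.astype(np.int16) - background.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# M_bg: frame captured just before the shell enters the field of view
# I_prev, I_flare: last pre-detonation frame and first post-detonation flare frame
# M_prev  = background_difference(I_prev, M_bg)    # M_{n-1}
# M_flare = background_difference(I_flare, M_bg)   # M_{n1}
```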
The background-subtracted target shell sequence images are multi-gray-level images; let such a gray image be M(u_e, v_e, t). A gray value T is sought in M(u_e, v_e, t) as a threshold dividing the image into two parts, and the original multi-gray-level target shell sequence images are then binarized. The threshold T is determined by the maximum between-class variance method: all pixels of the image are divided exclusively into an object pixel set G_O and a background pixel set G_B; suppose the two sets contain w_O(t) and w_B(t) pixels, their mean gray levels are μ_O(t) and μ_B(t), and their gray-level variances are σ_O²(t) and σ_B²(t). The maximum between-class variance method then finds the threshold T* that maximizes the between-class variance, namely:

$$T^{*} = \arg\max_{t}\; w_O(t)\, w_B(t)\,\big(\mu_O(t) - \mu_B(t)\big)^{2}$$
Pixel values greater than or equal to T* are set to 1 and pixel values below the threshold are set to 0. Expressed mathematically, the binarization is:

$$Gd'(u_e, v_e, t) = \begin{cases} 1, & M(u_e, v_e, t) \ge T^{*} \\ 0, & M(u_e, v_e, t) < T^{*} \end{cases}$$
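For illustration only, a small numpy sketch of the maximum between-class variance threshold defined above; it assumes an 8-bit grayscale difference image and is not taken from the patent text:

```python
import numpy as np

def otsu_threshold(diff: np.ndarray) -> int:
    """Return the threshold T* that maximizes the between-class variance."""
    hist = np.bincount(diff.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w_b = prob[:t].sum()                                  # background weight w_B(t)
        w_o = prob[t:].sum()                                  # object weight w_O(t)
        if w_b == 0.0 or w_o == 0.0:
            continue
        mu_b = (np.arange(t) * prob[:t]).sum() / w_b          # mean gray level mu_B(t)
        mu_o = (np.arange(t, 256) * prob[t:]).sum() / w_o     # mean gray level mu_O(t)
        var_between = w_b * w_o * (mu_o - mu_b) ** 2          # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# binary = (diff >= otsu_threshold(diff)).astype(np.uint8)
```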
Morphological filtering is then applied: the binarized shell image of the last frame before detonation, Gd'_{n-1}(u_e, v_e, t_{n-1}), and the binarized flare image of the first frame after detonation, Gd'_{n1}(u_e, v_e, t_{n1}), undergo an opening operation (erosion followed by dilation), which removes small particle noise and smooths the boundaries of the shell target and the flare target without changing their shapes and areas, so that both targets are extracted accurately, yielding the shell image of the last frame before detonation Gd_{n-1}(u_e, v_e, t_{n-1}) and the flare image of the first frame after detonation Gd_{n1}(u_e, v_e, t_{n1}).
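A possible realization of the erosion-then-dilation opening filter described above, using OpenCV as one implementation choice (the patent names the operation, not a library):

```python
import cv2
import numpy as np

def open_filter(binary_mask: np.ndarray, ksize: int = 3) -> np.ndarray:
    """Opening: erosion removes small speckle noise, dilation restores target size."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    eroded = cv2.erode(binary_mask, kernel)
    return cv2.dilate(eroded, kernel)

# Gd_prev  = open_filter(binary_pre_detonation)   # Gd_{n-1}
# Gd_flare = open_filter(binary_first_flare)      # Gd_{n1}
```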
The step 1) is specifically as follows:
The explosion point image processing system substitutes the morphologically filtered shell image of the last frame before detonation, Gd_{n-1}(u_e, v_e, t_{n-1}), and the flare image of the first frame after detonation, Gd_{n1}(u_e, v_e, t_{n1}), into the frame interpolation algorithm. Combined with the shell explosion fireball expansion model, the weight of the pre-detonation frame is taken as (L−α)/L and the weight of the first post-detonation flare frame as α/L; the two reference frames are multiplied by their respective weights and added to obtain the intermediate frame image Gd_0(u_e, v_e, t_0). The algorithm is:
$$Gd_0(u_e, v_e, t_0) = \frac{L-\alpha}{L}\, Gd_{n-1}(u_e, v_e, t_{n-1}) + \frac{\alpha}{L}\, Gd_{n1}(u_e, v_e, t_{n1})$$
where L is the interval between the two original frames and α is the relative distance from the pre-detonation frame to the interpolated frame;
The interpolated frame time is:

$$T_{n0} = T_{n-1}$$
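A hedged sketch of the frame-mixing step above: the intermediate frame is a weighted blend of the last pre-detonation frame and the first post-detonation flare frame with weights (L−α)/L and α/L. How α is derived from the shell explosion fireball expansion model is not reproduced here and is treated as an input:

```python
import numpy as np

def blend_intermediate_frame(gd_prev: np.ndarray, gd_flare: np.ndarray,
                             alpha: float, L: float = 1.0) -> np.ndarray:
    """Gd_0 = (L - alpha)/L * Gd_{n-1} + alpha/L * Gd_{n1}."""
    w_prev = (L - alpha) / L
    w_flare = alpha / L
    blended = w_prev * gd_prev.astype(np.float64) + w_flare * gd_flare.astype(np.float64)
    return np.clip(blended, 0, 255).astype(np.uint8)
```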
the step 2) is specifically as follows:
The two possible position relations of the intermediate frame image obtained in step 1) are judged:
(1) if the shell and the flare image do not overlap or only partially overlap, the explosion point coordinate of the intermediate frame image is the midpoint between the shell coordinate and the center pixel coordinate of the flare image; (2) if the shell target is completely covered by the flare image, the explosion point coordinate of the intermediate frame image is the center pixel coordinate of the flare image;
According to the judgement result, the explosion point coordinates of the intermediate frame image are computed with a moment-based barycentric coordinate extraction algorithm. The mass of each pixel in the flare region of the binarized explosion image is taken as its pixel value, i.e. 1 within the flare region; (u_e, v_e) are the image pixel coordinates and S is the pixel region. The (p+q)-order moment of the target can be expressed as:
$$M(p, q) = \sum_{(u_e,\, v_e) \in S} u_e^{\,p}\, v_e^{\,q}\, f(u_e, v_e)$$
where M is the image moment for the given values of p and q, and f(u_e, v_e) is the mass of a pixel. The zero-order and first-order moments are computed for the following three cases:

When p = 0, q = 0, the zero-order moment M(0, 0) is:

$$M(0, 0) = \sum_{(u_e,\, v_e) \in S} f(u_e, v_e)$$

When p = 1, q = 0, the first-order moment M(1, 0) is:

$$M(1, 0) = \sum_{(u_e,\, v_e) \in S} u_e\, f(u_e, v_e)$$

When p = 0, q = 1, the first-order moment M(0, 1) is:

$$M(0, 1) = \sum_{(u_e,\, v_e) \in S} v_e\, f(u_e, v_e)$$
The barycenter of the target image is calculated from the zero-order and first-order moments and, denoted (u_{p0}, v_{p0}), represents the explosion point coordinates of the intermediate frame image. The explosion point coordinates are solved as:

$$u_{p0} = \frac{M(1, 0)}{M(0, 0)}, \qquad v_{p0} = \frac{M(0, 1)}{M(0, 0)}$$

where M(1, 0) is the sum of the u coordinates of all pixels of the flare region, M(0, 1) is the sum of the v coordinates of all pixels of the flare region, and M(0, 0) is the number of pixels contained in the flare region;
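A short sketch of the moment-based barycenter extraction above, assuming a binarized intermediate-frame image and the convention that u is the column index and v the row index (an assumption, since the patent defines the axes only verbally):

```python
import numpy as np

def explosion_point_centroid(mask: np.ndarray):
    """Return (u_p0, v_p0) from the zero- and first-order moments of the flare region."""
    v_idx, u_idx = np.nonzero(mask)   # numpy rows -> v indices, columns -> u indices
    m00 = len(u_idx)                  # M(0,0): number of flare pixels
    if m00 == 0:
        return None
    m10 = u_idx.sum()                 # M(1,0): sum of u coordinates
    m01 = v_idx.sum()                 # M(0,1): sum of v coordinates
    return m10 / m00, m01 / m00       # (u_p0, v_p0)
```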
Step 3) is specifically as follows: the spatial explosion point coordinate model is derived and the explosion point coordinates are substituted into it to solve the three-dimensional explosion point coordinates:
Assume the three-dimensional coordinates of the spatial explosion point P in the world coordinate system are (X_P, Y_P, Z_P) and its imaged pixel coordinates are (u_P, v_P). The linear model of the high-speed camera can be expressed as:
$$\lambda \begin{bmatrix} u_w \\ v_w \\ 1 \end{bmatrix} = A\,[\,R \quad T\,] \begin{bmatrix} X_P \\ Y_P \\ Z_P \\ 1 \end{bmatrix}$$
The camera distortion model is:

$$u_P = u_w + (u_w - u_0)\,(a_1 r^2 + a_2 r^4)$$

$$v_P = v_w + (v_w - v_0)\,(a_1 r^2 + a_2 r^4)$$

where

$$r^2 = (u_w - u_0)^2 + (v_w - v_0)^2$$
Considering second-order radial distortion, the distortion coefficients are a_1 and a_2; λ is a scale factor, (u_w, v_w) are the undistorted image coordinates, (R, T) are the extrinsic parameters of the camera, where R and T are the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, and A is the camera intrinsic parameter matrix, which can be expressed as:
$$A = \begin{bmatrix} f_x & \alpha & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
where (u_0, v_0) are the principal point coordinates in the image coordinate system, f_x and f_y are the scale factors of the u axis and v axis respectively, and α is the non-perpendicularity factor of the u and v axes. High-speed camera calibration determines the five parameters of the camera intrinsic matrix, f_x, f_y, α, u_0 and v_0, together with the distortion coefficients a_1 and a_2.
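For illustration, a sketch of the forward projection implied by the linear and distortion models above; A, R, T and the distortion coefficients a_1, a_2 are assumed to come from the calibration described next, and the function name is hypothetical:

```python
import numpy as np

def project_point(Xw: np.ndarray, A: np.ndarray, R: np.ndarray, T: np.ndarray,
                  a1: float, a2: float):
    """Project a world point to distorted pixel coordinates (u_P, v_P)."""
    Xc = R @ Xw + T                              # world -> camera coordinates
    uvw = A @ (Xc / Xc[2])                       # undistorted pixel coordinates (u_w, v_w, 1)
    u_w, v_w = uvw[0], uvw[1]
    u0, v0 = A[0, 2], A[1, 2]                    # principal point from the intrinsic matrix
    r2 = (u_w - u0) ** 2 + (v_w - v0) ** 2       # squared radial distance
    factor = a1 * r2 + a2 * r2 ** 2
    return u_w + (u_w - u0) * factor, v_w + (v_w - v0) * factor
```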
Before the test, two standard test marker posts are rotated through several orientations within the expected explosion region of the shell, and the binocular high-speed cameras photograph moving images of the standard test marker posts at the expected explosion point in different orientations; from the imaging relation of the feature points on the standard test marker posts in the binocular camera pair formed by the high-speed cameras on the two sides, the five calibration parameters and the camera distortion coefficients are solved;
An image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system are established; the conversions between the coordinate systems are solved as follows:
1) Pixel coordinate system, image coordinate system and transformation relation
The image pixel coordinate system takes the upper-left corner of the image as its origin, with the rows and columns as the u-axis and v-axis directions respectively; its unit is the pixel, the image size is the image resolution, and a pixel coordinate is the position of the pixel in the image. Since pixels cannot reflect the physical size of objects in the image, an image coordinate system is established whose origin lies in the pixel coordinate system at the intersection of the camera optical axis with the camera photoelectric-sensor imaging plane; its x axis is parallel to the u axis and its y axis parallel to the v axis of the pixel coordinate system. Let the pixel coordinates of the image center be (u_0, v_0), and let the physical dimensions of each pixel on the CCD camera target surface in the x and y directions be dx and dy; then the relation between pixel coordinates (u, v) and image coordinates (x, y) is:
$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$
In homogeneous coordinates this is written as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
where (u_0, v_0) are the coordinates of the image center, and 1/dx and 1/dy are the sampling frequencies in the x and y directions respectively, i.e. the number of pixels per unit length;
2) Camera coordinate system and image coordinate system and transformation relation
The camera coordinate system is attached to the camera, with the projection center O_c of the optical system as the origin, the Z axis along the camera optical axis, and the X and Y axes forming a right-handed coordinate system with it. The relation between an object point P(X_c, Y_c, Z_c) in the camera coordinate system and its image point p(x, y) in the image coordinate system is:
$$x = f X_c / Z_c, \qquad y = f Y_c / Z_c$$
In homogeneous coordinates this is written as:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$
3) Camera coordinate system, world coordinate system and transformation relation
The measurement coordinate system of the control points is chosen as the object-space coordinate system. The mapping of a point from the world coordinate system to the camera coordinate system is expressed by an orthogonal rotation matrix R and a translation matrix T:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T$$
In homogeneous form this can be expressed as:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ \mathbf{0}^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
where the orthogonal rotation matrix R is the combination of direction cosines of the camera coordinate system relative to the axes of the world coordinate system, and the translation matrix T = [t_1  t_2  t_3]^T gives the coordinates of the origin of the camera coordinate system in the world coordinate system;
Combining the conversion relations between the coordinate systems, the conversion from the pixel coordinate system to the world coordinate system is obtained as:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[\,R \quad T\,] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$
where Z_c is set to 1, s is the coordinate-axis skew parameter, which is 0 in the ideal case, A is the camera intrinsic parameter matrix, R is the rotation matrix and T is the translation matrix. Solving the above equation further, the conversion between the pixel coordinates and the world coordinate system is obtained as:
$$[\,X \quad Y \quad Z \quad 1\,]^{T} = \big(A\,[\,R \quad T\,]\big)^{-1} Z_c\, [\,u \quad v \quad 1\,]^{T}$$
Let

$$C = A\,[\,R \quad T\,]$$
The above formula can then be simplified to the following spatial explosion point coordinate model:

$$[\,X \quad Y \quad Z \quad 1\,]^{T} = C^{-1}\,[\,u \quad v \quad 1\,]^{T} \qquad (2.13)$$
where (X, Y, Z) are the solved coordinates of the spatial explosion point in meters and (u, v) are the pixel coordinates in pixels;
the explosion point coordinates (u_{p0}, v_{p0}) obtained in step 2) are substituted into the above spatial explosion point coordinate model to obtain the spatial three-dimensional coordinates (X, Y, Z) of the explosion point at the moment of explosion.
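A minimal sketch of the spatial explosion point coordinate model of equation (2.13). Because C = A[R T] is a 3×4 matrix, a Moore-Penrose pseudo-inverse is used here to stand in for C^{-1}; this is one possible realization, not the patent's exact solver, and in the binocular arrangement the back-projected results of the two cameras would still have to be combined:

```python
import numpy as np

def explosion_point_world(u_p0: float, v_p0: float,
                          A: np.ndarray, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Back-project the explosion-point pixel to world coordinates (X, Y, Z)."""
    C = A @ np.hstack([R, T.reshape(3, 1)])            # 3x4 projection matrix
    Xh = np.linalg.pinv(C) @ np.array([u_p0, v_p0, 1.0])
    return Xh[:3] / Xh[3]                               # normalize homogeneous coordinates
```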
Compared with the prior art, the invention has the following beneficial technical effects:
The invention provides an air explosion point three-dimensional coordinate detection device and measurement method based on an image frame interpolation method. Based on a binocular high-speed camera detection mechanism, it makes full use of the relation between the sequence images recorded while the camera photographs the explosion point, uses image frame interpolation on the continuous frames to generate more accurately the first flame image at the instant of detonation, and, combined with the binocular vision detection principle formed by the high-speed cameras on both sides, overcomes the problem that the flame image at the initial instant of detonation of the shell cannot be captured owing to the camera frame rate limit, thereby improving the calculation accuracy of the air explosion point coordinates.
The invention addresses the technical problem that, in the prior art, the explosion flame spreads so quickly that the image at the initial instant of fireball detonation cannot be captured because of the frame rate limit of the high-speed camera, so that the first frame after detonation often shows a very large or even irregular fireball, causing a large deviation in the calculation of the instantaneous detonation coordinates. The method can generate a coherent intermediate image, predict the flare image at the first instant of detonation, and calculate the spatial three-dimensional coordinates of the explosion point with a moment-based barycenter extraction algorithm, thereby solving the problem that the instantaneous three-dimensional coordinate measurement of the air explosion point is constrained by the camera frame rate and improving the measurement accuracy of the explosion point coordinates.
Drawings
FIG. 1 is a station diagram of a detection system embodying the present invention;
FIG. 2 shows the frame interpolation principle for the explosion point image according to the invention;
FIG. 3 is a flow chart of the algorithm of the present invention;
FIG. 4 is a graph of the temporal relationship of the images of the frame interpolation algorithm of the present invention;
FIG. 5 is a diagram of the transformation relationship between the camera coordinate system and the world coordinate system according to the present invention.
Detailed Description
The present invention will now be described in further detail with reference to specific examples, which are intended to be illustrative, but not limiting, of the invention.
Example 1:
Referring to FIG. 1, an instantaneous three-dimensional coordinate detection device for an air explosion point comprises a sky-screen target trigger device, test marker posts, an explosion point image processing system, a Beidou timing device and at least two high-speed cameras. At the station positions in the safe area of the terminal trajectory, at least one high-speed camera is erected on a tripod on each side of the trajectory and protected by rock mass, so that the expected explosion range of the shell lies within the intersecting detection fields of view of the high-speed cameras on both sides of the terminal trajectory; the sky-screen target trigger device is arranged in the safe area below the expected trajectory, 300-500 meters in front of the center position of the high-speed cameras on both sides of the terminal trajectory; at least two groups of test marker posts are placed at the theoretical explosion point and their coordinates are measured; the explosion point image processing system and the Beidou timing device are arranged in a safe area away from the explosion point and protected by rock mass. The high-speed cameras are connected to the sky-screen target trigger device, the explosion point image processing system and the Beidou timing device respectively, and the Beidou timing device is connected to the explosion point image processing system. It should be noted that in this embodiment there are two high-speed cameras, one arranged on each side, forming a binocular test set-up, which improves the accuracy of the explosion point coordinate measurement.
The high-speed cameras synchronously photograph sequence images of the target shell in the terminal trajectory from several angles at close range. The sky-screen target trigger device provides a common trigger signal to the high-speed cameras on both sides of the trajectory, ensuring that the cameras receive an accurate, synchronized start-of-recording signal when the shell flies over the device, so that the cameras capture image information of the key period before and after the explosion and the amount of stored image data is reduced. The test marker posts provide known coordinate points for the high-speed cameras on both sides of the trajectory before the test, for calibrating the intrinsic and extrinsic camera parameters and for the spatial coordinate calculation after the test. The explosion point image processing system splits the target shell sequence images captured by the high-speed cameras into frames to obtain the shell image of the last frame before detonation and the flare image of the first frame after detonation, uses an image frame interpolation algorithm combined with the shell explosion fireball expansion model to obtain the flare image at the first instant of detonation, and uses this flare image to calculate the three-dimensional spatial coordinates of the explosion point and the explosion time at the first instant of detonation.
Referring to FIGS. 1 to 5, a method for measuring the instantaneous three-dimensional coordinates of an air explosion point using the instantaneous three-dimensional coordinate detection device comprises the following steps:
Step 1): arranging the sky-screen target trigger device, the test marker posts, the explosion point image processing system, the Beidou timing device and at least two high-speed cameras on both sides of the terminal trajectory within the safe area of the terminal trajectory; installing all devices to form the instantaneous three-dimensional coordinate detection device for the air explosion point;
Step 2): when the target shell passes through the detection area of the sky-screen target trigger device, the trigger device outputs a trigger signal; the high-speed cameras on both sides of the trajectory are started to acquire real-time, continuous, high-frame-rate video images of the target shell; the acquired images are buffered to obtain the target shell sequence images, which are sent to the explosion point image processing system;
Step 3): the explosion point image processing system receives the target shell sequence images sent by the high-speed cameras, extracts the images of the explosion region from the sequence by video framing, and then detects and identifies, by background subtraction and morphological filtering, the shell image of the last frame before detonation, the shell flare image of the first frame after detonation, and the corresponding frame times T_{n-1} and T_{n1};
Step 4): the explosion point image processing system takes the detected and identified shell image of the last frame before detonation, the shell flare image of the first frame after detonation, and the corresponding frame times T_{n-1} and T_{n1}, applies the frame-mixing interpolation algorithm (an image frame interpolation algorithm) combined with the shell explosion fireball expansion model to compute every pixel of the interpolated frame, and generates the intermediate frame image Gd_0(u_e, v_e, t_0), i.e. the flare image at the first instant of detonation;
Step 5): the intermediate frame image generated in step 4) is first analysed; two cases exist: 1) after frame mixing, the shell and the flare image at the first instant of detonation do not overlap or only partially overlap, and the explosion point coordinate of the intermediate frame image is the midpoint between the shell coordinate and the center coordinate of the flare image at the first instant of detonation; 2) after frame mixing, the shell target is completely covered by the flare image, and the explosion point coordinate of the intermediate frame image is the center coordinate of the flare image at the first instant of detonation;
then, according to the analysis result, a moment-based barycentric coordinate extraction algorithm is applied to the flare image at the first instant of detonation obtained in step 4) to compute the explosion point coordinates (u_{p0}, v_{p0}) of the intermediate frame image Gd_0(u_e, v_e, t_0), and the frame time of the corresponding interpolated frame is calculated;
Step 6): the intrinsic and extrinsic calibration parameters of the cameras are solved from the test marker post images to obtain the system calibration data; an image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system are established and, using the conversion relations between these coordinate systems, the spatial explosion point coordinate model is derived; the explosion point coordinates (u_{p0}, v_{p0}) of the intermediate frame image Gd_0(u_e, v_e, t_0) obtained in step 5) are substituted into the spatial explosion point coordinate model to solve the spatial three-dimensional coordinates of the explosion point and the explosion time.
Example 2: a method for measuring the instantaneous three-dimensional coordinates of an air explosion point using the instantaneous three-dimensional coordinate detection device specifically comprises the following steps:
Step 1) is specifically as follows: at the station positions in the safe area of the terminal trajectory, at least one high-speed camera is erected on a tripod on each side of the trajectory and protected by rock mass, so that the expected explosion range of the shell lies within the intersecting detection fields of view of the high-speed cameras on both sides of the terminal trajectory; the sky-screen target trigger device is arranged in the safe area below the expected trajectory, 300-500 meters in front of the center position of the high-speed cameras on both sides of the terminal trajectory; at least two groups of test marker posts are placed at the theoretical explosion point and their coordinates are measured; the explosion point image processing system and the Beidou timing device are arranged in a safe area away from the explosion point and protected by rock mass; the high-speed cameras are connected to the sky-screen target trigger device, the explosion point image processing system and the Beidou timing device respectively, and the Beidou timing device is connected to the explosion point image processing system;
Step 2) is specifically as follows: when the target shell passes through the light curtain formed by the sky-screen target optical lens in the detection area of the sky-screen target trigger device, it blocks part of the light, so the luminous flux reaching the photosensitive surface of the photoelectric sensor behind the optical lens changes; the changed flux is processed by an analog circuit performing extraction, amplification, noise filtering and level conversion, and finally a TTL level signal of fixed pulse width is output and sent simultaneously to the high-speed cameras on both sides of the trajectory; the cameras are started to acquire real-time, continuous, high-frame-rate video images of the target shell; the acquired images are buffered to obtain the target shell sequence images, which are sent to the explosion point image processing system;
the step 3) is specifically as follows:
From the target shell sequence images obtained in step 2), the start and end times of the video covering the shell before and after detonation are extracted, and video framing is applied to export each frame as a time-stamped JPEG image I(u_e, v_e, t); the frame immediately before the shell enters the field of view is taken as the background image M_bg(u_e, v_e, t_0); a background difference algorithm is applied to the time-stamped target shell sequence images to extract the shell image of the last frame before detonation, M_{n-1}(u_e, v_e, t_{n-1}), and the flare image of the first frame after detonation, M_{n1}(u_e, v_e, t_{n1}), where the background difference algorithm is:

$$M_{n-1}(u_e, v_e, t_{n-1}) = I(u_e, v_e, t_{n-1}) - M_{bg}(u_e, v_e, t_0)$$

$$M_{n1}(u_e, v_e, t_{n1}) = I(u_e, v_e, t_{n1}) - M_{bg}(u_e, v_e, t_0)$$

The background-subtracted target shell sequence images are multi-gray-level images; let such a gray image be M(u_e, v_e, t). A gray value T is sought in M(u_e, v_e, t) as a threshold dividing the image into two parts, and the original multi-gray-level target shell sequence images are then binarized. The threshold T is determined by the maximum between-class variance method: all pixels of the image are divided exclusively into an object pixel set G_O and a background pixel set G_B; suppose the two sets contain w_O(t) and w_B(t) pixels, their mean gray levels are μ_O(t) and μ_B(t), and their gray-level variances are σ_O²(t) and σ_B²(t). The maximum between-class variance method then finds the threshold T* that maximizes the between-class variance, namely:

$$T^{*} = \arg\max_{t}\; w_O(t)\, w_B(t)\,\big(\mu_O(t) - \mu_B(t)\big)^{2}$$

Pixel values greater than or equal to T* are set to 1 and pixel values below the threshold are set to 0. Expressed mathematically, the binarization is:

$$Gd'(u_e, v_e, t) = \begin{cases} 1, & M(u_e, v_e, t) \ge T^{*} \\ 0, & M(u_e, v_e, t) < T^{*} \end{cases}$$

Morphological filtering is then applied: the binarized shell image of the last frame before detonation, Gd'_{n-1}(u_e, v_e, t_{n-1}), and the binarized flare image of the first frame after detonation, Gd'_{n1}(u_e, v_e, t_{n1}), undergo an opening operation (erosion followed by dilation), which removes small particle noise and smooths the boundaries of the shell target and the flare target without changing their shapes and areas, so that both targets are extracted accurately, yielding the shell image of the last frame before detonation Gd_{n-1}(u_e, v_e, t_{n-1}) and the flare image of the first frame after detonation Gd_{n1}(u_e, v_e, t_{n1}).
The step 4) is specifically as follows:
The explosion point image processing system substitutes the morphologically filtered shell image of the last frame before detonation, Gd_{n-1}(u_e, v_e, t_{n-1}), and the flare image of the first frame after detonation, Gd_{n1}(u_e, v_e, t_{n1}), into the frame interpolation algorithm. Combined with the shell explosion fireball expansion model, the weight of the pre-detonation frame is taken as (L−α)/L and the weight of the first post-detonation flare frame as α/L; the two reference frames are multiplied by their respective weights and added to obtain the intermediate frame image Gd_0(u_e, v_e, t_0). The algorithm is:

$$Gd_0(u_e, v_e, t_0) = \frac{L-\alpha}{L}\, Gd_{n-1}(u_e, v_e, t_{n-1}) + \frac{\alpha}{L}\, Gd_{n1}(u_e, v_e, t_{n1})$$

where L is the interval between the two original frames and α is the relative distance from the pre-detonation frame to the interpolated frame;

The interpolated frame time is:

$$T_{n0} = T_{n-1}$$
Step 5): solving the explosion point coordinates.
The two possible position relations of the intermediate frame image obtained in step 4) are judged:
(1) if the shell and the flare image do not overlap or only partially overlap, the explosion point coordinate of the intermediate frame image is the midpoint between the shell coordinate and the center pixel coordinate of the flare image; (2) if the shell target is completely covered by the flare image, the explosion point coordinate of the intermediate frame image is the center pixel coordinate of the flare image;
According to the judgement result, the explosion point coordinates of the intermediate frame image are computed with a moment-based barycentric coordinate extraction algorithm. The mass of each pixel in the flare region of the explosion point image is taken as its pixel value, i.e. 1 within the flare region; (u_e, v_e) are the image pixel coordinates and S is the pixel region. The (p+q)-order moment of the target can be expressed as:
$$M(p, q) = \sum_{(u_e,\, v_e) \in S} u_e^{\,p}\, v_e^{\,q}\, f(u_e, v_e)$$

where M is the image moment for the given values of p and q, and f(u_e, v_e) is the mass of a pixel. The zero-order and first-order moments are computed for the following three cases:

When p = 0, q = 0, the zero-order moment M(0, 0) is:

$$M(0, 0) = \sum_{(u_e,\, v_e) \in S} f(u_e, v_e)$$

When p = 1, q = 0, the first-order moment M(1, 0) is:

$$M(1, 0) = \sum_{(u_e,\, v_e) \in S} u_e\, f(u_e, v_e)$$

When p = 0, q = 1, the first-order moment M(0, 1) is:

$$M(0, 1) = \sum_{(u_e,\, v_e) \in S} v_e\, f(u_e, v_e)$$

The barycenter of the target image is calculated from the zero-order and first-order moments and, denoted (u_{p0}, v_{p0}), represents the explosion point coordinates of the intermediate frame image. The explosion point coordinates are solved as:

$$u_{p0} = \frac{M(1, 0)}{M(0, 0)}, \qquad v_{p0} = \frac{M(0, 1)}{M(0, 0)}$$

where M(1, 0) is the sum of the u coordinates of all pixels of the flare region, M(0, 1) is the sum of the v coordinates of all pixels of the flare region, and M(0, 0) is the number of pixels contained in the flare region.
Step 6) is specifically as follows: the spatial explosion point coordinate model is derived and the explosion point coordinates are substituted into it to solve the spatial three-dimensional coordinates of the explosion point:
Assume the three-dimensional coordinates of the spatial explosion point P in the world coordinate system are (X_P, Y_P, Z_P) and its imaged pixel coordinates are (u_P, v_P). The linear model of the high-speed camera can be expressed as:

$$\lambda \begin{bmatrix} u_w \\ v_w \\ 1 \end{bmatrix} = A\,[\,R \quad T\,] \begin{bmatrix} X_P \\ Y_P \\ Z_P \\ 1 \end{bmatrix}$$

The camera distortion model is:

$$u_P = u_w + (u_w - u_0)\,(a_1 r^2 + a_2 r^4)$$

$$v_P = v_w + (v_w - v_0)\,(a_1 r^2 + a_2 r^4)$$

where

$$r^2 = (u_w - u_0)^2 + (v_w - v_0)^2$$

Considering second-order radial distortion, the distortion coefficients are a_1 and a_2; λ is a scale factor, (u_w, v_w) are the undistorted image coordinates, (R, T) are the extrinsic parameters of the camera, where R and T are the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, and A is the camera intrinsic parameter matrix, which can be expressed as:

$$A = \begin{bmatrix} f_x & \alpha & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where (u_0, v_0) are the principal point coordinates in the image coordinate system, f_x and f_y are the scale factors of the u axis and v axis respectively, and α is the non-perpendicularity factor of the u and v axes. High-speed camera calibration determines the five parameters of the camera intrinsic matrix, f_x, f_y, α, u_0 and v_0, together with the distortion coefficients a_1 and a_2.
Before the test, two standard test marker posts are rotated through several orientations within the expected explosion region of the shell, and the binocular high-speed cameras photograph moving images of the standard test marker posts at the expected explosion point in different orientations; from the imaging relation of the feature points on the standard test marker posts in the binocular camera pair formed by the high-speed cameras on the two sides, the five calibration parameters and the camera distortion coefficients are solved, for example as sketched below;
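One possible way to perform this calibration step, sketched with OpenCV (which the patent does not prescribe); object_points are the known three-dimensional feature-point coordinates on the test marker posts and image_points their detected pixel positions in each orientation, both assumed to be available:

```python
import cv2
import numpy as np

def calibrate_from_marker_posts(object_points, image_points, image_size):
    """Solve the intrinsic matrix A, two radial distortion coefficients and per-view extrinsics."""
    # Keep only the two radial coefficients (a1, a2); fix k3 and the tangential terms.
    flags = cv2.CALIB_FIX_K3 | cv2.CALIB_ZERO_TANGENT_DIST
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None, flags=flags)
    return A, dist.ravel()[:2], rvecs, tvecs
```

The relative pose between the two cameras could then be estimated with cv2.stereoCalibrate, although that step is not spelled out in the patent text.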
An image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system are established; the conversions between the coordinate systems are solved as follows:
1) Pixel coordinate system, image coordinate system and their conversion relation
The image pixel coordinate system takes the upper-left corner of the image as its origin, with the rows and columns as the u-axis and v-axis directions respectively; its unit is the pixel, the image size is the image resolution, and a pixel coordinate is the position of the pixel in the image. Since pixels cannot reflect the physical size of objects in the image, an image coordinate system is established whose origin lies in the pixel coordinate system at the intersection of the camera optical axis with the camera photoelectric-sensor imaging plane; its x axis is parallel to the u axis and its y axis parallel to the v axis of the pixel coordinate system. Let the pixel coordinates of the image center be (u_0, v_0), and let the physical dimensions of each pixel on the CCD camera target surface in the x and y directions be dx and dy; then the relation between pixel coordinates (u, v) and image coordinates (x, y) is:

$$x = (u - u_0)\,dx, \qquad y = (v - v_0)\,dy$$

In homogeneous coordinates this is written as:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

where (u_0, v_0) are the coordinates of the image center, and 1/dx and 1/dy are the sampling frequencies in the x and y directions respectively, i.e. the number of pixels per unit length;
2) Camera coordinate system, image coordinate system and their conversion relation
The camera coordinate system is attached to the camera, with the projection center O_c of the optical system as the origin, the Z axis along the camera optical axis, and the X and Y axes forming a right-handed coordinate system with it. The relation between an object point P(X_c, Y_c, Z_c) in the camera coordinate system and its image point p(x, y) in the image coordinate system is:

$$x = f X_c / Z_c, \qquad y = f Y_c / Z_c$$

In homogeneous coordinates this is written as:

$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}$$
3) Camera coordinate system, world coordinate system and their conversion relation
The measurement coordinate system of the control points is chosen as the object-space coordinate system. The mapping of a point from the world coordinate system to the camera coordinate system is expressed by an orthogonal rotation matrix R and a translation matrix T:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} = R \begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} + T$$

In homogeneous form this can be expressed as:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ \mathbf{0}^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

where the orthogonal rotation matrix R is the combination of direction cosines of the camera coordinate system relative to the axes of the world coordinate system, and the translation matrix T = [t_1  t_2  t_3]^T gives the coordinates of the origin of the camera coordinate system in the world coordinate system;
Combining the conversion relations between the coordinate systems, the conversion from the pixel coordinate system to the world coordinate system is obtained as:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[\,R \quad T\,] \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

where Z_c is set to 1, s is the coordinate-axis skew parameter, which is 0 in the ideal case, A is the camera intrinsic parameter matrix, R is the rotation matrix and T is the translation matrix. Solving the above equation further, the conversion between the pixel coordinates and the world coordinate system is obtained as:

$$[\,X \quad Y \quad Z \quad 1\,]^{T} = \big(A\,[\,R \quad T\,]\big)^{-1} Z_c\, [\,u \quad v \quad 1\,]^{T}$$

Let

$$C = A\,[\,R \quad T\,]$$

The above formula can then be simplified to the following spatial explosion point coordinate model:

$$[\,X \quad Y \quad Z \quad 1\,]^{T} = C^{-1}\,[\,u \quad v \quad 1\,]^{T} \qquad (2.13)$$

where (X, Y, Z) are the solved coordinates of the spatial explosion point in meters and (u, v) are the pixel coordinates in pixels;
the explosion point coordinates (u_{p0}, v_{p0}) obtained in step 5) are substituted into the above spatial explosion point coordinate model to obtain the spatial three-dimensional coordinates (X, Y, Z) of the explosion point at the moment of explosion.
The invention provides an aerial explosion point three-dimensional coordinate detection device and a measurement method based on an image frame interpolation method, which are based on a binocular high-speed camera detection mechanism, fully utilize the relation between sequence images in the process of shooting an explosion point by a camera to form a sequence image, generate a first frame of flame image at the explosion point explosion moment more accurately by adopting the image frame interpolation method to form a continuous frame image, and solve the problem that the flame image information at the initial explosion moment of a shell cannot be captured due to the limitation of the frame frequency of the camera by combining the binocular vision detection principle formed by high-speed cameras at two sides so as to improve the calculation precision of the aerial explosion point coordinate.
The invention thereby addresses a technical problem of the prior art: because the explosion flash spreads very quickly and the high-speed camera frame rate is limited, the image at the initial instant of fireball detonation cannot be captured, and the first frame after detonation often shows an already very large, even irregular, fireball, so the calculated instantaneous coordinates of the detonation fireball deviate significantly. The proposed method generates a coherent intermediate image, predicts the flare image information at the first moment of detonation, and calculates the space three-dimensional coordinates of the explosion point with a moment-based center-of-gravity extraction algorithm, thereby removing the influence of the camera frame-rate limit on the instantaneous three-dimensional coordinate measurement of the air explosion point and improving the measurement precision of the explosion point coordinates.
The foregoing shows and describes the general principles, main features and advantages of the present invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which are given in the specification only to illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. An instantaneous three-dimensional coordinate detection device for an air explosion point, characterized by comprising a sky screen target trigger device, test marker posts, an explosion point image processing system, a Beidou time system device and at least two high-speed cameras; at the station positions in the safe area of the terminal trajectory, at least one high-speed camera is erected on a tripod on each side of the trajectory, with rock mass protection, so that the pre-explosion range of the shell lies within the intersecting detection field of view of the high-speed cameras on the two sides of the terminal trajectory; the sky screen target trigger device is arranged in the safe area below the predicted trajectory, 300-500 meters in front of the center position of the high-speed cameras on the two sides of the terminal trajectory; the test marker posts are placed at the theoretical explosion point and their coordinates are measured; the explosion point image processing system and the Beidou time system device are arranged in the safe area outside the explosion point, with rock mass protection; the high-speed cameras are respectively connected with the sky screen target trigger device, the explosion point image processing system and the Beidou time system device, and the Beidou time system device is connected with the explosion point image processing system.
2. The device according to claim 1, wherein the test marker posts are at least two sets of test marker posts.
3. The device according to claim 1, wherein the high-speed cameras are used for synchronously capturing sequence image information of the target shell in the terminal trajectory from multiple angles at close range; the sky screen target trigger device is used for providing a uniform trigger signal to the high-speed cameras on the two sides of the trajectory, so that when the shell flies over the sky screen target trigger device the high-speed cameras receive an accurate, synchronous start-shooting signal, can accurately capture the image information of the shell before and after the explosion, and the amount of image information the high-speed cameras must store is reduced; the test marker posts are used for providing known coordinate points to the high-speed cameras on the two sides of the trajectory before the test, for calibrating the internal and external parameters of the high-speed cameras and for the spatial coordinate calculation after the test; the explosion point image processing system is used for obtaining, by image framing from the target shell sequence images captured by the high-speed cameras, the shell image information of the frame before detonation and the shell flare image information of the first frame showing flare after the explosion, for obtaining the flare image information at the first moment of shell detonation by an image frame interpolation algorithm combined with the shell explosion fireball expansion model, and for calculating the space three-dimensional coordinates of the explosion point and the explosion moment information at the first instant of the explosion from that first-moment flare image information.
4. A method for measuring the instantaneous three-dimensional coordinates of an air explosion point using the instantaneous three-dimensional coordinate detection device for an air explosion point according to any one of claims 1 to 3, comprising the following steps:
step 1): the explosion point image processing system detects the shell image information of the identified shell before explosion when the first frame is not detonated, the shell flare image information of the first frame with flare after explosion, and the corresponding frame time T n-1 、T n1 Calculating each pixel point of the inserted frame image by adopting a frame mixing and frame inserting algorithm in an image frame inserting algorithm and combining a shell explosion fireball expansion model to generate an intermediate frame image Gd 0 (u e ,v e ,t 0 ) Namely the image information of the fire at the first moment when the shell is detonated;
step 2): gd for the intermediate frame image generated in the step 1) 0 (u e ,v e ,t 0 ) There are two cases where the analysis is performed: 1) After the shell and the flare image frame at the first moment when the shell is detonated are mixed, no superposition or partial superposition exists, and the explosion point coordinate of the intermediate frame image is the intermediate point of the shell coordinate and the flare image center coordinate at the first moment when the shell is detonated; 2) After the shell and the flare image frame at the first moment when the shell is detonated are mixed, the shell target is completely superposed with the flare image, and the explosion point coordinate of the intermediate frame image is the central coordinate of the flare image at the first moment when the shell is detonated;
then, according to the analysis result and the flare image information at the first moment of detonation obtained in step 1), the explosion point coordinate (u_p0, v_p0) of the intermediate frame image Gd_0(u_e, v_e, t_0) is calculated with a moment-based barycentric coordinate extraction algorithm, and the frame time of the corresponding interpolated image is calculated;
step 3): resolving the internal and external calibration parameters of the camera according to the test benchmarking image to obtain system calibration data; solving a space explosion point coordinate model by establishing an image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system and combining the conversion relation among the coordinate systems, and obtaining the intermediate frame image Gd obtained in the step 2) 0 (u e ,v e ,t 0 ) Coordinates of frying point (u) p0 ,v p0 ) Substitution intoAnd solving the space three-dimensional coordinates of the explosion point and the explosion time in the space explosion point coordinate model.
5. The method according to claim 4, further comprising, prior to step 1), the following steps:
step 4): arranging a backdrop target triggering device, a test marker post, a blast point image processing system, a Beidou time system device and at least two high-speed cameras arranged at two sides of a terminal trajectory in a safety area of the terminal trajectory; all the devices are installed to form an instantaneous three-dimensional coordinate detection device of the air explosion point;
step 5): when a target cannonball passes through a detection area of a dome target trigger device, the dome target trigger device outputs a trigger signal, high-speed cameras on two sides of a trajectory are started to acquire real-time continuous high-frame-frequency video images of the target cannonball, the acquired images are cached to obtain sequence image information of the target cannonball, and the obtained sequence image information of the target cannonball is sent to a blast point image processing system;
step 6): the bomb image processing system receives target bomb sequence image information sent by the high-speed camera, extracts image information of a bomb explosion region from the obtained target bomb sequence image information by adopting a video framing method, detects and identifies bomb image information when a frame before the bomb explosion does not explode, bomb flare image information when flare occurs in a first frame after the explosion and corresponding frame time T by adopting background subtraction and morphological filtering methods n-1 、T n1
6. The method for measuring the instantaneous three-dimensional coordinates of an air explosion point according to claim 5, wherein step 4) is specifically:
at the station positions in the safe area of the terminal trajectory, erecting at least one high-speed camera on a tripod on each side of the trajectory, with rock mass protection, so that the pre-explosion range of the shell lies within the intersecting detection field of view of the high-speed cameras on the two sides of the terminal trajectory; arranging the sky screen target trigger device in the safe area below the predicted trajectory, 300-500 meters in front of the center position of the high-speed cameras on the two sides of the trajectory; placing at least two sets of test marker posts at the theoretical explosion point and measuring their coordinates; arranging the explosion point image processing system and the Beidou time system device in the safe area outside the explosion point, with rock mass protection; the high-speed cameras are respectively connected with the sky screen target trigger device, the explosion point image processing system and the Beidou time system device, and the Beidou time system device is connected with the explosion point image processing system;
step 5) is specifically: when the target shell passes through the light curtain formed by the sky screen target optical lens in the detection area of the sky screen target trigger device, the shell blocks part of the light, so the luminous flux reaching the photosensitive surface of the photoelectric sensor on the sky screen target optical lens changes; the changed luminous flux is processed by analog extraction, amplification, noise-filtering and level-conversion circuits, and a TTL level signal with a fixed pulse width is finally output and sent simultaneously to the high-speed cameras on the two sides of the trajectory; the high-speed cameras on the two sides of the trajectory are started to acquire real-time, continuous, high-frame-rate video images of the target shell; the acquired images are cached to obtain the target shell sequence image information, which is sent to the explosion point image processing system;
the step 6) is specifically as follows:
extracting, from the target shell sequence image information obtained in step 5), the start and end times of the target shell video before and after detonation, and exporting each time-stamped JPEG frame I(u_e, v_e, t) by video framing; taking the frame before the shell enters the field of view as the background image M_bg(u_e, v_e, t_0); applying a background difference algorithm to the time-stamped target shell sequence images to extract the shell image information M_{n-1}(u_e, v_e, t_{n-1}) of the last frame before detonation and the first-frame shell flare image information M_{n1}(u_e, v_e, t_{n1}) after detonation, where the background difference algorithm is:

M_{n-1}(u_e, v_e, t_{n-1}) = I(u_e, v_e, t_{n-1}) − M_bg(u_e, v_e, t_0)
M_{n1}(u_e, v_e, t_{n1}) = I(u_e, v_e, t_{n1}) − M_bg(u_e, v_e, t_0)
denoting the multi-grey-level target shell sequence image after background subtraction by M(u_e, v_e, t), a grey value T* is found in M(u_e, v_e, t) as the threshold dividing the image into two parts, and the initial multi-grey-level target shell sequence image is then binarized; the threshold T* is determined by the maximum between-class variance (Otsu) method, which splits all image pixels into a mutually exclusive object pixel set G_O and background pixel set G_B; suppose the two pixel sets contain w_O(t) and w_B(t) pixels, with mean grey levels μ_O(t) and μ_B(t) and grey-level variances σ_O²(t) and σ_B²(t) respectively; the method then finds the threshold T* that minimizes the within-class variance (equivalently, maximizes the between-class variance), namely:

$$T^{*}=\arg\min_{t}\left\{w_O(t)\,\sigma_O^{2}(t)+w_B(t)\,\sigma_B^{2}(t)\right\}$$

pixels with value greater than or equal to T* are set to 1, and pixels below the threshold are set to 0; the binarization can be expressed mathematically as:

$$g(u_e,v_e)=\begin{cases}1,&M(u_e,v_e,t)\ge T^{*}\\ 0,&M(u_e,v_e,t)<T^{*}\end{cases}$$
then, using morphological filtering, the binarized shell image information Gd'_{n-1}(u_e, v_e, t_{n-1}) of the last frame before detonation and the binarized shell flare image information Gd'_{n1}(u_e, v_e, t_{n1}) of the first frame showing flare after detonation are subjected to an opening filter (erosion followed by dilation) to remove small particle noise and smooth the boundaries of the shell target and the flare target; the shell and flare targets are thereby extracted accurately without changing their shape or area, giving the shell image information Gd_{n-1}(u_e, v_e, t_{n-1}) of the last frame before detonation and the shell flare image information Gd_{n1}(u_e, v_e, t_{n1}) of the first frame showing flare after detonation.
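The extraction pipeline of claim 6 (background subtraction, threshold binarization and opening) can be sketched with OpenCV as below; this is a hedged approximation, assuming single-channel uint8 frames, using absolute difference in place of plain subtraction to avoid unsigned underflow, OpenCV's built-in Otsu threshold in place of the explicit variance search, and an illustrative 3×3 structuring element.

```python
import cv2
import numpy as np

def extract_target_mask(frame, background, kernel_size=3):
    """Background subtraction, automatic thresholding and opening on one frame."""
    # Background difference (absolute difference avoids uint8 underflow).
    diff = cv2.absdiff(frame, background)
    # Otsu's method selects the threshold by maximizing the between-class variance.
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Opening (erosion followed by dilation) removes small particle noise and
    # smooths the shell / flare boundaries without changing their overall shape.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Placeholder frames for demonstration only.
background = np.zeros((480, 640), dtype=np.uint8)
frame = background.copy()
cv2.circle(frame, (320, 240), 15, 255, -1)      # synthetic bright target
mask = extract_target_mask(frame, background)
```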
7. The method for measuring the instantaneous three-dimensional coordinates of an air explosion point according to claim 6, wherein step 1) is specifically:
the explosion point image processing system substitutes the morphologically filtered shell image Gd_{n-1}(u_e, v_e, t_{n-1}) of the last frame before detonation and the shell flare image Gd_{n1}(u_e, v_e, t_{n1}) of the first frame showing flare after detonation into the frame interpolation algorithm, combined with the shell explosion fireball expansion model; assuming the weight of the frame before detonation is (L − α)/L and the weight of the first frame after detonation is α/L, the two reference frames are multiplied by their respective weights and added to obtain the intermediate frame image Gd_0(u_e, v_e, t_0); the algorithm is:

$$Gd_0(u_e,v_e,t_0)=\frac{L-\alpha}{L}\,Gd_{n-1}(u_e,v_e,t_{n-1})+\frac{\alpha}{L}\,Gd_{n1}(u_e,v_e,t_{n1})$$

where L is the spacing between the two original frames and α is the relative distance from the frame before detonation to the interpolated frame;

the frame interpolation time is:

T_{n0} = T_{n-1}
the step 2) is specifically as follows:
the two possible position relations of the intermediate frame image obtained in step 1) are judged:
(1) if the shell target and the flare image do not overlap or only partially overlap, the explosion point coordinate of the intermediate frame image is the midpoint between the shell coordinate and the center pixel coordinate of the flare image; (2) if the shell target completely overlaps the flare image, the explosion point coordinate of the intermediate frame image is the center pixel coordinate of the flare image;
according to the judgment result, the explosion point coordinate of the intermediate frame image is calculated with a moment-based barycentric coordinate extraction algorithm; the mass of each pixel in the flare area of the explosion flare image is set to 1, i.e. the mass of each pixel equals its pixel value; with (u_e, v_e) the image pixel coordinates and S the pixel region, the (p+q)-order moment of the target can be expressed as:

$$M(p,q)=\sum_{(u_e,v_e)\in S}u_e^{\,p}\,v_e^{\,q}\,f(u_e,v_e)$$

where M is the image moment for the given values of p and q, and f(u_e, v_e) is the mass of one pixel; the zero-order and first-order moments are computed for the following three cases:

when p = 0, q = 0, the zero-order moment M(0,0) is:

$$M(0,0)=\sum_{(u_e,v_e)\in S}f(u_e,v_e)$$

when p = 1, q = 0, the first-order moment M(1,0) is:

$$M(1,0)=\sum_{(u_e,v_e)\in S}u_e\,f(u_e,v_e)$$

when p = 0, q = 1, the first-order moment M(0,1) is:

$$M(0,1)=\sum_{(u_e,v_e)\in S}v_e\,f(u_e,v_e)$$

the center of gravity of the target image can be calculated from the zero-order and first-order moments; denoting the explosion point coordinate of the intermediate frame image by (u_p0, v_p0), the solving algorithm of the explosion point coordinate is:

$$u_{p0}=\frac{M(1,0)}{M(0,0)},\qquad v_{p0}=\frac{M(0,1)}{M(0,0)}$$

where M(1,0) is the sum of the u coordinates (abscissa) of all pixels in the flare region, M(0,1) is the sum of the v coordinates (column coordinates) of all pixels in the flare region, and M(0,0) is the number of pixels contained in the flare region;
step 3) is specifically: solving the space explosion point coordinate model and substituting the explosion point coordinate into it to solve the space three-dimensional coordinates of the explosion point:

assume the three-dimensional coordinate of the space explosion point P in the world coordinate system is (X_P, Y_P, Z_P) and its imaged pixel coordinate is (u_P, v_P); the high-speed camera linear model can be expressed as:

$$\lambda\begin{bmatrix}u_w\\ v_w\\ 1\end{bmatrix}=A\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}X_P\\ Y_P\\ Z_P\\ 1\end{bmatrix}$$
the camera distortion model is:

$$u=u_w+(u_w-u_0)\,(a_1 r^{2}+a_2 r^{4})$$

$$v=v_w+(v_w-v_0)\,(a_1 r^{2}+a_2 r^{4})$$

where

$$r^{2}=(u_w-u_0)^{2}+(v_w-v_0)^{2}$$
considering second-order radial distortion, the distortion coefficients are a_1 and a_2; λ is a scale factor, (u_w, v_w) are the undistorted image coordinates, (R, T) are the extrinsic parameters of the camera, R and T being the rotation matrix and translation vector from the world coordinate system to the camera coordinate system, and A is the camera intrinsic parameter matrix, which can be expressed as:

$$A=\begin{bmatrix}f_x&\alpha&u_0\\ 0&f_y&v_0\\ 0&0&1\end{bmatrix}$$

where (u_0, v_0) are the principal point coordinates of the image coordinate system, f_x and f_y are the scale factors of the u axis and the v axis, and α is the non-perpendicularity factor of the u and v axes; high-speed camera calibration solves the 5 parameters of the camera intrinsic matrix, f_x, f_y, α, u_0 and v_0, together with the distortion coefficients a_1 and a_2;
before the test, two standard test marker posts are rotated in multiple directions within the pre-explosion range of the shell, and the binocular high-speed cameras capture a number of marker post motion images in different orientations around the pre-explosion point; the 5 camera calibration parameters and the camera distortion coefficients are then resolved from the imaging relation of the feature points on the standard test marker posts in the binocular camera formed by the high-speed cameras on the two sides;
establishing an image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system; the solving steps for the conversion between the coordinate systems are as follows:
1) Pixel coordinate system, image coordinate system and transformation relation
The image pixel coordinate system takes the upper-left corner of the image as its origin, with the row and column directions as the u axis and v axis respectively; this coordinate system is in pixels, the image size is the image resolution, and a pixel coordinate is the position of the pixel in the image; since pixels cannot reflect the physical size of an object in the image, an image coordinate system is established whose origin lies in the pixel coordinate system at the intersection of the camera optical axis with the imaging plane of the camera photoelectric sensor, with its x axis parallel to the u axis and its y axis parallel to the v axis of the pixel coordinate system; let the pixel coordinate of the image center be (u_0, v_0) and the physical sizes of each pixel on the CCD camera target surface in the x and y directions be dx and dy; the relationship between the pixel coordinates (u, v) and the image coordinates (x, y) is then:

x = (u − u_0)·dx
y = (v − v_0)·dy

written in homogeneous coordinates as:

$$\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}1/dx&0&u_0\\ 0&1/dy&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix}$$

where (u_0, v_0) are the image-center coordinates, and 1/dx and 1/dy are the sampling frequencies in the x and y directions, i.e. the number of pixels per unit length;
2) Camera coordinate system and image coordinate system and transformation relation
The camera coordinate system is established on the camera, with the optical-system projection center O_c as the coordinate origin, the Z axis along the camera optical axis, and the X and Y axes completing a right-hand coordinate system; the relationship between an object point P(X_c, Y_c, Z_c) in the camera coordinate system and its image point p(x, y) in the image coordinate system is:

x = f·X_c / Z_c
y = f·Y_c / Z_c

written in homogeneous coordinates as:

$$Z_c\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}X_c\\ Y_c\\ Z_c\\ 1\end{bmatrix}$$
3) Camera coordinate system, world coordinate system and transformation relation
The measurement coordinate system of the control points is selected as the object-space (world) coordinate system; the mapping of a point from the world coordinate system to the camera coordinate system is represented by an orthogonal rotation matrix R and a translation matrix T, formulated as:

$$\begin{bmatrix}X_c\\ Y_c\\ Z_c\end{bmatrix}=R\begin{bmatrix}X_w\\ Y_w\\ Z_w\end{bmatrix}+T$$

which in homogeneous coordinates can be expressed as:

$$\begin{bmatrix}X_c\\ Y_c\\ Z_c\\ 1\end{bmatrix}=\begin{bmatrix}R&T\\ 0^{\mathrm T}&1\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}$$

where the orthogonal rotation matrix R is the combination of direction cosines of the camera coordinate axes relative to the world coordinate axes, and the translation matrix T = [t_1 t_2 t_3]^T is the coordinate of the origin of the camera coordinate system in the world coordinate system;
combining the conversion relations among the coordinate systems, the conversion from the pixel coordinate system to the world coordinate system is obtained as:

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}1/dx&s&u_0\\ 0&1/dy&v_0\\ 0&0&1\end{bmatrix}\begin{bmatrix}f&0&0&0\\ 0&f&0&0\\ 0&0&1&0\end{bmatrix}\begin{bmatrix}R&T\\ 0^{\mathrm T}&1\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}=A\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}$$

where the value of Z_c is set to 1, s is the coordinate-axis skew parameter (0 in the ideal case), A is the camera intrinsic parameter matrix, R is the rotation matrix and T is the translation matrix; further solving the above equation gives the conversion between the system pixel coordinates and the world coordinate system:

$$\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=A\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}$$

Letting

$$C=A\begin{bmatrix}R&T\end{bmatrix}$$

the above formula can be simplified to the following space explosion point coordinate model:

[X Y Z 1]^T = C^{-1} [u v 1]^T    (2.13)

where (X, Y, Z) are the solved coordinates of the space explosion point, in meters, and (u, v) are the pixel coordinates, in pixels;

the explosion point coordinates (u_p0, v_p0) obtained in step 2) are substituted into the above space explosion point coordinate model to obtain the space three-dimensional coordinates (X, Y, Z) of the explosion point at the moment of explosion.
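To make the moment-based barycenter step of claim 7 concrete, the following is a minimal sketch that computes (u_p0, v_p0) from the zero- and first-order moments of a binarized flare mask; it assumes flare pixels have value 1 (mass 1 per pixel) and that u indexes columns while v indexes rows, which is a notational assumption for the sketch rather than something fixed by this document.

```python
import numpy as np

def flare_centroid(mask):
    """Centroid (u_p0, v_p0) of a binary flare mask via zero/first-order moments."""
    vs, us = np.nonzero(mask)      # row (v) and column (u) indices of flare pixels
    m00 = us.size                  # M(0,0): number of flare pixels
    if m00 == 0:
        raise ValueError("empty flare region")
    m10 = us.sum()                 # M(1,0): sum of u coordinates
    m01 = vs.sum()                 # M(0,1): sum of v coordinates
    return m10 / m00, m01 / m00    # (u_p0, v_p0)

# Placeholder mask for demonstration only.
mask = np.zeros((480, 640), dtype=np.uint8)
mask[200:260, 300:380] = 1
u_p0, v_p0 = flare_centroid(mask)
print("explosion point pixel coordinates:", u_p0, v_p0)
```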
CN202211316447.0A 2022-10-26 2022-10-26 Air explosion point three-dimensional coordinate detection device and measurement method Pending CN115690211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211316447.0A CN115690211A (en) 2022-10-26 2022-10-26 Air explosion point three-dimensional coordinate detection device and measurement method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211316447.0A CN115690211A (en) 2022-10-26 2022-10-26 Air explosion point three-dimensional coordinate detection device and measurement method

Publications (1)

Publication Number Publication Date
CN115690211A true CN115690211A (en) 2023-02-03

Family

ID=85099843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211316447.0A Pending CN115690211A (en) 2022-10-26 2022-10-26 Air explosion point three-dimensional coordinate detection device and measurement method

Country Status (1)

Country Link
CN (1) CN115690211A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117848169A (en) * 2024-03-08 2024-04-09 中国科学院长春光学精密机械与物理研究所 Automatic detection system and method for frying point time based on double-station intersection
CN117848169B (en) * 2024-03-08 2024-04-30 中国科学院长春光学精密机械与物理研究所 Automatic detection system and method for frying point time based on double-station intersection
CN117974967A (en) * 2024-03-28 2024-05-03 沈阳长白电子应用设备有限公司 Fried spot position measurement method based on image identification positioning

Similar Documents

Publication Publication Date Title
CN115690211A (en) Air explosion point three-dimensional coordinate detection device and measurement method
US7324663B2 (en) Flight parameter measurement system
EP1509781B1 (en) Flight parameter measurement system
KR101222447B1 (en) Enhancement of aimpoint in simulated training systems
CN108398123B (en) Total station and dial calibration method thereof
JP6763559B1 (en) Ball tracking device and ball tracking method
CN111445522B (en) Passive night vision intelligent lightning detection system and intelligent lightning detection method
CN112485785A (en) Target detection method, device and equipment
CN115585740A (en) Detection device and measurement method for spatial coordinates of explosion points
US10612891B1 (en) Automated ammunition photogrammetry system
CN113514182B (en) Shock wave overpressure field measuring method based on high-speed photographic system
CN110298864A (en) A kind of vision sensing method and device of golf push rod equipment
CN109767471B (en) Dynamic core-bursting positioning method and system
RU2570025C1 (en) Determination of blast coordinates and projectile energy characteristics at tests
CN115984369A (en) Shooting aiming track acquisition method based on gun posture detection
CN113240749B (en) Remote binocular calibration and ranging method for recovery of unmanned aerial vehicle facing offshore ship platform
KR100914573B1 (en) Method for obtaining weapon separation coefficient of aircraft
CN110806572B (en) Device and method for testing distortion of long-focus laser three-dimensional imager based on angle measurement method
CN114035175A (en) System and method for generating interference situation of diffuse reflection plate false target
CN110895120B (en) Image processing technology-based ship cannon system precision detection device and detection method
CN207050843U (en) A kind of gun muzzle vibration test system
CN112989972A (en) Automatic identification method, device and system for target shooting and storage medium
CN112598617A (en) Outer trajectory optical measurement precision analysis method based on virtual platform
US20230065922A1 (en) Self-organized learning of three-dimensional motion data
CN108510455A (en) A kind of laser irradiation device image interfusion method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination