CN102359788A - Series image target recursive identification method based on platform inertia attitude parameter - Google Patents

Series image target recursive identification method based on platform inertia attitude parameter

Info

Publication number: CN102359788A (application CN 201110266278; granted as CN102359788B)
Authority: CN (China)
Legal status: Granted; Expired - Fee Related
Other languages: Chinese (zh)
Inventors: 杨卫东, 朱鹏, 朱虎, 黎云, 杨洋, 张桥
Applicant and current assignee: Huazhong University of Science and Technology

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a sequential image target recursive identification method based on platform inertial attitude parameters. Starting from an initial full-image capture result, the method uses the imaging attitude parameters to recursively predict the pixel coordinates of the target point in the image sequence and then localizes the target precisely with a local detection and identification step. This narrows the search range during sequential image target identification, suppresses interference from the background image on target detection and identification, shortens the localization computing time, and effectively improves the reliability and positioning accuracy of target identification, providing a reliable and efficient target identification strategy for moving-platform imaging.

Description

Sequential image target recursive identification method based on platform inertial attitude parameters
Technical Field
The invention belongs to the technical field of autonomous aircraft navigation and imaging target identification, and particularly relates to a sequential image target recursive identification method based on platform inertial attitude parameters. The method achieves fast and accurate localization of targets in sequential images and provides a new target identification and tracking strategy for precise imaging-based navigation.
Background
Improving the recognition rate and real-time performance of an automatic target recognition system under limited computing resources is a long-standing difficulty in the field of navigation recognition and positioning, and research on this technology has broad application value.
Across the different time phases, climates, viewpoints and altitudes encountered in aircraft navigation, the target and background characteristics in the optical images acquired by the imaging sensor vary considerably. Recognition of ground targets against complex backgrounds is especially difficult under long-range imaging: the target is small, the field of view contains many repetitive patterns similar to the target, and abundant interference increases the difficulty of identifying and precisely locating the target.
In digital image processing, the traditional target identification approach searches the full extent of the real-time image with a prepared target feature template and performs correlation matching to locate the pixel coordinates of the target point. However, when the scene around the target is complex, when the background contains many patterns that repeat the target features, or when non-ideal imaging conditions yield a low-contrast real-time image with many background regions resembling the target, traditional matching frequently produces false captures and fails to meet identification and positioning requirements. Moreover, full-field search and target identification is computationally expensive; with the limited computing resources of an aircraft platform, real-time identification of targets in sequential images is hard to achieve, so a fast identification method is needed.
In the published literature there is no report of a technique for fast automatic positioning and identification of sequential image targets aimed specifically at complex imaging environments; automatic target positioning and identification remains a technical problem that the navigation and positioning field must solve and that calls for a new technical scheme.
Making full use of the imaging attitude parameters acquired by the platform imaging system to predict the target's coordinates in the image, so that the target can be identified and precisely located locally, is therefore of great significance for precise aircraft navigation.
Disclosure of Invention
The invention provides a sequential image target recursive identification method based on platform inertial attitude parameters. It aims to solve the difficulty of accurately identifying and locating a target when changing optical imaging conditions or target-like patterns in the scene produce a low-contrast image in which the target's features are not distinct from the background, or when the image background contains many repetitive patterns similar to the target.
A sequential image target recursive identification method based on platform inertial attitude parameters specifically comprises the following steps:
(1) calculating the position of the optical axis point of the current frame image in the target coordinate system from the target initial pixel coordinate;
(2) estimating the position of the optical axis point of the next frame image in the target coordinate system from its position in the current frame;
(3) predicting the target final pixel coordinate in the next frame image from the position of the optical axis point of the next frame image in the target coordinate system;
(4) constructing a local search window in the next frame image centered on the target final pixel coordinate, updating the target final pixel coordinate by capturing the target within the window, and taking the updated coordinate as the target initial pixel coordinate for the next identification.
For the first identification, the target initial pixel coordinate is obtained by full-image capture in the historical sequence images.
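Read as a loop, the four steps form a simple driver. The following is a minimal Python sketch, not the patent's implementation: the helper names are mine, axis_point_from_pixel, propagate_axis_point and pixel_from_ground are sketched in the Detailed Description below, and match_in_window stands in for any local capture method (for example the simplified correlation matcher of section 1.2). poses holds (h, α, θ, γ) attitude tuples and cam bundles the camera constants (dA_u, dA_v, ROW, COL).

```python
def recursive_identify(frames, poses, template, uv0, cam):
    """Schematic driver for steps (1)-(4); cam = (dAu, dAv, ROW, COL)."""
    uc, vc = uv0                 # initial coordinate from full-image capture
    track = [uv0]
    for i in range(len(frames) - 1):
        # (1) ground optical axis point of the current frame
        xo, zo = axis_point_from_pixel(uc, vc, poses[i], *cam)
        # (2) propagate the axis point to the next frame's attitude
        xo, zo = propagate_axis_point(xo, zo, poses[i], poses[i + 1])
        # (3) predicted target pixel coordinate in the next frame
        uc, vc = pixel_from_ground(xo, zo, poses[i + 1], *cam)
        # (4) refine by local capture around the prediction
        uc, vc = match_in_window(frames[i + 1], template, (uc, vc))
        track.append((uc, vc))
    return track
```

Because step (1) of each iteration re-derives the axis point from the refined coordinate, the loop also realizes the error-resetting update described in section 3.3.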
Step (1) is specifically: the position $(x_{o0}, y_{o0}, z_{o0})$ of the optical axis point of the current frame in the target coordinate system is calculated as:

$$x_{o0} = \left(h_0\cot\theta_0 - \frac{h_0}{\tan A_{v0}}\right)\cos\alpha_0 + \frac{h_0\tan A_{u0}}{\sin A_{v0}}\sin\alpha_0$$

$$z_{o0} = \left(h_0\cot\theta_0 - \frac{h_0}{\tan A_{v0}}\right)\sin\alpha_0 - \frac{h_0\tan A_{u0}}{\sin A_{v0}}\cos\alpha_0$$

$$y_{o0} = 0$$

wherein

$$A_{u0} = \frac{1}{dA_u}\left[\left(u_{c0} - \frac{COL}{2}\right)\cos\gamma_0 - \left(v_{c0} - \frac{ROW}{2}\right)\sin\gamma_0\right]$$

$$A_{v0} = \frac{1}{dA_v}\left[\left(u_{c0} - \frac{COL}{2}\right)\sin\gamma_0 + \left(v_{c0} - \frac{ROW}{2}\right)\cos\gamma_0\right] + \theta_0$$

$$dA_u = COL/\psi,\qquad dA_v = ROW/\phi$$

$\alpha_0$ is the imager azimuth angle, $\theta_0$ the imager pitch angle, $h_0$ the imager height, $\gamma_0$ the imager roll angle; $\phi$ is the longitudinal imaging field angle of the imager and $\psi$ the transverse imaging field angle; ROW is the number of real-time imaging rows, COL the number of real-time imaging columns, and $(u_{c0}, v_{c0})$ is the target initial position.
Step (2) is specifically: the position $(x_{o(i)}, y_{o(i)}, z_{o(i)})$ of the optical axis point of the next frame in the target coordinate system is calculated as:

$$x_{o(i)} = x_{o0} + \Delta P_{x(i)},\qquad z_{o(i)} = z_{o0} + \Delta P_{z(i)},\qquad y_{o(i)} = 0$$

wherein

$$\Delta P_{x(i)} = h_i\cot\theta_i\cos\alpha_i - h_0\cot\theta_0\cos\alpha_0 + \Delta x_i$$

$$\Delta P_{z(i)} = h_i\cot\theta_i\sin\alpha_i - h_0\cot\theta_0\sin\alpha_0 + \Delta z_i$$

and

$$\Delta x_i = \Delta t_i\times\omega_x\times dA_u,\qquad \Delta z_i = \Delta t_i\times\omega_z\times dA_v$$

$$dA_u = COL/\psi,\qquad dA_v = ROW/\phi$$

$(x_{o0}, 0, z_{o0})$ is the position of the optical axis point of the current frame in the target coordinate system; $\Delta t_i$ is the imaging time difference from the current frame to the next frame; $\omega_x$ and $\omega_z$ are the angular velocities of the imaging platform along the x-axis and z-axis directions of the target coordinate system; $\alpha_0$, $\theta_0$, $h_0$ are the imaging azimuth angle, pitch angle and height of the current frame image; $\alpha_i$, $\theta_i$, $h_i$ those of the next frame image; $\phi$ is the longitudinal imaging field angle of the imager and $\psi$ the transverse imaging field angle; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns.
Step (3) is specifically: the target final pixel coordinate $(u_{c(i)}, v_{c(i)})$ is calculated as:

$$u_{c(i)} = dA_v\,(A_{v(i)} - \theta_i)\sin\gamma_i + dA_u\,A_{u(i)}\cos\gamma_i + \frac{COL}{2}$$

$$v_{c(i)} = dA_v\,(A_{v(i)} - \theta_i)\cos\gamma_i - dA_u\,A_{u(i)}\sin\gamma_i + \frac{ROW}{2}$$

wherein

$$A_{v(i)} = \tan^{-1}\!\left[\frac{h_i}{h_i\cot\theta_i - x_{o(i)}\cos\alpha_i - z_{o(i)}\sin\alpha_i}\right]$$

$$A_{u(i)} = \tan^{-1}\!\left[\frac{x_{o(i)}\sin\alpha_i - z_{o(i)}\cos\alpha_i}{h_i/\sin A_{v(i)}}\right]$$

$$dA_u = COL/\psi,\qquad dA_v = ROW/\phi$$

$\alpha_i$, $\theta_i$, $h_i$ and $\gamma_i$ are the azimuth angle, pitch angle, height and roll angle of the next frame imaging; $(x_{o(i)}, 0, z_{o(i)})$ is the position of the optical axis point of the next frame image in the target coordinate system; $\phi$ is the longitudinal imaging field angle of the imager and $\psi$ the transverse imaging field angle; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns.
The length T_Row and width T_Col of the local search window in step (4) are:

$$T\_Row = L\_Row + 2\times biasX,\qquad T\_Col = L\_Col + 2\times biasZ$$

wherein

$$L\_Row = 2\times\frac{ROW}{\phi}\times\left(\arctan\frac{h}{\frac{h}{\tan\theta} - \frac{Length}{2}} - \theta\right)$$

L_Col is computed analogously from the target Width, the transverse field angle $\psi$ and COL; and

$$biasX = \max\omega_x\times\Delta t\times dA_u,\qquad biasZ = \max\omega_z\times\Delta t\times dA_v$$

Length and Width are the actual length and width of the target in geodetic coordinates; $\phi$ is the longitudinal imaging field angle of the imager and $\psi$ the transverse imaging field angle; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns; $\theta$ and $h$ are the imaging pitch angle and height; $\max\omega_x$ and $\max\omega_z$ are the maximum angular velocities of the aircraft along the x-axis and z-axis directions; $\Delta t$ is the time difference between two adjacent frames; $dA_u$ and $dA_v$ are the transverse and longitudinal angular resolutions, respectively.
The technical effects of the invention are as follows:
For various reasons an optical image may contain many repetitive patterns similar to the target features, so that the target cannot be accurately identified and located. The method predicts the pixel coordinate position of the target in the sequence images from the imaging attitude parameters acquired by the platform imaging system and then achieves precise localization by local matching. This preserves the robustness and adaptability of the automatic target identification and localization system and overcomes the failure of accurate identification and localization caused by abundant repetitive patterns in the real-time image.
In the initial capture and optical axis point estimation stage, the position of the optical axis point in the target coordinate system is estimated, with the target point coordinate as reference, from the target pixel coordinate obtained by initial capture, and the ground optical axis point position of the next frame in the sequence is predicted from the imaging attitude parameters. In the recursive prediction stage, the pixel coordinate position of the target in the image is derived back from the ground optical axis point position and the imaging attitude parameters, and local positioning is then performed on that basis, achieving accurate localization of the target. The target is thus accurately identified and located even when changing optical imaging conditions or target-like patterns in the scene yield a low-contrast image whose target features are not distinct from the background, or when the image background contains many repetitive patterns similar to the target. This offers a new idea for raising the recognition rate of automatic target identification and is of great significance for the research and development of automatic target identification and navigation positioning technology.
The invention predicts the target point pixel coordinates during identification and localization and replaces full-image search with local positioning, which greatly reduces the target search range, saves identification computing time, improves the performance of the whole automatic target recognition system, and provides a feasible method and a new idea for applications with demanding real-time requirements on target identification.
Drawings
FIG. 1 is a general flow chart of a sequential image target recursive identification method based on platform inertial attitude parameters according to the present invention;
FIG. 2 is a diagram of an imaging geometry mathematical model defined in the present invention, including a target coordinate system, an imaging platform measurement coordinate system, and a line-of-sight coordinate system;
FIG. 3 is a schematic view of the imaging attitude parameter determination and imaging field range in the present invention;
FIG. 4 is a schematic diagram of the principle of the present invention for performing mobile pipeline filtering on an initial target point pixel coordinate trajectory;
FIG. 5 is a schematic diagram of the transformation of space and geometric coordinates according to the present invention;
FIG. 6 is a schematic diagram of the location estimation of the ground optical axis point in the present invention;
FIG. 7 is a schematic diagram of predicting pixel coordinates of a target point according to a ground optical axis point position in the present invention;
FIG. 8 compares the abscissa deviation curves, plotted against height, of the target recognition and localization results in successive frame images with and without the algorithm of the present invention;
FIG. 9 compares the corresponding ordinate deviation curves, plotted against height, with and without the algorithm of the present invention.
Detailed Description
The general flow of the sequential image target recursive prediction and identification method based on platform inertial attitude parameters is shown in FIG. 1. The specific process is as follows:
1. Target point initial pixel coordinate capture
(1.1) Coordinate system and inertial attitude parameter definition
To establish the imaging geometric mathematical model, a target coordinate system, an imaging platform measurement coordinate system and a line-of-sight coordinate system are defined as follows:
(1) Target coordinate system $o_{gA}x_{gA}y_{gA}z_{gA}$
The target coordinate system $o_{gA}x_{gA}y_{gA}z_{gA}$ is the world coordinate system in which the imager observes the target: the geometric center $o_{gA}$ of the target point detected at the initial imaging moment is the origin; the $o_{gA}y_{gA}$ axis coincides with the normal of the local reference ellipsoid and points upward, skyward positive; the $o_{gA}x_{gA}$ axis lies in the local horizontal plane and points north; the $o_{gA}z_{gA}$ axis forms a right-handed orthogonal coordinate system with the $o_{gA}x_{gA}$ and $o_{gA}y_{gA}$ axes and points east, as shown in FIG. 2.
(2) Aircraft coordinate system $o_Rx_Ry_Rz_R$
The aircraft centroid $o_R$ is the origin, and $o_Rx_R$ runs along the platform's axial direction toward the nose; neglecting installation errors, the platform measurement coordinate system coincides with the aircraft coordinate system. The imaging platform outputs the aircraft's pitch angle θ, azimuth angle α and roll angle γ.
(3) Line-of-sight coordinate system $o_Rx_Sy_Sz_S$
An observation angle-measurement coordinate system defined with the target as reference. The aircraft centroid $o_R$ is the origin; the $o_Rx_S$ axis coincides with the line of sight from the aircraft to the target; the $o_Rz_S$ axis is parallel to the horizontal plane of the target point and points perpendicularly to the right of $o_Rx_S$; the $o_Ry_S$ axis forms a right-handed orthogonal coordinate system with the $o_Rx_S$ and $o_Rz_S$ axes.
(1.2) Initial capture of the target point
(1.2.1) Obtain the target trajectory coordinates in the first n frames by full-image matching
Full-image matching may use a gray-level cross-correlation matching algorithm, a phase correlation matching algorithm, a mutual-information matching algorithm, a feature-based matching algorithm, and the like. The invention adopts a de-mean normalized gray-level cross-correlation matching algorithm, simplified according to the real-time requirements.
Assume the template M has size $R_r\times C_r$ and the image to be matched F has size $R_s\times C_s$, with $R_s > R_r$ and $C_s > C_r$; the traversal range of the computation is then $(R_s - R_r)\times(C_s - C_r)$. According to the calculation formula of the de-mean normalized cross-correlation algorithm, the convolution of the template with the image to be matched is divided by the standard deviations of the template and of the current position of the image to be matched.
Under high real-time requirements, besides guaranteeing the accuracy of the result, trading storage space for computation time is very effective. Every traversed position is divided by the same template variance, so this factor can be dropped from the traversal and applied only once after the maximum correlation coefficient is found, saving one division per position. The normalized de-mean cross-correlation match metric given above then becomes:
$$NC(m,n) = \frac{\sum_x\sum_y\left[M(x,y)-\bar M\right]\left[F(x+m,y+n)-\bar F_{mn}\right]}{\sqrt{\sum_x\sum_y\left[F(x+m,y+n)-\bar F_{mn}\right]^2}}$$

Squaring to remove the root sign simplifies this to:

$$NC^2(m,n) = \frac{\left\{\sum_x\sum_y\left[M(x,y)-\bar M\right]\left[F(x+m,y+n)-\bar F_{mn}\right]\right\}^2}{\sum_x\sum_y\left[F(x+m,y+n)-\bar F_{mn}\right]^2}$$

Expanding the numerator gives:

$$NC^2(m,n) = \frac{\left[\sum_x\sum_y M(x,y)\,F(x+m,y+n) - \Delta\times\bar M\times\bar F_{mn}\right]^2}{D_{mn}}$$

wherein:

$$D_{mn} = \sum_x\sum_y\left[F(x+m,y+n)-\bar F_{mn}\right]^2,\qquad \Delta = R_r\times C_r$$

where $M(x,y)$ is the gray value of pixel $(x,y)$ in the template, $\bar M$ is the gray mean of the template, $D_{mn}$ is the sum of squared deviations of the image to be matched over the template-sized window at position $(m,n)$, $F(x+m,y+n)$ is the gray value of the image to be matched at point $(x+m,y+n)$, $\bar F_{mn}$ is the gray mean of the image to be matched over that window, and $0\le m\le(R_s-R_r)$, $0\le n\le(C_s-C_r)$.
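A direct, unoptimized Python sketch of this simplified metric follows; function and variable names are illustrative, not from the patent.

```python
import numpy as np

def nc2_map(template, image):
    """Squared de-mean cross-correlation NC^2(m, n); the template variance
    is omitted from the denominator, exactly as the simplification above."""
    Rr, Cr = template.shape
    Rs, Cs = image.shape
    M = template.astype(np.float64)
    Mbar = M.mean()
    delta = Rr * Cr
    out = np.zeros((Rs - Rr + 1, Cs - Cr + 1))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            F = image[m:m + Rr, n:n + Cr].astype(np.float64)
            Fbar = F.mean()
            num = (np.sum(M * F) - delta * Mbar * Fbar) ** 2   # squared numerator
            den = np.sum((F - Fbar) ** 2)                      # D_mn
            out[m, n] = num / den if den > 0 else 0.0
    return out

# The capture result is the location of the maximum response:
# resp = nc2_map(tpl, img)
# m, n = np.unravel_index(np.argmax(resp), resp.shape)
```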
(1.2.2) Determine the initial target pixel coordinates
(1) Determine the target point fluctuation range
Because the aircraft's angular velocity varies around a relatively stable value within an imaging period, the fluctuation range of the target point pixel coordinate positions between two adjacent frames of the sequence can be estimated accordingly. The specific calculation is:

$$\Delta x = \max\omega_x\times\Delta t\times dA_u$$

$$\Delta y = \max\omega_y\times\Delta t\times dA_v$$

where $\Delta x$ and $\Delta y$ are the maximum fluctuation ranges of the horizontal and vertical coordinates of the target point between two adjacent frames; $\max\omega_x$ and $\max\omega_y$ are the maximum angular velocities of the aircraft along the x-axis and y-axis directions; $\Delta t$ is the time difference between two adjacent frames; $dA_u$ and $dA_v$ are the transverse and longitudinal angular resolutions, respectively.
(2) Moving pipeline filtering
According to the variation law of the aircraft's angular velocity during flight, the target point coordinate trajectory in the imaged sequence fluctuates within a bounded range. A temporal pipeline filter over the imaged target point trajectory can therefore be established from the estimated fluctuation range of the target point pixel coordinates between adjacent frames of the sequence; the n trajectory points obtained by global matching are filtered, erroneous capture results are removed, and an accurate initial coordinate position of the target point is obtained, as shown in FIG. 4.
The prepared target reference map is matched over the full image against the first n (n ≥ 3) frames of the image sequence to obtain the target point trajectory $A_0, A_1, \ldots, A_n$, whose pixel coordinate positions are $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$; the size of n is chosen according to the real-time requirement of the algorithm.
Assume the filter pipeline length is N (N ≤ n, N typically odd) and denote the gate size by M(l, d), where l and d are the length and width of the filter gate; the pipeline operates first-in first-out.
Take the candidate target point $A_i$ ($i \le N$) as the center of the current pipeline and test whether it and every other trajectory point of the adjacent N−1 frames (the (N−1)/2 frames before and after) satisfy the following judgment conditions:

$$x_j - \Delta n\times\Delta x \le x_i \le x_j + \Delta n\times\Delta x$$

$$y_j - \Delta n\times\Delta y \le y_i \le y_j + \Delta n\times\Delta y$$

where $\Delta n$ is the frame difference between the two sequence images, $\Delta x$ and $\Delta y$ are the fluctuation ranges of the target point pixel coordinates between two adjacent frames estimated in (1.2.2), $j < i < N$, and:

$$\Delta n = i - j + 1$$

$$\Delta x = \max\omega_x\times\Delta t\times dA_u,\qquad \Delta y = \max\omega_y\times\Delta t\times dA_v$$

The points satisfying the conditions are counted by a counter $k_t$: each candidate point in the pipeline that meets the judgment conditions increments $k_t$ by 1. When $k_t$ reaches the preset number of valid points for the pipeline, the pipeline's center point is taken as the accurate coordinate $(x_i, y_i)^{\cdot}$ of the target point in the current frame image.
The images in the pipeline are updated continuously and the target point trajectory coordinates of the selected image sequence are filtered through it, removing erroneous capture results and yielding accurate target point coordinates $(x_0, y_0)^{\cdot}, (x_1, y_1)^{\cdot}, \ldots, (x_s, y_s)^{\cdot}$. The target point coordinate whose frame number is closest to n, $(x_s, y_s)^{\cdot}$, is selected as the target point initial coordinate position $(u_{c0}, v_{c0})$ for the subsequent calculations.
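A minimal Python sketch of this temporal pipeline filter; k_min stands for the preset valid-point count, which the patent leaves as a design parameter, and the frame difference is taken as |i − j| + 1 so both sides of the center are handled uniformly.

```python
def pipeline_filter(points, N, dx, dy, k_min):
    """points: per-frame (x, y) capture results; N: odd pipeline length;
    dx, dy: per-frame fluctuation bounds (max_wx*dt*dAu, max_wy*dt*dAv)."""
    half = (N - 1) // 2
    valid = []
    for i in range(half, len(points) - half):
        xi, yi = points[i]
        kt = 0                                   # counter of consistent points
        for j in range(i - half, i + half + 1):
            if j == i:
                continue
            xj, yj = points[j]
            dn = abs(i - j) + 1                  # frame difference
            if (xj - dn * dx <= xi <= xj + dn * dx
                    and yj - dn * dy <= yi <= yj + dn * dy):
                kt += 1
        if kt >= k_min:                          # enough support: keep center
            valid.append((i, (xi, yi)))
    return valid
```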
2. Optical axis point position estimation
(2.1) Estimation of the initial optical axis point position
(2.1.1) Transformation model under the target coordinate system
As shown in FIG. 5, the position coordinates of the target point in the target coordinate system are transformed to the target point's pixel coordinates in the image coordinate system as follows:
In the target coordinate system, let the platform position at the imaging moment be P, the imaging attitude parameters be (h, α, θ, γ), the origin of the platform measurement coordinate system be $O_R(x_R, y_R, z_R)$, and the coordinates of the ground point at which the camera optical axis is aimed (hereinafter the ground optical axis point) be $O_o(x_o, y_o, z_o)$. Then:

$$x_o = x_R + h\cot\theta\cos\alpha$$

$$z_o = z_R + h\cot\theta\sin\alpha$$

$$y_R = h,\qquad y_o = 0$$

Without considering the roll angle γ, the coordinates (u, v) of the target point in the image are:

$$u = COL/2 + A_u\cdot dA_u$$

$$v = ROW/2 + (A_v - \theta)\cdot dA_v$$
wherein the transverse angular resolution is $dA_u = COL/\psi$ and the longitudinal angular resolution is $dA_v = ROW/\phi$, where $\psi$ is the transverse field angle, $\phi$ the longitudinal field angle, ROW the number of real-time imaging rows and COL the number of real-time imaging columns, and:

$$A_v = \tan^{-1}\!\left[\frac{h}{h\cot\theta - x_o\cos\alpha - z_o\sin\alpha}\right]$$

$$A_u = \tan^{-1}\!\left[\frac{x_o\sin\alpha - z_o\cos\alpha}{h/\sin A_v}\right]$$
The attitude roll angle produces a rotational transformation of the target point within the image plane; the coordinates become $(u_c, v_c)$:

$$\begin{pmatrix} u_c \\ v_c \end{pmatrix} = \begin{pmatrix} COL/2 \\ ROW/2 \end{pmatrix} + \begin{bmatrix} \cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma \end{bmatrix} dA \begin{pmatrix} A_u \\ A_v - \theta \end{pmatrix},\qquad dA = \begin{bmatrix} dA_u & 0 \\ 0 & dA_v \end{bmatrix}$$

As the above formulas show, once the ground optical axis point position $(x_o, y_o, z_o)$ in the target coordinate system is determined, the coordinates $(u_c, v_c)$ of the target point in the real-time image can be calculated given the imaging attitude parameters. The model can be expressed in the form:

$$(u_c, v_c) = F\left[(x_o, z_o)\mid h, \alpha, \theta, \gamma\right]$$

and the corresponding inverse transformation model is:

$$(x_o, z_o) = F^{-1}\left[(u_c, v_c)\mid h, \alpha, \theta, \gamma\right]$$
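The forward model F can be written down directly from these formulas. Below is a minimal Python sketch, under the assumption (followed by the expanded formulas later in this description) that dA = diag(dA_u, dA_v) scales the opening angles before the roll rotation; names are mine.

```python
import numpy as np

def pixel_from_ground(xo, zo, pose, dAu, dAv, ROW, COL):
    """(uc, vc) = F[(xo, zo) | h, alpha, theta, gamma]."""
    h, alpha, theta, gamma = pose
    # opening angles of the ground point relative to the optical axis
    Av = np.arctan2(h, h / np.tan(theta) - xo * np.cos(alpha) - zo * np.sin(alpha))
    Au = np.arctan((xo * np.sin(alpha) - zo * np.cos(alpha)) * np.sin(Av) / h)
    # scale by the angular resolutions, then rotate by the roll angle
    du = dAu * Au * np.cos(gamma) + dAv * (Av - theta) * np.sin(gamma)
    dv = -dAu * Au * np.sin(gamma) + dAv * (Av - theta) * np.cos(gamma)
    return COL / 2 + du, ROW / 2 + dv
```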
(2.1.2) Estimation of the ground optical axis point position
Since the target point's pixel coordinates in the image were obtained in the initial coordinate capture stage, the inverse transformation model above is applied, as shown in FIG. 5: the target initial coordinate position $(u_{c0}, v_{c0})$ in the image is taken as a known quantity and the corresponding ground optical axis point position $(x_{o0}, y_{o0}, z_{o0})$ in the target coordinate system is solved for. The specific calculation is as follows.
First, the opening angles $(A_{u0}, A_{v0})$ of the target point relative to the optical axis point in the transverse and longitudinal directions are calculated. The transformation model gives:

$$\begin{pmatrix} u_c \\ v_c \end{pmatrix} = \begin{pmatrix} COL/2 \\ ROW/2 \end{pmatrix} + \begin{bmatrix} \cos\gamma & \sin\gamma \\ -\sin\gamma & \cos\gamma \end{bmatrix} dA \begin{pmatrix} A_u \\ A_v - \theta \end{pmatrix}$$

Inverting the matrix yields:

$$\begin{pmatrix} A_{u0} \\ A_{v0} - \theta_0 \end{pmatrix} = dA^{-1}\begin{bmatrix} \cos\gamma_0 & -\sin\gamma_0 \\ \sin\gamma_0 & \cos\gamma_0 \end{bmatrix}\begin{pmatrix} u_{c0} - \frac{COL}{2} \\ v_{c0} - \frac{ROW}{2} \end{pmatrix}$$

so that:

$$A_{u0} = \frac{1}{dA_u}\left[\left(u_{c0} - \frac{COL}{2}\right)\cos\gamma_0 - \left(v_{c0} - \frac{ROW}{2}\right)\sin\gamma_0\right]$$

$$A_{v0} = \frac{1}{dA_v}\left[\left(u_{c0} - \frac{COL}{2}\right)\sin\gamma_0 + \left(v_{c0} - \frac{ROW}{2}\right)\cos\gamma_0\right] + \theta_0$$

From the obtained transverse and longitudinal opening angles $(A_{u0}, A_{v0})$, the corresponding ground optical axis point position $(x_{o0}, y_{o0}, z_{o0})$ in the target coordinate system is solved:

$$x_{o0} = \left(h_0\cot\theta_0 - \frac{h_0}{\tan A_{v0}}\right)\cos\alpha_0 + \frac{h_0\tan A_{u0}}{\sin A_{v0}}\sin\alpha_0$$

$$z_{o0} = \left(h_0\cot\theta_0 - \frac{h_0}{\tan A_{v0}}\right)\sin\alpha_0 - \frac{h_0\tan A_{u0}}{\sin A_{v0}}\cos\alpha_0$$

$$y_{o0} = 0$$

wherein $\alpha_0$ is the imaging azimuth angle, $\theta_0$ the imaging pitch angle, $\gamma_0$ the imager roll angle, $h_0$ the imaging height; $\phi$ is the longitudinal imaging field angle of the imager and $\psi$ the transverse imaging field angle; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns; $dA_u$ and $dA_v$ are the transverse and longitudinal angular resolutions, respectively.
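A matching sketch of the inverse model F⁻¹ under the same conventions; again the names are illustrative.

```python
import numpy as np

def axis_point_from_pixel(uc, vc, pose, dAu, dAv, ROW, COL):
    """(xo, zo) = F^-1[(uc, vc) | h, alpha, theta, gamma]."""
    h, alpha, theta, gamma = pose
    du, dv = uc - COL / 2, vc - ROW / 2
    # undo the roll rotation, then the angular-resolution scaling
    Au = (du * np.cos(gamma) - dv * np.sin(gamma)) / dAu
    Av = (du * np.sin(gamma) + dv * np.cos(gamma)) / dAv + theta
    r = h / np.tan(theta) - h / np.tan(Av)   # ground offset along the view azimuth
    s = h * np.tan(Au) / np.sin(Av)          # ground offset across the view azimuth
    return (r * np.cos(alpha) + s * np.sin(alpha),
            r * np.sin(alpha) - s * np.cos(alpha))
```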
(2.2) Recursive estimation of the optical axis point position
Under imaging conditions the ground optical axis point position changes as the platform moves, and the target point coordinates in the image change accordingly. From the position of the initial ground optical axis point in the target coordinate system, combined with the imaging attitude parameters of the current and next frames, the position offset $(\Delta P_{x(i)}, \Delta P_{z(i)})$ of the ground optical axis point in the target coordinate system can be calculated, and from it the ground optical axis point position $(x_{o(i)}, y_{o(i)}, z_{o(i)})$ of the next frame image estimated. The method is as follows:
As shown in FIG. 6, the target coordinate system is chosen as the reference frame. Assume that initially the imaging platform position is $P_0(x_{R0}, h_0, z_{R0})$ with corresponding ground optical axis position $Q_0(x_{o0}, 0, z_{o0})$, and that for the current frame image the platform position is $P_i(x_{R(i)}, h_i, z_{R(i)})$ with corresponding ground optical axis position $Q_i(x_{o(i)}, 0, z_{o(i)})$. The following relations hold.
Between $P_0$ and $Q_0$:

$$x_{R0} = x_{o0} - h_0\cot\theta_0\cos\alpha_0$$

$$z_{R0} = z_{o0} - h_0\cot\theta_0\sin\alpha_0$$

Between $P_i$ and $Q_i$:

$$x_{R(i)} = x_{o(i)} - h_i\cot\theta_i\cos\alpha_i$$

$$z_{R(i)} = z_{o(i)} - h_i\cot\theta_i\sin\alpha_i$$

The coordinate position difference between $P_0$ and $P_i$ is caused by the motion of the platform, so:

$$x_{R(i)} = x_{R0} + \Delta x_i,\qquad z_{R(i)} = z_{R0} + \Delta z_i$$

$\Delta x_i$ and $\Delta z_i$ are the movement amounts (possibly negative) of the imaging platform along the x-axis and z-axis of the target coordinate system, calculated as:

$$\Delta x_i = \Delta t_i\times\omega_x\times dA_u,\qquad \Delta z_i = \Delta t_i\times\omega_z\times dA_v$$

$$dA_u = COL/\psi,\qquad dA_v = ROW/\phi$$

where $\Delta t_i$ is the platform motion time from the initial frame to the current frame, and $\omega_x$, $\omega_z$ are the angular velocities of the imaging platform along the x-axis and z-axis directions of the target coordinate system.
From these relations:

$$x_{o(i)} = x_{o0} + \Delta P_{x(i)},\qquad z_{o(i)} = z_{o0} + \Delta P_{z(i)},\qquad y_{o(i)} = 0$$

with:

$$\Delta P_{x(i)} = h_i\cot\theta_i\cos\alpha_i - h_0\cot\theta_0\cos\alpha_0 + \Delta x_i$$

$$\Delta P_{z(i)} = h_i\cot\theta_i\sin\alpha_i - h_0\cot\theta_0\sin\alpha_0 + \Delta z_i$$

In many cases, when the frame-number difference between the two images is small and the imaging time difference is short, as shown in FIG. 8, the projections of $P_0$ and $P_i$ on the XOZ plane of the target coordinate system can be regarded as approximately coincident, and the platform movement along the x-axis and z-axis directions is neglected:

$$\Delta x_i = \Delta z_i = 0$$

In the derivation above, $\alpha_0$, $\theta_0$, $h_0$ are the azimuth angle, pitch angle and height of the current frame; $\alpha_i$, $\theta_i$, $h_i$ those of the next frame; $\phi$ is the longitudinal imaging field angle of the imager and $\psi$ the transverse imaging field angle; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns.
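The recursion of this subsection reduces to a few lines of code. A sketch, assuming the translation terms are supplied (or left at the zero default that the approximation above justifies for closely spaced frames):

```python
import numpy as np

def propagate_axis_point(xo0, zo0, pose0, pose_i, dxi=0.0, dzi=0.0):
    """Shift the ground optical axis point from the reference frame's
    attitude (pose0) to the next frame's attitude (pose_i)."""
    h0, a0, t0 = pose0[:3]
    hi, ai, ti = pose_i[:3]
    dPx = hi / np.tan(ti) * np.cos(ai) - h0 / np.tan(t0) * np.cos(a0) + dxi
    dPz = hi / np.tan(ti) * np.sin(ai) - h0 / np.tan(t0) * np.sin(a0) + dzi
    return xo0 + dPx, zo0 + dPz
```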
3. Target point coordinate prediction and accurate positioning
(3.1) Target point coordinate prediction
As shown in FIG. 7, once the ground optical axis position $(x_{o(i)}, y_{o(i)}, z_{o(i)})$ of the next frame image has been estimated, the pixel coordinate position $(u_{c(i)}, v_{c(i)})$ of the target point in the next frame image can be predicted from its imaging attitude parameters.
First, the opening angles $(A_{u(i)}, A_{v(i)})$ between the target point and the ground optical axis point are calculated. With the target coordinate system as the reference frame:

$$A_{v(i)} = \tan^{-1}\!\left[\frac{h_i}{h_i\cot\theta_i - x_{o(i)}\cos\alpha_i - z_{o(i)}\sin\alpha_i}\right]$$

$$A_{u(i)} = \tan^{-1}\!\left[\frac{x_{o(i)}\sin\alpha_i - z_{o(i)}\cos\alpha_i}{h_i/\sin A_{v(i)}}\right]$$

From these opening angles and the imaging roll angle $\gamma_i$, the target point pixel coordinate position $(u_{c(i)}, v_{c(i)})$ in the image is predicted:

$$\begin{pmatrix} u_{c(i)} \\ v_{c(i)} \end{pmatrix} = \begin{pmatrix} COL/2 \\ ROW/2 \end{pmatrix} + \begin{bmatrix} \cos\gamma_i & \sin\gamma_i \\ -\sin\gamma_i & \cos\gamma_i \end{bmatrix} dA \begin{pmatrix} A_{u(i)} \\ A_{v(i)} - \theta_i \end{pmatrix}$$

that is:

$$u_{c(i)} = dA_v\,(A_{v(i)} - \theta_i)\sin\gamma_i + dA_u\,A_{u(i)}\cos\gamma_i + \frac{COL}{2}$$

$$v_{c(i)} = dA_v\,(A_{v(i)} - \theta_i)\cos\gamma_i - dA_u\,A_{u(i)}\sin\gamma_i + \frac{ROW}{2}$$

where $\alpha_i$, $\theta_i$, $h_i$ are the azimuth angle, pitch angle and height of the next frame; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns; $dA_u$ and $dA_v$ are the transverse and longitudinal angular resolutions, respectively.
(3.2) Accurate positioning of the target point coordinates
To obtain accurate target point coordinates, a local search window centered on the predicted target point coordinates is constructed, and identification and localization are performed within it.
To determine the local search window range, the size of the target in the sequence image is first estimated from the flight attitude parameters; the column-direction and row-direction pixel extents of the target search area in the image are then fixed after comprehensively considering the imaging characteristics of the imager and the various error sources.
The region size of the target region in the sequence image is:

$$L\_Row = 2\times\frac{ROW}{\phi}\times\left(\arctan\frac{h}{\frac{h}{\tan\theta} - \frac{Length}{2}} - \theta\right)$$

with L_Col computed analogously from the target Width, the transverse field angle $\psi$ and COL. Length and Width are the actual length and width of the target in geodetic coordinates; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns; $\theta$ and $h$ are the imaging pitch angle and height, respectively.
Because the electro-optical platform measures the imaging attitude parameters with some error during actual imaging, the search area is expanded outward, after the various errors are considered together, by biasX pixels in the column direction and biasZ pixels in the row direction; the search area size is then:

$$T\_Row = L\_Row + 2\times biasX,\qquad T\_Col = L\_Col + 2\times biasZ$$

where biasX and biasZ are estimated from the limiting motion speed of the imaging platform during actual imaging:

$$biasX = \max\omega_x\times\Delta t\times dA_u,\qquad biasZ = \max\omega_z\times\Delta t\times dA_v$$

$\max\omega_x$ and $\max\omega_z$ are the maximum angular velocities of the aircraft along the x-axis and z-axis directions; $\Delta t$ is the time difference between two adjacent frames; $dA_u$ and $dA_v$ are the transverse and longitudinal angular resolutions, respectively.
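A sketch of the window computation. The patent spells out only L_Row; the L_Col line below uses an assumed symmetric form based on the slant range h/sinθ, so treat it as illustrative.

```python
import numpy as np

def search_window(h, theta, length, width, phi, psi, ROW, COL,
                  max_wx, max_wz, dt, dAu, dAv):
    """Return (T_Row, T_Col) of the local search window, in pixels."""
    L_Row = 2 * (ROW / phi) * (np.arctan(h / (h / np.tan(theta) - length / 2))
                               - theta)
    # assumed analogue for the cross direction (not spelled out in the text)
    L_Col = 2 * (COL / psi) * np.arctan(width * np.sin(theta) / (2 * h))
    biasX = max_wx * dt * dAu        # attitude-error margin, column direction
    biasZ = max_wz * dt * dAv        # attitude-error margin, row direction
    return L_Row + 2 * biasX, L_Col + 2 * biasZ
```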
(3.3) optical axis Point position update
And obtaining accurate target point coordinates in the current frame image in a local identification and positioning mode. Because the position of the ground optical axis is deduced according to the imaging attitude parameters, and the imaging platform has certain errors in the measurement process of the imaging attitude parameters, if the errors are always calculated, the errors are inevitably accumulated, thereby causing the capture failure.
And updating the position of the ground optical axis by using the accurate coordinates of the target point in the current frame image obtained by local identification and positioning, and eliminating the accumulated error of the target point coordinate estimation.
Selecting a target coordinate system as a reference system, and according to an opening angle calculation formula, the relation between the pixel coordinate position of a target point in an image and the opening angles of the target point and the ground optical axis point in the transverse and longitudinal directions is as follows:
<math> <mrow> <msup> <msub> <mi>A</mi> <mrow> <mi>u</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&CenterDot;</mo> </msup> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mi>d</mi> <msub> <mi>A</mi> <mi>u</mi> </msub> </mrow> </mfrac> <mo>[</mo> <mrow> <mo>(</mo> <msup> <msub> <mi>u</mi> <mrow> <mi>c</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&CenterDot;</mo> </msup> <mo>-</mo> <mfrac> <mi>COL</mi> <mn>2</mn> </mfrac> <mo>)</mo> </mrow> <mo>&times;</mo> <mi>cos</mi> <msub> <mi>&gamma;</mi> <mi>i</mi> </msub> <mo>-</mo> <mrow> <mo>(</mo> <msup> <msub> <mi>v</mi> <mrow> <mi>c</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&CenterDot;</mo> </msup> <mo>-</mo> <mfrac> <mi>ROW</mi> <mn>2</mn> </mfrac> <mo>)</mo> </mrow> <mo>&times;</mo> <mi>sin</mi> <msub> <mi>&gamma;</mi> <mi>i</mi> </msub> <mo>]</mo> </mrow> </math>
<math> <mrow> <msup> <msub> <mi>A</mi> <mrow> <mi>v</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&CenterDot;</mo> </msup> <mo>=</mo> <mfrac> <mn>1</mn> <mrow> <mi>d</mi> <msub> <mi>A</mi> <mi>v</mi> </msub> </mrow> </mfrac> <mo>[</mo> <mrow> <mo>(</mo> <msup> <msub> <mi>u</mi> <mrow> <mi>c</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&CenterDot;</mo> </msup> <mo>-</mo> <mfrac> <mi>COL</mi> <mn>2</mn> </mfrac> <mo>)</mo> </mrow> <mo>&CenterDot;</mo> <mi>sin</mi> <msub> <mi>&gamma;</mi> <mi>i</mi> </msub> <mo>+</mo> <mrow> <mo>(</mo> <msup> <msub> <mi>v</mi> <mrow> <mi>c</mi> <mrow> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </mrow> </msub> <mo>&CenterDot;</mo> </msup> <mo>-</mo> <mfrac> <mi>ROW</mi> <mn>2</mn> </mfrac> <mo>)</mo> </mrow> <mo>&times;</mo> <mi>cos</mi> <msub> <mi>&gamma;</mi> <mi>i</mi> </msub> <mo>]</mo> <mo>+</mo> <msub> <mi>&theta;</mi> <mi>i</mi> </msub> </mrow> </math>
From the obtained transverse and longitudinal opening angles (A_{u(i)}', A_{v(i)}'), the corresponding ground optical axis position (x_{o(i)}', 0, z_{o(i)}') in the target coordinate system is updated as follows:
$$x_{o(i)}' = \left(h_i\cot\theta_i - \frac{h_i}{\tan A_{v(i)}'}\right)\times\cos\alpha_i - \frac{h_i\times\tan A_{u(i)}'}{\sin A_{v(i)}'}\times\sin\alpha_i$$

$$z_{o(i)}' = \left(h_i\cot\theta_i - \frac{h_i}{\tan A_{v(i)}'}\right)\times\sin\alpha_i - \frac{h_i\times\tan A_{u(i)}'}{\sin A_{v(i)}'}\times\cos\alpha_i$$
where α_i, θ_i and h_i respectively represent the azimuth angle, pitch angle and height of the next frame imaging; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns; dA_u and dA_v represent the transverse and longitudinal angular resolutions, respectively.
After the updated ground optical axis position is obtained, the ground optical axis position of the next frame image is predicted from it, and the predictive positioning of target point coordinates in the sequence images is thus realized by recursion.
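The update in step (3.3) amounts to inverting the imaging geometry at the refined target pixel. A minimal Python sketch under the formulas above, assuming angles in radians; all names are illustrative:

```python
import math

def update_optical_axis(u_c, v_c, alpha, theta, h, gamma, d_a_u, d_a_v, row, col):
    """Re-derive the ground optical axis point from the refined target pixel.

    (u_c, v_c)          -- target pixel found by local identification
    alpha, theta, gamma -- imaging azimuth, pitch and roll angles (rad)
    h                   -- imaging height
    d_a_u, d_a_v        -- angular resolutions (pixels per radian)
    Returns (x_o, z_o); the optical axis point lies on the ground plane, y_o = 0.
    """
    # Transverse and longitudinal opening angles between target and optical axis
    a_u = ((u_c - col / 2) * math.cos(gamma) - (v_c - row / 2) * math.sin(gamma)) / d_a_u
    a_v = ((u_c - col / 2) * math.sin(gamma) + (v_c - row / 2) * math.cos(gamma)) / d_a_v + theta

    # Back-project the opening angles onto the ground plane (y = 0)
    along = h / math.tan(theta) - h / math.tan(a_v)  # offset along the line of sight
    cross = h * math.tan(a_u) / math.sin(a_v)        # lateral offset
    x_o = along * math.cos(alpha) - cross * math.sin(alpha)
    z_o = along * math.sin(alpha) - cross * math.cos(alpha)
    return x_o, z_o
```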

Claims (6)

1. A sequential image target recursive identification method based on platform inertial attitude parameters, specifically comprising the following steps:
(1) calculating the position of the optical axis point of the current frame image in a target coordinate system according to the target initial pixel coordinate;
(2) estimating the position of the optical axis point in the target coordinate system in the next frame image according to the position of the optical axis point in the target coordinate system in the current frame;
(3) predicting the target final pixel coordinate in the next frame image according to the position of the optical axis point of the next frame image in the target coordinate system;
(4) constructing a local search window in the next frame image centered on the target final pixel coordinate, updating the target final pixel coordinate by capturing the target within the local search window, and taking the updated target final pixel coordinate as the target initial pixel coordinate for the next identification.
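Read as an algorithm, claim 1 describes a predict-and-refine loop over the image sequence. A minimal Python skeleton of the four steps, with the geometric formulas of claims 3-5 and the local detector abstracted as caller-supplied functions (all names are illustrative):

```python
from typing import Callable, List, Sequence, Tuple

Pixel = Tuple[float, float]

def recursive_identify(
    frames: Sequence,            # sequence images; frames[i] pairs with attitudes[i]
    attitudes: Sequence,         # per-frame inertial attitude parameters
    initial_pixel: Pixel,        # from the initial full-image capture
    axis_from_pixel: Callable,   # step (1): (pixel, attitude) -> axis point
    predict_axis: Callable,      # step (2): (axis, attitude, next_attitude) -> next axis
    pixel_from_axis: Callable,   # step (3): (axis, next_attitude) -> predicted pixel
    locate_locally: Callable,    # step (4): (frame, predicted pixel) -> refined pixel
) -> List[Pixel]:
    """Skeleton of the four-step recursion of claim 1."""
    pixel = initial_pixel
    track = [pixel]
    for i in range(len(frames) - 1):
        axis = axis_from_pixel(pixel, attitudes[i])                     # step (1)
        axis_next = predict_axis(axis, attitudes[i], attitudes[i + 1])  # step (2)
        guess = pixel_from_axis(axis_next, attitudes[i + 1])            # step (3)
        pixel = locate_locally(frames[i + 1], guess)                    # step (4)
        track.append(pixel)
    return track
```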
2. The sequential image target recursive identification method based on platform inertial attitude parameters according to claim 1, wherein the target initial pixel coordinate for the first identification is obtained by full-image capture in the historical sequence images.
3. The sequential image target recursive identification method based on platform inertial attitude parameters according to claim 1, wherein the step (1) is specifically as follows: the position (x_{o0}, y_{o0}, z_{o0}) of the optical axis point in the target coordinate system in the current frame is calculated as:
$$x_{o0} = \left(h_0\cot\theta_0 - \frac{h_0}{\tan A_{v0}}\right)\times\cos\alpha_0 + \frac{h_0\times\tan A_{u0}}{\sin A_{v0}}\times\sin\alpha_0$$

$$z_{o0} = \left(h_0\cot\theta_0 - \frac{h_0}{\tan A_{v0}}\right)\times\sin\alpha_0 - \frac{h_0\times\tan A_{u0}}{\sin A_{v0}}\times\cos\alpha_0$$

$$y_{o0} = 0$$
wherein
$$A_{u0} = \frac{1}{dA_u}\left[\left(u_{c0} - \frac{COL}{2}\right)\times\cos\gamma_0 - \left(v_{c0} - \frac{ROW}{2}\right)\times\sin\gamma_0\right]$$

$$A_{v0} = \frac{1}{dA_v}\left[\left(u_{c0} - \frac{COL}{2}\right)\times\sin\gamma_0 + \left(v_{c0} - \frac{ROW}{2}\right)\times\cos\gamma_0\right]$$
$$dA_u = COL/\psi$$

$$dA_v = ROW/\phi$$
α_0 is the imager azimuth angle, θ_0 the imager pitch angle, h_0 the imager height and γ_0 the imager roll angle; φ is the longitudinal imaging field angle of the imager and ψ the transverse imaging field angle; ROW is the number of real-time imaging rows, COL the number of real-time imaging columns, and (u_{c0}, v_{c0}) is the target initial position.
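A minimal Python sketch of this computation, assuming angles in radians; psi stands in for the transverse field-angle symbol (rendered only as an image in the source), and all names are illustrative:

```python
import math

def initial_axis_point(u_c0, v_c0, alpha0, theta0, h0, gamma0, phi, psi, row, col):
    """Initial ground optical axis point per the formulas of claim 3."""
    d_a_u = col / psi   # transverse angular resolution (pixels per radian)
    d_a_v = row / phi   # longitudinal angular resolution (pixels per radian)
    a_u0 = ((u_c0 - col / 2) * math.cos(gamma0) - (v_c0 - row / 2) * math.sin(gamma0)) / d_a_u
    a_v0 = ((u_c0 - col / 2) * math.sin(gamma0) + (v_c0 - row / 2) * math.cos(gamma0)) / d_a_v
    along = h0 / math.tan(theta0) - h0 / math.tan(a_v0)
    cross = h0 * math.tan(a_u0) / math.sin(a_v0)
    x_o0 = along * math.cos(alpha0) + cross * math.sin(alpha0)
    z_o0 = along * math.sin(alpha0) - cross * math.cos(alpha0)
    return x_o0, 0.0, z_o0   # y_o0 = 0: the axis point lies on the ground plane
```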
4. The sequential image target recursive identification method based on platform inertial attitude parameters according to claim 2, wherein the step (2) is specifically as follows: the position (x_{o(i)}, y_{o(i)}, z_{o(i)}) of the optical axis point in the target coordinate system in the next frame is calculated as:
$$x_{o(i)} = x_{o0} + \Delta P_{x(i)}$$

$$z_{o(i)} = z_{o0} + \Delta P_{z(i)}$$

$$y_{o(i)} = 0$$
wherein,
$$\Delta P_{x(i)} = h_i\cot\theta_i\cos\alpha_i - h_0\cot\theta_0\cos\alpha_0 + \Delta x_i$$

$$\Delta P_{z(i)} = h_i\cot\theta_i\sin\alpha_i - h_0\cot\theta_0\sin\alpha_0 + \Delta z_i$$
and,
$$\Delta x_i = \Delta t_i\times\omega_x\times dA_u$$

$$\Delta z_i = \Delta t_i\times\omega_z\times dA_v$$

$$dA_v = ROW/\phi$$
(x_{o0}, 0, z_{o0}) is the position of the optical axis point of the current frame in the target coordinate system; Δt_i represents the imaging time difference from the current frame to the next frame; ω_x and ω_z are the angular velocities of the imaging platform along the x-axis and z-axis of the target coordinate system; α_0, θ_0, h_0 respectively represent the azimuth angle, pitch angle and height of the current frame imaging; α_i, θ_i, h_i respectively represent the imaging azimuth angle, pitch angle and height corresponding to the next frame image; φ is the longitudinal imaging field angle of the imager and ψ the transverse imaging field angle; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns.
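A minimal Python sketch of this prediction step, under the same assumptions as above (angles in radians; names illustrative):

```python
import math

def predict_axis_point(x_o0, z_o0, att0, att_i, w_x, w_z, dt_i, d_a_u, d_a_v):
    """Next-frame optical axis point per the formulas of claim 4.

    att0, att_i -- (alpha, theta, h) for the current and the next frame
    w_x, w_z    -- platform angular rates along the target-frame x and z axes
    dt_i        -- imaging time difference between the two frames
    """
    alpha0, theta0, h0 = att0
    alpha_i, theta_i, h_i = att_i
    dx_i = dt_i * w_x * d_a_u
    dz_i = dt_i * w_z * d_a_v
    dp_x = ((h_i / math.tan(theta_i)) * math.cos(alpha_i)
            - (h0 / math.tan(theta0)) * math.cos(alpha0) + dx_i)
    dp_z = ((h_i / math.tan(theta_i)) * math.sin(alpha_i)
            - (h0 / math.tan(theta0)) * math.sin(alpha0) + dz_i)
    return x_o0 + dp_x, 0.0, z_o0 + dp_z
```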
5. The sequential image target recursive identification method based on platform inertial attitude parameters according to claim 2, wherein the step (3) is specifically as follows: the target final pixel coordinate (u_{c(i)}, v_{c(i)}) is calculated as:
$$u_{c(i)} = dA_u\times\left[(A_{v(i)} - \theta_i)\times\sin\gamma_i + A_{u(i)}\times\cos\gamma_i\right]$$

$$v_{c(i)} = dA_v\times\left[(A_{v(i)} - \theta_i)\times\cos\gamma_i - A_{u(i)}\times\sin\gamma_i\right]$$
wherein,
$$A_{v(i)} = \tan^{-1}\left[\frac{h_i}{h_i\cot\theta_i - x_{o(i)}\cos\alpha_i - z_{o(i)}\sin\alpha_i}\right]$$

$$A_{u(i)} = \tan^{-1}\left[\frac{x_{o(i)}\sin\alpha_i - z_{o(i)}\cos\alpha_i}{h_i/\sin A_{v(i)}}\right]$$

$$dA_v = ROW/\phi$$
α_i, θ_i, h_i and γ_i respectively represent the azimuth angle, pitch angle, height and roll angle corresponding to the next frame image imaging; (x_{o(i)}, 0, z_{o(i)}) is the position of the optical axis point in the next frame image in the target coordinate system; φ is the longitudinal imaging field angle of the imager and ψ the transverse imaging field angle; ROW is the number of real-time imaging rows and COL the number of real-time imaging columns.
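A minimal Python sketch of this pixel-prediction step (angles in radians; names illustrative):

```python
import math

def predict_target_pixel(x_oi, z_oi, alpha_i, theta_i, h_i, gamma_i, d_a_u, d_a_v):
    """Predicted target pixel in the next frame per the formulas of claim 5."""
    # Longitudinal then transverse opening angle of the predicted axis point
    a_v = math.atan(h_i / (h_i / math.tan(theta_i)
                           - x_oi * math.cos(alpha_i)
                           - z_oi * math.sin(alpha_i)))
    a_u = math.atan((x_oi * math.sin(alpha_i) - z_oi * math.cos(alpha_i))
                    / (h_i / math.sin(a_v)))
    # Rotate the opening angles into pixel coordinates through the roll angle
    u_c = d_a_u * ((a_v - theta_i) * math.sin(gamma_i) + a_u * math.cos(gamma_i))
    v_c = d_a_v * ((a_v - theta_i) * math.cos(gamma_i) - a_u * math.sin(gamma_i))
    return u_c, v_c
```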
6. The sequential image target recursive identification method based on platform inertial attitude parameters according to claim 1, 2, 3, 4 or 5, wherein the length T_Row and the width T_Col of the local search window in step (4) are respectively:
T_Row=L_Row+2×biasX
T_Col=L_Col+2×biasZ
wherein,
$$L\_Row = 2\times\frac{ROW}{\phi}\times\left(\arctan\frac{h}{\frac{h}{\tan\theta} - \frac{Length}{2}} - \theta\right)$$
L_Col is given by the corresponding transverse formula, computed from Width, COL, ψ, θ and h (preserved only as an image in the source document).
$$biasX = \max\omega_x\times\Delta t\times dA_u$$

$$biasZ = \max\omega_z\times\Delta t\times dA_v$$
length, Width represents the actual Length and Width of the target under the geodetic coordinates, phi is the longitudinal imaging field angle of the imager,
Figure FDA0000090119310000043
the imaging angle is the transverse imaging angle of the imager, ROW is the real-time imaging line number, COL is the real-time imaging column number, and theta and h are the imaging pitch angle and height respectively; max omegax,maxωzRespectively representing the maximum angular speed of the aircraft in the direction of an x axis and the direction of a z axis; at represents the time difference between two adjacent frames of images of the aircraft, dAuAnd dAvRepresenting the lateral and longitudinal angular resolutions, respectively.
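A minimal Python sketch of the window-size computation. Because the L_Col formula survives only as an image in the source, it is taken here as a precomputed input; all names are illustrative:

```python
import math

def search_window_size(length, h, theta, phi, row, l_col,
                       max_w_x, max_w_z, dt, d_a_u, d_a_v):
    """Local search window (T_Row, T_Col) per claim 6.

    length -- actual target length on the ground
    theta, h -- imaging pitch angle (rad) and height
    l_col  -- base column extent of the target footprint (formula not recoverable here)
    """
    near_edge = h / math.tan(theta) - length / 2.0   # ground range to the target's near edge
    l_row = 2.0 * (row / phi) * (math.atan(h / near_edge) - theta)
    bias_x = max_w_x * dt * d_a_u
    bias_z = max_w_z * dt * d_a_v
    return l_row + 2 * bias_x, l_col + 2 * bias_z    # (T_Row, T_Col)
```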
CN 201110266278 2011-09-09 2011-09-09 Series image target recursive identification method based on platform inertia attitude parameter Expired - Fee Related CN102359788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110266278 CN102359788B (en) 2011-09-09 2011-09-09 Series image target recursive identification method based on platform inertia attitude parameter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110266278 CN102359788B (en) 2011-09-09 2011-09-09 Series image target recursive identification method based on platform inertia attitude parameter

Publications (2)

Publication Number Publication Date
CN102359788A true CN102359788A (en) 2012-02-22
CN102359788B CN102359788B (en) 2013-02-13

Family

ID=45585152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110266278 Expired - Fee Related CN102359788B (en) 2011-09-09 2011-09-09 Series image target recursive identification method based on platform inertia attitude parameter

Country Status (1)

Country Link
CN (1) CN102359788B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456026A (en) * 2013-07-29 2013-12-18 华中科技大学 Method for detecting ground moving object under road landmark constraints
CN106488143A (en) * 2015-08-26 2017-03-08 刘进 The method of object, system and filming apparatus in a kind of generation video data, marking video
CN107203770A (en) * 2017-05-27 2017-09-26 上海航天控制技术研究所 A kind of optics strapdown seeker image tracking method
CN111862200A (en) * 2020-06-30 2020-10-30 同济大学 Method for positioning unmanned aerial vehicle in coal shed

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1750029A (en) * 2005-10-24 2006-03-22 南京大学 Optimizing platform based on evolution algorithm
CN101609504B (en) * 2009-07-21 2011-04-20 华中科技大学 Method for detecting, distinguishing and locating infrared imagery sea-surface target
CN102096086A (en) * 2010-11-22 2011-06-15 北京航空航天大学 Self-adaptive filtering method based on different measuring characteristics of GPS (Global Positioning System)/INS (Inertial Navigation System) integrated navigation system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1750029A (en) * 2005-10-24 2006-03-22 南京大学 Optimizing platform based on evolution algorithm
CN101609504B (en) * 2009-07-21 2011-04-20 华中科技大学 Method for detecting, distinguishing and locating infrared imagery sea-surface target
CN102096086A (en) * 2010-11-22 2011-06-15 北京航空航天大学 Self-adaptive filtering method based on different measuring characteristics of GPS (Global Positioning System)/INS (Inertial Navigation System) integrated navigation system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Wu Yiquan et al., "A threshold selection method capable of effectively segmenting small-target images", Acta Armamentarii, Vol. 32, No. 4, 30 April 2011, full text; relevant to claims 1, 2 *
Jing Genqiang et al., "A locally adaptive sequence image mosaicking algorithm", Infrared and Laser Engineering, Vol. 33, No. 6, 31 December 2004, full text; relevant to claims 1, 2 *
Zhang Tianxu et al., "A new multi-scale intelligent recursive recognition method for three-dimensional moving targets", Acta Automatica Sinica, Vol. 32, No. 5, 30 September 2006, full text; relevant to claims 1, 2 *
Liu Xiang et al., "Research on systematic geometric correction of three-axis stabilized satellite images", Computer & Digital Engineering, Vol. 33, No. 11, 31 December 2005, full text; relevant to claims 1, 2 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103456026A (en) * 2013-07-29 2013-12-18 华中科技大学 Method for detecting ground moving object under road landmark constraints
CN103456026B (en) * 2013-07-29 2016-11-16 华中科技大学 A kind of Ground moving target detection method under highway terrestrial reference constraint
CN106488143A (en) * 2015-08-26 2017-03-08 刘进 The method of object, system and filming apparatus in a kind of generation video data, marking video
CN106488143B (en) * 2015-08-26 2019-08-16 刘进 Method, system and shooting apparatus for generating video data and marking objects in the video
CN107203770A (en) * 2017-05-27 2017-09-26 上海航天控制技术研究所 A kind of optics strapdown seeker image tracking method
CN107203770B (en) * 2017-05-27 2020-07-31 上海航天控制技术研究所 Optical strapdown seeker image tracking method
CN111862200A (en) * 2020-06-30 2020-10-30 同济大学 Method for positioning unmanned aerial vehicle in coal shed
CN111862200B (en) * 2020-06-30 2023-04-28 同济大学 Unmanned aerial vehicle positioning method in coal shed

Also Published As

Publication number Publication date
CN102359788B (en) 2013-02-13

Similar Documents

Publication Publication Date Title
CN110472496B (en) Traffic video intelligent analysis method based on target detection and tracking
CN102999759B (en) Vehicle motion state estimation method based on optical flow
CN106529587B (en) Vision course recognition methods based on object detection
CN103697855B (en) A kind of hull horizontal attitude measuring method detected based on sea horizon
CN105644785B (en) A kind of UAV Landing method detected based on optical flow method and horizon
CN104408725B (en) A kind of target reacquisition system and method based on TLD optimized algorithms
CN103791902B (en) Star sensor autonomous navigation method applicable to highly maneuverable carriers
WO2013133129A1 (en) Moving-object position/attitude estimation apparatus and method for estimating position/attitude of moving object
CN103106667A (en) Motion target tracing method towards shielding and scene change
CN102629329B (en) Personnel indoor positioning method based on adaptive SIFI (scale invariant feature transform) algorithm
CN107831776A (en) Autonomous return method for unmanned aerial vehicles based on nine-axis inertial sensors
CN101750017A (en) Visual detection method of multi-movement target positions in large view field
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN102359788B (en) Series image target recursive identification method based on platform inertia attitude parameter
CN117173215B (en) Inland navigation ship whole-course track identification method and system crossing cameras
CN114708293A (en) Robot motion estimation method based on deep learning point-line feature and IMU tight coupling
CN110889353B (en) Space target identification method based on primary focus large-visual-field photoelectric telescope
CN115077519A (en) Positioning and mapping method and device based on template matching and laser inertial navigation loose coupling
CN113554705B (en) Laser radar robust positioning method under changing scene
CN104567879A (en) Method for extracting geocentric direction of combined view field navigation sensor
CN102288134B (en) Perspective projection-based method for measuring spatial rotary moving parameters of circular object
CN103488801A (en) Geographical information space database-based airport target detection method
CN103791901B (en) A kind of star sensor data processes system
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN104484647A (en) High-resolution remote sensing image cloud height detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130213