CN115657072A - Aerial real-time image-based space target positioning method - Google Patents

Aerial real-time image-based space target positioning method

Info

Publication number
CN115657072A
Authority
CN
China
Prior art keywords
pixel
target object
real
unmanned aerial
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211347168.0A
Other languages
Chinese (zh)
Inventor
陈伟
查俊林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Innovation Center of Beijing University of Technology
Original Assignee
Chongqing Innovation Center of Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Innovation Center of Beijing University of Technology filed Critical Chongqing Innovation Center of Beijing University of Technology
Priority to CN202211347168.0A priority Critical patent/CN115657072A/en
Publication of CN115657072A publication Critical patent/CN115657072A/en
Pending legal-status Critical Current

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a space target positioning method based on aerial real-time images. A visual pod with a laser ranging function is used to acquire the distance between a target object and the unmanned aerial vehicle together with a real-time image picture, while the real-time parameters of the unmanned aerial vehicle are acquired at the same time. A plane positioning method then computes the height and the longitude and latitude of the target object from the real-time parameters of the unmanned aerial vehicle, the measured distance and the real-time image picture.

Description

Space target positioning method based on aerial real-time image
Technical Field
The invention relates to the technical field of unmanned aerial vehicle vision, in particular to a space target positioning method based on aerial real-time images.
Background
With the development of small unmanned aerial vehicle technology and 5G communication in recent years, small civil unmanned aerial vehicles have been widely applied, and higher requirements have been placed on their functions. The existing remote control, image transmission and data transmission technologies for unmanned aerial vehicles are relatively mature, and target detection and recognition techniques have emerged in rapid succession with the development of computer vision, so the accuracy and efficiency problems of greatest concern in target detection are now better solved.
However, for unmanned aerial vehicle aerial images, real-time positioning after target detection and recognition still needs improvement. At present, image-based three-dimensional target positioning relies mainly on methods such as point cloud acquisition and three-dimensional map construction. These algorithms require substantial computing power and a map constructed in advance, and are therefore difficult to apply to real-time unmanned aerial vehicle reconnaissance images with strict timing requirements: their implementation is complex and their real-time performance is poor.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a space target positioning method based on aerial real-time images, which aims to solve the problems of complex implementation and poor real-time performance that arise in the related art for real-time unmanned aerial vehicle reconnaissance images with strict timing requirements.
The invention provides a space target positioning method based on aerial real-time images, which comprises the following steps:
s1, an unmanned aerial vehicle acquires a real-time image based on a visual pod with a laser ranging function, and acquires a distance P2 between a target object and the unmanned aerial vehicle based on the visual pod;
s2, acquiring real-time parameters of the unmanned aerial vehicle, wherein the real-time parameters at least comprise a flight height H, a visual nacelle pitch angle alfa, a visual nacelle yaw angle belta, a visual nacelle transverse field angle Wangle, a longitudinal field angle Hangle, image center coordinates (uc, vc) and a focal length f;
s3, respectively selecting pixel coordinates (u, v) of a target object and vertical projection pixel coordinates (u 1, v 1) of the target object in the real-time image picture;
s4, calculating a height difference Hp between the target object and a visual pod of the unmanned aerial vehicle through the distance P2 and a vertical deflection angle of the target object relative to the unmanned aerial vehicle, and obtaining the height Ht of the target object according to the flight height of the unmanned aerial vehicle and the height difference;
and S5, calculating the longitude and latitude of the target object by combining the height difference Hp, the pixel point coordinates (u, v) of the target object and the real-time parameters by using a plane positioning method.
Optionally, determining the vertical deflection angle of the target object relative to the unmanned aerial vehicle includes:
calculating the vertical deflection angle from the visual pod pitch angle alfa, the pixel coordinates (u, v) of the target object, the image center coordinates (uc, vc), the size of each pixel in the image and the focal length f, wherein the calculation formula of the vertical deflection angle is as follows:
gama = -alfa + arctan((v - vc) * Yp / f)
where Yp is the width of each pixel in the image frame.
Optionally, determining the size of each pixel in the image frame includes:
the pixel size includes a pixel length Xp and a pixel width Yp; the pixel length Xp is calculated from the focal length f and the transverse field angle Wangle of the visual pod, and the calculation formula of the pixel length Xp is as follows:
Xp = 2 × f × tan(Wangle/2) / number of pixel columns
The pixel width Yp is calculated from the focal length f and the longitudinal field angle Hangle, and the calculation formula of the pixel width Yp is as follows:
Yp = 2 × f × tan(Hangle/2) / number of pixel rows
Optionally, determining the pixel width Yp further includes:
adjusting the pixel width Yp by a longitudinal resolution adjustment parameter Ys to form a new pixel width Yp, wherein the longitudinal resolution adjustment parameter Ys is 0.835.
Optionally, the calculating of the height difference Hp between the target object and the visual pod of the unmanned aerial vehicle from the distance P2 and the vertical deflection angle of the target object relative to the unmanned aerial vehicle includes:
judging whether the vertical projection pixel coordinates (u1, v1) of the target object and the image center coordinates (uc, vc) are the same point; if so, the calculation formula of the height difference Hp is as follows:
Hp=P2*sin(gama)
if not, the calculation formula of the height difference Hp is as follows:
Hp=P2*cos(alfa)*tan(gama)
Compared with the prior art, the invention has the following beneficial effects:
the invention obtains the distance between the target object and the unmanned aerial vehicle together with a real-time image picture through a visual pod with a laser ranging function, simultaneously obtains the real-time parameters of the unmanned aerial vehicle, and uses a plane positioning method to compute the height and the longitude and latitude of the target object from the real-time parameters of the unmanned aerial vehicle, the measured distance and the real-time image picture.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. Like-numbered functional units in the examples of the present invention have the same or similar structure and function.
Referring to fig. 1, the invention provides a space target positioning method based on aerial real-time images, comprising the following steps:
s1, an unmanned aerial vehicle acquires a real-time image based on a visual pod with a laser ranging function, and acquires a distance P2 between a target object and the unmanned aerial vehicle based on the visual pod;
s2, acquiring real-time parameters of the unmanned aerial vehicle, wherein the real-time parameters at least comprise a flight height H, a visual nacelle pitch angle alfa, a visual nacelle yaw angle belta, a visual nacelle transverse field angle Wangle, a longitudinal field angle Hangle, image center coordinates (uc, vc) and a focal length f;
s3, respectively selecting pixel coordinates (u, v) of a target object and vertical projection pixel coordinates (u 1, v 1) of the target object in the real-time image picture;
s4, calculating a height difference Hp between a target object and a visual pod of the unmanned aerial vehicle through the distance P2 and a vertical deflection angle of the target object relative to the unmanned aerial vehicle, and obtaining the height Ht of the target object according to the flight height of the unmanned aerial vehicle and the height difference;
and S5, calculating the longitude and latitude of the target object by using a plane positioning method and combining the height difference Hp, the pixel point coordinates (u, v) of the target object and the real-time parameters.
In this embodiment, a visual pod with a laser ranging function is used to obtain the distance between the target object and the unmanned aerial vehicle together with a real-time image picture in which measurements can be made; the real-time parameters of the unmanned aerial vehicle are obtained at the same time, and a plane positioning method is used to compute the height and the longitude and latitude of the target object from the real-time parameters of the unmanned aerial vehicle, the measured distance and the real-time image picture.
Optionally, determining the vertical deflection angle of the target object relative to the unmanned aerial vehicle includes:
calculating the vertical deflection angle from the visual pod pitch angle alfa, the pixel coordinates (u, v) of the target object, the image center coordinates (uc, vc), the size of each pixel in the image and the focal length f, wherein the calculation formula of the vertical deflection angle is as follows:
gama = -alfa + arctan((v - vc) * Yp / f)
where Yp is the width of each pixel in the image frame.
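For illustration only (this sketch is not part of the patent text), the formula above might be coded in Python as follows, assuming all angles are in radians and that Yp and f share the same physical unit; the function name is hypothetical:

import math

# Hypothetical helper mirroring gama = -alfa + arctan((v - vc) * Yp / f).
# Angles are assumed to be in radians; Yp and f share one unit (e.g. mm).
def vertical_deflection(alfa, v, vc, Yp, f):
    return -alfa + math.atan((v - vc) * Yp / f)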
Optionally, determining the size of each pixel in the image frame includes:
the pixel size includes a pixel length Xp and a pixel width Yp; the pixel length Xp is calculated from the focal length f and the transverse field angle Wangle of the visual pod, and the calculation formula of the pixel length Xp is as follows:
Xp = 2 × f × tan(Wangle/2) / number of pixel columns
The pixel width Yp is calculated from the focal length f and the longitudinal field angle Hangle, and the calculation formula of the pixel width Yp is as follows:
Yp = 2 × f × tan(Hangle/2) / number of pixel rows
In this embodiment, the resolution of the real-time image obtained by the visual pod is 1920 × 1080, the number of pixel columns is 1920, and the number of pixel rows is 1080.
Optionally, determining the pixel width Yp further includes:
adjusting the pixel width Yp by a longitudinal resolution adjustment parameter Ys to form a new pixel width Yp, wherein the longitudinal resolution adjustment parameter Ys is 0.835.
In this embodiment, for a real-time picture with 1920 × 1080 resolution, the pixel width is adjusted: the new pixel width Yp is obtained by multiplying the old pixel width by the longitudinal resolution adjustment parameter Ys, i.e. Yp(new) = Yp(old) × Ys, and replaces the previous pixel width Yp. The longitudinal resolution adjustment parameter Ys is a coefficient obtained from test experimental data, and this adjustment helps improve the positioning accuracy.
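As an illustrative sketch only (not part of the patent text), the pixel size computation for the 1920 × 1080 embodiment, including the longitudinal resolution adjustment, might be written in Python as follows; the function name and argument defaults are hypothetical:

import math

# Hypothetical helper for the pixel-size formulas above. Field angles Wangle
# and Hangle are assumed to be in radians; cols = 1920, rows = 1080 and
# Ys = 0.835 follow the embodiment described in the text.
def pixel_size(f, Wangle, Hangle, cols=1920, rows=1080, Ys=0.835):
    Xp = 2 * f * math.tan(Wangle / 2) / cols   # pixel length
    Yp = 2 * f * math.tan(Hangle / 2) / rows   # pixel width before adjustment
    return Xp, Yp * Ys                         # apply longitudinal adjustment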
Optionally, the calculating of the height difference Hp between the target object and the visual pod of the unmanned aerial vehicle from the distance P2 and the vertical deflection angle of the target object relative to the unmanned aerial vehicle includes:
judging whether the vertical projection pixel coordinates (u1, v1) of the target object and the image center coordinates (uc, vc) are the same point; if so, the calculation formula of the height difference Hp is as follows:
Hp=P2*sin(gama)
if not, the calculation formula of the height difference Hp is as follows:
Hp=P2*cos(alfa)*tan(gama)
In this embodiment, the real-time image from the visual pod may be fed back to a display unit in the background. An operator in the background can click on the real-time image shown on the display unit with a mouse to select the pixel coordinates (u, v) of the target object and the vertical projection pixel coordinates (u1, v1) of the target object in the real-time image. The vertical projection pixel coordinates (u1, v1) of the target object usually coincide with the image center coordinates (uc, vc); if they do not, (u1, v1) may be treated as the center point in the calculation, and the pixel deviation of the target coordinates (u, v) from the image center is replaced by their deviation from the vertical projection pixel coordinates (u1, v1).
If the vertical projection pixel coordinates (u1, v1) of the target object and the image center coordinates (uc, vc) are the same point, the calculation formula of the height difference Hp is as follows:
Hp = P2 * sin(gama)
If they are not the same point, that is, if the vertical projection pixel coordinates (u1, v1) and the image center coordinates (uc, vc) lie on the same wall surface of a building but are not the same point, the calculation formula of the height difference Hp is as follows:
Hp = P2 * cos(alfa) * tan(gama)
The two formulas coincide when the ranging point and the target point are the same point, in which case gama = alfa; for any other point they differ. The height Ht of the target object then follows from the real-time height H of the unmanned aerial vehicle:
Ht = H - Hp
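A minimal Python sketch of this two-case height computation (illustrative only, with a hypothetical function name; angles are assumed to be in radians):

import math

# Hypothetical helper mirroring the two Hp formulas and Ht = H - Hp.
# same_point is True when the vertical projection pixel of the target
# coincides with the image center, i.e. with the laser ranging point.
def target_height(P2, alfa, gama, H, same_point):
    if same_point:
        Hp = P2 * math.sin(gama)
    else:
        Hp = P2 * math.cos(alfa) * math.tan(gama)
    return Hp, H - Hp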
the calculation process of the longitude and latitude of the target object is as follows:
calculating the longitudinal deflection angle Vgama of the target object relative to the central point, namely:
Vgama = arctan((v - vc) * Yp / f)
calculating the component Y of the distance between the target object and the unmanned aerial vehicle along the longitudinal axis of the pod, namely:
Y = H / tan(gama)
calculating the focal length Px for the pixel row of the image containing the target object, namely:
Px = f / cos(Vgama)
calculating the horizontal deflection angle Delta of the target object, namely:
Delta = arctan((u - uc) * Xp / Px)
calculating the component X of the distance between the target object and the unmanned aerial vehicle along the transverse axis of the pod, namely:
X = Y * tan(Delta)
calculating the distance P between the ground projection of the target object and that of the unmanned aerial vehicle, namely:
P = sqrt(X*X + Y*Y)
calculating the course deflection angle courseAngle by which the target object deviates from the visual pod direction, namely:
courseAngle = arctan(X / Y)
From the longitude and latitude (lon, lat) fed back by the sensors of the unmanned aerial vehicle, the earth radius r, the heading angle airplaneCoa of the unmanned aerial vehicle, the yaw angle belta of the visual pod and the distance between the ground projections of the target object and the unmanned aerial vehicle (distance = P), the bearing from the unmanned aerial vehicle to the target object is bring = airplaneCoa + belta + courseAngle, and the longitude and latitude (lon2, lat2) of the target object can be calculated with the following formulas:
lat2 = arcsin(sin(lat) * cos(distance/r) + cos(lat) * sin(distance/r) * cos(bring))
lon2 = lon + arctan2(sin(bring) * sin(distance/r) * cos(lat), cos(distance/r) - sin(lat) * sin(lat2))
The space positioning of the target object is thus realized from the longitude and latitude (lon2, lat2) of the target object and the height Ht of the target object.
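For illustration (not part of the patent text), the whole longitude and latitude chain above can be sketched in Python as follows. It mirrors the printed formulas step by step, including Y = H / tan(gama) exactly as written, and assumes that all angles and lon/lat values are in radians and that r and P share one length unit; the function name is hypothetical:

import math

# Hypothetical end-to-end sketch of the plane positioning chain above.
def locate_target(u, v, uc, vc, Xp, Yp, f, H, gama,
                  lon, lat, r, airplaneCoa, belta):
    Vgama = math.atan((v - vc) * Yp / f)       # longitudinal deflection to center
    Y = H / math.tan(gama)                     # component on pod's longitudinal axis
    Px = f / math.cos(Vgama)                   # focal length for the target's pixel row
    Delta = math.atan((u - uc) * Xp / Px)      # horizontal deflection angle
    X = Y * math.tan(Delta)                    # component on pod's transverse axis
    P = math.hypot(X, Y)                       # ground-projection distance
    courseAngle = math.atan(X / Y)             # deflection from the pod direction
    bring = airplaneCoa + belta + courseAngle  # bearing from UAV to target
    lat2 = math.asin(math.sin(lat) * math.cos(P / r)
                     + math.cos(lat) * math.sin(P / r) * math.cos(bring))
    lon2 = lon + math.atan2(math.sin(bring) * math.sin(P / r) * math.cos(lat),
                            math.cos(P / r) - math.sin(lat) * math.sin(lat2))
    return lon2, lat2

In a real system the sign conventions of alfa and gama and the units returned by the pod telemetry (degrees versus radians) would have to be checked against the actual hardware before such a sketch could be used.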
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (5)

1. A space target positioning method based on aerial real-time images is characterized by comprising the following steps:
s1, an unmanned aerial vehicle acquires a real-time image based on a visual pod with a laser ranging function, and acquires a distance P2 between a target object and the unmanned aerial vehicle based on the visual pod;
s2, acquiring real-time parameters of the unmanned aerial vehicle, wherein the real-time parameters at least comprise a flight height H, a visual pod pitch angle alfa, a visual pod yaw angle belta, a visual pod transverse field angle Wangle, a longitudinal field angle Hangle, image center coordinates (uc, vc) and a focal length f;
s3, respectively selecting pixel coordinates (u, v) of a target object and vertical projection pixel coordinates (u 1, v 1) of the target object in the real-time image picture;
s4, calculating a height difference Hp between a target object and a visual pod of the unmanned aerial vehicle through the distance P2 and a vertical deflection angle of the target object relative to the unmanned aerial vehicle, and obtaining the height Ht of the target object according to the flight height of the unmanned aerial vehicle and the height difference;
and S5, calculating the longitude and latitude of the target object by combining the height difference Hp, the pixel point coordinates (u, v) of the target object and the real-time parameters by using a plane positioning method.
2. The aerial real-time image-based space target positioning method according to claim 1, wherein determining the vertical deflection angle of the target object relative to the unmanned aerial vehicle comprises:
calculating the vertical deflection angle from the visual pod pitch angle alfa, the pixel coordinates (u, v) of the target object, the image center coordinates (uc, vc), the size of each pixel in the image and the focal length f, wherein the calculation formula of the vertical deflection angle is as follows:
gama = -alfa + arctan((v - vc) * Yp / f)
where Yp is the width of each pixel in the image frame.
3. The aerial real-time image-based space target positioning method according to claim 2, wherein determining the size of each pixel in the image frame comprises:
the pixel size includes a pixel length Xp and a pixel width Yp; the pixel length Xp is calculated from the focal length f and the transverse field angle Wangle of the visual pod, and the calculation formula of the pixel length Xp is as follows:
Xp = 2 × f × tan(Wangle/2) / number of pixel columns
The pixel width Yp is calculated from the focal length f and the longitudinal field angle Hangle, and the calculation formula of the pixel width Yp is as follows:
Yp = 2 × f × tan(Hangle/2) / number of pixel rows.
4. The aerial real-time image-based space target positioning method according to claim 3, wherein determining the pixel width Yp further comprises:
adjusting the pixel width Yp by a longitudinal resolution adjustment parameter Ys to form a new pixel width Yp, wherein the longitudinal resolution adjustment parameter Ys is 0.835.
5. The aerial real-time image-based space target positioning method according to claim 4, wherein the calculating of the height difference Hp between the target object and the visual pod of the unmanned aerial vehicle from the distance P2 and the vertical deflection angle of the target object relative to the unmanned aerial vehicle comprises:
judging whether the vertical projection pixel coordinates (u1, v1) of the target object and the image center coordinates (uc, vc) are the same point; if so, the calculation formula of the height difference Hp is as follows:
Hp=P2*sin(gama)
if not, the calculation formula of the height difference Hp is as follows:
Hp=P2*cos(alfa)*tan(gama)。
CN202211347168.0A 2022-10-31 2022-10-31 Aerial real-time image-based space target positioning method Pending CN115657072A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211347168.0A CN115657072A (en) 2022-10-31 2022-10-31 Aerial real-time image-based space target positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211347168.0A CN115657072A (en) 2022-10-31 2022-10-31 Aerial real-time image-based space target positioning method

Publications (1)

Publication Number Publication Date
CN115657072A true CN115657072A (en) 2023-01-31

Family

ID=84993752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211347168.0A Pending CN115657072A (en) 2022-10-31 2022-10-31 Aerial real-time image-based space target positioning method

Country Status (1)

Country Link
CN (1) CN115657072A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination