CN116309839A - Runway automatic labeling method based on telemetry data - Google Patents

Runway automatic labeling method based on telemetry data

Info

Publication number
CN116309839A
Authority
CN
China
Prior art keywords
coordinate system
runway
flight
aerial vehicle
unmanned aerial
Prior art date
Legal status
Pending
Application number
CN202310270588.1A
Other languages
Chinese (zh)
Inventor
陶呈纲 (Tao Chenggang)
马波 (Ma Bo)
王瑞 (Wang Rui)
王波 (Wang Bo)
王春兰 (Wang Chunlan)
Current Assignee
Chengdu Hanlan Technology Co ltd
AVIC Chengdu Aircraft Design and Research Institute
Original Assignee
Chengdu Hanlan Technology Co ltd
AVIC Chengdu Aircraft Design and Research Institute
Priority date
Filing date
Publication date
Application filed by Chengdu Hanlan Technology Co ltd, AVIC Chengdu Aircraft Design and Research Institute filed Critical Chengdu Hanlan Technology Co ltd
Priority to CN202310270588.1A
Publication of CN116309839A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10032 - Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a runway automatic labeling method based on telemetry data, comprising the following steps: acquiring a flight video of an unmanned aerial vehicle in a real scene, and capturing and cropping frames from the flight video to obtain flight pictures; reading the flight log of the real scene to obtain the flight information of the unmanned aerial vehicle; acquiring the GPS information of the four endpoints of the runway in the real scene and converting it into the world coordinate system of the runway; converting the world coordinate system into the camera coordinate system through the rotation matrix and translation matrix of the unmanned aerial vehicle calculated from the flight information; and converting the camera coordinate system into the image coordinate system and drawing the points of the image coordinate system on the image to obtain the runway area. The method solves the problems of the existing training-set generation approach, in which pictures are captured manually and picture features are then extracted and labeled by hand, namely the large consumption of manpower and material resources and the low efficiency of manual labeling.

Description

Runway automatic labeling method based on telemetry data
Technical Field
The invention relates to the technical field of image processing, in particular to an automatic runway labeling method based on telemetry data.
Background
In recent years, with the rapid development of the economy and of computer technology in China, the field of computer vision has also developed rapidly. Three-dimensional scene display technologies such as virtual reality (VR), augmented reality (AR) and naked-eye 3D play an important role in a large number of real scenes such as intelligent buildings, intelligent public security and intelligent flight. Because of the cost and safety issues of unmanned aerial vehicle flight tests, large training sets are required to train models for such tests. The current method of generating training sets relies mainly on manual screenshots, after which picture features are extracted and labeled by hand.
Disclosure of Invention
In view of the above, the invention aims to provide an automatic runway labeling method based on telemetry data, to solve the problems of the existing training-set generation method, in which screenshots are captured manually and picture features are then extracted and labeled by hand: it consumes a large amount of manpower and material resources and its manual labeling efficiency is low.
The invention solves the technical problems by the following technical means:
in a first aspect, the present invention provides a method for automatically labeling a runway based on telemetry data, comprising the steps of:
acquiring a flight video of the unmanned aerial vehicle in a real scene, and capturing and cutting the flight video to obtain a flight picture;
reading a flight log in a real scene to obtain flight information of the unmanned aerial vehicle;
acquiring GPS information of four endpoints of a runway in a real scene, and converting the GPS information into a world coordinate system of the runway;
converting the world coordinate system into a camera coordinate system according to a rotation matrix and a translation matrix of the unmanned aerial vehicle calculated by the flight information;
and converting the camera coordinate system into an image coordinate system, and drawing points of the image coordinate system on an image to obtain a runway area.
According to this technical scheme, screenshots are taken from the flight video of the unmanned aerial vehicle in the real scene and cropped to obtain flight pictures and the corresponding flight times. The flight information is obtained from the flight log and used to obtain the coordinates in the world coordinate system of the runway; the world coordinates of the runway are converted into coordinates in the camera coordinate system through the rotation matrix and translation matrix, and the camera coordinates are converted into coordinates in the image coordinate system by combining the internal reference matrix of the camera, so the positions of the four endpoints of the runway in the picture can be obtained. The technical scheme avoids the large consumption of manpower and material resources and the low efficiency of manual labeling, effectively improves the efficiency of unmanned aerial vehicle runway labeling, and reduces manpower consumption.
With reference to the first aspect, in some embodiments, the flight information includes the GPS information, heading angle yaw, roll angle roll and pitch angle pitch of the unmanned aerial vehicle.
With reference to the first aspect, in some embodiments, the runway world coordinate system is obtained as follows:
setting the midpoint of the bottom edge of the runway as the origin of a world coordinate system, and establishing the world coordinate system through the origin;
and obtaining GPS information of four endpoints of the runway and the midpoint of the bottom edge according to flight information in the real scene, calculating the distances from the four endpoints of the runway to the midpoint of the bottom edge by using a GPS calculation formula through the GPS information, and calculating the coordinates of the four endpoints of the runway in a world coordinate system under the virtual three-dimensional scene according to the distances.
With reference to the first aspect, in some embodiments, the rotation matrix is calculated as follows:
a1=cos(yaw)
a2=cos(roll)
a3=cos(pitch)
b1=sin(yaw)
b2=sin(roll)
b3=sin(pitch)
R = | a1*a3    a1*b3*b2 - b1*a2    a1*b3*a2 + b1*b2 |
    | b1*a3    b1*b3*b2 + a1*a2    b1*b3*a2 - a1*b2 |
    | -b3      a3*b2               a3*a2            |
With reference to the first aspect, in some embodiments, the translation matrix is calculated as follows: the linear distance Tx, the transverse distance Ty and the longitudinal distance Tz of the unmanned aerial vehicle relative to the runway are calculated from the GPS information, giving the translation matrix T, with the calculation formula:
T=[Tx,Ty,Tz]。
With reference to the first aspect, in some embodiments, the acquisition of the camera coordinate system is as follows:
First, each point in the world coordinate system is left-multiplied by the rotation matrix, and the translation matrix T is added to the result, giving the corresponding coordinate point in the camera coordinate system; the calculation formula is as follows:

[Xc, Yc, Zc]^T = R * [Xw, Yw, Zw]^T + T
With reference to the first aspect, in some embodiments, the image coordinate system is acquired as follows:
according to a camera internal parameter matrix K for shooting the flight video, the runway coordinates in a camera coordinate system are converted into points in an image coordinate system in a virtual three-dimensional scene, and the calculation formula is as follows:
Zc * [u, v, 1]^T = K * [Xc, Yc, Zc]^T
wherein

K = | fx   0    u0 |
    | 0    fy   v0 |
    | 0    0    1  |
in a second aspect, the present invention provides an automatic generation device for an unmanned aerial vehicle runway training set, including:
the image acquisition module is used for acquiring a flight video of the unmanned aerial vehicle in a real scene, and capturing and cutting the flight video to obtain a flight image;
the flight information acquisition module is used for reading the flight log of the real scene and matching it against the flight pictures to obtain the flight information of the unmanned aerial vehicle;
the world coordinate system acquisition module is used for acquiring GPS information of four endpoints of the runway and converting the GPS information into a world coordinate system of the runway;
the camera coordinate system acquisition module is used for converting the world coordinate system into a camera coordinate system according to the rotation matrix and the translation matrix of the unmanned aerial vehicle calculated by the flight information;
and the runway area acquisition module is used for converting the camera coordinate system into an image coordinate system and drawing points of the image coordinate system on an image to acquire the runway area.
In a third aspect, the invention provides a computer device comprising a processor and a memory storing at least one program loaded and executed by the processor to implement a telemetry-based runway automatic labeling method as described hereinbefore.
In a fourth aspect, the invention provides a computer readable storage medium having at least one program stored thereon, the at least one program being loaded and executed by a processor to implement the telemetry-based runway automatic labeling method as described above.
According to the runway automatic labeling method based on telemetry data, screenshots are taken from the flight video of the unmanned aerial vehicle in the real scene and cropped to obtain flight pictures and the corresponding flight times. The flight information is obtained from the flight log and used to obtain the coordinates in the world coordinate system of the runway; the world coordinates are converted into camera coordinates through the rotation matrix and translation matrix, and the camera coordinates are converted into image coordinates by combining the internal reference matrix of the camera, giving the positions of the four endpoints of the runway in the picture. This avoids the large consumption of manpower and material resources and the low efficiency of manual labeling, greatly improves the efficiency of unmanned aerial vehicle runway labeling, and reduces manpower consumption.
Drawings
FIG. 1 is a flow chart of a method for automatically labeling a runway based on telemetry data according to the present invention;
FIG. 2 is a block diagram of an automatic labeling device for unmanned aerial vehicle runways;
FIG. 3 is a schematic diagram illustrating key points of the runway coordinate system according to an embodiment of the present invention;
the unmanned aerial vehicle runway automatic labeling device 200, the picture acquisition module 210, the flight information acquisition module 220, the world coordinate system acquisition module 230, the camera coordinate system acquisition module 240 and the runway area acquisition module 250.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "comprising," "having," and "including" are synonymous, unless the context dictates otherwise. The phrase "A/B" means "A or B", and the phrase "A and/or B" means "(A and B) or (A or B)".
It should be noted that in this specification, like reference numerals or letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
As used herein, the term module or unit may refer to, be part of, or include an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
According to the technical scheme, the corresponding flight information of the unmanned aerial vehicle is first obtained according to the shooting time of a picture, and the coordinates of the runway in the world coordinate system are obtained from prior information. A rotation matrix and a translation matrix are then calculated from the flight information, the world coordinates of the runway are converted into coordinates in the camera coordinate system through the rotation matrix and translation matrix, and the camera coordinates are converted into coordinates in the image coordinate system according to the internal reference matrix of the camera. The positions of the four endpoints of the runway in the picture, and hence the runway area, can thus be obtained.
Referring to fig. 1, an embodiment of the present application provides an automatic runway labeling method based on telemetry data, which includes the following steps:
step 110, acquiring a flight video of the unmanned aerial vehicle in a real scene, and capturing and cutting the flight video to obtain a flight picture.
Step 120, reading a flight log in a real scene to acquire flight information of the unmanned aerial vehicle, wherein the flight information comprises the GPS information, heading angle yaw, roll angle roll and pitch angle pitch of the unmanned aerial vehicle.
Step 130, acquiring GPS information of four endpoints of the runway in the real scene, and converting the GPS information into a world coordinate system of the runway.
Specifically, the runway world coordinate system is obtained as follows:
setting the midpoint of the bottom edge of the runway as the origin of a world coordinate system in the virtual three-dimensional scene, and establishing the world coordinate system through the origin;
and obtaining GPS information of four endpoints of the runway and the midpoint of the bottom edge according to flight information in the real scene, calculating the distances from the four endpoints of the runway to the midpoint of the bottom edge by using a GPS calculation formula through the GPS information, and calculating the coordinates of the four endpoints of the runway in a world coordinate system under the virtual three-dimensional scene according to the distances.
Step 140, converting the world coordinate system into a camera coordinate system according to a rotation matrix and a translation matrix of the unmanned aerial vehicle calculated from the flight information.
Step 150, converting the camera coordinate system into an image coordinate system, and drawing points of the image coordinate system on an image to obtain a runway area.
According to the runway automatic labeling method based on telemetry data described above, screenshots are taken from the flight video of the unmanned aerial vehicle in the real scene to obtain flight pictures and the corresponding flight times. The flight information is obtained from the flight log and used to obtain the coordinates of the runway in the world coordinate system; the world coordinates of the runway are converted into coordinates in the camera coordinate system through the rotation matrix and translation matrix, and the camera coordinates are converted into coordinates in the image coordinate system by combining the internal reference matrix of the camera, so the positions of the four endpoints of the runway in the picture can be obtained. The technical scheme avoids the large consumption of manpower and material resources and the low efficiency of manual labeling, greatly improves the efficiency of unmanned aerial vehicle runway labeling, and reduces manpower consumption.
The steps of the runway automatic labeling method based on telemetry data of the present application will be described in detail below, as follows:
Step 110, acquiring a flight video of the unmanned aerial vehicle in a real scene, and capturing and cropping the flight video to obtain flight pictures. First, the user manually controls the unmanned aerial vehicle to fly in the real scene, with the flight task simulated by setting the initial and final positions of the unmanned aerial vehicle. The user starts video recording at a position where the runway is visible, and the flight video data are saved. The recording is then cut at a specified frame interval, each flight picture is saved under a name consisting of the recording time plus the frame number, and the timestamp of each flight picture can thus be recovered from the recording time.
Step 120, reading a flight log in a real scene to acquire flight information of the unmanned aerial vehicle, wherein the flight information comprises the GPS information, heading angle yaw, roll angle roll and pitch angle pitch of the unmanned aerial vehicle. The user exports the flight log of the real-scene unmanned aerial vehicle, and the timestamp of each flight picture is compared with the data in the real flight log to obtain the GPS information, heading angle yaw, roll angle roll and pitch angle pitch of the unmanned aerial vehicle at the moment the image was shot.
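The two steps above can be sketched as follows, assuming OpenCV for video handling and a CSV flight log; the frame interval, file-naming scheme and log column names (such as "time") are illustrative assumptions rather than values prescribed by the method:

```python
import csv
import os
import cv2

def extract_frames(video_path, record_start_s, frame_step=30, out_dir="frames"):
    """Step 110: cut the recorded flight video every `frame_step` frames and save
    each picture under a name made of recording time plus frame number."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_step == 0:
            ts = record_start_s + idx / fps  # timestamp = recording start + idx / fps
            cv2.imwrite(os.path.join(out_dir, f"{ts:.2f}_{idx:06d}.png"), frame)
        idx += 1
    cap.release()

def load_flight_log(path):
    """Step 120: read the exported flight log into per-record dictionaries."""
    with open(path, newline="") as f:
        return [{k: float(v) for k, v in row.items()} for row in csv.DictReader(f)]

def lookup_telemetry(log, frame_ts):
    """Return the log record whose timestamp is closest to a picture's timestamp."""
    return min(log, key=lambda rec: abs(rec["time"] - frame_ts))
```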
Step 130, acquiring GPS information of four endpoints of the runway in the real scene, and converting the GPS information into a world coordinate system of the runway.
Specifically, the runway world coordinate system is obtained as follows:
referring to fig. 3, a midpoint of a runway bottom edge is set as an origin of a world coordinate system in a virtual three-dimensional scene, and the world coordinate system is established through the origin; and obtaining GPS information of four endpoints of the runway and the midpoint of the bottom edge according to flight information in the real scene, calculating the distances from the four endpoints of the runway to the midpoint of the bottom edge respectively by using a GPS calculation formula through the GPS information, and calculating the coordinates of the four endpoints of the runway in a world coordinate system under the virtual three-dimensional scene according to the distances.
More specifically, the distance is calculated with the GPS calculation formula as follows:
Given the GPS information (LonA, LatA) of the first point and (LonB, LatB) of the second point, the distance between them is calculated as:
C = sin(LatA)*sin(LatB)*cos(LonA-LonB) + cos(LatA)*cos(LatB)
D = R*Arccos(C)*Pi/180
wherein C is the cosine of the spherical central angle between the two points (the formula takes this form when LatA and LatB denote colatitudes, i.e. 90° minus the geographic latitude), R is the earth radius, Pi is the circumference ratio, and D is the resulting great-circle distance, with Arccos(C) expressed in degrees. The distances from the four endpoints of the runway to the midpoint of the bottom edge, i.e. to the origin, are calculated with this formula, from which the coordinates in the world coordinate system are obtained.
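A sketch of this distance computation; it is written with geographic latitudes and the equivalent spherical law of cosines rather than the colatitude form above, and the earth-radius value is an assumed constant:

```python
import math

EARTH_R = 6371000.0  # mean earth radius in meters (assumed value)

def gps_distance(lon_a, lat_a, lon_b, lat_b):
    """Great-circle distance in meters between two GPS points given in degrees."""
    la, lb = math.radians(lat_a), math.radians(lat_b)
    dlon = math.radians(lon_a - lon_b)
    # Spherical law of cosines: cos(c) = sin(LatA)sin(LatB) + cos(LatA)cos(LatB)cos(dLon).
    c = math.sin(la) * math.sin(lb) + math.cos(la) * math.cos(lb) * math.cos(dlon)
    return EARTH_R * math.acos(max(-1.0, min(1.0, c)))
```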
Step 140, converting the world coordinate system into a camera coordinate system according to the rotation matrix and translation matrix of the unmanned aerial vehicle calculated from the flight information.
First, the rotation matrix R of the unmanned aerial vehicle is calculated from its heading angle yaw, roll angle roll and pitch angle pitch, composing the angles in the standard Z-Y-X (yaw-pitch-roll) order; the specific calculation formula is as follows:
a1=cos(yaw)
a2=cos(roll)
a3=cos(pitch)
b1=sin(yaw)
b2=sin(roll)
b3=sin(pitch)
R = | a1*a3    a1*b3*b2 - b1*a2    a1*b3*a2 + b1*b2 |
    | b1*a3    b1*b3*b2 + a1*a2    b1*b3*a2 - a1*b2 |
    | -b3      a3*b2               a3*a2            |
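A sketch of the corresponding matrix assembly, under the same assumed Z-Y-X yaw-pitch-roll composition, with angles in radians:

```python
import numpy as np

def rotation_matrix(yaw, roll, pitch):
    """Build R from the attitude angles using the a1..a3, b1..b3 terms above."""
    a1, a2, a3 = np.cos(yaw), np.cos(roll), np.cos(pitch)
    b1, b2, b3 = np.sin(yaw), np.sin(roll), np.sin(pitch)
    return np.array([
        [a1 * a3, a1 * b3 * b2 - b1 * a2, a1 * b3 * a2 + b1 * b2],
        [b1 * a3, b1 * b3 * b2 + a1 * a2, b1 * b3 * a2 - a1 * b2],
        [-b3,     a3 * b2,                a3 * a2],
    ])
```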
The linear distance Tx, the transverse distance Ty and the longitudinal distance Tz of the unmanned aerial vehicle relative to the runway can be calculated from the GPS information of the unmanned aerial vehicle, giving the translation matrix T, with the specific calculation formula as follows:
T=[Tx,Ty,Tz]
and judging whether the translation matrix and the rotation matrix of the unmanned aerial vehicle are effective according to the position of the unmanned aerial vehicle relative to the runway in the image, judging whether the standard is the correctness of the values, for example, flying towards an airport, wherein the height value is a positive value, the linear distance value is a negative value, and deleting the ineffective matrix.
Finally, each point in the world coordinate system is left-multiplied by the rotation matrix, and the translation matrix T is added to the result to obtain the coordinate point in the camera coordinate system; the calculation formula is as follows:

[Xc, Yc, Zc]^T = R * [Xw, Yw, Zw]^T + T

where (Xw, Yw, Zw) is a point in the world coordinate system and (Xc, Yc, Zc) the corresponding point in the camera coordinate system.
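A minimal sketch of this world-to-camera transform:

```python
import numpy as np

def world_to_camera(p_world, R, T):
    """Pc = R * Pw + T, per the formula above."""
    return R @ np.asarray(p_world, dtype=float) + np.asarray(T, dtype=float)
```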
and 150, converting the camera coordinate system into an image coordinate system, and drawing points of the image coordinate system on an image to obtain a runway area.
First, the internal parameter (intrinsic) matrix K of the camera that shot the flight video is acquired, with the following specific steps:
A checkerboard template is prepared by printing a checkerboard with alternating black and white squares; more than 10 template images are shot from different angles; the feature points in the images are detected with a detection algorithm; and the detected feature points are used to calculate the camera intrinsics K, of the specific form:
K = | fx   0    u0 |
    | 0    fy   v0 |
    | 0    0    1  |

wherein fx represents the focal length in the x-axis direction, fy represents the focal length in the y-axis direction, and (u0, v0) is the actual location of the principal point.
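A hedged sketch of this calibration step using OpenCV's standard checkerboard API; the board dimensions, square size and image path pattern are assumptions:

```python
import glob
import cv2
import numpy as np

def calibrate_camera(pattern=(9, 6), images_glob="calib/*.png", square_m=0.025):
    """Estimate the intrinsic matrix K from checkerboard shots."""
    # Object points of one board view: a planar grid scaled by the square size.
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_m
    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(images_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    # K has the form [[fx, 0, u0], [0, fy, v0], [0, 0, 1]].
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```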
Then, according to the camera internal parameter matrix K for shooting the flight video, the runway coordinates in the camera coordinate system are converted into points in the image coordinate system in the virtual three-dimensional scene, and the calculation formula is as follows:
Zc * [u, v, 1]^T = K * [Xc, Yc, Zc]^T

wherein (u, v) is the resulting point in the image coordinate system.
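A minimal sketch of this projection:

```python
import numpy as np

def camera_to_pixel(p_cam, K):
    """Project a camera-frame point to pixel coordinates (u, v) via K."""
    u, v, w = K @ np.asarray(p_cam, dtype=float)  # w equals the depth Zc
    return u / w, v / w
```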
finally, the points of the obtained image coordinate system are sequentially drawn on the image through an opencv function, so that the user can conveniently check the image coordinate system, meanwhile, the image coordinate system is saved in a json file for subsequent training according to a braking format, and marking software is used for checking an automatic marking result and modifying obvious erroneous marking. And taking out an abscissa minimum value x_min, an abscissa maximum value x_max, an ordinate minimum value y_min and an ordinate maximum value y_max of the calculated result, taking out the areas (x_min-10 and y_min-10) to (x_max-10 and y_max-10) on the image, and cutting out to obtain the runway area.
Because there is a delay between the recorded flight information data and the image shooting time, errors exist in the calculation results. To reduce these errors, the images are cropped and corner detection is carried out, and the corner detection results are finally recorded in a json file for model training. The method comprises the following steps:
All detected points are first classified according to their ordinate y value: points larger than (y_max - y_min + 20)/2 are stored in top_points and points smaller than this value in bottom_points. Two points are taken in turn from top_points and from bottom_points to form quadrilaterals, contour matching is performed between each such quadrilateral and the polygon formed by the originally calculated points, and the four points of the best-matching contour are selected. The corner detection result is stored in a json file, which can then be opened in labelme to check the calculation results.
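A hedged sketch of this refinement. The corner detector and the contour-matching function are not named in the text, so cv2.goodFeaturesToTrack and cv2.matchShapes stand in for them, and the top/bottom split here simply uses half the crop height instead of the (y_max - y_min + 20)/2 threshold above:

```python
from itertools import combinations
import cv2
import numpy as np

def refine_runway(crop, projected_quad, max_corners=20):
    """Pick the detected-corner quadrilateral that best matches the projected outline."""
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    detected = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 10)
    if detected is None:
        return None
    pts = detected.reshape(-1, 2)
    split = crop.shape[0] / 2
    top_points = [p for p in pts if p[1] > split]
    bottom_points = [p for p in pts if p[1] <= split]
    ref = np.array(projected_quad, dtype=np.float32).reshape(-1, 1, 2)
    best, best_score = None, float("inf")
    for t in combinations(top_points, 2):
        for b in combinations(bottom_points, 2):
            quad = np.array(list(t) + list(b), dtype=np.float32).reshape(-1, 1, 2)
            # Compare Hu-moment shape signatures of the two quadrilaterals.
            score = cv2.matchShapes(ref, quad, cv2.CONTOURS_MATCH_I1, 0.0)
            if score < best_score:
                best, best_score = quad, score
    return None if best is None else best.reshape(-1, 2)
```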
For verification, a 12-minute flight video of a real scene was prepared and cut into 1200 pictures. At a manual labeling speed of 100 pictures/h, manual labeling would take about 12 hours. Using the runway automatic labeling method based on telemetry data, all pictures are read from a folder in sequence, the flight log exported from the real scene is selected, and the coordinate information of the runway in each image is calculated in turn and stored in a json file; this takes 15 s. Checking the labeling results with labeling software shows that about 90% of the pictures are labeled correctly, and the remaining pictures only need slight manual modification, taking about 1 h. The errors arise because the shooting time of a picture cannot be matched exactly against the recording time of the flight log exported from the real scene. Corner detection is therefore applied on top of the calculated results to make them more accurate and reduce the number of pictures requiring manual modification: the range of the runway coordinates calculated for a picture is widened up, down, left and right so that the whole runway is contained in a rectangular frame; the rectangular frame is cropped out and corner detection is performed on the cropped picture; all corners are traversed, all candidate trapezoids are formed and contour matching is performed against the originally calculated outline; the best-matching contour is selected and the result is recorded in a json file. Checking the calculation results with labelme shows that only 5% of the pictures are now problematic, i.e. the results are more accurate.
The following is an embodiment of the automatic unmanned runway marking device of the present application, and for details not described in detail in this embodiment of the device, reference may be made to the above-described method embodiment.
Referring to fig. 2, fig. 2 is a block diagram of an unmanned aerial vehicle runway automatic labeling device 200 according to an exemplary embodiment of the present application, where the device includes:
the image acquisition module 210, which is configured to acquire a flight video of the unmanned aerial vehicle in a real scene, and to capture and crop the flight video to obtain flight pictures;
the flight information acquisition module 220, which is configured to read the flight log of the real scene and match it against the flight pictures to obtain the flight information of the unmanned aerial vehicle;
the world coordinate system acquisition module 230 is configured to acquire GPS information of four endpoints of the runway in the real scene, and convert the GPS information into a world coordinate system of the runway;
the camera coordinate system acquisition module 240 is configured to convert the world coordinate system into a camera coordinate system according to the rotation matrix and the translation matrix of the unmanned aerial vehicle calculated by the flight information;
the runway region acquisition module 250 converts the camera coordinate system to an image coordinate system and draws points of the image coordinate system on the image to acquire a runway region.
In another embodiment, the unmanned aerial vehicle runway automatic labeling device 200 further comprises: the corner detection module 260, configured to enlarge the range of the runway coordinates calculated for a picture so that the entire runway is contained in the drawn rectangular frame, thereby reducing errors and ensuring the accuracy of the unmanned aerial vehicle runway labeling.
The above-mentioned units in the unmanned aerial vehicle runway automatic labeling device 200 may be implemented in whole or in part by software, hardware, or a combination thereof. The units may be embedded in hardware form in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and perform the operations corresponding to each unit.
The present application also provides an embodiment of a computer device that may be used to implement the runway automatic labeling method based on telemetry data provided in the above embodiments. The computer device comprises a processor and a memory, the memory storing at least one program which, when executed by the processor, implements the steps of the embodiments of the runway automatic labeling method based on telemetry data described above. The memory includes a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system and computer readable instructions, and the internal memory provides an environment for the execution of the operating system and the computer readable instructions. The processor provides computing and control capabilities for the computer device and executes the computer program stored in the memory.
The application further provides a computer readable storage medium for storing a computer program, where the computer readable storage medium may be applied to the above computer device, and the computer program makes the computer device execute a corresponding flow in the runway automatic labeling method based on telemetry data in the embodiment of the application, and for brevity, details are not repeated here.
Embodiments of the present invention provide a computer program product or computer program comprising computer instructions stored on a computer readable storage medium. The computer instructions are read from the computer readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the telemetry-based runway automatic labeling method provided in the alternative implementation described above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application. The above embodiments are only for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made thereto without departing from the spirit and scope of the technical solution of the present invention, which is intended to be covered by the scope of the claims of the present invention. The technology, shape, and construction parts of the present invention, which are not described in detail, are known in the art.

Claims (10)

1. A runway automatic labeling method based on telemetry data, characterized by comprising the following steps:
acquiring a flight video of the unmanned aerial vehicle in a real scene, and capturing and cutting the flight video to obtain a flight picture;
reading a flight log in a real scene to obtain flight information of the unmanned aerial vehicle;
acquiring GPS information of four endpoints of a runway in a real scene, and converting the GPS information into a world coordinate system of the runway;
converting the world coordinate system into a camera coordinate system according to a rotation matrix and a translation matrix of the unmanned aerial vehicle calculated by the flight information;
and converting the camera coordinate system into an image coordinate system, and drawing points of the image coordinate system on an image to obtain a runway area.
2. The automatic runway labeling method based on telemetry data according to claim 1, wherein the flight information comprises the GPS information, heading angle yaw, roll angle roll and pitch angle pitch of the unmanned aerial vehicle.
3. The automatic runway labeling method based on telemetry data according to claim 2, wherein the runway world coordinate system is obtained as follows:
setting the midpoint of the bottom edge of the runway as the origin of a world coordinate system, and establishing the world coordinate system through the origin;
and obtaining GPS information of four endpoints of the runway and the midpoint of the bottom edge according to flight information in the real scene, calculating the distances from the four endpoints of the runway to the midpoint of the bottom edge by using a GPS calculation formula through the GPS information, and calculating the coordinates of the four endpoints of the runway in a world coordinate system according to the distances.
4. A method for automatically labeling a runway based on telemetry data according to claim 3 wherein the rotation matrix is calculated as follows:
a1=cos(yaw)
a2=cos(roll)
a3=cos(pitch)
b1=sin(yaw)
b2=sin(roll)
b3=sin(pitch)
R = | a1*a3    a1*b3*b2 - b1*a2    a1*b3*a2 + b1*b2 |
    | b1*a3    b1*b3*b2 + a1*a2    b1*b3*a2 - a1*b2 |
    | -b3      a3*b2               a3*a2            |
5. The method for automatically labeling a runway based on telemetry data according to claim 4, wherein the translation matrix is calculated as follows: the linear distance Tx, the transverse distance Ty and the longitudinal distance Tz of the unmanned aerial vehicle relative to the runway are calculated from the GPS information, giving the translation matrix T, with the calculation formula:
T=[Tx,Ty,Tz]。
6. the automatic runway labeling method based on telemetry data according to claim 5, wherein the camera coordinate system is obtained as follows:
first, each point in the world coordinate system is left-multiplied by the rotation matrix, and the translation matrix T is added to the result to obtain the coordinate point in the camera coordinate system, wherein the calculation formula is as follows:

[Xc, Yc, Zc]^T = R * [Xw, Yw, Zw]^T + T
7. the automatic runway labeling method based on telemetry data according to claim 6, wherein the image coordinate system is obtained as follows:
according to a camera internal parameter matrix K for shooting the flight video, the runway coordinates in a camera coordinate system are converted into points in an image coordinate system in a virtual three-dimensional scene, and the calculation formula is as follows:
Zc * [u, v, 1]^T = K * [Xc, Yc, Zc]^T
wherein

K = | fx   0    u0 |
    | 0    fy   v0 |
    | 0    0    1  |
wherein fx represents the focal length in the x-axis direction and fy represents the focal length in the y-axis direction.
8. An unmanned aerial vehicle runway automatic labeling device, characterized by comprising:
the image acquisition module is used for acquiring a flight video of the unmanned aerial vehicle in a real scene, and capturing and cutting the flight video to obtain a flight image;
the flight information acquisition module is used for reading the flight log of the real scene and matching it against the flight pictures to obtain the flight information of the unmanned aerial vehicle;
the world coordinate system acquisition module is used for acquiring GPS information of four endpoints of the runway and converting the GPS information into a world coordinate system of the runway;
the camera coordinate system acquisition module is used for converting the world coordinate system into a camera coordinate system according to the rotation matrix and the translation matrix of the unmanned aerial vehicle calculated by the flight information;
and the runway area acquisition module is used for converting the camera coordinate system into an image coordinate system and drawing points of the image coordinate system on an image to acquire the runway area.
9. A computer device comprising a processor and a memory, the memory storing at least one program that is loaded and executed by the processor to implement the telemetry-based runway automatic labeling method of any of claims 1-7.
10. A computer readable storage medium having at least one program stored thereon, the at least one program being loaded and executed by a processor to implement the telemetry-based runway automatic labeling method of any of claims 1-7.
CN202310270588.1A 2023-03-20 2023-03-20 Runway automatic labeling method based on telemetry data Pending CN116309839A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310270588.1A CN116309839A (en) 2023-03-20 2023-03-20 Runway automatic labeling method based on telemetry data


Publications (1)

Publication Number Publication Date
CN116309839A true CN116309839A (en) 2023-06-23

Family

ID=86825347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310270588.1A Pending CN116309839A (en) 2023-03-20 2023-03-20 Runway automatic labeling method based on telemetry data

Country Status (1)

Country Link
CN (1) CN116309839A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726670A (en) * 2024-02-18 2024-03-19 中国民用航空总局第二研究所 Airport runway pollutant coverage area assessment method and system and intelligent terminal
CN117726670B (en) * 2024-02-18 2024-05-07 中国民用航空总局第二研究所 Airport runway pollutant coverage area assessment method and system and intelligent terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Tao Chenggang; Ma Bo; Wang Rui; Wang Bo; Wang Lanchun
Inventor before: Tao Chenggang; Ma Bo; Wang Rui; Wang Bo; Wang Chunlan