WO2024096691A1 - Method and device for estimating GPS coordinates of multiple target objects and tracking target objects on basis of camera image information about unmanned aerial vehicle
- Publication number
- WO2024096691A1 (PCT/KR2023/017562)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- coordinates
- coordinate system
- unmanned aerial vehicle
- Prior art date
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
  - B64—AIRCRAFT; AVIATION; COSMONAUTICS
    - B64C—AEROPLANES; HELICOPTERS
      - B64C39/00—Aircraft not otherwise provided for
        - B64C39/02—Aircraft not otherwise provided for characterised by special use
- G—PHYSICS
  - G05—CONTROLLING; REGULATING
    - G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
      - G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G—PHYSICS
  - G05—CONTROLLING; REGULATING
    - G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
      - G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
        - G05D1/20—Control system inputs
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computing arrangements based on biological models
        - G06N3/02—Neural networks
          - G06N3/08—Learning methods
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T3/00—Geometric image transformations in the plane of the image
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/50—Depth or shape recovery
          - G06T7/55—Depth or shape recovery from multiple images
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/70—Determining position or orientation of objects or cameras
          - G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Definitions
- The present disclosure relates to a method and device for estimating and tracking the GPS coordinates of targets. Specifically, the GPS coordinates of multiple targets are estimated based on camera image information from an unmanned aerial vehicle, and a target is tracked by controlling the drone so that the pixel corresponding to the target is located at the center of the image plane.
- The location coordinates of a target may be obtained through a location sensor or terminal installed on the target, but in real situations such as searching for and detecting missing persons, the location coordinates must be obtained even in an uncooperative environment.
- An uncooperative environment refers to a case in which the location of a target cannot be determined through wireless communication, for example, when the target cannot communicate or is not equipped with a system such as ADS-B (Automatic Dependent Surveillance-Broadcast). In such an uncooperative environment, it is generally difficult to estimate the location of the target.
- An unmanned aerial vehicle (UAV) is an aircraft that flies without a pilot on board, either remotely controlled from the ground or autonomously or semi-autonomously along a preset route.
- Unmanned aerial vehicles are also called drones.
- Drones increasingly use cameras to detect or track objects or people on the ground. During such filming, a drone can detect targets such as objects or people and perform tracking missions.
- An apparatus and method for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle calculate the GPS coordinates of multiple targets detected from images captured by a camera mounted on the unmanned aerial vehicle, and estimate the precise global locations of the multiple targets on a three-dimensional map.
- targets corresponding to given tasks are automatically detected through a pre-trained artificial intelligence deep learning model.
- A bounding box enclosing the detected target is created according to the target's size.
- The reference pixel of the bounding box is set according to where the target was detected (on the ground or in the air), the reference pixel coordinates of the target are calculated, and the calculated reference pixel coordinates are converted through a series of coordinate systems, including the unmanned aerial vehicle body coordinate system, to finally obtain the GPS coordinates of the target.
- An apparatus and method for estimating GPS coordinates of a target based on camera image information of an unmanned aerial vehicle can select a desired target reference pixel within a bounding box according to the user's purpose.
- The apparatus and method for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle compute depth information from the altimeter mounted on the unmanned aerial vehicle when the detected target is a ground target; for an airborne target, depth information is acquired from an onboard stereo camera or depth camera.
- An apparatus and method for tracking a ground target provide a ground target tracking method using a camera-equipped drone, in which the drone is controlled so that the pixel corresponding to the ground target is located at the center of the image plane.
- A device for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle includes: a detection unit that detects a target corresponding to the mission set for the unmanned aerial vehicle through a pre-trained artificial intelligence deep learning model and creates a bounding box containing the detected target; a setting unit that sets a reference pixel, which is a target point expressed as a pixel on the camera image plane; a calculation unit that calculates the reference pixel coordinates of the target in a two-dimensional pixel coordinate system; and a conversion unit that converts the reference pixel coordinates of the target into three-dimensional coordinates using at least one of the altitude information and the depth information of the target, and converts the coordinate system of the converted three-dimensional coordinates to obtain the GPS coordinates of the target.
- The ground target tracking method is a method of tracking a ground target using a drone equipped with a camera, and includes: checking whether the center pixel of a ground target captured by the camera mounted on the drone has been selected on the image plane; if the center pixel of the ground target is confirmed as selected, calculating a yawing angle for yawing rotation; aligning in the horizontal direction through yawing rotation by the calculated yawing angle; calculating a distance for the drone to move; aligning in the vertical direction by moving forward or backward by the calculated distance; and tracking the ground target based on the aligned horizontal and vertical directions.
- The device and method described above automatically display the global location of targets on a map from camera images, regardless of the number of targets or their detection location (in the sky or on the ground), enabling more accurate performance of various missions such as searching for missing persons or calculating the global location of targets that evade radar in anti-drone operations.
- the device and method for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle are derived from the fields of computer vision, artificial intelligence, robotics, aerospace, and flight dynamics coordinate conversion, and are used for searching missing persons and invading unmanned aerial vehicles. It can be used for detection, etc.
- The apparatus and method can be used for searching for missing persons and their belongings with an unmanned aerial vehicle, estimating the location of people in hazardous areas, estimating the GPS coordinates of a destination for UAV landing and delivery, obtaining the location of inspection targets during facility inspection, and estimating the GPS coordinates of enemy unmanned aerial vehicles in the sky.
- A camera is mounted on a drone, and a ground target can be tracked by controlling the drone so that the pixel corresponding to the ground target is located at the center of the image plane.
- Figure 1 is a diagram showing the data processing configuration of an apparatus 100 for estimating GPS coordinates of a target based on camera image information of an unmanned aerial vehicle according to an embodiment of the present invention.
- Figure 2 is a diagram showing the data processing configuration of the conversion unit 170 according to an embodiment of the present invention.
- Figure 3 is a diagram showing the positional relationship between an unmanned aerial vehicle and a ground target according to an embodiment of the present invention.
- Figure 4 is a diagram showing the positional relationship between an unmanned aerial vehicle and an airborne target according to an embodiment of the present invention.
- Figure 5 is a diagram showing the X_B-Y_B plane of the FRD (Front-Right-Down) body coordinate system according to an embodiment of the present invention.
- Figure 6 is a diagram showing the data processing flow of a method for estimating GPS coordinates of a target based on camera image information of an unmanned aerial vehicle according to an embodiment of the present invention.
- FIG. 7 is a diagram showing the data processing process in step S400 according to an embodiment of the present invention.
- Figure 8 is a block diagram for explaining a ground target tracking device using a drone equipped with a camera according to another embodiment of the present invention.
- FIG. 9 is a diagram illustrating a state before the selected pixel is horizontally centered on the image plane according to another embodiment of the present invention.
- FIG. 10 is a diagram illustrating a state after a selected pixel is horizontally centered on an image plane according to another embodiment of the present invention.
- Figure 11 is a flowchart illustrating a ground target tracking method using a drone equipped with a camera according to another embodiment of the present invention.
- An apparatus and method for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle estimate the position coordinates (GPS coordinates) of multiple targets detected from images captured by a camera mounted on the unmanned aerial vehicle, and display the precise global locations of the multiple targets on a three-dimensional map.
- targets corresponding to given tasks are automatically detected through a pre-trained artificial intelligence deep learning model.
- A bounding box enclosing the detected target is created according to the target's size.
- The reference pixel of the bounding box is set according to where the target was detected (on the ground or in the air), the reference pixel coordinates of the target are calculated and converted into 3D coordinates, and the GPS coordinates of the target are finally obtained by applying the appropriate coordinate transformations using the GPS coordinates of the unmanned aerial vehicle.
- An apparatus and method for estimating GPS coordinates of multiple targets based on camera image information of an unmanned aerial vehicle can select a desired target reference pixel within a bounding box according to the user's purpose.
- An apparatus and method for estimating the GPS coordinates of multiple targets according to an embodiment compute depth information from the altimeter mounted on the unmanned aerial vehicle when the detected target is a ground target, and obtain the target's depth information through a stereo camera or depth camera when the detected target is an airborne target.
- To restore 3D information from 2D image information, depth information is required; 3D reconstruction from 2D information amounts to recovering one additional dimension.
- Figure 1 is a diagram showing the data processing configuration of a target GPS coordinate estimation device 100 for an unmanned aerial vehicle according to an embodiment.
- Referring to Figure 1, the target GPS coordinate estimation device 100 of an unmanned aerial vehicle may comprise a detection unit 110, a setting unit 130, a calculation unit 150, and a conversion unit 170.
- the term 'part' used in this specification should be construed to include software, hardware, or a combination thereof, depending on the context in which the term is used.
- software may be machine language, firmware, embedded code, and application software.
- hardware may be a circuit, processor, computer, integrated circuit, integrated circuit core, sensor, Micro-Electro-Mechanical System (MEMS), passive device, or a combination thereof.
- the detection unit 110 detects a target corresponding to the mission set for the unmanned aerial vehicle through a pre-trained artificial intelligence deep learning model.
- Missions set for unmanned aerial vehicles may include searching for missing persons, detecting vehicles, and detecting enemy aircraft in the sky. Additionally, in the embodiment, the detection unit 110 may perform multi-targeting to detect at least one target.
- the detection unit 110 can detect both ground targets and airborne targets.
- a ground target is a target located on the ground, and an aerial target is a target located in the sky.
- The detection unit 110 may detect ground targets and airborne targets through an artificial intelligence deep learning model trained with separate detection processes for ground targets and airborne targets.
- the artificial intelligence deep learning model includes, but is not limited to, at least one of a Deep Neural Network (DNN), a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), and a Bidirectional Recurrent Deep Neural Network (BRDNN).
- an artificial intelligence deep learning model can classify various target detection processes according to the type of ground target and learn the classified target detection process. Types of targets may include, but are not limited to, people, articles, hats, bicycles, etc.
- the detection unit 110 creates a bounding box containing the detected target according to the size of the target.
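The patent does not name a specific detector, so the following minimal sketch uses the Ultralytics YOLO package purely as a stand-in for the pre-trained artificial intelligence deep learning model; the weights file and "frame.jpg" are hypothetical inputs.

```python
# Illustrative sketch only: any pre-trained detection model could be used.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # hypothetical pre-trained model weights
results = model("frame.jpg")      # run detection on one UAV camera frame

for box in results[0].boxes:      # one bounding box per detected target
    x1, y1, x2, y2 = box.xyxy[0].tolist()   # pixel corners of the box
    print(f"target class={int(box.cls)} bbox=({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```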
- the setting unit 130 sets the reference pixel of the target included in the bounding box.
- the reference pixel of the target may be a target point represented by a pixel on the camera image plane. That is, the setting unit 130 may set a reference pixel to obtain a target point within the bounding box.
- the setting unit 130 may set the reference pixel of the target as the center point of the bounding box.
- the setting unit 130 may set the reference pixel of the target within the bounding box to a desired pixel according to the user's purpose. For example, if the user's goal is to shoot down the target and the target is an aircraft, the wing part of the aircraft may be set as the reference pixel of the target, and if the target is a building, the core part of the building may be set as the reference pixel. Additionally, if the user's purpose is a search and the target is a person, the reference pixel of the target may be set according to the person's posture. For example, if the person is standing, location information on the ground is needed, so the foot side of the detected person can be set as the reference pixel of the target. If the person is lying down, the person's torso can be set as the reference pixel for the target.
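As a small illustration of this reference-pixel choice, the helper below picks either the box center (the default in the embodiment) or the bottom-center "feet" point for a standing person; the function name and mode parameter are illustrative, not from the patent.

```python
def reference_pixel(x1, y1, x2, y2, mode="center"):
    """Pick the reference pixel inside a bounding box (pixel coordinates).

    mode is illustrative: 'center' uses the box center (the embodiment's
    default); 'feet' uses the bottom-center, e.g. for a standing person
    whose ground contact point is wanted.
    """
    if mode == "feet":
        return ((x1 + x2) / 2.0, y2)               # bottom edge of the box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)      # box center

print(reference_pixel(100, 80, 180, 260))           # (140.0, 170.0)
print(reference_pixel(100, 80, 180, 260, "feet"))   # (140.0, 260.0)
```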
- the calculation unit 150 calculates the reference pixel coordinates of the target in a two-dimensional pixel coordinate system.
- the calculation unit 150 calculates the reference pixel of the target set in the setting unit 130 as a coordinate value in a two-dimensional pixel coordinate system.
- The conversion unit 170 converts the reference pixel coordinates of the target into three-dimensional coordinates using at least one of the altitude information of the unmanned aerial vehicle and the depth information of the target.
- the 3D coordinates are 3D values based on the camera coordinate system.
- an extrinsic parameter is used to convert this to the unmanned aerial vehicle body coordinate system.
- The extrinsic parameters indicate where and in what orientation the camera is mounted relative to the center of the unmanned aerial vehicle body, and may include the camera tilt angle.
- the coordinate system of the converted 3D coordinates in the embodiment may be the unmanned aerial vehicle body coordinate system used in the unmanned aerial vehicle.
- The conversion unit 170 converts the coordinate system of the three-dimensional coordinates and obtains the GPS coordinates of the target.
- Figure 2 is a diagram showing the data processing configuration of the conversion unit 170 according to an embodiment.
- the conversion unit 170 may be configured to include a dimension conversion unit 171, a coordinate system conversion unit 173, and a GPS coordinate acquisition unit 175.
- the dimension transformation unit 171 transforms the coordinate system of the reference pixel coordinates.
- the dimension conversion unit 171 may convert the reference pixel coordinates of the target from a two-dimensional pixel coordinate system to a three-dimensional unmanned aerial vehicle body coordinate system.
- the dimension conversion unit 171 uses the altitude information of the unmanned aerial vehicle as depth information of the ground target and converts the reference pixel coordinates of the target calculated in a two-dimensional pixel coordinate system into three dimensions.
- The dimension conversion unit 171 computes, through the camera model of the unmanned aerial vehicle, the three-dimensional position in the camera coordinate system corresponding to the two-dimensional reference pixel coordinates of the target, and then, using the extrinsic parameters between the camera and the unmanned aerial vehicle's center of gravity (CG), converts the reference pixel coordinates of the target into three-dimensional coordinates in the body coordinate system of the unmanned aerial vehicle.
- Figure 3 is a diagram showing the positional relationship between an unmanned aerial vehicle and a ground target according to an embodiment.
- The dimension conversion unit 171 models the camera mounted on the unmanned aerial vehicle as a pinhole camera and can convert the two-dimensional reference pixel coordinates into three-dimensional camera coordinates through Equation 1.
- The dimension conversion unit 171 assumes that the origins of the camera coordinate system and the unmanned aerial vehicle body coordinate system coincide, and converts the three-dimensional coordinate P_C in the camera coordinate system into the body-frame vector P_B using Equation 2.
- The dimension conversion unit 171 uses the three-dimensional rotation transformation matrix R_BC from the camera coordinate system to the unmanned aerial vehicle body coordinate system, which can be calculated through Equation 3.
- The dimension conversion unit 171 calculates the body-frame vector P_B using Equation 4, derived from Equations 1 and 2, as sketched below.
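The equations themselves appear only as figures in the patent, so the following sketch reconstructs the chain of Equations 1 to 4 from the surrounding description: a pinhole back-projection P_C = depth · K⁻¹[u, v, 1]ᵀ followed by the rotation R_BC into the FRD body frame. The intrinsic values and the axis convention (camera z forward, x right, y down; body x forward, y right, z down) are assumptions.

```python
import numpy as np

# Intrinsics of a hypothetical pinhole camera (fx, fy focal lengths in
# pixels; cx, cy principal point). These numbers are placeholders.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def pixel_to_camera(u, v, depth):
    """Equation 1 (pinhole model): back-project pixel (u, v) at the given
    depth into 3D camera coordinates, P_C = depth * K^-1 [u, v, 1]^T."""
    p = np.array([u, v, 1.0])
    return depth * np.linalg.inv(K) @ p

def camera_to_body(P_C, tilt_rad):
    """Equations 2-4: rotate camera coordinates into the FRD body frame,
    assuming coincident origins and a camera pitched down by tilt_rad."""
    ct, st = np.cos(tilt_rad), np.sin(tilt_rad)
    # R_BC (Equation 3): columns are the camera axes expressed in the body
    # frame for the assumed axis convention.
    R_BC = np.array([[0.0, -st,  ct],
                     [1.0, 0.0, 0.0],
                     [0.0,  ct,  st]])
    return R_BC @ P_C

P_C = pixel_to_camera(400, 300, depth=50.0)   # 50 m along the optical axis
P_B = camera_to_body(P_C, np.deg2rad(30.0))   # camera tilted 30 deg down
print(P_B)
```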
- The dimension conversion unit 171 can estimate the GPS coordinates of an aerial target by collecting its depth information through a stereo camera or depth camera mounted on the unmanned aerial vehicle.
- The dimension conversion unit 171 calculates the 3D position P_C of the aerial target expressed in the camera coordinate system using Equation 5, and calculates the body-frame vector P_B using Equation 6.
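The patent does not reproduce Equation 5 in text; a standard way a stereo camera yields the required depth is the disparity relation Z = f·B/d, sketched below with placeholder numbers.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a stereo pair: Z = f * B / d. A plausible stand-in for
    the depth source of Equation 5; the patent only states that a stereo
    or depth camera supplies depth for aerial targets."""
    return focal_px * baseline_m / disparity_px

print(stereo_depth(800.0, 0.12, 4.0))  # 24.0 m for a 4-pixel disparity
```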
- The coordinate system conversion unit 173, based on the target coordinates expressed in the 3D body coordinate system of the unmanned aerial vehicle, calculates the relative position difference between the unmanned aerial vehicle and the target and converts the three-dimensional coordinates of the target to the ENU (East-North-Up) coordinate system.
- FIG. 4 is a diagram showing the positional relationship between an unmanned aerial vehicle and an airborne target according to an embodiment.
- FIG. 5 is a diagram showing the X_B-Y_B plane of the FRD (Front-Right-Down) body coordinate system according to an embodiment.
- The angle θ formed between the target and the unmanned aerial vehicle on the X_B-Y_B plane of the body coordinate system is calculated through Equation 7.
- the coordinate system conversion unit 173 assumes that the ground target is located on the ground at the same altitude as the sea level altitude of the ground, which is the reference for the altitude of the unmanned aerial vehicle.
- The reference pixel of the ground target may be set to the center point of the automatically detected bounding box unless the user selects otherwise. Accordingly, the coordinate system conversion unit 173 calculates the distance d_t to the ground target on the X_B-Y_B plane of the body coordinate system through Equation 8, using the altimeter information of the unmanned aerial vehicle (a possible form is sketched below).
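A minimal sketch of this step, assuming the back-projected body-frame vector P_B from the earlier sketch and an FRD frame whose z axis points down; the scaling to the UAV altitude is an assumption consistent with the pinhole back-projection, not the patent's exact Equation 8.

```python
import numpy as np

def ground_distance(P_B, altitude_m):
    """Scale the body-frame direction P_B = (x_B, y_B, z_B) so its downward
    component equals the UAV altitude, then take the in-plane range:
    d_t = h * sqrt(x_B^2 + y_B^2) / z_B."""
    x, y, z = P_B
    if z <= 0:
        raise ValueError("target direction must point below the horizon")
    return altitude_m * np.hypot(x, y) / z

print(ground_distance(np.array([40.0, 10.0, 25.0]), altitude_m=100.0))
```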
- The coordinate system conversion unit 173 calculates the distance d_t to the airborne target on the X_B-Y_B plane of the body coordinate system through Equation 9.
- The coordinate system conversion unit 173 calculates the altitude h_t of the airborne target using Equation 10.
- For both ground targets and airborne targets, the coordinate system conversion unit 173 uses the calculated angle θ between the target and the unmanned aerial vehicle on the X_B-Y_B plane of the body coordinate system and the distance d_t to the target to convert from the body coordinate system (FRD) of the unmanned aerial vehicle to the inertial coordinate system (ENU) through Equations 11 and 12.
- The coordinate system conversion unit 173 calculates the target x coordinate x_ENU in the ENU coordinate system using Equation 11.
- The coordinate system conversion unit 173 calculates the target y coordinate y_ENU in the ENU coordinate system using Equation 12.
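Since Equations 7, 11, and 12 are figures in the patent, the following is a plausible reconstruction rather than the patent's exact formulas: the in-plane angle is taken as atan2(y_B, x_B), and the horizontal ENU offsets follow from the UAV heading plus that angle.

```python
import numpy as np

def frd_to_enu(d_t, theta_rad, yaw_rad, up_m):
    """With theta the angle to the target in the body X_B-Y_B plane
    (Equation 7, assumed atan2(y_B, x_B)) and yaw the UAV heading measured
    clockwise from north, plausible forms of Equations 11-12 are
    x_ENU = d_t*sin(yaw+theta) and y_ENU = d_t*cos(yaw+theta).
    up_m is the vertical offset (negative altitude for a ground target)."""
    bearing = yaw_rad + theta_rad
    return np.array([d_t * np.sin(bearing),   # east
                     d_t * np.cos(bearing),   # north
                     up_m])                   # up

enu = frd_to_enu(164.9, np.arctan2(10.0, 40.0), np.deg2rad(45.0), -100.0)
print(enu)
```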
- the GPS coordinate acquisition unit 175 converts target coordinates expressed in the ENU coordinate system into GPS coordinates.
- GPS coordinates are coordinates that include latitude, longitude, and altitude.
- The GPS coordinate acquisition unit 175 first converts the coordinates of the target expressed in the ENU coordinate system to the ECEF (Earth-Centered Earth-Fixed) coordinate system, and then converts from ECEF to the LLA coordinate system (GPS coordinates) to obtain the GPS coordinates of the target.
- The GPS coordinate acquisition unit 175 converts the target coordinates from the ENU coordinate system to the ECEF coordinate system using the current GPS coordinates of the unmanned aerial vehicle, and then converts them to the LLA (Latitude-Longitude-Altitude) coordinate system to finally obtain the GPS coordinates of the target. Specifically, the GPS coordinate acquisition unit 175 converts the target coordinates in the ENU coordinate system calculated by the coordinate system conversion unit 173 into the ECEF coordinate system using Equations 13 to 18.
- the GPS coordinate acquisition unit 175 first calculates t using Equation 13 to convert from the ENU coordinate system to the ECEF coordinate system.
- The value t calculated in this way is a temporary variable used in Equations 14 and 15 for the conversion from the ENU coordinate system to the ECEF coordinate system.
- The GPS coordinate acquisition unit 175 calculates d_x, the relative position difference on the x axis between the unmanned aerial vehicle and the target in the ECEF coordinate system, using Equation 14.
- The GPS coordinate acquisition unit 175 calculates d_y, the relative position difference on the y axis between the unmanned aerial vehicle and the target in the ECEF coordinate system, using Equation 15.
- The GPS coordinate acquisition unit 175 calculates x_0, the x-axis position of the unmanned aerial vehicle expressed in the ECEF coordinate system, using Equation 16.
- The GPS coordinate acquisition unit 175 calculates y_0, the y-axis position of the unmanned aerial vehicle expressed in the ECEF coordinate system, using Equation 17.
- The radius of curvature N(φ) of the Earth ellipsoid at latitude φ may be calculated by the GPS coordinate acquisition unit 175 through Equation 18.
- The GPS coordinate acquisition unit 175 calculates x_ECEF, the x coordinate in the ECEF coordinate system, using Equation 19, from the position of the unmanned aerial vehicle expressed in the ECEF coordinate system and the relative position difference between the unmanned aerial vehicle and the target.
- The GPS coordinate acquisition unit 175 calculates y_ECEF, the y coordinate in the ECEF coordinate system, using Equation 20, from the position of the unmanned aerial vehicle expressed in the ECEF coordinate system and the relative position difference between the unmanned aerial vehicle and the target.
- the GPS coordinate acquisition unit 175 can convert the target's location into the ECEF coordinate system using Equation 19 and Equation 20.
- The GPS coordinate acquisition unit 175 converts the coordinates of the target expressed in the ECEF coordinate system into GPS coordinates (ECEF to LLA) and finally obtains the GPS coordinates of the target.
- The GPS coordinate acquisition unit 175 converts the target coordinates in the ECEF coordinate system, obtained using Equations 19 and 20, into GPS coordinates using Equations 21 to 26, and estimates the converted coordinates as the GPS coordinates of the target.
- the GPS coordinate acquisition unit 175 calculates the size r of the target position vector in the ECEF coordinate system using Equation 21.
- the GPS coordinate acquisition unit 175 calculates the linear eccentricity E using Equation 22.
- the GPS coordinate acquisition unit 175 calculates u using Equation 23.
- The GPS coordinate acquisition unit 175 then calculates, using Equations 24 to 26, the remaining auxiliary angles of the conversion, from which the latitude and longitude of the target are obtained.
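Equations 13 to 26 are likewise figures, so the sketch below substitutes standard WGS-84 conversions for the same three steps: reference LLA to ECEF using the radius of curvature N(φ) of Equation 18, the ENU offset rotated into ECEF (the role of Equations 13 to 20), and ECEF to LLA via Bowring's method as an equivalent of the closed form in Equations 21 to 26. The UAV fix used in the example is hypothetical.

```python
import numpy as np

A = 6378137.0                 # WGS-84 semi-major axis [m]
F = 1.0 / 298.257223563       # flattening
B = A * (1.0 - F)             # semi-minor axis
E2 = F * (2.0 - F)            # first eccentricity squared
EP2 = E2 / (1.0 - E2)         # second eccentricity squared

def lla_to_ecef(lat, lon, h):
    """Reference LLA (radians, meters) to ECEF, using the prime-vertical
    radius of curvature N (the role of the patent's Equation 18)."""
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - E2) + h) * np.sin(lat)])

def enu_to_ecef(enu, lat0, lon0, h0):
    """ENU offset at the UAV's GPS position -> ECEF position of the target
    (the role of Equations 13-20)."""
    sl, cl = np.sin(lat0), np.cos(lat0)
    so, co = np.sin(lon0), np.cos(lon0)
    r_t = np.array([[-so, -sl * co, cl * co],
                    [ co, -sl * so, cl * so],
                    [0.0,       cl,      sl]])   # ENU -> ECEF rotation
    return lla_to_ecef(lat0, lon0, h0) + r_t @ enu

def ecef_to_lla(p):
    """ECEF -> geodetic LLA via Bowring's method, a standard equivalent of
    the closed form in Equations 21-26."""
    x, y, z = p
    lon = np.arctan2(y, x)
    rho = np.hypot(x, y)
    beta = np.arctan2(z * A, rho * B)
    lat = np.arctan2(z + EP2 * B * np.sin(beta) ** 3,
                     rho - E2 * A * np.cos(beta) ** 3)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    h = rho / np.cos(lat) - n
    return np.degrees(lat), np.degrees(lon), h

uav = (np.deg2rad(37.5665), np.deg2rad(126.9780), 150.0)  # hypothetical UAV fix
target_ecef = enu_to_ecef(np.array([116.6, 116.6, -100.0]), *uav)
print(ecef_to_lla(target_ecef))
```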
- Figure 6 is a diagram showing the data processing flow of a method for estimating GPS coordinates of a target for an unmanned aerial vehicle according to an embodiment.
- In step S100, a target corresponding to the mission set for the unmanned aerial vehicle is detected by the detection unit through a pre-trained artificial intelligence deep learning model, and a bounding box containing the detected target is created according to the size of the target.
- In step S200, the setting unit sets a reference pixel, which is a target point expressed as a pixel on the camera image plane.
- step S300 the calculation unit calculates the reference pixel coordinates of the target in a two-dimensional pixel coordinate system.
- step S400 the conversion unit converts the reference pixel coordinates of the target into 3D coordinates using at least one of the target's altitude information and the target's depth information, and converts the coordinate system of the converted 3D coordinates.
- step S500 the GPS coordinates of the target are obtained from the conversion unit.
- Figure 7 is a diagram showing the data processing process in step S400 according to an embodiment.
- the dimension conversion unit converts the coordinate system of the reference pixel coordinates. More specifically, in step S410, the reference pixel coordinates of the target are converted from a 2D pixel coordinate system to a 3D camera coordinate system and then into a 3D body coordinate system (FRD coordinate system).
- a pinhole camera model can be used, and at this time, altitude information of the unmanned aerial vehicle for ground targets and target depth information for airborne targets can be used.
- step S430 the coordinate system conversion unit 173 calculates the relative position difference between the unmanned aerial vehicle and the target based on the target coordinates converted to the unmanned aerial vehicle body coordinate system, and converts the target's coordinates to the ENU (East-North-Up) coordinate system.
- the coordinates of the target are converted from the unmanned aerial vehicle body coordinate system (FRD coordinate system) to the inertial coordinate system, which is the ENU coordinate system.
- step S450 the GPS coordinate acquisition unit 175 converts the target coordinates expressed in the ENU coordinate system into GPS coordinates. Specifically, in step S450, the ENU coordinate system is converted to the ECEF coordinate system and the LLA coordinate system (target GPS coordinates). In the embodiment, when converting the ENU coordinate system to the ECEF coordinate system, the GPS coordinate value of the unmanned aerial vehicle is used.
- The device and method for estimating the GPS coordinates of a target for an unmanned aerial vehicle described above automatically map and display the global location of targets from camera images, regardless of the number of targets or their detection location (in the sky or on the ground), enabling more accurate performance of various missions such as searching for missing persons or calculating the global location of targets that evade radar in anti-drone operations.
- The device and method for estimating the GPS coordinates of a target for an unmanned aerial vehicle can be used for searching for missing persons and their belongings with an unmanned aerial vehicle, estimating the location of people in hazardous areas, estimating the GPS coordinates of a destination for UAV landing and delivery, obtaining the location of inspection targets during facility inspection, and estimating the GPS coordinates of enemy drones in the sky.
- To track a ground target, the two-dimensional coordinates of the ground-target pixel expressed in the pixel coordinate system must be converted into three-dimensional coordinates expressed in the drone body coordinate system.
- a total of three coordinate systems are needed: a pixel coordinate system, a camera coordinate system, and a drone body coordinate system.
- the drone body coordinate system uses the FRD (Front-Right-Down) coordinate system.
- The origin of the camera coordinate system lies on a plane of the drone body coordinate system, with one camera axis parallel to the corresponding body axis; in addition, for tracking ground targets, the optical axis of the camera is tilted toward the ground by a tilt angle relative to the forward axis of the drone body coordinate system.
- the conversion from the 2D pixel coordinate system to the 3D camera coordinate system utilizes camera intrinsic parameters. Assuming that the camera mounted on the drone is a pinhole camera model, the two-dimensional pixel coordinates can be converted as shown in Equation 27.
- In Equation 27, K is the intrinsic matrix of the camera, the two-dimensional pixel coordinates are represented as a three-dimensional vector in homogeneous coordinates, and the result is the three-dimensional coordinate converted to the camera coordinate system.
- Next, the extrinsic parameters between the two coordinate systems are used, as shown in Equation 28, to convert the 3D coordinates from the camera coordinate system to the drone body coordinate system.
- The rotation matrix in Equation 28 is the three-dimensional rotation transformation matrix from the camera coordinate system to the drone body coordinate system, and can be calculated as in Equation 29 below.
- In Equation 29, K is the intrinsic matrix of the camera, the two-dimensional pixel coordinates are represented as a three-dimensional homogeneous vector, and the remaining term is the three-dimensional rotation transformation matrix from the camera coordinate system to the drone body coordinate system.
- Figure 8 is a block diagram for explaining a ground target tracking device using a drone equipped with a camera according to an embodiment of the present invention.
- Referring to Figure 8, the ground target tracking device using a camera-equipped drone includes a check unit 210, a yawing angle calculation unit 220, a horizontal alignment unit 230, a distance calculation unit 240, and a vertical alignment unit 250.
- the camera may be a fixed monocular camera, a gimbal camera, etc.
- the check unit 210 checks whether the center pixel of the ground target is located on the image plane.
- a case where the center pixel is checked to be located on the image plane may be a case where the center pixel of a ground target displayed on the image plane is selected according to a user's manipulation.
- the case where the center pixel is checked to be located on the image plane may be a case where the center pixel of the ground target is located on the image plane through an intra-image tracking algorithm.
- Frames of the video are analyzed with existing target tracking algorithms (e.g., MOSSE, CSRT) so that the ground target designated as the region of interest (ROI) can be tracked continuously within the video.
- the above-described tracking algorithm tracks the ground target, which is the object designated by ROI, within the video.
- The center pixel of a moving ground target can thus be derived automatically, maintaining continuity of tracking with the drone; in other words, the drone is moved so that the ground target stays at the center of the image, securing and maintaining the field of view.
- Alternatively, the ground target may be recognized in every frame through artificial intelligence, and the center pixel of the ground target on the image plane may be selected automatically, as in the sketch below.
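A minimal sketch of the ROI-based variant using OpenCV's CSRT tracker, one of the algorithms named above; it assumes the opencv-contrib-python package, and "video.mp4" and the interactively selected ROI are hypothetical inputs.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()
roi = cv2.selectROI("select target", frame)   # user designates the ROI
tracker = cv2.TrackerCSRT_create()            # needs opencv-contrib-python
tracker.init(frame, roi)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)        # follow the ROI frame to frame
    if found:
        x, y, w, h = box
        center = (int(x + w / 2), int(y + h / 2))  # center pixel of target
        print("target center pixel:", center)
```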
- When the check unit confirms that the center pixel of the target is located on the image plane, the yawing angle calculation unit 220 calculates the yawing angle for yawing rotation.
- The yawing angle calculation unit 220 can calculate the angle by which the yawing rotation should be performed as in Equation 31 below (a possible form is sketched after this paragraph).
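Equation 31 is a figure in the patent; a plausible form, given the body-frame coordinates of the selected pixel, is the azimuth atan2(y_B, x_B):

```python
import numpy as np

def yaw_angle(P_B):
    """Assumed form of Equation 31: the yaw needed to bring the selected
    pixel onto the vertical center line is the azimuth of its body-frame
    direction, atan2(y_B, x_B)."""
    x, y, _ = P_B
    return np.arctan2(y, x)

print(np.degrees(yaw_angle(np.array([40.0, 10.0, 25.0]))))  # ~14 deg right
```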
- the horizontal alignment unit 230 aligns in the horizontal direction through yawing rotation by the yawing angle calculated by the yawing angle calculation unit 220.
- Figure 9 is a diagram to explain the state before the selected pixel is centered horizontally on the image plane.
- Figure 10 is a diagram for explaining the state after the selected pixel is centered horizontally on the image plane.
- the distance calculation unit 240 calculates the distance for the drone to move.
- The distance calculation unit 240 can calculate the distance from the drone's altitude reference point H to the ground target T as in Equation 32.
- The distance calculation unit 240 calculates the distance between the drone's altitude reference point H and the point O where the camera optical axis meets the ground, as shown in Equation 33.
- In Equation 33, the quantities used are the height of the camera origin above the ground and the tilt angle of the camera toward the ground relative to the forward axis of the drone body coordinate system.
- Based on Equations 32 and 33, the distance calculation unit 240 calculates the distance d that the drone must move to vertically center the selected pixel, as shown in Equation 34 below.
- In Equation 34, the quantities used are the distance between the drone's altitude reference point H and the point O where the camera optical axis meets the ground; the height of the camera origin above the ground; the x, y, and z coordinate values obtained by converting the pixel coordinates on the image plane into the drone body coordinate system; and the tilt angle toward the ground relative to the forward axis of the drone body coordinate system.
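A hedged reading of Equations 32 to 34, assuming that after horizontal alignment the target lies straight ahead in the body frame; the exact equations are figures in the patent, so these forms are assumptions.

```python
import numpy as np

def move_distance(P_B, h, tilt_rad):
    """Assumed forms: the target's ground range is d_t = h * x_B / z_B
    (Equation 32); the optical axis hits the ground at L = h / tan(tilt)
    (Equation 33); the drone moves by d = d_t - L (Equation 34).
    Positive d means move forward, negative means move backward."""
    x, _, z = P_B
    d_t = h * x / z
    ell = h / np.tan(tilt_rad)
    return d_t - ell

print(move_distance(np.array([41.2, 0.0, 25.0]), h=100.0,
                    tilt_rad=np.deg2rad(30)))  # negative: back up slightly
```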
- the vertical alignment unit 250 aligns in the vertical direction by moving forward or backward by the distance calculated by the distance calculation unit 240.
- When the center pixel of a ground target is selected on the image plane, the drone yaws by the calculated yawing angle to adjust the horizontal direction, and moves forward or backward by the calculated distance to adjust the vertical direction relative to the ground target.
- the ground target is tracked by placing the pixel selected as the target at the center of the image plane.
- Figure 11 is a flowchart illustrating a ground target tracking method using a drone equipped with a camera according to another embodiment of the present invention.
- In step S610, it is checked whether the center pixel of the target has been selected. Checking whether the center pixel is selected may be performed by the check unit 210 shown in FIG. 8. The center pixel may be selected according to the user's manipulation, or may be selected through an object detection program programmed for object selection.
- In step S620, the yawing angle for yawing rotation is calculated.
- the above-described yawing angle calculation may be performed by the yawing angle calculation unit 220 shown in FIG. 8.
- In step S630, the horizontal direction is aligned by yawing by the angle calculated in step S620.
- the horizontal alignment described above can be performed by the horizontal alignment unit 230 shown in FIG. 8.
- step S620 and step S630 are described as being separated, but step S620 and step S630 may be performed simultaneously.
- In step S640, the distance for the drone to move is calculated.
- the above distance calculation may be performed by the distance calculation unit 240 shown in FIG. 8.
- In step S650, the vertical direction is aligned by moving forward or backward by the distance calculated in step S640.
- the vertical alignment described above can be performed by the vertical alignment unit 250 shown in FIG. 8.
- step S640 and step S650 are described as being separated, but step S640 and step S650 may be performed simultaneously.
- The ground target pixel tracking technique according to the present invention was verified in a simulation environment, in which a drone with a camera mounted at a fixed tilt angle was deployed at a given altitude to perform ground target tracking.
- FIGS. 12 to 14 are images for explaining the ground target pixel tracking simulation results: FIG. 12 shows the selection of a ground target to be tracked, FIG. 13 shows the scene after target tracking, and FIG. 14 is a graph showing the error between the target position and the camera center point in the world coordinate system after target tracking.
- The drone adjusts the horizontal direction by yawing by the angle calculated with Equation 31, and tracks the ground target in the vertical direction by moving forward or backward by the distance calculated with Equation 34. As a result, the pixel selected by the tracking technique according to the present invention is placed at the center of the image plane, as shown in FIG. 13.
- The average pixel error (the pixel distance between the center point of the final image plane and the center pixel of the target marker) corresponds to a real distance of 0.62 m, so target tracking is possible within this error.
- A camera is mounted on a drone, and the ground target can be tracked by controlling the drone so that the pixel corresponding to the ground target is located at the center of the image plane.
Abstract
Provided are a method and device for estimating the GPS coordinates of multiple target objects and tracking the target objects on the basis of camera image information from an unmanned aerial vehicle according to an embodiment. The device comprises: a detection unit that, through a pre-trained artificial intelligence deep learning model, detects a target object corresponding to a mission set for the unmanned aerial vehicle and generates a bounding box including the detected target object according to the size of the target object; a setting unit that sets a reference pixel that is a target point represented as a pixel on a camera image plane; a calculation unit that calculates reference pixel coordinates of the target object in a two-dimensional pixel coordinate system; and a conversion unit that converts the reference pixel coordinates of the target object into three-dimensional coordinates by using at least one of altitude information about the unmanned aerial vehicle or depth information about the target object, and converts the coordinate system of the converted three-dimensional coordinates to acquire the GPS coordinates of the target object.
Description
The present disclosure relates to a method and device for estimating and tracking the GPS coordinates of targets. Specifically, the GPS coordinates of multiple targets are estimated based on camera image information from an unmanned aerial vehicle, and a target is tracked by controlling the drone so that the pixel corresponding to the target is located at the center of the image plane.
Unless otherwise indicated herein, the material described in this section is not prior art to the claims of this application, and is not admitted to be prior art by inclusion in this section.
In order to perform missions such as searching for missing persons or calculating the global location of targets that evade anti-drone radar, it is important to accurately determine the location coordinates of multiple targets. To this end, the location coordinates of a target may be obtained through a location sensor or terminal installed on the target, but in real situations such as searching for and detecting missing persons, the location coordinates must be obtained even in an uncooperative environment. An uncooperative environment refers to a case in which the location of a target cannot be determined through wireless communication, for example, when the target cannot communicate or is not equipped with a system such as ADS-B (Automatic Dependent Surveillance-Broadcast). In such an uncooperative environment, it is generally difficult to estimate the location of the target.
Meanwhile, an unmanned aerial vehicle (UAV) is an aircraft that flies without a pilot on board, either remotely controlled from the ground or autonomously or semi-autonomously along a preset route. Unmanned aerial vehicles are also called drones.
Drones increasingly use cameras to detect or track objects or people on the ground. During such filming, a drone can detect targets such as objects or people and perform tracking missions.
Accordingly, there is a need for an invention that can fix on a target, such as a specific point or a specific object, more simply and precisely while filming, and track the captured target.
An apparatus and method for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle according to an embodiment calculate the GPS coordinates of multiple targets detected from images captured by a camera mounted on the unmanned aerial vehicle, and estimate the precise global locations of the multiple targets on a three-dimensional map.
In the embodiment, targets corresponding to given missions, such as missing persons or vehicles violating traffic laws, are automatically detected through a pre-trained artificial intelligence deep learning model. A bounding box enclosing the detected target is created according to the target's size. The reference pixel of the bounding box is set according to where the target was detected (on the ground or in the air), the reference pixel coordinates of the target are calculated, and the calculated reference pixel coordinates are converted through a series of coordinate systems, including the unmanned aerial vehicle body coordinate system, to finally obtain the GPS coordinates of the target.
The apparatus and method according to the embodiment can select a desired target reference pixel within the bounding box according to the user's purpose. In addition, when the detected target is a ground target, depth information is computed from the altimeter mounted on the unmanned aerial vehicle; when it is an airborne target, depth information is acquired from an onboard stereo camera or depth camera.
An apparatus and method for tracking a ground target according to an embodiment provide a ground target tracking method using a camera-equipped drone, in which the drone is controlled so that the pixel corresponding to the ground target is located at the center of the image plane.
A device for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle according to an embodiment includes: a detection unit that detects a target corresponding to the mission set for the unmanned aerial vehicle through a pre-trained artificial intelligence deep learning model and creates a bounding box containing the detected target; a setting unit that sets a reference pixel, which is a target point expressed as a pixel on the camera image plane; a calculation unit that calculates the reference pixel coordinates of the target in a two-dimensional pixel coordinate system; and a conversion unit that converts the reference pixel coordinates of the target into three-dimensional coordinates using at least one of the altitude information and the depth information of the target, and converts the coordinate system of the converted three-dimensional coordinates to obtain the GPS coordinates of the target.
A ground target tracking method according to an embodiment is a method of tracking a ground target using a drone equipped with a camera, and includes: checking whether the center pixel of a ground target captured by the camera mounted on the drone has been selected on the image plane; if the center pixel of the ground target is confirmed as selected, calculating a yawing angle for yawing rotation; aligning in the horizontal direction through yawing rotation by the calculated yawing angle; calculating a distance for the drone to move; aligning in the vertical direction by moving forward or backward by the calculated distance; and tracking the ground target based on the aligned horizontal and vertical directions.
The device and method for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle described above automatically display the global location of targets on a map from camera images, regardless of the number of targets or their detection location (in the sky or on the ground), enabling more accurate performance of various missions such as searching for missing persons or calculating the global location of targets that evade radar in anti-drone operations.
The device and method according to the embodiment draw on computer vision, artificial intelligence, robotics, aerospace, and flight-dynamics coordinate transformations, and can be applied to tasks such as searching for missing persons and detecting intruding unmanned aerial vehicles.
The device and method according to the embodiment can be used for searching for missing persons and their belongings with an unmanned aerial vehicle, estimating the location of people in hazardous areas, estimating the GPS coordinates of a destination for UAV landing and delivery, obtaining the location of inspection targets during facility inspection, and estimating the GPS coordinates of enemy unmanned aerial vehicles in the sky.
According to the ground target tracking method and device of the embodiment, a camera is mounted on a drone, and a ground target can be tracked by controlling the drone so that the pixel corresponding to the ground target is located at the center of the image plane.
The effects of the present invention are not limited to those described above, and should be understood to include all effects that can be inferred from the configuration of the invention described in the detailed description or the claims.
Figure 1 is a diagram showing the data processing configuration of an apparatus 100 for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle according to an embodiment of the present invention.
Figure 2 is a diagram showing the data processing configuration of the conversion unit 170 according to an embodiment of the present invention.
Figure 3 is a diagram showing the positional relationship between an unmanned aerial vehicle and a ground target according to an embodiment of the present invention.
Figure 4 is a diagram showing the positional relationship between an unmanned aerial vehicle and an airborne target according to an embodiment of the present invention.
Figure 5 is a diagram showing the X_B-Y_B plane of the FRD (Front-Right-Down) body coordinate system according to an embodiment of the present invention.
Figure 6 is a diagram showing the data processing flow of a method for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle according to an embodiment of the present invention.
Figure 7 is a diagram showing the data processing process of step S400 according to an embodiment of the present invention.
Figure 8 is a block diagram illustrating a ground target tracking device using a camera-equipped drone according to another embodiment of the present invention.
Figure 9 is a diagram illustrating the state before the selected pixel is horizontally centered on the image plane according to another embodiment of the present invention.
Figure 10 is a diagram illustrating the state after the selected pixel is horizontally centered on the image plane according to another embodiment of the present invention.
Figure 11 is a flowchart illustrating a ground target tracking method using a camera-equipped drone according to another embodiment of the present invention.
Figures 12 to 14 are images for explaining ground target pixel tracking simulation results according to another embodiment of the present invention.
The advantages and features of the present invention, and the methods of achieving them, will become apparent with reference to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are provided merely so that the disclosure of the present invention is complete and so that the scope of the invention is fully conveyed to those of ordinary skill in the art to which the present invention pertains, and the present invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.
In describing embodiments of the present invention, detailed descriptions of well-known functions or configurations are omitted where they would unnecessarily obscure the gist of the present invention. The terms used below are defined in consideration of their functions in the embodiments of the present invention and may vary depending on the intention or customary usage of users or operators. Their definitions should therefore be made based on the contents of this entire specification.
An apparatus and method according to an embodiment for estimating the GPS coordinates of a target based on camera image information of an unmanned aerial vehicle estimate the position coordinates, i.e., the GPS coordinates, of multiple targets detected from the images of a camera mounted on the unmanned aerial vehicle, and display the exact global positions of the multiple targets on a three-dimensional map.
In an embodiment, targets corresponding to a given mission, such as missing persons or vehicles violating traffic laws, are automatically detected through a pre-trained artificial-intelligence deep-learning model. A bounding box sized to each detected target is then generated. The reference pixel of the bounding box is set according to the detection location of the target (on the ground or in the air), the reference pixel coordinates of the target are calculated, the calculated reference pixel coordinates are converted into three-dimensional coordinates, and the coordinates are then appropriately transformed using the GPS coordinates of the unmanned aerial vehicle to finally obtain the target's GPS coordinates.
The apparatus and method according to an embodiment for estimating the GPS coordinates of multiple targets based on camera image information of an unmanned aerial vehicle can select the desired target reference pixel within the bounding box according to the user's purpose. Furthermore, when the detected target is a ground target, depth information is computed from the altimeter mounted on the unmanned aerial vehicle; when it is an airborne target, depth information is obtained from a stereo camera or a depth camera. Depth information is required for the coordinate conversion. When converting coordinates from a two-dimensional image to three-dimensional space (2D to 3D), reconstructing 3D from 2D information amounts to generating one additional dimension of information. Scale ambiguity therefore arises: because the exact scale is unknown, the scene can only be reconstructed up to proportion. If the value along one reference axis, i.e., accurately known depth information, is available, the scale is known exactly, so the conversion from the 2D image to 3D space is no longer merely proportional but yields exact coordinates. Since the unmanned aerial vehicle knows its altitude from the altimeter, the measured altitude value is used as the depth information that fixes the scale for a ground target. For an airborne target, depth is obtained from an onboard camera sensor capable of measuring depth, such as a stereo camera or a depth camera.
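To make the scale-ambiguity point concrete, the short Python sketch below back-projects a pixel through a pinhole intrinsic matrix: without depth only a bearing ray is recovered, while a known depth (for example, the altimeter reading used for a ground target) fixes the scale and yields a full three-dimensional point. The intrinsic values and the pixel are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy are illustrative values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

p_u = np.array([400.0, 300.0, 1.0])     # pixel (u, v) in homogeneous form

ray = np.linalg.inv(K) @ p_u            # direction only: scale is ambiguous
ray /= np.linalg.norm(ray)              # any positive multiple projects to the same pixel

z_c = 50.0                              # known depth along the optical axis, e.g. from an altimeter
P_c = z_c * (np.linalg.inv(K) @ p_u)    # known depth removes the ambiguity: a unique 3-D point

print("bearing ray:", ray)
print("3-D point  :", P_c)
```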
Figure 1 is a diagram showing the data-processing configuration of a target GPS coordinate estimation apparatus 100 of an unmanned aerial vehicle according to an embodiment.
Referring to Figure 1, the target GPS coordinate estimation apparatus 100 of an unmanned aerial vehicle according to an embodiment may include a detection unit 110, a setting unit 130, a calculation unit 150, and a conversion unit 170. The term 'unit' used in this specification should be construed, depending on the context in which it is used, to include software, hardware, or a combination thereof. For example, software may be machine language, firmware, embedded code, or application software. As another example, hardware may be a circuit, a processor, a computer, an integrated circuit, an integrated-circuit core, a sensor, a micro-electro-mechanical system (MEMS), a passive device, or a combination thereof.
The detection unit 110 detects a target corresponding to the mission set for the unmanned aerial vehicle through a pre-trained artificial-intelligence deep-learning model. Missions set for the unmanned aerial vehicle may include searching for missing persons, detecting vehicles, and detecting enemy aircraft in the air. In an embodiment, the detection unit 110 may also perform multi-targeting, detecting one or more targets.
In an embodiment, the detection unit 110 can detect both ground targets and airborne targets. A ground target is a target located on the ground, and an airborne target is a target located in the air. In an embodiment, the detection unit 110 may detect ground targets and airborne targets through an artificial-intelligence deep-learning model trained with separate detection processes for ground targets and airborne targets. In an embodiment, the artificial-intelligence deep-learning model includes, but is not limited to, at least one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), and a bidirectional recurrent deep neural network (BRDNN). In an embodiment, the model can classify target detection processes according to the type of ground target and learn the classified detection processes. Target types may include, but are not limited to, people, personal belongings, hats, and bicycles.
The detection unit 110 generates a bounding box containing each detected target, sized according to the target.
The setting unit 130 then sets the reference pixel of the target contained in the bounding box. In an embodiment, the reference pixel of the target may be a target point represented as a pixel on the camera image plane. That is, the setting unit 130 may set a reference pixel to obtain a target point within the bounding box.
In an embodiment, the setting unit 130 may set the reference pixel of the target to the center point of the bounding box.
The setting unit 130 may also set the reference pixel of the target to any desired pixel within the bounding box according to the user's purpose. For example, if the user's purpose is shooting down and the target is an aircraft, a wing of the aircraft may be set as the reference pixel; if the target is a building, a key part of the building may be set as the reference pixel. Likewise, if the user's purpose is a search and the target is a person, the reference pixel may be set according to the person's posture. For a standing person, ground-level location information is needed, so the detected person's feet may be set as the reference pixel of the target. For a person lying down, the person's torso may be set as the reference pixel.
The calculation unit 150 calculates the reference pixel coordinates of the target in the two-dimensional pixel coordinate system. That is, the calculation unit 150 expresses the reference pixel set by the setting unit 130 as a coordinate value in the two-dimensional pixel coordinate system.
The conversion unit 170 converts the reference pixel coordinates of the target into three-dimensional coordinates using at least one of the altitude information of the unmanned aerial vehicle and depth information. The three-dimensional coordinates here are values in the camera coordinate system. To convert them into the body coordinate system of the unmanned aerial vehicle, extrinsic parameters are used. The extrinsic parameters describe where and in what orientation the camera is mounted relative to the center of the vehicle body, and may include the camera tilt angle.
That is, the coordinate system of the converted three-dimensional coordinates in the embodiment may be the body coordinate system used by the unmanned aerial vehicle. The conversion unit 170 then transforms this coordinate system to obtain the GPS coordinates of the target.
The process of obtaining the target GPS coordinates in the conversion unit 170 is described in detail below.
Figure 2 is a diagram showing the data-processing configuration of the conversion unit 170 according to an embodiment.
Referring to Figure 2, the conversion unit 170 according to an embodiment may include a dimension conversion unit 171, a coordinate-system conversion unit 173, and a GPS coordinate acquisition unit 175.
In an embodiment, the dimension conversion unit 171 converts the coordinate system of the reference pixel coordinates. The dimension conversion unit 171 may convert the reference pixel coordinates of the target from the two-dimensional pixel coordinate system into the three-dimensional body coordinate system of the unmanned aerial vehicle.
More specifically, for a ground target, the dimension conversion unit 171 uses the altitude of the unmanned aerial vehicle as the depth of the ground target and converts the reference pixel coordinates computed in the two-dimensional pixel coordinate system into three dimensions.
In an embodiment, the dimension conversion unit 171 computes, through the camera model of the unmanned aerial vehicle, the three-dimensional position in the camera coordinate system corresponding to the two-dimensional reference pixel coordinates, and then converts the reference pixel coordinates into three-dimensional coordinates in the body coordinate system using the extrinsic parameter values between the camera and the center of gravity (CG) of the unmanned aerial vehicle.
Figure 3 is a diagram showing the positional relationship between an unmanned aerial vehicle and a ground target according to an embodiment.
Referring to Figure 3, in an embodiment the dimension conversion unit 171 models the camera mounted on the unmanned aerial vehicle as a pinhole camera and converts the two-dimensional reference pixel coordinates into three-dimensional camera coordinates through Equation 1.
P_c = K^{-1} P_u ... (Equation 1)
(K: the intrinsic matrix of the camera; P_u = (u, v, 1)^T: the two-dimensional pixel coordinates expressed in homogeneous coordinates as a three-dimensional vector; P_c: the three-dimensional coordinates converted into the camera coordinate system)
Next, assuming that the origins of the camera coordinate system and the body coordinate system of the unmanned aerial vehicle coincide, the dimension conversion unit 171 converts the three-dimensional camera-frame coordinates P_c into the body-frame vector P_B using Equation 2.
P_B = R_BC P_c ... (Equation 2)
(R_BC: the three-dimensional rotation matrix from the camera coordinate system to the body coordinate system of the unmanned aerial vehicle; P_c: the three-dimensional coordinates converted into the camera coordinate system)
Since the camera mounted on the unmanned aerial vehicle is tilted toward the ground within a range of 0 to 90 degrees, the dimension conversion unit 171 can compute the three-dimensional rotation matrix R_BC from the camera coordinate system to the body coordinate system through Equation 3.
R_BC = R_y R_z R_x ... (Equation 3)
(R_y: rotation matrix about the y-axis; R_z: rotation matrix about the z-axis; R_x: rotation matrix about the x-axis)
The dimension conversion unit 171 then computes the body-frame vector P_B using Equation 4, which follows from Equations 1 and 2.
P_B = R_BC K^{-1} P_u ... (Equation 4)
(K: the intrinsic matrix of the camera; P_u = (u, v, 1)^T: the two-dimensional pixel coordinates expressed in homogeneous coordinates as a three-dimensional vector)
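The chain of Equations 1 to 4 can be sketched in Python as follows. The explicit form of R_BC below is derived under an assumed convention (camera frame with x to the right, y downward, and the optical axis as z; FRD body frame; camera pitched toward the ground by θ) and is offered as one plausible reading of Equation 3, not as the exact matrix of this disclosure.

```python
import numpy as np

def r_bc(theta):
    """Rotation from the assumed camera frame (x right, y down, z = optical axis)
    to the FRD body frame, for a camera tilted toward the ground by theta [rad]."""
    s, c = np.sin(theta), np.cos(theta)
    # Columns are the camera axes expressed in the body frame.
    return np.array([[0.0,  -s,   c],
                     [1.0, 0.0, 0.0],
                     [0.0,   c,   s]])

def pixel_to_body(p_uv, K, theta):
    """Equation 4: P_B = R_BC K^-1 P_u (a direction, defined only up to scale)."""
    p_u = np.array([p_uv[0], p_uv[1], 1.0])
    return r_bc(theta) @ np.linalg.inv(K) @ p_u

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_B = pixel_to_body((400.0, 300.0), K, np.deg2rad(30.0))
print(P_B)   # forward-right-down direction toward the target; scale still free
```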
Meanwhile, when the detected target is an airborne target, the dimension conversion unit 171 collects depth information of the airborne target through a stereo camera or a depth camera mounted on the unmanned aerial vehicle so that the GPS coordinates of the airborne target can be estimated.
For example, given the depth z_c of the airborne target, the dimension conversion unit 171 computes the three-dimensional position P_c of the airborne target in the camera coordinate system using Equation 5, and the body-frame vector P_B using Equation 6.
P_c = z_c K^{-1} P_u ... (Equation 5)
(P_c: the three-dimensional coordinates converted into the camera coordinate system; K: the intrinsic matrix of the camera; z_c: depth of the target; I: identity matrix; P_u: the two-dimensional pixel coordinates expressed in homogeneous coordinates as a three-dimensional vector)
P_B = R_BC z_c K^{-1} P_u ... (Equation 6)
(P_B: the vector in the body coordinate system of the unmanned aerial vehicle; R_BC: the three-dimensional rotation matrix from the camera coordinate system to the body coordinate system; K: the intrinsic matrix of the camera; z_c: depth of the target; I: identity matrix; P_u: the two-dimensional pixel coordinates expressed in homogeneous coordinates as a three-dimensional vector)
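A minimal sketch of Equations 5 and 6 for an airborne target, reusing the same assumed R_BC convention as above; here the measured depth z_c makes the body-frame coordinates metric rather than up-to-scale:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
theta = np.deg2rad(30.0)
s, c = np.sin(theta), np.cos(theta)
R_BC = np.array([[0.0, -s, c], [1.0, 0.0, 0.0], [0.0, c, s]])  # same assumed convention as above

z_c = 35.0                                                     # depth from a stereo/depth camera [m]
P_c = z_c * (np.linalg.inv(K) @ np.array([400.0, 300.0, 1.0])) # Equation 5
P_B = R_BC @ P_c                                               # Equation 6: metric FRD coordinates
print(P_B)
```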
In an embodiment, once the dimension conversion unit 171 has converted the target's two-dimensional reference pixel coordinates into the three-dimensional body coordinate system of the unmanned aerial vehicle, the coordinate-system conversion unit 173 computes the relative position difference between the unmanned aerial vehicle and the target based on those converted coordinates, and converts the three-dimensional target coordinates into the ENU (East-North-Up) coordinate system.
Figure 4 is a diagram showing the positional relationship between an unmanned aerial vehicle and an airborne target according to an embodiment, and Figure 5 is a diagram showing the X_B Y_B plane of the FRD (Front-Right-Down) coordinate system according to an embodiment. Referring to Figures 4 and 5, letting the target's reference pixel coordinates converted into the body coordinate system be P_t = (x_t, y_t, z_t)^T, the coordinate-system conversion unit 173 computes the angle Ψ between the target and the unmanned aerial vehicle on the X_B Y_B plane of the body coordinate system through Equation 7: Ψ = atan2(y_t, x_t).
Here, the coordinate-system conversion unit 173 assumes that the ground target lies on ground at the same elevation as the sea-level reference used for the altitude of the unmanned aerial vehicle.
In an embodiment, unless the user makes a specific selection, the reference pixel of a ground target is set to the center point of the automatically detected bounding box. Accordingly, the coordinate-system conversion unit 173 computes the distance d_t to the ground target on the X_B Y_B plane of the body coordinate system from the onboard altimeter information through Equation 8.
d_t = h · sqrt(x_t^2 + y_t^2) / z_t ... (Equation 8)
(h: altitude of the unmanned aerial vehicle; d_t: distance from the foot of the perpendicular dropped from the origin of the body coordinate system to the ground, to the target)
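A sketch of Equations 7 and 8 as reconstructed above: the bearing Ψ depends only on the direction of the body-frame vector, while the altimeter reading h supplies the scale for the horizontal distance. The input vector is an arbitrary illustrative direction.

```python
import math

def bearing_and_ground_distance(P_B, h):
    """Equation 7: bearing in the X_B Y_B plane; Equation 8 (as reconstructed):
    ground distance recovered from altitude by similar triangles."""
    x_t, y_t, z_t = P_B
    psi = math.atan2(y_t, x_t)             # angle between target and vehicle in the X_B Y_B plane
    d_t = h * math.hypot(x_t, y_t) / z_t   # altitude h fixes the unknown scale of (x_t, y_t, z_t)
    return psi, d_t

psi, d_t = bearing_and_ground_distance((0.9, 0.2, 0.6), h=40.0)
print(math.degrees(psi), d_t)
```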
Meanwhile, unless the user makes a specific selection, the center point of the bounding box is set as the reference pixel of an airborne target. The coordinate-system conversion unit 173 therefore computes the distance d_t to the airborne target on the X_B Y_B plane of the body coordinate system through Equation 9.
d_t = sqrt(x_t^2 + y_t^2) ... (Equation 9)
(x_t: x-coordinate; y_t: y-coordinate)
In addition, given the altimeter information h of the unmanned aerial vehicle, the coordinate-system conversion unit 173 computes the altitude h_t of the airborne target through Equation 10.
h_t = h - z_t ... (Equation 10)
(h: altitude of the unmanned aerial vehicle; z_t: z-coordinate of the three-dimensional coordinates converted into the body coordinate system)
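The airborne case of Equations 9 and 10 is simpler, since the body-frame coordinates are already metric; a sketch under the same FRD convention (z pointing down, so a target above the vehicle has negative z_t):

```python
import math

def airborne_distance_and_altitude(P_B, h):
    """Equations 9 and 10 (as reconstructed): metric FRD coordinates from a depth camera."""
    x_t, y_t, z_t = P_B
    d_t = math.hypot(x_t, y_t)   # Equation 9: horizontal distance in the X_B Y_B plane
    h_t = h - z_t                # Equation 10: target altitude (z_B points down)
    return d_t, h_t

print(airborne_distance_and_altitude((30.0, 8.0, -5.0), h=60.0))
```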
For both ground and airborne targets, the coordinate-system conversion unit 173 then uses the computed angle Ψ between the target and the vehicle on the X_B Y_B plane of the body coordinate system and the distance d_t to the target on that plane to convert from the body coordinate system (FRD) of the unmanned aerial vehicle to the inertial coordinate system (ENU) through Equations 11 and 12.
In an embodiment, the coordinate-system conversion unit 173 computes the target's x-coordinate in the ENU coordinate system, x_ENU, using Equation 11.
x_ENU = d_t · sin(Ψ + α) ... (Equation 11)
(d_t: distance to the target on the X_B Y_B plane of the body coordinate system; Ψ: angle between the target and the unmanned aerial vehicle on the X_B Y_B plane of the body coordinate system; α: heading angle of the unmanned aerial vehicle, clockwise from compass north (0 ≤ α < 360))
The coordinate-system conversion unit 173 computes the target's y-coordinate in the ENU coordinate system, y_ENU, using Equation 12.
y_ENU = d_t · cos(Ψ + α) ... (Equation 12)
(d_t: distance to the target on the X_B Y_B plane of the body coordinate system; Ψ: angle between the target and the unmanned aerial vehicle on the X_B Y_B plane of the body coordinate system; α: heading angle of the unmanned aerial vehicle, clockwise from compass north (0 ≤ α < 360))
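Equations 11 and 12, as reconstructed, rotate the in-plane polar coordinates (d_t, Ψ) by the compass heading α into East and North components; a sketch:

```python
import math

def body_to_enu(d_t, psi, alpha_deg):
    """Equations 11 and 12 (as reconstructed): target offset in the ENU frame.
    psi is in radians; alpha_deg is the vehicle heading, clockwise from north."""
    bearing = math.radians(alpha_deg) + psi   # target bearing, clockwise from north
    x_enu = d_t * math.sin(bearing)           # East component
    y_enu = d_t * math.cos(bearing)           # North component
    return x_enu, y_enu

print(body_to_enu(25.0, math.radians(15.0), alpha_deg=90.0))
```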
In an embodiment, the GPS coordinate acquisition unit 175 converts the target coordinates expressed in the ENU coordinate system into GPS coordinates. GPS coordinates comprise latitude, longitude, and altitude. For example, the GPS coordinate acquisition unit 175 may first convert the target coordinates from the ENU coordinate system into the ECEF (Earth-Centered Earth-Fixed) coordinate system, and then from the ECEF coordinate system into the LLA coordinate system (GPS coordinates) to obtain the target's GPS coordinates.
In an embodiment, the GPS coordinate acquisition unit 175 uses the current GPS coordinates of the unmanned aerial vehicle to convert the target coordinates from the ENU coordinate system into the ECEF coordinate system, and then into the LLA (Latitude-Longitude-Altitude) coordinate system, finally obtaining the target GPS coordinates. Specifically, the GPS coordinate acquisition unit 175 converts the ENU (East-North-Up) target coordinates computed by the coordinate-system conversion unit 173 into the ECEF coordinate system using Equations 13 to 18.
In an embodiment, for the conversion from the ENU coordinate system to the ECEF coordinate system, the GPS coordinate acquisition unit 175 first computes t using Equation 13.
t = cos(Φ) · z_ENU - sin(Φ) · y_ENU ... (Equation 13)
(z_ENU: z-coordinate of the target in the ENU coordinate system; y_ENU: y-coordinate of the target in the ENU coordinate system; Φ: latitude of the unmanned aerial vehicle)
The value t computed in this way is a temporary variable used in Equations 14 and 15 for the conversion from the ENU coordinate system to the ECEF coordinate system.
Next, the GPS coordinate acquisition unit 175 computes d_x, the relative x-axis position difference between the unmanned aerial vehicle and the target in the ECEF coordinate system, using Equation 14.
d_x = cos(λ) · t - sin(λ) · x_ENU ... (Equation 14)
(λ: longitude of the unmanned aerial vehicle)
Next, the GPS coordinate acquisition unit 175 computes d_y, the relative y-axis position difference between the unmanned aerial vehicle and the target in the ECEF coordinate system, using Equation 15.
d_y = sin(λ) · t + cos(λ) · x_ENU ... (Equation 15)
(λ: longitude of the unmanned aerial vehicle)
Next, the GPS coordinate acquisition unit 175 computes x_0, the x-axis position of the unmanned aerial vehicle in the ECEF coordinate system, using Equation 16.
x_0 = N(Φ) · cos(Φ) · cos(λ) ... (Equation 16)
(Φ: latitude of the unmanned aerial vehicle; N(Φ): radius of curvature of the Earth ellipsoid at latitude Φ; λ: longitude of the unmanned aerial vehicle)
Next, the GPS coordinate acquisition unit 175 computes y_0, the y-axis position of the unmanned aerial vehicle in the ECEF coordinate system, using Equation 17.
y_0 = N(Φ) · cos(Φ) · sin(λ) ... (Equation 17)
(Φ: latitude of the unmanned aerial vehicle; N(Φ): radius of curvature of the Earth ellipsoid at latitude Φ; λ: longitude of the unmanned aerial vehicle)
In an embodiment, the radius of curvature N(Φ) of the Earth ellipsoid at latitude Φ may be computed by the GPS coordinate acquisition unit 175 through Equation 18.
N(Φ) = a^2 / sqrt(a^2 cos^2(Φ) + b^2 sin^2(Φ)) ... (Equation 18)
(a: semi-major axis of the Earth ellipsoid (WGS84); b: semi-minor axis of the Earth ellipsoid (WGS84); Φ: latitude of the unmanned aerial vehicle)
The GPS coordinate acquisition unit 175 then computes x_ECEF, the x-coordinate in the ECEF coordinate system, from the vehicle position expressed in the ECEF coordinate system and the relative position difference between the vehicle and the target, using Equation 19.
x_ECEF = x_0 + d_x ... (Equation 19)
(x_0: position of the unmanned aerial vehicle in the ECEF coordinate system; d_x: relative position difference between the unmanned aerial vehicle and the target in the ECEF coordinate system)
Likewise, the GPS coordinate acquisition unit 175 computes y_ECEF, the y-coordinate in the ECEF coordinate system, from the vehicle position expressed in the ECEF coordinate system and the relative position difference between the vehicle and the target, using Equation 20.
y_ECEF = y_0 + d_y ... (Equation 20)
(y_0: position of the unmanned aerial vehicle in the ECEF coordinate system; d_y: relative position difference between the unmanned aerial vehicle and the target in the ECEF coordinate system)
That is, in an embodiment the GPS coordinate acquisition unit 175 can convert the target's position into the ECEF coordinate system using Equations 19 and 20.
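Equations 13 to 20 compose into the standard ENU-to-ECEF rotation about the vehicle's latitude and longitude. The sketch below follows that standard form; note that the altitude term in x_0 and y_0 is included as an assumption for completeness, although Equations 16 and 17 list only N(Φ).

```python
import math

A = 6378137.0          # WGS84 semi-major axis [m]
B = 6356752.314245     # WGS84 semi-minor axis [m]

def curvature_radius(phi):
    """Equation 18: radius of curvature N(phi) of the Earth ellipsoid."""
    return A * A / math.sqrt(A * A * math.cos(phi) ** 2 + B * B * math.sin(phi) ** 2)

def enu_offset_to_ecef(x_enu, y_enu, z_enu, lat_deg, lon_deg, alt):
    """Equations 13-20: ENU target offset at the vehicle's GPS position -> target ECEF x, y."""
    phi, lam = math.radians(lat_deg), math.radians(lon_deg)
    t = math.cos(phi) * z_enu - math.sin(phi) * y_enu     # Equation 13
    dx = math.cos(lam) * t - math.sin(lam) * x_enu        # Equation 14
    dy = math.sin(lam) * t + math.cos(lam) * x_enu        # Equation 15
    n = curvature_radius(phi)
    x0 = (n + alt) * math.cos(phi) * math.cos(lam)        # Equation 16 (altitude term assumed)
    y0 = (n + alt) * math.cos(phi) * math.sin(lam)        # Equation 17 (altitude term assumed)
    return x0 + dx, y0 + dy                               # Equations 19 and 20

print(enu_offset_to_ecef(120.0, -80.0, 0.0, lat_deg=37.5, lon_deg=127.0, alt=100.0))
```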
The GPS coordinate acquisition unit 175 then converts the target coordinates expressed in the ECEF coordinate system into GPS coordinates (ECEF to LLA), finally obtaining the target GPS coordinates.
In an embodiment, the GPS coordinate acquisition unit 175 converts the ECEF target coordinates obtained through Equations 19 and 20 into GPS coordinates using Equations 21 to 26, and takes the converted coordinates as the estimated target GPS coordinates.
In an embodiment, the GPS coordinate acquisition unit 175 computes the magnitude r of the target position vector in the ECEF coordinate system using Equation 21: r = sqrt(x_ECEF^2 + y_ECEF^2 + z_ECEF^2).
The GPS coordinate acquisition unit 175 then computes the linear eccentricity E using Equation 22.
E = sqrt(a^2 - b^2) ... (Equation 22)
(a: semi-major axis of the Earth ellipsoid (WGS84); b: semi-minor axis of the Earth ellipsoid (WGS84))
The GPS coordinate acquisition unit 175 then computes u using Equation 23.
The GPS coordinate acquisition unit 175 then computes β_0 using Equation 24.
(u: semi-minor axis of the confocal ellipsoid; β: angle from the Earth's center with respect to the reference ellipsoid; β_0: angle from the Earth's center with respect to the confocal ellipsoid)
The GPS coordinate acquisition unit 175 then computes Δβ using Equation 25.
Finally, the GPS coordinate acquisition unit 175 computes β using Equation 26.
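The full expressions of the confocal-ellipsoid solution in Equations 21 to 26 are not reproduced in this text. As a hedged stand-in, the widely used iterative ECEF-to-geodetic conversion below recovers latitude, longitude, and altitude and agrees closely with such closed-form methods; it is offered as an alternative technique, not as the exact derivation of this disclosure.

```python
import math

A = 6378137.0
B = 6356752.314245
E2 = 1.0 - (B * B) / (A * A)      # first eccentricity squared

def ecef_to_lla(x, y, z, iterations=10):
    """Iterative ECEF -> (lat, lon, alt); a standard alternative to Equations 21-26."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1.0 - E2))           # initial guess
    for _ in range(iterations):
        n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
        alt = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1.0 - E2 * n / (n + alt)))
    return math.degrees(lat), math.degrees(lon), alt

print(ecef_to_lla(-3120000.0, 4080000.0, 3870000.0))
```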
The method for estimating the target GPS coordinates of an unmanned aerial vehicle is described next in order. Since the operation (function) of the method according to the embodiment is essentially the same as that of the target GPS coordinate estimation system described above, descriptions overlapping with Figures 1 to 5 are omitted.
Figure 6 is a diagram showing the data-processing flow of the method for estimating the target GPS coordinates of an unmanned aerial vehicle according to an embodiment.
Referring to Figure 6, in step S100 the detection unit detects a target corresponding to the mission set for the unmanned aerial vehicle through a pre-trained artificial-intelligence deep-learning model, and generates a bounding box containing the detected target according to its size. In step S200 the setting unit sets the reference pixel, i.e., the target point represented as a pixel on the camera image plane. In step S300 the calculation unit calculates the reference pixel coordinates of the target in the two-dimensional pixel coordinate system. In step S400 the conversion unit converts the reference pixel coordinates of the target into three-dimensional coordinates using at least one of the vehicle's altitude information and the target's depth information, and transforms the coordinate system of the converted three-dimensional coordinates. Then, in step S500, the conversion unit obtains the GPS coordinates of the target.
Figure 7 is a diagram showing the data processing of step S400 according to an embodiment.
Referring to Figure 7, in step S410 the dimension conversion unit converts the coordinate system of the reference pixel coordinates. More specifically, in step S410 the reference pixel coordinates of the target are converted from the two-dimensional pixel coordinate system into the three-dimensional camera coordinate system, and then into the three-dimensional body coordinate system (FRD coordinate system). When converting the two-dimensional pixel coordinate system into the three-dimensional camera coordinate system in step S410, the pinhole camera model is used; here, the vehicle's altitude information is used for ground targets and the target's depth information is used for airborne targets.
When converting the three-dimensional camera coordinate system into the three-dimensional body coordinate system, the extrinsic parameters between the camera and the vehicle's CG may be used.
In step S430 the coordinate-system conversion unit 173 computes the relative position difference between the unmanned aerial vehicle and the target based on the target coordinates converted into the body coordinate system, and converts the target coordinates into the ENU (East-North-Up) coordinate system. In an embodiment, step S430 converts the target coordinates from the body coordinate system (FRD coordinate system) into the inertial coordinate system, which is the ENU coordinate system.
In step S450 the GPS coordinate acquisition unit 175 converts the ENU target coordinates into GPS coordinates. Specifically, in step S450 the ENU coordinate system is converted into the ECEF coordinate system and then into the LLA coordinate system (target GPS coordinates). In the embodiment, the GPS coordinate values of the unmanned aerial vehicle are used when converting from the ENU coordinate system to the ECEF coordinate system.
The apparatus and method for estimating the target GPS coordinates of an unmanned aerial vehicle as described above automatically map and display the global position of a target from camera images regardless of the number of targets or their detection location, in the air or on the ground, enabling various missions, such as rescuing missing persons or, for anti-drone purposes, computing the global position of targets that elude radar, to be performed more accurately.
The apparatus and method for estimating the target GPS coordinates of an unmanned aerial vehicle according to an embodiment can be used for searching for missing persons and their belongings with an unmanned aerial vehicle, estimating the location of people in dangerous areas, estimating the GPS coordinates of a destination for landing and delivery, obtaining the location of objects to be inspected during facility inspection, and estimating the GPS coordinates of enemy unmanned aerial vehicles in the air.
A ground target tracking apparatus and method using a camera-equipped drone according to another embodiment of the present invention are described below.
First, to control a camera-equipped drone so that a pixel on the camera image plane is brought to the center of the image plane, the two-dimensional coordinates of the ground-target pixel expressed in the pixel coordinate system must be converted into three-dimensional coordinates expressed in the drone body coordinate system.
This conversion requires three coordinate systems in total: the pixel coordinate system, the camera coordinate system, and the drone body coordinate system. The drone body coordinate system used here is the FRD (Front-Right-Down) coordinate system.
The origin of the camera coordinate system lies on the X_B Y_B plane of the drone body coordinate system, and the y-axis of the camera coordinate system is parallel to the Y_B axis of the drone body coordinate system. In addition, for ground target tracking, the optical axis of the camera coordinate system is tilted toward the ground by an angle θ relative to the X_B axis of the drone body coordinate system.
The coordinate-system conversion is described below.
The conversion from the two-dimensional pixel coordinate system to the three-dimensional camera coordinate system uses the camera intrinsic parameters. Assuming the camera mounted on the drone is a pinhole camera, the two-dimensional pixel coordinates can be converted as in Equation 27: P_c = K^{-1} P_u.
Here K is the intrinsic matrix of the camera, P_u is the three-dimensional vector expressing the two-dimensional pixel coordinates in homogeneous coordinates, and P_c is the three-dimensional coordinates converted into the camera coordinate system.
Next, for convenience of calculation, assuming that the origins of the camera coordinate system and the drone body coordinate system coincide, the extrinsic parameters between the two coordinate systems can be used to convert the camera-frame coordinates P_c into the drone body frame P_B, as in Equation 28: P_B = R_BC P_c.
Here R_BC is the three-dimensional rotation matrix from the camera coordinate system to the drone body coordinate system, which can be computed as in Equation 29.
Finally, Equations 27 and 28 together yield Equation 30: P_B = R_BC K^{-1} P_u.
Here K is the intrinsic matrix of the camera, P_u is the three-dimensional vector expressing the two-dimensional pixel coordinates in homogeneous coordinates, and R_BC is the three-dimensional rotation matrix from the camera coordinate system to the drone body coordinate system.
Figure 8 is a block diagram illustrating a ground target tracking apparatus using a camera-equipped drone according to an embodiment of the present invention.
Referring to Figure 8, the ground target tracking apparatus using a camera-equipped drone according to an embodiment of the present invention includes a check unit 210, a yaw-angle calculation unit 220, a horizontal alignment unit 230, a distance calculation unit 240, and a vertical alignment unit 250. Although the apparatus is described here as composed of these units, the division is logical, made for convenience of explanation, and not a hardware division. In this embodiment, the camera may be a fixed monocular camera, a gimbal camera, or the like.
The check unit 210 checks whether the center pixel of the ground target is located on the image plane.
As one example, the center pixel may be checked as located on the image plane when the user selects the center pixel of the ground target displayed on the image plane.
As another example, the center pixel may be checked as located on the image plane when an in-video tracking algorithm places the center pixel of the ground target there. Specifically, once the user designates the ground target to be tracked in the video as a region of interest (ROI), an existing in-video target tracking algorithm (MOSSE, CSRT, etc.) analyzes the video frame by frame and continuously tracks the ground target within the video. When the camera-equipped drone is stationary and the object designated as the ROI moves, the tracking algorithm follows the ground target designated as the ROI within the video. While such a tracking algorithm is running, the center pixel of the moving ground target can be derived automatically, so continuity of drone-based tracking can be maintained. That is, the drone is moved so that the ground target stays at the center of the image, securing and maintaining the field of view.
As yet another example, the center pixel may be checked as located on the image plane when artificial intelligence recognizes the ground target in every frame and the center pixel of the target on the image plane is selected automatically.
When the center pixel of the target is checked as located on the image plane, the yaw-angle calculation unit 220 calculates the yaw angle for the yaw rotation.
Specifically, if the coordinates obtained by converting the pixel coordinates on the image plane into the drone body coordinate system are defined as P_B = (x_B, y_B, z_B)^T, the yaw-angle calculation unit 220 can compute the required yaw rotation angle ψ as in Equation 31: ψ = atan2(y_B, x_B).
Here x_B is the x-coordinate value obtained by converting the pixel coordinates on the image plane into the drone body coordinate system, and y_B is the corresponding y-coordinate value.
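Equation 31 amounts to taking the bearing of the converted pixel in the body X_B Y_B plane; the sketch below uses atan2 to keep the correct quadrant, which is an implementation choice rather than a detail stated in the text.

```python
import math

def yaw_to_center(x_b, y_b):
    """Equation 31: yaw rotation that brings the selected pixel to the horizontal center."""
    return math.atan2(y_b, x_b)

print(math.degrees(yaw_to_center(0.8, -0.3)))  # negative: rotate counter-clockwise (yaw left)
```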
The horizontal alignment unit 230 aligns in the horizontal direction through a yaw rotation by the yaw angle computed by the yaw-angle calculation unit 220.
Figure 9 is a diagram illustrating the state before the selected pixel is horizontally centered on the image plane.
When the pixel selected in Figure 9 is brought to the horizontal center of the image plane by yawing, the positional relationship shown in Figure 10 holds while the drone hovers.
Figure 10 is a diagram illustrating the state after the selected pixel is horizontally centered on the image plane.
Referring again to Figure 8, the distance calculation unit 240 calculates the distance for the drone's movement.
Specifically, using the coordinates P_B = (x_B, y_B, z_B) obtained by converting the pixel coordinates into the drone body coordinate system and the height h_c of the camera origin above the ground, the distance calculation unit 240 can compute the distance d_T from the drone's altitude reference point H to the ground target T as in Equation 32: d_T = h_c · sqrt(x_B^2 + y_B^2) / z_B.
Here x_B is the x-coordinate value obtained by converting the pixel coordinates on the image plane into the drone body coordinate system, and y_B is the corresponding y-coordinate value.
The distance calculation unit 240 also computes the distance d_O between the drone's altitude reference point H and the point O where the camera optical axis meets the ground, as in Equation 33: d_O = h_c / tan(θ).
Here h_c is the height of the camera origin above the ground, and θ is the tilt angle toward the ground relative to the X_B axis of the drone body coordinate system.
Then, based on Equations 32 and 33, the distance calculation unit 240 computes the distance d that the drone must move to center the selected pixel vertically, as in Equation 34: d = d_T - d_O.
Here d_T is the distance from the drone's altitude reference point H to the ground target T, d_O is the distance between the altitude reference point H and the point O where the camera optical axis meets the ground, h_c is the height of the camera origin above the ground, x_B, y_B, and z_B are the coordinate values obtained by converting the pixel coordinates on the image plane into the drone body coordinate system, and θ is the tilt angle toward the ground relative to the X_B axis of the drone body coordinate system.
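Combining Equations 32 to 34 as reconstructed above: d_T locates the target ahead of the altitude reference point H, d_O locates the point O where the optical axis meets the ground, and their difference is the required forward (positive) or backward (negative) move. The inputs below are illustrative assumptions.

```python
import math

def move_distance(P_B, h_c, theta):
    """Equations 32-34 (as reconstructed): forward/backward move to center the pixel vertically."""
    x_b, y_b, z_b = P_B
    d_t = h_c * math.hypot(x_b, y_b) / z_b   # Equation 32: H -> ground target T
    d_o = h_c / math.tan(theta)              # Equation 33: H -> optical-axis ground point O
    return d_t - d_o                         # Equation 34: > 0 move forward, < 0 move backward

print(move_distance((0.9, 0.0, 0.7), h_c=12.0, theta=math.radians(35.0)))
```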
수직 방향 정렬부(250)는 거리 계산부(240)에 의해 계산된 거리만큼 전진 또는 후진 이동하여 수직 방향으로 정렬한다. The vertical alignment unit 250 aligns in the vertical direction by moving forward or backward by the distance calculated by the distance calculation unit 240.
이상에서 설명된 바와 같이, 본 발명에 따르면 지상 목표물의 중심 픽셀을 이미지 평면 상에서 선택하면, 계산된 요잉 각도만큼 요잉 회전하여 수평 방향을 맞추고, 계산된 거리만큼 전진 또는 후진 이동하여 수직 방향을 맞추어 지상 목표물로 선택된 픽셀을 이미지 평면의 중심에 위치시켜 지상 목표물을 추적한다. As described above, according to the present invention, when the center pixel of a ground target is selected on the image plane, it rotates by yawing by the calculated yawing angle to adjust the horizontal direction, and moves forward or backward by the calculated distance to adjust the vertical direction to the ground target. The ground target is tracked by placing the pixel selected as the target at the center of the image plane.
도 11은 본 발명의 다른 실시예에 따른 카메라가 탑재된 드론을 이용한 지상 표적 추적 방법을 설명하기 위한 흐름도이다. Figure 11 is a flowchart illustrating a ground target tracking method using a drone equipped with a camera according to another embodiment of the present invention.
도 11을 참조하면, 목표물의 중심 픽셀이 선택되었는지의 여부를 체크한다(단계 S610). 상기한 중심 픽셀의 선택 여부에 대한 체크는 도 1에 도시된 체크부(210)에 의해 수행될 수 있다. 상기한 중심 픽셀은 사용자의 조작에 따라 선택될 수도 있고, 객체 선택을 위해 프로그래밍된 객체 검지 프로그램을 통해 선택될 수도 있다.Referring to FIG. 11, it is checked whether the center pixel of the target has been selected (step S610). Checking whether the center pixel is selected may be performed by the check unit 210 shown in FIG. 1. The above-mentioned center pixel may be selected according to the user's manipulation, or may be selected through an object detection program programmed for object selection.
If the center pixel is determined to have been selected in step S610, a yaw angle is calculated for the yaw rotation (step S620). This calculation may be performed by the yaw angle calculation unit 220 shown in FIG. 8.
The horizontal direction is then aligned by yawing by the yaw angle calculated in step S620 (step S630). This horizontal alignment may be performed by the horizontal alignment unit 230 shown in FIG. 8. Although steps S620 and S630 are described here as separate, they may be performed simultaneously.
A distance is calculated for the drone's movement (step S640). This calculation may be performed by the distance calculation unit 240 shown in FIG. 8.
The vertical direction is aligned by moving forward or backward by the distance calculated in step S640 (step S650). This vertical alignment may be performed by the vertical alignment unit 250 shown in FIG. 8. Although steps S640 and S650 are described here as separate, they may be performed simultaneously.
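Putting steps S610 through S650 together, a single iteration of the tracking loop might look like the sketch below; `drone.select_center_pixel`, `drone.pixel_to_body`, `drone.yaw_by`, and `drone.move_forward` are hypothetical interfaces standing in for the actual flight-control and image-processing API, and the geometry helpers are the ones sketched earlier after Equation 34.

```python
def track_ground_target(drone, h: float, theta: float) -> None:
    """One iteration of the tracking loop of FIG. 11 (steps S610-S650)."""
    pixel = drone.select_center_pixel()          # S610: user click or detector
    if pixel is None:
        return                                   # no target selected yet
    x_b, y_b, z_b = drone.pixel_to_body(pixel)   # image plane -> body frame
    drone.yaw_by(yaw_angle(x_b, y_b))            # S620-S630: horizontal align
    d = forward_move_distance(h, theta, x_b, z_b)  # S640: move distance
    drone.move_forward(d)                        # S650: vertical align
```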
Experimental verification results are described below.
The ground target pixel tracking technique according to the present invention was verified in a simulation environment. A drone carrying a camera mounted at a given tilt angle was deployed at a given altitude to perform ground target tracking.
FIGS. 12 to 14 are images illustrating the ground target pixel tracking simulation results. In particular, FIG. 12 shows the selection of a ground target to be tracked, FIG. 13 shows the scene after target tracking, and FIG. 14 is a graph showing the error between the target position and the camera center point in the world coordinate system after target tracking.
When the center pixel of the target marker placed on the ground is selected on the image plane as shown in FIG. 12, the drone yaws by the angle calculated by Equation 31 to align the horizontal direction and moves forward or backward by the distance calculated by Equation 34 to align the vertical direction, thereby tracking the ground target. That is, by the tracking technique according to the present invention, the selected pixel is placed at the center of the image plane as shown in FIG. 13.
As a result of repeating the same target tracking mission five times under the conditions described above, the target could be tracked with an average error of 13.5 pixels (e.g., the pixel distance between the center point of the final image plane and the center pixel point of the target marker) and a real-distance error of 0.62 m, as shown in FIG. 14.
As described above, according to the present invention, a camera is mounted on a drone, and a ground target can be tracked by controlling the drone so that the pixel corresponding to the ground target on the image plane is located at the center of the image plane.
In addition, because only a camera is used, the system can also be mounted on small drones, and the computational load can be reduced through a relatively simple algorithm.
The disclosed content is merely an example and may be modified and implemented in various ways by those of ordinary skill in the art without departing from the gist of the claims; therefore, the scope of protection of the disclosure is not limited to the specific embodiments described above.
Claims (18)
- 1. A device for estimating GPS coordinates of a target based on camera image information of an unmanned aerial vehicle, comprising: a detection unit that detects a target corresponding to a mission set for the unmanned aerial vehicle through a pre-trained artificial intelligence deep learning model and generates a bounding box containing the detected target; a setting unit that sets a reference pixel, which is a target point expressed as a pixel on a camera image plane; a calculation unit that calculates reference pixel coordinates of the target in a two-dimensional pixel coordinate system; and a conversion unit that converts the reference pixel coordinates of the target into three-dimensional coordinates using at least one of altitude information of the unmanned aerial vehicle and depth information of the target, and converts the coordinate system of the converted three-dimensional coordinates to obtain GPS coordinates of the target.
- 2. The device of claim 1, wherein the target includes a ground target and an aerial target, and the conversion unit converts the reference pixel coordinates of the target into three-dimensional coordinates by using the altitude information of the unmanned aerial vehicle as the depth information of the ground target.
- 3. The device of claim 1, wherein the conversion unit acquires depth information of the aerial target through a stereo camera or a depth camera that measures depth, and converts the reference pixel coordinates of the target into three-dimensional coordinates.
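To illustrate claims 1 to 3, the sketch below derives a reference pixel from a detector's bounding box; the `boxes` input is a hypothetical output of whatever pre-trained deep-learning detector is deployed (the claims do not fix a specific model), and taking the box center is one plausible choice of reference pixel rather than the patent's prescribed one.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def reference_pixel(boxes: List[Box]) -> Tuple[float, float]:
    """Pick the reference pixel for the first detected target: here simply
    the bounding-box center, one natural 'target point expressed as a
    pixel on the camera image plane'."""
    x_min, y_min, x_max, y_max = boxes[0]
    return (0.5 * (x_min + x_max), 0.5 * (y_min + y_max))
```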
- 4. The device of claim 2 or 3, wherein the conversion unit calculates, through a camera model of the unmanned aerial vehicle, a three-dimensional position in the camera coordinate system for the reference pixel coordinates of the target, which are two-dimensional coordinates, and converts the reference pixel coordinates of the target into three-dimensional coordinates in the body coordinate system of the unmanned aerial vehicle using extrinsic parameter values between the camera and the center of gravity (CG) of the unmanned aerial vehicle.
- 5. The device of claim 4, wherein the conversion unit converts the three-dimensional coordinates of the target expressed in the body coordinate system of the unmanned aerial vehicle into an Earth-Centered Earth-Fixed (ECEF) coordinate system using the current GPS coordinates of the unmanned aerial vehicle, and then converts them into a Latitude-Longitude-Altitude (LLA) coordinate system to obtain the GPS coordinates of the target.
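To make the pipeline of claims 4 and 5 concrete, the sketch below converts a reference pixel with known depth into an LLA estimate. It assumes a pinhole intrinsic matrix K, camera-to-body extrinsics (R_cb, t_cb), a body-to-local-ENU rotation R_be derived from the vehicle attitude, and standard WGS-84 conversions; every symbol is illustrative rather than the patent's own notation, and angles are in radians.

```python
import numpy as np

# WGS-84 constants
A = 6378137.0                # semi-major axis (m)
E2 = 6.69437999014e-3        # first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """LLA (rad, rad, m) -> ECEF (m)."""
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    x = (n + alt) * np.cos(lat) * np.cos(lon)
    y = (n + alt) * np.cos(lat) * np.sin(lon)
    z = (n * (1 - E2) + alt) * np.sin(lat)
    return np.array([x, y, z])

def ecef_to_geodetic(p, iters=5):
    """ECEF (m) -> LLA (rad, rad, m) by simple fixed-point iteration."""
    x, y, z = p
    lon = np.arctan2(y, x)
    r = np.hypot(x, y)
    lat = np.arctan2(z, r * (1 - E2))          # initial guess
    for _ in range(iters):
        n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
        alt = r / np.cos(lat) - n
        lat = np.arctan2(z, r * (1 - E2 * n / (n + alt)))
    return lat, lon, alt

def pixel_to_lla(u, v, depth, K, R_cb, t_cb, R_be, uav_lla):
    """Reference pixel (u, v) with depth -> target LLA (claims 4-5 sketch)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    p_cam = depth * ray                        # 3-D point, camera frame
    p_body = R_cb @ p_cam + t_cb               # camera -> UAV body frame
    p_enu = R_be @ p_body                      # body -> local ENU (attitude)
    # Local ENU offset -> ECEF, anchored at the UAV's current GPS fix
    lat0, lon0, _ = uav_lla
    sl, cl = np.sin(lat0), np.cos(lat0)
    so, co = np.sin(lon0), np.cos(lon0)
    enu_to_ecef = np.array([[-so, -sl * co, cl * co],
                            [ co, -sl * so, cl * so],
                            [0.0,       cl,      sl]])
    p_ecef = geodetic_to_ecef(*uav_lla) + enu_to_ecef @ p_enu
    return ecef_to_geodetic(p_ecef)
```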
- 6. A method of estimating GPS coordinates of a target based on camera image information of an unmanned aerial vehicle, comprising: (A) detecting, by a detection unit, a target corresponding to a mission set for the unmanned aerial vehicle through a pre-trained artificial intelligence deep learning model, and generating a bounding box containing the detected target according to the size of the target; (B) setting, by a setting unit, a reference pixel, which is a target point expressed as a pixel on a camera image plane; (C) calculating, by a calculation unit, reference pixel coordinates of the target in a two-dimensional pixel coordinate system; and (D) converting, by a conversion unit, the reference pixel coordinates of the target into three-dimensional coordinates using at least one of altitude information of the unmanned aerial vehicle and depth information of the target, and converting the coordinate system of the converted three-dimensional coordinates to obtain GPS coordinates of the target.
- 7. The method of claim 6, wherein the target includes a ground target and an aerial target, and step (D) converts the reference pixel coordinates of the target into three-dimensional coordinates by using the altitude information of the unmanned aerial vehicle as the depth information of the ground target.
- 8. The method of claim 6, wherein step (D) acquires depth information of the aerial target through a stereo camera or a depth camera that measures depth, and converts the reference pixel coordinates of the target into three-dimensional coordinates.
- 9. The method of claim 7 or 8, wherein step (D) calculates, through a camera model of the unmanned aerial vehicle, a three-dimensional position in the camera coordinate system for the reference pixel coordinates of the target, which are two-dimensional coordinates, and converts the reference pixel coordinates of the target into three-dimensional coordinates in the body coordinate system of the unmanned aerial vehicle using extrinsic parameter values between the camera and the center of gravity (CG) of the unmanned aerial vehicle.
- 10. The method of claim 9, wherein step (D) converts the three-dimensional coordinates of the target expressed in the body coordinate system of the unmanned aerial vehicle into an Earth-Centered Earth-Fixed (ECEF) coordinate system using the current GPS coordinates of the unmanned aerial vehicle, and then converts them into a Latitude-Longitude-Altitude (LLA) coordinate system to obtain the GPS coordinates of the target.
- 11. A ground target tracking method using a camera-equipped drone, comprising: checking whether the center pixel of a ground target photographed using a camera mounted on the drone is located on an image plane; calculating a yaw angle for a yaw rotation when the center pixel of the ground target is checked to be located on the image plane; aligning in the horizontal direction through a yaw rotation by the calculated yaw angle; calculating a distance for movement of the drone; and aligning in the vertical direction by moving forward or backward by the calculated distance.
- 12. The method of claim 11, wherein the yaw angle is calculated as a function of x_b and y_b, where x_b is the x-coordinate value obtained by converting a pixel coordinate on the image plane into the drone body coordinate system, and y_b is the y-coordinate value obtained by converting the pixel coordinate on the image plane into the drone body coordinate system.
- 13. The method of claim 11, wherein the distance is calculated as a function of HO, h, x_b, y_b, z_b, and θ, where HO is the distance between the altitude reference point H of the drone and the point O where the camera optical axis meets the ground, h is the height of the camera origin above the ground, x_b, y_b, and z_b are the x-, y-, and z-coordinate values obtained by converting a pixel coordinate on the image plane into the drone body coordinate system, and θ is the tilt angle toward the ground about an axis of the drone body coordinate system.
- 14. The method of claim 11, wherein the aligning in the horizontal direction comprises rotating the heading angle of the drone by yawing in place so that the pixel selected as the ground target is located at the horizontal center of the image plane.
- 15. The method of claim 11, wherein the aligning in the vertical direction comprises moving the drone forward or backward to adjust the vertical direction so that the selected pixel is located at the center of the image plane.
- 16. A ground target tracking device using a camera-equipped drone, comprising: a check unit that checks whether the center pixel of a ground target photographed using a camera mounted on the drone is located on an image plane; a yaw angle calculation unit that calculates a yaw angle for a yaw rotation when the center pixel of the ground target is checked to be located on the image plane; a horizontal alignment unit that aligns in the horizontal direction through a yaw rotation by the calculated yaw angle; a distance calculation unit that calculates a distance for movement of the drone; and a vertical alignment unit that aligns in the vertical direction by moving forward or backward by the calculated distance.
- 17. The device of claim 16, wherein the yaw angle calculation unit calculates the yaw angle as a function of x_b and y_b, where x_b is the x-coordinate value obtained by converting a pixel coordinate on the image plane into the drone body coordinate system, and y_b is the y-coordinate value obtained by converting the pixel coordinate on the image plane into the drone body coordinate system.
- 18. The device of claim 16, wherein the distance calculation unit calculates the distance as a function of HO, h, x_b, y_b, z_b, and θ, where HO is the distance between the altitude reference point H of the drone and the point O where the camera optical axis meets the ground, h is the height of the camera origin above the ground, x_b, y_b, and z_b are the x-, y-, and z-coordinate values obtained by converting a pixel coordinate on the image plane into the drone body coordinate system, and θ is the tilt angle toward the ground about an axis of the drone body coordinate system.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020220146370A KR20240064392A (en) | 2022-11-04 | 2022-11-04 | Method and apparatus for tracking a ground target using a drone equipped with a camera |
KR10-2022-0146370 | 2022-11-04 | ||
KR10-2023-0025272 | 2023-02-24 | ||
KR1020230025272A KR20240131787A (en) | 2023-02-24 | 2023-02-24 | device and method for estimating the GPS coordinates of multiple targets based on camera image information of an unmanned aerial vehicle in an uncooperative environment |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024096691A1 true WO2024096691A1 (en) | 2024-05-10 |
Family
ID=90931060
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2023/017562 WO2024096691A1 (en) | 2022-11-04 | 2023-11-03 | Method and device for estimating gps coordinates of multiple target objects and tracking target objects on basis of camera image information about unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024096691A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102001728B1 (en) * | 2017-11-07 | 2019-07-18 | 공간정보기술 주식회사 | Method and system for acquiring three dimentional position coordinates in non-control points using stereo camera drone |
KR102278467B1 (en) * | 2019-04-29 | 2021-07-19 | 주식회사 에프엠웍스 | Method and apparatus of real-time tracking a position using drones, traking a position system including the apparatus |
KR20210105345A (en) * | 2018-11-21 | 2021-08-26 | 광저우 엑스에어크래프트 테크놀로지 씨오 엘티디 | Surveying and mapping methods, devices and instruments |
KR102307584B1 (en) * | 2021-03-31 | 2021-09-30 | 세종대학교산학협력단 | System for autonomous landing control of unmanned aerial vehicle |
KR102371766B1 (en) * | 2021-09-29 | 2022-03-07 | (주)네온테크 | Data processing device for unmanned aerial vehicle for flight mission big data analysis and AI processing and data processing method for flight mission using the same |
KR102461405B1 (en) * | 2021-12-10 | 2022-10-28 | (주)프리뉴 | Drone and drone control methods that enable autonomous flight using spatial analysis |
- 2023-11-03: PCT application PCT/KR2023/017562 filed, published as WO2024096691A1 (en); status unknown
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019093532A1 (en) | Method and system for acquiring three-dimensional position coordinates without ground control points by using stereo camera drone | |
WO2020071839A1 (en) | Ship and harbor monitoring device and method | |
WO2020004817A1 (en) | Apparatus and method for detecting lane information, and computer-readable recording medium storing computer program programmed to execute same method | |
WO2016074169A1 (en) | Target detecting method, detecting device, and robot | |
WO2017091008A1 (en) | Mobile robot and control method therefor | |
WO2014104574A1 (en) | Method for calibrating absolute misalignment between linear array image sensor and attitude control sensor | |
WO2016106715A1 (en) | Selective processing of sensor data | |
WO2017096547A1 (en) | Systems and methods for uav flight control | |
WO2015194868A1 (en) | Device for controlling driving of mobile robot having wide-angle cameras mounted thereon, and method therefor | |
WO2016041110A1 (en) | Flight control method of aircrafts and device related thereto | |
WO2021125395A1 (en) | Method for determining specific area for optical navigation on basis of artificial neural network, on-board map generation device, and method for determining direction of lander | |
WO2019135437A1 (en) | Guide robot and operation method thereof | |
WO2020159076A1 (en) | Landmark location estimation apparatus and method, and computer-readable recording medium storing computer program programmed to perform method | |
WO2021221344A1 (en) | Apparatus and method for recognizing environment of mobile robot in environment with slope, recording medium in which program for implementing same is stored, and computer program for implementing same stored in medium | |
WO2017007166A1 (en) | Projected image generation method and device, and method for mapping image pixels and depth values | |
JP2018004420A (en) | Device, mobile body device, positional deviation detecting method, and distance measuring method | |
WO2021158062A1 (en) | Position recognition method and position recognition system for vehicle | |
WO2020138760A1 (en) | Electronic device and control method thereof | |
WO2019199112A1 (en) | Autonomous work system and method, and computer-readable recording medium | |
WO2020101156A1 (en) | Orthoimage-based geometric correction system for mobile platform having mounted sensor | |
WO2021040214A1 (en) | Mobile robot and method for controlling same | |
WO2020075954A1 (en) | Positioning system and method using combination of results of multimodal sensor-based location recognition | |
WO2023008791A1 (en) | Method for acquiring distance to at least one object located in any direction of moving object by performing proximity sensing, and image processing device using same | |
WO2020071619A1 (en) | Apparatus and method for updating detailed map | |
WO2023022305A1 (en) | Indoor positioning apparatus and method for pedestrians |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23886385; Country of ref document: EP; Kind code of ref document: A1 |