CN112686963B - Target positioning method for an aerial work robot handling occlusion - Google Patents


Info

Publication number
CN112686963B
Authority
CN
China
Prior art keywords: RGB, camera, point, map, target
Prior art date: 2021-01-22
Legal status: Active
Application number
CN202110088423.3A
Other languages: Chinese (zh)
Other versions: CN112686963A (en)
Inventors
张啸宇 (Zhang Xiaoyu)
赵世钰 (Zhao Shiyu)
曹华姿 (Cao Huazi)
Current Assignee
Westlake University
Original Assignee
Westlake University
Priority date: 2021-01-22
Filing date: 2021-01-22
Publication date: 2024-03-29
Application filed by Westlake University
Priority to CN202110088423.3A
Publication of CN112686963A
Application granted
Publication of CN112686963B
Status: Active


Landscapes

  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a target positioning method for an aerial work robot that handles occlusion, belonging to the field of unmanned aerial vehicles and their visual perception. Based on an RGB-D camera, the method extracts feature points of the work target from the RGB image and, combined with the corresponding depth image, constructs a three-dimensional point-feature map of the work area, so that the position of the work target is stored in the three-dimensional map. As the unmanned aerial vehicle moves, point features in the three-dimensional map are matched against feature points in newly captured RGB images, so the three-dimensional coordinates of the target position point can be obtained even from viewpoints whose RGB images do not contain the target, while the target position is fed back to the UAV control system in real time. The invention solves the problem of the work target moving out of the field of view or being occluded by the operating mechanism while the aerial work robot moves or executes a work task.

Description

Target positioning method for an aerial work robot handling occlusion
Technical Field
The invention belongs to the field of unmanned aerial vehicles and their visual perception, and in particular relates to a target positioning method for an aerial work robot that handles occlusion.
Background
Unmanned aerial vehicle (UAV) systems have developed rapidly worldwide in recent years and are widely applied in fields such as power line inspection, agricultural plant protection and environmental monitoring. Despite this rapid development, most application scenarios remain limited to the "seeing" stage; an aerial work robot equipped with a robotic arm extends the UAV's operational capability so that it can make the leap from "seeing" to "doing".
Positioning the work target is one of the key technologies that allow an aerial work robot to complete a work task autonomously. Traditional UAV target positioning methods use sensors such as lidar or vision cameras to calculate the target position; such methods require the work target to remain within the detection range of the sensor. On an aerial work robot, however, the mounting position of the sensor is constrained by the robotic arm, and the work target is easily occluded by the arm while a work task is being executed.
Disclosure of Invention
To solve the problem of the target positioning sensor being occluded by the robotic arm of an aerial work robot, the invention provides a target positioning method for an aerial work robot that handles occlusion, based on an RGB-D depth camera mounted on the UAV body of the aerial work robot.
The aim of the invention is achieved by the following technical scheme:
a target positioning method of an aerial working robot for shielding is provided, wherein an RGB-D depth camera is arranged on a body of the aerial working robot, and when a working task is not executed, the RGB-D depth camera can directly observe a working target and cannot be shielded by an unmanned aerial vehicle body; before the unmanned aerial vehicle takes off, calibrating internal parameters of the RGB-D depth camera, wherein the internal parameters comprise the internal parameters of the RGB camera, the internal parameters of the depth camera and the position parameters between the RGB camera and the depth camera; shooting from different positions in the flight of the unmanned aerial vehicle to obtain an RGB image;
the target positioning method specifically comprises the following steps:
S1: marking the work target in the 1st RGB image containing the target captured by the RGB-D camera;
S2: extracting feature points from the 1st RGB image and constructing a three-dimensional point-feature map in combination with the corresponding depth image, points in the three-dimensional point-feature map being called map points;
S3: obtaining a target position point representing the target from the target marked in S1, and calculating the correspondence between the target position point and the map points;
S4: as the aerial work robot moves, the RGB-D camera captures images at different positions; extracting feature points from the RGB images captured at these positions and matching them, based on their descriptors, against the map points in the three-dimensional point-feature map constructed in S2;
S5: for the image feature points successfully matched in S4, calculating their three-dimensional coordinates in the camera coordinate system of the corresponding RGB image in combination with the corresponding depth map;
S6: from the three-dimensional coordinates obtained in S5 and the correspondence between the target position point and the map points obtained in S3, calculating the three-dimensional coordinates of the target position point in the camera coordinate system of the current RGB image.
Further, in S1 the work target is marked with a rectangular box, either manually or by an object recognition technique.
Further, the feature points in S2 are ORB feature points.
Further, constructing the three-dimensional point-feature map in S2 specifically comprises: according to the pinhole camera model, calculating the coordinates of the map points corresponding to the feature points in the camera coordinate system of the 1st RGB image in S1 using the following formula, and defining the world coordinate system to coincide with that camera coordinate system, thereby constructing the three-dimensional point-feature map:

$$P_i^{c_1} = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = d_i\, K^{-1} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad P_i^{w} = P_i^{c_1}$$

where $P_i^{c_1}$ denotes the three-dimensional coordinates of the $i$-th map point in the camera coordinate system of the 1st RGB image; $P_i^{w}$ denotes its coordinates in the world coordinate system, which by definition equal $P_i^{c_1}$; $d_i$ denotes the depth value of the $i$-th map point in the camera coordinate system of the 1st RGB image, obtained from the depth image of the RGB-D camera; $f_x$, $f_y$, $c_x$, $c_y$ denote the focal lengths and principal-point offsets among the intrinsic parameters of the RGB camera obtained in calibration; $K$ denotes the intrinsic parameter matrix of the RGB camera obtained in calibration; and $(u_i, v_i)$ denotes the pixel coordinates of the $i$-th map point in the image coordinate system of the 1st RGB image.
Further, the target position point is the spatial point corresponding to the geometric center of the rectangular box marking the target.
Further, the matching process of S4 is specifically as follows:
S4.1: first, the map points are converted from the world coordinate system into the camera coordinate system:

$$P_i^{c_k} = \left(T_{c_k}^{w}\right)^{-1} P_i^{w}$$

where $T_{c_k}^{w}$ is the pose, in the world coordinate system, of the RGB-D camera when capturing the $k$-th RGB image (the points being taken in homogeneous coordinates);
S4.2: based on the pinhole camera model, the projection coordinates of $P_i^{c_k} = [x_i^{c_k}, y_i^{c_k}, z_i^{c_k}]^T$ in the image coordinate system are calculated:

$$\begin{bmatrix} u_i' \\ v_i' \\ 1 \end{bmatrix} = \frac{1}{z_i^{c_k}}\, K\, P_i^{c_k}$$

S4.3: if the projection point lies within the bounds of the corresponding RGB image, a search box is drawn centered on the projection point and searched for feature points matching the $i$-th map point; the position of a successfully matched feature point in the image coordinate system is recorded as $(u_i^k, v_i^k)$.
Further, the correspondence in S3 between the target position point and the map points is the distance between the target position point and each map point; the specific method in S6 for calculating the three-dimensional coordinates of the target position point in the camera coordinate system at the time the $k$-th RGB image is captured is as follows:
(1) Establish the following constraint relation:

$$\left\| P_t^{c_k} - P_i^{c_k} \right\| = d_i$$

where $P_t^{c_k}$ denotes the three-dimensional coordinates of the target position point in the camera coordinate system of the $k$-th RGB image, and $d_i$ denotes the distance between the $i$-th map point and the target position point;
(2) Construct a system of such constraint equations from the constraint relation and solve it to obtain $P_t^{c_k}$, where $m$ is the number of constraint equations and $m \geq 3$.
the beneficial effects of the invention are as follows:
according to the invention, the feature point map of the target area is established by extracting the visual feature points of the target area. The unmanned aerial vehicle observes the target at a plurality of positions, so that the positions of the target feature points can be optimized, and the target positioning accuracy is further improved. Even if the condition that the target is blocked by the mechanical arm and the like occurs in the process of executing the job task, the constructed characteristic map can still continuously feed back the target position.
Drawings
FIG. 1 is a flow chart of the target positioning method for an aerial work robot handling occlusion according to the present invention;
FIG. 2 is a schematic illustration of computing map points in three-dimensional space from feature points in an image based on a pinhole camera model;
FIG. 3 is a schematic diagram of the process of matching map points with feature points in an image.
Detailed Description
The objects and effects of the present invention will become more apparent from the following detailed description of preferred embodiments and the accompanying drawings; it should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it.
In the target positioning method for an aerial work robot handling occlusion according to the invention, an RGB-D depth camera is mounted on the UAV body such that, when no work task is being executed, it can directly observe the work target without being occluded by the UAV body. Before the UAV takes off, the parameters of the RGB-D depth camera must be calibrated, including the intrinsic parameters of the RGB camera and of the depth camera, and the relative pose between the two. During flight the UAV captures images containing the target object from different positions; the work target may become occluded while a work task is being executed.
As shown in FIG. 1, the target positioning method of the present invention specifically includes the following steps:
s1: the job target is marked in the 1 st RGB diagram containing the target photographed by the RGB-D camera. As one of the embodiments, rectangular boxes may be used for marking, and may be used for manual marking or may be marked by techniques such as article identification.
S2: and extracting characteristic points of the first RGB image, and constructing a three-dimensional point characteristic map by combining the corresponding depth image. The points in the three-dimensional point feature map are referred to as map points. As one of the embodiments, an ORB feature point may be extracted.
The three-dimensional point-feature map is constructed as follows: according to the pinhole camera model, the coordinates of the map points corresponding to the feature points in the camera coordinate system of the 1st RGB image in S1 are calculated with the formula below, and the world coordinate system is defined to coincide with that camera coordinate system, thereby constructing the three-dimensional point-feature map:

$$P_i^{c_1} = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = d_i\, K^{-1} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad P_i^{w} = P_i^{c_1}$$

where $P_i^{c_1}$ denotes the three-dimensional coordinates of the $i$-th map point in the camera coordinate system of the 1st RGB image; $P_i^{w}$ denotes its coordinates in the world coordinate system, which by definition equal $P_i^{c_1}$; $d_i$ denotes the depth value of the $i$-th map point in the camera coordinate system of the 1st RGB image, obtained from the depth image of the RGB-D camera; $f_x$, $f_y$, $c_x$, $c_y$ denote the focal lengths and principal-point offsets among the intrinsic parameters of the RGB camera obtained in calibration; $K$ denotes the intrinsic parameter matrix of the RGB camera obtained in calibration; and $(u_i, v_i)$ denotes the pixel coordinates of the $i$-th map point in the image coordinate system of the 1st RGB image. Map points, feature points, the camera coordinate system and the imaging plane are shown in FIG. 2. A minimal back-projection sketch follows.
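A minimal sketch of this back-projection, assuming the depth image is registered to the RGB image and a depth_scale factor converts raw depth units to meters (both assumptions, not specified by the patent):

```python
# Sketch of the S2 back-projection (pinhole model). Assumes a depth image
# registered to the RGB image; fx, fy, cx, cy come from camera calibration.
import numpy as np

def backproject(u, v, d, fx, fy, cx, cy):
    """Pixel (u, v) with depth d -> 3D point in the camera frame of the
    1st RGB image, which by definition is also the world frame."""
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return np.array([x, y, d])

# Hypothetical usage for the i-th feature point:
#   u, v = keypoints[i].pt
#   d = depth_image[int(v), int(u)] * depth_scale   # depth_scale is an assumption
#   map_points_w.append(backproject(u, v, d, fx, fy, cx, cy))
```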
S3: and obtaining a target position point representing the target according to the target marked by the S1, and calculating the corresponding relation between the target position point and the map point. As one of the embodiments, the target location point may be a spatial point corresponding to the geometric center of the rectangular frame in S1; as one of the embodiments, the correspondence relationship between the target position point and the map points may be a distance between the target position point and each map point.
S4: with the movement of the aerial working robot, the RGB-D camera shoots at different positions; extracting characteristic points of RGB (red, green and blue) images shot at different positions, and matching the characteristic points with map points in the three-dimensional point characteristic map constructed in the step S2 based on descriptors thereof, wherein the characteristic points are as follows:
s4.1: map points in the world coordinate system are first converted into the camera coordinate system,
wherein,the pose of the kth RGB image shot by the RGB-D camera under the world coordinate system can be obtained by positioning methods such as visual positioning, laser positioning, a motion capture system, carrier phase difference (RTK) and the like.
S4.2: based on the pinhole camera model, calculate according toProjection coordinates in the image coordinate system:
s4.3: if the projection point is in the corresponding RGB image range, drawing a search box by taking the projection point as the center, searchingThe feature points that match in the image are shown in fig. 3. The position of the successfully matched characteristic point in the image coordinate system is recorded asAs one embodiment, the search box is a square box with a side length of 10 pixel values, with the projection point as the center.
S5: for the successfully matched image feature points in S4.3Calculating three-dimensional coordinates of the depth map in a camera coordinate system corresponding to the kth RGB map by combining the corresponding depth map>
S6: according to the three-dimensional coordinates calculated in S5And S3, the corresponding relation between the target position point and the map point is obtained, and the three-dimensional coordinate of the target position point in the camera coordinate system corresponding to the kth RGB image can be obtained through calculation.
As one embodiment, when the correspondence between the target position point and the map point in S3 is the distance between the target position point and each map point, the calculation method of the three-dimensional coordinates in the camera coordinate system of the target position point in S6 at the time of capturing the kth RGB image is as follows:
(1) Establish the following constraint relation:

$$\left\| P_t^{c_k} - P_i^{c_k} \right\| = d_i$$

where $P_t^{c_k}$ denotes the three-dimensional coordinates of the target position point in the camera coordinate system of the $k$-th RGB image, and $d_i$ denotes the distance between the $i$-th map point and the target position point;
(2) Construct a system of such constraint equations from the matched map points and solve it to obtain $P_t^{c_k}$, where $m$ is the number of constraint equations and $m \geq 3$; a least-squares sketch of this solve follows.
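One possible least-squares realization of this solve, using SciPy; the patent does not prescribe a particular solver, and the centroid initial guess is an assumption:

```python
# Sketch of the S6 solve: recover the target position point from the distance
# constraints ||P_t - P_i|| = d_i by nonlinear least squares (m >= 3 points).
import numpy as np
from scipy.optimize import least_squares

def locate_target(points_ck, dists):
    """points_ck: (m, 3) matched map-point coordinates in the k-th camera frame;
    dists: (m,) stored distances d_i. Returns P_t in the k-th camera frame."""
    points_ck = np.asarray(points_ck, dtype=float)
    dists = np.asarray(dists, dtype=float)

    def residuals(P_t):
        return np.linalg.norm(points_ck - P_t, axis=1) - dists

    x0 = points_ck.mean(axis=0)        # initial guess: centroid of matched points
    return least_squares(residuals, x0).x

# e.g. P_t_ck = locate_target(matched_points_ck, matched_dists)
```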
and S4-S6 runs in real time along with the movement of the aerial working robot, and the relative positioning of the target position point and the aerial working robot is calculated in real time. Since the target position point can be calculated from other feature points in the surrounding environment, the relative position of the job target can be calculated in the image even if the target position point is blocked when executing the job task.
It will be appreciated by persons skilled in the art that the foregoing description is a preferred embodiment of the invention, and is not intended to limit the invention, but rather to limit the invention to the specific embodiments described, and that modifications may be made to the technical solutions described in the foregoing embodiments, or equivalents may be substituted for elements thereof, for the purposes of those skilled in the art. Modifications, equivalents, and alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (6)

1. A target positioning method for an aerial work robot handling occlusion, characterized in that an RGB-D depth camera is mounted on the body of the aerial work robot such that, when no work task is being executed, the camera can directly observe the work target without being occluded by the UAV body; before the UAV takes off, the parameters of the RGB-D depth camera are calibrated, including the intrinsic parameters of the RGB camera, the intrinsic parameters of the depth camera, and the relative pose between the RGB camera and the depth camera; during flight, the UAV captures RGB images from different positions;
the target positioning method specifically comprises the following steps:
S1: marking the work target in the 1st RGB image containing the target captured by the RGB-D camera;
S2: extracting feature points from the 1st RGB image and constructing a three-dimensional point-feature map in combination with the corresponding depth image, points in the three-dimensional point-feature map being called map points;
S3: obtaining a target position point representing the target from the target marked in S1, and calculating the correspondence between the target position point and the map points;
S4: as the aerial work robot moves, the RGB-D camera captures images at different positions; extracting feature points from the RGB images captured at these positions and matching them, based on their descriptors, against the map points in the three-dimensional point-feature map constructed in S2;
S5: for the image feature points successfully matched in S4, calculating their three-dimensional coordinates in the camera coordinate system of the corresponding RGB image in combination with the corresponding depth map;
S6: from the three-dimensional coordinates obtained in S5 and the correspondence between the target position point and the map points obtained in S3, calculating the three-dimensional coordinates of the target position point in the camera coordinate system of the current RGB image;
the matching process of S4 being specifically as follows:
S4.1: first, the map points are converted from the world coordinate system into the camera coordinate system:

$$P_i^{c_k} = \left(T_{c_k}^{w}\right)^{-1} P_i^{w}$$

where $T_{c_k}^{w}$ is the pose, in the world coordinate system, of the RGB-D camera when capturing the $k$-th RGB image;
S4.2: based on the pinhole camera model, the projection coordinates of $P_i^{c_k}$ in the image coordinate system are calculated:

$$\begin{bmatrix} u_i' \\ v_i' \\ 1 \end{bmatrix} = \frac{1}{z_i^{c_k}}\, K\, P_i^{c_k}$$

S4.3: if the projection point lies within the bounds of the corresponding RGB image, a search box is drawn centered on the projection point and searched for feature points matching the $i$-th map point; the position of a successfully matched feature point in the image coordinate system is recorded as $(u_i^k, v_i^k)$.
2. The target positioning method for an aerial work robot handling occlusion according to claim 1, characterized in that in S1 the work target is marked with a rectangular box, either manually or by an object recognition technique.
3. The target positioning method for an aerial work robot handling occlusion according to claim 1, characterized in that the feature points in S2 are ORB feature points.
4. The target positioning method for an aerial work robot handling occlusion according to claim 2, characterized in that constructing the three-dimensional point-feature map in S2 specifically comprises: according to the pinhole camera model, calculating the coordinates of the map points corresponding to the feature points in the camera coordinate system of the 1st RGB image in S1 using the following formula, and defining the world coordinate system to coincide with that camera coordinate system, thereby constructing the three-dimensional point-feature map:

$$P_i^{c_1} = \begin{bmatrix} x_i \\ y_i \\ z_i \end{bmatrix} = d_i\, K^{-1} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}, \qquad P_i^{w} = P_i^{c_1}$$

where $P_i^{c_1}$ denotes the three-dimensional coordinates of the $i$-th map point in the camera coordinate system of the 1st RGB image; $P_i^{w}$ denotes its coordinates in the world coordinate system, which by definition equal $P_i^{c_1}$; $d_i$ denotes the depth value of the $i$-th map point in the camera coordinate system of the 1st RGB image, obtained from the depth image of the RGB-D camera; $f_x$, $f_y$, $c_x$, $c_y$ denote the focal lengths and principal-point offsets among the intrinsic parameters of the RGB camera obtained in calibration; $K$ denotes the intrinsic parameter matrix of the RGB camera obtained in calibration; and $(u_i, v_i)$ denotes the pixel coordinates of the $i$-th map point in the image coordinate system of the 1st RGB image.
5. The target positioning method for an aerial work robot handling occlusion according to claim 4, characterized in that the target position point is the spatial point corresponding to the geometric center of the rectangular box marking the target.
6. The target positioning method for an aerial work robot handling occlusion according to claim 5, characterized in that the correspondence in S3 between the target position point and the map points is the distance between the target position point and each map point; the specific method in S6 for calculating the three-dimensional coordinates of the target position point in the camera coordinate system at the time the $k$-th RGB image is captured is as follows:
(1) establish the following constraint relation:

$$\left\| P_t^{c_k} - P_i^{c_k} \right\| = d_i$$

where $P_t^{c_k}$ denotes the three-dimensional coordinates of the target position point in the camera coordinate system of the $k$-th RGB image, and $d_i$ denotes the distance between the $i$-th map point and the target position point;
(2) construct a system of such constraint equations from the constraint relation and solve it to obtain $P_t^{c_k}$, where $m$ is the number of constraint equations and $m \geq 3$.
CN202110088423.3A (priority date 2021-01-22, filing date 2021-01-22): Target positioning method for an aerial work robot handling occlusion. Active. Granted as CN112686963B (en).

Priority Applications (1)

Application Number: CN202110088423.3A
Priority Date: 2021-01-22; Filing Date: 2021-01-22
Title: Target positioning method for an aerial work robot handling occlusion


Publications (2)

Publication Number: CN112686963A (en), Publication Date: 2021-04-20
Publication Number: CN112686963B (en), Publication Date: 2024-03-29

Family

ID=75459024

Family Applications (1)

Application Number: CN202110088423.3A (Active, granted as CN112686963B)
Priority Date: 2021-01-22; Filing Date: 2021-01-22
Title: Target positioning method for an aerial work robot handling occlusion

Country Status (1)

Country: CN; Document: CN112686963B (en)

Citations (1)

* Cited by examiner, † Cited by third party
CN109191504A * (priority date 2018-08-01, publication date 2019-01-11), Nanjing University of Aeronautics and Astronautics (南京航空航天大学): A UAV target tracking method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
US10782137B2 * (priority date 2019-01-28, publication date 2020-09-22), Qfeeltech (Beijing) Co., Ltd.: Methods, apparatus, and systems for localization and mapping


Also Published As

CN112686963A (en), Publication Date: 2021-04-20


Legal Events

Code: PB01; Title: Publication
Code: SE01; Title: Entry into force of request for substantive examination
Code: GR01; Title: Patent grant