CN115272452A - Target detection positioning method and device, unmanned aerial vehicle and storage medium
- Publication number
- CN115272452A CN115272452A CN202210772899.3A CN202210772899A CN115272452A CN 115272452 A CN115272452 A CN 115272452A CN 202210772899 A CN202210772899 A CN 202210772899A CN 115272452 A CN115272452 A CN 115272452A
- Authority
- CN
- China
- Prior art keywords
- target
- coordinate system
- unmanned aerial
- aerial vehicle
- laser radar
- Prior art date
- Legal status (assumed, not a legal conclusion): Pending
Classifications
- G06T 7/70: Image analysis; determining position or orientation of objects or cameras
- G06T 7/80: Image analysis; analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T 2207/10028: Image acquisition modality; range image, depth image, 3D point clouds
- G06T 2207/10032: Image acquisition modality; satellite or aerial image, remote sensing
- G06T 2207/10044: Image acquisition modality; radar image
Landscapes
- Engineering & Computer Science
- Computer Vision & Pattern Recognition
- Physics & Mathematics
- General Physics & Mathematics
- Theoretical Computer Science
- Optical Radar Systems And Details Thereof
Abstract
Embodiments of the invention provide a target detection and positioning method and device, an unmanned aerial vehicle, and a storage medium. The method is applied to an unmanned aerial vehicle on which a camera and a laser radar are mounted, and comprises: acquiring image data and point cloud data collected synchronously for a target by the camera and the laser radar; calculating three-dimensional pose information of the target in the laser radar coordinate system from the image data and the point cloud data; and determining target pose information of the target in a geodetic coordinate system from the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle. In this way, the pose of the target in the laser radar coordinate system is determined from fused image and point cloud data and is finally converted into the pose of the target in the geodetic coordinate system. The scheme lowers the scene requirements, improves the accuracy of target detection, and avoids complex processing algorithms.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, and in particular to a target detection and positioning method and device, an unmanned aerial vehicle, and a storage medium.
Background
Unmanned aerial vehicles on the market generally provide a target detection and positioning function. At present this is mainly realized by collecting image data with a visual sensor and then applying processing algorithms such as image matching to the image data. However, image matching places high demands on the scene, matching may fail, and the processing algorithms are complex.
Disclosure of Invention
The embodiments of the invention provide a target detection and positioning method and device, an unmanned aerial vehicle, and a storage medium, so as to improve the accuracy of target detection and positioning while avoiding complex processing algorithms.
In a first aspect, this embodiment provides a target detection and positioning method, applied to an unmanned aerial vehicle on which a camera and a laser radar are mounted, the method comprising:
acquiring image data and point cloud data acquired by the camera and the laser radar synchronously for a target;
calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the image data and the point cloud data;
and determining the target pose information of the target in a geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle.
In a second aspect, this embodiment provides a target detection and positioning device, integrated on an unmanned aerial vehicle on which a camera and a laser radar are mounted, the device comprising:
the data acquisition module is used for acquiring image data and point cloud data acquired by the camera and the laser radar synchronously;
the three-dimensional pose information determining module is used for calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the image data and the point cloud data;
and the target pose information determining module is used for determining the target pose information of the target in a geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle.
In a third aspect, this embodiment provides a drone, including:
an unmanned aerial vehicle main body;
the camera and the laser radar are installed on the unmanned aerial vehicle main body;
a controller in communication with the camera and the laser radar, the controller comprising:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the target detection and positioning method provided by the embodiments of the invention.
In a fourth aspect, this embodiment provides a computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the target detection and positioning method provided by the embodiments of the invention.
The embodiments of the invention provide a target detection and positioning method and device, an unmanned aerial vehicle, and a storage medium. The method is applied to an unmanned aerial vehicle on which a camera and a laser radar are mounted, and comprises: acquiring image data and point cloud data collected synchronously for a target by the camera and the laser radar; calculating three-dimensional pose information of the target in the laser radar coordinate system from the image data and the point cloud data; and determining target pose information of the target in a geodetic coordinate system from the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle. By fusing the laser radar point cloud with vision, the technical scheme determines the three-dimensional pose of the target in the laser radar coordinate system from the image and point cloud data and finally converts it into the pose of the target in the geodetic coordinate system. The scheme lowers the requirements on the use scene, improves the accuracy of target detection and positioning, and avoids complex processing algorithms.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the invention, nor to limit the scope of the invention. Other features of the invention will become apparent from the following description.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a target detection and positioning method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of another target detection and positioning method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a target detecting and positioning apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an unmanned aerial vehicle according to a fourth embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "original", "target", and the like in the description and claims of the present invention and the drawings described above are used for distinguishing similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Example one
Fig. 1 is a schematic flow chart of a target detection and positioning method according to the first embodiment of the present invention. The method is applicable to situations in which an unmanned aerial vehicle accurately detects and positions a target, and may be executed by a target detection and positioning device, which may be implemented in hardware and/or software and configured in an unmanned aerial vehicle on which a camera and a laser radar are mounted.
At present, target detection and three-dimensional target positioning on unmanned aerial vehicles mainly rely on vision: the three-dimensional position of a target is calculated from image detection using binocular vision, or monocular vision combined with the flight height. Binocular vision, for example, has the following problems: 1) It is not suitable for monotonous scenes lacking texture. Because binocular stereo vision matches images according to visual features, matching becomes difficult in scenes that lack such features, and matching errors are large or matching fails outright. 2) The computational complexity is high. As a purely visual method it must compute matches pixel by pixel, and to keep the matching result robust against the factors above, a large number of error-elimination strategies must be added to the algorithm; the demands on the algorithm are therefore high, reliable commercial deployment is difficult, and the amount of computation is large. Aiming at the problems in the prior art that image matching places high demands on the scene, matching may fail, and the processing algorithm is complex, a target detection and positioning method is needed.
As shown in fig. 1, the target detection and positioning method provided in this embodiment may specifically include the following steps:
And S110, acquiring the image data and point cloud data collected synchronously by the camera and the laser radar.
The camera may be any camera that can generate an image, for example a color industrial camera. The laser radar may be any laser radar capable of producing a point cloud, for example the solid-state laser radar CH128X1. The camera and the laser radar are mounted on the unmanned aerial vehicle through a fixing device; during flight, the unmanned aerial vehicle can acquire image data of the environment in real time through the camera and synchronously collect point cloud data through the laser radar. The target can be understood as the object to be detected and positioned; it can be any object the user needs to determine, and which targets need to be detected and positioned can be decided according to the use of the unmanned aerial vehicle.
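As a concrete illustration of the synchronous acquisition, the sketch below pairs each camera frame with the nearest laser radar sweep by timestamp. It is a minimal sketch under assumptions not stated in the patent: the sensor drivers expose sorted per-message timestamps, and the 50 ms tolerance is an arbitrary example value.

```python
from bisect import bisect_left

def pair_by_timestamp(image_stamps, cloud_stamps, max_dt=0.05):
    """Match each camera frame to the nearest lidar sweep within max_dt seconds.

    Both stamp lists are assumed sorted in ascending order.
    """
    pairs = []
    for i, t_img in enumerate(image_stamps):
        j = bisect_left(cloud_stamps, t_img)
        # The nearest sweep is either the one just before or just after t_img.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(cloud_stamps)]
        if not candidates:
            continue
        k = min(candidates, key=lambda k: abs(cloud_stamps[k] - t_img))
        if abs(cloud_stamps[k] - t_img) <= max_dt:
            pairs.append((i, k))   # (image index, point cloud index)
    return pairs
```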
The image data may be used to identify the presence and location of objects in the image. The point cloud data may be understood as a set of vectors in a laser radar coordinate system of the laser radar, and the scanning data is recorded in the form of points, each point including three-dimensional coordinates, and possibly color information or reflection intensity information. In this embodiment, three-dimensional coordinates in the point cloud data are mainly used.
It can be understood that in this embodiment image data can be obtained wherever the camera's field of view reaches, and the three-dimensional positioning of the target is then carried out through the following steps. Compared with the binocular vision method used for target detection and positioning in the prior art, the measuring range of this embodiment is not limited by a baseline, which greatly extends the detection range.
And S120, calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the image data and the point cloud data.
It is known that image data result from the imaging process of the camera, which projects a three-dimensional scene in the real environment onto a two-dimensional plane. From the image data it can be determined whether the target is contained in the image, and where the target lies in the image data.
In this step, whether the image data contain the target and where the target lies in the image data can be determined by inputting the image data acquired in real time into a pre-trained target detection model; that is, the target can be framed, either manually or automatically. When the unmanned aerial vehicle needs to perform target detection and positioning, the camera is started first, the pre-trained target detection model is loaded, and the real-time images are fed into the model so that the targets contained in the picture data can be framed.
The target detection model can be obtained by training a preset neural network on an image data set using a deep learning method; the neural network can be preset by the system or by related personnel, and the training procedure is not limited here. For example, the steps of training the preset neural network to obtain the target detection model may be: first, acquire target pictures under various illumination environments, directions, and heights to build an image data set; then train the preset neural network on the image data in that set, evaluate the trained detection models on a test set, and screen out the best-performing one according to its detection effect. That best model is taken as the final target detection model used to detect whether the image data contain a target.
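A minimal training-and-selection sketch along the lines of this paragraph is given below. The patent does not name a network or framework, so Ultralytics YOLO stands in for the preset neural network, and the dataset file targets.yaml and the hyperparameters are hypothetical.

```python
from ultralytics import YOLO

# Dataset assembled from target pictures under varied lighting, directions and
# heights, with train/val splits described by the hypothetical targets.yaml.
model = YOLO("yolov8n.pt")                        # pretrained weights as the starting point
model.train(data="targets.yaml", epochs=100, imgsz=640)
metrics = model.val()                             # evaluate on the held-out split
print("mAP@0.5:", metrics.box.map50)              # screen candidate models by detection effect
```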
It is to be appreciated that targets in the image data can be framed by inputting the image data into the pre-trained target detection model, which further yields the position information of the target in the two-dimensional image data. While the camera collects image data in real time, the laser radar collects point cloud data in real time. In this embodiment, the point cloud data acquired by the laser radar are mapped into the two-dimensional image data; the region corresponding to the position information of the target is determined, and the point cloud data inside that region are taken as the target point cloud data corresponding to the target.
It should be clear that the imaging process of the camera is the projective transformation of a three-dimensional scene in the real environment onto a two-dimensional plane; the result depends not only on the relative orientation of the objects in space but also on the internal structure of the camera, which is described by the camera parameters. The process of mapping the point cloud data into the two-dimensional image data may be: based on the point cloud data, converting the coordinate information in the laser radar coordinate system into two-dimensional pose information in the image data using the camera intrinsic matrix and the transformation matrix from the radar coordinate system to the camera coordinate system.
And S130, determining target pose information of the target in a geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle and flight data of the unmanned aerial vehicle.
The three-dimensional pose information is coordinate information in the laser radar coordinate system. The coordinate transformation in this step involves four coordinate systems: the laser radar coordinate system, the body coordinate system, the navigation coordinate system, and the geodetic coordinate system.
It can be understood that once the three-dimensional pose information of the target in the laser radar coordinate system has been determined, the pose information of the target in the body coordinate system of the unmanned aerial vehicle is determined according to the pose transformation matrix of the laser radar and the unmanned aerial vehicle. The target pose information of the target in the geodetic coordinate system is then determined from this pose information and the flight data. Illustratively, the flight data may include angle data, longitude and latitude, elevation, and so on, and may be obtained from the flight control center.
In this step, once the pose information of the target in the body coordinate system of the unmanned aerial vehicle has been obtained, the pose information of the target in the navigation coordinate system can be determined by combining it with the angle data of the unmanned aerial vehicle. The target pose information of the target in the geodetic coordinate system is then determined from the pose information in the navigation coordinate system together with the longitude, latitude, and elevation.
Illustratively, the pose information of the target in the body coordinate system is first obtained from the three-dimensional pose information of the target in the laser radar coordinate system and the pre-calibrated pose transformation matrix of the laser radar and the unmanned aerial vehicle. Finally, from the flight data of the unmanned aerial vehicle (here the longitude, latitude, and elevation) together with the earth's semi-major axis and eccentricity, the target pose information of the target in the geodetic coordinate system is obtained, comprising longitude, latitude, and elevation.
The embodiment of the invention provides a target detection and positioning method applied to an unmanned aerial vehicle on which a camera and a laser radar are mounted. The method comprises: first, acquiring image data and point cloud data collected synchronously for a target by the camera and the laser radar; then, calculating the three-dimensional pose information of the target in the laser radar coordinate system from the image data and the point cloud data; and finally, determining the target pose information of the target in the geodetic coordinate system from the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle. By fusing the laser radar point cloud with vision, the method determines the three-dimensional pose of the target in the laser radar coordinate system from the image and point cloud data and finally converts it into the pose of the target in the geodetic coordinate system. The technical scheme lowers the requirements on the use scene, improves the accuracy of target detection and positioning, and avoids complex processing algorithms.
As an optional embodiment of the present invention, on the basis of the above embodiment, the method further includes: uploading the target pose information of the target in the geodetic coordinate system to a server.
Specifically, after the target pose information of the target in the geodetic coordinate system has been determined, it can be uploaded to a server; the upload method is not limited in this step. With this target pose information, the unmanned aerial vehicle can perform accurate operations on the target, such as positioning and tracking it.
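Purely as an illustration of this upload step, the sketch below posts the pose as JSON over HTTP; the patent does not specify a transport, endpoint, or message schema, so the URL and field names here are hypothetical.

```python
import requests

def upload_pose(longitude, latitude, elevation,
                url="http://server.example/api/target_pose"):  # hypothetical endpoint
    payload = {"longitude": longitude, "latitude": latitude, "elevation": elevation}
    requests.post(url, json=payload, timeout=5.0)
```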
In this optional embodiment, the image data and the point cloud data are fused to accurately determine the pose information of the target in the geodetic coordinate system, and the obtained pose information is uploaded to the server, so that the unmanned aerial vehicle can operate on the target more accurately.
Example two
Fig. 2 is a schematic flow chart of another target detection and positioning method provided in the second embodiment of the present invention, which further optimizes the first embodiment. In this embodiment, calculating the three-dimensional pose information of the target in the laser radar coordinate system is further refined as: detecting the position information of the target in the image data; determining the target point cloud data of the region corresponding to the position information according to the position information and the point cloud data; and calculating the three-dimensional pose information of the target in the laser radar coordinate system according to the target point cloud data.
Meanwhile, determining the target pose information of the target in the geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle is further refined as: acquiring the pre-calibrated pose transformation matrix of the laser radar and the unmanned aerial vehicle; calculating the pose information of the target in the body coordinate system of the unmanned aerial vehicle according to the three-dimensional pose information and the pose transformation matrix; and determining the target pose information of the target in the geodetic coordinate system according to the pose information of the target in the body coordinate system of the unmanned aerial vehicle and the flight data.
As shown in fig. 2, the second embodiment provides a target detection and positioning method, which specifically includes the following steps:
S210, acquiring the image data and point cloud data collected synchronously by the camera and the laser radar.
The image data acquisition process and the point cloud data acquisition process are described in the above embodiments, and are not described herein again.
And S220, detecting the position information of the target in the image data.
The position information of the target in the image data can be understood as the two-dimensional coordinates of the target in the image coordinate system. Specifically, the image data collected by the camera are input into a pre-trained target detection model, which frames the target; based on the framed target, its position information in the image data can be detected. The detection method is not particularly limited here; for example, the position information of the target may be obtained by matching pixel coordinates against the image.
Optionally, before detecting the position information of the target in the image data, the method further includes: inputting the image data into a pre-trained target detection model and framing the target.
The target detection model is trained in advance from a preset neural network based on an image data set and a target test set. In this embodiment, the trained target detection model may be used to identify targets in image data. It can be understood that there may be more than one kind of target: when the unmanned aerial vehicle must recognize several different targets, image data of those different targets need to be collected, and the neural network model is trained on them to obtain a target detection model capable of identifying multiple different targets.
In this embodiment, the target may be framed manually or automatically. For manual framing, an operator determines the target contained in the image from the image data collected by the camera and performs a touch operation on the interface displaying the image, thereby completing the manual framing of the target.
Of course, the target in the image may also be framed automatically from the image data: the target may be framed directly in the image, or the position coordinates and the area of the target may be obtained by matching pixel coordinates against the image, which is not limited here.
And S230, determining target point cloud data of the area corresponding to the position information according to the position information and the point cloud data.
The target point cloud data can be understood as the point cloud data corresponding to the target. In this step, the point cloud data collected by the laser radar are first filtered, and the filtered point cloud data are mapped into the image data through the transformation between the laser radar coordinate system and the image coordinate system. From the position information of the target in the image data determined above, the region occupied by the target in the image data can be determined, and the point cloud data inside the corresponding region can be understood as the point cloud data corresponding to the target, recorded as the target point cloud data.
As an optional embodiment of the present invention, on the basis of the above embodiment, the step of determining the target point cloud data of the area corresponding to the position information according to the position information and the point cloud data is further optimized as follows:
a1) Filtering the point cloud data.
When point cloud data are acquired, influences such as equipment precision, operator experience, and environmental factors, together with the diffraction characteristics of electromagnetic waves, changes in the surface properties of the measured object, and the data splicing and registration process, inevitably introduce noise points. Besides noise caused by random measurement errors, outliers far from the main point cloud often appear because of external interference such as line-of-sight occlusion or obstacles. In this embodiment, to ensure that the point cloud data contain no interference information, the collected point cloud data are filtered first.
In this embodiment, point cloud data are filtered based on the Point Cloud Library (PCL) to deal with irregular point density that needs smoothing, outliers caused by occlusion and similar problems, large amounts of data that need down-sampling, and noise that must be removed. The filtering method in this embodiment is not particularly limited and may be bilateral filtering, Gaussian filtering, conditional filtering, pass-through filtering, voxel filtering, and so on. Specifically, the point cloud data acquired by the laser radar are filtered based on the PCL library so as to remove ground points and noise points.
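A minimal filtering sketch is shown below, with Open3D standing in for the PCL-based pipeline the patent describes (a substitution for illustration only); the voxel size, outlier thresholds, and RANSAC parameters are example values, not values taken from the patent.

```python
import open3d as o3d

def filter_cloud(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # Down-sample dense regions so the point density becomes more regular.
    pcd = pcd.voxel_down_sample(voxel_size=0.05)
    # Remove statistical outliers far from the main point cloud.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Fit the dominant plane (the ground) with RANSAC and discard its inliers.
    _, ground = pcd.segment_plane(distance_threshold=0.1, ransac_n=3,
                                  num_iterations=1000)
    return pcd.select_by_index(ground, invert=True)
```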
b1) Mapping the filtered point cloud data into the image data.
Specifically, the filtered point cloud data can be converted into pose information under the image coordinate system through a conversion relation between the laser radar coordinate system and the image coordinate system. It can be understood that the pose information in the laser radar coordinate system is three-dimensional pose information, and the pose information in the image coordinate system is two-dimensional pose information.
The transformation between the laser radar coordinate system and the image coordinate system can be expressed as:

$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}, \qquad K = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad T = \begin{bmatrix} R & t \end{bmatrix}$$

where $z_c$ is the scale parameter, $(u, v)$ is the pose information in the image coordinate system, the focal length $f$ (the distance between the camera coordinate origin and the image coordinate origin) determines the intrinsic entries $f_x$ and $f_y$, $K$ is the camera intrinsic matrix, $T$ is the 3x4 transformation matrix from the radar coordinate system to the camera coordinate system, and $(x_w, y_w, z_w)$ is the three-dimensional pose information in the laser radar coordinate system.
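The projection above, together with the region test used in step c1) below, can be sketched in NumPy as follows; the 3x3 intrinsic matrix K and the 3x4 radar-to-camera matrix T are assumed to come from a prior calibration, and all points are assumed to lie in front of the camera.

```python
import numpy as np

def project_points(points_lidar, K, T):
    """(N, 3) lidar-frame points -> (N, 2) pixel coordinates (u, v)."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # homogeneous
    pts_cam = (T @ pts_h.T).T            # into the camera frame, shape (N, 3)
    uv = (K @ pts_cam.T).T               # apply the intrinsic matrix K
    return uv[:, :2] / uv[:, 2:3]        # divide out the scale parameter z_c

def points_in_box(uv, box):
    """Boolean mask of projected points inside a detection box (u_min, v_min, u_max, v_max)."""
    u_min, v_min, u_max, v_max = box
    return ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
            (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
```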
c1) Determining the target point cloud data of the region corresponding to the position information.
Specifically, after the position information of the target in the image data has been detected, the region of the target in the image data can be determined; once the point cloud data have been mapped into the image data, the point cloud data of that region are taken as the point cloud data corresponding to the target and recorded as the target point cloud data.
And S240, calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the target point cloud data.
Specifically, computation over the target point cloud data yields the three-dimensional pose information of the target in the laser radar coordinate system; this may be the pose information of the target's center point.
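One simple realization of this step, reusing the box mask from the projection sketch above, takes the centroid of the target's points as the pose of the target center; the patent leaves the exact statistic open, so the centroid here is an assumption.

```python
import numpy as np

def target_pose_lidar(points_lidar, in_box_mask):
    """Centroid (x, y, z) of the lidar points whose projections fall in the detection box."""
    return points_lidar[in_box_mask].mean(axis=0)
```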
And S250, acquiring the pre-calibrated pose transformation matrix of the laser radar and the unmanned aerial vehicle.
The pose transformation matrix of the laser radar and the unmanned aerial vehicle can be expressed as the homogeneous matrix

$$M_0 = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$$

in which $R$ and $t$ are the rotation and translation from the laser radar coordinate system to the body coordinate system. Specifically, the pre-calibrated pose transformation matrix of the laser radar and the unmanned aerial vehicle is obtained.
And S260, calculating the pose information of the target in the body coordinate system of the unmanned aerial vehicle according to the three-dimensional pose information and the pose transformation matrix of the laser radar and the unmanned aerial vehicle.
Specifically, suppose the pose information of the target in the laser radar coordinate system is denoted $P_0$ and the pose transformation matrix of the laser radar and the unmanned aerial vehicle is denoted $M_0$; the pose information of the target in the body coordinate system of the unmanned aerial vehicle is then $P_1 = P_0 \cdot M_0$.
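A sketch of this lidar-to-body step is given below. It is written for column vectors, i.e. $P_1 = M_0 \cdot P_0$ with a 4x4 homogeneous $M_0$; the row-vector product $P_0 \cdot M_0$ written above is the transposed form of the same relation, and the choice of convention here is an assumption.

```python
import numpy as np

def lidar_to_body(p0, M0):
    """p0: (x, y, z) in the lidar frame; M0: 4x4 pre-calibrated lidar-to-body transform."""
    p_h = np.append(p0, 1.0)     # homogeneous coordinates (x, y, z, 1)
    return (M0 @ p_h)[:3]        # pose of the target in the body frame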
And S270, determining target pose information of the target in a geodetic coordinate system according to the pose information and flight data of the target in the body coordinate system of the unmanned aerial vehicle.
The flight data refer to the angle data (such as the pitch angle, roll angle, and yaw angle), the longitude and latitude of the unmanned aerial vehicle, the elevation, and so on. Specifically, the pose information of the target in the navigation coordinate system can be determined from its pose information in the body coordinate system of the unmanned aerial vehicle and the angle data; the target pose information in the geodetic coordinate system is then determined from the pose information in the navigation coordinate system together with the longitude, latitude, and elevation of the unmanned aerial vehicle.
As an optional embodiment of the present invention, on the basis of the above embodiment, the flight data includes angle data, longitude and latitude, and elevation, and the step of determining the target pose information of the target in the geodetic coordinate system according to the pose information of the target in the body coordinate system of the unmanned aerial vehicle and the flight data may be expressed as:
a2) Determining the pose information of the target in the navigation coordinate system according to the angle data and the pose information of the target in the body coordinate system of the unmanned aerial vehicle.
The angle data comprise the pitch angle, the roll angle, and the yaw angle. Denoting the pitch angle by $\theta$, the roll angle by $\psi$, and the yaw angle by $\phi$, the transformation matrix from the navigation coordinate system to the body coordinate system of the unmanned aerial vehicle can be expressed as:

$$\mathrm{DCM} = \begin{bmatrix} \cos\theta\cos\phi & \cos\theta\sin\phi & -\sin\theta \\ \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi & \sin\psi\sin\theta\sin\phi + \cos\psi\cos\phi & \sin\psi\cos\theta \\ \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi & \cos\psi\sin\theta\sin\phi - \sin\psi\cos\phi & \cos\psi\cos\theta \end{bmatrix}$$

The transformation matrix from the body coordinate system to the navigation coordinate system can then be expressed as the rotation matrix $R = \mathrm{DCM}^{T}$.
Specifically, given the pose information of the target in the body coordinate system of the unmanned aerial vehicle, the pose information of the target in the navigation coordinate system can be determined through the rotation matrix from the body coordinate system to the navigation (north-east-down) coordinate system.
Exemplarily, suppose the pose information of the target in the body coordinate system of the unmanned aerial vehicle is $P_1$. Based on the rotation matrix $R = \mathrm{DCM}^{T}$, the pose information of the target in the navigation coordinate system, denoted $P_2$, is $P_2 = R \cdot P_1 = \mathrm{DCM}^{T} \cdot P_1$.
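The body-to-navigation rotation can be sketched as below, building the DCM from the flight angles exactly as in the matrix above (theta = pitch, psi = roll, phi = yaw, with a standard yaw-pitch-roll convention assumed here) and applying its transpose.

```python
import numpy as np

def body_to_nav(p1, pitch, roll, yaw):
    """P2 = DCM^T · P1: rotate a body-frame vector into the navigation frame."""
    ct, st = np.cos(pitch), np.sin(pitch)   # theta
    cs, ss = np.cos(roll), np.sin(roll)     # psi
    cf, sf = np.cos(yaw), np.sin(yaw)       # phi
    dcm = np.array([                        # navigation -> body
        [ct * cf,                ct * sf,                -st],
        [ss * st * cf - cs * sf, ss * st * sf + cs * cf, ss * ct],
        [cs * st * cf + ss * sf, cs * st * sf - ss * cf, cs * ct],
    ])
    return dcm.T @ p1
```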
b2) Determining the target pose information of the target in the geodetic coordinate system according to the pose information of the target in the navigation coordinate system, the longitude and latitude, and the elevation.
In this embodiment, the pose information of the target in the navigation coordinate system is denoted $P_2$, with three components $P_{2\_x}$ (north), $P_{2\_y}$ (east), and $P_{2\_z}$ (down). The target pose information of the target in the geodetic coordinate system, comprising the target longitude, target latitude, and target elevation, is determined from the pose information of the target in the navigation coordinate system, the longitude and latitude, the elevation, the earth's semi-major axis, and the eccentricity. Specifically:

The target longitude may be expressed as: body longitude $+ \; P_{2\_y} \cdot d_{East} \cdot 180/\pi$

The target latitude may be expressed as: body latitude $+ \; P_{2\_x} \cdot d_{North} \cdot 180/\pi$

The target elevation may be expressed as: body elevation $- \; P_{2\_z}$, where

$$d_{North} = \frac{1}{R_M + h}, \qquad d_{East} = \frac{1}{(R_N + h)\cos B}, \qquad R_M = \frac{a(1 - e^2)}{(1 - e^2 \sin^2 B)^{3/2}}, \qquad R_N = \frac{a}{\sqrt{1 - e^2 \sin^2 B}}$$

in which $a$ represents the earth's semi-major axis, $e$ the eccentricity, $B$ the body latitude, and $h$ the body elevation.
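The final conversion can be sketched as follows, using WGS-84 values for the semi-major axis and eccentricity as an assumption (the patent names the constants but not the ellipsoid); body latitude and longitude are in degrees, and p2 = (north, east, down) in metres from the previous step.

```python
import numpy as np

A = 6378137.0            # earth semi-major axis a in metres (WGS-84, assumed)
E2 = 6.69437999014e-3    # squared eccentricity e^2 (WGS-84, assumed)

def target_geodetic(body_lat_deg, body_lon_deg, body_elev, p2):
    B = np.radians(body_lat_deg)
    w = np.sqrt(1.0 - E2 * np.sin(B) ** 2)
    r_m = A * (1.0 - E2) / w ** 3                    # meridian radius of curvature R_M
    r_n = A / w                                      # prime-vertical radius of curvature R_N
    d_north = 1.0 / (r_m + body_elev)                # metre-to-radian scale going north
    d_east = 1.0 / ((r_n + body_elev) * np.cos(B))   # metre-to-radian scale going east
    lat = body_lat_deg + p2[0] * d_north * 180.0 / np.pi
    lon = body_lon_deg + p2[1] * d_east * 180.0 / np.pi
    elev = body_elev - p2[2]                         # p2[2] is the downward component
    return lon, lat, elev
```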
This embodiment calculates the three-dimensional pose information of the target in the laser radar coordinate system from the image data and the point cloud data, and determines the target pose information of the target in the geodetic coordinate system from the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle. First, the position information of the target in the image data is detected; the target point cloud data are then determined from the position information and the point cloud data, and the three-dimensional pose information of the target in the laser radar coordinate system is computed from them; finally, the target pose information in the geodetic coordinate system is determined. By fusing the laser radar point cloud with vision, the three-dimensional pose information corresponding to the target is determined from the region of the target in the image, and the pose of the target in the laser radar coordinate system is finally converted, through the coordinate transformations, into the target pose information relative to the geodetic coordinate system. The technical scheme lowers the requirements on the use scene, improves the accuracy of target detection, avoids complex processing algorithms, and greatly improves the military and civil practicability of the unmanned aerial vehicle.
Example three
Fig. 3 is a schematic structural diagram of a target detection and positioning device according to a third embodiment of the present invention, where the method is applicable to a situation where an unmanned aerial vehicle is used to perform accurate detection and positioning on a target. This target detection positioner can adopt the form of hardware and/or software to realize, can dispose in unmanned aerial vehicle, installs camera and laser radar on the unmanned aerial vehicle.
As shown in fig. 3, the apparatus includes: a data acquisition module 31, a three-dimensional pose information determination module 32, and a target pose information determination module 33, wherein:
the data acquisition module 31 is used for acquiring image data and point cloud data acquired by a camera and a laser radar synchronously;
the three-dimensional pose information determining module 32 is used for calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the image data and the point cloud data;
and the target pose information determining module 33 is configured to determine target pose information of the target in the geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and flight data of the unmanned aerial vehicle.
The embodiment of the invention provides a target detection and positioning device integrated in an unmanned aerial vehicle on which a camera and a laser radar are mounted. The data acquisition module of the device acquires the image data and point cloud data collected synchronously by the camera and the laser radar; the three-dimensional pose information determining module calculates the three-dimensional pose information of the target in the laser radar coordinate system from the image data and the point cloud data; and the target pose information determining module determines the target pose information of the target in the geodetic coordinate system from the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle. By fusing the laser radar point cloud with vision, the device determines the three-dimensional pose of the target in the laser radar coordinate system from the image data and the point cloud data and finally converts it into the pose of the target in the geodetic coordinate system. The technical scheme lowers the requirements on the use scene, improves the accuracy of target detection and positioning, and avoids complex processing algorithms.
Optionally, the three-dimensional pose information determination module 32 includes:
a position information detecting unit for detecting position information of the object in the image data;
the target point cloud data determining unit is used for determining target point cloud data of a region corresponding to the position information according to the position information and the point cloud data;
and the three-dimensional pose information calculation unit is used for calculating the three-dimensional pose information of the target under the laser radar coordinate system of the laser radar according to the target point cloud data.
Optionally, the apparatus further comprises a target selection module, configured to:
inputting the image data into a pre-trained target detection model and framing the target.
Further, the target point cloud data determining unit is specifically configured to:
filtering the point cloud data;
mapping the filtered point cloud data into image data;
and determining target point cloud data of the area corresponding to the position information.
Optionally, the target pose information determining module 33 includes:
the transformation matrix acquisition unit is used for acquiring the pre-calibrated pose transformation matrix of the laser radar and the unmanned aerial vehicle;
the body coordinate system pose calculation unit is used for calculating the pose information of the target in the body coordinate system of the unmanned aerial vehicle according to the three-dimensional pose information and the pose transformation matrix of the laser radar and the unmanned aerial vehicle;
and the target pose information determining unit is used for determining the target pose information of the target in a geodetic coordinate system according to the pose information of the target in the body coordinate system of the unmanned aerial vehicle and the flight data.
Further, the flight data include angle data, longitude and latitude, and elevation, and the target pose information determining unit is specifically configured to:
determining the position and attitude information of the target under a navigation coordinate system according to the angle data and the position and attitude information of the target under the body coordinate system of the unmanned aerial vehicle;
and determining the target pose information of the target in the geodetic coordinate system according to the pose information, the longitude and latitude and the elevation of the target in the navigation coordinate system.
Optionally, the apparatus further comprises an upload module configured to:
uploading the target pose information of the target in the geodetic coordinate system to a server.
The target detection positioning device provided by the embodiment of the invention can execute the target detection positioning method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 4 is a schematic structural diagram of an unmanned aerial vehicle according to a fourth embodiment of the present invention. As shown in fig. 4, an unmanned aerial vehicle provided by the fourth embodiment of the present invention includes: a main body of the drone (not shown in the figures); the camera 2 is installed on the unmanned aerial vehicle main body; the laser radar 3 is installed on the unmanned aerial vehicle main body; and the controller 4 is in communication connection with the camera 2 and the laser radar 3.
The controller 4 includes: one or more processors 41 and a storage device 42; there may be one or more processors 41 in the controller 4, and fig. 4 takes one processor 41 as an example. The storage device 42 is used to store one or more programs, and the one or more programs are executed by the one or more processors 41 so that the one or more processors 41 implement the target detection and positioning method according to any embodiment of the present invention.
The processor 41 and the storage device 42 in the controller 4 may be connected by a bus or other means, and the bus connection is exemplified in fig. 4.
The storage device 42 in the controller 4, as a computer-readable storage medium, is used to store one or more programs, which may be software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the target detection and positioning method provided in the first or second embodiment of the present invention (for example, the modules of the target detection and positioning apparatus shown in fig. 3: the data acquisition module 31, the three-dimensional pose information determination module 32, and the target pose information determination module 33). The processor 41 executes the various functional applications and data processing of the controller 4 by running the software programs, instructions, and modules stored in the storage device 42, i.e., implements the target detection and positioning method of the above method embodiments.
The storage device 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the controller 4, and the like. Further, the storage 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, storage 42 may further include memory located remotely from processor 41, which may be connected to the device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 43 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the controller 4. The output device 44 may include a display device such as a display screen.
And, when the one or more programs included in the above-mentioned controller 4 are executed by the one or more processors 41, the programs perform the following operations:
acquiring image data and point cloud data acquired by the camera and the laser radar synchronously for a target;
calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the image data and the point cloud data;
and determining the target pose information of the target in a geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle and the flight data of the unmanned aerial vehicle.
Example five
The fifth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs a target detection and positioning method, the method comprising:
acquiring image data and point cloud data acquired by the camera and the laser radar synchronously for a target;
calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the image data and the point cloud data;
and determining the target pose information of the target in a geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle and the flight data of the unmanned aerial vehicle.
Optionally, when executed by the processor, the program may also be used to perform the target detection and positioning method provided in any embodiment of the present invention.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wireline, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A target detection and positioning method, applied to an unmanned aerial vehicle on which a camera and a laser radar are mounted, the method comprising:
acquiring image data and point cloud data acquired by the camera and the laser radar synchronously for a target;
calculating three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the image data and the point cloud data;
and determining target pose information of the target in a geodetic coordinate system according to the three-dimensional pose information, the pose transformation matrix of the laser radar and the unmanned aerial vehicle, and the flight data of the unmanned aerial vehicle.
2. The method of claim 1, wherein the calculating, from the image data and the point cloud data, three-dimensional pose information of the target in a lidar coordinate system of the lidar comprises:
detecting position information of the target in the image data;
determining target point cloud data of a region corresponding to the position information according to the position information and the point cloud data;
and calculating the three-dimensional pose information of the target under a laser radar coordinate system of the laser radar according to the target point cloud data.
3. The method of claim 2, further comprising, before detecting the position information of the target in the image data:
inputting the image data into a pre-trained target detection model and framing the target.
4. The method of claim 2, wherein determining the target point cloud data of the area corresponding to the position information according to the position information and the point cloud data comprises:
filtering the point cloud data;
mapping the filtered point cloud data into the image data;
and determining target point cloud data of the area corresponding to the position information.
5. The method of claim 1, wherein determining target pose information for the target in a geodetic coordinate system based on the three-dimensional pose information, a pose transformation matrix for the lidar and the drone, and flight data for the drone comprises:
acquiring the pre-calibrated pose transformation matrix of the laser radar and the unmanned aerial vehicle;
calculating the pose information of the target under a body coordinate system of the unmanned aerial vehicle according to the three-dimensional pose information and the pose transformation matrix of the laser radar and the unmanned aerial vehicle;
and determining target pose information of the target in a geodetic coordinate system according to the pose information of the target in the body coordinate system of the unmanned aerial vehicle and the flight data.
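Claim 5's body-frame step reduces to one matrix product when the pre-calibrated pose transformation matrix is expressed as a 4x4 homogeneous transform; that layout is an assumption of this sketch.

```python
import numpy as np

def lidar_to_body(p_lidar: np.ndarray, T_body_from_lidar: np.ndarray) -> np.ndarray:
    """Map a target position (3-vector) from the laser radar coordinate
    system to the drone body coordinate system using the pre-calibrated
    4x4 homogeneous extrinsics."""
    return (T_body_from_lidar @ np.append(p_lidar, 1.0))[:3]
```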
6. The method of claim 5, wherein the flight data comprises angle data, longitude and latitude, and elevation, and wherein determining the target pose information of the target in the geodetic coordinate system according to the pose information of the target in the body coordinate system of the unmanned aerial vehicle and the flight data comprises:
determining pose information of the target in a navigation coordinate system according to the angle data and the pose information of the target in the body coordinate system of the unmanned aerial vehicle;
and determining the target pose information of the target in the geodetic coordinate system according to the pose information of the target in the navigation coordinate system, the longitude and latitude, and the elevation.
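One way to realise claim 6, sketched under explicit assumptions: Z-Y-X Euler angles for the attitude data, a north-east-down (NED) navigation frame, and a flat-earth offset from the drone's longitude/latitude/elevation, which is adequate only at short ranges.

```python
import numpy as np

def body_to_geodetic(p_body, roll, pitch, yaw, lat_deg, lon_deg, elev_m):
    """Rotate a body-frame target position into a local NED navigation
    frame, then offset the drone's geodetic position. Angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    north, east, down = Rz @ Ry @ Rx @ p_body      # body -> navigation frame

    R_EARTH = 6378137.0                            # WGS-84 equatorial radius, metres
    lat = lat_deg + np.degrees(north / R_EARTH)
    lon = lon_deg + np.degrees(east / (R_EARTH * np.cos(np.radians(lat_deg))))
    return lat, lon, elev_m - down                 # target geodetic position
```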
7. The method of any one of claims 1-6, further comprising:
and uploading the target pose information of the target in the geodetic coordinate system to a server.
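Claim 7's upload step could look like the following; the endpoint path and JSON payload layout are illustrative assumptions, not specified anywhere in the patent.

```python
import requests

def upload_target_pose(server_url: str, pose: dict) -> None:
    """POST the geodetic target pose to a server. The '/target-pose' path
    and the payload schema are hypothetical."""
    resp = requests.post(f"{server_url}/target-pose", json=pose, timeout=5.0)
    resp.raise_for_status()
```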
8. A target detection and positioning device, integrated on an unmanned aerial vehicle on which a camera and a laser radar are mounted, the device comprising:
a data acquisition module, configured to acquire image data and point cloud data synchronously collected for a target by the camera and the laser radar;
a three-dimensional pose information determining module, configured to calculate three-dimensional pose information of the target in a laser radar coordinate system of the laser radar according to the image data and the point cloud data;
and a target pose information determining module, configured to determine target pose information of the target in a geodetic coordinate system according to the three-dimensional pose information, a pose transformation matrix between the laser radar and the unmanned aerial vehicle, and flight data of the unmanned aerial vehicle.
9. An unmanned aerial vehicle, comprising:
an unmanned aerial vehicle main body;
a camera and a laser radar mounted on the unmanned aerial vehicle main body;
and a controller in communication with the camera and the laser radar, the controller comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target detection and positioning method according to any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the target detection and positioning method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210772899.3A CN115272452A (en) | 2022-06-30 | 2022-06-30 | Target detection positioning method and device, unmanned aerial vehicle and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115272452A true CN115272452A (en) | 2022-11-01 |
Family
ID=83763069
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210772899.3A Pending CN115272452A (en) | 2022-06-30 | 2022-06-30 | Target detection positioning method and device, unmanned aerial vehicle and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115272452A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116363623A (en) * | 2023-01-28 | 2023-06-30 | 苏州飞搜科技有限公司 | Vehicle detection method based on millimeter wave radar and vision fusion |
CN116363623B (en) * | 2023-01-28 | 2023-10-20 | 苏州飞搜科技有限公司 | Vehicle detection method based on millimeter wave radar and vision fusion |
CN117806336A (en) * | 2023-12-26 | 2024-04-02 | 珠海翔翼航空技术有限公司 | Automatic berthing method, system and equipment for airplane based on two-dimensional and three-dimensional identification |
CN118261982A (en) * | 2024-04-26 | 2024-06-28 | 连云港空巡智能科技有限公司 | Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology |
CN118261982B (en) * | 2024-04-26 | 2024-09-17 | 连云港空巡智能科技有限公司 | Method and system for constructing three-dimensional model of unmanned aerial vehicle by utilizing laser point cloud scanning technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||