CN112378397B - Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle


Info

Publication number
CN112378397B
CN112378397B
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
information
target
image
Prior art date
Legal status
Active
Application number
CN202011205401.2A
Other languages
Chinese (zh)
Other versions
CN112378397A (en)
Inventor
赵小川
董忆雪
李陈
徐凯
宋刚
刘华鹏
郑君哲
邵佳星
史津竹
王子彻
陈路豪
马燕琳
冯云铎
Current Assignee
China North Computer Application Technology Research Institute
Original Assignee
China North Computer Application Technology Research Institute
Priority date
Filing date
Publication date
Application filed by China North Computer Application Technology Research Institute
Priority to CN202011205401.2A
Publication of CN112378397A
Application granted
Publication of CN112378397B

Classifications

    • G01C21/005 Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/165 Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G06F17/142 Fast Fourier transforms, e.g. using a Cooley-Tukey type algorithm
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N3/045 Combinations of networks (neural network architectures)
    • G06N3/08 Learning methods (neural networks)
    • G06V20/625 License plates

Abstract

The disclosure provides a method and a device for tracking a target by an unmanned aerial vehicle, and the unmanned aerial vehicle itself. The method comprises the following steps: acquiring an image captured by an image-acquisition sensor of the unmanned aerial vehicle; determining first spatial position information of a target object according to the image area corresponding to the identified target object in the image; predicting next spatial position information of the target object according to the first spatial position information; and controlling the unmanned aerial vehicle to fly towards the target object according to the next spatial position information. According to the method of the embodiment, the target tracking effect can be improved.

Description

Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle
Technical Field
The disclosure relates to the technical field of unmanned aerial vehicle target tracking, and more particularly, to a method and a device for tracking a target by an unmanned aerial vehicle and the unmanned aerial vehicle.
Background
As human society urbanizes, the contradiction between the explosive growth of urban population and the limited surface area keeps intensifying, so urban combat systems are expanding into underground space. As underground space plays an ever greater role in urban operations, achieving target reconnaissance and detection in underground space becomes particularly important. In underground spaces, using unmanned aerial vehicles to search for, identify, lock and track targets has wide application. For an unmanned aerial vehicle to perform such tasks normally in underground space, the problem of target tracking by the unmanned aerial vehicle in that space needs to be solved.
Currently, a target may be tracked according to preset feature points, such as the color, texture and shape of the target.
However, because the spatial position of the target generally changes, the tracking effect is poor when the target is identified by relying on feature points alone.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a new technical solution for tracking a target by an unmanned aerial vehicle, so as to improve a target tracking effect.
According to a first aspect of the present disclosure, there is provided a method of tracking a target by a drone provided with a sensor for acquiring images, the method comprising: acquiring an image acquired by the sensor; determining first spatial position information of the target object according to an image area corresponding to the identified target object in the image; predicting next spatial position information of the target object according to the first spatial position information; and controlling the unmanned aerial vehicle to fly towards the target object according to the next space position information.
According to a second aspect of the present disclosure there is also provided a drone target tracking device comprising a processor and a memory for storing instructions for controlling the processor to operate to perform the method according to the first aspect of the present disclosure.
According to a third aspect of the present disclosure, there is also provided a drone, characterized by comprising a sensor for acquiring images and a drone target tracking device according to the second aspect of the present disclosure; the unmanned aerial vehicle target tracking device is in communication connection with the sensor.
The beneficial effect of the method of the embodiments of the disclosure is as follows: an image captured by the image-acquisition sensor of the unmanned aerial vehicle is acquired; first spatial position information of the target object is determined according to the image area corresponding to the identified target object in the image; the next spatial position information of the target object is predicted according to the first spatial position information; and the unmanned aerial vehicle is controlled to fly towards the target object according to the next spatial position information. According to the method of the embodiments, the target tracking effect can be improved.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a block schematic diagram of an intelligent drone system, according to one embodiment;
FIG. 2 is a flow diagram of a method of unmanned aerial vehicle tracking targets according to one embodiment;
FIG. 3 is a schematic diagram of an optimized fDSST algorithm flow according to one embodiment;
FIG. 4 is a flow diagram of a method of unmanned aerial vehicle tracking a target according to one embodiment;
FIG. 5 is a schematic diagram of a drone autonomous positioning software interactive interface, according to one embodiment;
FIG. 6 is a flow chart of a method of tracking a target by a drone according to another embodiment;
FIG. 7 is a schematic diagram of an unmanned aerial vehicle navigation obstacle avoidance software interaction interface, according to one embodiment;
FIG. 8 is a flow chart of a method of tracking a target by a drone according to another embodiment;
FIG. 9 is a block schematic diagram of a drone target tracking device according to one embodiment;
fig. 10 is a block schematic diagram of a drone according to one embodiment.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
< System example >
Fig. 1 is a schematic structural diagram of an alternative intelligent unmanned aerial vehicle system to which the method of the embodiments of the present disclosure may be applied. As shown in fig. 1, the intelligent unmanned aerial vehicle system at least may include an unmanned aerial vehicle and a ground measurement and control terminal. The unmanned plane and the ground measurement and control terminal can interact in a wireless communication mode.
As shown in fig. 1, the unmanned aerial vehicle may include at least a platform and power subsystem 10, a flight control subsystem 20, a perception and information fusion subsystem 30, a target detection and tracking subsystem 40, a data link subsystem 50, and a ground measurement and control subsystem 60.
The platform and power subsystem 10 may serve as a multi-sensor mounting platform and may specifically include an airframe, brushless motors, electronic speed controllers, propellers, a battery and other components; the flight control subsystem 20 may provide attitude and track stabilization control, flight state parameter acquisition and navigation computation for the unmanned aerial vehicle, and may specifically include an inertial sensor 201, an autopilot, a receiver, a status indicator and other components; the perception and information fusion subsystem 30 may provide autonomous positioning, navigation flight and other functions in an underground garage environment, and may specifically include an onboard computer, a two-dimensional laser radar 301, a visual odometer 302, a depth camera 303, a fixed-height radar and the like; the target detection and tracking subsystem 40 may provide target detection and identification as well as dynamic tracking after target locking, and may specifically include components such as a photoelectric pod 401; the data link subsystem 50 may employ a strongly anti-blocking integrated image and data link to transmit images and data between the aircraft and the ground in real time, and may specifically include an airborne end, a ground end and the like; the ground measurement and control subsystem 60 may provide ground control and human-machine interaction functions, and may specifically include a ground measurement and control terminal, a remote controller and the like.
Based on the above, the device interfacing relationship of the unmanned aerial vehicle may be as follows: the onboard computer is connected with the fixed-height radar, the laser radar and the autopilot respectively through UART (Universal Asynchronous Receiver/Transmitter) lines, and with the visual odometer and the depth camera respectively through USB lines; the photoelectric pod is connected with the onboard computer through an HDMI (High Definition Multimedia Interface) line, with the airborne end of the data link through a CVBS (Composite Video Broadcast Signal) video line, and with the autopilot through an RS422 (balanced-voltage digital interface circuit) line; the autopilot is also connected with the airborne end of the data link through RS422; and the airborne end of the data link is wirelessly connected with the ground end of the data link.
In this embodiment, the inertial sensor 201, i.e. the inertial measurement unit (IMU), may be used to obtain a six-axis position and attitude solution; the two-dimensional laser radar 301 may be used to achieve high-precision two-dimensional positioning; the visual odometer 302, i.e. the visual sensor, may be used to obtain a six-axis reference position and attitude solution; the depth camera 303 may be used to acquire depth data; the fixed-height radar 304 may be used to achieve accurate altitude holding; and the photoelectric pod 401 may be used to collect video data.
< method example >
Example 1
Fig. 2 is a flow diagram of a method of unmanned aerial vehicle tracking targets according to one embodiment. In this embodiment, the method for tracking the target by the unmanned aerial vehicle may include the following steps S210 to S240.
Step S210, acquiring an image captured by the image-acquisition sensor of the unmanned aerial vehicle. For example, the image may be an image captured by the photoelectric pod 401 described above.
Step S220, determining first spatial position information of the target object according to the image area corresponding to the identified target object in the image.
In detail, the target object to be tracked can be identified from the images acquired by the sensor, and the target object is locked once it has been identified. The target object is then tracked based on the image region corresponding to the identified target object, combined with the images acquired by the sensor after the target object has been identified.
In this embodiment, after determining the image area corresponding to the target object, the spatial position of the target object may be determined according to the determined image area, and then the next spatial position of the target object may be predicted according to the determined spatial position.
In detail, the method for tracking the target by the unmanned aerial vehicle can be applied to an underground garage environment so as to realize target tracking of the unmanned aerial vehicle in the underground garage. Thus, in one embodiment, the first spatial location information of the target object is spatial location information of the target object in an underground garage.
Step S230, predicting the next spatial position information of the target object according to the first spatial position information.
Considering the scale variation of the tracked target in the visual image and the requirement of fast real-time tracking, the fDSST tracking algorithm can be used to achieve stable tracking of the dynamic target.
Based on this, in one embodiment, the step S230 includes: predicting a new position from the position corresponding to the first spatial position information by using the position filter of the fDSST algorithm; predicting a new size from the new position and the size corresponding to the first spatial position information by using the scale filter of the fDSST algorithm; and obtaining the next spatial position information of the target object from the new position and the new size.
In this embodiment, the next spatial location information may be predicted based on the location and the size corresponding to the first spatial location information through prediction of the new location and the new size.
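For illustration only, the following minimal sketch (with dummy response maps; the filters themselves and the candidate scale steps are assumptions, not the patent's implementation) shows how the two predictions are chained: the peak of the position-filter response gives the new position, and the peak of the scale-filter response gives the new size.

```python
import numpy as np

def fdsst_predict(pos_response, scale_responses, prev_pos, prev_size, scale_steps):
    """Chain the two fDSST-style predictions: the peak of the position-filter
    response map gives the new position, then the peak of the scale-filter
    response (one score per candidate scale) gives the new size."""
    # New position: offset of the response peak from the map centre,
    # added to the previous position.
    h, w = pos_response.shape
    py, px = np.unravel_index(np.argmax(pos_response), pos_response.shape)
    new_pos = (prev_pos[0] + px - w // 2, prev_pos[1] + py - h // 2)

    # New size: the candidate scale step with the highest scale-filter response,
    # applied to the previous size.
    s = scale_steps[int(np.argmax(scale_responses))]
    new_size = (prev_size[0] * s, prev_size[1] * s)

    # Position and size together form the predicted next spatial position information.
    return new_pos, new_size

# Dummy example: response peak slightly right of and below the centre, scale unchanged.
pos_resp = np.zeros((33, 33)); pos_resp[18, 20] = 1.0
scale_resp = np.array([0.2, 0.9, 0.4])
print(fdsst_predict(pos_resp, scale_resp, prev_pos=(120, 80), prev_size=(40, 60),
                    scale_steps=[0.95, 1.0, 1.05]))
```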
To further enhance tracking accuracy and achieve stable tracking of dynamic targets, the fDSST algorithm may be optimized. For example, a circulant matrix method may be used to generate a series of training samples from the target selected in the first frame, an optimal correlation filter is trained to estimate the position of the target in the next frame, and multi-channel HOG (Histogram of Oriented Gradients) features are selected to train a multi-channel correlation filter; a minimal sketch of this correlation-filter loop is given after the step list below. Based on this, the optimized fDSST algorithm flow may be as shown in FIG. 3. The dashed lines in fig. 3 are used to identify the image region corresponding to the target object in the video image.
Referring to fig. 3, the algorithm flow may include:
(1) Extracting target features according to the target information in the first frame of the image, selecting a Gaussian function centred on the target as the expected output, and computing the initial filter parameters from this input and output;
(2) For each subsequent frame, extracting image features around the position predicted in the previous frame;
(3) Applying a cosine window and predicting the target position in the current frame with the correlation filter, the computation being accelerated with the Fast Fourier Transform (FFT);
(4) Outputting the position of the maximum response, namely the target position, and extracting it;
(5) Updating the correlation filter parameters according to the extracted features and returning to step (2).
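As a minimal illustration of steps (1) to (5), the sketch below implements a single-channel correlation filter trained and applied entirely in the frequency domain via the FFT. It is only a simplified stand-in for the optimized fDSST algorithm: the multi-channel HOG features, the PCA compression and the scale filter are omitted, and the regularisation constant, Gaussian width and learning rate are assumed values.

```python
import numpy as np

def cosine_window(patch):
    """Apply a cosine (Hanning) window to suppress boundary effects (step (3))."""
    h, w = patch.shape
    return patch * np.hanning(h)[:, None] * np.hanning(w)[None, :]

def gaussian_label(shape, sigma=2.0):
    """Expected output: a Gaussian centred on the target (step (1))."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, label, lam=1e-2):
    """Closed-form correlation filter in the frequency domain: the FFT turns
    spatial correlation into point-wise products (steps (1) and (5))."""
    F = np.fft.fft2(cosine_window(patch))
    Y = np.fft.fft2(label)
    return Y * np.conj(F), F * np.conj(F) + lam      # numerator A, denominator B

def detect(A, B, patch):
    """Correlation response for a new frame; its maximum gives the predicted
    target position (steps (2) to (4))."""
    F = np.fft.fft2(cosine_window(patch))
    return np.real(np.fft.ifft2((A / B) * F))

def update(A, B, patch, label, lr=0.025):
    """Linear running update of the filter parameters (step (5))."""
    A_new, B_new = train_filter(patch, label)
    return (1 - lr) * A + lr * A_new, (1 - lr) * B + lr * B_new
```

In the full algorithm the same train/detect/update cycle is run for both the position filter and the scale filter.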
Based on this, in one embodiment, in the step S230, the predicting the next spatial location information of the target object includes: predicting the next spatial position information of the target object by using the optimized fDSST algorithm, wherein the optimized fDSST algorithm has one or more of the following features: first, the filter derivation uses point-wise operations, and operations in the time domain are converted to the frequency domain and accelerated with the FFT (Fast Fourier Transform); second, the HOG features are compressed with PCA (Principal Component Analysis) dimensionality reduction, which lowers the feature dimensions of the position filter and the scale filter, and the scale estimate is obtained by trigonometric polynomial interpolation; third, the filtering results are interpolated, training and detection samples use coarse feature grid points, and the final prediction is obtained by trigonometric polynomial interpolation.
In this embodiment, the optimized fDSST tracking algorithm can achieve stable tracking of a dynamic target. The target is tracked by a position filter and a scale filter based on HOG features; fast FFT computation makes the algorithm fast and efficient, and since the computational cost of the FFT is proportional to the feature dimension, the PCA dimension-compression method reduces the amount of computation many times over. The target detection range can thus be enlarged, the tracking precision and effect of the algorithm are better, and the real-time requirement is met while accuracy is ensured. On this basis, the unmanned aerial vehicle can still track a specific target in real time while flying at high speed, and the method can be applied to underground garage environments that suffer from interference by similar targets, occlusion by scene obstacles and cluttered backgrounds, with a good target-vehicle tracking effect.
In one embodiment, experiments were performed on videos acquired by a quad-rotor aircraft, and the results of the experiments indicate that the average frame rate exceeds 35 frames/second, a real-time effect is achieved, and the tracking error is within an acceptable range compared to the flight error caused by the shake of the quad-rotor aircraft.
And step S240, controlling the unmanned aerial vehicle to fly towards the target object according to the next space position information.
In this embodiment, a ground target is filmed with a high-definition camera, the acquired image information is processed in real time by the onboard image processing algorithm to obtain a tracking result, and the tracking result can further be sent to the flight control system of the unmanned aerial vehicle through a serial port. Specifically, the flight control system may calculate the offset based on the tracking result and control the attitude of the aircraft through the corresponding control quantities, thereby completing tracking of the target object.
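The patent does not specify the control law itself; as one hedged example of how a flight control system could turn the tracking result into control quantities, a simple proportional scheme on the pixel offset might look like this (the gains are placeholder values):

```python
def tracking_result_to_control(bbox_center, image_size, k_yaw=0.003, k_pitch=0.003):
    """Convert the pixel offset of the tracked target from the image centre into
    yaw / pitch rate commands (simple proportional control; the gains are
    placeholder values, not taken from the patent)."""
    cx, cy = bbox_center
    w, h = image_size
    dx = cx - w / 2.0              # horizontal offset of the target in pixels
    dy = cy - h / 2.0              # vertical offset of the target in pixels
    yaw_rate = k_yaw * dx          # steer towards the target horizontally
    pitch_rate = -k_pitch * dy     # tilt towards the target vertically
    return yaw_rate, pitch_rate

# Example: target centred at pixel (400, 200) in a 640x480 image.
print(tracking_result_to_control((400, 200), (640, 480)))
```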
In order to ensure stable tracking of the dynamic target object by the unmanned aerial vehicle, every acquired frame of the video should contain an image region corresponding to the target object; otherwise the target object is lost, i.e. tracking has failed. When the target object is lost, target search and recognition may be restarted, and tracking continues after the target object is recognized again.
Based on this, in one embodiment, after said step S210, and before said step S220, the method further comprises: identifying whether the image has the image area; in the case where the image has the image area, performing the step S220; in the case where the image does not have the image area, the step S210 is performed.
As can be seen from the foregoing, the target tracking method provided in this embodiment recognizes the target object in the video images collected by the unmanned aerial vehicle, determines the spatial position of the recognized target object from the recognition result, and predicts the next spatial position of the target object from that spatial position, so that the target is tracked based on the prediction result. The target is no longer tracked by relying solely on feature-point identification, so the target tracking effect can be improved.
In one embodiment, the unmanned aerial vehicle target tracking method is applied to the unmanned aerial vehicle shown in fig. 1, and the speed, continuous tracking time and miss probability of the unmanned aerial vehicle system tracking vehicle are verified through a large number of repeated tests by combining simulation and experiments. The test environment, test content and test results are as follows:
test environment: underground three-layer and below garage environments without satellite signals.
The test contents are as follows: the unmanned aerial vehicle locks the target vehicle; the target vehicle then moves around the parking garage, its driving speed gradually reaching 20 km/h, while turning and driving up or down to other floors; the unmanned aerial vehicle keeps the target vehicle locked and tracks it continuously.
Test results: the unmanned aerial vehicle can achieve target tracking, with a long continuous tracking time and a low probability of losing the target.
In summary, aiming at the problems of interference by similar targets, occlusion by scene obstacles and cluttered backgrounds in an underground environment, the embodiment of the disclosure proposes using the fDSST tracking algorithm to achieve stable tracking of a dynamic vehicle. The target is tracked by a position filter and a scale filter based on HOG features; fast FFT computation makes the algorithm fast and efficient, and the PCA dimension-compression method reduces the amount of computation many times over and enlarges the target detection range, so the tracking precision and effect of the algorithm are better and good real-time performance is obtained while accuracy is ensured.
Example 2
Based on the disclosure of embodiment 1 above, the unmanned aerial vehicle can also position itself autonomously while tracking the target. In one embodiment, referring to fig. 4, the method for tracking a target by the unmanned aerial vehicle may further include the following steps S410 to S440.
In one embodiment, taking an underground garage environment as an example, the unmanned aerial vehicle can use the inertial sensor 201, the two-dimensional laser radar 301, the visual odometer 302, the depth camera 303 and the fixed-height radar to perform multi-sensor fusion positioning in the underground garage environment. A schematic diagram of the autonomous positioning software interaction interface of the unmanned aerial vehicle may be as shown in fig. 5.
As shown in fig. 5, the onboard computer may run ROS, in which a fixed-height radar driver package, a laser radar driver package, a camera driver package and a MAVROS package may be deployed, and it may communicate with the flight control system of the unmanned aerial vehicle (i.e., the flight control subsystem 20) through the MAVROS package in ROS.
Specifically, inertial navigation data collected by the inertial sensor 201 may be output to a MAVROS Package for information fusion, and multi-sensor fusion positioning information obtained by multi-sensor information fusion may be output to a flight control system, so as to control the unmanned aerial vehicle to fly accordingly.
Step S410, obtaining laser positioning information according to the data collected by the two-dimensional laser radar of the unmanned aerial vehicle.
In detail, laser radar typically has millimetre-level resolution, so the drone can maintain high accuracy even after long periods of operation. In this embodiment, laser SLAM positioning is performed with the laser radar: the scene depth can be acquired directly, and the scans are matched and stitched into a map from which the drone obtains its own pose.
In this embodiment, the laser positioning information obtained by the laser radar is further combined with information from other sensors for multi-sensor fusion positioning. A two-dimensional laser radar can therefore be selected to obtain the laser positioning information instead of a three-dimensional laser radar, which avoids the problems that a three-dimensional laser radar consumes considerable processor performance when outputting point cloud information and that it is bulky and expensive.
In one embodiment, the step S410 includes: and carrying out two-dimensional positioning and environment mapping according to the data acquired by the two-dimensional laser radar of the unmanned aerial vehicle by using a laser SLAM algorithm based on the Cartographer to obtain pose information of the unmanned aerial vehicle in three plane directions and yaw directions, wherein the pose information is used as the laser positioning information.
In detail, Cartographer is a graph-optimization-based SLAM algorithm proposed by Google.
In this embodiment, the laser SLAM scheme provides an optimization requirement for all observables at the back end, and can constrain drift generated by sensor data at any time, so as to ensure positioning accuracy.
As shown in fig. 5, according to the data collected by the two-dimensional laser radar 301, high-precision two-dimensional positioning and environment mapping can be performed with the Cartographer-based laser SLAM algorithm, and pose information of the unmanned aerial vehicle in the three planar directions and the yaw direction can be output, giving the laser positioning topic (i.e., the laser positioning information). The obtained laser positioning information can further be used for multi-sensor information fusion positioning, so as to achieve autonomous and accurate positioning of the unmanned aerial vehicle.
In this embodiment, the laser SLAM is only a part of information used for autonomous positioning of the unmanned aerial vehicle, and the two-dimensional laser SLAM can ensure high-precision calculation of the unmanned aerial vehicle in the important degree-of-freedom direction, namely, the three plane directions and the yaw direction, under the long-time running condition.
And step S420, obtaining visual positioning information according to data respectively collected by the inertial sensor, the visual odometer and the depth camera of the unmanned aerial vehicle.
In this embodiment, the image frames are used as references to correct the accumulated error of the inertial sensor, so as to obtain the attitude of the unmanned aerial vehicle in the roll and pitch directions; this is more accurate than filtering and solving the attitude with the inertial sensor alone.
Specifically, the visual data obtained by the visual odometer, the inertial navigation data obtained by the inertial sensor and the depth data obtained by the depth camera can be fused in a preliminary multi-sensor information fusion to obtain visual positioning information, which is then combined with other sensor information for further multi-sensor fusion positioning so as to achieve autonomous positioning of the unmanned aerial vehicle. On this basis, the adverse effects caused by the scale problem of pure-camera visual SLAM, such as large drift after long operation, weak light and growing errors in open environments, can be alleviated.
In one embodiment, the step S420 includes: and obtaining pose information of the unmanned aerial vehicle in rolling and pitching directions according to inertial navigation data acquired by an inertial sensor of the unmanned aerial vehicle, monocular scene images acquired by a visual odometer of the unmanned aerial vehicle and depth information acquired by a depth camera of the unmanned aerial vehicle by using a VIO-SLAM fusion technology based on VINS, and taking the pose information as the visual positioning information.
In detail, VINS is a monocular visual-inertial SLAM scheme from the Hong Kong University of Science and Technology. It is a VIO (Visual-Inertial Odometry) system based on optimization over a sliding window, builds a tightly coupled framework using IMU pre-integration, and provides automatic initialization, online extrinsic calibration, relocalization, loop-closure detection and global pose-graph optimization.
In this embodiment, the visual SLAM scheme puts forward an optimization requirement on all observables at the back end, and can restrict drift generated by sensor data at any time, so as to ensure positioning accuracy.
In this embodiment, as shown in fig. 5, the inertial navigation data collected by the inertial sensor 201, the monocular scene images collected by the visual odometer 302 and the depth information collected by the depth camera 303 are fed to the visual-inertial SLAM algorithm, which outputs pose information of the unmanned aerial vehicle in the roll and pitch directions, giving the visual positioning topic (i.e., the visual positioning information). The obtained visual positioning information can further be used for multi-sensor information fusion positioning, so as to achieve autonomous and accurate positioning of the unmanned aerial vehicle.
From the above, the visual SLAM in this embodiment is only a part of information used for autonomous positioning of the unmanned aerial vehicle, and can provide a reference for the pose of the unmanned aerial vehicle in the directions of degrees of freedom other than the three planes and the yaw direction.
Step S430, obtaining height information according to the data collected by the fixed-height radar of the unmanned aerial vehicle.
In this embodiment, as shown in fig. 5, the fixed-height radar 304 mentioned above may publish a height topic giving the height between the unmanned aerial vehicle and the ground (i.e., the height information). The obtained height information can further be used for multi-sensor information fusion positioning, so as to achieve autonomous and accurate positioning of the unmanned aerial vehicle. This embodiment uses the fixed-height radar instead of a barometer, which helps achieve higher positioning accuracy.
Step S440, according to the laser positioning information, the visual positioning information and the altitude information, the positioning information of the spatial position of the unmanned aerial vehicle is obtained.
As such, the step S240 includes: controlling the unmanned aerial vehicle to fly towards the target object according to the next spatial position information and the positioning information of the spatial position where the unmanned aerial vehicle is located.
In this embodiment, the laser SLAM, the visual SLAM and the height information each provide only part of the information used for autonomous positioning of the unmanned aerial vehicle; the three are combined by pose fusion filtering to output the autonomous positioning information of the unmanned aerial vehicle.
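The exact pose fusion filter is not detailed here. The sketch below only illustrates, under that caveat, which degrees of freedom each source contributes before fusion: planar position and yaw from the laser SLAM, roll and pitch from the visual-inertial odometry, and height from the fixed-height radar.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def fuse_pose(laser_xy_yaw, visual_roll_pitch, height_z):
    """Combine the three partial estimates into one pose: planar position and yaw
    from the laser SLAM, roll and pitch from the visual-inertial odometry, and the
    height above ground from the fixed-height radar. A real system would run these
    through a pose fusion filter rather than simply copying each component."""
    x, y, yaw = laser_xy_yaw
    roll, pitch = visual_roll_pitch
    return Pose(x=x, y=y, z=height_z, roll=roll, pitch=pitch, yaw=yaw)

# Example: laser gives (x, y, yaw), VIO gives (roll, pitch), radar gives the height.
print(fuse_pose((1.2, -0.4, 0.31), (0.02, -0.01), 1.5))
```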
In detail, the unmanned aerial vehicle autonomous positioning mode can be applied to an underground garage environment so as to realize autonomous positioning flight of the unmanned aerial vehicle in the underground garage. Thus, in one embodiment, the positioning information of the spatial position where the unmanned aerial vehicle is located is the positioning information of the unmanned aerial vehicle in the underground garage.
In this embodiment, preferably, high-precision pose information in the three planar directions and the yaw direction is obtained with the Cartographer-based laser SLAM scheme, and the roll and pitch pose information is supplemented with the VINS-based VIO-SLAM fusion technique. By applying these two SLAM schemes in the closed indoor scene of an underground garage, the sensors can still operate well without being limited by the environment, providing the system with accurate positioning information.
From the above, in the unmanned aerial vehicle autonomous positioning mode provided in this embodiment, the positioning information of the spatial position where the unmanned aerial vehicle is located can be obtained by fusing the sensor data acquired by the two-dimensional laser radar, the inertial sensor, the visual odometer, the depth camera and the fixed-height radar. This autonomous positioning mode does not depend on satellite signals such as GPS, so it can be applied in underground environments without satellite signals, such as underground garages, and can support stable operation of the unmanned aerial vehicle. In addition, because autonomous positioning does not rely solely on the inertial navigation data collected by the inertial sensor, the large pose-estimation drift caused by inertial navigation data after long operation of the unmanned aerial vehicle can be avoided, improving the positioning accuracy of the unmanned aerial vehicle.
In one embodiment, the unmanned aerial vehicle autonomous positioning mode is applied to the unmanned aerial vehicle shown in fig. 1, and the autonomous positioning precision of the unmanned aerial vehicle system in autonomous flight of an underground garage and independent of satellite signals is verified through a large number of repeated tests by combining simulation and experiments. The test environment, test content and test results are as follows:
test environment: underground three-layer and below garage environments without satellite signals.
The test contents are as follows: after the unmanned aerial vehicle takes off, it hovers at a suitable height and then flies a circle taking the point directly below it as the origin; the actual horizontal position of the unmanned aerial vehicle is captured in real time, and when the flight is completed the positioning error is calculated from the offset of the measured actual horizontal position relative to the initial point directly below.
Test results: the unmanned aerial vehicle can achieve autonomous positioning, and the autonomous positioning accuracy is high.
To sum up, when the unmanned aerial vehicle is in an underground garage the GPS module cannot operate normally, and relying only on the inertial sensor would introduce large drift into the pose estimate after long operation, degrading the positioning accuracy of the unmanned aerial vehicle. The embodiment of the disclosure therefore provides a positioning mode based on the fusion of Cartographer-based laser SLAM and VINS-based VIO-SLAM, using the laser SLAM to obtain high-precision pose information in the three planar directions and the yaw direction while the VIO-SLAM supplements the roll and pitch attitude information. Both SLAM schemes impose optimization constraints on all observations at the back end and can constrain the drift produced by the sensor data at any time; in the closed indoor scene of an underground garage this autonomous positioning mode can still run well without being limited by the environment, so the unmanned aerial vehicle can obtain accurate positioning information in real time.
Example 3
Based on the disclosure of embodiment 1 above, the unmanned aerial vehicle can also perform obstacle-avoidance flight while tracking the target. In one embodiment, referring to fig. 6, the step S240 may include the following steps S610 to S640. The unmanned aerial vehicle may carry out the obstacle-avoidance flight steps and the target tracking steps simultaneously.
Step S610, a first map of the space environment where the unmanned aerial vehicle is located is constructed.
The map of the space environment where the unmanned aerial vehicle is located can be constructed according to the starting position (such as the spatial position of the unmanned aerial vehicle) and the destination position of its obstacle-avoidance flight task, combined with the environmental data perceived by the sensors on the unmanned aerial vehicle (such as the two-dimensional laser radar 301 and the visual odometer 302). The spatial position of the unmanned aerial vehicle can be determined from its autonomous positioning information.
In detail, the unmanned aerial vehicle obstacle avoidance flight mode can be applied to an underground garage environment so as to realize the obstacle avoidance flight of the unmanned aerial vehicle in the underground garage. Thus, in one embodiment, the space environment in which the unmanned aerial vehicle is located is an underground garage in which the unmanned aerial vehicle is located.
For example, the above-mentioned sensors such as the visual odometer 302 and the two-dimensional laser radar 301 may be used to obtain the three-dimensional depth information of the underground garage environment and the static obstacle in the environment, and then construct the three-dimensional model of the environment according to the three-dimensional depth information, and build the global two-dimensional grid map and the three-dimensional octree map, and plan the global path, detect the dynamic obstacle, and so on based on the three-dimensional model.
From the above, from the perspective of the actual scene, the embodiment provides an environment sensing and obstacle detection mode based on multi-sensor information fusion such as a visual odometer and a laser radar, and constructs a grid map and an octree map of the underground garage, thereby improving the obstacle avoidance capability of the unmanned aerial vehicle on the obstacles such as vehicles.
In addition, the three-dimensional space map is constructed to provide a data basis for the obstacle avoidance flight of the unmanned aerial vehicle, and the problems that the efficiency of the obstacle avoidance flight of the unmanned aerial vehicle is low, local dilemma can be trapped and real-time accurate obstacle avoidance cannot be realized when the unmanned aerial vehicle only performs the obstacle avoidance flight on a two-dimensional plane can be avoided.
In one embodiment, the step S610 includes: acquiring the scene structure and local obstacle information of the space environment where the unmanned aerial vehicle is located according to the data acquired by the depth camera of the unmanned aerial vehicle; and establishing a depth information map of the local scene of the space environment where the unmanned aerial vehicle is located according to the scene structure and the local obstacle information, the map being represented in the form of an octree map.
Thus, planning a global path according to the next spatial location information and the first map means that a global path can be planned according to the next spatial location information and the octree map. For example, a global search may be performed in the established octree map using the A* algorithm (A-star) to plan a global path.
Step S620, planning a global path according to the next spatial location information and the first map.
In one embodiment, the planning a global path from the octree map includes: obtaining target area information of the destination area according to RGB image data acquired by the depth camera of the unmanned aerial vehicle; performing three-dimensional waypoint determination according to the target area information and a preset SLAM (Simultaneous Localization And Mapping) local map, so as to obtain positioning information of the spatial position of the destination area; and performing global planning in the octree map with the A* algorithm according to the positioning information of the spatial position of the destination area, the local depth information acquired by the depth camera, the positioning information of the spatial position of the unmanned aerial vehicle and the SLAM local map, so as to obtain a global path.
In detail, the RGB color mode is an industry color standard that obtains a wide range of colors by varying the three color channels red (R), green (G) and blue (B) and superimposing them on one another.
In this embodiment, the visual tracking result may be converted into target-point tracking coordinates, and the global path is then planned with the A* algorithm according to the depth information, the position of the unmanned aerial vehicle, the map established by the laser SLAM and the position of the target point.
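For reference, a minimal A* search is sketched below. The actual system searches the three-dimensional octree map; this sketch reduces it to a 2-D occupancy grid (0 = free, 1 = obstacle) purely to show the planning logic.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 2-D occupancy grid (0 = free, 1 = obstacle).
    Returns the list of cells from start to goal, or None if the goal is unreachable."""
    def h(a, b):                                   # admissible Manhattan heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), start)]
    parent = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                           # reconstruct the global path
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    parent[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt, goal), nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```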
In one implementation, a schematic diagram of the unmanned aerial vehicle navigation obstacle avoidance software interaction interface may be shown in fig. 7. As shown in fig. 7, the on-board computer may be installed with an ROS (Robot Operating System ), where the ROS may be provided with a MAVROS Package, and may communicate with a flight control system of the unmanned aerial vehicle (i.e., the flight control subsystem 20) through MAVROS Package via MAVLink (micro air vehicle link) communication. For example, after the airborne computer obtains the obstacle avoidance flight result, the airborne computer can output a corresponding instruction of the obstacle avoidance flight result to the flight control system through the MAVROS Package, so as to control the unmanned aerial vehicle to fly accordingly.
In fig. 7, the topic of RGB images (i.e., the RGB image data) collected by the depth camera of the unmanned aerial vehicle may be processed by a visual tracking process (for example, visual tracking may combine the YOLOv3-tiny network structure with the fDSST algorithm) to obtain the target area topic. Three-dimensional waypoint determination is then performed according to the target area topic and the SLAM local map (for example, through conversion with the camera model), so as to obtain the three-dimensional waypoint topic (i.e., the positioning information of the spatial position of the destination area). Global planning is performed with the A* algorithm according to the positioning information of the spatial position where the unmanned aerial vehicle is located (for example, obtained by the autonomous positioning module of the unmanned aerial vehicle), the local depth topic acquired by the depth camera (i.e., the local depth information), the SLAM local map and the three-dimensional waypoint topic, so as to obtain a global path.
In detail, YOLO (You Only Look Once) is an object recognition and localization algorithm based on deep neural networks; it has evolved to version v3 and can find specific targets in input data (pictures or video). Compressing and optimizing the YOLOv3 detection network yields the YOLOv3-tiny detection network.
In detail, the DSST (Discriminative Scale Space Tracking) algorithm is a correlation filter-based target tracking algorithm, and the fDSST algorithm is an accelerated modified version of the DSST algorithm.
In this embodiment, global planning uses the A* path-planning algorithm, which offers high real-time performance and fast response, so that the unmanned aerial vehicle flies along the planned path while avoiding obstacles, improving the robustness of obstacle avoidance of the unmanned aerial vehicle.
Step S630, determining a next node of the nodes of the global path where the unmanned plane is located.
In this step, the next node in the global path is determined according to the position of the unmanned aerial vehicle and serves as the current target node, so that the unmanned aerial vehicle can be controlled to fly from its current position to the target node. Once the unmanned aerial vehicle reaches the target node, that node becomes its current node, and the next node is determined again. This repeats until the unmanned aerial vehicle reaches the last node in the global path, completing the obstacle-avoidance flight task.
Based on this, in one embodiment, the method further comprises: judging whether the next node is the last node in the global path or not under the condition that the unmanned aerial vehicle is controlled to fly to the next node; in case the next node is not the last node, the step S630 is performed.
When the next node is judged to be the last node in the global path, the unmanned aerial vehicle is confirmed to finish the obstacle avoidance flight task, so that the unmanned aerial vehicle can enter an autonomous return stage. In the autonomous return phase, the unmanned aerial vehicle can determine a return path according to the map established in the previous phase, and the autonomous recovery function is realized. And determining a new next node when the next node is not the last node in the global path.
Step S640, controlling the unmanned aerial vehicle to fly to the next node, and determining whether an obstacle exists in the process of controlling the unmanned aerial vehicle to fly to the next node; in case of an obstacle, the global path is updated, and the step S630 is performed.
In this step, local planning is performed based on the paths between nodes obtained by the global planning.
A path planning algorithm can obtain an optimal global path on the constructed map, but obstacles that were not originally on the map, such as pedestrians, cannot be avoided if the unmanned aerial vehicle simply moves along that global path. Therefore, the sensors carried on the unmanned aerial vehicle are used to acquire the surrounding environment information in real time during flight, and the local path planning is refined accordingly so that obstacles on the moving path are avoided.
In the embodiment, in the flight process of the unmanned aerial vehicle in the space environment, by combining global path planning and local obstacle avoidance execution, static obstacles and dynamic obstacles in the environment can be avoided in real time on a two-dimensional plane and a three-dimensional space, and the obstacle avoidance flight of the unmanned aerial vehicle is realized.
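The control flow of steps S630 and S640 can be summarised by the following sketch; the `drone`, `plan_path` and `detect_obstacle` interfaces are hypothetical placeholders used only to show the loop, not part of the patent.

```python
def follow_path(drone, global_path, plan_path, detect_obstacle):
    """Fly the global path node by node (steps S630/S640): whenever an obstacle is
    detected on the way to the next node, re-plan the global path from the current
    position and continue until the last node is reached."""
    i = 0
    while i < len(global_path) - 1:
        next_node = global_path[i + 1]        # step S630: determine the next node
        drone.fly_towards(next_node)          # step S640: fly towards it
        if detect_obstacle():                 # an obstacle appears on the way
            global_path = plan_path(drone.position(), global_path[-1])
            i = 0                             # continue along the updated path
        elif drone.reached(next_node):
            i += 1                            # the next node becomes the current node
    # Last node reached: the obstacle-avoidance flight task is complete and the
    # autonomous return phase may start.
```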
In one embodiment, the global planning and the local obstacle avoidance can be realized based on the octree map with adjustable resolution, namely, the global planning and the local obstacle avoidance are respectively based on the maps with different resolutions, so that the navigation efficiency can be improved while the path calculation amount is reduced, and the requirements of detecting and avoiding complex scene obstacles such as underground garages and the like can be met.
In one embodiment, in the step S640, the updating the global path includes: determining first positioning information of a spatial position where the obstacle is located and second positioning information of a spatial position where the unmanned aerial vehicle is located; setting a repulsive force field of the obstacle to the unmanned aerial vehicle according to the distance between the first positioning information and the second positioning information; setting a gravitational field of the next node to the unmanned aerial vehicle according to the distance between the positioning information of the next node and the second positioning information; superposing the repulsive force field and the gravitational field to obtain a superposed gravitational field; and updating the global path according to the superposition gravitational field.
In this embodiment, a repulsive force field is set for the obstacle and an attractive force field is set for the target node (such as the next node described above): the farther away the obstacle, the smaller its repulsive force, and the closer the target node, the smaller its attractive force. Superposing the two yields an abstract potential field that depends on the relative distance between the unmanned aerial vehicle and the obstacle and between the unmanned aerial vehicle and the target node. The motion trajectory can then be adjusted according to the force the drone experiences in this potential field. Under the combined action of the attraction generated by the target node and the repulsion generated by the obstacle, the unmanned aerial vehicle always moves from points of high potential energy towards points of low potential energy, and thus eventually reaches the lowest potential energy point in the map.
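A minimal artificial-potential-field step is sketched below; the gains, influence radius and step size are assumed values, and a real system would sum the repulsion over all detected obstacles rather than just one.

```python
import numpy as np

def apf_step(drone_pos, goal_pos, obstacle_pos,
             k_att=1.0, k_rep=1.0, d0=2.0, step=0.1):
    """One artificial-potential-field step: attraction towards the target node plus
    repulsion from the obstacle, superposed into a single force along which the
    drone is nudged (moving from high towards low potential energy)."""
    p = np.asarray(drone_pos, dtype=float)
    g = np.asarray(goal_pos, dtype=float)
    o = np.asarray(obstacle_pos, dtype=float)

    # Attractive force of the target node: proportional to the distance to it.
    f_att = k_att * (g - p)

    # Repulsive force of the obstacle: only active within the influence radius d0
    # and growing quickly as the obstacle gets closer.
    d = np.linalg.norm(p - o)
    if 1e-6 < d < d0:
        f_rep = k_rep * (1.0 / d - 1.0 / d0) / d ** 2 * (p - o) / d
    else:
        f_rep = np.zeros_like(p)

    # Superposed field: take a small step along the resulting force direction.
    f = f_att + f_rep
    return p + step * f / (np.linalg.norm(f) + 1e-9)

# Example: goal ahead of the drone, obstacle slightly off to the side.
print(apf_step((0.0, 0.0), (5.0, 0.0), (1.0, 0.5)))
```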
In this embodiment, the artificial potential field is used for dynamic local obstacle avoidance, so that the global path is updated in real time as required. The unmanned aerial vehicle can therefore effectively avoid not only static obstacles in the space environment but also dynamic obstacles (for the space environment of an underground garage, for example pedestrians and faster-moving vehicles) during flight, which helps shorten the obstacle-avoidance response time of the unmanned aerial vehicle and improve the robustness of its obstacle avoidance.
As shown in fig. 7, based on the planned global path, an APF (Artificial Potential Field, artificial potential field method) may be used to implement local obstacle avoidance, and update the global path according to the local obstacle avoidance result, so as to obtain a new global path, and control the unmanned aerial vehicle to avoid obstacle flight via a flight control system based on the new global path.
Aiming at the problem of obstacle avoidance during the flight of the unmanned aerial vehicle, this embodiment provides an unmanned aerial vehicle path planning and obstacle avoidance algorithm based on the A* algorithm and the APF algorithm, which offers high real-time performance and fast response, so that the unmanned aerial vehicle has path planning capability and can avoid obstacles on the planned path, improving the robustness of the unmanned aerial vehicle.
In one embodiment, after said step S610 and before said step S620, the method further comprises: eliminating redundant search nodes in the first map by using the Tie_Breaker technique. For example, after the first map of the space environment where the unmanned aerial vehicle is located has been constructed, the redundant search nodes in the first map are eliminated, and the global search with the A* algorithm is then performed on the first map with the redundant search nodes removed. By eliminating unnecessary redundant search nodes, the search efficiency can be improved, so that the global path can be planned more quickly and the planned global path is more accurate.
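The patent does not spell out which Tie_Breaker variant is used; one common form simply scales the heuristic upwards by a tiny factor so that, among the many expansions sharing the same f-cost, nodes closer to the goal are preferred and redundant nodes are pruned:

```python
def tie_breaker_heuristic(node, goal, eps=1e-3):
    """Manhattan heuristic scaled up by a tiny factor: among expansions with equal
    f-cost, A* then prefers nodes closer to the goal, so far fewer redundant nodes
    are expanded (eps is an assumed small constant)."""
    h = abs(node[0] - goal[0]) + abs(node[1] - goal[1])
    return h * (1.0 + eps)
```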
As can be seen from the above, in the unmanned aerial vehicle obstacle avoidance flight mode provided in this embodiment, a space environment map is constructed to plan a global path, and the existence of an obstacle is monitored in real time in the navigation flight process according to the global path, so that the local obstacle avoidance is executed in real time as required to update the global path until the unmanned aerial vehicle completes the obstacle avoidance flight task. The obstacle avoidance flight mode enables the unmanned aerial vehicle to have accurate environment sensing capability and rapid target detection capability, so that the unmanned aerial vehicle can be suitable for application scenes of obstacle avoidance flight of the unmanned aerial vehicle in an underground three-dimensional space. Of course, based on the same implementation principle, the obstacle avoidance flying mode can be also applied to non-underground three-dimensional space.
In one embodiment, the unmanned aerial vehicle obstacle avoidance flying mode is applied to the unmanned aerial vehicle shown in fig. 1, and the obstacle avoidance speed and the recognition distance of the unmanned aerial vehicle system are verified through a large number of repeated tests by combining simulation and experiments. The test environment, test content and test results are as follows:
test environment: underground three-layer and below garage environments without satellite signals.
The test contents are as follows: in an underground garage environment with obstacles such as pillars and vehicles, an unmanned aerial vehicle flight path containing obstacles is arranged; during flight, the unmanned aerial vehicle modifies its flight path in time to avoid the obstacles, then returns to the planned path to continue flying, and finally lands at a designated position.
Test results: the unmanned aerial vehicle can realize obstacle avoidance flight, and the obstacle avoidance speed and the recognition distance are good.
To sum up, aiming at the problems that an unmanned aerial vehicle in the complex, narrow, unstructured environment of an underground garage needs extremely high real-time path planning performance and fast obstacle avoidance flight, the embodiment of the disclosure provides an environment sensing and obstacle detection mode based on multi-sensor information fusion of a visual sensor, a laser radar and the like, and an unmanned aerial vehicle path planning and obstacle avoidance algorithm based on the APF (Artificial Potential Field) method: a grid map and an octree map of the underground garage are constructed by multi-sensor fusion, path planning and local obstacle avoidance are performed by the A* and APF algorithms respectively, and the global and local algorithms are based on maps of different resolutions, which improves the robustness of the unmanned aerial vehicle, reduces the path calculation amount and improves the navigation efficiency.
Example 4
Based on the disclosure of embodiment 1 above, the drone may also identify the target before achieving the drone tracking target. In one embodiment, referring to fig. 8, the method for tracking a target by the unmanned aerial vehicle may further include the following steps S810 to S830.
Step S810, performing image recognition on the image acquired by the image-acquiring sensor of the unmanned aerial vehicle until a first image conforming to preset feature information is recognized.
For example, the image may be a frame of image in the video captured by the optoelectronic pod 401.
In one implementation, there may be more than one piece of preset feature information. For example, when the plurality of preset feature information is vehicle feature information of a plurality of vehicles, the corresponding plurality of vehicles can be identified.
In detail, the acquired image can be input into a trained model, and image recognition can then be performed by matching features in the image. After a first image is identified according to the preset feature information, the object corresponding to the first image is considered a suspicious object because it matches the preset feature information. On this basis, in order to improve the accuracy of target recognition, the unmanned aerial vehicle can be controlled to fly to the identification area of the suspicious object to recognize its identification information, and the target object can then be recognized according to the identification information.
In detail, the method for identifying the target by the unmanned aerial vehicle can be applied to an underground garage environment so as to realize target identification of the unmanned aerial vehicle in the underground garage. As such, in one embodiment, the preset feature information is vehicle feature information of the target vehicle; the preset identification information is the license plate number of the target vehicle; the identification area is a license plate area. For example, the vehicle characteristic information may include characteristics of a color, an appearance, and the like of the vehicle.
Taking vehicle identification as an example, in order to identify a target vehicle, an image is first acquired by a sensor on the unmanned aerial vehicle. For each acquired frame, the preset vehicle feature information is taken as input, and suspicious target vehicles in the randomly searched images are matched by using an image feature matching recognition network and a license plate recognition technology in deep learning. If no match is found, the next frame is searched, until a suspicious target vehicle meeting the preset feature information is identified in some frame. After the suspicious target vehicle is found, the unmanned aerial vehicle can fly toward it so as to locate its license plate at close range.
In one implementation, the video captured by the optoelectronic pod may provide high-definition (1280×720) images at more than 60 fps. Key frames are extracted from the video and subjected to preprocessing operations such as image cropping, noise filtering, automatic white balance, automatic exposure, gamma correction, edge enhancement and contrast adjustment, and the preprocessed images are then fed into the vehicle feature recognition network.
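A minimal sketch of such a keyframe preprocessing chain is given below using OpenCV in Python; the crop window, denoising kernel, gray-world white balance, gamma value and unsharp-mask weights are illustrative assumptions rather than parameters specified in this disclosure (automatic exposure is normally handled by the camera and is omitted here).

```python
import cv2
import numpy as np

def preprocess(frame, gamma=1.2):
    # Crop a region of interest from the 1280x720 frame (assumed window).
    roi = frame[60:660, 160:1120]
    # Noise filtering.
    roi = cv2.GaussianBlur(roi, (3, 3), 0)
    # Simple gray-world white balance.
    b, g, r = cv2.split(roi.astype(np.float32))
    mean = (b.mean() + g.mean() + r.mean()) / 3.0
    roi = cv2.merge([b * mean / (b.mean() + 1e-6),
                     g * mean / (g.mean() + 1e-6),
                     r * mean / (r.mean() + 1e-6)])
    roi = np.clip(roi, 0, 255).astype(np.uint8)
    # Gamma correction via a lookup table.
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    roi = cv2.LUT(roi, table)
    # Edge enhancement / contrast adjustment with a light unsharp mask.
    blur = cv2.GaussianBlur(roi, (0, 0), 3)
    return cv2.addWeighted(roi, 1.5, blur, -0.5, 0)

# Example usage with a placeholder 720p frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
out = preprocess(frame)
```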
In one implementation, before the video image is collected by using the sensor arranged on the unmanned aerial vehicle, adaptive light compensation processing can be performed according to environmental factors of the space environment where the unmanned aerial vehicle is located, so that the unmanned aerial vehicle can recognize the target object more quickly and accurately.
Step S820, controlling the unmanned aerial vehicle to fly to the identification area of the first object corresponding to the first image, and performing image recognition on the image acquired by the sensor under the condition that the unmanned aerial vehicle is controlled to fly to the identification area, so as to obtain the first identification information carried by the identification area.
In this step, when the unmanned aerial vehicle flies to the identification area of the suspicious object, the image collected by the sensor on the unmanned aerial vehicle, such as the optoelectronic pod, usually contains the image corresponding to the identification area, so that the identification information of the suspicious object can be recognized. Of course, if the image corresponding to the identification area is not recognized in the current frame, the next frame may be recognized, and the above steps are repeated until the image corresponding to the identification area is recognized in some frame.
Taking vehicle identification as an example, the first object may be a suspicious target vehicle, the identification area may be an area where a license plate of the vehicle is located, and the first identification information may be a license plate number of the vehicle.
Based on this, in one embodiment, in step S820, the performing image recognition on the image acquired by the sensor to obtain the first identification information carried by the identification area includes: performing image recognition on the image acquired by the sensor until a second image corresponding to the license plate area is recognized; correcting the second image into a rectangular front-view image; sending the second image and the rectangular image into an end-to-end OCR (Optical Character Recognition) network for character matching recognition, to obtain a first license plate number corresponding to the second image and a second license plate number corresponding to the rectangular image; judging whether the first license plate number is the same as the second license plate number; and, when the first license plate number is the same as the second license plate number, determining that the first identification information is the first license plate number or the second license plate number.
In one implementation, the network input shape may be 3×160×40 and the output shape may be 1×84×20, so that license plate characters of variable length can be identified; the end-to-end character recognition may employ CTC-Loss as the loss function.
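As an illustration of training with CTC-Loss for variable-length plate strings, the following PyTorch sketch uses the shapes quoted above (20 time steps over an 84-symbol alphabet, with symbol 0 assumed to be the CTC blank); the toy batch and the 7-character plate length are illustrative assumptions, not values fixed by this disclosure.

```python
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

batch, time_steps, num_symbols = 4, 20, 84
# Stand-in for the recognition head output: (T, N, C) log-probabilities.
logits = torch.randn(time_steps, batch, num_symbols, requires_grad=True).log_softmax(2)
targets = torch.randint(1, num_symbols, (batch, 7))           # 7-character plates (assumed)
input_lengths = torch.full((batch,), time_steps, dtype=torch.long)
target_lengths = torch.full((batch,), 7, dtype=torch.long)

loss = ctc(logits, targets, input_lengths, target_lengths)
loss.backward()   # gradients for training the recognition network
```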
In this embodiment, an end-to-end method is adopted for target vehicle recognition, and a neural network is used to perform detection, positioning and character recognition of the vehicle license plate, so as to complete rapid detection and recognition of the target in real time. This improvement in target recognition efficiency buys time for stable tracking of the target object.
In this embodiment, after the license plate area is located, on one hand the text information is directly recognized to obtain the license plate number, and on the other hand the image of the license plate area is first rectified (corrected into a front-view rectangle) and the license plate number is then recognized from the rectified image. If the two recognized license plate numbers are consistent, the recognition can be considered to have high accuracy.
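A minimal sketch of this dual-path check is given below; recognize_plate() stands in for the end-to-end OCR network and is a hypothetical helper, and the 160×40 rectified size follows the input shape mentioned above.

```python
import cv2
import numpy as np

def rectify_plate(image, corners, out_w=160, out_h=40):
    # corners: four plate corners (TL, TR, BR, BL) found by the locator network.
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))

def confirm_plate(plate_img, corners, recognize_plate):
    # Recognize the raw crop and the rectified front view, accept only if they agree.
    first = recognize_plate(plate_img)
    second = recognize_plate(rectify_plate(plate_img, corners))
    return first if first == second else None   # None: re-check the next frame
```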
Step S830 compares the first identification information with the preset identification information, and determines that the first object is a target object if the first identification information is the same as the preset identification information.
In this embodiment, step S830 may be followed by step S220.
In the step, if the identified first identification information is consistent with the preset identification information, the suspicious object is considered to be the target object, and the target object is locked. By locking the target object, the target tracking process for the target object can be further performed.
Based on the foregoing, in one embodiment, the method further comprises: inputting the CCPD data set into a YOLO V3-tiny detection network for model training to obtain a first detection model for identifying the first image;
in step S810, the performing image recognition on the image acquired by the sensor until a first image conforming to the preset feature information is recognized includes: and inputting the image acquired by the sensor into the first detection model for image recognition until a first image conforming to preset characteristic information is recognized.
Based on the foregoing, in one embodiment, the method further comprises: inputting the CCPD data set into a YOLO V3-tiny detection network for model training to obtain a second detection model for identifying the first identification information;
in step S820, the performing image recognition on the image acquired by the sensor to obtain first identification information carried by the identification area includes: and inputting the image acquired by the sensor into the second detection model for image recognition to obtain the first identification information carried by the identification area.
In detail, the CCPD data set is a large domestic data set for license plate recognition.
In detail, the one-stage target detection algorithm YOLO V3 has characteristics such as strong practicability and sensitivity to small-target detection. Considering that the real-time performance of the YOLO V3 detection network is not high enough when applied to an onboard embedded system, this embodiment uses the compressed and pruned YOLO V3-tiny network on the basis of YOLO V3, improving the prediction frame rate at the cost of a small amount of precision.
For example, all residual (ResNet-style) structures are removed on the basis of YOLO V3 and the output branches are reduced to 2. The two branch feature maps are 13×13 and 26×26, respectively, each predicted using 3 anchors. Because the two branches fuse features from layers of different depths, detection of small targets remains effective. Tested with images captured in real time by the camera, the speed can reach more than 25 frames per second.
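For illustration, the following sketch computes the two-branch output geometry described above for an assumed 416×416 input; the input size and the single-class count are illustrative assumptions.

```python
def yolov3_tiny_output_shapes(num_classes, num_anchors=3, input_size=416):
    shapes = []
    for stride in (32, 16):   # two branches: 13x13 and 26x26 at a 416x416 input
        cells = input_size // stride
        channels = num_anchors * (5 + num_classes)   # (x, y, w, h, objectness) + classes
        shapes.append((channels, cells, cells))
    return shapes

print(yolov3_tiny_output_shapes(num_classes=1))   # [(18, 13, 13), (18, 26, 26)]
```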
For another example, the license plate detection and positioning network is trained to obtain a YOLO Model, and a Model Optimizer is then used to process the model, reducing the weight precision of the trained model from 32-bit floating point to 8-bit integer and generating a model-optimized Intermediate Representation (IR). In this way, the performance advantage of inference on the onboard computer can be better exploited at the cost of a modest loss of accuracy. The model can then be deployed in the target environment for the inference engine to call in the application.
Preferably, model optimization can be performed by using the OpenVINO toolkit published by Intel to scale computer vision workloads, maximize performance and improve the running speed of the inference application.
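A minimal sketch of loading an optimized IR model with the OpenVINO inference engine Python API (as it existed around the 2020 releases) is shown below; the file names, device, input layer handling and frame placeholder are illustrative assumptions, and the exact API differs between OpenVINO versions.

```python
import numpy as np
from openvino.inference_engine import IECore

# Load the Intermediate Representation produced by the Model Optimizer
# (file names are assumptions) and run it on the onboard CPU.
ie = IECore()
net = ie.read_network(model="plate_detector.xml", weights="plate_detector.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))               # single input layer assumed
frame = np.zeros((1, 3, 416, 416), dtype=np.float32)  # NCHW frame placeholder
result = exec_net.infer(inputs={input_name: frame})   # dict of output blobs
```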
In this embodiment, by selecting the one-stage, further lightweighted model YOLO V3-Tiny, using network compression and pruning techniques, and combining them with CPU hardware optimization and acceleration on the onboard computer, real-time high-precision target detection and recognition can be realized under the limited computing resources of the unmanned aerial vehicle's onboard equipment. This alleviates or even overcomes difficulties in the underground garage environment such as image jitter and blur, poor illumination and contrast in the images acquired during random search, license plate occlusion and stains, and the small image proportion of the license plate area to be located and recognized. Multiple target vehicles can be identified simultaneously, and the method is suitable for embedded low-cost CPUs, achieving a balance between precision and real-time requirements.
From the above, the method for identifying a target by the unmanned aerial vehicle provided in this embodiment performs preliminary identification of a detected object according to the preset feature information, further collects its identification information when the preliminary identification passes, re-identifies it to ensure the identification information is obtained without error, and locks the target object when the re-identification passes. This implementation can solve the problem of low target identification accuracy caused by data dependence, poor expansibility and poor universality of feature information when the target is identified only according to feature information, and is better suited to target identification and detection in complex environments.
In one embodiment, the above unmanned aerial vehicle target identification method is applied to the unmanned aerial vehicle shown in fig. 1, and the searching capability of the unmanned aerial vehicle system and the target vehicle identification accuracy are verified through a large number of repeated tests combining simulation and experiments. The test environment, test contents and test results are as follows:
test environment: underground three-layer and below garage environments without satellite signals.
The test contents are as follows: in an underground garage environment, quickly search for and identify a target vehicle with given characteristics, judge the position of the target vehicle, and lock the vehicle.
Test results: the unmanned aerial vehicle can realize target identification, and both the real-time performance and the accuracy of identification are good.
In summary, aiming at problems in the unmanned aerial vehicle target recognition process such as complex environment, varying light, image jitter and blur, complex background, small image proportion of the license plate region, and the large calculation amount of traditional calculation models, the embodiment of the disclosure constructs the one-stage, further lightweighted model YOLO V3-Tiny, uses network compression and pruning techniques, and performs rapid target detection and recognition in combination with CPU hardware optimization and acceleration on the onboard computer. By optimizing the image feature matching recognition and license plate recognition networks and adopting an end-to-end method to rapidly recognize the target vehicle, real-time high-precision target detection and recognition are realized under limited computing resources, achieving a balance between precision and real-time requirements.
Example 5
Based on the disclosure in the foregoing embodiments 1 to 4, the unmanned aerial vehicle shown in fig. 1 may have functions of autonomous positioning, obstacle avoidance flight, target recognition and target tracking, so as to implement the whole process from take-off to target searching and recognition, locking and dynamic tracking to task ending and returning of the unmanned aerial vehicle in an underground garage, and the specific implementation process of the process may be as follows:
(1) After the unmanned aerial vehicle takes off, information from the various sensors is fused, and the unmanned aerial vehicle performs autonomous positioning and navigates with obstacle avoidance;
(2) the target search and identification phase is then entered, which is primarily responsible for searching for target vehicles in the current underground garage. SLAM positioning and mapping are carried out through the autonomous positioning module, and the current scene is traversed by using a strategy of completing the SLAM local map. In the process of traversing and completing the map, light compensation is applied and multi-feature rapid matching search is performed on vehicles in the environment through the target recognition module to determine suspicious vehicles; the license plate positions of the suspicious vehicles are then located, and the license plates are recognized and matched until the target vehicle is found;
(3) after finishing the target search, the unmanned aerial vehicle locks the target and enters the target tracking stage. This stage mainly performs dynamic tracking and navigation with obstacle avoidance for the target vehicle: the identified designated vehicle is visually tracked through the target tracking module, and fast motion tracking of the visually tracked target vehicle is then performed through the navigation and obstacle avoidance module; if tracking is lost during this period, the target searching and identification stage is restarted, until the task requirements are met;
(4) after the unmanned aerial vehicle completes the task requirements, it enters the autonomous return phase, in which the unmanned aerial vehicle determines a return path according to the map established in the previous phases, thereby realizing the autonomous recovery function.
Therefore, in this embodiment, by giving the characteristics of the target vehicle, the unmanned aerial vehicle can quickly identify and track the specific vehicle, and perform autonomous positioning and navigation obstacle avoidance without satellite signals in the process.
In one embodiment, the disclosure of embodiments 1-4 above is applied to the above-described drone shown in fig. 1, and the system is subjected to extensive, repeated testing to evaluate the overall performance of the system by combining simulation with experimentation. The test environment, test content and test results are as follows:
test environment: underground three-layer and below garage environments without satellite signals.
The test contents are as follows: over the whole test process, the unmanned aerial vehicle is tested on the quick search and dynamic tracking tasks, realizing fully autonomous navigation and obstacle avoidance flight, target search and identification, target locking and dynamic tracking.
Test results: unmanned aerial vehicle can realize autonomous positioning, obstacle avoidance flight, target recognition and target tracking.
< device example >
Fig. 9 is a block schematic diagram of a drone target tracking device according to one embodiment. The drone target tracking device 901 may include at least a processor 9011 and a memory 9012, the memory 9012 storing instructions to control the processor 9011 to operate so as to perform an unmanned aerial vehicle target tracking method according to any embodiment of the present disclosure. The skilled person can design the instructions according to the disclosed aspects of the present disclosure. How the instructions control the processor 9011 to operate is well known in the art and will not be described in detail here.
In the present embodiment, the memory 9012 is for storing computer instructions, and the memory 9012 includes, for example, ROM (read only memory), RAM (random access memory), nonvolatile memory such as a hard disk, and the like.
In this embodiment, the processor 9011 is configured to execute a computer program that may be written in an instruction set of an architecture such as x86, arm, RISC, MIPS, SSE, and the like. The processor 9011 may be, for example, a central processing unit CPU, a microprocessor MCU, or the like.
The drone target tracking device 901 may include interface devices or the like in addition to the processor 9011 and memory 9012 as described above. The interface device includes, for example, various bus interfaces including, for example, a serial bus interface (including a USB interface and the like), a parallel bus interface and the like.
In one embodiment, the unmanned target tracking device 901 may be in the onboard computer.
Fig. 10 is a block schematic diagram of a drone, according to one embodiment, the drone 100 may include at least a sensor 1002 for acquiring images and a drone target tracking device 1001 as disclosed in the present disclosure; wherein the drone target tracking device 1001 is communicatively coupled to the sensor 1002.
The present invention may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, Random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), Static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), Digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present invention are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The foregoing description of embodiments of the invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvements in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (7)

1. A method of tracking a target by a drone, the drone being provided with a sensor for acquiring images, the method comprising:
acquiring an image acquired by the sensor;
determining first spatial position information of the target object according to an image area corresponding to the identified target object in the image;
predicting next spatial position information of the target object according to the first spatial position information;
obtaining laser positioning information according to the data acquired by the two-dimensional laser radar of the unmanned aerial vehicle;
Acquiring visual positioning information according to data acquired by the inertial sensor, the visual odometer and the depth camera of the unmanned aerial vehicle respectively;
acquiring height information according to data acquired by the fixed-height radar of the unmanned aerial vehicle;
obtaining positioning information of the spatial position of the unmanned aerial vehicle according to the laser positioning information, the visual positioning information and the height information;
controlling the unmanned aerial vehicle to fly towards the target object according to the next space position information and the positioning information of the space position of the unmanned aerial vehicle;
the method for acquiring the laser positioning information according to the data acquired by the two-dimensional laser radar of the unmanned aerial vehicle comprises the following steps: performing two-dimensional positioning and environment mapping according to data acquired by a two-dimensional laser radar of the unmanned aerial vehicle by using a laser SLAM algorithm based on a Cartographer to obtain pose information of the unmanned aerial vehicle in three plane directions and a yaw direction, wherein the pose information is used as the laser positioning information;
the method for obtaining the visual positioning information according to the data respectively collected by the inertial sensor, the visual odometer and the depth camera of the unmanned aerial vehicle comprises the following steps: obtaining pose information of the unmanned aerial vehicle in rolling and pitching directions according to inertial navigation data acquired by an inertial sensor of the unmanned aerial vehicle, monocular scene images acquired by a visual odometer of the unmanned aerial vehicle and depth information acquired by a depth camera of the unmanned aerial vehicle by using a VIO-SLAM fusion technology based on VINS, and taking the pose information as the visual positioning information;
The laser SLAM, the visual SLAM and the height information are only part of information used for autonomous positioning of the unmanned aerial vehicle, and the three information output by pose fusion filtering are autonomous positioning information of the unmanned aerial vehicle;
wherein, according to the next spatial position information and the positioning information of the spatial position where the unmanned aerial vehicle is located, controlling the unmanned aerial vehicle to fly towards the target object includes:
constructing a first map of a space environment where the unmanned aerial vehicle is located;
planning a global path according to the next space position information and the first map;
determining a next node of the nodes of the unmanned plane in the global path;
controlling the unmanned aerial vehicle to fly to the next node, and determining whether an obstacle exists in the process of controlling the unmanned aerial vehicle to fly to the next node;
updating the global path in the presence of an obstacle, and executing the step of determining the next node of the nodes of the global path, in which the unmanned aerial vehicle is located;
wherein said updating said global path comprises:
determining first positioning information of a spatial position where the obstacle is located and second positioning information of a spatial position where the unmanned aerial vehicle is located;
Setting a repulsive force field of the obstacle to the unmanned aerial vehicle according to the distance between the first positioning information and the second positioning information;
setting a gravitational field of the next node to the unmanned aerial vehicle according to the distance between the positioning information of the next node and the second positioning information; superposing the repulsive force field and the gravitational field to obtain a superposed gravitational field;
updating the global path according to the superimposed gravitational field;
the constructing a first map of a space environment where the unmanned aerial vehicle is located includes:
acquiring scene structures and local barrier information of a space environment where the unmanned aerial vehicle is located according to data acquired by a depth camera of the unmanned aerial vehicle;
according to the scene structure and the local obstacle information, establishing a depth information map of a local scene of the space environment where the unmanned aerial vehicle is located, and presenting the depth information map in the form of an octree map;
the planning a global path according to the next spatial location information and the first map includes: and planning a global path according to the next space position information and the octree map.
2. The method of claim 1, wherein predicting next spatial location information of the target object based on the first spatial location information comprises:
Predicting a new position according to the position corresponding to the first spatial position information by using a position filter of an fDSST algorithm;
predicting a new size according to the new position and the size corresponding to the first spatial position information by using a size filter of an fDSST algorithm;
and obtaining the next space position information of the target object according to the new position and the new size.
3. The method of claim 1, wherein predicting next spatial location information of the target object comprises: predicting the next spatial position information of the target object by using an optimized fDSST algorithm;
wherein the optimized fDSST algorithm has one or more of the following features:
feature one, the filter derivation adopts point-by-point operations, and operations in the time domain are converted to the complex frequency domain and accelerated through FFT;
feature two, PCA dimension reduction is adopted to compress the HOG features, reducing the feature dimensions of the position filter and the scale filter, and scale positioning is obtained by trigonometric polynomial interpolation;
feature three, the filtering result is interpolated, training and detection samples use coarse feature grid points, and the final prediction result is obtained by trigonometric polynomial interpolation.
4. The method of claim 1, wherein after the acquiring the image acquired by the sensor and before the determining the first spatial location information of the target object, the method further comprises:
identifying whether the image has the image area;
performing the step of determining first spatial location information of the target object in case the image has the image area;
and executing the step of acquiring the image acquired by the sensor in the case that the image does not have the image area.
5. The method of any one of claims 1 to 4, wherein the first spatial location information of the target object is spatial location information of the target object in an underground garage.
6. A drone target tracking device comprising a processor and a memory for storing instructions for controlling the processor to operate to perform the method of any one of claims 1 to 5.
7. A drone comprising a sensor for acquiring images and the drone target tracking device of claim 6;
The unmanned aerial vehicle target tracking device is in communication connection with the sensor.
CN202011205401.2A 2020-11-02 2020-11-02 Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle Active CN112378397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011205401.2A CN112378397B (en) 2020-11-02 2020-11-02 Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011205401.2A CN112378397B (en) 2020-11-02 2020-11-02 Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN112378397A CN112378397A (en) 2021-02-19
CN112378397B true CN112378397B (en) 2023-10-10

Family

ID=74577535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011205401.2A Active CN112378397B (en) 2020-11-02 2020-11-02 Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112378397B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113312992A (en) * 2021-05-18 2021-08-27 中山方显科技有限公司 Dynamic object sensing and predicting method based on multi-source sensor information fusion
CN115333887B (en) * 2022-07-25 2023-10-03 中国电子科技集团公司第十研究所 Multi-access fusion method and system for measurement and control communication network
CN115150784B (en) * 2022-09-02 2022-12-06 汕头大学 Unmanned aerial vehicle cluster area coverage method and device based on gene regulation and control network
CN116661501B (en) * 2023-07-24 2023-10-10 北京航空航天大学 Unmanned aerial vehicle cluster high dynamic environment obstacle avoidance and moving platform landing combined planning method

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003177010A (en) * 2001-12-11 2003-06-27 Mitsubishi Electric Corp Aircraft image detecting device
KR20150130032A (en) * 2014-05-13 2015-11-23 국방과학연구소 Conjugation Method of Feature-point for Performance Enhancement of Correlation Tracker and Image tracking system for implementing the same
CN106444780A (en) * 2016-11-10 2017-02-22 速感科技(北京)有限公司 Robot autonomous navigation method and system based on vision positioning algorithm
CN107037812A (en) * 2017-03-31 2017-08-11 南京理工大学 A kind of vehicle path planning method based on storage unmanned vehicle
CN107680119A (en) * 2017-09-05 2018-02-09 燕山大学 A kind of track algorithm based on space-time context fusion multiple features and scale filter
CN107741234A (en) * 2017-10-11 2018-02-27 深圳勇艺达机器人有限公司 The offline map structuring and localization method of a kind of view-based access control model
CN107909012A (en) * 2017-10-30 2018-04-13 北京中科慧眼科技有限公司 A kind of real-time vehicle tracking detection method and device based on disparity map
CN108537822A (en) * 2017-12-29 2018-09-14 西安电子科技大学 Motion target tracking method based on weighting reliability estimating
CN108734109A (en) * 2018-04-24 2018-11-02 中南民族大学 A kind of visual target tracking method and system towards image sequence
CN108876816A (en) * 2018-05-31 2018-11-23 西安电子科技大学 Method for tracking target based on adaptive targets response
CN108898624A (en) * 2018-06-12 2018-11-27 浙江大华技术股份有限公司 A kind of method, apparatus of moving body track, electronic equipment and storage medium
CN109255304A (en) * 2018-08-17 2019-01-22 西安电子科技大学 Method for tracking target based on distribution field feature
CN109521794A (en) * 2018-12-07 2019-03-26 南京航空航天大学 A kind of multiple no-manned plane routeing and dynamic obstacle avoidance method
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 A kind of no-manned plane three-dimensional method for reconstructing, unmanned plane based on RGB-D SLAM
CN109816698A (en) * 2019-02-25 2019-05-28 南京航空航天大学 Unmanned plane visual target tracking method based on dimension self-adaption core correlation filtering
CN110222581A (en) * 2019-05-13 2019-09-10 电子科技大学 A kind of quadrotor drone visual target tracking method based on binocular camera
CN110609570A (en) * 2019-07-23 2019-12-24 中国南方电网有限责任公司超高压输电公司天生桥局 Autonomous obstacle avoidance inspection method based on unmanned aerial vehicle
CN110751670A (en) * 2018-07-23 2020-02-04 中国科学院长春光学精密机械与物理研究所 Target tracking method based on fusion
CN110850873A (en) * 2019-10-31 2020-02-28 五邑大学 Unmanned ship path planning method, device, equipment and storage medium
CN111263308A (en) * 2020-01-15 2020-06-09 上海交通大学 Positioning data acquisition method and system
CN111476814A (en) * 2020-03-25 2020-07-31 深圳大学 Target tracking method, device, equipment and storage medium
CN111507999A (en) * 2019-01-30 2020-08-07 北京四维图新科技股份有限公司 FDSST algorithm-based target tracking method and device
CN111752276A (en) * 2020-06-23 2020-10-09 深圳市优必选科技股份有限公司 Local path planning method and device, computer readable storage medium and robot
CN111784748A (en) * 2020-06-30 2020-10-16 深圳市道通智能航空技术有限公司 Target tracking method and device, electronic equipment and mobile carrier

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107818571B (en) * 2017-12-11 2018-07-20 珠海大横琴科技发展有限公司 Ship automatic tracking method and system based on deep learning network and average drifting


Also Published As

Publication number Publication date
CN112378397A (en) 2021-02-19

Similar Documents

Publication Publication Date Title
CN112378397B (en) Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle
US10943355B2 (en) Systems and methods for detecting an object velocity
US10929713B2 (en) Semantic visual landmarks for navigation
US10991156B2 (en) Multi-modal data fusion for enhanced 3D perception for platforms
US10437252B1 (en) High-precision multi-layer visual and semantic map for autonomous driving
CN110969655B (en) Method, device, equipment, storage medium and vehicle for detecting parking space
CN112596071A (en) Unmanned aerial vehicle autonomous positioning method and device and unmanned aerial vehicle
Dey et al. Vision and learning for deliberative monocular cluttered flight
CN112379681A (en) Unmanned aerial vehicle obstacle avoidance flight method and device and unmanned aerial vehicle
US20100305857A1 (en) Method and System for Visual Collision Detection and Estimation
CN113485441A (en) Distribution network inspection method combining unmanned aerial vehicle high-precision positioning and visual tracking technology
Flores et al. A vision and GPS-based real-time trajectory planning for a MAV in unknown and low-sunlight environments
EP3690744A1 (en) Method for integrating driving images acquired from vehicles performing cooperative driving and driving image integrating device using same
CN112380933B (en) Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle
WO2020186444A1 (en) Object detection method, electronic device, and computer storage medium
CN113785253A (en) Information processing apparatus, information processing method, and program
CN114943757A (en) Unmanned aerial vehicle forest exploration system based on monocular depth of field prediction and depth reinforcement learning
Kang et al. Map building based on sensor fusion for autonomous vehicle
Choi et al. Improved CNN-based path planning for stairs climbing in autonomous UAV with LiDAR sensor
Lombaerts et al. Adaptive multi-sensor fusion based object tracking for autonomous urban air mobility operations
JP2022191188A (en) System and method for training prediction system
Cheng et al. Integration of active and passive sensors for obstacle avoidance
Hu et al. A small and lightweight autonomous laser mapping system without GPS
Li et al. UAV obstacle avoidance by human-in-the-loop reinforcement in arbitrary 3D environment
Soleimani et al. A disaster invariant feature for localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant