CN116149371A - Multi-moving body three-dimensional tracking and controlling platform based on visual sensor network - Google Patents


Info

Publication number: CN116149371A
Application number: CN202310276977.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Prior art keywords: camera, coordinate system, image, unmanned aerial vehicle
Inventors: 邓恒, 詹璟原, 张利国
Current assignee / Original assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Priority to CN202310276977.5A
Publication of CN116149371A

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10: Simultaneous control of position or course in three dimensions
    • G05D1/101: Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/104: Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying


Abstract

The invention discloses a multi-moving-body three-dimensional tracking and control platform based on a visual sensor network, which comprises four main modules: a multi-camera system, a ground control system, onboard infrared reflective balls, and target unmanned aerial vehicles. The multi-camera system comprises multiple cameras with infrared filters, an infrared supplementary light source for enhancing light intensity, a synchronous trigger for synchronizing the image data of the cameras, and a navigation computer that runs the core vision algorithms. The cameras interact to form a visual sensor network. The ground control system receives navigation data from the multi-camera system and then uploads control commands to the target unmanned aerial vehicles via a wireless local area network. Data transmission uses the topic subscription and publication mechanism of the Robot Operating System (ROS): the navigation data are processed, control commands for the unmanned aerial vehicles are computed from the reference trajectory and the controller, and the commands are published over the wireless local area network.

Description

Multi-moving body three-dimensional tracking and controlling platform based on visual sensor network
Technical Field
The invention relates to a multi-moving body three-dimensional tracking and controlling platform based on a visual sensor network, and belongs to the technical field of three-dimensional visual tracking and controlling.
Background
A visual sensor network is a distributed sensing network formed by a large number of smart camera nodes that can realize wide-area coverage and target tracking. It is a data-centric, application-oriented network that collects and processes environmental information. Three-dimensional visual tracking is an important application in the field of computer vision, and fast, accurate and robust three-dimensional visual tracking is essential for practical tasks. Three-dimensional visual tracking based on a visual sensor network is of research significance in several respects: a large number of visual sensor nodes can cover a wide area, increasing the tracking range of the system; the distributed computing framework is robust to abrupt changes in the network structure, i.e., when a sensor node fails or a new sensor node is added, the network can self-organize without affecting the execution of tasks; the introduction of wireless transmission greatly simplifies network deployment and installation, requires no complex wiring, and enhances the convenience and flexibility of the system; and pose estimation based on multiple visual sensors is a form of multi-source information fusion that accounts for both the accuracy and the robustness of the algorithm. However, in practical engineering applications, existing research still faces problems such as difficult node management, large wireless delays, and limited node computation, storage and communication resources, making real-time tracking of targets at large scale difficult to realize effectively.
Unmanned aerial vehicle three-dimensional tracking technology based on a visual sensing network is widely applied in many fields, such as safe recovery of unmanned aerial vehicles, missile terminal guidance, virtual reality, and environmental monitoring. Taking environmental monitoring as an example, efficient video image processing algorithms exploit characteristics such as high precision, rich information and strong anti-interference capability, and can replace humans in carrying out large-scale, comprehensive monitoring and recording. Mobile sensor networks represented by unmanned aerial vehicle clusters offer greater flexibility, autonomy and robustness, but also face more challenges. Test and evaluation of unmanned aerial vehicle clusters is key to moving cluster technology from theory to practice, and current verification platforms focus mainly on demonstration and verification of rotorcraft and fixed-wing unmanned aerial vehicles. Spectacular large-scale outdoor performances of multi-rotor unmanned aerial vehicles mostly apply cluster technology to light shows. However, cluster performances have failed: positioning systems have been interfered with from unknown directions, data have become abnormal, performance pictures have been incomplete, unmanned aerial vehicles have fallen one after another, and the clusters have failed to form the preset complete pattern. Such demonstration failures also show that current unmanned aerial vehicle cluster technology is immature and that the test, evaluation and verification of cluster algorithms are still imperfect.
At present, research on the test, evaluation and verification of three-dimensional visual tracking of unmanned aerial vehicles is receiving attention, and many researchers design and build their own tracking and positioning platforms to verify and evaluate their intelligent and efficient navigation, guidance and control algorithms. Verification methods based on software virtual simulation or hardware-in-the-loop simulation can realize large-scale verification, but the credibility of the models is difficult to establish with virtual simulation alone. Auxiliary verification methods based on high-precision indoor positioning systems allow rapid test verification of cluster algorithms, but the constraints of the indoor environment make large-scale testing difficult. Verification based on outdoor multi-UAV flight demonstrations is relatively close to the real environment and generally yields reliable results, but the test implementation is complex, subject to environmental changes and expensive, so large-scale test verification is difficult. Because the indoor environment is controllable, the experimental layout is convenient and experiments can be repeated, researchers at home and abroad continue to study and refine the design of unmanned aerial vehicle test platforms based on indoor multi-camera systems. However, existing platforms often lack a general-purpose, complete solution and provide little guidance on software and hardware design.
Therefore, the invention takes multiple moving bodies, such as a multi-rotor unmanned aerial vehicle cluster, as its objects and provides a multi-moving-body three-dimensional tracking and control platform based on a visual sensor network. The platform integrates key technologies such as sensor layout optimization, global camera calibration and distributed pose estimation, and realizes functions such as image acquisition and processing, camera calibration, pose estimation and motion control. The invention provides a comprehensive and complete system verification and demonstration platform that can efficiently realize rapid test verification of navigation, guidance, control and decision algorithms; it has strong practical significance and application value in scientific research, education, robotics and other fields, and can be further extended to other autonomous intelligent unmanned systems, including fixed-wing unmanned aerial vehicles and unmanned ships.
Disclosure of Invention
The invention provides a multi-moving-body three-dimensional tracking and control platform based on a visual sensor network, shown in figure 1, which comprises four main modules: a multi-camera system, a ground control system, onboard infrared reflective balls, and target unmanned aerial vehicles. The functions and interrelationships of the modules are as follows. The multi-camera system comprises multiple cameras with infrared filters, an infrared supplementary light source for enhancing light intensity, a synchronous trigger for synchronizing the image data of the cameras, and a navigation computer that runs the core vision algorithms. The cameras interact to form a visual sensor network that jointly covers the unmanned aerial vehicle activity area. The ground control system receives navigation data from the multi-camera system and then uploads control commands to the target unmanned aerial vehicles via a wireless local area network. Data transmission uses the topic subscription and publication mechanism of the Robot Operating System (ROS): the control computer subscribes to the unmanned aerial vehicle navigation data (including data of multiple unmanned aerial vehicles) in the wireless local area network, processes the navigation data, computes control commands for the unmanned aerial vehicles from the reference trajectory and the controller, and finally publishes the commands over the wireless local area network. When a target unmanned aerial vehicle is observed by several cameras simultaneously, its three-dimensional position and attitude angle information can be acquired in real time.
Having introduced the individual functions of the modules, the operating flow of the whole platform, which also embodies the logical relationship among the modules, is now introduced. As shown in fig. 2, the operating flow begins with the setup of the multi-camera system, including the deployment of all cameras and the configuration of the computers. Multi-camera calibration is then performed to describe the mapping between three-dimensional spatial feature points and the corresponding two-dimensional image pixels, after which the onboard reflective balls are arranged on the target unmanned aerial vehicles. These steps are done offline before the unmanned aerial vehicles take off, with the aim of establishing a global coordinate system, i.e., an earth-fixed coordinate system. When the unmanned aerial vehicle flight test starts, the main three-dimensional visual tracking algorithm begins to work. First, the cameras capture images of the onboard ball feature points, obtain the precise pixel coordinates of the marker points through an image processing algorithm, and send the pixel coordinates to the navigation computer. The correspondence between image points and actual points is then used to reconstruct the points in three dimensions by the principle of triangulation. Since the configuration and initial correspondence of the balls are obtained during filter initialization, the process model and the vision measurement model can be used together by an EKF to estimate the pose of each unmanned aerial vehicle. The pose information is broadcast and published in the wireless local area network, where it is subscribed to and processed by the Simulink model. The platform uses the Robotics System Toolbox to establish communication between the Simulink model and the ROS-enabled drones. Finally, the control computer processes the visual navigation data and feeds control commands back to the drones through ROS.
Having generally described the module functions and overall connections, the implementation of each module is specifically described with respect to FIG. 1:
s1, building a multi-camera system module.
The module comprises three core functional sub-modules: camera management, calibration management, and tracking management. Through the camera management sub-module, operations such as image acquisition, processing and preview can be performed, real-time image information of the cameras can be obtained, and whether the relative placement of the cameras is reasonable can be clearly observed, assisting the camera layout. Through the calibration management sub-module, camera calibration can be performed, including collecting calibration-point data by waving the calibration tool, running the main calibration algorithm, and establishing the global coordinate system at the calibration origin. Through the tracking management sub-module, operations such as rigid-body modeling, rigid-body tracking and three-dimensional window display can be performed, and the rigid-body motion state can be observed in real time. These three sub-modules are described in detail below.
S11, realizing camera management submodule
As shown in fig. 3, the functions of camera management are mainly to perform operations such as image acquisition, feature point extraction, and image display for each camera. Meanwhile, parameters such as exposure time of a camera, threshold used in threshold segmentation and the like can be adjusted on line, so that the influence of miscellaneous points and ambient light changes can be eliminated as much as possible, and feature points can be extracted more accurately and robustly. Before the camera is calibrated, the camera view angle needs to be detected. Too high or too low a camera view angle will result in an insufficient field of view to cover the range of motion of the drone, such that the tracked drone cannot be captured by multiple cameras or even one camera in many locations, resulting in tracking failure. Therefore, the camera management sub-module is established, and camera pictures can be displayed in real time, so that the positions and angles of the cameras can be adjusted according to the pictures.
The feature point detection and extraction mainly processes the gray level image acquired by the camera to acquire the pixel coordinates of the feature point. The image processing algorithm is placed at the camera end for processing to reduce the calculation time consumption and improve the running speed; meanwhile, the directly processed characteristic point coordinates are output to a calibration main algorithm, so that the data transmission bandwidth is reduced. The main steps of the algorithm include: thresholding, gaussian smoothing, contour acquisition, and feature point extraction.
S111, thresholding. The image is segmented directly by gray thresholding. If the gray value of the original image at point (u, v) is I(u, v) and the thresholded gray value is I'(u, v), the thresholding operation is

$$I'(u,v)=\begin{cases} I(u,v), & I(u,v)\ge I_{T} \\ 0, & I(u,v)<I_{T} \end{cases}$$

where $I_{T}\in\mathbb{R}_{+}$ is the gray threshold and $\mathbb{R}_{+}$ denotes the set of positive real numbers. The gray threshold can be adjusted and determined in practice according to the camera shutter, the exposure time, the ambient light intensity, and so on; in practice it is effective within a certain range (70, 100).
S112, Gaussian smoothing. This step is used for blurring and noise reduction, removing trivial details from the image before object extraction. The main operation is to replace each original pixel value in the image with the weighted average of the pixels in the neighborhood determined by the filter template. Applying a weighted-average filter of size m×n (m and n odd) to an image of resolution M×N gives

$$I'(u,v)=\frac{\sum_{s=-a}^{a}\sum_{t=-b}^{b}w(s,t)\,I(u+s,v+t)}{\sum_{s=-a}^{a}\sum_{t=-b}^{b}w(s,t)}$$

where a = (m-1)/2, b = (n-1)/2, s is an integer in [-a, a], t is an integer in [-b, b], and w is the Gaussian filter kernel. OpenCV provides the corresponding function GaussianBlur(); besides the template size, the Gaussian filtering coefficients in the horizontal and vertical directions can also be selected in practice.
S113, contour acquisition. Multiple blobs are found in the image and marked using the OpenCV findContours function, with contours extracted over 8-connected regions. Two checks are then applied: the area of a region (the total number of pixels) must lie within a certain range, since too small a region is likely a spurious point and too large a region is likely a large interfering bright spot; and the aspect ratio of the region must be close to 1 (i.e., the region must be nearly circular).
S114, feature point extraction. After contour extraction, the contour blocks of all feature points have been determined, but each contour contains many pixels; this step computes the center coordinates of each block using image moments. The image moment M_pq is computed as

$$M_{pq}=\sum_{u}\sum_{v}u^{p}v^{q}\,I(u,v)$$

where the pixel coordinates satisfy u, v ∈ ℕ₊, with ℕ₊ denoting the set of positive integers, and I(u, v) is the gray value of pixel (u, v). The sub-pixel center coordinates of each block can then be computed directly as

$$u_{c}=\frac{M_{10}}{M_{00}},\qquad v_{c}=\frac{M_{01}}{M_{00}}$$
This completes the image processing at the camera end; the data used by the subsequent calibration and tracking algorithms are the pixel coordinates of the feature points captured by each camera.
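The camera-end pipeline above can be summarized by the following sketch, written with OpenCV in Python; the threshold value, blur kernel size and blob area/aspect limits are illustrative assumptions rather than values specified by the patent.

```python
import cv2

def extract_marker_centers(gray, threshold=85, min_area=10, max_area=500, max_aspect=1.5):
    # S111: gray thresholding of the raw camera image
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # S112: Gaussian smoothing (the patent applies it after thresholding);
    # findContours treats any non-zero pixel as foreground, so the blurred
    # mask can be passed to it directly
    smoothed = cv2.GaussianBlur(binary, (5, 5), 0)
    # S113: contour acquisition, keeping blobs with plausible area and a
    # near-circular bounding box
    contours, _ = cv2.findContours(smoothed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue
        _, _, w, h = cv2.boundingRect(c)
        if max(w, h) / max(min(w, h), 1) > max_aspect:
            continue
        # S114: sub-pixel blob center from moments, u_c = M10/M00, v_c = M01/M00
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```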
S12, realizing calibration management submodule
As shown in fig. 4, the calibration management function is mainly to calibrate the camera parameters in the visual sensor network based on a general camera imaging model and to convert the reference coordinate system from a chosen reference camera coordinate system to the inertial coordinate system (whose origin must be set). Considering the multi-hop, self-organizing character of the visual sensing network, the nodes assign weights and the optimal paths from the reference camera to the other cameras are determined by a shortest-path method. Once the pairwise (two-camera) calibration results are known, the transformation between the reference camera and every other camera can be obtained by rotating and translating the coordinate systems, so only the two-camera calibration process is described here.
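As a minimal sketch of the path selection just described, the fragment below runs Dijkstra's shortest-path search from the reference camera; the edge weight (the reciprocal of the number of calibration points two cameras observe in common) is an illustrative assumption, since the patent does not specify how the node weights are assigned.

```python
import heapq

def calibration_paths(num_cameras, common_points, reference=0):
    # common_points[(i, j)] = number of calibration points seen by both cameras i and j
    graph = {i: [] for i in range(num_cameras)}
    for (i, j), n in common_points.items():
        if n > 0:
            w = 1.0 / n                       # assumed weight: fewer shared points, higher cost
            graph[i].append((j, w))
            graph[j].append((i, w))
    dist = {i: float("inf") for i in range(num_cameras)}
    prev = {i: None for i in range(num_cameras)}
    dist[reference] = 0.0
    heap = [(0.0, reference)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    # following prev[] back to the reference camera gives each calibration chain
    return prev
```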
S121, establishing the general camera imaging model. The camera model describes how a three-dimensional point captured by a real camera is projected onto the two-dimensional image plane. Four coordinate systems are introduced: the world coordinate system O_w-X_wY_wZ_w, the camera coordinate system O_c-X_cY_cZ_c, the image coordinate system O-XY, and the pixel coordinate system UV. The coordinates (X_c, Y_c, Z_c) of a tracked target point P(X_w, Y_w, Z_w) in the camera coordinate system are obtained by rotation and translation:

$$\begin{bmatrix}X_{c}\\Y_{c}\\Z_{c}\end{bmatrix}=R\begin{bmatrix}X_{w}\\Y_{w}\\Z_{w}\end{bmatrix}+T$$

where R and T are the rotation matrix and translation vector, respectively. The mapping from the camera coordinates (X_c, Y_c, Z_c) to the image coordinates (x, y) is a perspective projection; from the similar-triangle relationship,

$$x=f\frac{X_{c}}{Z_{c}},\qquad y=f\frac{Y_{c}}{Z_{c}}$$

where f is the camera focal length. The image coordinates (x, y) and the pixel coordinates (u, v) satisfy the proportional relation

$$u=f_{x}x+u_{0},\qquad v=f_{y}y+v_{0}$$

where f_x and f_y are the horizontal- and vertical-axis resolutions and (u_0, v_0) are the principal point coordinates. The transformation of a three-dimensional point from the world coordinate system to the pixel coordinate system is therefore

$$Z_{c}\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}f_{x}f&0&u_{0}\\0&f_{y}f&v_{0}\\0&0&1\end{bmatrix}\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}X_{w}\\Y_{w}\\Z_{w}\\1\end{bmatrix}$$

where the 3×3 matrix is the camera intrinsic parameter matrix and [R T] is the camera extrinsic parameter matrix.
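A minimal sketch of this projection chain (world to camera via R and T, perspective division by Z_c, then the pixel mapping with f_x, f_y, u_0, v_0) is given below; the parameter values a user would pass in come from the calibration described next.

```python
import numpy as np

def project_point(P_w, R, T, f, fx, fy, u0, v0):
    P_c = R @ P_w + T            # world coordinates to camera coordinates
    x = f * P_c[0] / P_c[2]      # perspective projection onto the image plane
    y = f * P_c[1] / P_c[2]
    u = fx * x + u0              # image coordinates to pixel coordinates
    v = fy * y + v0
    return np.array([u, v])
```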
After introducing the generic imaging model, the relation between (X_w, Y_w, Z_w) and (u, v) is expressed through spherical coordinates, the image point being written as

$$\begin{bmatrix}x\\y\end{bmatrix}=r(\theta)\begin{bmatrix}\cos\varphi\\\sin\varphi\end{bmatrix}$$

where φ is the azimuth angle of the incident ray and r(θ) is the distance between an image point and the principal point (the intersection of the camera optical axis and the image plane), specifically:

$$r(\theta)=k_{1}\theta+k_{2}\theta^{3}+k_{3}\theta^{5}+k_{4}\theta^{7}+k_{5}\theta^{9}+\cdots \qquad(9)$$

where k_1, k_2, k_3, k_4, k_5 are parameters to be determined and θ is the angle between the optical axis and the incident ray. The relation between (X_w, Y_w, Z_w) and (u, v) is then determined by the intrinsic parameters (k_1, …, k_5, f_x, f_y, u_0, v_0) and the extrinsic parameters [R T].
S122, intrinsic parameter calibration. The intrinsic parameters are obtained by minimizing the difference between the actual imaging points and the results of the imaging model. Since the image center coordinates and the nominal pixel size are known, the minimization is performed over the interval [0, θ], which is divided by interpolation into p parts (θ_1, θ_2, …, θ_j, …, θ_p).
S123, extrinsic parameter calibration according to the epipolar geometric constraint

$$m^{T}Em=0 \qquad(11)$$

where m denotes the spherical coordinates determined by the intrinsic parameters and E is the essential matrix to be solved. Singular value decomposition of E yields the rotation matrix R and the translation vector T.
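The SVD step named in S123 can be sketched as follows; estimating E itself from the constraint (11) over many point correspondences, and selecting the physically valid candidate by a cheirality check, are omitted here for brevity.

```python
import numpy as np

def decompose_essential(E):
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (determinant +1)
    if np.linalg.det(U @ Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt          # two rotation candidates
    R2 = U @ W.T @ Vt
    t = U[:, 2]              # translation up to scale; sign fixed by cheirality
    return R1, R2, t
```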
S124, optimization of the intrinsic and extrinsic parameters. The optimization objective is to minimize the three-dimensional reconstruction error, solved with the Levenberg-Marquardt algorithm.
S125, following the optimal path from the reference camera to each of the other cameras, sequentially determine along the path the initial values of the extrinsic parameters of camera i relative to the camera-0 coordinate system, repeating steps S121-S124 until the initial extrinsic parameters of camera M relative to the camera-0 coordinate system have been computed.
S126, from the calibration results, determine the extrinsic parameters of each camera coordinate system i, i = 0, …, M, relative to the world coordinate system, thereby transforming the reference coordinate system from the camera to the world coordinate system, as follows:
S1261, using the feature points on the triangle (calibration reference), obtain from the calibration results the initial extrinsic parameters from the inertial coordinate system to the camera-0 coordinate system, where {e} is the inertial coordinate system and {c_i} is the coordinate system of camera i;
S1262, the transform from {e} to camera i satisfies the composition relation ${}^{c_{i}}_{e}T={}^{c_{i}}_{c_{0}}T\,{}^{c_{0}}_{e}T$; combining the calibrated intrinsic and extrinsic parameters gives the projected coordinates of the feature points in each camera;
S1263, nonlinear optimization is performed to obtain the optimized extrinsic parameters of each camera i, i = 0, …, M, relative to the world coordinate system.
The function of the camera calibration module is as follows: using the determined extrinsic parameters, cameras sharing the same field-of-view region take the projected image-plane points of the same feature point and reconstruct the spatial feature point three-dimensionally based on the triangulation principle, recovering the three-dimensional position of the feature point in the world coordinate system.
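A minimal sketch of this triangulation step, using the direct linear transform over an arbitrary number of cameras, could look as follows; each camera's 3×4 projection matrix is assumed to be assembled from the calibrated intrinsic and extrinsic parameters.

```python
import numpy as np

def triangulate(projection_matrices, pixels):
    # Each camera contributes two linear equations built from its 3x4
    # projection matrix P and the observed pixel (u, v); the 3-D point is
    # the least-squares solution of the stacked system.
    rows = []
    for P, (u, v) in zip(projection_matrices, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]      # homogeneous -> Euclidean world coordinates
```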
S13, realizing tracking management submodule
As shown in fig. 5, the main functions of tracking management are rigid-body feature-point modeling and online pose estimation; the computed pose information is displayed online and broadcast over the network via UDP. The algorithm accounts for the special cases in which the rigid body lies within the field of view of only a single camera or leaves the field of view entirely. Taking the three-dimensionally reconstructed information as input, the rigid-body tracking module comprises a process model, an observation model, and online pose estimation.
S131, process model. Because the camera sampling period is very short, the motion of the rigid body can be simplified to a uniform motion model over each sampling interval. The system state is set to

$$x=\begin{bmatrix}{}^{e}p^{T} & {}^{e}v^{T} & \Theta^{T} & {}^{b}\omega^{T}\end{bmatrix}^{T}$$

where ${}^{e}p$ denotes the three-dimensional position of the rigid body in the world coordinate system (with ${}^{e}p=T$), ${}^{e}v$ denotes its three-dimensional velocity in the world coordinate system, $\Theta$ collects the pitch, roll and yaw angles of the rigid body, ${}^{b}\omega$ denotes its three-dimensional angular velocity in the body frame, and {b} is the body coordinate system. The general linear uniform motion model of the rigid body is

$${}^{e}\dot{p}={}^{e}v,\qquad {}^{e}\dot{v}=\gamma_{1},\qquad \dot{\Theta}={}^{b}\omega,\qquad {}^{b}\dot{\omega}=\gamma_{2}$$

where $\gamma_{1},\gamma_{2}$ are Gaussian white noise. Let $T_{s}$ denote the sampling period; a first-order backward difference then gives the discrete form

$$x_{k}=Ax_{k-1}+\gamma_{k} \qquad(14)$$

$$A=\begin{bmatrix}I_{3}&T_{s}I_{3}&0&0\\0&I_{3}&0&0\\0&0&I_{3}&T_{s}I_{3}\\0&0&0&I_{3}\end{bmatrix} \qquad(15)$$
S132, observation model. From the feature-point imaging relations, the observation model describing the relationship between the output measurements and the system state is obtained as

$$z_{k}=h(x_{k})+v_{k} \qquad(16)$$

where $z_{k}$ stacks, for each camera i, the measurement vector formed by the $n_{F}$ feature points observed on that camera, $h(\cdot)$ is the vision measurement function, and $v_{k}$ is zero-mean, independent and identically distributed Gaussian white noise.
S133, online pose estimation. An EKF is established based on the process model and the observation model. After initialization, the prediction step uses the state estimate $\hat{x}_{k-1|k-1}$ and error covariance $P_{k-1|k-1}$ at time k-1 to predict the state estimate $\hat{x}_{k|k-1}$ and error covariance $P_{k|k-1}$ at time k. The update step then corrects the prediction to obtain the final estimates $\hat{x}_{k|k}$ and $P_{k|k}$, i.e., the pose information of the rigid body.
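A minimal sketch of one EKF cycle as described in S131-S133 is given below; the measurement function h and its Jacobian are placeholders standing in for the vision measurement model, and the noise covariances Q and R_meas are user-supplied.

```python
import numpy as np

def ekf_step(x_prev, P_prev, A, Q, z, h, H_jacobian, R_meas):
    # prediction with the constant-velocity process model
    x_pred = A @ x_prev
    P_pred = A @ P_prev @ A.T + Q
    # update with the (generally nonlinear) vision measurement model
    H = H_jacobian(x_pred)
    S = H @ P_pred @ H.T + R_meas
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_prev)) - K @ H) @ P_pred
    return x_new, P_new
```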
S2, building a ground control system module.
The ground control system module runs mainly on the Ubuntu system of a ground computer. A Simulink model running in the ROS environment subscribes to the real-time navigation pose data published by the multi-camera system, makes control decisions, and finally controls the multi-rotor unmanned aerial vehicles to achieve trajectory tracking, waypoint flight, real-time obstacle avoidance, formation control, and similar objectives. This mainly involves establishing a three-channel model of the multi-rotor unmanned aerial vehicle and designing the corresponding controllers.
S21, establishing a three-channel model.
In order to facilitate the design of the controller, the linearization method is utilized to simplify the control model of the multi-rotor unmanned aerial vehicle, namely, the design of three channels is adopted: a height channel, a yaw channel, and a horizontal position channel.
S211, height channel model. This is the channel from the throttle stick control command u_T to the vertical position p_z, modeled as a linear model (17) whose parameters are determined by the semi-autonomous autopilot and can be regarded as unknown.
S212, yaw channel model. This is the channel from the yaw stick control command u_ψ to the yaw angle ψ, modeled as a linear model (18) whose parameters are determined by the semi-autonomous autopilot and can be regarded as unknown.
S213, horizontal position channel. This is the channel from the roll/pitch stick control command u_h = [u_φ u_θ]^T to the horizontal position p_h, modeled as a linear model (19) involving the horizontal velocity v_h, the roll/pitch angles Θ_h = [φ θ]^T, the horizontal angular velocity ω_h, and the horizontal rotation matrix from the body coordinate system to the inertial coordinate system; its parameters are likewise determined by the semi-autonomous autopilot and can be regarded as unknown.
S22, designing a controller.
As shown in fig. 6, given a reference inertial position and a reference yaw angle (simple position control, which can realize fixed-point hovering and waypoint flight), the inertial position, velocity and attitude angles of the aircraft are acquired in real time by the multi-camera system (the navigation velocity provided by the aircraft can also be combined); comparison with the reference values yields the error quantities, and the position controller produces the three channel control quantities (vertical velocity, yaw rate, and horizontal attitude angles); these control quantities are transmitted directly to the aircraft via ROS, and through the inner-loop control and the vehicle dynamics the aircraft moves to the specified position at the specified yaw angle.
In the self-stabilization mode, additional controllers are designed on the basis of the three channel models (17)-(19) to accomplish the trajectory tracking task. Specifically, given a desired trajectory p_d(t) and a desired yaw angle ψ_d(t) for the unmanned aerial vehicle, the control inputs u_T, u_ψ, u_h are designed so that, as t → ∞, the state output of the unmanned aerial vehicle satisfies ||x(t) - x_d(t)|| → 0, where x = [p^T ψ]^T and x_d = [p_d^T ψ_d]^T. For this control model and control objective, a PD controller (20) is designed that applies proportional and derivative action to the tracking errors of the three channels, where k_{T,P}, k_{T,D}, k_{ψ,P}, k_{ψ,D} ∈ ℝ_+ and K_{h,P}, K_{h,D} ∈ ℝ^{2×2} are controller parameters that must be set manually. Note that the yaw angle can be controlled with a purely proportional controller, so k_{ψ,D} = 0.
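A minimal sketch of a PD law with this structure is shown below; the sign convention and the exact error terms are assumptions, since equation (20) is only reproduced as an image in the original, and the gain values are those reported in the experiment section.

```python
import numpy as np

def pd_control(p, v, psi, p_des, v_des, psi_des,
               k_T_P=0.8, k_T_D=0.1, k_psi_P=0.4,
               K_h_P=np.diag([0.5, 0.5]), K_h_D=np.diag([0.3, 0.3])):
    e_p = p - p_des                                  # position error (x, y, z)
    e_v = v - v_des                                  # velocity error
    u_T = -k_T_P * e_p[2] - k_T_D * e_v[2]           # height-channel command
    u_psi = -k_psi_P * (psi - psi_des)               # yaw channel: proportional only (k_psi_D = 0)
    u_h = -K_h_P @ e_p[:2] - K_h_D @ e_v[:2]         # horizontal-channel command
    return u_T, u_psi, u_h
```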
S3, infrared reflecting small ball
The reflective marker points are silver-gray balls whose surfaces are coated with a special reflective material, as shown in fig. 7; each has a double-sided adhesive base so that it can be conveniently fixed on top of the moving body. The onboard infrared reflective balls are typically mounted directly on the target unmanned aerial vehicle and are captured by cameras with infrared filters, providing reliable and easily extracted image point features. The placement of the balls, i.e., their relative positions on the target unmanned aerial vehicle, is also an important factor in the platform design. The basic placement principle is that the balls may be placed freely, but they must be arranged asymmetrically, and their centroid should coincide with the center of gravity of the unmanned aerial vehicle as closely as possible. In addition, to obtain accurate and robust position and attitude-angle information simultaneously, at least 4 balls are used, and they should not all lie in the same plane, which is easy to satisfy in practice. Furthermore, the balls should be arranged so that they can be observed by as many cameras as possible.
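The two geometric rules above (at least four balls, not all coplanar) can be checked with a short sketch like the following; the asymmetry requirement is not checked here.

```python
import numpy as np

def markers_ok(points, tol=1e-6):
    P = np.asarray(points, dtype=float)      # shape (n, 3): ball centers on the airframe
    if P.shape[0] < 4:
        return False
    centered = P - P.mean(axis=0)
    # rank 3 of the centered coordinates means the points span 3-D space,
    # i.e. they are not all coplanar
    return np.linalg.matrix_rank(centered, tol=tol) == 3
```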
S4, target unmanned aerial vehicle
The Parrot Bebop 2.0 was chosen as the target unmanned aerial vehicle (the Parrot AR.Drone quadrotor was also used in the early stage); an unmanned aerial vehicle fitted with 4 infrared reflective balls is shown in FIG. 8. These aircraft are small, inexpensive, rugged, safe and reliable, and each has its own SDK providing navigation data and control command interfaces, so there is no need to write a low-level data driver interface, which makes them very simple to control.
S5, platform architecture
Based on the multi-camera system design and the ground control system design, the software architecture of the platform is shown in fig. 9 and mainly comprises four nodes (executable files in the ROS software package): the Simulink model node (implementing the visual perception and control algorithms), the bebop_driver node, the vps_driver node, and the bebop_gui node. Researchers can therefore flexibly test and evaluate their own advanced control algorithms simply by modifying the reference trajectory and the controller module in the Simulink model.
S51, vps_driver node. This node communicates with the multi-camera system over the UDP protocol, receives navigation information such as position, attitude and velocity, packages the navigation data into the ROS message vps_driver/vps_navdata, and publishes it on the topic /vpsnavdata; the navigation data are published at 100 Hz.
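A minimal rospy sketch of this node's role is given below; the custom message vps_driver/vps_navdata is replaced by geometry_msgs/PoseStamped as a stand-in, and the UDP port and packet layout are illustrative assumptions.

```python
import socket
import struct
import rospy
from geometry_msgs.msg import PoseStamped

def vps_driver():
    rospy.init_node("vps_driver")
    pub = rospy.Publisher("/vpsnavdata", PoseStamped, queue_size=10)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9000))          # assumed UDP port of the multi-camera system
    sock.settimeout(0.005)
    rate = rospy.Rate(100)                # navigation data published at 100 Hz
    while not rospy.is_shutdown():
        try:
            packet, _ = sock.recvfrom(1024)
            x, y, z = struct.unpack_from("<3f", packet)   # assumed packet layout
            msg = PoseStamped()
            msg.header.stamp = rospy.Time.now()
            msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
            pub.publish(msg)
        except socket.timeout:
            pass
        rate.sleep()
```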
S52, bebop_driver node. This node communicates with the target unmanned aerial vehicle (Bebop 2.0) over the UDP protocol; it can receive onboard navigation data, video images and other information, and at the same time sends control commands to the unmanned aerial vehicle. Because the unmanned aerial vehicle navigation data used by the platform are provided by the vps_driver node, the main purpose of the bebop_driver node is to send control commands to the unmanned aerial vehicle. The node subscribes to geometry_msgs/Twist messages on the /bebop/cmd_vel topic and to std_msgs/Empty messages on the /bebop/land and /bebop/reset topics, decodes the messages, and then sends them to the unmanned aerial vehicle over UDP, thereby controlling the unmanned aerial vehicle. Autonomous control commands are issued at 50 Hz.
S53, bebop_gui node. This node takes the form of a MATLAB GUI (see figure 10) and realizes direct take-off, landing and emergency-stop control (highest priority) of the Bebop 2.0 through simple button operations. A mouse-button command is packaged into the ROS message std_msgs/Empty: the take-off command is published to the topic /bebop/takeoff, the landing command to the topic /bebop/land, and the reset command to the topic /bebop/reset. All take-off, landing and emergency-stop commands are designed for simultaneous operation of multiple unmanned aerial vehicles. For simplicity, the interface illustrates simple operation of four unmanned aerial vehicles; if more unmanned aerial vehicles need to be controlled, the interface can be extended directly on this basis. Command publication is event-triggered.
S54, Simulink model node. This node is a MATLAB Simulink model running in the ROS environment that implements the visual perception and control algorithms, primarily through the Robotics System Toolbox. The node subscribes to the navigation data of the multi-camera system on the topic /vpsnavdata, i.e., the message vps_driver/vps_navdata, and then runs the vision-based control algorithms, which can include waypoint planning, trajectory tracking, obstacle avoidance, multi-vehicle formation control, and so on; the algorithm outputs serve as the desired inputs of the position controller. Finally, the output of the position controller is packaged into the ROS message geometry_msgs/Twist and published on the topic /bebop/cmd_vel. The command publication frequency depends on the Simulink execution rate (this platform publishes at 50 Hz).
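For reference, a minimal rospy sketch equivalent in role to this node (subscribe to /vpsnavdata, compute a command, publish Twist on /bebop/cmd_vel at 50 Hz) might look as follows; compute_command() is a placeholder for the reference trajectory plus controller implemented in the Simulink model.

```python
import rospy
from geometry_msgs.msg import PoseStamped, Twist

latest_pose = None

def navdata_cb(msg):
    global latest_pose
    latest_pose = msg

def control_node(compute_command):
    rospy.init_node("simulink_like_controller")
    rospy.Subscriber("/vpsnavdata", PoseStamped, navdata_cb)
    pub = rospy.Publisher("/bebop/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(50)                 # command rate used by the platform
    while not rospy.is_shutdown():
        if latest_pose is not None:
            cmd = compute_command(latest_pose)   # returns a geometry_msgs/Twist
            pub.publish(cmd)
        rate.sleep()
```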
The invention provides a multi-moving-body three-dimensional tracking and control platform based on a visual sensor network, comprising a multi-camera system module, a ground control system module, an infrared reflective ball module and an unmanned aerial vehicle module. The system integrates key technologies such as sensor layout optimization, global camera calibration and distributed pose estimation, realizes functions such as image acquisition and processing, camera calibration, pose estimation and motion control, enables rapid test verification of navigation, guidance, control and decision algorithms, and has strong practical significance and application value.
Drawings
FIG. 1 is a schematic diagram of a platform structure;
FIG. 2 is a schematic diagram of a platform operational flow;
FIG. 3 is a schematic diagram of a camera management algorithm flow;
FIG. 4 is a schematic flow chart of a calibration management algorithm;
FIG. 5 is a schematic flow chart of a trace management algorithm;
FIG. 6 is a schematic diagram of the closed-loop control block diagram of the ground control system;
FIG. 7 is a schematic view of an infrared reflective bead;
FIG. 8 is a schematic representation of a Bebop drone with infrared reflective beads;
FIG. 9 is a schematic diagram of a platform software architecture;
FIG. 10 is a schematic illustration of a drone control GUI interface;
FIG. 11 is a schematic diagram of a three-dimensional visual tracking and control platform for an unmanned aerial vehicle;
FIG. 12 is a multi-camera system software main interface;
FIG. 13 is a schematic diagram of the unmanned aerial vehicle trajectory tracking results during elliptical trajectory tracking;
FIG. 14 is a schematic view of horizontal trajectories of four unmanned aerial vehicles as they fly around an ellipse;
fig. 15 is a schematic diagram of a drone tracking flight status in the presence of noisy disturbances.
The symbols in the drawings are as follows:
Symbols in fig. 6: p_d and ψ_d are respectively the desired trajectory and desired yaw angle of the unmanned aerial vehicle; ᵉp, ᵉv and ψ are the actual trajectory, velocity and yaw angle of the unmanned aerial vehicle; u_h is the horizontal-channel control quantity of the controller, u_T is the height-channel control quantity, and u_ψ is the yaw-channel control quantity.
Symbols in fig. 13 and 14: the plotted quantities are the components of the three-dimensional position of the unmanned aerial vehicle.
Detailed Description
The technical scheme of the invention is further described below with reference to the drawings and the embodiments.
The three-dimensional tracking and control experiment platform of the invention is built on multiple smart cameras. As shown in fig. 11, eight smart cameras with infrared filters and external triggers form a visual sensor network covering the unmanned aerial vehicle activity area, and the cameras transmit the feature-point data of the captured target unmanned aerial vehicle to a ground navigation computer. On the navigation computer we developed a graphical user interface based on the Microsoft Foundation Classes (MFC) to visualize the current state and operating steps of the system (see FIG. 12) and to execute algorithms such as image data processing, camera calibration and pose estimation. The control computer, running ROS and the Simulink model, receives the navigation data from the navigation computer and then generates and transmits control signals to the drones over wireless WiFi. The unmanned aerial vehicle used in the platform is a Bebop 2.0 quadrotor carrying four infrared reflective balls. The main unit attributes of the platform are listed in Table 1.
Table 1 Main unit attributes of the platform (reproduced as an image in the original; it covers the multi-camera system and the ground control system)
According to one embodiment of the invention, an unmanned aerial vehicle three-dimensional tracking and control system based on a visual sensor network is provided, and closed-loop flight experiments are carried out for verification. The relevant experimental configuration is shown in Table 1, and several comprehensive performance evaluations were performed for the designed platform. First, for the multi-camera system, the accuracy and robustness of its pose estimation are evaluated. Then, based on the real-time navigation pose information provided by the multi-camera system and the ground control system, indoor unmanned aerial vehicle flight tests are performed to evaluate the three-dimensional visual tracking and control functions of the whole platform.
First, the static accuracy of the pose estimation of the multi-camera system is evaluated. Four stationary reflective balls are placed on the ground, forming a square with a side length of 60 cm; the global coordinates of the four balls are obtained with the visual tracking algorithm provided by the invention, from which the reconstructed length of each side of the square is further obtained. Table 2 compares the reconstructed lengths with the true length. The results show that the multi-camera system has high precision, with a reconstruction error of less than 3 mm.
Table 2 Static accuracy evaluation results for pose estimation of the multi-camera system (reproduced as an image in the original)
Then, with the multi-camera system providing real-time navigation pose data, the ground control system is used to perform several closed-loop control flight tests on the target unmanned aerial vehicles. In these flight tests, the position and yaw of the drone are controlled directly by the PD controller of our design (see equation (20)). The update rate of the visual measurements is 100 Hz; the system process noise and the measurement noise are assumed to be Gaussian white noise with error covariances of 0.0001 and 0.05, respectively. The PD control coefficients are set as: k_{T,P} = 0.8, k_{T,D} = 0.1, k_{ψ,P} = 0.4, K_{h,P} = diag(0.5, 0.5), K_{h,D} = diag(0.3, 0.3).
Firstly, a single-machine flight test experiment is carried out, the Bebop unmanned aerial vehicle is controlled to track an elliptical track, and fig. 13 shows an actual horizontal track of the unmanned aerial vehicle and a comparison result between a reference position and an actual position of the unmanned aerial vehicle. The result shows that the designed platform tracking control precision is high.
The scalability of the platform is then tested by evaluating how many target unmanned aerial vehicles the platform can track and control. It should be noted that as the number of target drones and the number of cameras increase, the bandwidth of the visual data also increases, placing a heavier burden on the processor. Furthermore, the effective field of view is limited by the number of external cameras and the layout scheme. In this experiment we tracked four Bebop drones and controlled them to fly around an elliptical trajectory. The center of the reference ellipse is the origin of the global coordinate system, and its semi-major and semi-minor axes are 3.5 m and 1.3 m, respectively. The actual horizontal trajectories of the drones are shown in figure 14. The results show that the designed platform can track and control four unmanned aerial vehicles with high control precision. Furthermore, to demonstrate the robustness and reliability of the proposed test platform, we added some anomalous and interfering reflective balls while the drones were flying under closed-loop control; fig. 15 shows example frames of these tracking scenarios. The results indicate reliable tracking control performance of the platform in the presence of interference and noise.

Claims (6)

1. The multi-moving body three-dimensional tracking and controlling platform based on the vision sensor network is characterized by comprising a multi-camera system, a ground control system, an onboard infrared reflecting small ball and a target unmanned aerial vehicle; the multi-camera system comprises a plurality of cameras with infrared filters, an infrared light supplementing source for enhancing light intensity, a synchronous trigger for synchronizing image data of the plurality of cameras and a navigation terminal computer for running a core vision algorithm; the interaction of multiple cameras forms a visual sensor network which jointly covers the unmanned plane active area; the ground control system receives navigation data from the multi-camera system and then uploads a control command to the target unmanned aerial vehicle through the wireless local area network; the data transmission is realized by utilizing a topic subscription and release mechanism of a robot operating system ROS, namely, a control end computer subscribes unmanned aerial vehicle navigation data in a wireless local area network, then processes the navigation data, combines a reference track and a controller to calculate a control instruction for the unmanned aerial vehicle, and finally releases the instruction through the wireless local area network; when a target unmanned aerial vehicle is observed by a plurality of cameras at the same time, three-dimensional position and attitude angle information of the unmanned aerial vehicle are acquired in real time.
2. The visual sensor network-based multi-motion body three-dimensional tracking and control platform according to claim 1, wherein the operation flow of the platform starts from the construction of a multi-camera system, including the deployment of all cameras and the setting of a computer; then, calibrating a plurality of cameras to describe the mapping relation between the three-dimensional space feature points and the corresponding two-dimensional image pixels, and then arranging the airborne reflective small balls on the target unmanned aerial vehicle; when the unmanned aerial vehicle flight test starts, the three-dimensional vision tracking main algorithm starts to work; firstly, capturing an image of an onboard small ball feature point by a camera, obtaining accurate pixel coordinates of the mark point by an image processing algorithm, and transmitting the pixel point coordinates to a navigation computer; then, three-dimensional reconstruction is carried out on the image point by utilizing the corresponding relation between the image point and the actual point through a triangulation principle; using the process model and the vision measurement model together by an EKF to estimate the pose of the unmanned aerial vehicle; then, broadcasting and publishing the pose information of the unmanned aerial vehicle in a wireless local area network, subscribing and processing by a Simulink model; establishing communication between the Simulink model and the ROS-enabled drone using a robotic system toolbox; finally, the control computer will process the visual navigation data and feed back control commands to the drone through ROS.
3. The multi-moving body three-dimensional tracking and control platform based on the visual sensor network according to claim 1, wherein the modules comprise three core function sub-modules of camera management, calibration management and tracking management; the camera management sub-module is used for performing image acquisition, processing and preview operations, acquiring real-time image information of the cameras, observing whether the placement positions among the cameras are reasonable or not, and assisting the layout operation of the cameras; the camera calibration operation is carried out through a calibration management sub-module, and the camera calibration operation comprises the steps of swinging to obtain calibration point data, running a calibration main algorithm and establishing a global coordinate system at a calibration origin; and the tracking management sub-module is used for carrying out rigid body modeling, rigid body tracking and three-dimensional window display operation and is used for observing the rigid body motion state in real time.
4. The multi-moving body three-dimensional tracking and control platform based on the visual sensor network according to claim 3, wherein three sub-modules are specifically introduced;
s11, realizing camera management submodule
The camera management function is to respectively perform image acquisition, feature point extraction and image display operation for each camera; adjusting the exposure time of the camera on line and the threshold used in threshold segmentation; before the camera is calibrated, the visual angle of the camera needs to be detected; the detection and extraction of the feature points are to process the gray level image obtained by the camera to obtain the pixel coordinates of the feature points; the image processing algorithm is placed at a camera end for processing; meanwhile, the feature point coordinates after direct processing are output to a calibration main algorithm; the steps of the algorithm include: thresholding, gaussian smoothing, contour acquisition and feature point extraction;
S111, thresholding; the image is segmented directly by gray thresholding: if the gray value of the original image at point (u, v) is I(u, v) and the thresholded gray value is I'(u, v), the thresholding operation is

$$I'(u,v)=\begin{cases} I(u,v), & I(u,v)\ge I_{T} \\ 0, & I(u,v)<I_{T} \end{cases}$$

where $I_{T}\in\mathbb{R}_{+}$ is the gray threshold, determined in practice according to the camera shutter, the exposure time and the ambient illumination intensity, and $\mathbb{R}_{+}$ denotes the set of positive real numbers;
s112, gaussian smoothing; replacing original pixel values in the image by using weighted average values of pixels in the neighborhood determined by the filter template; an image with resolution of MXN is subjected to a weighted average filtering process with size of MXn
Figure FDA0004136617300000024
Wherein a= (m-1)/2, b= (n-1)/2; a and b are defined as a= (m-1)/2, b= (n-1)/2, s is an integer between [ -a, a ], t is an integer between [ -b, b ], and w is a gaussian filter; in OpenCV, there is a corresponding function gaussian blur (), and besides the template size, there are also filtering coefficients of gaussian filtering in the horizontal and vertical directions that can be actually selected;
s113, contour acquisition; finding a plurality of blocks (blobs) in the image by utilizing the findContours function of OpenCV and marking, finding the outline by utilizing the 8 connected regions, and judging: the area in the region needs to be within a certain range; at the same time, the aspect ratio of the limited area is small;
S114, feature point extraction; after contour extraction, the contour blocks of all feature points have been determined, but each contour contains many pixels, so the center coordinates of each block are calculated using image moments; the image moment M_pq is computed as

$$M_{pq}=\sum_{u}\sum_{v}u^{p}v^{q}\,I(u,v)$$

where the pixel coordinates satisfy u, v ∈ ℕ₊, with ℕ₊ denoting the set of positive integers, and I(u, v) is the gray value of pixel (u, v); the sub-pixel center coordinates of each block can then be calculated directly as

$$u_{c}=\frac{M_{10}}{M_{00}},\qquad v_{c}=\frac{M_{01}}{M_{00}}$$

this completes the image processing at the camera end, and the data used by the subsequent calibration and tracking algorithms are the pixel coordinates of the feature points captured by each camera;
s12, realizing calibration management submodule
The calibration management function is to calibrate the camera parameters in the video sensor network based on a camera universal imaging model and convert a reference coordinate system from a certain reference camera coordinate system to an inertial coordinate system;
s121, establishing a general imaging model of the camera; introducing four coordinate systems, including the world coordinate system O w -X w Y w Z w Camera coordinate system O c -X c Y c Z c An image coordinate system O-XY and a pixel coordinate system UV; tracking target point P (X) w ,Y w ,Z w ) In the camera coordinate system (X c ,Y c ,Z c ) The method is obtained by rotary translation:
Figure FDA0004136617300000035
wherein R, T are rotation matrix and translation vector respectively; coordinates in camera coordinate system (X c ,Y c ,Z c ) Coordinates (x, y) to an image coordinate system are perspective projection transformation, and the triangle similarity relationship can be obtained:
Figure FDA0004136617300000036
Wherein f is the focal length of the camera; the image coordinate system coordinates (x, y) to the pixel coordinate system coordinates (u, v) satisfy the proportional relation:
Figure FDA0004136617300000037
wherein ,fx ,f y For the horizontal axis resolution, the vertical axis resolution, u 0 ,v 0 Is the principal point coordinates; the relationship of the transformation of the finally available three-dimensional points from the world coordinate system to the pixel coordinate system is as follows:
Figure FDA0004136617300000041
wherein
Figure FDA0004136617300000042
Is a camera with internal parameters>
Figure FDA0004136617300000043
Is a camera external parameter;
a generic imaging model was introduced, and (X w ,Y w ,Z w ) The relation with (u, v) is transformed into spherical coordinates
Figure FDA0004136617300000044
Where r·sin θ is defined as R (θ), i.e., the distance between an image point and a principal point (the intersection of the camera optical axis and the image plane), specifically:
r(θ)=k 1 θ+k 2 θ 3 +k 3 θ 5 +k 4 θ 7 +k 5 θ 9 +…(9)
wherein ,k1 ,k 2 ,k 3 ,k 4 ,k 5 θ is the included angle between the optical axis and the incident light;
then (X) w ,Y w ,Z w ) The relation with (u, v) is converted into an internal reference
Figure FDA0004136617300000045
External ginseng [ R T ]]Is determined;
s122, performing internal reference calibration, namely obtaining internal reference treatment by minimizing interpolation between imaging points and imaging model results; since the image center coordinates, the nominal pixel size is known, the specific formula is:
Figure FDA0004136617300000046
wherein, the interval [0, θ ]]Interpolation is divided into p parts (θ 12 ,...,θ j ,...,θ p );
S123, extrinsic parameter calibration according to the epipolar geometric constraint

$$m^{T}Em=0 \qquad(11)$$

where m denotes the spherical coordinates determined by the intrinsic parameters and E is the essential matrix to be solved; singular value decomposition of E yields the rotation matrix R and the translation vector T;
S124, jointly optimizing the intrinsic and extrinsic parameters, where the optimization objective is to minimize the three-dimensional reconstruction error, solved by the Levenberg-Marquardt algorithm;
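A short sketch of such a refinement using SciPy's Levenberg–Marquardt solver; the residual function and the packing of the intrinsic/extrinsic parameters into one vector are placeholders, not the original implementation.

```python
from scipy.optimize import least_squares

def refine_parameters(params0, residual_fn):
    """Refine stacked intrinsic/extrinsic parameters with Levenberg-Marquardt.

    residual_fn(params) must return the vector of three-dimensional
    reconstruction (or reprojection) errors for the current estimate.
    """
    result = least_squares(residual_fn, params0, method="lm")
    return result.x
```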
S125, according to the optimal path from the reference camera to each of the other cameras, sequentially determining along that path the initial values of the extrinsic parameters of camera No. i, i = 0, …, M, relative to the No. 0 camera coordinate system, and repeating steps S121-S124 until the initial extrinsic parameters of camera No. M relative to the No. 0 camera coordinate system have been calculated;
S126, according to the calibration results, determining the extrinsic parameters of the i-th camera coordinate system, i = 0, …, M, relative to the world coordinate system, thereby transforming the reference coordinate system from the camera to the world coordinate system as follows:
S1261, obtaining, from the calibration result and the feature points on the triangle, the initial values of the extrinsic parameters from the inertial coordinate system to the No. 0 camera coordinate system, wherein {e} denotes the inertial coordinate system and {c_i} the No. i camera coordinate system;
S1262, the extrinsic parameters from the inertial coordinate system {e} to the No. i camera coordinate system {c_i} are obtained from the inertial-to-No. 0-camera extrinsics and the calibrated camera-to-camera extrinsics through a composition of transformations (the relation is given as an image in the original); combining the calibrated intrinsic and extrinsic parameters then yields the projection coordinates of the feature points onto each camera;
S1263, performing nonlinear optimization to obtain the optimized extrinsic parameters of camera No. i, i = 0, …, M, relative to the world coordinate system;
The camera calibration module thus provides the following function: according to the determined extrinsic parameters, for camera pairs that share the same field-of-view region and capture the image-plane projections of the same feature points, three-dimensional reconstruction of the spatial feature points is carried out based on the triangulation principle, restoring the feature points to their three-dimensional positions in the world coordinate system;
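For illustration, a minimal sketch of the triangulation step, using the standard linear (DLT) formulation for one feature point observed by several calibrated cameras; the original may use a different triangulation variant.

```python
import numpy as np

def triangulate(uv_list, P_list):
    """Linear (DLT) triangulation of one feature point seen by several cameras.

    uv_list : list of (u, v) pixel observations
    P_list  : list of corresponding 3x4 projection matrices K [R | T]
    """
    A = []
    for (u, v), P in zip(uv_list, P_list):
        A.append(u * P[2] - P[0])                # two linear constraints per view
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                                   # homogeneous least-squares solution
    return X[:3] / X[3]                          # inhomogeneous world coordinates
```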
S13, realizing the tracking management submodule
The tracking management function covers modelling of the rigid-body feature points and online pose estimation; the calculated pose information is displayed online and broadcast over the network using UDP; taking the three-dimensionally reconstructed position information as input, the rigid-body tracking module comprises a process model, an observation model and online pose estimation;
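A minimal sketch of the UDP broadcast of an estimated pose; the port, broadcast address and packing format are assumptions for illustration, not the platform's actual protocol.

```python
import socket
import struct

def broadcast_pose(rigid_id, position, euler, port=9870):
    """Broadcast one rigid body's pose (id, x, y, z, pitch, roll, yaw) over UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    payload = struct.pack("<i6f", rigid_id, *position, *euler)   # little-endian int + 6 floats
    sock.sendto(payload, ("255.255.255.255", port))
    sock.close()
```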
S131, process model; because the sampling period of the camera is very small, the motion of the rigid body within each sampling interval can be simplified to a uniform-velocity model; the system state is set to x = [^e p^T  ^e v^T  Θ^T  ^b ω^T]^T, wherein ^e p denotes the three-dimensional position of the rigid body in the world coordinate system, with ^e p = T; ^e v denotes the three-dimensional velocity of the rigid body in the world coordinate system; Θ denotes the pitch, roll and yaw angles of the rigid body; ^b ω denotes the three-dimensional angular velocity of the rigid body in the body frame, {b} being the body coordinate system; the general linear uniform-velocity motion model of the rigid body is (continuous-time form given as an image in the original):
wherein γ_1, γ_2 are Gaussian white noise; assuming T_s denotes the sampling period, the discrete form is obtained by the first-order backward difference method:
x_k = A·x_{k-1} + γ_k   (14)
where the state-transition matrix A is given by equation (15) (matrix given as an image in the original);
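A sketch of the discrete uniform-velocity process model (14); the block structure of A shown here (position integrating velocity and attitude integrating angular rate over one sample period) is an assumed form, since the matrix of equation (15) is available only as an image.

```python
import numpy as np

def transition_matrix(Ts):
    """Assumed constant-velocity state-transition matrix for x = [p, v, Theta, omega]."""
    I3, Z3 = np.eye(3), np.zeros((3, 3))
    return np.block([[I3, Ts * I3, Z3, Z3],       # p_k     = p_{k-1}     + Ts * v_{k-1}
                     [Z3, I3,      Z3, Z3],       # v_k     = v_{k-1}     + noise
                     [Z3, Z3,      I3, Ts * I3],  # Theta_k = Theta_{k-1} + Ts * omega_{k-1}
                     [Z3, Z3,      Z3, I3]])      # omega_k = omega_{k-1} + noise

x_prev = np.zeros(12)
x_pred = transition_matrix(0.01) @ x_prev         # one prediction step at 100 Hz
```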
S132, observation model; from the imaging of the feature points, the observation model describing the relationship between the output measurements and the system state is obtained as:
z_k = h(x_k) + v_k   (16)
wherein z_k stacks, for each camera No. i, the measurement vector composed of the n_F feature points observed by that camera, and every element of v_k is independent, identically distributed Gaussian white noise with zero mean;
S133, online pose estimation; based on the process model and the observation model, an EKF is established; after initialization, the prediction step uses the state estimate x̂_{k-1|k-1} and the error covariance P_{k-1|k-1} at time k-1 to predict the state estimate x̂_{k|k-1} and the error covariance P_{k|k-1} at time k; the update step then corrects the prediction to obtain the final estimates x̂_{k|k} and P_{k|k}, i.e., the observed pose information.
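A minimal sketch of one EKF predict/update cycle as described in S133; the observation function h, its Jacobian H and the noise covariances Q, R are supplied by the caller and are placeholders here.

```python
import numpy as np

def ekf_step(x_est, P_est, z, A, Q, R, h, H_jac):
    """One EKF cycle: predict with x_k = A x_{k-1}, update with z_k = h(x_k) + v_k."""
    # prediction
    x_pred = A @ x_est
    P_pred = A @ P_est @ A.T + Q
    # update
    H = H_jac(x_pred)                             # Jacobian of h at the prediction
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new
```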
5. The multi-moving-body three-dimensional tracking and control platform based on a visual sensor network according to claim 1, wherein the ground control system module is built to run mainly on the Ubuntu system of a ground computer; a Simulink model runs in the ROS environment, subscribes to the real-time navigation pose data published by the multi-camera system, makes control decisions, and finally controls the multi-rotor unmanned aerial vehicle to perform tracking, realizing waypoint flight, real-time obstacle avoidance and formation control; a three-channel model of the multi-rotor unmanned aerial vehicle is established and corresponding controllers are designed;
S21, establishing the three-channel model;
The control model of the multi-rotor unmanned aerial vehicle is simplified by linearization, i.e., it is split into three channels for design: a height channel, a yaw channel, and a horizontal position channel;
S211, height channel model; this refers to the channel from the throttle stick control command u_T to the position p_z in the z direction, modeled specifically as equation (17) (given as an image in the original), wherein the model parameters are determined by the semi-autonomous autopilot and are regarded as unknown;
S212, yaw channel model; this refers to the channel from the yaw stick control command to the yaw angle ψ, modeled specifically as equation (18) (given as an image in the original), wherein the model parameters are determined by the semi-autonomous autopilot and are regarded as unknown;
S213, horizontal position channel model; this refers to the channel from the roll/pitch stick control command u_h = [u_φ  u_θ]^T to the horizontal position, modeled specifically as equation (19) (given as an image in the original), wherein the horizontal velocity, the roll/pitch angles Θ_h = [φ  θ]^T, the horizontal angular velocity and the horizontal rotation matrix from the body coordinate system to the inertial coordinate system appear in the model, and the remaining parameters determined by the semi-autonomous autopilot are regarded as unknown;
S22, designing the controllers;
Given a reference inertial position and a reference yaw angle, the inertial position, velocity and attitude angles of the aircraft are acquired in real time through the multi-camera system; the errors with respect to the reference values are fed through the position controller to obtain the three-channel control quantities; the control quantities are transmitted directly to the aircraft through ROS, and the aircraft moves to the specified position at the specified yaw angle through its inner-loop control and dynamic model;
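A minimal sketch of publishing the three-channel control quantities over ROS from the ground computer; the node name, topic and message layout are hypothetical, not taken from the original platform.

```python
import rospy
from std_msgs.msg import Float32MultiArray

rospy.init_node("ground_controller")                        # hypothetical node name
pub = rospy.Publisher("/uav0/control_cmd", Float32MultiArray, queue_size=1)

def send_command(u_T, u_psi, u_phi, u_theta):
    """Publish [throttle, yaw, roll, pitch] stick commands to the (hypothetical) topic."""
    msg = Float32MultiArray(data=[u_T, u_psi, u_phi, u_theta])
    pub.publish(msg)
```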
Additional controllers are designed on the basis of the three-channel models (17)-(19) in self-stabilizing mode to complete the trajectory tracking task; given a desired trajectory p_d(t) of the unmanned aerial vehicle and a desired yaw angle ψ_d(t), the control inputs u_T, the yaw control input and u_h are designed so that, as t → ∞, the state output of the unmanned aerial vehicle satisfies ||x(t) − x_d(t)|| → 0, where x = [p^T  ψ]^T and x_d = [p_d^T  ψ_d]^T.
The PD controller designed for the above unmanned aerial vehicle control model and control objective is as follows (controller equations given as an image in the original), wherein k_{T,P}, k_{T,D}, k_{ψ,P}, k_{ψ,D} ∈ R^+ and K_{h,P}, K_{h,D} ∈ R^{2×2} are the controller parameters, which are set manually, with k_{ψ,D} = 0.
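A sketch of PD laws of the kind described for the three channels; since the controller equations are available only as an image, the exact structure and sign conventions here are assumptions.

```python
import numpy as np

def pd_control(p, v, psi, p_d, v_d, psi_d, kTP, kTD, kpsiP, KhP, KhD):
    """Assumed PD laws for the height, yaw and horizontal channels (k_psi,D = 0)."""
    u_T = kTP * (p_d[2] - p[2]) + kTD * (v_d[2] - v[2])        # height channel
    u_psi = kpsiP * (psi_d - psi)                              # yaw channel, proportional only
    e_h, ed_h = p_d[:2] - p[:2], v_d[:2] - v[:2]
    u_h = KhP @ e_h + KhD @ ed_h                               # horizontal position channel
    return u_T, u_psi, u_h
```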
6. The multi-moving-body three-dimensional tracking and control platform based on a visual sensor network according to claim 1, wherein the reflective marking points are silver-grey balls whose surfaces are coated with a special reflective material and which are provided with double-sided adhesive bases, so that they are easily fixed on top of the moving bodies; the onboard infrared reflective balls are typically mounted directly on the target drone and are captured by cameras fitted with infrared filters, providing reliable and easily extracted image point features; the placement of the infrared reflective balls, i.e., their relative positions on the target drone, is also an important factor in the platform design; the basic placement principle is that the balls may be placed freely but must be arranged asymmetrically, with the centre of the ball arrangement coinciding with the centre of gravity of the unmanned aerial vehicle; in addition, in order to obtain accurate and robust position and attitude-angle information simultaneously, at least 4 balls are used and they must not lie in the same plane.
CN202310276977.5A 2023-03-21 2023-03-21 Multi-moving body three-dimensional tracking and controlling platform based on visual sensor network Pending CN116149371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310276977.5A CN116149371A (en) 2023-03-21 2023-03-21 Multi-moving body three-dimensional tracking and controlling platform based on visual sensor network

Publications (1)

Publication Number Publication Date
CN116149371A true CN116149371A (en) 2023-05-23

Family

ID=86350730

Country Status (1)

Country Link
CN (1) CN116149371A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974766A (en) * 2024-03-28 2024-05-03 西北工业大学 Multi-target identity judging method of distributed double infrared sensors based on space-time basis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination