CN114003041A - Multi-unmanned vehicle cooperative detection system - Google Patents

Multi-unmanned vehicle cooperative detection system

Info

Publication number
CN114003041A
CN114003041A
Authority
CN
China
Prior art keywords
target
intelligent
trolley
information
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111287343.7A
Other languages
Chinese (zh)
Inventor
杨雪榕
李嘉轩
张艳
杨起帆
童鹏飞
王龙飞
柳祥林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202111287343.7A
Publication of CN114003041A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0219Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory ensuring the processing of the whole working surface
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a multi-unmanned vehicle cooperative detection system, which comprises: an intelligent trolley module, used for carrying the other modules and simulating vehicle motion; a monocular-camera-based relative measurement module which, starting from a monocular camera whose initial position is known in world coordinates, continuously measures the distance and angle to marked targets during motion and thereby calculates the target distance information and the position of the intelligent trolley; a single-line-radar-based relative measurement module, used for identifying and positioning the intelligent trolleys and estimating target motion by scanning a two-dimensional point cloud, clustering the point cloud and analyzing its features; and a distributed cluster planning control module which, after calibrating the field global coordinate system and the trolleys' initial positions, performs cluster planning control based on the measurement information, the intelligent trolley information and a virtual-potential-field dynamic planning algorithm. With the invention, the unmanned vehicles are placed under cluster control and detection of the environment is completed. As a multi-unmanned vehicle cooperative detection system, the invention can be widely applied in the field of cluster control.

Description

Multi-unmanned vehicle cooperative detection system
Technical Field
The invention relates to the field of cluster control, in particular to a multi-unmanned vehicle cooperative detection system.
Background
Research on intelligent cluster systems has developed vigorously over the last two decades: extensive theoretical work has been carried out, and verification systems built mainly on digital simulation, supplemented by semi-physical simulation, have been established. Cluster verification systems can be divided into centralized and distributed systems according to whether a control-center node exists. Because the centralized architecture is simple, earlier cluster-algorithm verification was mostly carried out on centralized systems, which use a global positioning facility as the sensing means (for example GPS or an overhead camera); the control center completes the collaborative planning for the agents, and each individual acts only as an executor of the task.
Research on cluster problems necessarily has to solve positioning and navigation. The centralized approach, however, limits the experimental range to the coverage of its infrastructure, and the global positioning method (GPS) is degraded by signal conditions indoors.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a multi-unmanned vehicle cooperative detection system which performs cluster control on unmanned vehicles to complete detection of an environment.
The first technical solution adopted by the invention is as follows: a multi-unmanned vehicle cooperative detection system, comprising:
an intelligent trolley module, used for carrying the other modules and simulating vehicle motion;
a monocular-camera-based relative measurement module which, starting from a monocular camera whose initial position is known in world coordinates, continuously measures the distance and angle to marked targets during motion and thereby calculates the target distance information and the position of the intelligent trolley;
a single-line-radar-based relative measurement module, used for identifying and positioning the intelligent trolleys and estimating target motion by scanning a two-dimensional point cloud, clustering the point cloud and analyzing its features;
and a distributed cluster planning control module which, after calibrating the field global coordinate system and the trolleys' initial positions, performs cluster planning control based on the measurement information, the intelligent trolley information and a virtual-potential-field dynamic planning algorithm.
Further, the intelligent trolley module includes:
a motion platform with an STM32F103R8T6 as its control core, used for speed and steering control;
an environment sensing unit with an NVIDIA Jetson Nano as its processing core, used for sensing the state of the intelligent trolley, perceiving the local environment, and identifying and tracking targets;
and an inertial navigation unit, used for acquiring the motion information of the intelligent trolley.
Further, the step of measuring the distance based on the monocular camera relative measurement module specifically includes:
acquiring an image of a front target two-dimensional code based on a monocular camera;
detecting the two-dimensional code in the area and marking it with a rectangular identification frame to obtain rectangular-frame information;
finding out two image plane coordinates of the top edge of the rectangular frame by combining the information of the rectangular frame;
deriving target plane coordinates according to the two image plane coordinates of the top edge based on a geometric relation derivation method;
and calculating the distance information between the target and the intelligent trolley by an Euclidean distance formula based on the target plane coordinates.
Further, the angle measurement step based on the monocular camera relative measurement module specifically includes:
acquiring the principal distance of the monocular camera;
finding out the image plane coordinates of the corner points of the rectangular frame by combining the rectangular frame information;
calculating the pixel coordinates of the target center point according to the image plane coordinates of the rectangular frame corner points;
converting the pixel coordinates of the target center point into image coordinates;
converting the image coordinates of the target center point into camera coordinates;
and calculating the azimuth angle and elevation angle of the target by polar-coordinate conversion based on the camera coordinates.
Further, the calculation formula of the azimuth angle and the elevation angle of the target is as follows:
[formula given as an image in the original publication]
where θ denotes the azimuth angle, φ denotes the elevation angle, f denotes the monocular-camera principal distance, and (x, y, z) are the camera coordinates.
Further, the working steps based on the single line radar relative measurement module specifically include:
scanning two-dimensional point cloud based on a single-line radar to obtain point cloud data;
arranging the point cloud data into one-dimensional vectors and sequentially clustering the point cloud data according to the serial numbers of the point clouds to obtain point cloud clusters;
extracting line segments and angular points of the point cloud cluster to obtain two-dimensional features;
based on the matching of the two-dimensional features and the trolley appearance features in the preset template library, calculating the center position coordinates and postures of other trolleys;
and performing motion estimation according to the coordinates and the postures of the center positions of other trolleys.
Further, the method also comprises the following cooperative positioning steps:
when an intelligent trolley with positioning information is judged to exist, cooperative positioning within the intelligent trolley cluster is completed by an absolute positioning method;
defining the intelligent vehicle with the positioning information as an anchor point;
determining the self-motion state of the anchor point at a certain moment and measuring the motion states of other intelligent trolleys to obtain the self-motion state and the measurement state of the anchor point;
updating the positioning of the corresponding intelligent trolley by combining a Kalman filtering method according to the self motion state and the measurement state of the anchor point to obtain the intelligent trolley completing the positioning;
and defining the intelligent vehicle which finishes the positioning as another anchor point and circulating the cooperative positioning steps until all the intelligent vehicles finish the positioning.
Further, the working steps of the distributed cluster planning control module specifically include:
defining an initial absolute coordinate system according to the boundary of the field, placing a plurality of trolleys in the field and recording the initial coordinates of the trolleys;
updating the position of the intelligent trolley according to the motion information of the intelligent trolley;
calculating to obtain distance information generated by the environment source according to the monocular camera-based relative measurement module and the single-line radar-based relative measurement module;
converting the distance information generated by each environment source into a potential field function by the virtual potential field method, and superposing the potential field functions to obtain the desired velocity of the intelligent trolley at that moment;
and controlling the intelligent trolley through a PID controller according to the expected speed to complete the tasks of capturing the target and dynamically avoiding the obstacle of the intelligent trolley.
The beneficial effects of the invention are as follows: relative measurement and information fusion are performed with on-board sensors, so that each intelligent trolley determines its positional relationship to its neighbours and to the environment from relative measurements; on this basis, planning control is performed on the intelligent trolleys in the cluster, completing detection of an unknown environment and dynamic obstacle avoidance.
Drawings
FIG. 1 is a schematic diagram of the operation of a multiple unmanned vehicle cooperative detection system according to the present invention;
FIG. 2 is a schematic diagram of the working steps of a monocular camera-based relative measurement module according to an embodiment of the present invention;
FIG. 3 is a schematic view of a camera imaging process according to an embodiment of the present invention;
FIG. 4 is a flowchart of an algorithm for clustering in accordance with an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
Referring to fig. 1, the present invention provides a multi-unmanned vehicle cooperative detection system, including:
and the intelligent trolley module is used for carrying the module and simulating the motion of the vehicle.
Based on a monocular camera relative measurement module and based on a monocular camera with an initial position of a known world coordinate, distance information of a target and the position of the intelligent trolley are calculated by continuously measuring distance and angles of the target attached with a mark in the motion process;
specifically, the monocular camera relative measurement based on the two-dimensional target is a monocular camera at the starting position based on the known world coordinate, and the self-positioning and target tracking in the experimental process can be realized by continuously measuring the distance and the angle of the target attached with the two-dimensional mark in the motion process.
In the relative measurement, the distance, azimuth angle and elevation angle R, a, E obtained by the camera measurement are all spherical coordinate quantities, but our relative needs are performed in the camera coordinate system, so that the conversion of the camera coordinate system and the measurement (spherical) coordinate system is necessarily involved therein. Distance measurement and angle measurement (azimuth angle and elevation angle) are respectively carried out, wherein the monocular camera needs to know the actual size of the target when carrying out distance measurement, and then the focal length information of the camera is calculated, and the measurement of the target distance is completed. The angle measurement is based on the known inner orientation element camera principal distance f, and the elevation angle and the azimuth angle are calculated by using the projection principle according to the projection position of the tracking target on the imaging plane. Fig. 2 is a flow chart of camera relative measurement.
a single-line-radar-based relative measurement module, used for identifying and positioning the intelligent trolleys and estimating target motion by scanning a two-dimensional point cloud, clustering the point cloud and analyzing its features;
specifically, the relative measurement based on the single line laser radar is realized by scanning two-dimensional point cloud through the radar, then carrying out target identification and positioning on the intelligent autonomous trolley attached with the white shell according to point cloud clustering and characteristic analysis, estimating target motion, and realizing self positioning in the experimental process and positioning posture estimation of adjacent intelligent autonomous trolleys.
and a distributed cluster planning control module which, after calibrating the field global coordinate system and the trolleys' initial positions, performs cluster planning control based on the measurement information, the intelligent trolley information and a virtual-potential-field dynamic planning algorithm.
As a further preferred embodiment of the method, the intelligent trolley module comprises:
a motion platform with an STM32F103R8T6 as its control core, used for speed and steering control;
an environment sensing unit with an NVIDIA Jetson Nano as its processing core, used for sensing the state of the intelligent trolley, perceiving the local environment, and identifying and tracking targets;
and the inertial navigation unit is used for acquiring the motion information of the intelligent trolley.
Specifically, an intelligent autonomous trolley based on a Jetson Nano environment-perception processor and an STM32 low-level control processor is designed. The trolley has autonomous motion control; environment-perception expansion modules such as a visible-light camera, a single-line laser radar and an IMU module are added according to task requirements, and a gigabit network card is fitted. The trolley can update its own motion trajectory in real time, identify targets carrying cooperative markers, perceive obstacles and neighbours in an unknown environment, and communicate over a local area network.
Each intelligent autonomous trolley realizes the intelligent-cluster function through modifications to the trolley model. The trolley adopts a front-wheel-steering, rear-wheel-drive power train; the front wheels are connected to a steering servo, which steers the two front wheels through a transmission linkage. Power is provided by an on-board brushed DC motor driven by a full-bridge motor driver, allowing both forward and reverse motion; the motor is coupled to a rotary encoder through a gear so that the real-time speed of the drive motor can be measured accurately, giving the radial speed of the vehicle motion and realizing radial closed-loop control. While the trolley moves, the change of body heading caused by the steering servo is measured by an on-board 9-axis inertial measurement unit (IMU), realizing lateral closed-loop control.
Further as a preferred embodiment of the method, the step of measuring the distance based on the monocular camera relative measurement module specifically includes:
s1, acquiring an image of the target two-dimensional code ahead with the monocular camera;
s2, detecting the two-dimensional code in the area and marking it with a rectangular identification frame to obtain rectangular-frame information;
s3, finding out the two image-plane coordinates (u1, v1) and (u2, v2) of the top edge of the rectangular frame from the rectangular-frame information;
s4, deriving the target plane coordinates (x1, y1) and (x2, y2) from the two top-edge image-plane coordinates (u1, v1) and (u2, v2) based on geometric relations;
and s5, calculating the distance information d between the target and the intelligent trolley with the Euclidean distance formula, based on the target plane coordinates.
Specifically, monocular visual ranging derives depth information from the image of a single camera; according to the measurement principle it is mainly divided into methods based on known motion and methods based on a known object. Here we use the known-object method, which obtains depth information from a target image captured by the camera when the object's dimensions are known.
Similar triangles are used to calculate the distance of the camera to a known object or target. Assuming that there is a target or object with a width W, then placing the target at a distance D from the camera, taking a picture of the object with the camera and measuring the pixel width P of the object, the calculation formula for the camera focal length f can be derived:
f=(P*D)/W
when the camera is moved close to or away from an object or target, the distance d of the object from the camera is calculated by using a similar triangle:
d=(W*f)/P
and obtaining the distance d from the target to the camera, wherein P refers to the pixel distance, W refers to the width of the two-dimensional code mark, and f refers to the focal length of the camera.
Further as a preferred embodiment of the method, the step of measuring the angle based on the monocular camera relative measurement module specifically includes:
acquiring the principal distance f of the monocular camera;
finding out the image-plane coordinates of the four corner points of the rectangular frame from the rectangular-frame information, denoted (u1, v1), (u2, v2), (u3, v3), (u4, v4);
calculating the pixel coordinates (u5, v5) of the target center point from the image-plane coordinates of the rectangular-frame corner points;
converting the pixel coordinates (u5, v5) of the target center point into image coordinates (x, y);
converting the image coordinates (x, y) of the target center point into camera coordinates (X, Y, Z);
and calculating the azimuth angle θ and elevation angle φ of the target by polar-coordinate conversion based on the camera coordinates.
Specifically, the angle measurement uses the imaging principle. Referring to fig. 3, the imaging point of the target in the image plane is P(u, v); based on the known principal distance f of the camera and the actual size dx corresponding to each pixel on the image, the two line-of-sight angles of the camera in the horizontal and vertical directions, θ and φ, can be calculated.
Further as a preferred embodiment of the method, the calculation formula of the azimuth angle and the elevation angle of the target is as follows:
[formula given as an image in the original publication]
where θ denotes the azimuth angle, φ denotes the elevation angle, f denotes the monocular-camera principal distance, and (x, y, z) are the camera coordinates.
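The exact formula above appears only as an image in the original, so the sketch below uses the standard pinhole-model relations as a hedged illustration of this projection-based angle computation; the principal-point and pixel-size values are assumptions:

import math

def line_of_sight_angles(u, v, u0, v0, f_mm, pixel_size_mm):
    # offsets of the image point from the principal point, converted to millimetres
    x = (u - u0) * pixel_size_mm
    y = (v - v0) * pixel_size_mm
    theta = math.atan2(x, f_mm)               # horizontal (azimuth-like) line-of-sight angle
    phi = math.atan2(y, math.hypot(x, f_mm))  # vertical (elevation-like) line-of-sight angle
    return theta, phi

# usage sketch with an assumed 640x480 image, 4 mm principal distance, 0.006 mm pixels
print(line_of_sight_angles(400, 200, 320, 240, 4.0, 0.006))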
Further as a preferred embodiment of the method, the working steps of the relative measurement module based on the single line radar specifically include:
scanning two-dimensional point cloud based on a single-line radar to obtain point cloud data;
arranging the point cloud data into one-dimensional vectors and sequentially clustering the point cloud data according to the serial numbers of the point clouds to obtain point cloud clusters;
specifically, referring to fig. 4, the clustering process is to cluster the same or similar data into a cluster, where the clustering algorithm is used to distinguish point clouds from different targets, so as to sense the surrounding environment. The point cloud data obtained by the single-line laser radar are ordered, so that the point clouds are arranged into one-dimensional vectors, the point cloud data are sequentially clustered according to the serial numbers of the point clouds, and whether the point cloud data are the reflection information of the same object or not is judged according to the distance between adjacent points.
Extracting line segments and angular points of the point cloud cluster to obtain two-dimensional features;
specifically, in a plurality of point cloud clusters obtained by clustering, in order to identify the classification of a target from the point cloud clusters, two-dimensional features of the point cloud need to be extracted. Here we identify the target by extracting line segments and corner points. The line segment characteristics can be judged through the residual error of straight line fitting, and the angular point characteristics can be judged through the included angle of the two line segments after the line segment characteristics are identified. And calculating the line segment by taking the feet of the initial point and the final point in the point cloud cluster on the fitting straight line as end points.
And calculating the coordinates and postures of the center positions of other trolleys based on the matching of the two-dimensional features and the trolley appearance features in the preset template library.
In particular, in a cooperative task of the unmanned-vehicle cluster it is necessary to identify whether a detected object is another unmanned vehicle or an obstacle. To do so, the detected two-dimensional features are matched against the known trolley appearance features: the outer contour of a known trolley is a rectangle of length L and width W. The coordinates and attitude of the trolley center are calculated by matching the lengths and widths of the straight-line features and matching right angles against the corner features.
The pose of the unmanned vehicle is calculated according to the detected geometric shapes, and then the central position and the orientation of the unmanned vehicle are calculated.
When a straight-line target is detected, the fitted line is y = kx + b and the midpoint of the detected segment is Pm = [xm, ym]. Because the center of the trolley lies at the far side of the trolley frame relative to the radar, the quadrant in which the trolley lies must be determined first. When the trolley is in the first or fourth quadrant, the direction vector u is:
[formula given as an image in the original publication]
When the trolley is in the second or third quadrant, the direction vector u is:
[formula given as an image in the original publication]
When the detected line segment is the long edge of the trolley, the position of the trolley is
P = Pm + uW    (a)
and the orientation of the trolley is arctan k or arctan k + π.
When the detected line segment is the wide edge of the trolley, the position of the trolley is
P = Pm + uL    (b)
and the orientation of the trolley is arctan k + 0.5π or arctan k + 1.5π.
When a corner-point target is detected, two straight lines are fitted, giving y = k1x + b1 and y = k2x + b2. The coordinates of the two end points are P1 and Pn respectively. The center position can then be calculated from either line segment as in (a) and (b), or represented by the midpoint of the segment connecting the end points of the two line segments:
[formula given as an image in the original publication]
similarly, the orientation can be calculated by using any line segment.
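A hedged sketch of recovering the trolley center and heading from one fitted edge, following equations (a)/(b) above; because the definition of the direction vector u appears only as formula images in the original, the choice here (unit normal of the fitted edge, oriented away from the radar at the origin) is an assumption:

import numpy as np

def pose_from_edge(k, p_m, edge_is_long, L=0.30, W=0.20):
    # p_m: midpoint of the detected segment; the trolley center lies on the far side of the frame
    p_m = np.asarray(p_m, dtype=float)
    u = np.array([-k, 1.0]) / np.hypot(1.0, k)   # assumed u: unit normal of the fitted line y = k*x + b
    if np.dot(u, p_m) < 0:                       # flip so the offset points away from the radar
        u = -u
    offset = W if edge_is_long else L            # offset magnitude as written in (a) and (b)
    centre = p_m + offset * u
    heading = np.arctan(k) if edge_is_long else np.arctan(k) + 0.5 * np.pi  # one of the two candidates
    return centre, heading

print(pose_from_edge(k=0.0, p_m=[1.0, 0.5], edge_is_long=True))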
And performing motion estimation according to the coordinates and postures of the central positions of other trolleys, and performing multi-target track maintenance:
in the unmanned cluster cooperative task, a vehicle-mounted radar can detect a plurality of targets simultaneously. In order to maintain the trajectories of the cluster members stably and know the motion states of the peers, the observation quantity and the maintenance state need to be matched. The multi-target estimation maintenance can be divided into a matching process and a Kalman filtering process, and the key steps are as follows: kalman prediction, the Hungarian algorithm matches the predicted track with the current observation, and the Kalman state is updated. For unmatched traces, they may be retained in the time domain; for unmatched observations, new trajectories can be established or deleted according to the specific conditions of the clusters. In this system, the state of maintenance is estimated as X ═ X, y, ψ, v, ω]Position, heading, velocity and angular velocity of the target. Observed value is D ═ x, y, psi12]The position and orientation of the target. Since the car extracted from the straight line is oriented in two directions, the direction in which the observed and predicted values of two consecutive frames are closest to each other is determined when initializing the trajectory.
The Hungarian algorithm is a classical algorithm for solving the assignment problem; it finds the observation-prediction matching with the minimum total loss. The loss is defined as cost = α·dis_cost + β·ori_cost, where dis_cost is the distance between the observed and predicted positions and ori_cost is the difference between the observed and predicted orientations. Since each observation carries two opposite candidate orientations, the one with the smaller ori_cost is chosen as the orientation of the observation. α and β are weighting coefficients.
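A minimal sketch of the matching step only; the coefficients α and β, the gating threshold and the track/observation format are assumptions, and the Kalman prediction and update steps are omitted:

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(pred, obs, alpha=1.0, beta=0.5, gate=1.0):
    # pred, obs: arrays of [x, y, psi]; the orientation cost uses the smaller of the
    # two candidate headings, since a heading extracted from a line is ambiguous by pi
    cost = np.zeros((len(pred), len(obs)))
    for i, p in enumerate(pred):
        for j, o in enumerate(obs):
            dis_cost = np.hypot(p[0] - o[0], p[1] - o[1])
            d_psi = abs(p[2] - o[2]) % np.pi
            ori_cost = min(d_psi, np.pi - d_psi)
            cost[i, j] = alpha * dis_cost + beta * ori_cost
    rows, cols = linear_sum_assignment(cost)            # minimum-total-cost assignment
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
    unmatched_tracks = set(range(len(pred))) - {i for i, _ in matches}
    unmatched_obs = set(range(len(obs))) - {j for _, j in matches}
    return matches, unmatched_tracks, unmatched_obs

pred = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 1.0]])
obs = np.array([[2.1, 1.9, 1.1], [0.1, -0.1, 0.05]])
print(match_tracks(pred, obs))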
Further as a preferred embodiment of the method, the method further comprises the following co-location step:
particularly, autonomous cooperation of the unmanned system requires sufficiently high positioning accuracy, and the current outdoor positioning often depends on a GPS and the indoor positioning depends on a dynamic vision capture system. There are still some task scenarios that are limited such that some or even all of the unmanned vehicles cannot receive the positioning information of the positioning assistance system. In this case, the co-location technique becomes the best choice. In a group of unmanned aerial vehicles, each individual can get feedback by measuring the positions of other teammates, and then jointly estimate the positions of the individual and the teammates. The most obvious advantage of cooperative positioning is that accurate absolute positioning information obtained by any individual can be transmitted to each unmanned aerial vehicle through relative measurement in the set, and the unmanned aerial vehicle only depends on measurement load carried by the unmanned aerial vehicle and not external equipment, so that the cost is reduced, and the robustness of coping with severe environments is improved. Meanwhile, the cooperative positioning can also be used as an enhanced compensation system, and the coarse absolute positioning accuracy is improved by means of a high-accuracy measuring instrument in a task with extremely high requirement on the positioning accuracy. In the unmanned vehicle cluster, even if only a few individuals can receive absolute position information, the individuals can still share the measured positions to teammates which cannot be positioned through identification and positioning of the laser radar, and positioning of the whole cluster is achieved. Under the condition that positioning information cannot be received completely, the unmanned vehicle can realize relative positioning between the interiors of clusters by combining self inertial navigation information and relative measurement information of the radar.
When an intelligent trolley with positioning information is judged to exist, cooperative positioning within the intelligent trolley cluster is completed by an absolute positioning method;
defining the intelligent vehicle with the positioning information as an anchor point;
determining the self-motion state of the anchor point at a certain moment and measuring the motion states of other intelligent trolleys to obtain the self-motion state and the measurement state of the anchor point;
updating the positioning of the corresponding intelligent trolley by combining a Kalman filtering method according to the self motion state and the measurement state of the anchor point to obtain the intelligent trolley completing the positioning;
and defining the intelligent vehicle which finishes the positioning as another anchor point and circulating the cooperative positioning steps until all the intelligent vehicles finish the positioning.
In particular, the above process applies once positioning is stable, because each trolley must have a state prediction of its own motion. Therefore, if the initial state is unknown, the anchor vehicle is required to broadcast its own motion, and the cooperative positioning process described above can only start after the other vehicles have recognized the anchor vehicle while it is stationary.
In addition, when none of the vehicles has absolute positioning information, relative positioning within the cluster must be realized by combining on-board inertial navigation with the radar. Because there are no absolute position coordinates, each vehicle updates the state estimates of the other vehicles in a local coordinate system. At time k, the motion state of vehicle A is obtained; with its position at that time as the origin, its heading as the positive x-axis, and the direction rotated 90° counter-clockwise as the positive y-axis, a coordinate system XOY_k is established, and the state of vehicle B is obtained from A's measurement. At time k+1, the state of vehicle B is again obtained from A's measurement. According to the record of the inertial navigation, the local coordinate system XOY_{k+1} at this time has a rotation angle φ and a translation vector [Δx, Δy] relative to XOY_k, and B's measurement state at time k+1 is adjusted into the XOY_k coordinate system. When several vehicles are observed, the state predictions of those vehicles must be obtained from them, and the multi-target track maintenance method is applied for matching. Because the state predictions of different vehicles are all expressed in their own local coordinate systems, the rotation and translation between local coordinate systems must be calculated from the difference between measurement and prediction so that the coordinate systems can be unified. As before, this process works once positioning is stable; if the initial state is unknown, one vehicle is required to fix the coordinate system and then broadcast its motion state, and the other vehicles, while stationary, determine the coordinate system by observing and measuring the broadcasting vehicle before they start to move.
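A sketch of the frame adjustment described above: expressing the time-(k+1) measurement of vehicle B in the time-k local frame XOY_k, given the rotation φ and translation [Δx, Δy] that inertial navigation reports for the ego frame; the direction of the transform (from frame k+1 into frame k) is an assumption about the sign convention:

import numpy as np

def to_previous_frame(p_b_k1, phi, delta):
    # p_b_k1: B's position measured in XOY_{k+1}; phi, delta: rotation and translation
    # of XOY_{k+1} relative to XOY_k, as recorded by inertial navigation
    c, s = np.cos(phi), np.sin(phi)
    R = np.array([[c, -s], [s, c]])            # rotation of frame k+1 expressed in frame k
    return R @ np.asarray(p_b_k1) + np.asarray(delta)

# usage sketch: A moved 0.1 m forward and turned 5 degrees between k and k+1
print(to_previous_frame([1.0, 0.5], np.radians(5.0), [0.1, 0.0]))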
Further as a preferred embodiment of the method, the working steps of the distributed cluster planning control module specifically include:
updating the position of the intelligent trolley according to the motion information of the intelligent trolley;
calculating to obtain distance information generated by the environment source according to the monocular camera-based relative measurement module and the single-line radar-based relative measurement module;
converting the distance information generated by each environment source into a potential field function by the virtual potential field method, and superposing the potential field functions to obtain the desired velocity of the intelligent trolley at that moment;
and controlling the intelligent trolley through a PID controller according to the expected speed to complete the tasks of capturing the target and dynamically avoiding the obstacle of the intelligent trolley.
Specifically, the distributed cluster planning control method is a planning scheme based on a virtual potential field. Its principle is that every environment element in the experimental site is defined as a different potential-field source (obstacles, targets, boundaries, teammates, and so on); the magnitude of each potential field is calculated from the relative distance to its source, and superposing the different potential-field sources determines the direction in which an individual should move. A PID controller is designed on the deviation between this direction and the vehicle-body heading, realizing heading control or waypoint control of the trolley.
In addition, position updating based on inertial navigation alone is unreliable in most conditions and drifts over time. The change of the environment information is therefore registered by the radar and fused with the motion information produced by inertial navigation; cooperative markers at known positions are placed in the initial field, and by measuring the positions of these markers with the vision system the trolley's position can be corrected intermittently, ensuring that the trolley always knows its correct position while moving. The overall cluster planning control flow is as shown in the figure.
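A minimal sketch of the virtual-potential-field planning and heading PID described above; the gain values, the potential-field forms and the controller structure are assumptions for illustration, not the patent's tuned design:

import numpy as np

def desired_velocity(pos, target, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    # attractive field toward the target plus repulsive fields from nearby obstacles
    v = k_att * (np.asarray(target) - np.asarray(pos))
    for obs in obstacles:
        diff = np.asarray(pos) - np.asarray(obs)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                      # repulsion only inside the influence radius d0
            v += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return v

class HeadingPID:
    def __init__(self, kp=2.0, ki=0.0, kd=0.1, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0
    def step(self, desired_heading, heading):
        err = (desired_heading - heading + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
        self.integral += err * self.dt
        deriv = (err - self.prev) / self.dt
        self.prev = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv  # steering command

v = desired_velocity([0.0, 0.0], target=[5.0, 0.0], obstacles=[[0.8, 0.2]])
print(HeadingPID().step(np.arctan2(v[1], v[0]), heading=0.0))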
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A multi-unmanned vehicle cooperative detection system, comprising:
and the intelligent trolley module is used for carrying the module and simulating the motion of the vehicle.
Based on a monocular camera relative measurement module and based on a monocular camera with an initial position of a known world coordinate, distance information of a target and the position of the intelligent trolley are calculated by continuously measuring distance and angles of the target attached with a mark in the motion process;
the system comprises a single-line radar-based relative measurement module, a single-line radar-based relative measurement module and a single-line radar-based relative measurement module, wherein the single-line radar-based relative measurement module is used for carrying out identification positioning and target motion estimation on the intelligent trolley through scanning two-dimensional point cloud, point cloud clustering and characteristic analysis;
and the distributed cluster planning control module is used for carrying out cluster planning control on the basis of the measurement information, the intelligent trolley information and the virtual potential field dynamic planning algorithm by calibrating a field global coordinate system and a trolley initial position.
2. The system of claim 1, wherein the intelligent trolley module comprises:
the moving platform takes the STM32F103R8T6 as a control core and is used for controlling the speed and the steering;
an environment sensing unit with NVIDIA Jetson Nano as a processing core senses the state of the intelligent trolley, senses local environment and identifies and tracks a target;
and the inertial navigation unit is used for acquiring the motion information of the intelligent trolley.
3. The system according to claim 2, wherein the step of ranging based on the monocular camera relative measurement module specifically comprises:
acquiring an image of a front target two-dimensional code based on a monocular camera;
detecting the two-dimensional code in the area and marking it with a rectangular identification frame to obtain rectangular-frame information;
finding out two image plane coordinates of the top edge of the rectangular frame by combining the information of the rectangular frame;
deriving target plane coordinates according to the two image plane coordinates of the top edge based on a geometric relation derivation method;
and calculating the distance information between the target and the intelligent trolley by an Euclidean distance formula based on the target plane coordinates.
4. The system according to claim 3, wherein the step of measuring the angle based on the monocular camera relative measurement module specifically comprises:
acquiring the principal distance of the monocular camera;
finding out the image plane coordinates of the corner points of the rectangular frame by combining the rectangular frame information;
calculating the pixel coordinates of the target center point according to the image plane coordinates of the rectangular frame corner points;
converting the pixel coordinate of the target central point into an image coordinate to obtain the image coordinate of the target central point;
converting the image coordinates of the target central point into coordinates in a camera coordinate system;
and calculating the azimuth angle and the elevation angle of the target through polar coordinate conversion based on the coordinates in the camera coordinate system.
5. The system of claim 4, wherein the calculation formula of the azimuth and the elevation of the target is as follows:
[formula given as an image in the original publication]
where θ denotes the azimuth angle, φ denotes the elevation angle, f denotes the monocular-camera principal distance, and (x, y, z) are the camera coordinates.
6. The system according to claim 5, wherein the working steps based on the single-line radar relative measurement module specifically include:
scanning two-dimensional point cloud based on a single-line radar to obtain point cloud data;
arranging the point cloud data into one-dimensional vectors and sequentially clustering the point cloud data according to the serial numbers of the point clouds to obtain point cloud clusters;
extracting line segments and angular points of the point cloud cluster to obtain two-dimensional features;
based on the matching of the two-dimensional features and the trolley appearance features in the preset template library, calculating the center position coordinates and postures of other trolleys;
and performing motion estimation according to the coordinates and the postures of the center positions of other trolleys.
7. The multi-unmanned-vehicle cooperative detection system according to claim 6, further comprising a cooperative positioning step of:
when an intelligent trolley with positioning information is judged to exist, cooperative positioning within the intelligent trolley cluster is completed by an absolute positioning method;
defining the intelligent vehicle with the positioning information as an anchor point;
determining the self-motion state of the anchor point at a certain moment and measuring the motion states of other intelligent trolleys to obtain the self-motion state and the measurement state of the anchor point;
updating the positioning of the corresponding intelligent trolley by combining a Kalman filtering method according to the self motion state and the measurement state of the anchor point to obtain the intelligent trolley completing the positioning;
and defining the intelligent vehicle which finishes the positioning as another anchor point and circulating the cooperative positioning steps until all the intelligent vehicles finish the positioning.
8. The system according to claim 7, wherein the distributed cluster planning control module specifically comprises:
defining an initial absolute coordinate system according to the boundary of the field, placing a plurality of trolleys in the field and recording the initial coordinates of the trolleys;
updating the position of the intelligent trolley according to the motion information of the intelligent trolley;
calculating to obtain distance information generated by the environment source according to the monocular camera-based relative measurement module and the single-line radar-based relative measurement module;
converting the distance information generated by each environment source into a potential field function by the virtual potential field method, and superposing the potential field functions to obtain the desired velocity of the intelligent trolley at that moment;
and controlling the intelligent trolley through a PID controller according to the expected speed to complete the tasks of capturing the target and dynamically avoiding the obstacle of the intelligent trolley.
CN202111287343.7A 2021-11-02 2021-11-02 Multi-unmanned vehicle cooperative detection system Pending CN114003041A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111287343.7A CN114003041A (en) 2021-11-02 2021-11-02 Multi-unmanned vehicle cooperative detection system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111287343.7A CN114003041A (en) 2021-11-02 2021-11-02 Multi-unmanned vehicle cooperative detection system

Publications (1)

Publication Number Publication Date
CN114003041A true CN114003041A (en) 2022-02-01

Family

ID=79926336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111287343.7A Pending CN114003041A (en) 2021-11-02 2021-11-02 Multi-unmanned vehicle cooperative detection system

Country Status (1)

Country Link
CN (1) CN114003041A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105635963A (en) * 2015-12-25 2016-06-01 中国空间技术研究院 Multi-agent distributed cooperative location method
CN108008738A (en) * 2017-12-27 2018-05-08 贵州大学 Target Tracking System under being cooperateed with based on unmanned plane with unmanned vehicle
CN108986148A (en) * 2018-03-21 2018-12-11 南京邮电大学 Realize the method that more intelligent carriage collaboratively searchings identify and track specific objective group
CN108549407A (en) * 2018-05-23 2018-09-18 哈尔滨工业大学(威海) A kind of control algolithm of multiple no-manned plane collaboration formation avoidance
CN110244715A (en) * 2019-05-23 2019-09-17 西安理工大学 A kind of multiple mobile robot's high-precision cooperative tracking method based on super-broadband tech
CN110930766A (en) * 2019-12-02 2020-03-27 武汉理工大学 Unmanned vehicle multi-lane convoy formation method based on graph theory and potential field method
US20210318693A1 (en) * 2020-04-14 2021-10-14 Electronics And Telecommunications Research Institute Multi-agent based manned-unmanned collaboration system and method
CN111964673A (en) * 2020-08-25 2020-11-20 一汽解放汽车有限公司 Unmanned vehicle positioning system
CN113155123A (en) * 2021-04-01 2021-07-23 北京大学 Multi-intelligent-vehicle cooperative localization tracking method and device based on data sharing
CN113467508A (en) * 2021-06-30 2021-10-01 天津大学 Multi-unmanned aerial vehicle intelligent cooperative decision-making method for trapping task

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117870652A (en) * 2024-03-13 2024-04-12 北京航空航天大学 Land-air online co-location method based on radar, inertia and vision fusion
CN117870652B (en) * 2024-03-13 2024-05-14 北京航空航天大学 Land-air online co-location method based on radar, inertia and vision fusion

Similar Documents

Publication Publication Date Title
EP3715785B1 (en) Slam assisted ins
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN109029417B (en) Unmanned aerial vehicle SLAM method based on mixed visual odometer and multi-scale map
CN110244772B (en) Navigation following system and navigation following control method of mobile robot
TWI827649B (en) Apparatuses, systems and methods for vslam scale estimation
CN105652891B (en) A kind of rotor wing unmanned aerial vehicle movement Target self-determination tracks of device and its control method
Engel et al. Camera-based navigation of a low-cost quadrocopter
Achtelik et al. Stereo vision and laser odometry for autonomous helicopters in GPS-denied indoor environments
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
WO2017177533A1 (en) Method and system for controlling laser radar based micro unmanned aerial vehicle
Nieuwenhuisen et al. Multimodal obstacle detection and collision avoidance for micro aerial vehicles
Oh et al. Indoor UAV control using multi-camera visual feedback
Lippiello et al. MAV indoor navigation based on a closed-form solution for absolute scale velocity estimation using optical flow and inertial data
CN113311873B (en) Unmanned aerial vehicle servo tracking method based on vision
Shen et al. Localization through fusion of discrete and continuous epipolar geometry with wheel and IMU odometry
Holz et al. Towards multimodal omnidirectional obstacle detection for autonomous unmanned aerial vehicles
López et al. Indoor SLAM for micro aerial vehicles using visual and laser sensor fusion
Bazin et al. UAV attitude estimation by vanishing points in catadioptric images
RU195749U1 (en) Intelligent vision system for an unmanned aerial vehicle for solving navigation problems, building a three-dimensional map of the surrounding space and obstacles, and autonomous patrolling
CN114003041A (en) Multi-unmanned vehicle cooperative detection system
Li et al. Metric sensing and control of a quadrotor using a homography-based visual inertial fusion method
Jensen et al. Laser range imaging using mobile robots: From pose estimation to 3D-models
JP6832394B2 (en) Self-position estimator, self-regioselector, and learner
Wang et al. Micro aerial vehicle navigation with visual-inertial integration aided by structured light
Sa et al. 100hz onboard vision for quadrotor state estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination