CN113761647B - Simulation method and system of unmanned cluster system - Google Patents
- Publication number
- CN113761647B (application number CN202110883152.0A)
- Authority
- CN
- China
- Prior art keywords
- vehicle system
- point
- unmanned vehicle
- map
- frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/15—Vehicle, aircraft or watercraft design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
- G06Q10/047—Optimisation of routes or paths, e.g. travelling salesman problem
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a simulation method of an unmanned cluster system, the unmanned cluster system comprising an unmanned vehicle system and an unmanned aerial vehicle system, and the method comprising the following steps: constructing physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system; constructing a three-dimensional simulation scene dynamic map based on the sensing data of the unmanned vehicle system; and performing target tracking on the physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map. The invention also discloses a simulation system of the unmanned cluster system. The beneficial effects of the invention are as follows: functional simulation of target recognition, self-positioning, mapping, tracking and obstacle avoidance of the unmanned cluster system can be realized, thereby realizing collaborative simulation of the unmanned cluster system.
Description
Technical Field
The invention relates to the technical field of unmanned systems, in particular to a simulation method and system of an unmanned cluster system.
Background
At present, in the technical field of unmanned systems, three-dimensional collaborative simulation of unmanned cluster systems generally cannot be realized, so simulation results cannot be applied to real unmanned cluster systems to improve their collaborative combat capability.
Disclosure of Invention
In order to solve the above problems, the invention aims to provide a simulation method and a simulation system of an unmanned cluster system that can realize functional simulation of target recognition, self-positioning, mapping, tracking and obstacle avoidance of the unmanned cluster system, thereby realizing collaborative simulation of the unmanned cluster system.
The invention provides a simulation method of an unmanned cluster system, the unmanned cluster system comprising an unmanned vehicle system and an unmanned aerial vehicle system, and the method comprising the following steps:
constructing physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system;
constructing a three-dimensional simulation scene dynamic map based on the sensing data of the unmanned vehicle system;
and performing target tracking on the physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map.
As a further improvement of the present invention, constructing the physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system includes:
constructing a physical simulation model of the unmanned aerial vehicle system based on a V-REP tool;
and constructing a physical simulation model of the unmanned vehicle system based on a Gazebo tool.
As a further improvement of the invention, based on the sensing data of the unmanned vehicle system, a three-dimensional simulation scene dynamic map is constructed, comprising:
Constructing a semi-dense map based on the sensing data of the unmanned vehicle system;
and based on the three-dimensional laser radar point cloud data of the unmanned vehicle system, fusing the semi-dense map to construct an environment dense map.
As a further improvement of the present invention, constructing a semi-dense map based on the sensed data of the unmanned vehicle system includes:
constructing a semi-dense map based on binocular vision sensing data of the unmanned vehicle system;
fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system into the semi-dense map to perform inter-frame motion estimation of the unmanned vehicle system;
and correcting the semi-dense map in real time through loop detection based on the key frames acquired by the inter-frame motion estimation of the unmanned vehicle system.
As a further improvement of the present invention, performing the inter-frame motion estimation of the unmanned vehicle system includes:
S1, estimating the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error δI:
T_{k,k-1} = arg min_T (1/2) Σ_u ‖δI(T, u)‖²;
wherein T_{k,k-1} is the pose transformation between the last frame I_{k-1} and the current frame I_k of the images captured by the camera of the unmanned vehicle system, I represents the image, δI is the photometric residual, i.e. the intensity difference of the same three-dimensional point ρ_u observed in both the last frame I_{k-1} and the current frame I_k, and u is the center position of the image block.
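As a hedged illustration of this minimization (not the patent's implementation), the following sketch reduces the pose increment T_{k,k-1} to a one-dimensional integer pixel shift and finds the shift that minimizes the photometric error by brute force; the image rows and the search range are invented for the example.

```python
# Illustrative sketch: the incremental motion is chosen to minimize the
# photometric error, i.e. the sum of squared intensity differences of the
# same scene points observed in the last and current frame. Here the
# "motion" is simplified to a 1-D integer pixel shift.

def photometric_error(last_row, current_row, shift):
    """Sum of squared intensity differences after shifting the last frame."""
    err = 0.0
    for u, intensity in enumerate(last_row):
        v = u + shift  # reprojected position in the current frame
        if 0 <= v < len(current_row):
            err += (current_row[v] - intensity) ** 2
    return err

def estimate_shift(last_row, current_row, max_shift=5):
    """Return the shift minimizing the photometric error (stand-in for T_{k,k-1})."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: photometric_error(last_row, current_row, s))

last = [10, 20, 80, 200, 80, 20, 10, 10]
current = [10, 10, 20, 80, 200, 80, 20, 10]  # same scene, shifted right by one pixel
assert estimate_shift(last, current) == 1
```

A real implementation iterates the full 6-DoF pose with Gauss-Newton steps over image patches, but the objective being minimized is the same photometric residual.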
As a further improvement of the present invention, estimating the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error includes:
S2, calculating the pixel intensity difference of the three-dimensional point ρ_u in the last frame I_{k-1} by re-projection;
wherein T_BC is the transformation matrix between the body coordinate system B and the camera coordinate system C of the unmanned vehicle system, and u is the center position of the image block in the last frame I_{k-1};
S3, performing image feature alignment in the current frame I_k: minimizing the pixel intensity difference between the image block P centered on the projected feature position u′ in the current frame I_k and the reference image block in the frame I_r in which the same feature was first observed;
S4, correcting the projected feature position u′ to obtain the corrected projected feature position u′* in the current frame I_k;
for point features:
u′* = u′ + δu*;
wherein Δu is the iteration variable of the sum calculated over the image block P, δu is the reprojection error, T_CB is the transformation matrix between the camera coordinate system C and the body coordinate system B of the unmanned vehicle system, T_kr is the transformation matrix between the current frame I_k and the frame I_r, and A is a constant;
for line segment features:
u′* = u′ + δu*·n;
wherein Δu is the iteration variable of the sum calculated over the image block P, δu is the reprojection error, A is a constant, and n is the normal direction of the line segment feature;
S5, optimizing the camera poses and landmark positions χ = {T_kW, ρ_i} by minimizing the sum of squares of the reprojection errors;
wherein K is the set of all key frames in the semi-dense map, the landmark sets comprise all landmarks corresponding to the point features and the line segment features of the current frame I_k, and T_kW is the transformation matrix of each key frame relative to the world coordinate system W.
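A minimal sketch of the least-squares idea behind the reprojection-error minimization in step S5, under the simplifying assumption that the pose being optimized is a pure 2-D image-plane translation (in which case the optimum has a closed form, the mean residual); the feature positions below are invented for the example.

```python
# Illustrative simplification (assumed, not the patent's formulation):
# optimize a single pose parameter t by minimizing the sum of squared
# reprojection errors sum ||observed - (projected + t)||^2. For a pure
# translation the least-squares optimum is the mean residual.

def optimize_translation(projected, observed):
    """Least-squares translation minimizing the sum of squared reprojection errors."""
    n = len(projected)
    tx = sum(o[0] - p[0] for p, o in zip(projected, observed)) / n
    ty = sum(o[1] - p[1] for p, o in zip(projected, observed)) / n
    return (tx, ty)

def reprojection_sse(projected, observed, t):
    """Sum of squared reprojection errors for a candidate translation t."""
    return sum((o[0] - (p[0] + t[0])) ** 2 + (o[1] - (p[1] + t[1])) ** 2
               for p, o in zip(projected, observed))

projected = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]  # predicted feature positions
observed  = [(1.0, 0.5), (3.0, 0.5), (1.0, 2.5), (3.0, 2.5)]  # measured feature positions
t = optimize_translation(projected, observed)
assert t == (1.0, 0.5)
```

The full bundle adjustment of step S5 optimizes 6-DoF keyframe poses and 3-D landmark positions jointly (typically with Gauss-Newton or Levenberg-Marquardt), but each iteration minimizes exactly this kind of squared-residual sum.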
As a further improvement of the present invention, the loop detection includes:
extracting key point features and key line segment features from the key frames, constructing a bag-of-words model for the key point features and key line segment features, and storing it in a data dictionary;
when the camera of the unmanned vehicle system moves, calculating a similarity score for each newly obtained key frame and searching the data dictionary for the image most similar to the current key frame;
and correcting the semi-dense map in real time based on the most similar image.
As a further improvement of the present invention, calculating a similarity score for the current key frame obtained while the camera of the unmanned vehicle system moves, and searching the data dictionary for the image most similar to the current key frame, includes:
determining the number of key points n_k and the number of key line segments n_l of the current key frame;
calculating the dispersion value d_k of the key points according to the key point coordinates (x, y), and calculating the dispersion value d_l of the key line segments according to the midpoint coordinates of the key line segments;
calculating the similarity score A_t of the obtained image frame based on the weight a_k of the key point feature score S_k and the weight a_l of the key line segment feature score S_l;
wherein A_t = a_k*(n_k/(n_k+n_l) + d_k/(d_k+d_l))*S_k + a_l*(n_l/(n_k+n_l) + d_l/(d_k+d_l))*S_l.
As a further improvement of the invention, calculating the dispersion value d_k of the key points according to the key point coordinates (x, y) includes:
calculating the sum of the variances of the x coordinates and the y coordinates;
and taking the square root of the sum of variances as the dispersion value d_k of the key points.
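The dispersion value and the similarity score A_t above can be sketched directly; the weights a_k and a_l and the sample coordinates below are illustrative assumptions, not values fixed by the patent.

```python
from math import sqrt

def dispersion(points):
    """Dispersion value: square root of the sum of the variances of the x and y coordinates."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    var_x = sum((x - mx) ** 2 for x, _ in points) / n
    var_y = sum((y - my) ** 2 for _, y in points) / n
    return sqrt(var_x + var_y)

def similarity_score(n_k, n_l, d_k, d_l, S_k, S_l, a_k=0.6, a_l=0.4):
    """A_t = a_k*(n_k/(n_k+n_l) + d_k/(d_k+d_l))*S_k + a_l*(n_l/(n_k+n_l) + d_l/(d_k+d_l))*S_l
    The weights a_k, a_l are assumed values for illustration."""
    return (a_k * (n_k / (n_k + n_l) + d_k / (d_k + d_l)) * S_k
            + a_l * (n_l / (n_k + n_l) + d_l / (d_k + d_l)) * S_l)

keypoints = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
d_k = dispersion(keypoints)           # var_x = var_y = 4, so d_k = sqrt(8)
assert abs(d_k - sqrt(8.0)) < 1e-9
```

Note that with equal counts (n_k = n_l) and equal dispersions (d_k = d_l) both bracketed factors reduce to 1, so A_t collapses to the plain weighted sum a_k*S_k + a_l*S_l.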
As a further improvement of the present invention, constructing an environment dense map based on the three-dimensional laser radar point cloud data of the unmanned vehicle system fused with the semi-dense map includes:
based on the three-dimensional laser point cloud data of the unmanned vehicle system, fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system to obtain a point cloud key frame;
Obtaining the pose of the unmanned vehicle system through the inter-frame motion estimation of the unmanned vehicle system, translating and rotating the point cloud key frame according to the pose of the unmanned vehicle system, and constructing a three-dimensional point cloud model;
and fusing the semi-dense map and the three-dimensional point cloud model to construct an environment dense map.
As a further improvement of the present invention, obtaining the pose of the unmanned vehicle system through the inter-frame motion estimation of the unmanned vehicle system, translating and rotating the point cloud key frames according to the pose, and constructing a three-dimensional point cloud model includes:
establishing a three-dimensional grid map based on the size of the unmanned vehicle system by taking the starting point coordinate of the unmanned vehicle system as an origin, taking the north direction as the positive X-axis direction, taking the east direction as the positive Y-axis direction and taking the upward direction as the positive Z-axis direction;
according to the current pose of the unmanned vehicle system, translating and rotating the acquired obstacle point clouds, filling the translated and rotated point clouds into corresponding grids, and recording the number S of the point clouds in each grid;
updating each grid within a local range centered on the unmanned vehicle system, the updated number of point clouds in each grid being S+ = αS− + S; wherein S− is the number of point clouds in the grid before the update, S+ is the number of point clouds in the grid after the update, α is a forgetting coefficient between 0 and 1, S is the current number of point clouds, and S is 0 for grids containing no obstacle;
and performing threshold judgment on the updated number of point clouds S+: if S+ reaches the preset value, the grid is determined to be an obstacle; otherwise, the grid is determined to be passable space.
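A hedged sketch of the grid update rule S+ = αS− + S and the threshold judgment above; the grid keys, the value of α, and the threshold are illustrative assumptions.

```python
# Illustrative sketch: each grid cell keeps a point-cloud count that is
# decayed by a forgetting coefficient alpha and refreshed by the current
# scan: S_plus = alpha * S_minus + S. Cells whose updated count reaches
# the preset value are marked as obstacles; stale obstacles decay away.

def update_grid(counts, scan, alpha=0.8):
    """Apply S+ = alpha*S- + S to every cell; scan holds current counts (0 if no obstacle)."""
    return {cell: alpha * counts.get(cell, 0.0) + scan.get(cell, 0.0)
            for cell in set(counts) | set(scan)}

def classify(counts, threshold=1.0):
    """Cells whose updated count meets the threshold are obstacles; others are passable."""
    return {cell: ('obstacle' if s_plus >= threshold else 'passable')
            for cell, s_plus in counts.items()}

counts = {(0, 0, 0): 5.0, (1, 0, 0): 0.5}
scan   = {(0, 0, 0): 2.0}               # the obstacle is still observed in cell (0,0,0)
counts = update_grid(counts, scan, alpha=0.8)
assert abs(counts[(0, 0, 0)] - 6.0) < 1e-9   # 0.8*5 + 2
assert abs(counts[(1, 0, 0)] - 0.4) < 1e-9   # 0.8*0.5 + 0, decaying away
```

The forgetting coefficient lets the map respond to moving obstacles: a cell that stops receiving returns loses its count geometrically and eventually drops below the obstacle threshold.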
As a further improvement of the present invention, performing target tracking on the physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map includes:
building a 3D roadmap in the three-dimensional simulation scene dynamic map according to the size of the unmanned aerial vehicle system;
and searching an optimal path through the A* algorithm based on the 3D roadmap.
As a further improvement of the present invention, building the 3D roadmap in the three-dimensional simulation scene dynamic map according to the size of the unmanned aerial vehicle system includes:
randomly selecting sampling points in the three-dimensional simulation scene dynamic map: if a sampling point lies inside an obstacle, discarding it; if it lies in passable space, retaining it; and so on, constructing a sampling point graph based on the retained sampling points;
connecting the sampling points based on the sampling point graph: if a connecting line collides with an obstacle, canceling the connecting line; and so on, completing the connecting lines of all the sampling points;
and connecting each sampling point with at least one adjacent point around it: if the sampling point and the adjacent point can be connected, retaining the corresponding connecting line; and so on, completing the construction of the 3D roadmap.
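The sampling-and-connection steps above amount to a probabilistic-roadmap-style construction, which can be sketched as follows; the obstacle test, connection radius, and collision-check resolution are illustrative assumptions.

```python
import random
from math import dist

def line_is_free(a, b, in_obstacle, steps=20):
    """Check intermediate points along the segment a-b against the obstacle map."""
    for k in range(1, steps):
        t = k / steps
        p = tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
        if in_obstacle(p):
            return False
    return True

def build_roadmap(n_samples, bounds, in_obstacle, radius, seed=0):
    rng = random.Random(seed)
    # 1) random sampling: keep only samples that fall in passable space
    nodes = []
    while len(nodes) < n_samples:
        p = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        if not in_obstacle(p):
            nodes.append(p)
    # 2) connect nearby samples whose connecting line is collision-free
    edges = []
    for i, a in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            b = nodes[j]
            if dist(a, b) <= radius and line_is_free(a, b, in_obstacle):
                edges.append((i, j))
    return nodes, edges

# Toy obstacle: a box occupying the centre of a 10x10x10 volume.
def in_box(p):
    return all(4.0 <= c <= 6.0 for c in p)

nodes, edges = build_roadmap(30, [(0, 10)] * 3, in_box, radius=6.0)
assert all(not in_box(p) for p in nodes)   # no retained sample lies inside the obstacle
```

Dense sampling and a larger connection radius make the roadmap more connected at the cost of more collision checks, which is why the claims limit connections to adjacent points around each sample.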
As a further improvement of the invention, after a period of time, or each time the unmanned aerial vehicle system moves a certain distance, the 3D roadmap is updated according to the new obstacle environment: new sampling points are added to the 3D roadmap, or sampling points and connecting lines that collide with obstacles are deleted.
As a further improvement of the present invention, searching an optimal path through the A* algorithm based on the 3D roadmap includes:
given the start point and end point of the unmanned aerial vehicle system, calculating through the A* algorithm the estimated cost of reaching the end point from the start point via each sampling point in the 3D roadmap;
sorting the estimated costs, removing the sampling point with the minimum estimated cost, and adding the surrounding points of the removed sampling point as expansion points;
and cycling in this way until the end point is found.
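The search loop above can be sketched with the standard A* algorithm: keep an open list ordered by estimated cost f = g + h, remove the point with the smallest estimate, and add its neighbours as expansion points, cycling until the end point is reached. The toy roadmap below is invented for the example.

```python
import heapq
from math import dist

def a_star(nodes, adj, start, goal):
    """nodes: index -> 3-D coordinates; adj: index -> neighbour indices in the roadmap."""
    h = lambda i: dist(nodes[i], nodes[goal])      # straight-line estimate to the end point
    open_list = [(h(start), 0.0, start, [start])]  # (f, g, node, path so far)
    best_g = {start: 0.0}
    while open_list:
        f, g, i, path = heapq.heappop(open_list)   # remove the smallest-estimate point
        if i == goal:
            return path
        for j in adj[i]:                           # add surrounding points as expansion points
            g2 = g + dist(nodes[i], nodes[j])
            if g2 < best_g.get(j, float('inf')):
                best_g[j] = g2
                heapq.heappush(open_list, (g2 + h(j), g2, j, path + [j]))
    return None                                    # end point unreachable

nodes = {0: (0, 0, 0), 1: (5, 0, 0), 2: (5, 5, 0), 3: (10, 0, 0)}
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}
assert a_star(nodes, adj, 0, 3) == [0, 1, 3]       # the detour through 2 is longer
```

Because the straight-line heuristic never overestimates the remaining distance, the first time the end point is removed from the open list the returned path is optimal over the roadmap.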
The invention also provides a simulation system of an unmanned cluster system, the unmanned cluster system comprising an unmanned vehicle system and an unmanned aerial vehicle system, and the system comprising:
the simulation model construction module, configured to construct physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system;
the simulation map construction module, configured to construct a three-dimensional simulation scene dynamic map based on the sensing data of the unmanned vehicle system;
and the simulation module, configured to perform target tracking on the physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map.
As a further improvement of the present invention, the simulation model building module is configured to:
constructing a physical simulation model of the unmanned aerial vehicle system based on a V-REP tool;
and constructing a physical simulation model of the unmanned vehicle system based on a Gazebo tool.
As a further improvement of the present invention, the simulation map construction module is configured to:
constructing a semi-dense map based on the sensing data of the unmanned vehicle system;
and based on the three-dimensional laser radar point cloud data of the unmanned vehicle system, fusing the semi-dense map to construct an environment dense map.
As a further improvement of the present invention, the simulation map construction module is configured to:
constructing a semi-dense map based on binocular vision sensing data of the unmanned vehicle system;
fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system into the semi-dense map to perform inter-frame motion estimation of the unmanned vehicle system;
and correcting the semi-dense map in real time through loop detection based on the key frames acquired by the inter-frame motion estimation of the unmanned vehicle system.
As a further improvement of the present invention, the simulation map construction module is configured to:
estimate the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error δI:
T_{k,k-1} = arg min_T (1/2) Σ_u ‖δI(T, u)‖²;
wherein T_{k,k-1} is the pose transformation between the last frame I_{k-1} and the current frame I_k of the images captured by the camera of the unmanned vehicle system, I represents the image, δI is the photometric residual, i.e. the intensity difference of the same three-dimensional point ρ_u observed in both the last frame I_{k-1} and the current frame I_k, and u is the center position of the image block.
As a further improvement of the present invention, the simulation map construction module is configured to:
calculate the pixel intensity difference of the three-dimensional point ρ_u in the last frame I_{k-1} by re-projection, wherein T_BC is the transformation matrix between the body coordinate system B and the camera coordinate system C of the unmanned vehicle system, and u is the center position of the image block in the last frame I_{k-1};
perform image feature alignment in the current frame I_k, minimizing the pixel intensity difference between the image block P centered on the projected feature position u′ in the current frame I_k and the reference image block in the frame I_r in which the same feature was first observed;
correct the projected feature position u′ to obtain the corrected projected feature position u′* in the current frame I_k;
for point features:
u′* = u′ + δu*;
wherein Δu is the iteration variable of the sum calculated over the image block P, δu is the reprojection error, T_CB is the transformation matrix between the camera coordinate system C and the body coordinate system B of the unmanned vehicle system, T_kr is the transformation matrix between the current frame I_k and the frame I_r, and A is a constant;
for line segment features:
u′* = u′ + δu*·n;
wherein Δu is the iteration variable of the sum calculated over the image block P, δu is the reprojection error, A is a constant, and n is the normal direction of the line segment feature;
and optimize the camera poses and landmark positions χ = {T_kW, ρ_i} by minimizing the sum of squares of the reprojection errors;
wherein K is the set of all key frames in the semi-dense map, the landmark sets comprise all landmarks corresponding to the point features and the line segment features of the current frame I_k, and T_kW is the transformation matrix of each key frame relative to the world coordinate system W.
As a further improvement of the present invention, the simulation map construction module is configured to:
extract key point features and key line segment features from the key frames, construct a bag-of-words model for the key point features and key line segment features, and store it in a data dictionary;
when the camera of the unmanned vehicle system moves, calculate a similarity score for each newly obtained key frame and search the data dictionary for the image most similar to the current key frame;
and correct the semi-dense map in real time based on the most similar image.
As a further improvement of the present invention, the simulation map construction module is configured to:
determine the number of key points n_k and the number of key line segments n_l of the current key frame;
calculate the dispersion value d_k of the key points according to the key point coordinates (x, y), and calculate the dispersion value d_l of the key line segments according to the midpoint coordinates of the key line segments;
and calculate the similarity score A_t of the obtained image frame based on the weight a_k of the key point feature score S_k and the weight a_l of the key line segment feature score S_l;
wherein A_t = a_k*(n_k/(n_k+n_l) + d_k/(d_k+d_l))*S_k + a_l*(n_l/(n_k+n_l) + d_l/(d_k+d_l))*S_l.
As a further improvement of the present invention, the simulation map construction module is configured to:
calculate the sum of the variances of the x coordinates and the y coordinates;
and take the square root of the sum of variances as the dispersion value d_k of the key points.
As a further improvement of the present invention, the simulation map construction module is configured to:
based on the three-dimensional laser point cloud data of the unmanned vehicle system, fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system to obtain a point cloud key frame;
obtaining the pose of the unmanned vehicle system through the inter-frame motion estimation of the unmanned vehicle system, translating and rotating the point cloud key frame according to the pose of the unmanned vehicle system, and constructing a three-dimensional point cloud model;
and fusing the semi-dense map and the three-dimensional point cloud model to construct an environment dense map.
As a further improvement of the present invention, the simulation map construction module is configured to:
establishing a three-dimensional grid map based on the size of the unmanned vehicle system by taking the starting point coordinate of the unmanned vehicle system as an origin, taking the north direction as the positive X-axis direction, taking the east direction as the positive Y-axis direction and taking the upward direction as the positive Z-axis direction;
according to the current pose of the unmanned vehicle system, translating and rotating the acquired obstacle point clouds, filling the translated and rotated point clouds into corresponding grids, and recording the number S of the point clouds in each grid;
update each grid within a local range centered on the unmanned vehicle system, the updated number of point clouds in each grid being S+ = αS− + S; wherein S− is the number of point clouds in the grid before the update, S+ is the number of point clouds in the grid after the update, α is a forgetting coefficient between 0 and 1, S is the current number of point clouds, and S is 0 for grids containing no obstacle;
and perform threshold judgment on the updated number of point clouds S+: if S+ reaches the preset value, the grid is determined to be an obstacle; otherwise, the grid is determined to be passable space.
As a further improvement of the invention, the simulation module is configured to:
build a 3D roadmap in the three-dimensional simulation scene dynamic map according to the size of the unmanned aerial vehicle system;
and search an optimal path through the A* algorithm based on the 3D roadmap.
As a further improvement of the invention, the simulation module is configured to:
randomly select sampling points in the three-dimensional simulation scene dynamic map: if a sampling point lies inside an obstacle, discard it; if it lies in passable space, retain it; and so on, constructing a sampling point graph based on the retained sampling points;
connect the sampling points based on the sampling point graph: if a connecting line collides with an obstacle, cancel the connecting line; and so on, completing the connecting lines of all the sampling points;
and connect each sampling point with at least one adjacent point around it: if the sampling point and the adjacent point can be connected, retain the corresponding connecting line; and so on, completing the construction of the 3D roadmap.
As a further improvement of the invention, the simulation module is configured to: after a period of time, or each time the unmanned aerial vehicle system moves a certain distance, update the 3D roadmap according to the new obstacle environment, add new sampling points to the 3D roadmap, or delete sampling points and connecting lines that collide with obstacles.
As a further improvement of the invention, the simulation module is configured to:
given the start point and end point of the unmanned aerial vehicle system, calculate through the A* algorithm the estimated cost of reaching the end point from the start point via each sampling point in the 3D roadmap;
sort the estimated costs, remove the sampling point with the minimum estimated cost, and add the surrounding points of the removed sampling point as expansion points;
and cycle in this way until the end point is found.
The invention also provides an electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method.
The invention also provides a computer readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the method described above.
The beneficial effects of the invention are as follows:
The unmanned cluster simulation system is built based on ROS, and can realize functional simulation of target recognition, self-positioning, mapping, tracking and obstacle avoidance, thereby realizing collaborative simulation of the unmanned cluster. In addition, the simulation system of the invention can be conveniently ported to various hardware platforms, and has good extensibility and compatibility.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is evident that the figures in the following description are only some embodiments of the invention, from which other figures can be obtained without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a simulation method of an unmanned cluster system according to an exemplary embodiment of the invention;
FIG. 2 is a schematic flow chart of constructing a dynamic map of a three-dimensional simulation scene according to an exemplary embodiment of the invention;
FIG. 3 is a schematic diagram of pose transformation between a previous frame and a current frame according to an exemplary embodiment of the present invention;
fig. 4 is a schematic diagram of a constructed 3D roadmap according to an exemplary embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear … …) are included in the embodiments of the present invention, the directional indications are merely used to explain the relative positional relationship, movement conditions, etc. between the components in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indications are correspondingly changed.
In addition, in the description of the present invention, the terminology used is for the purpose of illustration only and is not intended to limit the scope of the present invention. The terms "comprises" and/or "comprising" are used to specify the presence of stated elements, steps, operations, and/or components, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or components. The terms "first," "second," and the like may be used for describing various elements, do not represent a sequence, and are not intended to limit the elements. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more. These terms are only used to distinguish one element from another element. These and/or other aspects will become apparent to those skilled in the art from the following description, when taken in conjunction with the accompanying drawings, wherein the present invention is described in connection with embodiments thereof. The drawings are intended to depict embodiments of the invention for purposes of illustration only. Those skilled in the art will readily recognize from the following description that alternative embodiments of the illustrated structures and methods of the present invention may be employed without departing from the principles of the present invention.
The embodiment of the invention discloses a simulation method of an unmanned cluster system, wherein the unmanned cluster system comprises an unmanned vehicle system and an unmanned plane system, and as shown in fig. 1, the method comprises the following steps:
constructing an unmanned aerial vehicle system and a physical simulation model of the unmanned aerial vehicle system;
based on sensing data of the unmanned vehicle system, constructing a three-dimensional simulation scene dynamic map;
and performing target tracking on the physical simulation models of the unmanned aerial vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map.
In an alternative embodiment, constructing a physical simulation model of a drone system and a drone system includes:
constructing a physical simulation model of the unmanned aerial vehicle system based on the V-REP tool;
and constructing a physical simulation model of the unmanned vehicle system based on the Gazebo tool.
V-REP (Virtual Robot Experimentation Platform) supports the Bullet, ODE, and Vortex (for fluid simulation) physics engines and integrates a large number of common models, making modeling simple; the integrated physics engine can calculate motion, rotation, and collision according to the physical properties of an object. V-REP supports a variety of control schemes, including remote control, asynchronous control, and synchronous control, and its programming styles include embedded scripts, add-ons, plug-ins, remote APIs, ROS nodes, etc. Gazebo is the default simulator of ROS and supports the Bullet and ODE engines. The model format in Gazebo is based on the XML-based SDF (Simulation Description Format), which can simulate many physical characteristics of a robot and its environment. The physical simulation models of the unmanned aerial vehicle system and the unmanned vehicle system can display the relevant data calculated by all unmanned systems in real time through the RViz data visualization tool.
In an alternative embodiment, based on the sensing data of the unmanned vehicle system, a three-dimensional simulation scene dynamic map is constructed, as shown in fig. 2, comprising:
constructing a semi-dense map based on sensing data of the unmanned vehicle system;
and fusing three-dimensional lidar point cloud data of the unmanned vehicle system with the semi-dense map to construct an environment dense map.
According to the invention, positioning and environment modeling are realized through the unmanned vehicle system, and the constructed three-dimensional scene dynamic map provides prior information for the whole unmanned cluster system so as to realize cooperative control and target tracking. First, by fusing inertial navigation and IMU data in pose estimation, the visual self-positioning accuracy of the unmanned system is improved and a semi-dense map is rapidly generated; the positioning and mapping frequency at this stage is about 30 Hz. Then, an environment dense map is constructed on the basis of the semi-dense map by combining multi-frame fused three-dimensional lidar data with the motion estimation; the mapping frequency at this stage is about 1 Hz, the aim being to locally generate a high-accuracy environment map.
In an alternative embodiment, constructing a semi-dense map based on sensory data of an unmanned vehicle system includes:
constructing a semi-dense map based on binocular vision sensing data of the unmanned vehicle system;
Fusing gyroscope and inertial navigation sensing data of the unmanned vehicle system to a semi-dense map, and performing inter-frame motion estimation of the unmanned vehicle system;
and carrying out real-time correction on the semi-dense map through loop detection based on a key frame acquired by inter-frame motion estimation of the unmanned vehicle system.
In an alternative embodiment, the unmanned vehicle system, when performing inter-frame motion estimation, as shown in fig. 3, includes:
estimating the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error δI;
wherein T_{k,k-1} is the pose transformation between the last frame I_{k-1} and the current frame I_k of the images captured by the camera of the unmanned vehicle system, I represents an image, δI is the photometric residual, i.e. the intensity difference of the same three-dimensional point ρ_u observed in both the last frame I_{k-1} and the current frame I_k, and u is the center position of the image block.
In an alternative embodiment, estimating the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error comprises:
S2, calculating, by re-projection, the pixel intensity difference value of the three-dimensional point ρ_u in the last frame I_{k-1};
wherein T_BC is the transformation matrix between the body coordinate system B and the camera coordinate system C of the unmanned vehicle system, and u is the center position of the image block in the last frame I_{k-1};
S3, performing image feature alignment in the current frame I_k, and calculating the pixel intensity difference between the image block P centered on the projected feature position u′ in the current frame I_k and the reference image block in the frame I_r in which the same feature was observed for the first time;
S4, correcting the projected feature position u′ to obtain the corrected projected feature position u′* in the current frame I_k;
for point features:
u′* = u′ + δu*;
wherein Δu is an iteration variable of the sum calculated over the image block P, δu is the reprojection error, T_CB is the transformation matrix between the camera coordinate system C and the body coordinate system B of the unmanned vehicle system, T_kr is the transformation matrix between the current frame I_k and the reference frame I_r, and A is a constant;
for line segment features:
u′* = u′ + δu*·n;
wherein Δu is an iteration variable of the sum calculated over the image block P, δu is the reprojection error, A is a constant, and n is the normal direction of the line segment feature;
S5, optimizing the camera pose and landmark positions χ = {T_kW, ρ_i} by minimizing the sum of squares of the re-projection errors;
wherein K is the set of all key frames in the semi-dense map, the optimization covers the set of all landmarks corresponding to point features in the current frame I_k and the set of all landmarks corresponding to line segment features in the current frame I_k, and T_kW is the transformation matrix between the current frame I_k and the key frame.
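The re-projection error minimization of steps S2–S5 can be illustrated with a minimal Python sketch. The pinhole projection model, the landmark coordinates, and the naive coordinate-descent optimizer below are illustrative assumptions made for brevity; the patent's method would refine the full SE(3) pose (and landmark positions) with a Gauss-Newton-style solver rather than this toy routine.

```python
import math

def project(point, t, f=1.0):
    """Pinhole projection of a 3-D point seen from a camera translated by t."""
    x, y, z = point[0] - t[0], point[1] - t[1], point[2] - t[2]
    return (f * x / z, f * y / z)

def reprojection_cost(landmarks, observations, t):
    """Sum of squared reprojection errors -- the quantity minimized in step S5."""
    cost = 0.0
    for p, u in zip(landmarks, observations):
        proj = project(p, t)
        cost += (proj[0] - u[0]) ** 2 + (proj[1] - u[1]) ** 2
    return cost

def refine_pose(landmarks, observations, t0, step=0.05, iters=200):
    """Greedy coordinate descent standing in for the Gauss-Newton update (toy)."""
    t = list(t0)
    best = reprojection_cost(landmarks, observations, t)
    for _ in range(iters):
        for i in range(3):          # try a small move along each translation axis
            for d in (step, -step):
                cand = list(t)
                cand[i] += d
                c = reprojection_cost(landmarks, observations, cand)
                if c < best:
                    t, best = cand, c
    return tuple(t), best
```

Here only the camera translation is estimated; in the patent's pipeline the same cost is minimized over the full pose transformation T_{k,k-1} together with the landmark set χ.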
In an alternative embodiment, loop back detection includes:
extracting key point features and key line segment features from the key frames, constructing a bag-of-words model for the key point features and the key line segment features, and storing the bag-of-words model in a data dictionary;
when a camera of the unmanned vehicle system moves, calculating a similarity score value for the obtained current key frame, and searching an image which is most similar to the current key frame in a data dictionary;
the semi-dense map is modified in real time based on the most similar images.
In an alternative embodiment, calculating a similarity score value for a current keyframe and looking up an image in a data dictionary that is most similar to the current keyframe as a camera of the unmanned vehicle system moves, includes:
determining the number of key points n_k and the number of key line segments n_l of the current key frame;
calculating the dispersity value d_k of the key points according to the coordinates (x, y) of the key points, and calculating the dispersity value d_l of the key line segments according to the midpoint coordinates of the key line segments;
calculating the similarity score value A_t of the obtained image frame based on the weight a_k of the key point feature S_k and the weight a_l of the key line segment feature S_l;
wherein A_t = a_k*(n_k/(n_k+n_l) + d_k/(d_k+d_l))*S_k + a_l*(n_l/(n_k+n_l) + d_l/(d_k+d_l))*S_l.
A weight of 0.5 means that the point features and the line features have the same importance; the weights can be adjusted appropriately according to the specific circumstances.
In an alternative embodiment, calculating the dispersity value d_k of the key points according to the coordinates (x, y) of the key points comprises:
calculating the sum of the variances of the x coordinates and the y coordinates;
calculating the square root of the sum of variances as the dispersity value d_k of the key points.
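The dispersity values and the similarity score A_t can be computed directly from the formula above. The following sketch assumes 2-D pixel coordinates for key points and key line-segment midpoints; the function names and default weights are illustrative, not from the patent.

```python
import math

def dispersity(points):
    """Square root of the sum of the variances of the x and y coordinates."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    var_x = sum((p[0] - mx) ** 2 for p in points) / n
    var_y = sum((p[1] - my) ** 2 for p in points) / n
    return math.sqrt(var_x + var_y)

def similarity_score(n_k, n_l, d_k, d_l, S_k, S_l, a_k=0.5, a_l=0.5):
    """A_t combining point-feature and line-feature similarity as in the patent's formula."""
    return (a_k * (n_k / (n_k + n_l) + d_k / (d_k + d_l)) * S_k
            + a_l * (n_l / (n_k + n_l) + d_l / (d_k + d_l)) * S_l)
```

With a_k = a_l = 0.5 the point and line contributions carry equal importance, matching the remark above.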
In an alternative embodiment, the method for constructing the environment dense map based on three-dimensional laser radar point cloud data of the unmanned vehicle system and fusing the semi-dense map comprises the following steps:
based on three-dimensional laser point cloud data of the unmanned vehicle system, and fusing gyroscope and inertial navigation sensing data of the unmanned vehicle system, obtaining a point cloud key frame;
the pose of the unmanned vehicle system is obtained through the inter-frame motion estimation of the unmanned vehicle system, and the point cloud key frame is translated and rotated according to the pose of the unmanned vehicle system, so that a three-dimensional point cloud model is constructed;
and fusing the semi-dense map with the three-dimensional point cloud model to construct an environment dense map.
When the three-dimensional map is constructed, the point cloud data are translated and rotated according to the pose change of the unmanned vehicle system, and a unified three-dimensional grid map is finally formed through discretization.
In an alternative embodiment, obtaining the pose of the unmanned vehicle system through inter-frame motion estimation, and translating and rotating the point cloud key frame according to the pose of the unmanned vehicle system to construct a three-dimensional point cloud model, comprises:
Establishing a three-dimensional grid map based on the size of the unmanned vehicle system by taking the starting point coordinate of the unmanned vehicle system as an origin, taking the north as the positive X-axis direction, taking the east as the positive Y-axis direction and taking the upward as the positive Z-axis direction;
translating and rotating the acquired obstacle point clouds according to the current pose of the unmanned vehicle system, filling the translated and rotated point clouds into corresponding grids, and recording the number S of the point clouds in each grid;
updating each grid in a local range centered on the unmanned vehicle system, the updated point cloud count of each grid being S⁺ = αS⁻ + S; wherein S⁻ is the point cloud count of the grid before updating, S⁺ is the point cloud count of the grid after updating, α is a forgetting coefficient between 0 and 1, and S is the current point cloud count; for a non-obstacle grid, the corresponding S is 0;
performing threshold judgment on the updated point cloud count S⁺: if S⁺ meets the preset value, the grid is determined to be an obstacle; otherwise, the grid is determined to be passable space.
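The grid update rule S⁺ = αS⁻ + S and the obstacle threshold test can be sketched as follows; the dictionary-based grid representation, the default forgetting coefficient, and the threshold value are illustrative assumptions.

```python
def update_grid(prev_counts, new_counts, alpha=0.5, threshold=3.0):
    """Apply S_plus = alpha * S_minus + S to every cell in the local window.

    prev_counts / new_counts map grid-cell indices (ix, iy, iz) to point
    cloud counts.  Cells whose updated count reaches the threshold are
    marked as obstacles; the rest are passable space (S defaults to 0
    for cells with no new points, so old evidence decays away).
    """
    updated, obstacles = {}, set()
    for cell in set(prev_counts) | set(new_counts):
        s_plus = alpha * prev_counts.get(cell, 0) + new_counts.get(cell, 0)
        updated[cell] = s_plus
        if s_plus >= threshold:
            obstacles.add(cell)
    return updated, obstacles
```

Repeating the update over successive scans makes transient points fade (geometric decay by α) while persistent obstacles keep their counts above the threshold.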
For a specific target that has been found, the invention can autonomously plan a tracking path for each unmanned system to reach the target position according to the motion state of the tracked target. During tracking, real-time obstacle avoidance is also required to cope with dynamic scenes.
In an alternative embodiment, based on the three-dimensional simulation scene dynamic map, performing target tracking on the physical simulation model of the unmanned aerial vehicle system and the unmanned aerial vehicle system comprises:
in a three-dimensional simulation scene dynamic map, a 3D roadmap is built according to the size of the unmanned aerial vehicle system;
and searching an optimal path through the A* algorithm based on the 3D roadmap.
In an alternative embodiment, in the three-dimensional simulation scene dynamic map, the 3D roadmap is built according to the size of the unmanned aerial vehicle system, including:
randomly selecting sampling points in the three-dimensional simulation scene dynamic map, discarding a sampling point if it is located inside an obstacle and retaining it if it is located in the passable space, and so on in sequence; and constructing a sampling point diagram based on the retained sampling points;
connecting all sampling points based on the sampling point diagram, canceling the connecting line if the connecting line collides with an obstacle, and analogizing in sequence to finish connecting lines of all the sampling points;
and connecting at least one adjacent point around each sampling point, if the sampling points and the adjacent points can be connected, reserving corresponding connecting lines, and so on to complete the construction of the 3D route map.
The 3D roadmap is a probabilistic roadmap: it samples the planning space randomly and describes the planning space with relatively few sampling points and paths. The complexity of the method is independent of the complexity of the environment and the dimension of the planning space, and depends mainly on the complexity of the path search; it has the advantages of no local minima, suitability for high-dimensional spaces, and a small amount of computation. The constructed 3D roadmap is shown, for example, in fig. 4.
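The three-stage roadmap construction above (sample points, reject those inside obstacles, connect neighbouring points with collision-free segments) is the classic probabilistic-roadmap (PRM) procedure. A compact sketch, in which the sampling range, the neighbour count k, and the callback-based collision checks are assumptions for illustration:

```python
import math
import random

def build_prm(n_samples, is_free, can_connect, k=5, seed=0):
    """Build a probabilistic roadmap.

    is_free(p)        -> True if point p lies in passable space
    can_connect(p, q) -> True if the straight segment p-q avoids obstacles
    Each retained sample is connected to its k nearest neighbours whose
    connecting segment is collision-free; edges are stored symmetrically.
    """
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:                      # rejection sampling
        p = (rng.uniform(0, 10), rng.uniform(0, 10), rng.uniform(0, 10))
        if is_free(p):
            nodes.append(p)
    edges = {i: set() for i in range(len(nodes))}
    for i, p in enumerate(nodes):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(nodes) if j != i)
        for _, j in dists[:k]:
            if can_connect(p, nodes[j]):
                edges[i].add(j)
                edges[j].add(i)
    return nodes, edges
```

The returned node list and adjacency sets are exactly the graph over which the A* search of the next step runs.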
In an alternative embodiment, after a period of time or each time the unmanned aerial vehicle system moves a distance, the 3D roadmap is updated according to the new obstacle environment, new sampling points are added to the 3D roadmap, or sampling points or connecting lines which collide with the obstacle are deleted.
In an alternative embodiment, searching for an optimal path through the A* algorithm based on the 3D roadmap includes:
calculating, by the A* algorithm, the evaluation value of each route from the start point to the end point through each sampling point in the 3D roadmap, based on the given start point and end point of the unmanned aerial vehicle system;
sorting the evaluation values, removing the sampling point with the smallest evaluation value from the candidate set, and adding the surrounding points of the removed sampling point as expansion points;
and sequentially cycling until the end point is searched.
The A* algorithm is among the most effective methods for finding the shortest path in a static road network. Its basic idea is: using heuristic search, select a suitable heuristic function, calculate and evaluate the cost value of each expansion point, and then expand the point with the optimal cost value until the target point is found.
The algorithm evaluates the surrounding nodes by means of an evaluation function, which is generally:
f(n)=g(n)+h(n)
wherein f(n) is the evaluation function, giving the optimal estimated cost from the start point to the end point via node n; g(n) is the actual cost from the start node to node n; and h(n) is the estimated cost of reaching the end point from node n.
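A compact A* implementation over such a graph, expanding the open node with the smallest f(n) = g(n) + h(n) as described above; the callback-based interface is an assumption for illustration:

```python
import heapq
import itertools

def a_star(start, goal, neighbors, cost, heuristic):
    """A* search: always expand the open node with the smallest f(n) = g(n) + h(n)."""
    tie = itertools.count()                 # tie-breaker so the heap never compares nodes
    open_heap = [(heuristic(start, goal), next(tie), 0.0, start, None)]
    came_from = {}
    g_best = {start: 0.0}
    while open_heap:
        _f, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:
            continue                        # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:                    # reconstruct the path by walking parents
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nb in neighbors(node):
            ng = g + cost(node, nb)
            if ng < g_best.get(nb, float("inf")):
                g_best[nb] = ng
                heapq.heappush(open_heap, (ng + heuristic(nb, goal), next(tie), ng, nb, node))
    return None
```

With an admissible heuristic (one that never overestimates the remaining cost, e.g. straight-line distance in the roadmap), the first time the goal is expanded the path found is optimal.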
The invention searches for the shortest path based on the A* algorithm, so that the unmanned aerial vehicle system flies along the shortest path. During controlled flight, real-time obstacle avoidance control of the unmanned aerial vehicle system is also required. The invention adopts the artificial potential field method for real-time obstacle avoidance control of the unmanned aerial vehicle system: the target position applies an attractive force F_att to the unmanned aerial vehicle system, the obstacles apply a repulsive force F_rep to the unmanned aerial vehicle system, and under the resultant of the two forces the unmanned aerial vehicle system avoids the obstacles and reaches the target point. The artificial potential field method has the advantages of a small amount of computation, easy understanding, and a clear mathematical description.
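One step of the artificial potential field method can be sketched as below: the goal attracts with F_att, each obstacle inside an influence radius d0 repels with F_rep, and the vehicle moves a small step along the normalized resultant force. The 2-D simplification, the gain values, and the repulsive-force profile (the common 1/d − 1/d0 form) are assumptions made here; the patent does not give the exact force laws.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.1):
    """Advance one step under the resultant of F_att (toward goal) and F_rep (from obstacles)."""
    fx = k_att * (goal[0] - pos[0])            # attractive force, linear in distance
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:                         # repulsion only inside the influence radius
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d ** 2)
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy)
    if norm == 0:
        return pos                             # balanced forces: no movement this step
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

Iterating this step drives the system toward the goal while pushing it off nearby obstacles; a known limitation of the method is that the two forces can cancel at local minima.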
The embodiment of the invention discloses a simulation system of an unmanned cluster system, which comprises an unmanned vehicle system and an unmanned plane system, wherein the system comprises:
the simulation model building module is used for building an unmanned aerial vehicle system and a physical simulation model of the unmanned aerial vehicle system;
the simulation map construction module is used for constructing a three-dimensional simulation scene dynamic map based on sensing data of the unmanned vehicle system;
the simulation module is used for tracking targets of the unmanned aerial vehicle system and the physical simulation model of the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map.
In an alternative embodiment, the simulation model building module is further configured to:
Constructing a physical simulation model of the unmanned aerial vehicle system based on the V-REP tool;
and constructing a physical simulation model of the unmanned vehicle system based on the Gazebo tool.
In an alternative embodiment, the simulated map building module is further configured to:
constructing a semi-dense map based on sensing data of the unmanned vehicle system;
and fusing three-dimensional lidar point cloud data of the unmanned vehicle system with the semi-dense map to construct an environment dense map.
In an alternative embodiment, the simulated map building module is further configured to:
constructing a semi-dense map based on binocular vision sensing data of the unmanned vehicle system;
fusing gyroscope and inertial navigation sensing data of the unmanned vehicle system to a semi-dense map, and performing inter-frame motion estimation of the unmanned vehicle system;
and carrying out real-time correction on the semi-dense map through loop detection based on a key frame acquired by inter-frame motion estimation of the unmanned vehicle system.
In an alternative embodiment, the simulated map building module is further configured to:
estimating the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error δI;
wherein T_{k,k-1} is the pose transformation between the last frame I_{k-1} and the current frame I_k of the images captured by the camera of the unmanned vehicle system, I represents an image, δI is the photometric residual, i.e. the intensity difference of the same three-dimensional point ρ_u observed in both the last frame I_{k-1} and the current frame I_k, and u is the center position of the image block.
In an alternative embodiment, the simulated map building module is further configured to:
calculating, by re-projection, the pixel intensity difference value of the three-dimensional point ρ_u in the last frame I_{k-1};
wherein T_BC is the transformation matrix between the body coordinate system B and the camera coordinate system C of the unmanned vehicle system, and u is the center position of the image block in the last frame I_{k-1};
performing image feature alignment in the current frame I_k, and calculating the pixel intensity difference between the image block P centered on the projected feature position u′ in the current frame I_k and the reference image block in the frame I_r in which the same feature was observed for the first time;
correcting the projected feature position u′ to obtain the corrected projected feature position u′* in the current frame I_k;
for point features:
u′* = u′ + δu*;
wherein Δu is an iteration variable of the sum calculated over the image block P, δu is the reprojection error, T_CB is the transformation matrix between the camera coordinate system C and the body coordinate system B of the unmanned vehicle system, T_kr is the transformation matrix between the current frame I_k and the reference frame I_r, and A is a constant;
for line segment features:
u′* = u′ + δu*·n;
wherein Δu is an iteration variable of the sum calculated over the image block P, δu is the reprojection error, A is a constant, and n is the normal direction of the line segment feature;
wherein K is the set of all key frames in the semi-dense map, the optimization covers the set of all landmarks corresponding to point features in the current frame I_k and the set of all landmarks corresponding to line segment features in the current frame I_k, and T_kW is the transformation matrix between the current frame I_k and the key frame.
In an alternative embodiment, the simulated map building module is further configured to:
extracting key point features and key line segment features from the key frames, constructing a bag-of-words model for the key point features and the key line segment features, and storing the bag-of-words model in a data dictionary;
when a camera of the unmanned vehicle system moves, calculating a similarity score value for the obtained current key frame, and searching an image which is most similar to the current key frame in a data dictionary;
the semi-dense map is modified in real time based on the most similar images.
In an alternative embodiment, the simulated map building module is further configured to:
determining the number of key points n_k and the number of key line segments n_l of the current key frame;
calculating the dispersity value d_k of the key points according to the coordinates (x, y) of the key points, and calculating the dispersity value d_l of the key line segments according to the midpoint coordinates of the key line segments;
calculating the similarity score value A_t of the obtained image frame based on the weight a_k of the key point feature S_k and the weight a_l of the key line segment feature S_l;
wherein A_t = a_k*(n_k/(n_k+n_l) + d_k/(d_k+d_l))*S_k + a_l*(n_l/(n_k+n_l) + d_l/(d_k+d_l))*S_l.
In an alternative embodiment, the simulated map building module is further configured to:
calculating the sum of variances of the x coordinate and the y coordinate;
calculating the square root of the sum of variances as the dispersity value d_k of the key points.
In an alternative embodiment, the simulated map building module is further configured to:
based on three-dimensional laser point cloud data of the unmanned vehicle system, and fusing gyroscope and inertial navigation sensing data of the unmanned vehicle system, obtaining a point cloud key frame;
the pose of the unmanned vehicle system is obtained through the inter-frame motion estimation of the unmanned vehicle system, and the point cloud key frame is translated and rotated according to the pose of the unmanned vehicle system, so that a three-dimensional point cloud model is constructed;
and fusing the semi-dense map with the three-dimensional point cloud model to construct an environment dense map.
In an alternative embodiment, the simulated map building module is further configured to:
Establishing a three-dimensional grid map based on the size of the unmanned vehicle system by taking the starting point coordinate of the unmanned vehicle system as an origin, taking the north as the positive X-axis direction, taking the east as the positive Y-axis direction and taking the upward as the positive Z-axis direction;
translating and rotating the acquired obstacle point clouds according to the current pose of the unmanned vehicle system, filling the translated and rotated point clouds into corresponding grids, and recording the number S of the point clouds in each grid;
updating each grid in a local range centered on the unmanned vehicle system, the updated point cloud count of each grid being S⁺ = αS⁻ + S; wherein S⁻ is the point cloud count of the grid before updating, S⁺ is the point cloud count of the grid after updating, α is a forgetting coefficient between 0 and 1, and S is the current point cloud count; for a non-obstacle grid, the corresponding S is 0;
performing threshold judgment on the updated point cloud count S⁺: if S⁺ meets the preset value, the grid is determined to be an obstacle; otherwise, the grid is determined to be passable space.
In an alternative embodiment, the simulation module is further configured to:
in a three-dimensional simulation scene dynamic map, a 3D roadmap is built according to the size of the unmanned aerial vehicle system;
and searching an optimal path through the A* algorithm based on the 3D roadmap.
In an alternative embodiment, the simulation module is further configured to:
randomly selecting sampling points in the three-dimensional simulation scene dynamic map, discarding a sampling point if it is located inside an obstacle and retaining it if it is located in the passable space, and so on in sequence; and constructing a sampling point diagram based on the retained sampling points;
connecting all sampling points based on the sampling point diagram, canceling the connecting line if the connecting line collides with an obstacle, and analogizing in sequence to finish connecting lines of all the sampling points;
and connecting at least one adjacent point around each sampling point, if the sampling points and the adjacent points can be connected, reserving corresponding connecting lines, and so on to complete the construction of the 3D route map.
In an alternative embodiment, the simulation module is further configured to: after a period of time or each time the unmanned aerial vehicle system moves a certain distance, update the 3D roadmap according to the new obstacle environment, add new sampling points to the 3D roadmap, or delete sampling points or connecting lines that collide with obstacles.
In an alternative embodiment, the simulation module is further configured to:
calculating, by the A* algorithm, the evaluation value of each route from the start point to the end point through each sampling point in the 3D roadmap, based on the given start point and end point of the unmanned aerial vehicle system;
sorting the evaluation values, removing the sampling point with the smallest evaluation value from the candidate set, and adding the surrounding points of the removed sampling point as expansion points;
and sequentially cycling until the end point is searched.
The disclosure also relates to an electronic device, including a server, a terminal, and the like. The electronic device includes: at least one processor; a memory communicatively coupled to the at least one processor; and a communication component in communication with the storage medium, the communication component receiving and transmitting data under control of the processor; wherein the memory stores instructions executable by the at least one processor to implement the simulation method of the above embodiment.
In an alternative embodiment, the memory is implemented as a non-volatile computer-readable storage medium, and is used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor executes various functional applications of the device and performs data processing, i.e., implements the simulation method, by running the non-volatile software programs, instructions, and modules stored in the memory.
The memory may include a memory program area and a memory data area, wherein the memory program area may store an operating system, at least one application program required for a function; the storage data area may store a list of options, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory optionally includes memory remotely located from the processor, the remote memory being connectable to the external device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in memory that, when executed by one or more processors, perform the methods of any of the method embodiments described above.
The product may execute the simulation method provided by the embodiment of the present application, and has the corresponding functional module and beneficial effect of the execution method, and technical details not described in detail in the embodiment of the present application may refer to the simulation method provided by the embodiment of the present application.
The present disclosure also relates to a computer-readable storage medium storing a computer-readable program for causing a computer to execute some or all of the simulation method embodiments described above.
That is, it will be understood by those skilled in the art that all or part of the steps in implementing the simulation method of the above embodiments may be implemented by a program stored in a storage medium and including several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps in the method of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Furthermore, one of ordinary skill in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It will be understood by those skilled in the art that while the invention has been described with reference to exemplary embodiments, various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
Claims (30)
1. A simulation method of an unmanned cluster system, wherein the unmanned cluster system comprises an unmanned vehicle system and an unmanned aerial vehicle system, the method comprising:
constructing physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system;
constructing a three-dimensional simulation scene dynamic map based on the sensing data of the unmanned vehicle system, including constructing a semi-dense map based on the sensing data of the unmanned vehicle system, performing inter-frame motion estimation of the unmanned vehicle system based on the semi-dense map, and estimating the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error δI, comprising:
S2, calculating, in the last frame I_{k-1}, the pixel intensity difference value of the three-dimensional point ρ_u by re-projection;
wherein T_{BC} is the transformation matrix between the body coordinate system B and the camera coordinate system C of the unmanned vehicle system, and u is the center position of the image block in the last frame I_{k-1};
S3, performing image feature alignment in the current frame I_k: calculating the pixel intensity difference of the image block P, centered on the projected feature position u′ in the current frame I_k, relative to the reference image block in the frame I_r in which the same feature was first observed;
S4, correcting the projected feature position u′ to obtain the corrected projected feature position u′* in the current frame I_k;
for a point feature:
u′* = u′ + δu*;
wherein δu* is the iteration variable of the intensity-difference sum computed over the image block P, δu is the reprojection error, T_{CB} is the transformation matrix between the camera coordinate system C and the body coordinate system B of the unmanned vehicle system, T_{kr} is the transformation matrix between the current frame I_k and the frame I_r, and A is a constant;
for a line segment feature:
u′* = u′ + δu*·n;
wherein δu* is the iteration variable of the intensity-difference sum computed over the image block P, δu is the reprojection error, A is a constant, and n is the normal direction of the line segment feature;
S5, optimizing the camera poses and landmark positions χ = {T_{kW}, ρ_i} by minimizing the sum of squares of the reprojection errors;
where K is the set of all key frames in the semi-dense map, P_k is the set of all landmarks corresponding to point features in the current frame I_k, L_k is the set of all landmarks corresponding to line segment features in the current frame I_k, and T_{kW} is the transformation matrix between the current frame I_k and the key frame;
and performing target tracking on the physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map.
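For illustration only (not part of the claims): the position corrections of step S4 — u′* = u′ + δu* for a point feature and u′* = u′ + δu*·n for a line segment feature — can be sketched as follows, with hypothetical helper names and NumPy assumed:

```python
import numpy as np

def correct_point_feature(u_proj, delta_u):
    """Corrected projected position for a point feature: u'* = u' + delta_u*."""
    return np.asarray(u_proj, float) + np.asarray(delta_u, float)

def correct_line_feature(u_proj, delta_u_scalar, normal):
    """Corrected projected position for a line-segment feature:
    u'* = u' + delta_u* . n, moving only along the segment normal n."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)  # normalize so delta_u is a distance along n
    return np.asarray(u_proj, float) + float(delta_u_scalar) * n
```

A line feature is only corrected along its normal because displacement along the segment itself is unobservable.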
2. The method of claim 1, wherein the constructing physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system comprises:
constructing a physical simulation model of the unmanned aerial vehicle system based on a V-REP tool;
and constructing a physical simulation model of the unmanned vehicle system based on a Gazebo tool.
3. The method of claim 1, wherein the constructing a three-dimensional simulation scene dynamic map based on the sensing data of the unmanned vehicle system further comprises:
and based on the three-dimensional laser radar point cloud data of the unmanned vehicle system, fusing the semi-dense map to construct an environment dense map.
4. The method of claim 1, wherein the constructing a semi-dense map based on the sensed data of the unmanned vehicle system comprises:
constructing a semi-dense map based on binocular vision sensing data of the unmanned vehicle system;
fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system to the semi-dense map to perform inter-frame motion estimation of the unmanned vehicle system;
and carrying out real-time correction on the semi-dense map through loop detection based on a key frame acquired by inter-frame motion estimation of the unmanned vehicle system.
5. The method of claim 1, wherein T_{k,k-1} is the pose transformation between the last frame I_{k-1} and the current frame I_k of the images captured by the camera of the unmanned vehicle system, I denotes the image, δI is the photometric residual, i.e. the intensity difference of the same three-dimensional point ρ_u observed in the last frame I_{k-1} and the current frame I_k, and u is the center position of the image block.
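For illustration only (not part of the claims): a minimal sketch of the photometric residual δI of claim 5 as an intensity difference between corresponding image blocks; the helper name and the squared-sum form are assumptions:

```python
import numpy as np

def photometric_residual(patch_prev, patch_curr):
    """Photometric residual for one three-dimensional point: the intensity
    difference between the image blocks observing it in the last frame and
    in the current frame, accumulated here as a squared error."""
    a = np.asarray(patch_prev, float)
    b = np.asarray(patch_curr, float)
    return float(np.sum((a - b) ** 2))
```

Minimizing this residual over the incremental motion T_{k,k-1} is what the inter-frame estimation step does.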
6. The method of claim 4, wherein the loop-back detection comprises:
extracting key point features and key line segment features in the key frame, constructing a bag-of-words model for the key point features and key line segment features, and storing the bag-of-words model into a data dictionary;
when the camera of the unmanned vehicle system moves, calculating a similarity score value for the obtained current key frame, and searching an image which is most similar to the current key frame in the data dictionary;
and correcting the semi-dense map in real time based on the most similar images.
7. The method of claim 6, wherein said computing a similarity score value for a current keyframe and looking up an image in the data dictionary that is most similar to the current keyframe as the camera of the unmanned vehicle system moves comprises:
determining the number of key points n_k and the number of key line segments n_l of the current key frame;
calculating the dispersion value d_k of the key points according to the coordinates (x, y) of the key points, and calculating the dispersion value d_l of the key line segments according to the midpoint coordinates of the key line segments;
calculating the similarity score value A_t of the obtained image frame based on the weight a_k of the key point feature score S_k and the weight a_l of the key line segment feature score S_l;
wherein A_t = a_k · (n_k/(n_k+n_l) + d_k/(d_k+d_l)) · S_k + a_l · (n_l/(n_k+n_l) + d_l/(d_k+d_l)) · S_l.
8. The method of claim 7, wherein the calculating the dispersion value d_k of the key points according to the coordinates (x, y) of the key points comprises:
calculating the sum of the variances of the x coordinates and the y coordinates;
calculating the square root of the sum of variances as the dispersion value d_k of the key points.
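For illustration only (not part of the claims): the dispersion value of claim 8 and the similarity score A_t of claim 7 can be sketched as follows; the helper names and the default weights a_k = a_l = 0.5 are assumptions:

```python
import numpy as np

def dispersion(points):
    """Dispersion value per claim 8: the square root of the sum of the
    variances of the x and y coordinates."""
    pts = np.asarray(points, float)
    return float(np.sqrt(pts[:, 0].var() + pts[:, 1].var()))

def similarity_score(S_k, S_l, n_k, n_l, d_k, d_l, a_k=0.5, a_l=0.5):
    """Similarity score A_t per claim 7: point and line-segment bag-of-words
    scores weighted by their feature counts and dispersion values."""
    return (a_k * (n_k / (n_k + n_l) + d_k / (d_k + d_l)) * S_k
            + a_l * (n_l / (n_k + n_l) + d_l / (d_k + d_l)) * S_l)
```

Frames rich in well-spread features of one type thus contribute more of that type's score to the loop-closure decision.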
9. The method of claim 3, wherein constructing an ambient dense map based on three-dimensional lidar point cloud data of the unmanned vehicle system and fusing the semi-dense map comprises:
based on the three-dimensional laser point cloud data of the unmanned vehicle system, fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system to obtain a point cloud key frame;
obtaining the pose of the unmanned vehicle system through the inter-frame motion estimation of the unmanned vehicle system, translating and rotating the point cloud key frame according to the pose of the unmanned vehicle system, and constructing a three-dimensional point cloud model;
And fusing the semi-dense map and the three-dimensional point cloud model to construct an environment dense map.
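For illustration only (not part of the claims): translating and rotating a point cloud key frame by the vehicle pose (claim 9) amounts to applying a rotation R and a translation t to every point; a minimal NumPy sketch with a hypothetical helper name:

```python
import numpy as np

def transform_cloud(points, R, t):
    """Rotate and translate a point cloud (N x 3) into the map frame using
    the vehicle pose (R, t) obtained from inter-frame motion estimation."""
    return np.asarray(points, float) @ np.asarray(R, float).T + np.asarray(t, float)
```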
10. The method of claim 9, wherein the estimating the pose of the unmanned vehicle system by the inter-frame motion of the unmanned vehicle system, translating and rotating the point cloud keyframes according to the pose of the unmanned vehicle system, and constructing a three-dimensional point cloud model comprises:
establishing a three-dimensional grid map based on the size of the unmanned vehicle system by taking the starting point coordinate of the unmanned vehicle system as an origin, taking the north direction as the positive X-axis direction, taking the east direction as the positive Y-axis direction and taking the upward direction as the positive Z-axis direction;
according to the current pose of the unmanned vehicle system, translating and rotating the acquired obstacle point clouds, filling the translated and rotated point clouds into corresponding grids, and recording the number S of the point clouds in each grid;
updating each grid in a local range centered on the unmanned vehicle system, the updated number of point clouds of each grid being S⁺ = (1 − α)·S⁻ + α·S; wherein S⁻ is the number of point clouds in the grid before updating, S⁺ is the number of point clouds in the grid after updating, α is a forgetting coefficient between 0 and 1, S is the current number of point clouds, and S is 0 for a non-obstacle grid;
performing threshold judgment on the updated number of point clouds S⁺: if the updated number of point clouds S⁺ reaches the preset value, determining the grid as an obstacle, and otherwise determining the grid as passable space.
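For illustration only (not part of the claims): a sketch of the grid update of claim 10, assuming the forgetting-factor form S⁺ = (1 − α)·S⁻ + α·S reconstructed from the claim text; the helper name and the threshold value are assumptions:

```python
import numpy as np

def update_grid(prev_counts, current_counts, alpha=0.3, threshold=5.0):
    """Forgetting-factor update of per-cell point counts,
    S_plus = (1 - alpha) * S_minus + alpha * S, followed by thresholding:
    a cell is marked as an obstacle when its updated count reaches the
    preset value, otherwise it is passable space."""
    prev = np.asarray(prev_counts, float)
    curr = np.asarray(current_counts, float)  # 0 for non-obstacle cells
    updated = (1.0 - alpha) * prev + alpha * curr
    obstacle = updated >= threshold
    return updated, obstacle
```

The forgetting coefficient lets stale obstacles decay while newly observed ones accumulate evidence before being declared.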
11. The method of claim 1, wherein the performing object tracking on the physical simulation model of the unmanned aerial vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map comprises:
in the three-dimensional simulation scene dynamic map, constructing a 3D roadmap according to the size of the unmanned aerial vehicle system;
and searching for an optimal path through an A* algorithm based on the 3D roadmap.
12. The method of claim 11, wherein said creating a 3D roadmap in said three-dimensional simulated scene dynamic map from the dimensions of said unmanned aerial vehicle system comprises:
randomly selecting sampling points in the three-dimensional simulation scene dynamic map; if a sampling point is located inside an obstacle, discarding it, and if it is located in the passable space, retaining it; and so on, to construct a sampling point diagram based on the retained sampling points;
connecting the sampling points based on the sampling point diagram; if a connecting line collides with an obstacle, canceling the connecting line; and so on, to complete the connecting lines of all the sampling points;
and connecting at least one adjacent point around each sampling point; if a sampling point and an adjacent point can be connected, retaining the corresponding connecting line; and so on, to complete the construction of the 3D roadmap.
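For illustration only (not part of the claims): the sampling-and-connecting construction of claim 12 resembles a probabilistic roadmap; a minimal sketch under that assumption, with hypothetical helper names:

```python
import random
import numpy as np

def build_roadmap(is_free, sample_fn, n_samples=200, k_neighbors=5,
                  edge_free=lambda p, q: True, seed=0):
    """Roadmap construction in the spirit of claim 12: sample points, keep
    only those in passable space, then connect each kept point to its
    nearest neighbors when the connecting line is collision-free."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < n_samples:
        p = sample_fn(rng)
        if is_free(p):              # discard samples inside obstacles
            nodes.append(p)
    pts = np.asarray(nodes, float)
    edges = []
    for i, p in enumerate(pts):
        # connect to the k nearest neighbors, skipping the point itself
        d = np.linalg.norm(pts - p, axis=1)
        for j in np.argsort(d)[1:k_neighbors + 1]:
            if edge_free(p, pts[j]):
                edges.append((i, int(j)))
    return pts, edges
```

Here `is_free` and `edge_free` stand in for the obstacle tests against the dynamic map; both are assumed callbacks.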
13. The method of claim 12, wherein, at intervals or after each time the unmanned aerial vehicle system moves a certain distance, the 3D roadmap is updated according to the new obstacle environment: new sampling points are added to the 3D roadmap, or sampling points or connecting lines that collide with an obstacle are deleted.
14. The method of claim 11, wherein the searching for an optimal path through an A* algorithm based on the 3D roadmap comprises:
calculating, by the A* algorithm, based on a given starting point and end point of the unmanned aerial vehicle system, the estimated value of reaching the end point from the starting point through each sampling point in the 3D roadmap;
sorting the estimated values, deleting the sampling point with the minimum estimated value, and adding the surrounding points of the deleted sampling point as expansion points;
and cycling in sequence until the end point is searched.
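For illustration only (not part of the claims): a minimal sketch of the A* search of claim 14 — expand the node with the smallest estimated value, add its surrounding points as expansion points, and cycle until the end point is found; the helper names are assumptions:

```python
import heapq
import itertools

def a_star(neighbors, cost, heuristic, start, goal):
    """A* search over a roadmap: repeatedly expand the open node with the
    smallest estimated total value f = g + h until the goal is reached,
    then reconstruct the path from the recorded parents."""
    counter = itertools.count()  # tie-breaker for the heap
    open_heap = [(heuristic(start), next(counter), 0.0, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:         # already expanded with a better value
            continue
        came_from[node] = parent
        if node == goal:              # reconstruct path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(node):   # surrounding points as expansion points
            ng = g + cost(node, nxt)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                heapq.heappush(
                    open_heap, (ng + heuristic(nxt), next(counter), ng, nxt, node))
    return None  # end point unreachable
```

With an admissible heuristic (e.g. straight-line distance in the roadmap), the first path returned is optimal.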
15. A simulation system of an unmanned cluster system, wherein the unmanned cluster system comprises an unmanned vehicle system and an unmanned aerial vehicle system, the system comprising:
the simulation model building module is used for constructing physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system;
the simulation map construction module is used for constructing a three-dimensional simulation scene dynamic map based on the sensing data of the unmanned vehicle system, including constructing a semi-dense map based on the sensing data of the unmanned vehicle system, performing inter-frame motion estimation of the unmanned vehicle system based on the semi-dense map, and estimating the incremental motion T_{k,k-1} of the unmanned vehicle system so as to minimize the photometric error δI, comprising:
calculating, in the last frame I_{k-1}, the pixel intensity difference value of the three-dimensional point ρ_u by re-projection;
wherein T_{BC} is the transformation matrix between the body coordinate system B and the camera coordinate system C of the unmanned vehicle system, and u is the center position of the image block in the last frame I_{k-1};
performing image feature alignment in the current frame I_k: calculating the pixel intensity difference of the image block P, centered on the projected feature position u′ in the current frame I_k, relative to the reference image block in the frame I_r in which the same feature was first observed;
correcting the projected feature position u′ to obtain the corrected projected feature position u′* in the current frame I_k;
for a point feature:
u′* = u′ + δu*;
wherein δu* is the iteration variable of the intensity-difference sum computed over the image block P, δu is the reprojection error, T_{CB} is the transformation matrix between the camera coordinate system C and the body coordinate system B of the unmanned vehicle system, T_{kr} is the transformation matrix between the current frame I_k and the frame I_r, and A is a constant;
for a line segment feature:
u′* = u′ + δu*·n;
wherein δu* is the iteration variable of the intensity-difference sum computed over the image block P, δu is the reprojection error, A is a constant, and n is the normal direction of the line segment feature;
optimizing the camera poses and landmark positions χ = {T_{kW}, ρ_i} by minimizing the sum of squares of the reprojection errors;
where K is the set of all key frames in the semi-dense map, P_k is the set of all landmarks corresponding to point features in the current frame I_k, L_k is the set of all landmarks corresponding to line segment features in the current frame I_k, and T_{kW} is the transformation matrix between the current frame I_k and the key frame;
and the simulation module is used for performing target tracking on the physical simulation models of the unmanned vehicle system and the unmanned aerial vehicle system based on the three-dimensional simulation scene dynamic map.
16. The system of claim 15, wherein the simulation model building module is configured to:
Constructing a physical simulation model of the unmanned aerial vehicle system based on a V-REP tool;
and constructing a physical simulation model of the unmanned vehicle system based on a Gazebo tool.
17. The system of claim 15, wherein the simulated map building module is further configured to:
and based on the three-dimensional laser radar point cloud data of the unmanned vehicle system, fusing the semi-dense map to construct an environment dense map.
18. The system of claim 15, wherein the simulated map building module is configured to:
constructing a semi-dense map based on binocular vision sensing data of the unmanned vehicle system;
fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system to the semi-dense map to perform inter-frame motion estimation of the unmanned vehicle system;
and carrying out real-time correction on the semi-dense map through loop detection based on a key frame acquired by inter-frame motion estimation of the unmanned vehicle system.
19. The system of claim 15, wherein T_{k,k-1} is the pose transformation between the last frame I_{k-1} and the current frame I_k of the images captured by the camera of the unmanned vehicle system, I denotes the image, δI is the photometric residual, i.e. the intensity difference of the same three-dimensional point ρ_u observed in the last frame I_{k-1} and the current frame I_k, and u is the center position of the image block.
20. The system of claim 18, wherein the simulated map building module is configured to:
extracting key point features and key line segment features in the key frame, constructing a bag-of-words model for the key point features and key line segment features, and storing the bag-of-words model into a data dictionary;
when the camera of the unmanned vehicle system moves, calculating a similarity score value for the obtained current key frame, and searching an image which is most similar to the current key frame in the data dictionary;
and correcting the semi-dense map in real time based on the most similar images.
21. The system of claim 20, wherein the simulated map building module is configured to:
determining the number of key points n_k and the number of key line segments n_l of the current key frame;
calculating the dispersion value d_k of the key points according to the coordinates (x, y) of the key points, and calculating the dispersion value d_l of the key line segments according to the midpoint coordinates of the key line segments;
calculating the similarity score value A_t of the obtained image frame based on the weight a_k of the key point feature score S_k and the weight a_l of the key line segment feature score S_l;
wherein A_t = a_k · (n_k/(n_k+n_l) + d_k/(d_k+d_l)) · S_k + a_l · (n_l/(n_k+n_l) + d_l/(d_k+d_l)) · S_l.
22. The system of claim 21, wherein the simulated map construction module is configured to:
Calculating the sum of variances of the x coordinate and the y coordinate;
calculating the square root of the sum of variances as the dispersion value d_k of the key points.
23. The system of claim 17, wherein the simulated map building module is configured to:
based on the three-dimensional laser point cloud data of the unmanned vehicle system, fusing the gyroscope and inertial navigation sensing data of the unmanned vehicle system to obtain a point cloud key frame;
obtaining the pose of the unmanned vehicle system through the inter-frame motion estimation of the unmanned vehicle system, translating and rotating the point cloud key frame according to the pose of the unmanned vehicle system, and constructing a three-dimensional point cloud model;
and fusing the semi-dense map and the three-dimensional point cloud model to construct an environment dense map.
24. The system of claim 23, wherein the simulated map construction module is configured to:
establishing a three-dimensional grid map based on the size of the unmanned vehicle system by taking the starting point coordinate of the unmanned vehicle system as an origin, taking the north direction as the positive X-axis direction, taking the east direction as the positive Y-axis direction and taking the upward direction as the positive Z-axis direction;
according to the current pose of the unmanned vehicle system, translating and rotating the acquired obstacle point clouds, filling the translated and rotated point clouds into corresponding grids, and recording the number S of the point clouds in each grid;
updating each grid in a local range centered on the unmanned vehicle system, the updated number of point clouds of each grid being S⁺ = (1 − α)·S⁻ + α·S; wherein S⁻ is the number of point clouds in the grid before updating, S⁺ is the number of point clouds in the grid after updating, α is a forgetting coefficient between 0 and 1, S is the current number of point clouds, and S is 0 for a non-obstacle grid;
performing threshold judgment on the updated number of point clouds S⁺: if the updated number of point clouds S⁺ reaches the preset value, determining the grid as an obstacle, and otherwise determining the grid as passable space.
25. The system of claim 15, wherein the simulation module is configured to:
in the three-dimensional simulation scene dynamic map, constructing a 3D roadmap according to the size of the unmanned aerial vehicle system;
and searching for an optimal path through an A* algorithm based on the 3D roadmap.
26. The system of claim 25, wherein the simulation module is configured to:
randomly selecting sampling points in the three-dimensional simulation scene dynamic map; if a sampling point is located inside an obstacle, discarding it, and if it is located in the passable space, retaining it; and so on, to construct a sampling point diagram based on the retained sampling points;
connecting the sampling points based on the sampling point diagram; if a connecting line collides with an obstacle, canceling the connecting line; and so on, to complete the connecting lines of all the sampling points;
and connecting at least one adjacent point around each sampling point; if a sampling point and an adjacent point can be connected, retaining the corresponding connecting line; and so on, to complete the construction of the 3D roadmap.
27. The system of claim 26, wherein the simulation module is configured to: at intervals or after each time the unmanned aerial vehicle system moves a certain distance, update the 3D roadmap according to the new obstacle environment, add new sampling points to the 3D roadmap, or delete sampling points or connecting lines that collide with an obstacle.
28. The system of claim 25, wherein the simulation module is configured to:
calculate, by the A* algorithm, based on a given starting point and end point of the unmanned aerial vehicle system, the estimated value of reaching the end point from the starting point through each sampling point in the 3D roadmap;
sort the estimated values, delete the sampling point with the minimum estimated value, and add the surrounding points of the deleted sampling point as expansion points;
and cycle in sequence until the end point is searched.
29. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any of claims 1-14.
30. A computer readable storage medium having stored thereon a computer program, wherein the computer program is executed by a processor to implement the method of any of claims 1-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110883152.0A CN113761647B (en) | 2021-08-02 | 2021-08-02 | Simulation method and system of unmanned cluster system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113761647A CN113761647A (en) | 2021-12-07 |
CN113761647B true CN113761647B (en) | 2023-06-30 |
Family
ID=78788360
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110883152.0A Active CN113761647B (en) | 2021-08-02 | 2021-08-02 | Simulation method and system of unmanned cluster system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113761647B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12024196B2 (en) * | 2020-06-17 | 2024-07-02 | Shenzhen Guo Dong Intelligent Drive Technologies Co., Ltd | Simulation test method for autonomous driving vehicle, computer equipment and medium |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114964269B (en) * | 2022-08-01 | 2022-11-08 | 成都航空职业技术学院 | Unmanned aerial vehicle path planning method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102789171A (en) * | 2012-09-05 | 2012-11-21 | 北京理工大学 | Method and system for semi-physical simulation test of visual unmanned aerial vehicle flight control |
CN110675418A (en) * | 2019-09-26 | 2020-01-10 | 深圳市唯特视科技有限公司 | Target track optimization method based on DS evidence theory |
CN111694287A (en) * | 2020-05-14 | 2020-09-22 | 北京百度网讯科技有限公司 | Obstacle simulation method and device in unmanned simulation scene |
Also Published As
Publication number | Publication date |
---|---|
CN113761647A (en) | 2021-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109974693B (en) | Unmanned aerial vehicle positioning method and device, computer equipment and storage medium | |
Qin et al. | Lins: A lidar-inertial state estimator for robust and efficient navigation | |
CN109084732B (en) | Positioning and navigation method, device and processing equipment | |
US11313684B2 (en) | Collaborative navigation and mapping | |
CN109211241B (en) | Unmanned aerial vehicle autonomous positioning method based on visual SLAM | |
Lupton et al. | Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions | |
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection | |
CN112525202A (en) | SLAM positioning and navigation method and system based on multi-sensor fusion | |
CN112304307A (en) | Positioning method and device based on multi-sensor fusion and storage medium | |
CN110726406A (en) | Improved nonlinear optimization monocular inertial navigation SLAM method | |
US10347001B2 (en) | Localizing and mapping platform | |
Geneva et al. | An efficient schmidt-ekf for 3D visual-inertial SLAM | |
CN110749308B (en) | SLAM-oriented outdoor positioning method using consumer-grade GPS and 2.5D building models | |
CN102607532B (en) | Quick low-level image matching method by utilizing flight control data | |
CN110986945A (en) | Local navigation method and system based on semantic height map | |
CN115639823A (en) | Terrain sensing and movement control method and system for robot under rugged and undulating terrain | |
CN113761647B (en) | Simulation method and system of unmanned cluster system | |
CN117367427A (en) | Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment | |
Ellingson et al. | Relative visual-inertial odometry for fixed-wing aircraft in GPS-denied environments | |
CN111815684A (en) | Space multivariate feature registration optimization method and device based on unified residual error model | |
Abdulov et al. | Visual odometry approaches to autonomous navigation for multicopter model in virtual indoor environment | |
Cao et al. | Visual-Inertial-Laser SLAM Based on ORB-SLAM3 | |
CN109901589B (en) | Mobile robot control method and device | |
Hu et al. | Efficient Visual-Inertial navigation with point-plane map | |
Gui et al. | Robust direct visual inertial odometry via entropy-based relative pose estimation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||