CN117252011A - Heterogeneous ground-air unmanned cluster simulation system construction method based on distributed architecture - Google Patents


Info

Publication number
CN117252011A
CN117252011A (application number CN202311228857.4A)
Authority
CN
China
Prior art keywords
unmanned
platform
task
distributed architecture
constructing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311228857.4A
Other languages
Chinese (zh)
Inventor
安旭阳
宋威龙
余雪玮
韩乐
杨婷婷
苏治宝
苏波
冯付勇
党睿娜
李兆冬
项燊
白晨青
田泽宇
张梦轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongbing Intelligent Innovation Research Institute Co ltd
Original Assignee
Zhongbing Intelligent Innovation Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongbing Intelligent Innovation Research Institute Co ltd filed Critical Zhongbing Intelligent Innovation Research Institute Co ltd
Priority to CN202311228857.4A priority Critical patent/CN117252011A/en
Publication of CN117252011A publication Critical patent/CN117252011A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 - Creation or updating of map data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 - Arrangements for software engineering
    • G06F 8/20 - Software design

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the technical field of ground unmanned platform system architecture, and in particular relates to a method for constructing a heterogeneous ground-air unmanned cluster simulation system based on a distributed architecture. The method comprises: constructing a universal command control terminal that monitors the running state of the unmanned cluster in a digital simulation scene and performs single-platform and multi-platform task planning by issuing single-platform and group task control instructions; constructing an unmanned platform task editing function module that realizes temporary human-in-the-loop decisions, dynamically adjusts the task state of the unmanned cluster, and supports multi-mode control strategies such as one-to-one and one-to-many control; completing, on the basis of the Unreal Engine, the design of cross-domain, cross-platform heterogeneous unmanned cluster models and sensor models to form platform models, and forming fully functional single-platform simulation models with the support of an interoperable distributed architecture; and designing single-platform and multi-platform collaborative scenarios and completing formation maneuvering tasks in the semi-physical simulation system.

Description

Heterogeneous ground-air unmanned cluster simulation system construction method based on distributed architecture
Technical Field
The invention belongs to the technical field of ground unmanned platform system architecture, and particularly relates to a heterogeneous ground-air unmanned cluster simulation system construction method based on a distributed architecture.
Background
Artificial intelligence technology is driving the transition from the information age to the intelligent age and, with its unique advantages, plays an important role in fields such as unmanned driving, intelligent manufacturing, biological materials, virtual reality, intelligent security, and cloud computing.
At present, ground unmanned platforms have been deployed and tested in countries such as the United States and Russia, and their performance is improving rapidly. To verify the performance of a ground unmanned platform more fully, reliability testing must be carried out in environments such as extreme cold, extreme heat, humidity, and dust, so civilian autonomous-driving companies have built virtual simulation systems. Waymo, a subsidiary of Google, built the Carcraft virtual-city simulation system, whose autonomous-driving suite can travel 20 million miles per day on virtual roads, equivalent to 10 years of driving in a physical scene; to date it has completed more than 15 billion simulated miles and 20 million miles of real road testing. Tencent developed the autonomous-driving virtual simulation test platform TAD Sim 2.0, which can test 1,000 kilometers per day.
Simulation systems play an important role in the product development process: they support repeatable testing in a laboratory setting, can verify overall system technical indicators and key subsystem indicators, and feed results back into product design through techniques such as effectiveness evaluation. At the same time, military unmanned platforms differ from civilian autonomous driving: functional modules such as task understanding, command and control, and system coordination are added, and the driving surface is mainly off-road terrain, so civilian autonomous-driving simulation systems cannot simply be reused. Therefore, the invention takes the generalized control terminal, the ground unmanned platform controller, and the virtual simulation environment as basic elements and, with a distributed architecture as the core, constructs a ground unmanned platform simulation system for collaborative control simulation of autonomous navigation and formation, thereby optimizing and improving the effectiveness of the ground unmanned platform.
Disclosure of Invention
First, the technical problem to be solved
The technical problem the invention aims to solve is: in view of the deficiencies of the prior art, how to provide a method for constructing a heterogeneous ground-air unmanned cluster simulation system based on a distributed architecture.
(II) technical scheme
In order to solve the technical problems, the invention provides a method for constructing a heterogeneous ground-air unmanned cluster simulation system based on a distributed architecture, which comprises the following steps:
step 1: command control terminal system construction;
step 2: designing a software and hardware interface of the unmanned platform controller;
step 3: parameterizing and constructing a digital scene;
step 4: and integrating a semi-physical simulation system.
The process for constructing the command control terminal system in the step 1 comprises the following steps:
step 11: constructing a task planning module and a state monitoring module according to the functional requirements of the unmanned platform; the task planning module comprises a command control terminal-autonomous navigation system module, a command control terminal-chassis module, a command control terminal-task load module and a command control terminal-multi-platform task management module;
the command control terminal-autonomous navigation system module is used for switching operation control modes of the unmanned platform, including an autonomous mode and a remote control mode, and simultaneously receiving a task path from the control terminal, and generating speed and curvature control instructions through a local path planner and a bottom layer controller;
the command control terminal-chassis module is used to realize temporary human-in-the-loop intervention control; when the track of the ground unmanned platform deviates from the preset path, speed and steering angle commands are issued directly through the distributed architecture;
the command control terminal-task load module is used for generating command information and control information; the command information switches the weapon type, the visible-light/infrared channel, and the camera focal length of the ground unmanned platform, and the control information gives the horizontal rotation angle and pitch angle of the weapon load;
the command control terminal-multi-platform task management module is used for sending cluster cooperative control instructions, including formation maneuvering and one-key emergency stop;
the state monitoring module is used for acquiring operation state information of the unmanned platform, supporting acquisition of position information (UTM coordinates, course angle, pitch angle and roll angle), state information (speed, front wheel steering angle) and task load position information (horizontal angle and pitch angle) of the unmanned platform, and simultaneously transmitting image data back to the command control terminal based on the deployed RGB camera;
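The status record described above might be modeled as a simple structure; the field names and the speed check below are illustrative assumptions, since the patent does not specify a schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of the per-platform status record described above.
# Field names are illustrative; the patent does not specify a schema.

@dataclass
class PlatformStatus:
    platform_id: int
    utm_easting: float        # UTM coordinates (m)
    utm_northing: float
    heading_deg: float        # course angle
    pitch_deg: float
    roll_deg: float
    speed_mps: float          # platform speed
    steer_deg: float          # front-wheel steering angle
    payload_pan_deg: float    # task-load horizontal angle
    payload_tilt_deg: float   # task-load pitch angle

def within_limits(s: PlatformStatus, max_speed: float = 10.0) -> bool:
    """Simple monitoring check: flag platforms exceeding a speed limit."""
    return abs(s.speed_mps) <= max_speed
```

In a real system this record would be filled from the distributed architecture and rendered on the command control terminal.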
step 12: an intelligent task planning module is constructed, wherein the intelligent task planning module takes an initial task target list as input, generates a deployment scheme through reinforcement learning, and completes single-platform and multi-platform task planning;
step 13: when the single body and the group deviate from a preset track, the command control terminal carries out intervention decision on the single body or the group according to the IP and the port number of the unmanned platform, dynamically adjusts the running state of the unmanned platform, and has one-control-multiple-mode control strategies.
The process of designing the software and hardware interface of the unmanned platform controller in the step 2 comprises the following steps:
step 21: designing an unmanned platform hardware interface, and simulating a physical scene of unmanned platform movement by adopting an UNREAL virtual simulation environment, wherein each digital unmanned platform is provided with 1 RGB camera, 1 wire harness configurable multi-line laser radar and positioning equipment, and the digital unmanned platform is used for replacing sensor equipment on a real installation;
step 22: a software interface of a ground unmanned platform is designed, laser radar point cloud data, positioning data and camera image data in a virtual simulation environment are obtained through a Google remote procedure call (Google Remote Procedure Call, GRPC) communication middleware of an open source, and a data processing module, an autonomous navigation module and a system collaborative software module are deployed on an industrial personal computer;
step 23: the acquired sensor data is converted into a data format which can be read by a ground unmanned platform through GRPC/distributed architecture analysis and inverse analysis middleware, and the sensor data is uploaded to the distributed architecture, so that data sharing among the platforms is realized;
step 24: the multi-line radar map building module in autonomous navigation fuses the fusion positioning data and the laser radar to construct a local cost map;
step 25: the image recognition module in autonomous navigation takes the acquired scene picture as input, and utilizes a YOLOv3 target recognition algorithm to recognize the person and the vehicle;
step 26: a path planning module in autonomous navigation constructs a local cost map according to the sensor data, generates a group of motion curves with speed and angular speed in a speed space by a dynamic window method (DWA), constructs a cost function of generating the curves according to the maximum curvature, the speed, the transverse offset value relative to a target point and the offset value relative to an obstacle, and selects a minimum cost curve as an optimal path;
step 27: converting the optimal distance into the speed and curvature for driving the ground unmanned platform to run through a pure tracking algorithm, so as to realize an unmanned platform autonomous navigation system;
step 28: the ground unmanned platform sends the acquired unmanned platform state information such as positioning data, control instructions (speed and curvature) and images to a command control terminal through a distributed architecture to complete visual display;
through the distributed architecture, each unmanned platform can obtain the position states of the other unmanned platforms in the field and realize multi-platform formation maneuvering tasks through the system collaboration software module.
The process of the parameterized construction of the digital scene in the step 3 comprises the following steps:
step 31: designing a digital topography; constructing unstructured terrains of 30 square kilometers through Landscape of the illusion engine, and realizing high-pixel altitude map generation through powerful detail features (LOD) and efficient memory processing modes, and carrying out raising and lowering operations on terrains to form terrains of mountains, hills, valleys and trenches;
step 32: constructing a digital building; building a three-dimensional model of a building and a tree through 3DMAX software, and then importing the three-dimensional model into an illusion engine to generate a dynamic model; for close-up buildings, more accurate models and mapping are needed, placing the building in a main area of 2x2 km, with details of windows, walls, doors, floors, glass, pipes; for a long-range building, the appearance is taken as a main part, the model is subjected to order reduction treatment, and only the appearance of the building is kept;
step 33: constructing an unmanned vehicle model; each unmanned platform is composed of a platform body and a load; instantiating a PxVehicleWheelsSimData class by PhysX for storing configuration parameters of four wheels including fields of suspension strength, damping rate, wheel mass, tire stiffness; then instantiating PxVehicleDrivSimData 4W class for storing configuration parameters of the driving model including fields of engine peak torque, clutch strength, transmission ratio and Ackerman steering correction; finally instantiating a PxRigidDynamicoactor class for configuring geometric shapes and dynamic properties of wheels and chassis, including mass, moment of inertia and mass center;
step 34: constructing an unmanned aerial vehicle model; the unmanned aerial vehicle adopts a four-rotor unmanned aerial vehicle, and can vertically lift and translate in multiple directions; by controlling the rotation coordinates of the propeller, the angle of the propeller can be updated every frame, and flight actions such as climbing and landing of the unmanned aerial vehicle are simulated. Each unmanned aerial vehicle mainly comprises a platform body and a load, and can realize information feedback of an area;
step 35: constructing a laser radar model; based on the working characteristics of the laser radar, simulating a ray physical rule, specifying a starting point and an ending point (0-200 m) of a ray in a certain range through a single lineTraceByChannel, and recording the angle and the distance of the ray if the ray is shielded; checking radar scanning conditions by calling a radar scanning interface;
step 36: constructing a camera model; the image data under the 3D scene is obtained through a SceneaptureComponent 2D component of the illusion engine, the image data is rendered to a textureRenderTarget2D through a shader rendering pipeline, and the image data in an RGB format is obtained after serialization;
step 37: constructing a positioning equipment model; based on the construction of a fantasy engine transformation module, the transformation attribute of the unmanned platform comprises position and rotation degree;
step 38: the virtual sensor/executor interface software establishes connection with the control computer through the GRPC, can receive the task instruction of the control computer, and transmits the control instruction to the virtual scene software, and the virtual scene software performs control on the corresponding units; likewise, the virtual scene software transmits the sensor data to the control computer through the virtual sensor software to complete fusion map building; the virtual scene computer has an effect display function, and the host is externally connected with a display to display the situation information of the unmanned aerial vehicle and the unmanned aerial vehicle in real time.
In step 33, the configuration parameters of the four wheels include fields of suspension strength, damping rate, wheel mass, and tire stiffness.
The process of integrating the semi-physical simulation system in the step 4 comprises the following steps:
step 41: selecting 1 Window10 operating system server to deploy a virtual simulation environment, selecting 3 Ubuntu system industrial personal computers to deploy ground unmanned platform controllers, simulating an autonomous navigation system of the ground unmanned platform, selecting 1 Ubuntu system to deploy a control terminal, simulating task instruction issuing and status data returning;
step 42: the autonomous navigation system of each unmanned platform can acquire laser radar point cloud data, positioning data and camera image data in a virtual simulation environment through the GRPC;
step 43: the acquired sensor data is converted into a data format which can be read by a ground unmanned platform through GRPC/distributed architecture analysis and inverse analysis middleware, and the sensor data is uploaded to the distributed architecture, so that data sharing among the platforms is realized;
step 44: with distributed architecture as supporting, issue formation collaborative instruction through command control terminal, including guide vehicle preset path, guide vehicle number, following distance and following speed, select 1 ground unmanned platform as guide vehicle, other 2 platforms are as following vehicle, specifically as follows:
step 441: remotely controlling the guided vehicle to travel for a distance including the processes of straight running and steering running, and recording the position of the guided vehicle at intervals of 3 meters as a task path point of subsequent formation running;
step 442: uniformly managing newly added or exited nodes through a distributed architecture CMRS, and constructing a topology structure of an unmanned platform connection relation in the current formation;
step 443: and loading a distributed architecture resource tree of the guided vehicle, uploading self state information (such as positioning data and the like), and updating in real time.
Step 444: loading a distributed architecture resource tree of the following vehicle, uploading self state information (such as positioning data and the like) and updating in real time, and simultaneously searching the positioning data of the following vehicle according to the IP address and the port number of the guiding vehicle to serve as a task navigation point of the following vehicle;
step 445: the control terminal issues recorded task path points to the guided vehicle, the guided vehicle starts to conduct path planning, and a control instruction is issued by utilizing a single vehicle architecture Dnet/GRPC analysis reverse analysis program;
step 446: the following vehicle takes the position searched by the CRMS as a task path point of the following vehicle, and updates the position once every 1m interval; as the task points are always sent, the memory overhead is increased, and when the following vehicle passes through one task point each time, the following vehicle is rejected;
step 447: the unmanned platform starts to move according to formation in turn.
(III) beneficial effects
Compared with the prior art, the invention constructs a ground unmanned platform simulation system that enables autonomous navigation development for a single platform and collaborative strategy development for multi-platform systems, saving the labor and time costs of real-environment testing.
Drawings
FIG. 1 is a schematic diagram of a control interface.
Fig. 2 is a schematic diagram of the overall system scheme.
Fig. 3 is a hardware connection diagram.
Fig. 4 is a schematic diagram of a DWA.
FIG. 5 is a schematic diagram of a virtual simulation environment.
FIG. 6 is a schematic view of a terrain editing interface.
Fig. 7 is a schematic diagram of a building model.
Fig. 8 is an edge building effect diagram.
Fig. 9 is a schematic view of a radar harness.
Fig. 10 is a schematic diagram of image data.
FIG. 11 is a semi-physical simulation flow chart.
Detailed Description
To make the purpose, content, and advantages of the present invention clearer, embodiments of the invention are described in detail below with reference to the drawings and examples.
In order to solve the problems in the prior art, the invention provides a method for constructing a heterogeneous ground-air unmanned cluster simulation system based on a distributed architecture, which comprises the following steps:
step 1: command control terminal system construction;
step 2: designing a software and hardware interface of the unmanned platform controller;
step 3: parameterizing and constructing a digital scene;
step 4: and integrating a semi-physical simulation system.
The process for constructing the command control terminal system in the step 1 comprises the following steps:
step 11: according to the function requirements of the unmanned platform, a task planning module and a state monitoring module are constructed, as shown in figure 1; the task planning module comprises a command control terminal-autonomous navigation system module, a command control terminal-chassis module, a command control terminal-task load module and a command control terminal-multi-platform task management module;
the command control terminal-autonomous navigation system module is used for switching operation control modes of the unmanned platform, including an autonomous mode and a remote control mode, and simultaneously receiving a task path from the control terminal, and generating speed and curvature control instructions through a local path planner and a bottom layer controller;
the command control terminal-chassis module is used to realize temporary human-in-the-loop intervention control; when the track of the ground unmanned platform deviates from the preset path, speed and steering angle commands are issued directly through the distributed architecture;
the command control terminal-task load module is used for generating command information and control information; the command information switches the weapon type, the visible-light/infrared channel, and the camera focal length of the ground unmanned platform, and the control information gives the horizontal rotation angle and pitch angle of the weapon load;
the command control terminal-multi-platform task management module is used for sending cluster cooperative control instructions, including formation maneuvering and one-key emergency stop;
the state monitoring module is used for acquiring operation state information of the unmanned platform, supporting acquisition of position information (UTM coordinates, course angle, pitch angle and roll angle), state information (speed, front wheel steering angle) and task load position information (horizontal angle and pitch angle) of the unmanned platform, and simultaneously transmitting image data back to the command control terminal based on the deployed RGB camera;
step 12: an intelligent task planning module is constructed, wherein the intelligent task planning module takes an initial task target list as input, generates a deployment scheme through reinforcement learning, and completes single-platform and multi-platform task planning;
step 13: when the single body and the group deviate from a preset track, the command control terminal carries out intervention decision on the single body or the group according to the IP and the port number of the unmanned platform, dynamically adjusts the running state of the unmanned platform, and has one-control-multiple-mode control strategies.
The process of designing the software and hardware interface of the unmanned platform controller in the step 2 comprises the following steps:
step 21: designing an unmanned platform hardware interface, and simulating a physical scene of unmanned platform movement by adopting an UNREAL virtual simulation environment, wherein each digital unmanned platform is provided with 1 RGB camera, 1 wire harness configurable multi-line laser radar and positioning equipment, and the digital unmanned platform is used for replacing sensor equipment on a real installation, and the connection mode is shown in figure 3;
step 22: designing the software interface of the ground unmanned platform; lidar point cloud data, positioning data, and camera image data in the virtual simulation environment are obtained through the open-source gRPC (Google Remote Procedure Call) communication middleware, and a data processing module, an autonomous navigation module, and a system collaboration software module are deployed on an industrial personal computer, as shown in fig. 2;
step 23: the acquired sensor data is converted into a data format which can be read by a ground unmanned platform through GRPC/distributed architecture analysis and inverse analysis middleware, and the sensor data is uploaded to the distributed architecture, so that data sharing among the platforms is realized;
step 24: the multi-line radar map building module in autonomous navigation fuses the fusion positioning data and the laser radar to construct a local cost map;
step 25: the image recognition module in autonomous navigation takes the acquired scene picture as input, and utilizes a YOLOv3 target recognition algorithm to recognize the person and the vehicle;
step 26: a path planning module in autonomous navigation constructs a local cost map according to the sensor data, generates a group of motion curves of speed and angular speed in a speed space by a dynamic window method (DWA), constructs a cost function of generating the curves according to the maximum curvature, the speed, the transverse offset value relative to a target point and the offset value relative to an obstacle, and selects a minimum cost curve as an optimal path as shown in fig. 4;
step 27: converting the optimal distance into the speed and curvature for driving the ground unmanned platform to run through a pure tracking algorithm, so as to realize an unmanned platform autonomous navigation system;
step 28: the ground unmanned platform sends the acquired unmanned platform state information such as positioning data, control instructions (speed and curvature) and images to a command control terminal through a distributed architecture to complete visual display;
through the distributed architecture, each unmanned platform can obtain the position states of the other unmanned platforms in the field and realize multi-platform formation maneuvering tasks through the system collaboration software module.
As shown in fig. 5, the process of the parameterized construction of the digitized scene in the step 3 includes:
step 31: designing a digital topography; constructing unstructured terrains of 30 square kilometers through Landscape of the illusive engine, realizing high-pixel altitude map generation through powerful detail features (LOD) and efficient memory processing modes, and carrying out raising and lowering operations on terrains to form terrains of mountains, hills, valleys and trenches, as shown in FIG. 6;
step 32: constructing a digital building; building a three-dimensional model of a building and a tree through 3DMAX software, and then importing the three-dimensional model into an illusion engine to generate a dynamic model as shown in figure 7; for close-up buildings, more accurate models and mapping are needed, placing the building in a main area of 2x2 km, with details of windows, walls, doors, floors, glass, pipes; for a long-range building, the appearance is taken as a main part, the model is subjected to order reduction treatment, and only the appearance of the building is kept, as shown in fig. 8;
step 33: constructing an unmanned vehicle model; each unmanned platform is composed of a platform body and a load; instantiating a PxVehicleWheelsSimData class by PhysX for storing configuration parameters of four wheels including fields of suspension strength, damping rate, wheel mass, tire stiffness; then instantiating PxVehicleDrivSimData 4W class for storing configuration parameters of the driving model including fields of engine peak torque, clutch strength, transmission ratio and Ackerman steering correction; finally instantiating a PxRigidDynamicoactor class for configuring geometric shapes and dynamic properties of wheels and chassis, including mass, moment of inertia and mass center;
step 34: constructing an unmanned aerial vehicle model; the unmanned aerial vehicle adopts a four-rotor unmanned aerial vehicle, and can vertically lift and translate in multiple directions; by controlling the rotation coordinates of the propeller, the angle of the propeller can be updated every frame, and flight actions such as climbing and landing of the unmanned aerial vehicle are simulated. Each unmanned aerial vehicle mainly comprises a platform body and a load, and can realize information feedback of an area;
step 35: constructing a laser radar model; based on the working characteristics of the laser radar, simulating a ray physical rule, specifying a starting point and an ending point (0-200 m) of a ray in a certain range through a single lineTraceByChannel, and recording the angle and the distance of the ray if the ray is shielded; by calling the radar scanning interface, the radar scanning condition is checked, as shown in fig. 9;
step 36: constructing a camera model; obtaining image data in a 3D scene through a SceneaptureComponent 2D component of the illusion engine, rendering the image data to textureRenderTarget2D through a shader rendering pipeline, and obtaining image data in an RGB format after serialization, as shown in figure 10;
step 37: constructing a positioning equipment model; based on the construction of a fantasy engine transformation module, the transformation attribute of the unmanned platform comprises position and rotation degree;
step 38: the virtual sensor/executor interface software establishes connection with the control computer through the GRPC, can receive the task instruction of the control computer, and transmits the control instruction to the virtual scene software, and the virtual scene software performs control on the corresponding units; likewise, the virtual scene software transmits the sensor data to the control computer through the virtual sensor software to complete fusion map building; the virtual scene computer has an effect display function, and the host is externally connected with a display to display the situation information of the unmanned aerial vehicle and the unmanned aerial vehicle in real time.
The process of integrating the semi-physical simulation system in step 4 comprises the following steps:
step 41: 1 Windows 10 server is selected to deploy the virtual simulation environment; 3 Ubuntu industrial personal computers are selected to deploy the ground unmanned platform controllers, simulating the autonomous navigation systems of the ground unmanned platforms; and 1 Ubuntu machine is selected to deploy the control terminal, simulating task instruction issuing and status data return;
step 42: the autonomous navigation system of each unmanned platform can acquire laser radar point cloud data, positioning data and camera image data from the virtual simulation environment through gRPC;
step 43: the acquired sensor data are converted, through the gRPC/distributed architecture parsing and inverse parsing middleware, into a data format that the ground unmanned platform can read, and the sensor data are uploaded to the distributed architecture to realize data sharing among platforms;
step 44: with the distributed architecture as support, a formation cooperation instruction is issued through the command control terminal, including the preset path of the guide vehicle, the guide vehicle number, the following distance and the following speed; 1 ground unmanned platform is selected as the guide vehicle and the other 2 platforms as following vehicles; the simulation flow is shown in figure 11 and proceeds as follows:
step 441: the guide vehicle is remotely controlled to travel a distance that includes both straight-line and turning segments, and its position is recorded every 3 meters as the task path points for subsequent formation driving;
step 442: newly added or exited nodes are uniformly managed through the distributed architecture CMRS, and the topology of the unmanned platform connection relations in the current formation is constructed;
step 443: the distributed architecture resource tree of the guide vehicle is loaded, and its own state information (such as positioning data) is uploaded and updated in real time;
step 444: the distributed architecture resource tree of the following vehicle is loaded, and its own state information (such as positioning data) is uploaded and updated in real time; at the same time, the following vehicle looks up the positioning data of the guide vehicle, according to the guide vehicle's IP address and port number, to serve as its own task navigation points;
step 445: the control terminal issues the recorded task path points to the guide vehicle; the guide vehicle starts path planning, and control instructions are issued using the single-vehicle-architecture Dnet/gRPC parsing and inverse parsing program;
step 446: the following vehicle takes the positions looked up through the CMRS as its own task path points, updating them every 1 m; because task points are sent continuously, memory overhead would keep growing, so each task point is discarded once the following vehicle has passed it;
step 447: the unmanned platforms start to move in formation in turn.
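Steps 441-446 amount to a bounded producer/consumer of waypoints: the guide vehicle records a point every 3 m, and the following vehicle consumes points, discarding each one it passes so memory stays bounded. A minimal sketch (the class names are illustrative, not from the patent):

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

class GuideRecorder:
    """Record the guide vehicle's position every `spacing` metres (step 441)."""
    def __init__(self, spacing=3.0):
        self.spacing = spacing
        self.waypoints = []

    def update(self, pos):
        if not self.waypoints or dist(pos, self.waypoints[-1]) >= self.spacing:
            self.waypoints.append(pos)

class FollowerQueue:
    """Hold the follower's task points; discard each point once it has been
    passed, so memory does not grow without bound (step 446)."""
    def __init__(self, reach_tol=1.0):
        self.reach_tol = reach_tol
        self.pending = []

    def push(self, waypoint):
        self.pending.append(waypoint)

    def current_target(self, pos):
        # Drop every leading waypoint the follower has already reached
        while self.pending and dist(pos, self.pending[0]) <= self.reach_tol:
            self.pending.pop(0)
        return self.pending[0] if self.pending else None
```

Driving the recorder along 10 m of straight road yields waypoints spaced 3 m apart; once the follower comes within the 1 m tolerance of the head waypoint, that point is dropped from the queue.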
Example 1
The embodiment provides a method for constructing a heterogeneous ground-air unmanned cluster simulation system based on a distributed architecture, which comprises the following steps:
the method comprises the steps of constructing a universal command control terminal, monitoring the running state of an unmanned cluster in a digital simulation scene through a distributed software architecture, issuing single-platform and multi-platform task control instructions, and completing single-platform and multi-platform task planning;
an unmanned platform task editing function module is constructed; human-in-the-loop online decision-making is realized through unmanned cluster rule adjustment, algorithm editing and the like; the task state of the unmanned cluster is dynamically adjusted; and multi-mode control strategies, such as one operator controlling one or many platforms, are provided;
based on the Unreal Engine, single-platform and group model construction techniques are developed; cross-domain, cross-platform heterogeneous unmanned cluster models and sensor models are completed, forming physical models covering kinematic and dynamic characteristics; oriented to tasks such as unmanned platform autonomous navigation and formation maneuver, modular task load components such as radar, infrared and photoelectric sensors and autonomous system components such as ultrasonic sensors and laser radar are integrated, under the support of the interoperable distributed architecture, into a functionally complete single-platform simulation model; single-platform and multi-platform system cooperation scenarios are then designed, and formation maneuver tasks are completed in the semi-physical simulation system.
The generalized command control terminal is as follows: according to the functional requirements of the ground unmanned platform, a task planning module (comprising command control terminal-autonomous navigation system, command control terminal-chassis and command control terminal-task load) and a state monitoring module are constructed; with an initial task target list as input, an intelligent task planning mechanism generates a force deployment scheme to complete single-platform and multi-platform task planning.
The human-in-the-loop decision control is as follows: the command control terminal and the unmanned platforms form a multi-node topological connection structure through the distributed architecture; the command control terminal can issue control instructions to the unmanned platforms and monitor their state; when a single platform or a group deviates from the preset track, an intervention decision is applied to it according to the unmanned platform's IP and port number, dynamically adjusting its running state; multi-mode control strategies, such as one operator controlling one or many platforms, are provided.
The virtual simulation environment is as follows: unstructured and structured terrains of 30 square kilometers are constructed through the Landscape module, forming off-road undulating, urban asphalt and grassland road surfaces, among others; under the constraints of the dynamics and kinematics of actual ground/aerial unmanned platforms, digital unmanned platforms are constructed; the working mode of an existing laser radar is simulated through ray casting; images in the scene are rendered via SceneCaptureComponent2D to a TextureRenderTarget2D through the shader rendering pipeline, and image data in RGB format are obtained after serialization; the transform attributes of the unmanned platform (position, rotation angle and the like) are constructed through the Transform module, forming the terrain, platform models, sensor models and task load models required for joint simulation. Under the support of the interoperable distributed architecture, modular task load components such as radar, infrared and photoelectric sensors and autonomous system components such as ultrasonic sensors and laser radar are integrated to form a functionally complete single-platform simulation model.
The semi-physical simulation system is as follows: 1 server is selected to deploy the digital simulation scene and provide sensor data input such as laser radar, camera and positioning equipment; 3 industrial personal computers are selected to deploy the single-platform autonomous navigation algorithms and group cooperative intelligence algorithms of the unmanned platforms, endowing autonomous cooperation capability at the platform and group levels; 1 portable control terminal is selected to deploy the command control software, sending instructions such as autonomous navigation and system cooperation to the unmanned platforms; the hardware is connected through a gigabit switch, and the software realizes virtual sensor data acquisition, inter-platform data sharing and the like through the distributed software architecture and the gRPC communication middleware.
The formation maneuver is as follows: with the distributed architecture as support, the command control terminal issues the formation cooperation instruction, including the preset path of the guide vehicle, the following vehicle numbers, the following distance, the following speed and the like; the guide vehicle performs local path planning on the preset path and uploads its motion track to the distributed architecture in the form of a resource tree; the following vehicles obtain the path information of the guide vehicle from the distributed architecture and then complete the corresponding local path planning and autonomous obstacle avoidance tasks, realizing autonomous formation maneuver.
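The resource-tree sharing described here can be sketched as a small publish/lookup store keyed by the owning platform's IP and port (the distributed architecture is not specified at this level of detail; the class, addresses and values below are illustrative):

```python
class ResourceTree:
    """Publish/lookup store: each platform publishes state under its own
    (IP, port) node; peers read it by addressing that node."""
    def __init__(self):
        self._tree = {}

    def publish(self, ip, port, key, value):
        self._tree.setdefault((ip, port), {})[key] = value

    def lookup(self, ip, port, key):
        return self._tree.get((ip, port), {}).get(key)

tree = ResourceTree()
# The guide vehicle uploads its motion track as it drives
tree.publish("192.168.1.10", 5000, "track", [(0.0, 0.0), (3.0, 0.1), (6.0, 0.4)])
# A following vehicle fetches the guide's track for its own local planning
guide_track = tree.lookup("192.168.1.10", 5000, "track")
```

A follower that queries an unknown address simply gets no data back, which is the failure mode the node management of step 442 is meant to handle.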
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that modifications and variations could be made by those skilled in the art without departing from the technical principles of the present invention, and such modifications and variations should also be regarded as being within the scope of the invention.

Claims (10)

1. A method for constructing a heterogeneous ground-air unmanned cluster simulation system based on a distributed architecture is characterized by comprising the following steps:
step 1: command control terminal system construction;
step 2: designing a software and hardware interface of the unmanned platform controller;
step 3: parameterizing and constructing a digital scene;
step 4: and integrating a semi-physical simulation system.
2. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture as claimed in claim 1, wherein the process of constructing the command control terminal system in step 1 comprises the following steps:
step 11: constructing a task planning module and a state monitoring module according to the functional requirements of the unmanned platform; the task planning module comprises a command control terminal-autonomous navigation system module, a command control terminal-chassis module, a command control terminal-task load module and a command control terminal-multi-platform task management module;
the command control terminal-autonomous navigation system module is used for switching the operation control modes of the unmanned platform, including an autonomous mode and a remote control mode; it also receives the task path from the control terminal and generates speed and curvature control instructions through the local path planner and the bottom-layer controller;
the command control terminal-chassis module is used for realizing temporary human-in-the-loop intervention control; when the track of the ground unmanned platform deviates from the preset path, speed and steering angle instructions are issued directly through the distributed architecture;
the command control terminal-task load module is used for generating command information and control information; the command information switches the weapon type, the white-light/infrared mode and the camera focal length of the ground unmanned platform, and the control information gives the horizontal rotation angle and pitch angle of the weapon load;
the command control terminal-multi-platform task management module is used for sending cluster cooperative control instructions, including formation maneuver and one-key emergency stop;
the state monitoring module is used for acquiring the running state information of the unmanned platform, supporting the acquisition of the position information, the state information and the task load position information of the unmanned platform, and simultaneously transmitting the image data back to the command control terminal based on the deployed RGB camera;
step 12: constructing an intelligent task planning module, which takes an initial task target list as input, generates a force deployment scheme through reinforcement learning, and completes single-platform and multi-platform task planning;
step 13: when a single platform or a group deviates from the preset track, the command control terminal makes an intervention decision on the single platform or the group according to the unmanned platform's IP and port number, dynamically adjusts its running state, and provides multi-mode control strategies such as one operator controlling one or many platforms.
3. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture according to claim 2, wherein in step 11, the position information of the unmanned platform includes UTM coordinates, heading angle, pitch angle, and roll angle.
4. The method for constructing heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture according to claim 2, wherein in step 11, the status information of the unmanned platform includes speed and front wheel steering angle.
5. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture according to claim 2, wherein in step 11, the task load position information of the unmanned platform includes a horizontal angle and a pitch angle.
6. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture as claimed in claim 2, wherein the process of designing the software and hardware interface of the unmanned platform controller in step 2 comprises the following steps:
step 21: designing the unmanned platform hardware interface; the UNREAL virtual simulation environment is adopted to simulate the physical scene of unmanned platform movement; each digital unmanned platform is equipped with 1 RGB camera, 1 multi-line laser radar with a configurable beam count, and positioning equipment, replacing the sensor equipment on a real vehicle;
step 22: designing the software interface of the ground unmanned platform; laser radar point cloud data, positioning data and camera image data in the virtual simulation environment are obtained by calling the open-source Google Remote Procedure Call (gRPC) communication middleware, and a data processing module, an autonomous navigation module and a system cooperation software module are deployed on the industrial personal computer;
step 23: the acquired sensor data are converted, through the gRPC/distributed architecture parsing and inverse parsing middleware, into a data format that the ground unmanned platform can read, and the sensor data are uploaded to the distributed architecture to realize data sharing among platforms;
step 24: the multi-line radar mapping module in autonomous navigation fuses the positioning data and the laser radar data to construct a local cost map;
step 25: the image recognition module in autonomous navigation takes the acquired scene images as input and recognizes persons and vehicles using the YOLOv3 target recognition algorithm;
step 26: the path planning module in autonomous navigation constructs a local cost map from the sensor data; using the dynamic window method, it samples speed and angular speed in the velocity space to generate a group of motion curves over a certain time, constructs a cost function for the generated curves from the maximum curvature, the speed, the lateral offset relative to the target point and the offset relative to obstacles, and selects the minimum-cost curve as the optimal path;
step 27: the optimal path is converted, through a pure pursuit algorithm, into the speed and curvature that drive the ground unmanned platform, realizing the unmanned platform autonomous navigation system;
step 28: the ground unmanned platform sends the acquired unmanned platform state information, such as positioning data, control instructions and images, to the command control terminal through the distributed architecture to complete visual display;
through the distributed architecture, each unmanned platform can obtain the position states of the other unmanned platforms in the field and, through the system cooperation software module, realize the multi-platform formation maneuver driving task.
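The fusion of positioning data and laser radar returns into a local cost map (step 24) can be sketched as a simple occupancy grid; the grid size, resolution and frame conventions below are illustrative assumptions, not values from the patent:

```python
import math

def build_cost_map(pose, points, size=40, resolution=0.5):
    """Mark laser radar returns (given in the sensor frame) in a local
    occupancy grid centred on the vehicle; pose = (x, y, yaw) comes from
    the fused positioning data."""
    x0, y0, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    grid = [[0] * size for _ in range(size)]
    half = size * resolution / 2.0  # 10 m around the vehicle
    for px, py in points:
        # Sensor frame -> world frame using the fused pose
        wx = x0 + c * px - s * py
        wy = y0 + s * px + c * py
        # World frame -> grid indices relative to the vehicle position
        gx = int((wx - x0 + half) / resolution)
        gy = int((wy - y0 + half) / resolution)
        if 0 <= gx < size and 0 <= gy < size:
            grid[gy][gx] = 1  # occupied cell
    return grid
```

A single return 1 m ahead of a vehicle at the origin marks exactly one occupied cell, and the same return lands in a different cell once the pose includes a 90° yaw, which is the role the positioning data plays in the fusion.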
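The dynamic window method of step 26 can be sketched as follows; the claim names the cost terms (curvature, speed, lateral offset to the target, offset to obstacles) but not their weights or exact form, so the weights, sample sets and simplified cost below are illustrative choices:

```python
import math

def rollout(v, w, t_max=2.0, dt=0.1):
    """Forward-simulate a (speed, yaw-rate) sample into a short motion curve."""
    x = y = th = 0.0
    traj = []
    t = 0.0
    while t < t_max:
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y))
        t += dt
    return traj

def curve_cost(traj, v, w, goal, obstacles):
    """Weighted cost over simplified versions of the step-26 terms."""
    ex, ey = traj[-1]
    goal_cost = math.hypot(goal[0] - ex, goal[1] - ey)
    clearance = min((math.hypot(ox - x, oy - y)
                     for x, y in traj for ox, oy in obstacles), default=10.0)
    obstacle_cost = 1.0 / max(clearance, 1e-3)
    curvature_cost = abs(w / v) if v > 1e-3 else 10.0
    return 1.0 * goal_cost + 0.5 * obstacle_cost + 0.1 * curvature_cost

def best_command(goal, obstacles, v_samples, w_samples):
    """Evaluate the sampled velocity window and return the minimum-cost (v, w)."""
    return min(((v, w) for v in v_samples for w in w_samples),
               key=lambda vw: curve_cost(rollout(vw[0], vw[1]),
                                         vw[0], vw[1], goal, obstacles))
```

With a goal 2 m straight ahead and no obstacles, the straight, faster candidate wins the cost comparison.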
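The conversion of the optimal path into speed and curvature commands in step 27 is the classic pure-pursuit relation k = 2*y/L^2, where y is the lateral offset of a lookahead point in the vehicle frame and L the lookahead distance; the lookahead distance and cruise speed below are illustrative, not from the patent:

```python
import math

def pure_pursuit(pose, path, lookahead=2.0, cruise_speed=1.5):
    """Return a (speed, curvature) command steering the vehicle toward the
    first path point at least `lookahead` metres away; pose = (x, y, yaw)."""
    x, y, yaw = pose
    target = None
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break
    if target is None:
        return 0.0, 0.0  # path exhausted: stop
    # Transform the target into the vehicle frame
    dx, dy = target[0] - x, target[1] - y
    lx = math.cos(-yaw) * dx - math.sin(-yaw) * dy
    ly = math.sin(-yaw) * dx + math.cos(-yaw) * dy
    ld2 = lx * lx + ly * ly
    curvature = 2.0 * ly / ld2  # pure-pursuit relation k = 2*y/L^2
    return cruise_speed, curvature
```

A target dead ahead yields zero curvature, and a target to the left yields a positive (left-turning) curvature under this frame convention.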
7. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture according to claim 6, wherein the process of parameterizing and constructing the digital scene in step 3 comprises the following steps:
step 31: designing the digital terrain; unstructured terrain of 30 square kilometers is constructed through the Landscape module of the Unreal Engine; high-resolution height map generation is realized through strong detail features and an efficient memory handling mode; and raising and lowering operations are applied to the terrain to form mountains, hills, valleys and trenches;
step 32: constructing digital buildings; three-dimensional models of buildings and trees are built in 3DMAX software and then imported into the Unreal Engine to generate the models; close-range buildings, placed in a main area of 2×2 km, require more accurate models and textures, with details of windows, walls, doors, floors, glass and pipes; long-range buildings are treated mainly as silhouettes: the models are order-reduced and only the building outline is kept;
step 33: constructing an unmanned vehicle model; each unmanned platform is composed of a platform body and a load; a PxVehicleWheelsSimData class is instantiated through PhysX to store the configuration parameters of the four wheels, including fields for suspension strength, damping rate, wheel mass and tire stiffness; a PxVehicleDriveSimData4W class is then instantiated to store the configuration parameters of the drive model, including fields for engine peak torque, clutch strength, transmission ratio and Ackermann steering correction; finally, a PxRigidDynamic actor is instantiated to configure the geometric shapes and dynamic properties of the wheels and chassis, including mass, moment of inertia and centre of mass;
step 34: constructing an unmanned aerial vehicle model; the unmanned aerial vehicle is a four-rotor aircraft capable of vertical takeoff and landing and of translation in multiple directions; by controlling the rotation of the propellers, the propeller angle is updated every frame to simulate flight actions such as climbing and landing; each unmanned aerial vehicle mainly comprises a platform body and a load, and can provide information feedback over an area;
step 35: constructing a laser radar model; based on the working characteristics of the laser radar, the physics of ray casting is simulated: the starting point and end point of each ray are specified within a certain range through a single LineTraceByChannel call, and if the ray is occluded, its angle and distance are recorded; the radar scanning condition is checked by calling the radar scanning interface;
step 36: constructing a camera model; image data in the 3D scene are obtained through the SceneCaptureComponent2D component of the Unreal Engine, rendered to a TextureRenderTarget2D through the shader rendering pipeline, and serialized to yield image data in RGB format;
step 37: constructing a positioning equipment model; it is built on the Unreal Engine Transform module, and the transform attributes of the unmanned platform comprise position and rotation;
step 38: the virtual sensor/actuator interface software establishes a connection with the control computer through gRPC; it can receive task instructions from the control computer and forward control instructions to the virtual scene software, which then controls the corresponding units; likewise, the virtual scene software transmits sensor data through the virtual sensor software to the control computer to complete fused mapping; the virtual scene computer also serves as an effect display: the host is connected to an external monitor to show the situation information of the unmanned vehicles and unmanned aerial vehicles in real time.
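The PhysX objects in step 33 are C++ API classes; purely to illustrate how the claimed parameters group, the sketch below mirrors them as plain records (the field names paraphrase the patent's wording and the numeric values are illustrative, not actual PhysX member names or defaults):

```python
from dataclasses import dataclass

@dataclass
class WheelsSimData:
    """Per-wheel parameters (cf. PxVehicleWheelsSimData in step 33)."""
    suspension_strength: float   # N/m
    damping_rate: float
    wheel_mass: float            # kg
    tire_stiffness: float

@dataclass
class DriveSimData4W:
    """Drive-model parameters (cf. PxVehicleDriveSimData4W)."""
    engine_peak_torque: float    # N*m
    clutch_strength: float
    transmission_ratio: float
    ackermann_correction: float  # 0..1

@dataclass
class RigidDynamicActor:
    """Chassis rigid-body properties (cf. the PxRigidDynamic actor)."""
    mass: float                  # kg
    moment_of_inertia: tuple     # (Ixx, Iyy, Izz)
    centre_of_mass: tuple        # (x, y, z) offset

# Illustrative configuration for one digital unmanned ground vehicle
wheels = WheelsSimData(35000.0, 0.25, 20.0, 100000.0)
drive = DriveSimData4W(500.0, 10.0, 4.0, 1.0)
chassis = RigidDynamicActor(1500.0, (3000.0, 3200.0, 800.0), (0.0, 0.0, -0.3))
```

In the engine itself these parameters are set through PhysX's own setter methods rather than constructed directly as above.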
8. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture according to claim 7, wherein in step 33, the configuration parameters of the four wheels include fields for suspension strength, damping rate, wheel mass and tire stiffness.
9. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture according to claim 7, wherein in step 33, the dynamic properties include mass, moment of inertia and centre of mass.
10. The method for constructing the heterogeneous ground-air unmanned cluster simulation system based on the distributed architecture as claimed in claim 7, wherein the process of integrating the semi-physical simulation system in step 4 comprises the following steps:
step 41: 1 Windows 10 server is selected to deploy the virtual simulation environment; 3 Ubuntu industrial personal computers are selected to deploy the ground unmanned platform controllers, simulating the autonomous navigation systems of the ground unmanned platforms; and 1 Ubuntu machine is selected to deploy the control terminal, simulating task instruction issuing and status data return;
step 42: the autonomous navigation system of each unmanned platform can acquire laser radar point cloud data, positioning data and camera image data from the virtual simulation environment through gRPC;
step 43: the acquired sensor data are converted, through the gRPC/distributed architecture parsing and inverse parsing middleware, into a data format that the ground unmanned platform can read, and the sensor data are uploaded to the distributed architecture to realize data sharing among platforms;
step 44: with the distributed architecture as support, a formation cooperation instruction is issued through the command control terminal, including the preset path of the guide vehicle, the guide vehicle number, the following distance and the following speed; 1 ground unmanned platform is selected as the guide vehicle and the other 2 platforms as following vehicles, specifically as follows:
step 441: the guide vehicle is remotely controlled to travel a distance that includes both straight-line and turning segments, and its position is recorded every 3 meters as the task path points for subsequent formation driving;
step 442: newly added or exited nodes are uniformly managed through the distributed architecture CMRS, and the topology of the unmanned platform connection relations in the current formation is constructed;
step 443: the distributed architecture resource tree of the guide vehicle is loaded, and its own state information is uploaded and updated in real time;
step 444: the distributed architecture resource tree of the following vehicle is loaded, and its own state information is uploaded and updated in real time; at the same time, the following vehicle looks up the positioning data of the guide vehicle, according to the guide vehicle's IP address and port number, to serve as its own task navigation points;
step 445: the control terminal issues the recorded task path points to the guide vehicle; the guide vehicle starts path planning, and control instructions are issued using the single-vehicle-architecture Dnet/gRPC parsing and inverse parsing program;
step 446: the following vehicle takes the positions looked up through the CMRS as its own task path points, updating them every 1 m; because task points are sent continuously, memory overhead would keep growing, so each task point is discarded once the following vehicle has passed it;
step 447: the unmanned platforms start to move in formation in turn.
CN202311228857.4A 2023-09-22 2023-09-22 Heterogeneous ground-air unmanned cluster simulation system construction method based on distributed architecture Pending CN117252011A (en)


Publications (1)

Publication Number Publication Date
CN117252011A 2023-12-19

Family

ID=89128873


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117742540A * 2024-02-20 2024-03-22 成都流体动力创新中心 Virtual-real interaction system based on virtual engine and semi-physical simulation
CN117742540B * 2024-02-20 2024-05-10 成都流体动力创新中心 Virtual-real interaction system based on virtual engine and semi-physical simulation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination