CN113359709B - Unmanned motion planning method based on digital twins


Info

Publication number
CN113359709B
Authority
CN
China
Prior art keywords: vehicle, data, unmanned, planning, emergency
Prior art date
Legal status
Active
Application number
CN202110546715.7A
Other languages
Chinese (zh)
Other versions
CN113359709A (en)
Inventor
陈龙 (Chen Long)
胡学敏 (Hu Xuemin)
Current Assignee
Hubei University
Sun Yat-sen University
Original Assignee
Hubei University
Sun Yat-sen University
Priority date: 2021-05-19
Filing date: 2021-05-19
Publication date: 2022-07-05
Application filed by Hubei University and Sun Yat-sen University
Priority to CN202110546715.7A
Publication of CN113359709A
Application granted
Publication of CN113359709B
Status: Active

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: specially adapted to land vehicles
    • G05D1/0212: with means for defining a desired trajectory
    • G05D1/0214: in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231: using optical position detecting means
    • G05D1/0238: using obstacle or wall sensors
    • G05D1/024: in combination with a laser
    • G05D1/0246: using a video camera in combination with image processing means
    • G05D1/0251: extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision

Abstract

The invention belongs to the technical field of unmanned driving and specifically relates to an unmanned motion planning method based on digital twins, comprising the following steps: acquiring environment data around a vehicle and vehicle body posture data; constructing a digital twin driving scene from the environment data and the vehicle body posture data; generating multiple emergency simulated driving scenes and planning each of them in advance to obtain a corresponding planning result for each; matching the digital twin driving scene against the emergency simulated driving scenes in real time to obtain a matched scene; and transmitting the planning result corresponding to the matched emergency simulated driving scene to the vehicle. Because the emergency simulated driving scenes and their motion-planning results are generated in advance and only matched and synchronized to the vehicle at run time, the method improves the response speed of the unmanned driving system to emergencies and thereby its safety and reliability.

Description

Unmanned motion planning method based on digital twins
Technical Field
The invention belongs to the technical field of unmanned driving, and particularly relates to an unmanned motion planning method based on digital twins.
Background
An unmanned automobile relies on an unmanned driving system to achieve automatic driving: it is an intelligent vehicle that senses the road environment through an on-board sensing system, automatically plans a driving route, and controls the vehicle to reach a preset destination. In recent years, a large number of investors and internet companies have entered the intelligent-vehicle field, driving explosive development of unmanned driving technology, so that fully unmanned driving is achievable in the foreseeable future.
Because automobiles travel at high speed, unreasonable route planning by an unmanned driving system can cause serious harm to passengers in the vehicle, people outside it, or the environment; the safety and reliability of an unmanned driving system must therefore meet the relevant requirements before mass production and road use can be permitted. Existing unmanned driving systems have to train and test decision-making algorithms such as motion planning in corresponding physical scenes, which is costly, slow, and insufficiently complete, and hinders the rapid development of unmanned driving technology.
To address these problems, Chinese patent CN110716558A discloses an automatic driving system for non-public roads based on digital twin technology. It is, however, designed for non-public roads; on public roads the driving environment is more complex, with many emergency or sudden scenes and thus more potential dangers, and the reliability and safety requirements on the unmanned driving system are higher, so that system is not suitable for unmanned driving on public roads.
Disclosure of Invention
The present invention aims to overcome at least one of the above drawbacks of the prior art by providing a digital twin-based unmanned motion planning method that can cope with emergencies, increases the response speed of an unmanned vehicle to emergency driving scenes, improves the safety and reliability of unmanned driving, and is suitable for open public roads.
To solve the above technical problems, the invention adopts the following technical scheme:
the method for planning the unmanned motion based on the digital twin is characterized by comprising the following steps: acquiring environmental data and vehicle body posture data around a vehicle;
constructing a digital twin driving scene according to the environment data and the vehicle body posture data;
generating various emergency simulation driving scenes, and respectively planning the various emergency simulation driving scenes to obtain a plurality of respectively corresponding planning results;
matching the digital twin driving scene with various emergency simulation driving scenes in real time to obtain a matched emergency simulation driving scene;
and transmitting the planning result corresponding to the matched emergency event simulation driving scene to the vehicle.
In this scheme, multiple emergency simulated driving scenes are generated in advance, and motion planning is performed on each of them to obtain a corresponding planning result. During unmanned motion planning, the digital twin driving scene of the vehicle is matched against these emergency simulated driving scenes; a successful match means the vehicle is in an emergency driving scene and emergency planning is needed, and the pre-computed planning result is synchronized to the vehicle. This removes the step of planning only after an emergency scene has been recognized, greatly shortens the planning time in emergency driving scenes, and improves the response speed, safety, and reliability of the unmanned vehicle.
Preferably, the environment data includes RGB sequence image data in front of the vehicle, point cloud image data of the vehicle's surrounding environment, and high-precision map data around the vehicle; the vehicle body posture data includes vehicle speed, vehicle acceleration, vehicle coordinates, and vehicle heading angle.
Preferably, the absolute coordinates of the vehicle are obtained using RTK-SLAM (Real-Time Kinematic positioning combined with Simultaneous Localization and Mapping): network differential data broadcast by an RTK base station, CORS (Continuously Operating Reference Stations) differential data, and differential data for the vehicle position are received, and positioning in CORS mode yields a local high-precision map.
Preferably, constructing the digital twin driving scene specifically includes:
detecting the RGB sequence image data to obtain traffic signal information, and extracting a feasible path according to the traffic signal information;
detecting the point cloud image data to obtain the obstacle speed, obstacle heading angle, and obstacle position, and obtaining an obstacle position image from the obstacle position;
mapping the vehicle coordinates to a vehicle body position image;
fusing the feasible path, the obstacle position image, and the vehicle body position image to obtain a multi-semantic-channel image;
and concatenating the vehicle body posture data, the multi-semantic-channel image, the obstacle speed, and the obstacle heading angle to form the digital twin driving scene.
Preferably, extracting the feasible path specifically includes:
carrying out target detection on the RGB sequence images with a YOLO (You Only Look Once) v3 network to obtain traffic signs and signal lights;
suppressing overlapping detections with non-maximum suppression;
classifying the detected traffic signs and signal lights to obtain traffic signal information;
extracting a local high-precision map centered on the vehicle coordinates, extracting and splicing lane lines, and constructing a local road network map;
and extracting the vehicle's current feasible path from the road network map using the traffic signal information.
Preferably, the process of detecting the point cloud image data to obtain the obstacle speed, heading angle, and position is as follows:
mapping the point cloud image into a two-dimensional perspective image space;
and extending it on the basis of a single-stage regression structure in that space to generate three-dimensional bounding boxes for the three-dimensional obstacles.
Preferably, generating the multiple emergency simulated driving scenes specifically includes:
acquiring digital driving scenes of various emergencies to construct a training data set, encoding the digital driving scenes of different emergencies into feature vectors as conditions of the different emergencies, setting parameters and classifying the digital driving scenes of the various emergencies to obtain a sample library;
defining a plurality of Gaussian distributions, and randomly sampling latent vectors obeying these distributions to obtain the corresponding expression vectors;
generating different types of virtual emergency scenes according to the expression vectors and conditions;
and performing adversarial training on the different types of virtual emergency scenes against the sample library, finally obtaining the multiple emergency simulated driving scenes.
Preferably, the digital driving scenes of various emergencies include emergency braking, emergency lane change, pedestrian rushing into a motor vehicle lane, and pedestrian running a red light.
Preferably, planning the multiple emergency simulated driving scenes to obtain the corresponding planning results specifically includes:
extracting the spatio-temporal information of the digital twin driving scene and the vehicle body posture data, and training a motion planning model with the deep deterministic policy gradient algorithm to obtain a DDPG model;
and inputting the digital driving scenes of the various emergencies into the DDPG model to obtain the corresponding planning results.
Preferably, the planning result includes the steering angle, accelerator or electronic-throttle opening, and braking torque of the vehicle.
The system for realizing the digital twin-based unmanned motion planning method comprises an unmanned vehicle terminal, a processing terminal, and a communication module; the unmanned vehicle terminal is provided with an environment data acquisition module and a vehicle body posture data acquisition module, and the processing terminal is provided with an environment sensing unit, a scene construction unit, and a motion planning unit that are communicatively connected; the unmanned vehicle terminal communicates with the processing terminal through the communication module;
the environment data acquisition module is used for acquiring environment data around the unmanned vehicle terminal and transmitting it to the environment sensing unit;
the vehicle body posture data acquisition module is used for acquiring vehicle body posture data of the unmanned vehicle terminal and transmitting it to the scene construction unit;
the environment sensing unit is used for obtaining environment information around the unmanned vehicle terminal from the environment data and transmitting it to the scene construction unit;
the scene construction unit is used for fusing the received environment information and the vehicle body posture data to construct the digital twin driving scene;
the motion planning unit is used for generating multiple emergency simulated driving scenes, obtaining the corresponding planning results for each, and transmitting the planning result corresponding to the emergency simulated driving scene matched with the digital twin driving scene to the unmanned vehicle terminal.
Compared with the prior art, the beneficial effects are:
in this scheme, multiple emergency simulated driving scenes are generated in advance, and motion planning is performed on each of them to obtain a corresponding planning result. During unmanned motion planning, the digital twin driving scene of the vehicle is matched against these emergency simulated driving scenes; a successful match means the vehicle is in an emergency driving scene and emergency planning is needed, and the pre-computed planning result is synchronized to the vehicle. This removes the step of planning only after an emergency scene has been recognized, greatly shortens the planning time in emergency driving scenes, and improves the response speed, safety, and reliability of the unmanned vehicle. Moreover, because motion planning for emergencies is added, the unmanned driving system can cope with the many uncertain factors of open roads.
Drawings
FIG. 1 is a schematic block flow diagram of a digital twin-based unmanned motion planning method of the present invention;
FIG. 2 is a block diagram of a digital twin driving scene construction process of the digital twin-based unmanned motion planning method of the present invention;
FIG. 3 is a block diagram of an emergency simulated driving scenario generation process of the digital twin-based unmanned motion planning method of the present invention;
FIG. 4 is a block diagram of a system for implementing the digital twin based unmanned motion planning method of the present invention.
Detailed Description
The drawings are for illustration purposes only and are not to be construed as limiting the invention; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted. The positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the invention.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, terms such as "upper", "lower", "left", "right", "long", and "short" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description, do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and are therefore not to be construed as limiting this patent. The specific meanings of such terms can be understood by those skilled in the art according to the specific situation.
The technical scheme of the invention is further described in detail by the following specific embodiments in combination with the attached drawings:
example (b):
an embodiment of a method for digital twin based unmanned motion planning, as shown in fig. 1-3, comprises:
acquiring environment data around the vehicle and vehicle body posture data;
constructing a digital twin driving scene according to the environment data and the vehicle body posture data;
generating multiple emergency simulated driving scenes, and planning each of them to obtain a corresponding planning result;
matching the digital twin driving scene against the multiple emergency simulated driving scenes in real time to obtain a matched emergency simulated driving scene;
and transmitting the planning result corresponding to the matched emergency simulated driving scene to the vehicle.
The environment data in this embodiment includes RGB sequence image data in front of the vehicle, point cloud image data of the vehicle's surrounding environment, and high-precision map data around the vehicle; the vehicle body posture data includes vehicle speed, vehicle acceleration, vehicle coordinates, and vehicle heading angle.
The RGB sequence image data can be collected by a vehicle-mounted color camera; preferably the camera resolution is 2 megapixels, the video sampling frequency is 60 Hz, and the focal length is 50 mm.
In addition, the point cloud image data can be collected by a lidar; preferably a 16-beam mid-range lidar is used, with a wavelength of 900 nm, a vertical resolution of 2 degrees, a horizontal resolution of 0.2 degrees, and a scanning frequency of 10 Hz. These preferred sensor parameters are gathered into the configuration sketch below.
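For reference only, the preferred sensor parameters above can be collected into a small configuration object. The following Python sketch is purely illustrative; the class and field names are ours, not the patent's:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraConfig:
    resolution_mpx: float = 2.0   # 2 megapixels ("200 ten thousand" pixels)
    sample_rate_hz: int = 60      # video sampling frequency
    focal_length_mm: int = 50

@dataclass(frozen=True)
class LidarConfig:
    beams: int = 16               # 16-beam mid-range lidar
    wavelength_nm: int = 900
    vertical_res_deg: float = 2.0
    horizontal_res_deg: float = 0.2
    scan_rate_hz: int = 10
```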
In this embodiment, RTK-SLAM (Real-Time Kinematic positioning combined with Simultaneous Localization and Mapping) is adopted to obtain the absolute coordinates of the vehicle: network differential data broadcast by an RTK base station, CORS (Continuously Operating Reference Stations) differential data, and differential data for the vehicle position are received, and positioning in CORS mode yields a local high-precision map.
As shown in fig. 2, the digital twin driving scene in this embodiment is specifically constructed as follows:
detecting RGB sequence image data to obtain traffic signal information, and extracting a feasible path according to the traffic signal information;
detecting the point cloud image data to obtain the obstacle speed, obstacle heading angle, and obstacle position, where the obstacle speed and heading angle are represented as one-dimensional vectors; obstacles include motor vehicles, non-motor vehicles, pedestrians, road facilities, and the like;
applying an inverse perspective transformation and binarization to the obstacle positions, mapping them into a binary image with a black background in which each obstacle is represented as a white rectangle, to obtain the obstacle position image;
correcting the vehicle coordinates with GPS (Global Positioning System) and RTK, and mapping the vehicle's coordinate position into a black-background image in which the vehicle is represented by a solid white circle, to obtain the vehicle body position image;
taking the feasible path, the obstacle position image, and the vehicle body position image as independent channels and fusing them to obtain the multi-semantic-channel image, which is synchronized with the vehicle's real-time physical driving scene;
and concatenating the vehicle body posture data, the multi-semantic-channel image, the obstacle speed, and the obstacle heading angle to form the digital twin driving scene, which is synchronized with the vehicle's physical driving scene; a minimal sketch of this fusion step follows.
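A minimal sketch of the channel-fusion and concatenation step, under the assumption that each semantic layer has already been rasterized to the same bird's-eye grid (names and sizes are illustrative, not from the patent):

```python
import numpy as np

def build_twin_state(path_img, obstacle_img, ego_img,
                     pose_vec, obs_speed, obs_heading):
    """Fuse the per-semantic binary images into a multi-channel image and
    concatenate the one-dimensional vectors, as described above.

    path_img, obstacle_img, ego_img: (H, W) binary arrays (same grid).
    pose_vec: e.g. [speed, acceleration, x, y, heading] of the ego vehicle.
    obs_speed, obs_heading: per-obstacle one-dimensional vectors.
    """
    # Stack the three semantic layers as independent channels: (3, H, W).
    channels = np.stack([path_img, obstacle_img, ego_img], axis=0)
    # Concatenate the pose and obstacle vectors into one state vector.
    state_vec = np.concatenate([pose_vec, obs_speed, obs_heading])
    return channels.astype(np.float32), state_vec.astype(np.float32)
```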
Extracting the feasible path in this embodiment specifically includes:
carrying out target detection on the RGB sequence images with a YOLO (You Only Look Once) v3 network to obtain traffic signs and signal lights;
suppressing overlapping detections with non-maximum suppression (see the sketch after this list);
classifying the detected traffic signs and signal lights to obtain traffic signal information;
extracting a local high-precision map by taking the vehicle coordinates as a center, extracting and splicing lane lines, and constructing a local road network map; the local road network image is a binary image, the background is black, and the lane lines are white;
and extracting the vehicle's current feasible path from the local road network map using the traffic signal information.
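The non-maximum suppression step above can be realized with the standard IoU-based routine; the following is a generic sketch, not the patent's specific code:

```python
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box in each cluster of overlapping
    detections. boxes: (N, 4) arrays of [x1, y1, x2, y2]; scores: (N,)."""
    order = np.argsort(scores)[::-1]          # indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter + 1e-9)
        order = order[1:][iou <= iou_thresh]  # drop overlapping boxes
    return keep
```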
The process of detecting the point cloud image data to obtain the obstacle speed, heading angle, and position in this embodiment is as follows:
mapping the point cloud image into a two-dimensional perspective image space;
and extending it on the basis of a single-stage regression structure in that space to generate three-dimensional bounding boxes for the three-dimensional obstacles. One possible realization of the perspective-space mapping is sketched below.
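A common way to map a point cloud into a two-dimensional perspective image space is a spherical projection to a range image; the sketch below assumes this interpretation (the 30-degree vertical field of view matches a 16-beam lidar with 2-degree vertical resolution, but the exact values are our assumption, not the patent's):

```python
import numpy as np

def project_to_range_image(points, h=16, w=1800,
                           fov_up_deg=15.0, fov_down_deg=-15.0):
    """Map 3-D lidar points (N, 3) into a 2-D perspective (range) image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                              # horizontal angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-9))      # vertical angle
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    # Column from the horizontal angle, row from the vertical angle.
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = depth                                   # range per pixel
    return img
```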
As shown in fig. 3, generating the multiple emergency simulated driving scenes in this embodiment specifically includes:
acquiring digital driving scenes of various emergencies to construct a training data set, encoding the digital driving scenes of different emergencies into feature vectors as conditions of the different emergencies, setting parameters and classifying the digital driving scenes of the various emergencies to obtain a sample library;
defining a plurality of Gaussian distributions, and randomly sampling latent vectors obeying these distributions to obtain the corresponding expression vectors;
generating different types of virtual emergency scenes according to the expression vectors and conditions;
and performing adversarial training on the different types of virtual emergency scenes against the sample library, finally obtaining the multiple emergency simulated driving scenes.
Specifically, in this embodiment the simulator is built on an improved conditional generative adversarial network and comprises an encoder, a generator, a classifier, and a discriminator; it is used to generate and train the emergency simulated driving scenes. The encoder encodes driving scenes and reduces their dimensionality; the generator produces the various emergency simulated driving scenes; the discriminator judges whether a digital driving scene belongs to the training data set or was produced by the generator. Adversarial learning proceeds over the sample library among the discriminator, generator, and classifier: the generator receives a latent vector, learns its parameters, and produces driving scenes as close as possible to those in the sample library; the discriminator compares generated scenes with scenes from the sample library, judges whether each scene comes from the sample library or from the generator, and continuously improves its discrimination ability so as to recognize generated scenes; meanwhile, the classifier classifies the different types of driving scenes, continuously improving its classification ability so as to classify the generated scenes correctly. Finally the generator, discriminator, and classifier reach an equilibrium state in which emergency simulated driving scenes maximally close to real scenes can be generated. A schematic training step is sketched below.
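The following PyTorch sketch shows one schematic training step for the generator/discriminator/classifier interplay described above. The network bodies, sizes, and loss combination are illustrative assumptions and not the patent's architecture; the encoder that produces the condition vectors is omitted:

```python
import torch
import torch.nn as nn

z_dim, cond_dim, scene_dim, n_classes = 64, 8, 1024, 4

G = nn.Sequential(nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
                  nn.Linear(256, scene_dim), nn.Tanh())            # generator
D = nn.Sequential(nn.Linear(scene_dim + cond_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                               # discriminator
C = nn.Sequential(nn.Linear(scene_dim, 256), nn.ReLU(),
                  nn.Linear(256, n_classes))                       # classifier

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_dc = torch.optim.Adam(list(D.parameters()) + list(C.parameters()), lr=2e-4)

def train_step(real_scene, cond, label):
    batch = real_scene.size(0)
    z = torch.randn(batch, z_dim)            # latent vector from a Gaussian
    fake = G(torch.cat([z, cond], dim=1))
    # Discriminator and classifier: tell real from generated, classify type.
    d_loss = (bce(D(torch.cat([real_scene, cond], 1)), torch.ones(batch, 1)) +
              bce(D(torch.cat([fake.detach(), cond], 1)), torch.zeros(batch, 1)) +
              ce(C(real_scene), label))
    opt_dc.zero_grad(); d_loss.backward(); opt_dc.step()
    # Generator: fool the discriminator and match the conditioned class.
    g_loss = (bce(D(torch.cat([fake, cond], 1)), torch.ones(batch, 1)) +
              ce(C(fake), label))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```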
The digital driving scenes of the various emergencies in this embodiment include emergency braking, emergency lane changes, pedestrians rushing into a motor-vehicle lane, pedestrians running a red light, and so on. This is of course not a limitation of the scheme: in a specific implementation, digital driving scenes of other types of emergencies may be generated as needed.
In this embodiment, planning the multiple emergency simulated driving scenes to obtain the corresponding planning results specifically includes:
extracting the spatio-temporal information of the digital twin driving scene and the vehicle body posture data, and training a motion planning model with the deep deterministic policy gradient algorithm to obtain a DDPG model;
and inputting the digital driving scenes of the various emergencies into the DDPG model to obtain the corresponding planning results.
The planning result in this embodiment includes the vehicle's steering angle, accelerator or electronic-throttle opening, and braking torque. After the planning result is synchronized to the vehicle, the vehicle can be controlled to take the corresponding action for emergency avoidance. A sketch of a DDPG actor head producing such a result is given below.
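A sketch of a DDPG actor head whose output matches this planning result; the dimensions and output scaling are illustrative assumptions, and full DDPG training additionally pairs the actor with a critic network and target copies of both, which are omitted here:

```python
import torch
import torch.nn as nn

class PlannerActor(nn.Module):
    """Maps the scene features and body-posture vector to
    [steering angle, throttle opening, brake torque]."""
    def __init__(self, state_dim=512, max_brake_nm=2000.0):
        super().__init__()
        self.max_brake_nm = max_brake_nm   # assumed brake-torque ceiling
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Tanh())  # raw outputs in [-1, 1]

    def forward(self, state):
        a = self.net(state)
        steer = a[:, 0]                                    # [-1, 1], full lock
        throttle = (a[:, 1] + 1.0) / 2.0                   # opening in [0, 1]
        brake = (a[:, 2] + 1.0) / 2.0 * self.max_brake_nm  # torque in N*m
        return torch.stack([steer, throttle, brake], dim=1)
```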
As shown in fig. 4, this embodiment further provides a system for implementing the above digital twin-based unmanned motion planning method, comprising an unmanned vehicle terminal, a processing terminal, and a communication module; the unmanned vehicle terminal is provided with an environment data acquisition module and a vehicle body posture data acquisition module, and the processing terminal is provided with an environment sensing unit, a scene construction unit, and a motion planning unit that are communicatively connected; the unmanned vehicle terminal communicates with the processing terminal through the communication module;
the environment data acquisition module is used for acquiring environment data around the unmanned vehicle terminal and transmitting it to the environment sensing unit;
the vehicle body posture data acquisition module is used for acquiring vehicle body posture data of the unmanned vehicle terminal and transmitting it to the scene construction unit;
the environment sensing unit is used for obtaining environment information around the unmanned vehicle terminal from the environment data and transmitting it to the scene construction unit;
the scene construction unit is used for fusing the received environment information and the vehicle body posture data to construct the digital twin driving scene;
the motion planning unit is used for generating the multiple emergency simulated driving scenes, obtaining the corresponding planning results for each, and transmitting the planning result corresponding to the emergency simulated driving scene matched with the digital twin driving scene to the unmanned vehicle terminal.
The environment data acquisition module in this embodiment comprises a lidar and a camera. The lidar is used to acquire point cloud image data around the unmanned vehicle terminal; preferably it is a 16-beam mid-range lidar with a wavelength of 900 nm, a vertical resolution of 2 degrees, a horizontal resolution of 0.2 degrees, and a scanning frequency of 10 Hz. The camera is used to acquire RGB sequence image data around the unmanned vehicle terminal; preferably its resolution is 2 megapixels, its video sampling frequency is 60 Hz, and its focal length is 50 mm.
The vehicle body posture data acquisition module in this embodiment comprises a GPS unit for acquiring the coordinates and high-precision map of the unmanned vehicle terminal, an IMU unit for acquiring its acceleration and heading angle, and a speed sensor for acquiring its speed.
The processing terminal in this embodiment is a cloud service platform, and the communication module is a wireless high-speed communication module using 5G technology; an ECU (electronic control unit) is arranged on the unmanned vehicle terminal, and the cloud service platform communicates wirelessly with the ECU through the communication module, performs the corresponding processing and computation on the received data, obtains the planning result, and transmits it to the ECU. It should be noted that the cloud service platform and the 5G module are given only as references and do not limit the scheme: the wireless module may also use a newer generation of wireless communication technology such as 6G, or the communication module may be wired, in which case the processing terminal can be a chip integrated on the ECU of the unmanned vehicle terminal; the functions of the scheme can be realized either way. A hypothetical wire format for the planning-result message is sketched below.
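For illustration only, the planning result pushed from the cloud platform to the ECU might be serialized as a small message such as the following; the patent does not specify a wire format, so every field name here is hypothetical:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class PlanningResult:
    """Hypothetical message carrying one pre-computed planning result."""
    scenario_id: str          # identifier of the matched emergency scene
    steering_angle_deg: float
    throttle_opening: float   # 0..1
    brake_torque_nm: float
    timestamp_ms: int

def encode(result: PlanningResult) -> bytes:
    """Serialize to JSON bytes for transmission over the 5G link."""
    return json.dumps(asdict(result)).encode("utf-8")

msg = encode(PlanningResult("emergency_braking", -2.5, 0.0, 1500.0,
                            int(time.time() * 1000)))
```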
The motion planning unit in this embodiment comprises a simulator and a planner: the simulator generates the multiple emergency simulated driving scenes, and the planner generates the corresponding planning results from them.
In this embodiment, multiple emergency simulated driving scenes are generated in advance, and motion planning is performed on each of them to obtain a corresponding planning result. During unmanned motion planning, the digital twin driving scene of the vehicle is matched against these emergency simulated driving scenes; a successful match means the vehicle is in an emergency driving scene and emergency planning is needed, and the pre-computed planning result is synchronized to the vehicle. This removes the step of planning only after an emergency scene has been recognized, greatly shortens the planning time in emergency driving scenes, and improves the response speed, safety, and reliability of the unmanned vehicle. Moreover, because motion planning for emergencies is added, the unmanned driving system can cope with the many uncertain driving-environment factors of open roads. A minimal sketch of the real-time matching step follows.
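A minimal sketch of the real-time matching step, assuming each scene is encoded as a feature vector and matched by cosine similarity against a threshold; the encoding and the threshold are our assumptions, as the patent does not fix the matching metric:

```python
import numpy as np

def match_scene(twin_vec, library_vecs, library_plans, sim_thresh=0.9):
    """Match the live digital-twin feature vector against the pre-generated
    emergency-scene library; return the pre-computed plan on a hit, else
    None (in which case ordinary planning continues).

    twin_vec: (d,) encoding of the current digital twin driving scene.
    library_vecs: (N, d) encodings of the emergency simulated scenes.
    library_plans: list of N pre-computed planning results.
    """
    lib = library_vecs / np.linalg.norm(library_vecs, axis=1, keepdims=True)
    q = twin_vec / np.linalg.norm(twin_vec)
    sims = lib @ q                        # cosine similarity to every scene
    best = int(np.argmax(sims))
    if sims[best] >= sim_thresh:
        return library_plans[best]        # synchronize this plan to the vehicle
    return None
```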
The present invention has been described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It should be understood that each flow and/or block, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
It should be understood that the above embodiments are merely examples given to clearly illustrate the present invention and do not limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (8)

1. A digital twin-based unmanned motion planning method, characterized by comprising the following steps:
acquiring environment data around a vehicle and vehicle body posture data;
constructing a digital twin driving scene according to the environment data and the vehicle body posture data;
generating multiple emergency simulated driving scenes, and planning each of them to obtain a plurality of corresponding planning results;
matching the digital twin driving scene against the multiple emergency simulated driving scenes in real time to obtain a matched emergency simulated driving scene;
and transmitting the planning result corresponding to the matched emergency simulated driving scene to the vehicle;
wherein the environment data comprises RGB sequence image data in front of the vehicle, point cloud image data of the vehicle's surrounding environment, and high-precision map data around the vehicle, and the vehicle body posture data comprises vehicle speed, vehicle acceleration, vehicle coordinates, and vehicle heading angle;
and constructing the digital twin driving scene specifically comprises:
detecting the RGB sequence image data to obtain traffic signal information, and extracting a feasible path according to the traffic signal information;
detecting the point cloud image data to obtain an obstacle speed, an obstacle heading angle, and an obstacle position, and obtaining an obstacle position image according to the obstacle position;
mapping the vehicle coordinates to a vehicle body position image;
fusing the feasible path, the obstacle position image, and the vehicle body position image to obtain a multi-semantic-channel image;
and fusing the vehicle body posture data, the multi-semantic-channel image, the obstacle speed, and the obstacle heading angle to form the digital twin driving scene.
2. The digital twin-based unmanned motion planning method according to claim 1, wherein obtaining the high-precision map specifically comprises:
acquiring the absolute coordinates of the vehicle using RTK-SLAM; receiving network differential data broadcast by an RTK reference station, receiving CORS differential data, and receiving differential data for the vehicle position; and positioning in CORS mode to obtain a local high-precision map.
3. The digital twin-based unmanned motion planning method according to claim 2, wherein the extracting of the feasible path specifically comprises:
carrying out target detection on the RGB sequence images to obtain traffic signs and signal lights; suppressing overlapping detections using non-maximum suppression;
classifying the detected traffic signs and signal lights to obtain traffic signal information;
extracting a local high-precision map by taking the vehicle coordinates as a center, extracting and splicing lane lines, and constructing a local road network map;
and extracting the current feasible path of the vehicle in the local road network graph by using the traffic signal information.
4. The digital twin-based unmanned motion planning method according to claim 1, wherein detecting the point cloud image data to obtain the obstacle speed, heading angle, and position comprises:
mapping the point cloud image in the point cloud image data into a two-dimensional perspective image space;
and extending it on the basis of a single-stage regression structure in that space to generate three-dimensional bounding boxes for the three-dimensional obstacles.
5. The digital twin-based unmanned motion planning method according to claim 1, wherein generating the multiple emergency simulated driving scenes specifically comprises:
acquiring digital driving scenes of various emergencies to construct a training data set, encoding the digital driving scenes of the different emergencies into feature vectors as the conditions for the different emergencies, and setting parameters and classifying the digital driving scenes of the various emergencies to obtain a sample library;
defining a plurality of Gaussian distributions, and randomly sampling latent vectors obeying these distributions to obtain the corresponding expression vectors;
generating different types of virtual emergency scenes according to the expression vectors and the conditions;
and performing adversarial training on the different types of virtual emergency scenes against the sample library, finally obtaining the multiple emergency simulated driving scenes.
6. The digital twin-based unmanned motion planning method according to claim 5, wherein planning the multiple emergency simulated driving scenes to obtain the corresponding planning results specifically comprises:
extracting the spatio-temporal information of the digital twin driving scene and the vehicle body posture data, and training a motion planning model with the deep deterministic policy gradient algorithm to obtain a DDPG model;
and inputting the multiple emergency simulated driving scenes into the DDPG model to obtain the corresponding planning results.
7. The digital twin-based unmanned motion planning method according to any one of claims 1 to 6, wherein the planning result includes a steering angle, an accelerator or electronic-throttle opening, and a braking torque of the vehicle.
8. A system for implementing the digital twin-based unmanned motion planning method of any one of claims 1 to 7, comprising an unmanned vehicle terminal, a processing terminal, and a communication module; the unmanned vehicle terminal is provided with an environment data acquisition module and a vehicle body posture data acquisition module, and the processing terminal is provided with an environment sensing unit, a scene construction unit, and a motion planning unit that are communicatively connected; the unmanned vehicle terminal communicates with the processing terminal through the communication module;
the environment data acquisition module is used for acquiring environment data around the unmanned vehicle terminal and transmitting the environment data to the environment sensing unit;
the vehicle body posture data acquisition module is used for acquiring vehicle body posture data of the unmanned vehicle terminal and transmitting the vehicle body posture data to the scene construction unit;
the environment sensing unit is used for obtaining environment information around the unmanned vehicle terminal from the environment data and transmitting the environment information to the scene construction unit;
the scene construction unit is used for fusing the received environment information and the vehicle body posture data to construct the digital twin driving scene;
and the motion planning unit is used for generating multiple emergency simulated driving scenes, obtaining the corresponding planning results for each, and transmitting the planning result corresponding to the emergency simulated driving scene matched with the digital twin driving scene to the unmanned vehicle terminal.
CN202110546715.7A (priority and filing date 2021-05-19) Unmanned motion planning method based on digital twins; granted as CN113359709B; status: Active

Priority Applications (1)

Application CN202110546715.7A; priority date 2021-05-19; filing date 2021-05-19; title: Unmanned motion planning method based on digital twins

Applications Claiming Priority (1)

Application CN202110546715.7A; priority date 2021-05-19; filing date 2021-05-19; title: Unmanned motion planning method based on digital twins

Publications (2)

CN113359709A: published 2021-09-07
CN113359709B: published 2022-07-05

Family

Family ID: 77526935

Family Applications (1)

CN202110546715.7A (priority and filing date 2021-05-19): Unmanned motion planning method based on digital twins; status: Active

Country Status (1)

Country: CN (CN113359709B)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113867354B * 2021-10-11 2023-05-02 University of Electronic Science and Technology of China Regional traffic flow guiding method for intelligent cooperation of multiple automatic driving vehicles
CN114396944B * 2022-01-18 2024-03-22 Xi'an Tali Technology Co., Ltd. Autonomous positioning error correction method based on digital twinning
CN114715197B * 2022-06-10 2022-08-30 Shenzhen Aiyun Information Technology Co., Ltd. Automatic driving safety method and system based on digital twin DaaS platform
CN117974942A * 2022-10-25 2024-05-03 Tencent Technology (Shenzhen) Co., Ltd. Vehicle driving state display method and device, electronic equipment and storage medium
CN115840404B * 2022-12-21 2023-11-03 Zhejiang University Cloud control automatic driving system based on automatic driving special road network and digital twin map


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107272683A * 2017-06-19 2017-10-20 Institute of Automation, Chinese Academy of Sciences Parallel intelligent vehicle control based on ACP methods
CN110716558A * 2019-11-21 2020-01-21 Shanghai Cheyou Intelligent Technology Co., Ltd. Automatic driving system for non-public road based on digital twin technology

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
3D LiDAR-based driving-boundary detection for unmanned mining vehicles; Chen Long et al.; Journal of China Coal Society (煤炭学报); 2020-06-30; Vol. 45, No. 6; pp. 2140-2146 *
Parallel driving system and applications based on digital quadruplets; Liu Teng et al.; Chinese Journal of Intelligent Science and Technology (智能科学与技术学报); 2019-03-31; Vol. 1, No. 1; pp. 40-51 *
Parallel unmanned systems; Chen Long et al.; Unmanned Systems Technology (无人系统技术); 2018-05-31; No. 1; pp. 23-37 *
Intelligent command and control for intelligent vehicles: basic methods and system architecture; Liu Teng et al.; Journal of Command and Control (指挥与控制学报); 2018-03-31; Vol. 4, No. 1; pp. 22-31 *
End-to-end parallel unmanned mining systems and their key technologies; Yang Chao et al.; Chinese Journal of Intelligent Science and Technology (智能科学与技术学报); 2019-09-30; Vol. 1, No. 3; pp. 228-240 *

Also Published As

CN113359709A, published 2021-09-07

Similar Documents

Publication Publication Date Title
CN113359709B (en) Unmanned motion planning method based on digital twins
US10565458B2 (en) Simulation system, simulation program and simulation method
CN114282597B (en) Method and system for detecting vehicle travelable area and automatic driving vehicle adopting system
CN108417087B (en) Vehicle safe passing system and method
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN110446278B (en) Intelligent driving automobile sensor blind area safety control method and system based on V2I
WO2022206942A1 (en) Laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN111164967B (en) Image processing apparatus and image processing method
CN113313154A (en) Integrated multi-sensor integrated automatic driving intelligent sensing device
US11150660B1 (en) Scenario editor and simulator
CN110007675B (en) Vehicle automatic driving decision-making system based on driving situation map and training set preparation method based on unmanned aerial vehicle
CN110083163A (en) A kind of 5G C-V2X bus or train route cloud cooperation perceptive method and system for autonomous driving vehicle
GB2621048A (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
US20220137636A1 (en) Systems and Methods for Simultaneous Localization and Mapping Using Asynchronous Multi-View Cameras
Armingol et al. IVVI: Intelligent vehicle based on visual information
CN110356412A (en) The method and apparatus that automatically rule for autonomous driving learns
WO2019188391A1 (en) Control device, control method, and program
US20240005642A1 (en) Data Augmentation for Vehicle Control
US20240005641A1 (en) Data Augmentation for Detour Path Configuring
US20230222671A1 (en) System for predicting near future location of object
CN115662166A (en) Automatic driving data processing method and automatic driving traffic system
WO2022098511A2 (en) Architecture for map change detection in autonomous vehicles
He et al. Towards C-V2X Enabled Collaborative Autonomous Driving
KR20220073472A (en) Cross section integrated information providing system and method based on V2X
US20230252903A1 (en) Autonomous driving system with air support

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant