CN117074049B - Intelligent driving vehicle road cloud testing system based on virtual-real combination - Google Patents

Intelligent driving vehicle road cloud testing system based on virtual-real combination

Info

Publication number: CN117074049B (application CN202311351851.6A)
Authority
CN
China
Prior art keywords
vehicle
information
virtual
test
cloud
Prior art date
Legal status
Active
Application number
CN202311351851.6A
Other languages
Chinese (zh)
Other versions
CN117074049A (en
Inventor
刘爽爽
赵博文
马宇宸
扈鹏
牛宏宇
陈宏硕
Current Assignee
China Automobile Media Tianjin Co ltd
Original Assignee
China Automobile Media Tianjin Co ltd
Priority date
Filing date
Publication date
Application filed by China Automobile Media Tianjin Co ltd
Priority to CN202311351851.6A
Publication of CN117074049A
Application granted
Publication of CN117074049B

Classifications

    • G01M 17/007: Testing of vehicles; wheeled or endless-tracked vehicles
    • G06F 9/5072: Arrangements for program control; allocation of resources; grid computing
    • G06V 10/82: Image or video recognition using pattern recognition or machine learning; neural networks
    • G06V 10/95: Image or video understanding architectures structured as a network, e.g. client-server
    • G06V 20/54: Scenes; surveillance or monitoring of traffic, e.g. cars on the road
    • H04L 67/12: Protocols adapted for special-purpose networking environments, e.g. networks in vehicles
    • G06V 2201/08: Indexing scheme; detecting or categorising vehicles
    • Y02T 10/40: Climate change mitigation in road transport; engine management systems


Abstract

The application relates to a virtual-real combined intelligent driving vehicle road cloud testing system, in the technical field of intelligent vehicle driving tests. A center cloud issues test tasks to an edge cloud. The edge cloud parses each test task, fuses the target object information from the actual road uploaded by the edge computing units according to the parsing result, generates virtual vehicle information based on the target object information and the tested vehicle information, and transmits the virtual vehicle information and early warning information to the edge computing units. An edge computing unit forwards the virtual vehicle information to the tested vehicle through the roadside equipment and sends the early warning information to the roadside equipment. The tested vehicle makes and executes decisions according to the virtual vehicle information, the threat vehicle information, and its own perception information. The system reduces testing cost, improves the realism and diversity of test scenarios, and reduces the danger of limit scenarios.

Description

Intelligent driving vehicle road cloud testing system based on virtual-real combination
Technical Field
The application relates to the technical field of intelligent driving tests of vehicles, in particular to an intelligent driving road cloud testing system based on virtual-real combination.
Background
The intelligent connected vehicle is equipped with advanced devices such as on-board sensors, controllers and actuators, integrates modern communication and network technologies, realizes intelligent information exchange and sharing among vehicle, people, road and cloud, and features safety, comfort, energy saving and high efficiency. With the development of intelligent connected-vehicle technology, the need to test vehicle functions becomes increasingly urgent.
The prior art mainly tests unmanned vehicles through simulation, closed test fields, and open test roads. Simulation testing struggles to make the test environment close to the real one. Real-vehicle testing on closed fields and open roads faces the challenges of rapidly growing test mileage and test time, depends on the infrastructure and other traffic participants of the real road environment, and is poorly operable or even dangerous when testing complex application scenarios or limit scenarios.
The present application is specifically directed to the above-described drawbacks.
Disclosure of Invention
To solve these technical problems, the application provides a virtual-real combined intelligent driving road cloud testing system that reduces testing cost, improves the realism and diversity of test scenarios, and reduces the danger of limit scenarios.
The embodiment of the application provides a virtual-real combined intelligent driving vehicle road cloud testing system, comprising:
the center cloud, used for issuing test tasks to the edge cloud;
the edge cloud, used for parsing the test task, fusing the target object information on the actual road uploaded by the edge computing units according to the parsing result, generating virtual vehicle information based on the target object information and the tested vehicle information, generating early warning information based on the target object information and the test task, and transmitting the virtual vehicle information and the early warning information to the edge computing units;
the edge computing unit, used for forwarding the virtual vehicle information to the tested vehicle through the roadside equipment, and for sending the early warning information to the roadside equipment;
the roadside equipment, used for determining, according to the early warning information, a threat vehicle that poses a traffic threat to the tested vehicle, and for sending the threat vehicle information to the tested vehicle;
the tested vehicle, used for making and executing decisions according to the virtual vehicle information, the threat vehicle information and its own perception information, and for transmitting the execution information to the edge cloud through the roadside equipment and the edge computing unit in sequence; the edge cloud generates a test result from the execution information and feeds it back to the center cloud.
Optionally, when generating virtual vehicle information based on the target object information and the tested vehicle information, the edge cloud is specifically configured to:
select, from the target object information, a vehicle that interferes with the driving of the tested vehicle as a generalized vehicle;
predict the positions of the generalized vehicle at a plurality of consecutive future moments starting from the current moment, and select unoccupied spatial points around each position as virtual vehicle path points;
and, taking the state of the generalized vehicle at the current moment as the initial state, generate the running information of a virtual vehicle that passes through the plurality of path points.
Optionally, predicting the positions of the generalized vehicle at a plurality of consecutive future moments starting from the current moment and selecting unoccupied spatial points around each position as virtual vehicle path points includes:
generating bird's-eye views of the road section at the current moment and at a plurality of consecutive future moments, and marking the positions of the generalized vehicle and other target objects on the grid of each bird's-eye view;
selecting, in each bird's-eye view, grid cells around the generalized vehicle that are not occupied by any target object as path points of the virtual vehicle;
the number of virtual vehicles being at least one.
Optionally, taking the state of the generalized vehicle at the current moment as the initial state and generating the running information of the virtual vehicle through its plurality of path points includes:
establishing a virtual vehicle control model, a kinematic model and a dynamics model;
setting the parameters of the control, kinematic and dynamics models so that the virtual vehicle passes through the plurality of path points, thereby generating the running information of the virtual vehicle.
Optionally, after generating the running information of the virtual vehicle, the edge cloud is further configured to:
add the running information of the virtual vehicle into the macroscopic traffic flow and verify the steady state of the traffic flow;
and, if the virtual vehicle passes the verification, transmit the virtual vehicle information to the edge computing unit.
Optionally, selecting, in each bird's-eye view, grid cells around the generalized vehicle that are not occupied by any target object as path points of the virtual vehicle includes:
inputting each annotated bird's-eye view into a pre-trained predictive neural network;
and outputting, through the predictive neural network, the position of the virtual vehicle in each bird's-eye view.
Optionally, before inputting each annotated bird's-eye view into the pre-trained predictive neural network, the method further comprises:
obtaining samples for training the predictive neural network;
training the predictive neural network with the samples;
wherein each sample is a bird's-eye view in which the position of a virtual vehicle that passed the traffic flow steady-state verification is marked on a grid cell around the generalized vehicle and not occupied by any target object.
Optionally, selecting, from the target object information, a vehicle that interferes with the driving of the tested vehicle as a generalized vehicle includes:
selecting, from the target object information, a vehicle whose path intersects the driving path of the tested vehicle within a future period as a generalized vehicle; and/or,
selecting, from the target object information, a vehicle whose distance to the tested vehicle is smaller than a set value and whose driving direction is the same as a generalized vehicle.
Optionally, the tested vehicle is used for uploading its own information to the edge computing unit through the roadside equipment; the roadside equipment is used for uploading the collected perception information to the edge computing unit;
the edge computing unit is used for fusing the perception information to obtain target object information, and for sending the target object information and the tested vehicle information to the edge cloud;
and the edge cloud is used for fusing the target object information and tested vehicle information uploaded by the edge computing units of a plurality of consecutive intersections to generate the virtual vehicle information and the early warning information.
Optionally, the test tasks include a test purpose, a test time, a test scenario, and a test requirement.
The system provided by the application has the following technical effects:
1) The virtual-real combined intelligent driving road cloud testing system reduces testing cost, improves the realism and diversity of test scenarios, and reduces the danger of limit scenarios.
2) The edge cloud delivers virtual vehicle information and early warning information to the tested vehicle, so that the tested vehicle makes and executes decisions according to the virtual vehicle information, the threat vehicle information and its own perception information. By adding virtual vehicles and threat vehicles to the actual traffic flow, the diversity of road targets is improved; by designing threat vehicles, the tested vehicle's response to cloud early-warning information can be fully exercised, realizing a real vehicle-road-cloud collaborative test.
3) The application determines the path points of the virtual vehicle by searching for unoccupied grid cells in bird's-eye views and connects these path points into running information, so that virtual vehicles that do not obstruct traffic yet influence the tested vehicle are generated within the actual traffic flow, facilitating flexible testing of the tested vehicle.
4) The application automatically generates, through a neural network, grid positions that pass the traffic flow steady-state verification, improving processing efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of an intelligent driving vehicle road cloud testing system based on virtual-real combination according to an embodiment of the present application;
FIG. 2 is a plurality of bird's eye views provided by embodiments of the present application;
fig. 3 is a schematic diagram of a virtual vehicle driving path according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more clear, the technical solutions of the present application will be clearly and completely described below. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
Example 1
Fig. 1 is a schematic structural diagram of the virtual-real combined intelligent driving vehicle road cloud testing system provided by this embodiment. The system comprises a center cloud, an edge cloud, edge computing units, roadside equipment and the tested vehicle, connected in sequence; any two connected components transmit signals bidirectionally, over either wired or wireless links. In addition, the test system involves target objects, i.e., the traffic participants present in the road environment. Target object information includes the position coordinates, heading angle, speed, type and other attributes of vehicles, pedestrians, bicycles, motorcycles and so on. The tested vehicle is an unmanned vehicle carrying a vehicle-mounted terminal.
Each of which is described in detail below.
The center cloud is used for issuing test tasks to the edge cloud. A test task includes the test purpose, test time, test scenario and test requirements. For example, the test scenario may be intersection collision early warning, with the requirement that the tested vehicle responds in time and does not collide with any target object (virtual vehicles included).
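A test task as described above can be sketched as a small data structure; the field names and example values below are illustrative assumptions, not the patent's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestTask:
    """One test task issued by the center cloud (field names are illustrative)."""
    purpose: str                 # test purpose, e.g. "intersection collision warning"
    time_window: tuple           # (start, end) timestamps of the test
    scenario: str                # test scenario identifier
    requirements: list = field(default_factory=list)  # pass/fail criteria

# Example task matching the intersection-collision scenario in the text.
task = TestTask(
    purpose="intersection collision warning",
    time_window=(0.0, 300.0),
    scenario="signalized-intersection",
    requirements=[
        "respond to the warning in time",
        "no collision with any target object, virtual vehicles included",
    ],
)
```

The edge cloud would parse such a task to decide which scenario logic (virtual vehicle generation, warning type) to activate.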
The roadside equipment comprises a roadside unit, lidar, millimeter-wave radar, cameras and the like, and uploads the collected perception information to the edge computing unit. The lidar, millimeter-wave radar and cameras capture the traffic participants on the road, such as real vehicles and pedestrians. The roadside unit, installed at the intersection, communicates with the vehicle-mounted terminal of the tested vehicle to acquire the tested vehicle information and uploads it to the edge computing unit.
The edge computing units are located at the respective intersections; each fuses the perception information uploaded by the roadside equipment into target object information and sends the target object information and the tested vehicle information to the edge cloud. Fused target object information means that the detections of the lidar, millimeter-wave radar, cameras and so on, together with the road environment information, are feature-extracted and converted into target-level information in a common time and space frame. The road environment information comprises road information (e.g., the line type, width, number of lanes and curvature of the road), traffic sign information (semantics, location, height and so on) and traffic light information (color, phase and so on). Static road environment information can be stored in the edge computing unit in advance and retrieved when needed, while dynamic information (such as traffic light states) is obtained from the roadside equipment in real time.
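The fusion into target-level information in a common time and space frame can be illustrated with a minimal sketch; the record fields and the averaging rule are assumptions for illustration, not the patent's fusion algorithm.

```python
from dataclasses import dataclass

@dataclass
class TargetObject:
    """Target-level record after fusing lidar / radar / camera detections
    into one shared intersection frame (fields are illustrative)."""
    obj_id: int
    obj_type: str       # "vehicle", "pedestrian", "bicycle", "motorcycle", ...
    x: float            # position in the shared frame (m)
    y: float
    heading: float      # heading angle (rad)
    speed: float        # m/s
    timestamp: float    # common time base after synchronization

def fuse(detections):
    """Toy fusion: average position and speed of duplicate detections of the
    same object id; keep the first detection's type, heading and timestamp."""
    by_id = {}
    for d in detections:
        by_id.setdefault(d.obj_id, []).append(d)
    fused = []
    for oid, ds in by_id.items():
        n = len(ds)
        fused.append(TargetObject(
            obj_id=oid, obj_type=ds[0].obj_type,
            x=sum(d.x for d in ds) / n, y=sum(d.y for d in ds) / n,
            heading=ds[0].heading, speed=sum(d.speed for d in ds) / n,
            timestamp=ds[0].timestamp))
    return fused
```

A real edge computing unit would add time synchronization and coordinate transforms before this step; the sketch only shows the shape of the output.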
The edge cloud is used for parsing the test task, fusing the target object information on the actual road uploaded by the edge computing units according to the parsing result, generating virtual vehicle information based on the target object information and the tested vehicle information, generating early warning information based on the target object information and the test task, and transmitting the virtual vehicle information and the early warning information to the edge computing units.
In practical application, if the parsed test task is to test whether the tested vehicle collides with the target objects and virtual vehicles in the intersection road environment, then as the tested vehicle passes through intersections, the target object information and tested vehicle information uploaded by the edge computing units of several consecutive intersections are fused into a macroscopic traffic flow, and virtual vehicle information is generated from it. The tested vehicle information includes the position coordinates, heading angle, speed and vehicle type of the tested vehicle; the virtual vehicle information likewise includes the position coordinates, heading angle, speed and vehicle type of the virtual vehicle. The virtual vehicle, together with real vehicles, pedestrians and so on, influences the state of the tested vehicle.
The edge cloud also generates early warning information based on the target object information and the test task, and transmits the virtual vehicle information and the early warning information to the edge computing unit. The early warning information differs by test scenario, for example intersection collision warning, road hazard notification or cooperative lane change. In detail, intersection collision warning information screens out the vehicles within a certain range that may pose a potential threat; the roadside equipment then further analyses these threat vehicles using their position, heading angle, body size, speed, triaxial acceleration, yaw rate and so on. Cooperative lane-change warning information carries the lane-change intentions of the vehicles involved, with the specifics of the lane-changing vehicle further analysed by the roadside equipment.
The edge computing unit is also used for forwarding the virtual vehicle information to the tested vehicle through the roadside equipment, and for sending the early warning information to the roadside equipment. The roadside equipment determines, according to the early warning information, the threat vehicles that pose a traffic threat to the tested vehicle and sends the threat vehicle information to the tested vehicle. For intersection collision warning, the roadside equipment screens out the most urgent threat vehicle; the corresponding threat vehicle information (timestamp, position, heading angle, body size, speed, triaxial acceleration, yaw rate and so on) is likewise transmitted to the tested vehicle.
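The patent does not specify how the roadside equipment ranks "most urgent"; a common stand-in is time-to-collision (TTC), sketched below under that assumption, with point-mass vehicle states.

```python
import math

def time_to_collision(ego, other):
    """Rough TTC between two point vehicles, each given as (x, y, vx, vy).
    Returns math.inf when the gap is not shrinking."""
    rx, ry = other[0] - ego[0], other[1] - ego[1]
    vx, vy = other[2] - ego[2], other[3] - ego[3]
    closing = -(rx * vx + ry * vy)       # positive when the gap is shrinking
    if closing <= 0:
        return math.inf
    # TTC = |r| / (closing speed along r) = |r|^2 / (-(r . v))
    return (rx * rx + ry * ry) / closing

def most_urgent_threat(ego, candidates):
    """Pick the candidate with the smallest TTC; None if nobody is closing."""
    best, best_ttc = None, math.inf
    for c in candidates:
        ttc = time_to_collision(ego, c["state"])
        if ttc < best_ttc:
            best, best_ttc = c, ttc
    return best
```

The selected candidate's full record (body size, triaxial acceleration, yaw rate, etc.) would then be forwarded to the tested vehicle as the threat vehicle information.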
The tested vehicle makes and executes decisions according to the virtual vehicle information, the threat vehicle information and its own perception information, and sends the execution information to the edge cloud through the roadside equipment and the edge computing unit in sequence; the edge cloud generates a test result from the execution information and feeds it back to the center cloud. Meanwhile, the tested vehicle uploads its own information to the edge computing unit through the roadside equipment in real time for subsequent tests.
When the test task ends, the center cloud manages and archives the test results, including the test time, test result and test records; the test records cover the tested vehicle state information, the virtual vehicle information and fused target object information of the current intersection, the virtual vehicle information and fused target object information of other intersections, the early warning information and so on.
The system provided by this embodiment has the following technical effects:
1) The virtual-real combined intelligent driving road cloud testing system reduces testing cost, improves the realism and diversity of test scenarios, and reduces the danger of limit scenarios.
2) The edge cloud delivers virtual vehicle information and early warning information to the tested vehicle, so that the tested vehicle makes and executes decisions according to the virtual vehicle information, the threat vehicle information and its own perception information. By adding virtual vehicles and threat vehicles to the actual traffic flow, the diversity of road targets is improved; by designing threat vehicles, the tested vehicle's response to cloud early-warning information can be fully exercised, realizing a real vehicle-road-cloud collaborative test.
Example two
This embodiment refines, on the basis of the above embodiment, the process of generating the virtual vehicle information.
When generating virtual vehicle information based on the target object information and the tested vehicle information, the edge cloud specifically performs:
S1, selecting, from the target object information, a vehicle that interferes with the driving of the tested vehicle as a generalized vehicle.
Specifically, a vehicle whose path intersects the future driving path of the tested vehicle within a period of time is selected from the target object information as a generalized vehicle; and/or a vehicle whose driving direction is the same as the tested vehicle's and whose distance to it is smaller than a set value is selected. The number of generalized vehicles is at least one.
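The two selection rules above can be combined into a single predicate; the distance and heading thresholds below are invented for illustration, and paths are simplified to sets of grid points.

```python
import math

def is_generalized(ego, target, dist_thresh=30.0, heading_thresh=math.pi / 6,
                   ego_path=None, target_path=None):
    """Decide whether `target` interferes with the ego (tested) vehicle:
    either their predicted paths share a point, or the target is closer than
    `dist_thresh` metres and heading roughly the same way.
    Thresholds and the dict-based state format are assumptions."""
    # Rule 1: predicted paths intersect within the lookahead horizon.
    if ego_path and target_path and set(ego_path) & set(target_path):
        return True
    # Rule 2: nearby and travelling in the same direction.
    dist = math.hypot(target["x"] - ego["x"], target["y"] - ego["y"])
    same_dir = abs(target["heading"] - ego["heading"]) < heading_thresh
    return dist < dist_thresh and same_dir
```

Applying this predicate over the fused target object list yields the set of generalized vehicles for step S1.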
S2, predicting the positions of the generalized vehicle at a plurality of consecutive future moments starting from the current moment, and selecting unoccupied spatial points around each position as virtual vehicle path points.
The virtual vehicle does not exist in the actual scene, but its information is transmitted to the vehicle-mounted terminal so that the tested vehicle treats it as a real vehicle.
The virtual vehicle is generated from the generalized vehicle; since the generalized vehicle is dynamic, the virtual vehicle moves along with it, and its path points and running information are generated in real time starting from time t0. The positions of the generalized vehicle at times t1, t2 and t3 are predicted from its position, speed, acceleration, heading angle and other information; the positions of the other target objects at t1, t2 and t3 must also be predicted, and the spaces not occupied by any target object are selected around the predicted positions at t1, t2 and t3.
In a specific embodiment, referring to fig. 2, bird's-eye views of the road section are generated for the current moment and a plurality of consecutive future moments. The perception information of the three-dimensional bird's-eye view is output for the whole intersection; sensor information from different viewing angles can be fused into the same two-dimensional coordinate system, and the views carry time-sequence information. The positions of the generalized vehicle and the other target objects are marked on the grid of each bird's-eye view, and grid cells around the generalized vehicle that are not occupied by any target object are selected in each view as path points of the virtual vehicle. A vehicle or target object may occupy one grid cell or several.
If one unoccupied grid cell is selected in each bird's-eye view, the sequence of views forms one virtual vehicle; if two unoccupied cells are selected in each view, the views form two virtual vehicles, and so on. The number of virtual vehicles is at least one. In fig. 2, hatching indicates occupied cells, the vehicle-shaped icon indicates the generalized vehicle, and the unoccupied cells selected among the cells adjacent to the generalized vehicle are indicated by circles, forming one virtual vehicle in total.
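The grid search described above can be sketched as follows, representing each bird's-eye view as a boolean occupancy grid; the 8-neighbourhood and the first-free-cell choice are illustrative assumptions, not the patent's exact selection rule.

```python
def free_neighbors(grid, r, c):
    """8-connected neighbours of cell (r, c) not occupied by any target object.
    `grid[r][c]` is True when the cell is occupied."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                out.append((nr, nc))
    return out

def virtual_waypoints(bev_frames, generalized_cells):
    """One waypoint per bird's-eye-view frame: the first free cell adjacent to
    the generalized vehicle's (predicted) cell in that frame. Returns None for
    a frame with no free neighbour."""
    waypoints = []
    for grid, (r, c) in zip(bev_frames, generalized_cells):
        free = free_neighbors(grid, r, c)
        waypoints.append(free[0] if free else None)
    return waypoints
```

Selecting one cell per frame yields one virtual vehicle; selecting two cells per frame would yield two, matching the counting rule in the text.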
S3, taking the state of the generalized vehicle at the current moment as the initial state, generating the running information of the virtual vehicle through its plurality of path points.
Referring to fig. 3, the path points are connected to form the running information of the virtual vehicle. The initial state includes the speed, heading angle and acceleration of the generalized vehicle.
Optionally, to stay close to a real driving situation, the virtual vehicle must travel between the path points according to kinematic constraints. To this end, a virtual vehicle control model, a kinematic model and a dynamics model are established so that the virtual vehicle passes through the path points, generating its running information.
The virtual vehicle control model may be a driver behavior model or a neural network model. The driver model may control the lateral and longitudinal directions independently or cooperatively, outputting control commands such as acceleration, throttle, brake pedal opening, front wheel steering angle and steering wheel torque. The vehicle kinematic and dynamics models receive the lateral and longitudinal control commands and output the position coordinates, heading angle, speed, acceleration, vehicle type and other information of the virtual vehicle.
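The control-plus-kinematics step can be illustrated with a much simpler stand-in than the driver-behavior and dynamics models the text describes: a unicycle kinematic model steered by a proportional heading controller. All gains, speeds and tolerances below are invented for illustration.

```python
import math

def drive_through(waypoints, v=5.0, dt=0.1, reach=0.5, k_heading=2.0,
                  max_steps=2000):
    """Unicycle kinematic model visiting each waypoint in order.
    Returns the generated trajectory [(t, x, y, heading), ...], which plays
    the role of the virtual vehicle's running information."""
    x, y = waypoints[0]
    th, t = 0.0, 0.0
    traj = [(t, x, y, th)]
    for wx, wy in waypoints[1:]:
        for _ in range(max_steps):
            if math.hypot(wx - x, wy - y) < reach:
                break                              # waypoint reached
            desired = math.atan2(wy - y, wx - x)
            err = (desired - th + math.pi) % (2 * math.pi) - math.pi
            th += k_heading * err * dt             # steer toward the waypoint
            x += v * math.cos(th) * dt             # kinematic update
            y += v * math.sin(th) * dt
            t += dt
            traj.append((t, x, y, th))
    return traj
```

In the patent's pipeline, each trajectory sample would be broadcast to the tested vehicle as virtual vehicle state (position, heading, speed) at the matching timestamp.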
Preferably, the addition of virtual vehicles must not destabilize, disorder, or block the overall traffic flow. After the travel information of a virtual vehicle is generated, the edge cloud adds it to the macroscopic traffic flow and performs traffic flow steady-state verification: if the virtual vehicle passes the verification, its information is issued to the edge computing unit; if it fails, its path points are modified. The verification may follow a two-dimensional lattice hydrodynamic traffic model: a) model the vehicles traveling in the x and y directions, select the region where the virtual vehicle is located, and obtain the local density and local speed; b) receive the information transmitted by the edge computing units of the other intersections, and establish the total average density of the current road network and the average vehicle spacing in the x and y directions; c) verify the stability of the traffic flow with the two-dimensional lattice hydrodynamic traffic model, and issue the virtual vehicle information if the traffic flow remains in a stable state.
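A deliberately reduced stand-in for steps a)–c) is sketched below. The patent points to a full two-dimensional lattice hydrodynamic model; here the check is collapsed to comparing the local density around the inserted virtual vehicle against the network-wide average density, and the window size and margin are assumptions chosen only to make the sketch runnable.

```python
def local_density(grid, r0, c0, half=1):
    """Occupied fraction of the (2*half+1)^2 window centred on (r0, c0)."""
    rows, cols = len(grid), len(grid[0])
    cells = occ = 0
    for r in range(max(0, r0 - half), min(rows, r0 + half + 1)):
        for c in range(max(0, c0 - half), min(cols, c0 + half + 1)):
            cells += 1
            occ += grid[r][c]
    return occ / cells

def passes_steady_state(grid, cell, network_avg_density, margin=0.2):
    """Insert the virtual vehicle at `cell` and accept it only if the
    local density does not exceed the road-network average (reported by
    the other edge computing units) by more than `margin`."""
    r, c = cell
    grid = [row[:] for row in grid]     # do not mutate the caller's grid
    grid[r][c] = 1                      # add the virtual vehicle
    return local_density(grid, r, c) <= network_avg_density + margin
```

On failure, the caller would modify the virtual vehicle's path points and retry, as the description specifies.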
Preferably, in practical applications a bird's eye view contains many unoccupied grids, and candidate travel information formed from their path points may fail verification, wasting computing resources and time. To address this, the present application trains a prediction neural network in advance. Each training sample is a bird's eye view in which the position of a virtual vehicle that passed traffic flow steady-state verification is marked on a grid that is around the generalized vehicle and not occupied by a vehicle. The prediction neural network may be a convolutional neural network that learns the positional characteristics of virtual vehicles. Each annotated bird's eye view is input into the pre-trained prediction neural network, which outputs the position information of the virtual vehicle in that view. Virtual vehicles output in this way are highly likely to pass verification.
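As a minimal stand-in for the position-prediction network, the sketch below scores every grid with a single fixed 3x3 convolution and returns the highest-scoring unoccupied grid. A real system would use a trained convolutional network; the kernel here (favoring free cells with many occupied neighbours, i.e. close to the traffic the virtual vehicle should interact with) is purely an illustrative assumption.

```python
import numpy as np

KERNEL = np.array([[0.0, 1.0, 0.0],
                   [1.0, 2.0, 1.0],
                   [0.0, 1.0, 0.0]])

def predict_position(bev):
    """bev: 2D array, 1 = occupied, 0 = free.
    Returns (row, col) of the best free grid."""
    padded = np.pad(bev, 1)                      # zero-pad the border
    score = np.zeros_like(bev, dtype=float)
    rows, cols = bev.shape
    for r in range(rows):
        for c in range(cols):
            score[r, c] = (padded[r:r + 3, c:c + 3] * KERNEL).sum()
    score[bev == 1] = -np.inf                    # never an occupied grid
    return tuple(np.unravel_index(np.argmax(score), score.shape))
```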
In this method, the path points of the virtual vehicle are determined by searching for unoccupied grids in the bird's eye views and are then connected to form the travel information, so that virtual vehicles that do not obstruct traffic yet influence the vehicle under test are generated within the actual traffic flow, facilitating flexible testing of the vehicle under test.
In this embodiment, grid positions that pass traffic flow steady-state verification are generated automatically by the neural network, improving processing efficiency.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in this specification, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, or apparatus comprising such an element.
It should also be noted that the terms "center," "upper," "lower," "left," "right," "vertical," "horizontal," "inner," "outer," and the like indicate an orientation or a positional relationship based on that shown in the drawings, and are merely for convenience of description and simplification of the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present application. Unless specifically stated or limited otherwise, the terms "mounted," "connected," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; these modifications or substitutions do not depart from the essence of the corresponding technical solutions from the technical solutions of the embodiments of the present application.

Claims (7)

1. An intelligent driving vehicle road cloud test system based on virtual-real combination, characterized by comprising:
the center cloud is used for issuing the test task to the edge cloud;
the edge cloud is used for analyzing the test task, merging target object information on an actual road uploaded by the edge computing unit according to the analysis result of the test task, generating virtual vehicle information based on the target object information and the tested vehicle information, generating early warning information based on the target object information and the test task, and transmitting the virtual vehicle information and the early warning information to the edge computing unit;
the edge calculation unit is used for forwarding virtual vehicle information to the vehicle to be tested through the road side equipment; the early warning information is sent to the road side equipment;
the road side equipment is used for determining a threat vehicle which generates traffic threat to the detected vehicle according to the early warning information and sending the threat vehicle information to the detected vehicle;
the detected vehicle is used for making a decision and executing according to the virtual vehicle information, the threat vehicle information and the host vehicle perception information, and transmitting the executing information to the edge cloud through the road side equipment and the edge computing unit in sequence; the edge cloud generates a test result according to the execution information and feeds the test result back to the center cloud;
the edge cloud is specifically configured to, when generating virtual vehicle information based on the target object information and the measured vehicle information:
selecting a vehicle with driving interference with the tested vehicle from the target object information as a generalized vehicle;
generating a bird's-eye view of the road sections at the current moment and a plurality of continuous future moments, and marking the positions of the generalized vehicle and other objects in a grid of the bird's-eye view; selecting grids which are around the generalized vehicle and are not occupied by the target object from each aerial view as path points of the virtual vehicle; the number of the virtual vehicles is at least one;
establishing a virtual vehicle control model, a kinematic model and a dynamics model; setting parameters of the virtual vehicle control model, the kinematic model and the dynamic model to enable the virtual vehicle to pass through a plurality of path points, and generating running information of the virtual vehicle.
2. The system of claim 1, wherein after generating the travel information of the virtual vehicle, the edge cloud is further to:
adding the running information of the virtual vehicle into a macroscopic traffic flow, and verifying the steady state of the traffic flow;
and if the virtual vehicle passes the verification, transmitting the information of the virtual vehicle to the edge computing unit.
3. The system of claim 1, wherein the selecting a grid around the generalized vehicle and not occupied by the vehicle in each bird's eye view as a path point for a virtual vehicle comprises:
inputting each marked aerial view into a pre-trained prediction neural network;
and outputting the position information of the virtual vehicle in each aerial view through the prediction neural network.
4. A system according to claim 3, further comprising, prior to inputting each annotated bird's eye view into the pre-trained predictive neural network:
obtaining a sample for training the predictive neural network;
training the predictive neural network using the samples;
wherein the sample is a bird's eye view of marking the position of a virtual vehicle on a grid around and not occupied by the vehicle, said virtual vehicle being validated by traffic flow steady state.
5. The system according to claim 1, wherein the selecting, from the target information, a vehicle having driving disturbance with the vehicle under test as the generalized vehicle includes:
selecting a vehicle with an intersection with a running path of the tested vehicle in a period of time in the future from the target object information as a generalized vehicle; and/or the number of the groups of groups,
a vehicle having a distance smaller than a set value and the same traveling direction is selected from the target object information as a generalized vehicle.
6. The system of claim 1, wherein:
the detected vehicle is used for uploading the information of the detected vehicle to the edge computing unit through the road side equipment;
the road side equipment is used for uploading the acquired perception information to the edge computing unit;
the edge computing unit is used for fusing the perception information to obtain target object information and sending the target object information and the detected vehicle information to an edge cloud;
and the edge cloud is used for fusing the target object information and the detected vehicle information uploaded by the edge computing units of the plurality of continuous intersections to generate virtual vehicle information and early warning information.
7. The system of any of claims 1-6, wherein the test tasks include test purpose, test time, test scenario, and test requirements.
CN202311351851.6A 2023-10-19 2023-10-19 Intelligent driving vehicle road cloud testing system based on virtual-real combination Active CN117074049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311351851.6A CN117074049B (en) 2023-10-19 2023-10-19 Intelligent driving vehicle road cloud testing system based on virtual-real combination


Publications (2)

Publication Number Publication Date
CN117074049A CN117074049A (en) 2023-11-17
CN117074049B true CN117074049B (en) 2024-01-02

Family

ID=88715735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311351851.6A Active CN117074049B (en) 2023-10-19 2023-10-19 Intelligent driving vehicle road cloud testing system based on virtual-real combination

Country Status (1)

Country Link
CN (1) CN117074049B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111683348A (en) * 2020-05-26 2020-09-18 重庆车辆检测研究院有限公司 Method, device and system for testing scale performance of V2X security application
CN111781855A (en) * 2020-07-15 2020-10-16 北京领骏科技有限公司 Traffic on-loop automatic driving simulation system
CN111795832A (en) * 2020-06-02 2020-10-20 福瑞泰克智能系统有限公司 Intelligent driving vehicle testing method, device and equipment
CN113447276A (en) * 2021-05-26 2021-09-28 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Vehicle testing system and vehicle testing method
CN113589794A (en) * 2021-07-30 2021-11-02 中汽院智能网联科技有限公司 Virtual-real combined automatic driving whole vehicle testing system
CN113867315A (en) * 2021-09-24 2021-12-31 同济大学 Virtual-real combined high-fidelity traffic flow intelligent vehicle test platform and test method
WO2022027304A1 (en) * 2020-08-05 2022-02-10 华为技术有限公司 Testing method and apparatus for autonomous vehicle
CN115061385A (en) * 2022-06-09 2022-09-16 电子科技大学 Real vehicle in-loop simulation test platform based on vehicle road cloud cooperation
CN115525974A (en) * 2022-10-14 2022-12-27 中汽研(天津)汽车工程研究院有限公司 Map matching test scene building method suitable for V2X field test system
CN115906511A (en) * 2022-12-19 2023-04-04 国汽(北京)智能网联汽车研究院有限公司 Simulation test system, method, device, equipment, medium and product
CN116680854A (en) * 2022-02-23 2023-09-01 中科大路(青岛)科技有限公司 Closed scene system for intelligent network-connected automatic driving vehicle and construction method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6730613B2 (en) * 2017-02-28 2020-07-29 株式会社Jvcケンウッド Overhead video generation device, overhead video generation system, overhead video generation method and program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jianghong Dong et al., "Mixed Cloud Control Testbed: Validating Vehicle-Road-Cloud Integration via Mixed Digital Twin," IEEE Transactions on Intelligent Vehicles, 2023, pp. 2723-2736. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant