CN114139329A - Virtual test scene construction method and device - Google Patents


Info

Publication number
CN114139329A
CN114139329A (application number CN202010917524.2A)
Authority
CN
China
Prior art keywords
dynamic
ontology
vehicle
static
lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010917524.2A
Other languages
Chinese (zh)
Inventor
张良壮
杨林
余本德
孙剑
周东浩
田野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202010917524.2A
Publication of CN114139329A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 — Computer-aided design [CAD]
    • G06F 30/20 — Design optimisation, verification or simulation
    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01M — TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 17/00 — Testing of vehicles
    • G01M 17/007 — Wheeled or endless-tracked vehicles
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2111/00 — Details relating to CAD techniques
    • G06F 2111/04 — Constraint-based CAD

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a virtual test scene construction method and device, applicable to test scenes for unmanned vehicles, which can solve the problems that constructed virtual test scenes cannot interact with an unmanned vehicle and are incomplete or unreasonable, thereby improving the virtual test efficiency of the unmanned vehicle. The method comprises the following steps: constructing a static ontology model by combining road specifications and expert experience, wherein the static ontology model is used to describe design constraints of the static ontology, and the static ontology comprises one or more of: road topology, road infrastructure, traffic control, or environment; constructing a dynamic ontology model based on the static ontology model, wherein the dynamic ontology comprises one or more of: a vehicle, a pedestrian, or an animal; and determining the travel trajectory of the dynamic ontology according to the design constraints of the static ontology and the behavior constraints of the dynamic ontology in combination with traffic flow simulation, so as to generate a test scene model.

Description

Virtual test scene construction method and device
Technical Field
The present application relates to the field of testing, and in particular, to a method and an apparatus for constructing a virtual test scenario.
Background
To ensure driving safety, an unmanned vehicle needs to be fully tested before it is deployed on public roads, for example by testing it in a virtual test scene. Currently, a virtual test scene may be constructed by the following methods: extracting the virtual test scene from natural driving data, or generating the virtual test scene based on expert experience.
However, replaying natural driving data is a playback-style test in which none of the travel trajectories of the background traffic flow can be changed, so the background traffic flow and the unmanned vehicle cannot interact bidirectionally; for example, the background traffic cannot react to the driving behavior of the unmanned vehicle. Moreover, the scheme of generating a virtual test scene based on expert experience has the following problems: owing to the subjectivity of virtual test scene classification, it is difficult to cover all test scenes; and when virtual test scenes are constructed by a computer through exhaustive traversal of parameter combinations, only the reasonable values of individual parameters are considered while the correlations among parameters are ignored, so a large number of unreasonable virtual test scenes are generated. That is, virtual test scenes constructed by these two methods suffer from inability to interact with the unmanned vehicle, incompleteness, and unreasonableness, resulting in low virtual test efficiency for the unmanned vehicle.
Disclosure of Invention
The embodiments of the application provide a virtual test scene construction method and device, which can solve the problems that constructed virtual test scenes cannot interact with an unmanned vehicle and are incomplete or unreasonable, thereby improving the virtual test efficiency of the unmanned vehicle.
To this end, the following technical solutions are adopted:
In a first aspect, a method for constructing a virtual test scene is provided. The method comprises the following steps: constructing a static ontology model by combining road specifications and expert experience, wherein the static ontology model is used to describe design constraints of the static ontology, and the static ontology comprises one or more of: road topology, road infrastructure, traffic control, or environment; constructing a dynamic ontology model based on the static ontology model, wherein the dynamic ontology model is used to describe behavior constraints of the dynamic ontology, and the dynamic ontology comprises one or more of: a vehicle, a pedestrian, or an animal; determining the travel trajectory of the dynamic ontology according to the design constraints of the static ontology and the behavior constraints of the dynamic ontology in combination with traffic flow simulation; and generating a test scene model. The test scene model comprises the travel trajectory, which is used for testing whether the driving behavior of the unmanned vehicle meets the design requirements.
According to the virtual test scene construction method provided in the first aspect, the static ontology model constructed by combining expert experience and road specifications embodies the design constraints among static ontologies, which can solve the problem that parameter combinations traversed by a computer take unreasonable values and thus produce virtual test scenes that violate the road specifications. In addition, the dynamic ontology model constructed based on the static ontology model embodies the behavior constraints between the dynamic ontology and the static ontology; determining the travel trajectory of the dynamic ontology based on these behavior constraints together with traffic flow simulation solves the problem that a dynamic ontology in a virtual test scene extracted from natural driving data cannot dynamically interact with the unmanned vehicle, and eliminates unreasonable behaviors of the dynamic ontology. Therefore, a large number of reasonable and complete virtual test scenes capable of dynamically interacting with the unmanned vehicle can be constructed, which improves the test efficiency of the unmanned vehicle.
In one possible design, constructing the static ontology model by combining the road specifications and expert experience may include: defining static ontology classes, where a static ontology class describes the corresponding static ontology, comprising one or more of: virtual test scene, road, dividing strip, lane, traffic sign, weather, or lighting condition; defining static object attributes and static data attributes, where the static object attributes describe the mapping relations between static ontology classes and the static data attributes describe the parameter sets of the static ontology classes; and defining static design rules, where a static design rule describes a mapping relation among static ontology classes, static object attributes, and static data attributes, and is used to determine the reasonableness of a generated virtual test scene. Based on the defined static object attributes and static design rules, the static ontologies can be constrained so that static ontologies having mapping relations are associated with one another. Therefore, when the static ontology model generates a static scene, the correlations among static ontologies are taken into account and a reasonable static scene is generated.
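For concreteness, the static ontology classes, object attributes, data attributes, and design rules described above can be sketched in code. The following Python sketch is purely illustrative: the class names, fields, and the lane-width rule are assumptions of this editor, not the patent's actual schema or the real road specification.

```python
from dataclasses import dataclass, field

@dataclass
class Lane:                      # static ontology class (illustrative)
    lane_id: str
    width_m: float               # static data attribute: a lane parameter
    line_type: str               # "solid" or "dashed"

@dataclass
class Road:                      # static ontology class (illustrative)
    road_id: str
    lanes: list = field(default_factory=list)  # static object attribute: Road -> Lane mapping

def satisfies_static_design_rules(road: Road) -> bool:
    """A toy static design rule: every lane must be 2.5-3.75 m wide
    (an assumed range standing in for the real road specification)."""
    return all(2.5 <= lane.width_m <= 3.75 for lane in road.lanes)

road = Road("R1", [Lane("L1", 3.5, "dashed"), Lane("L2", 3.75, "solid")])
print(satisfies_static_design_rules(road))  # True
```

A scene generator built on such definitions could reject any candidate static scene for which the design rules return false, which is the reasonableness check the claim describes.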
Optionally, the design constraints of the static ontology may include one or more of the following: the positional relations among roads, between roads and lanes, and between lanes and dividing strips; the position and type of a lane line and its constraint relation with lanes and/or dividing strips; and the positional relations between traffic signs and lanes and/or dividing strips. Because the static object attributes and static design rules are defined from the design constraints specified by the road specifications, when a road specification is updated, only the static object attributes and static design rules need to be redefined according to the new design constraints, which makes the constructed static ontology model easy to modify.
Further, constructing the dynamic ontology model based on the static ontology model may include: defining dynamic ontology classes, where a dynamic ontology class describes the corresponding dynamic ontology, comprising one or more of: a vehicle, a pedestrian, or an animal; defining dynamic object attributes and dynamic data attributes, where the dynamic object attributes describe the mapping relations between dynamic ontology classes and the constraint relations between the dynamic ontology and the static ontology, and the dynamic data attributes describe the parameter sets of the dynamic ontology classes; and defining dynamic design rules, where a dynamic design rule describes a mapping relation among dynamic ontology classes, dynamic object attributes, and dynamic data attributes, and is used to determine the reasonableness of a generated test scene. By defining the dynamic object attributes and the dynamic design rules, the dynamic ontology is constrained so that dynamic ontologies having mapping relations are associated with the static ontology. Therefore, when the dynamic ontology model generates a dynamic scene, the correlations among dynamic ontologies and between dynamic ontologies and static ontologies are taken into account and a reasonable dynamic scene is generated.
Optionally, the behavior constraints of the dynamic ontology may include one or more of the following: the positional relation between a vehicle and a lane, the vehicle speed limit, the vehicle driving-direction constraint, the vehicle steering constraint, the vehicle lane-change constraint, the vehicle cut-in constraint, and the vehicle cut-out constraint. Because the dynamic object attributes and dynamic design rules are defined from the behavior constraints specified by the driving behavior specifications, when a driving behavior specification is updated, only the dynamic object attributes and dynamic design rules need to be redefined according to the new behavior constraints, which makes the constructed dynamic ontology model easy to modify.
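A minimal check of two of these behavior constraints — the vehicle speed limit and the lane-change constraint — might look as follows in Python; the function names and the dashed/solid rule (taken from the background description of lane lines) are illustrative assumptions, not the patent's implementation:

```python
def speed_within_limit(speed_kmh: float, limit_kmh: float) -> bool:
    # Behavior constraint: a vehicle's speed must not exceed the lane's limit.
    return speed_kmh <= limit_kmh

def lane_change_allowed(line_type: str) -> bool:
    # Behavior constraint: a dashed lane line permits a lane change,
    # a solid lane line forbids it (as described for fig. 1).
    return line_type == "dashed"

print(speed_within_limit(100, 120), lane_change_allowed("solid"))  # True False
```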
Still further, determining the travel trajectory of the dynamic ontology according to the design constraints of the static ontology and the behavior constraints of the dynamic ontology in combination with traffic flow simulation may include: determining the initial state of the dynamic ontology according to the design constraints of the static ontology, the behavior constraints of the dynamic ontology, and the scene design requirements; and determining the travel trajectory of the dynamic ontology from its initial state in combination with traffic flow simulation. By determining the initial state and travel trajectory of the dynamic ontology, complex dynamic interactions among all traffic participants can be embodied in the dynamic scene. Therefore, when the unmanned vehicle is tested in the virtual test scene, interactive test data that meet the design can be obtained, which improves the virtual test efficiency of the unmanned vehicle.
Optionally, determining the initial state of the dynamic ontology according to the design constraints of the static ontology and the behavior constraints of the dynamic ontology may include: determining the initial state of the dynamic ontology according to the design constraints of the static ontology, the behavior constraints of the dynamic ontology and the scene design requirements, the test target, and the behavior model of the dynamic ontology.
Optionally, the behavior model of the dynamic ontology may include one or more of the following: a car-following model, a lane-change model, a signal-light reaction model, a random interference model, a public transit and pedestrian behavior model, a gap-acceptance model, a merging model, a cooperation model, or a collision-avoidance model.
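The patent does not specify which car-following model is used; one widely used choice that could serve as the "following model" above is the Intelligent Driver Model (IDM). The sketch below, with assumed default parameters, computes the acceleration of a following vehicle from its own speed, the leader's speed, and the gap between them:

```python
import math

def idm_acceleration(v, v_lead, gap, v0=33.3, T=1.5, a_max=1.0, b=1.5, s0=2.0):
    """Intelligent Driver Model: an illustrative car-following choice.
    v: ego speed (m/s), v_lead: leader speed (m/s), gap: bumper gap (m).
    v0: desired speed, T: desired time headway, a_max: max acceleration,
    b: comfortable deceleration, s0: minimum standstill gap."""
    dv = v - v_lead
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a_max * b)))
    return a_max * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# A close, slower leader forces deceleration; a large gap allows acceleration.
print(idm_acceleration(v=20.0, v_lead=15.0, gap=10.0) < 0)   # True
print(idm_acceleration(v=20.0, v_lead=20.0, gap=200.0) > 0)  # True
```

Integrating this acceleration over simulation time steps yields the travel trajectory of a background vehicle that reacts to the unmanned vehicle ahead of it, which is exactly the bidirectional interaction that replayed natural driving data cannot provide.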
In a possible design, the method for constructing a virtual test scenario provided in the first aspect may further include: the driving behavior of the unmanned vehicle under the first test scene is obtained. A second test scenario is determined. The second test scene is a test scene in which the driving behavior of the unmanned vehicle in the first test scene does not meet the design requirement. And carrying out encryption sampling on the parameter interval corresponding to the second test scene to generate a third test scene. And testing the third test scene. And constructing a first test scene through coarse-grained parameter value taking, and carrying out encryption sampling on a parameter interval corresponding to the first test scene with the driving behavior not meeting the design requirement according to the driving behavior of the unmanned vehicle in the first test scene to generate a third test scene. Therefore, the virtual test scene which does not meet the design requirement is directionally sampled and tested for multiple times, the problem that the test efficiency is low due to the fact that the virtual test scene constructed due to the fact that the parameter value is too thin can be solved, the virtual test efficiency and the test pertinence of the unmanned vehicle can be improved, and the method is more suitable for the unmanned test compared with the traditional test scene design based on human driving behaviors.
In a possible design, the method for constructing a virtual test scenario provided in the first aspect may further include: and adding a label to the virtual test scene based on the interactive relation between the static ontology and the dynamic ontology. Wherein the label is used for managing the virtual test scenario. The virtual test scene is labeled, for example, classified according to the function or danger degree of the virtual test scene, so that management in the virtual test scene library is facilitated.
In a second aspect, a virtual test scenario constructing apparatus is provided. The virtual test scene constructing device comprises: the device comprises a building module, a determining module and a generating module. The building module is used for building the static ontology model by combining road specifications and expert experience. Wherein the static ontology model is used to describe design constraints of the static ontology. The static ontology comprises one or more of: road topology, road infrastructure, traffic control, or environment. And the building module is also used for building the dynamic body model based on the static body model. The dynamic ontology model is used for describing the behavior constraint of the dynamic ontology, and the dynamic ontology comprises one or more of the following items: a vehicle, a pedestrian, or an animal. And the determining module is used for determining the running track of the dynamic body according to the design constraint of the static body and the behavior constraint of the dynamic body by combining the traffic flow simulation. And the generating module is used for generating a test scene model. The test scene model comprises a driving track, and the driving track is used for testing whether the driving behavior of the unmanned vehicle meets the design requirement.
In a possible design, the building module is further configured to define static ontology classes, where a static ontology class describes the corresponding static ontology, comprising one or more of: virtual test scene, road, dividing strip, lane, traffic sign, weather, or lighting condition. The building module is further configured to define static object attributes, which describe the mapping relations between static ontology classes. The building module is further configured to define static data attributes, which describe the parameter sets of the static ontology classes. The building module is further configured to define static design rules, where a static design rule describes a mapping relation among static ontology classes, static object attributes, and static data attributes, and is used to determine the reasonableness of a generated virtual test scene.
Optionally, the design constraints of the static ontology include one or more of the following: the positional relations among roads, between roads and lanes, and between lanes and dividing strips; the position and type of a lane line and its constraint relation with lanes and/or dividing strips; and the positional relations between traffic signs and lanes and/or dividing strips.
Further, the building module is further configured to define dynamic ontology classes, where a dynamic ontology class describes the corresponding dynamic ontology, comprising one or more of: a vehicle, a pedestrian, or an animal. The building module is further configured to define dynamic object attributes, which describe the mapping relations between dynamic ontology classes and the constraint relations between the dynamic ontologies and the static ontologies. The building module is further configured to define dynamic data attributes, which describe the parameter sets of the dynamic ontology classes. The building module is further configured to define dynamic design rules, where a dynamic design rule describes a mapping relation among dynamic ontology classes, dynamic object attributes, and dynamic data attributes, and is used to determine the reasonableness of a generated test scene.
Optionally, the behavior constraints of the dynamic ontology include one or more of the following: the positional relation between a vehicle and a lane, the vehicle speed limit, the vehicle driving-direction constraint, the vehicle steering constraint, the vehicle lane-change constraint, the vehicle cut-in constraint, and the vehicle cut-out constraint.
Still further, the determining module is further configured to determine an initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint of the dynamic ontology, and the scene design requirement. The determining module is also used for determining the running track of the dynamic body according to the initial state of the dynamic body by combining with traffic flow simulation.
Optionally, the building module is further configured to determine an initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint and the scenario design requirement of the dynamic ontology, and the test target and the behavior model of the dynamic ontology.
Optionally, the behavior model of the dynamic ontology includes one or more of the following: a car-following model, a lane-change model, a signal-light reaction model, a random interference model, a public transit and pedestrian behavior model, a gap-acceptance model, a merging model, a cooperation model, or a collision-avoidance model.
In a possible design, the virtual test scene construction apparatus provided in the second aspect may further include an obtaining module and a testing module. The obtaining module is configured to obtain the driving behavior of the unmanned vehicle in a first test scene. The determining module is further configured to determine a second test scene, where the second test scene is a test scene in which the driving behavior of the unmanned vehicle in the first test scene does not meet the design requirements. The generating module is further configured to densify the sampling of the parameter interval corresponding to the second test scene to generate a third test scene. The testing module is configured to test in the third test scene.
In a possible design, the virtual test scenario constructing apparatus provided in the second aspect may further include an adding module. And the adding module is used for adding a label to the virtual test scene based on the interactive relation between the static ontology and the dynamic ontology. Wherein the label is used for managing the virtual test scenario.
Optionally, the virtual test scenario construction apparatus according to the second aspect may further include a storage module, where the storage module stores a program or instructions. When the processing module executes the program or the instructions, the virtual test scenario construction apparatus according to the second aspect may execute the virtual test scenario construction method according to the first aspect.
It should be noted that the virtual test scenario constructing apparatus according to the second aspect may be a network device, such as a server, or may also be a chip (system) or other component or assembly disposed in the network device, which is not limited in this application.
In addition, the technical effect of the virtual test scenario construction apparatus according to the second aspect may refer to the technical effect of the virtual test scenario construction method according to the first aspect, and details are not repeated here.
In a third aspect, a virtual test scenario constructing apparatus is provided. The virtual test scenario construction apparatus is configured to execute the virtual test scenario construction method according to a possible implementation manner of the first aspect.
In a fourth aspect, a virtual test scenario construction apparatus is provided. The virtual test scene constructing device comprises: a processor, configured to execute the virtual test scenario construction method according to a possible implementation manner of the first aspect.
In a fifth aspect, a virtual test scenario constructing apparatus is provided. The virtual test scene constructing device comprises: a processor coupled to a memory, the memory for storing a computer program; the processor is configured to execute the computer program stored in the memory, so that the virtual test scenario construction apparatus executes the virtual test scenario construction method according to the possible implementation manner of the first aspect.
It should be noted that the virtual test scenario constructing apparatus in the fifth aspect may be a terminal device or a network device, such as a computer or a server, or may be a chip (system) or other component or assembly disposed in the terminal device or the network device, which is not limited in this application.
In addition, the technical effect of the virtual test scenario construction apparatus according to the fifth aspect may refer to the technical effect of the virtual test scenario construction method according to the first aspect, and details are not repeated here.
In a sixth aspect, a virtual test scenario construction system is provided. The virtual test scenario construction system comprises one or more network devices.
In a seventh aspect, a computer-readable storage medium is provided, comprising: computer programs or instructions; when the computer program or the instructions are run on a computer, the computer is enabled to execute the virtual test scenario construction method described in the possible implementation manner of the first aspect.
In an eighth aspect, a computer program product is provided, which includes a computer program or instructions, and when the computer program or instructions runs on a computer, the computer executes the virtual test scenario construction method described in the possible implementation manner of the first aspect.
Drawings
Fig. 1 is a traffic scene diagram provided in an embodiment of the present application;
fig. 2 is a first flowchart of a virtual test scenario construction method provided in an embodiment of the present application;
fig. 3 is a second schematic diagram of static scene construction provided in the embodiment of the present application;
FIG. 4 is a schematic diagram of a test scenario provided by an embodiment of the present application;
fig. 5 is a third schematic flowchart of a virtual test scenario construction method provided in the embodiment of the present application;
FIG. 6 is a schematic flow chart of a traffic flow simulation provided in an embodiment of the present application;
FIG. 7 is a schematic illustration of a highway provided by an embodiment of the present application;
FIG. 8 is a schematic illustration of a connection between different road segments provided by an embodiment of the present application;
FIG. 9 is a schematic illustration of car following under merging flow provided in an embodiment of the present application;
FIG. 10 is a schematic illustration of a following under split flow as provided by an embodiment of the present application;
fig. 11 is a schematic diagram of a third order bezier curve provided in the embodiment of the present application;
fig. 12 is a schematic diagram of collision avoidance of a vehicle according to an embodiment of the present application;
fig. 13 is a first schematic structural diagram of a virtual test scenario construction apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a virtual test scenario construction apparatus according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
This application presents various aspects, embodiments, or features around a system that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that each system may include additional devices, components, modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. Furthermore, combinations of these schemes may also be used.
In addition, in the embodiments of the present application, words such as "exemplarily" and "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present concepts in a concrete fashion.
In the embodiments of the present application, "of" and "corresponding" may sometimes be used interchangeably; it should be noted that their intended meanings are consistent when the distinction is not emphasized.
In the examples of the present application, a subscript, for example the "1" in W1, may sometimes be incorrectly written in non-subscripted form as "W1"; the intended meaning is consistent when the distinction is not emphasized.
The traffic scene described in the embodiment of the present application is for more clearly illustrating the technical solution of the embodiment of the present application, and does not constitute a limitation to the technical solution provided in the embodiment of the present application, and as a person having ordinary skill in the art knows, along with the evolution of the road specification and the driving behavior specification and the appearance of a new traffic scene, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
Since a virtual test scene is constructed based on a real traffic scene, to facilitate understanding of the embodiments of the present application, the traffic scene shown in fig. 1 is first taken as an example to describe in detail a traffic scene to which the embodiments of the present application are applicable. Exemplarily, fig. 1 is a diagram of a traffic scene to which the virtual test scene construction method provided in the embodiments of the present application is applicable.
As shown in fig. 1, the traffic scene includes, but is not limited to: traffic signal lights, traffic signs, stop lines, guide arrows, zebra crossings, lane boundaries, a road isolation zone, and motor vehicles. A traffic signal light is a signal lamp that directs the flow of traffic and belongs to the road infrastructure. Traffic signal lights generally consist of red, green, and yellow lights: the red light forbids passage, the green light permits passage, and the yellow light gives warning. A traffic sign is a road device that conveys guidance, restriction, warning, or direction information by words or symbols. The traffic sign in fig. 1 indicates that the road section ahead of the vehicle is slow-moving. A stop line is a solid white line perpendicular to the center line of the roadway at an intersection, marking the position where a vehicle waits for the traffic signal; stop lines belong to the prohibition markings among road traffic markings. A guide arrow is a traffic marking indicating the permitted directions of travel of the vehicle. Two guide arrows are shown in fig. 1: one where the vehicle may only go straight, and one where the vehicle may go straight or turn right. A zebra crossing is the marking that forms a pedestrian crosswalk, delimiting the area within which pedestrians cross the lane. Lane boundaries are traffic markings used to separate traffic flows traveling in the same direction or in opposite directions; a lane line is generally either solid or broken, where a broken line indicates that vehicles may change lanes and a solid line indicates that they may not. The road isolation zone separates opposing traffic flows, preventing serious traffic accidents caused by vehicles driving into the opposite lane. Motor vehicles are wheeled vehicles driven or towed by a power plant.
It should be understood that fig. 1 is a simplified schematic diagram given only for ease of understanding; a virtual test scene may also include other traffic signs and road traffic markings. For example, other traffic signs may include road indication signs, road construction safety signs, tourist area signs, etc.; other road traffic markings may include no-passing lines, left-turn waiting areas, expressway headway confirmation markings, and the like.
It should be noted that the traffic scenario shown in fig. 1 is only an example. The virtual test scene construction method provided by the embodiment of the application can also be applied to other traffic scenes, such as crossroads, T-shaped intersections, one-way roads, roundabouts, overpasses, expressways, mountain roads, country roads and the like, and is not described again here.
The virtual test scenario construction method provided by the embodiment of the present application will be specifically described below with reference to fig. 2 to 11.
Exemplarily, fig. 2 is a first flowchart of a virtual test scenario construction method provided in the embodiment of the present application. The virtual test scenario construction method can be applied to construction of a traffic scenario as shown in fig. 1.
As shown in fig. 2, the virtual test scenario construction method includes the following steps:
S201: a static ontology model is constructed by combining the road specification and expert experience.
The road specification is a national and/or regional design specification for roads, covering, for example, road layout and road infrastructure design. The specification of the road layout includes, but is not limited to: the design of lane markings and road topological relations. The specification of the road infrastructure includes, but is not limited to: the design of road dividing strips, traffic signs, and traffic signal lamps. Expert experience means that the construction of the static ontology model also relies on the experience and knowledge of domain experts.
Fig. 3 is a schematic diagram of static scene construction provided in the embodiment of the present application. As shown in fig. 3, the parameters available for the simulation scene are divided into two categories: the static layer and the dynamic layer. The static layer corresponds to the static ontology and includes four types of parameters, namely road topology, road infrastructure, traffic control, and environment. The dynamic layer corresponds to the dynamic ontology and includes one type of parameter: the behavior of traffic participants.
In connection with fig. 3, a static ontology model is used to describe the design constraints of the static ontology, which includes one or more of: road topology, road infrastructure, traffic control, or environment. The road topology includes, but is not limited to: road type, the number and positional relationship of lanes, lane lines, stop lines, or guide arrows. The road infrastructure includes, but is not limited to: road dividing strips, traffic signs, and traffic signal lamps. Traffic control refers to short-term, planned control measures, including but not limited to: temporary closure of road sections due to road construction or large-scale events. The environment refers to the environmental information of the virtual test scene, including but not limited to: weather, such as rainy or sunny days; and lighting, such as day or night.
Illustratively, the design constraints of the static ontology include one or more of: the positional relationship among roads, between roads and lanes, among lanes, and between lanes and dividing strips; the constraint relationship between the position and type of a lane line and lanes and/or dividing strips; and the positional relationship between traffic signs and lanes and/or dividing strips.
The inter-road positional relationship includes, but is not limited to: road sections connected through a left-turn intersection, a right-turn intersection, a T-shaped intersection, a roundabout, or a crossroad. The road-lane positional relationship includes, but is not limited to: the type, width, or number of lanes set on the road; lane types may include one-way lanes, two-way lanes, motor lanes, non-motor lanes, and the like. The inter-lane positional relationship includes, but is not limited to: lanes adjacent to lanes, lanes in the same road section separated by lane dividing lines or a central dividing strip, or lanes in different road sections connected by a left-turn intersection, a right-turn intersection, a T-shaped intersection, a roundabout, or a crossroad. The positional relationship between a lane and a dividing strip includes, but is not limited to: the left side of the lane is adjacent to a dividing strip, or the right side of the lane is adjacent to a dividing strip. The constraint relationship between the position and type of a lane line and lanes and/or dividing strips includes, but is not limited to: from the road center line outward, the following may be arranged in sequence: central dividing strip, lane solid line, lane broken line, lane solid line, roadside dividing strip. The positional relationship between a traffic sign and lanes and/or dividing strips includes, but is not limited to: a traffic sign cannot be placed on a lane; it may be suspended above the road or set on a roadside dividing strip.
As shown in fig. 3, after the static layer is constructed, the classes corresponding to the static layer, their object attributes, their data attributes, and their design rules need to be defined according to the parameters in the static layer. Thus, in one possible design, with reference to fig. 3, S201 above, namely building the static ontology model by combining the road specification and expert experience, may include the following steps:
Step 11, defining static ontology classes.
A class is a concept that describes a domain. For example, a road class represents all roads, and a specific road is an instance of the road class. Classes may have subclasses. For example, roads can be divided into urban roads and highways, or classified by grade into expressways, arterial roads, secondary arterial roads, and branch roads. Based on this concept, static ontology classes can be used to describe one or more of the following items corresponding to the static ontology: virtual test scene, road, dividing strip, lane, traffic sign, weather, or lighting condition.
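The class/subclass/instance notions above can be sketched with plain Python classes. This is only an illustration, not the patent's implementation; names such as UrbanRoad, Highway, and the instance g15 are invented for demonstration.

```python
# Illustrative sketch: static ontology classes as a Python class hierarchy.
# A class describes a domain concept; a subclass refines it; an instance
# represents one concrete member (e.g., one particular road).

class StaticOntologyClass:
    """Base concept for all static ontology classes."""

class Road(StaticOntologyClass): pass
class UrbanRoad(Road): pass      # subclass: urban road
class Highway(Road): pass        # subclass: highway
class Lane(StaticOntologyClass): pass
class DividingStrip(StaticOntologyClass): pass
class TrafficSign(StaticOntologyClass): pass
class Weather(StaticOntologyClass): pass

# A specific road is an instance of a road class (name invented).
g15 = Highway()
assert isinstance(g15, Road)                       # a highway is a road
assert issubclass(UrbanRoad, StaticOntologyClass)  # subclass chain holds
print("ontology class hierarchy checks passed")
```

In a full implementation an ontology toolkit would replace these bare classes, but the subclass/instance relationships it must preserve are exactly the ones asserted here.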
Step 12, defining static object properties.
The static object attribute refers to a mapping relationship from one static ontology class to another static ontology class. Static object attributes may include positional relationships between static objects, such as the positional relationship between a lane and a dividing strip; positional relationships include, but are not limited to: the left side of the dividing strip is adjacent to a lane, or the right side of the dividing strip is adjacent to a lane. Static object attributes may also include containment relationships between static objects, such as among the road, lane, and dividing strip; containment relationships include, but are not limited to: the road includes lanes and dividing strips. Static object attributes may also include the relationship between the virtual test scene and the weather. The definition domain and value range of the static object attributes are defined in conjunction with expert experience, and the static object attributes are illustrated below by table 1:
TABLE 1
Static object attribute | Description | Definition domain | Value range
Left adjacent / right adjacent | Adjacency between dividing strip and lane | Dividing strip | Lane
Contains / contained in | Road contains lane and dividing strip | Road | Lane, dividing strip
Suspended above | Traffic sign suspended above lane | Traffic sign | Lane
Has weather | Weather of the static scene | Scene | Weather
As shown in table 1, the adjacency relationship of lanes includes the adjacency between a lane and a dividing strip; for example, a dividing strip has a left-adjacent lane or a right-adjacent lane. The containing and contained relationships include: the road contains lanes and dividing strips, and lanes and dividing strips are contained in the road. The relationship between the traffic sign and the lane includes the traffic sign being suspended above the lane. The weather of the static scene may be rainy, hot, and so on.
It should be understood that table 1 is only a few examples of the static object attributes described above, and that the static objects in table 1 may also include other relationships. For example, the relationship between the traffic sign and the lane may further include that the traffic sign is disposed on both sides of the lane.
It should be noted that the static object attributes shown in table 1 are only a few examples. The static object attributes provided by the embodiment of the application can also be applied to the relationship between other static objects, for example, the inter-road position relationship includes that the road sections are connected through a left-turn intersection, a right-turn intersection, a T-shaped intersection or an intersection. For another example, the position relationship between the road and the lane includes the type, width, or number of lanes set on the road, and is not described herein again.
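To illustrate how a static object attribute's definition domain and value range could be enforced, the following hypothetical sketch models an attribute as a typed relation; the class and attribute names (ObjectAttribute, leftAdjacentTo) are invented for this example and are not from the patent.

```python
# Sketch: a static object attribute as a typed relation with a definition
# domain and a value range (cf. Table 1); linking two objects is allowed
# only when their types match the attribute's domain and range.

class ObjectAttribute:
    def __init__(self, name, domain, rng):
        self.name, self.domain, self.range = name, domain, rng
        self.pairs = []  # asserted (subject, object) pairs

    def link(self, subj, obj):
        # reject links that violate the domain/range typing
        if not isinstance(subj, self.domain) or not isinstance(obj, self.range):
            raise TypeError(f"{self.name}: domain/range violation")
        self.pairs.append((subj, obj))

class DividingStrip: pass
class Lane: pass

left_adjacent = ObjectAttribute("leftAdjacentTo", DividingStrip, Lane)
left_adjacent.link(DividingStrip(), Lane())   # OK: strip -> lane
try:
    left_adjacent.link(Lane(), Lane())        # wrong domain: lane -> lane
except TypeError:
    print("rejected: domain violation")
```

The same pattern extends to the other rows of table 1 (contains/contained, suspended above, has weather) by swapping the domain and range classes.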
Step 13, defining static data attributes.
The static data attribute is used for describing a parameter set of the static ontology class. For example, data attributes of a lane may include, but are not limited to, lane number, lane width; data attributes of traffic signs may include, but are not limited to: type, location; the separator data attributes may include, but are not limited to: width, type.
Step 14, defining static design rules.
The static design rules are used to describe the mapping relationships among static ontology classes, static object attributes, and static data attributes, and are used to determine whether a generated virtual test scene is reasonable.
The following illustrates static design rules:
(1) A road needs to include a central dividing strip, roadside (outside) dividing strips, lanes, and shoulders.
(2) Lanes and dividing strips need to obey left-right adjacency rules; for example, the following may be arranged in sequence from the road center line outward: central dividing strip, traffic lanes, shoulder, roadside dividing strip.
(3) For roads with fewer than three one-way lanes, the central dividing strip may be omitted and a marking line used for separation instead.
(4) The marking lines between lanes are broken lines, and the marking lines between a lane and a dividing strip are solid lines.
(5) Traffic signs need to be suspended above the road or placed on roadside dividing strips.
It should be noted that the static design rule includes, but is not limited to, the above-mentioned examples.
After the static ontology classes, static object attributes, static data attributes, and static design rules are defined, a static scene can be generated by traversing the static data attributes. During the traversal, however, the process is constrained by the static object attributes and the static design rules. In this way, generation of unreasonable static scenes can be avoided.
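The traversal-under-constraints idea can be sketched as a generate-and-filter loop: enumerate all combinations of the static data attributes, then keep only those that satisfy the design rules. The parameter names and the two simplified rule predicates below are stand-ins invented for illustration (loosely modeled on rules (3) and (5) above), not the patent's actual rule set.

```python
# Sketch: generate static scenes by traversing data-attribute ranges,
# filtered by simplified static design rules.
import itertools

lane_counts    = [1, 2, 3, 4]                  # one-way lane count
median_types   = ["none", "marking", "strip"]  # central separation
sign_positions = ["overhead", "roadside", "on_lane"]

def rule_median(lanes, median):
    # stand-in for rule (3): with fewer than 3 one-way lanes a marking may
    # replace the central dividing strip; otherwise some separation is required
    return median in ("marking", "strip")

def rule_sign(sign):
    # stand-in for rule (5): signs hang over the road or sit on roadside strips
    return sign in ("overhead", "roadside")

static_scenes = [
    (lanes, median, sign)
    for lanes, median, sign in itertools.product(lane_counts, median_types, sign_positions)
    if rule_median(lanes, median) and rule_sign(sign)
]
total = len(lane_counts) * len(median_types) * len(sign_positions)
print(len(static_scenes), "valid static scenes out of", total)  # 16 out of 36
```

Each surviving tuple is one candidate static scene; unreasonable combinations (e.g., a sign placed on a lane) never reach the output.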
Both the static object attributes and the static design rules are defined according to the static design constraints associated with the road specification; that is, the static design constraints must be designed according to the road specification. For example, if the road specification stipulates that a left-turn lane needs a guide arrow indicating a left turn, the corresponding static design constraint needs to include: a left-turn guide arrow is set on the left-turn lane. For another example, if the road specification stipulates that reminder signs need to be set on both sides of a road section with a pedestrian crossing, the corresponding static design constraint may be as shown in fig. 1: a traffic sign reminding vehicles to slow down is set on one side of the driving lane.
Based on the defined static object properties and static design rules, the static ontologies can be constrained so that the static ontologies with mapping relations are associated. Therefore, in the process of generating the static scene by the static ontology model, the relevance among the static ontologies can be considered, and a reasonable static scene is generated. Furthermore, because the static object attributes and the static design rules are defined through the design constraints related to the road specifications, when the road specifications are updated, the static object attributes and the static design rules only need to be redefined according to the new design constraints, and the constructed static ontology model is convenient to modify.
S202: a dynamic ontology model is constructed based on the static ontology model.
The dynamic ontology model is used to describe the behavior constraints of the dynamic ontology, and the dynamic ontology includes one or more of the following: a vehicle, a pedestrian, or an animal. Vehicles may include motor vehicles and non-motor vehicles. Further, motor vehicles may include cars, buses, trucks, motorcycles, and the like; non-motor vehicles may include bicycles, human-powered tricycles, and the like.
Illustratively, the behavior constraints of the dynamic ontology may include one or more of: the vehicle-lane positional relationship, the inter-vehicle positional relationship, vehicle speed limits, vehicle heading constraints, vehicle steering constraints, vehicle lane-change constraints, vehicle cut-in constraints, and vehicle cut-out constraints.
The vehicle-lane positional relationship includes, but is not limited to: the vehicle needs to be located on a lane. The inter-vehicle positional relationship includes, but is not limited to: different vehicles cannot be located at the same position; when a front vehicle and a rear vehicle are located on the same lane and the relative distance between them is less than or equal to a set value, the two vehicles have a following relationship. The set value can be chosen according to expert experience; for example, if the relative distance between a front vehicle and a rear vehicle on the same lane is less than 50 meters, the two vehicles can be considered to have a following relationship.
Vehicle speed limits include, but are not limited to: different road sections have corresponding speed-limit requirements, and a vehicle traveling on a road section needs to obey that section's speed limit. Vehicle heading constraints include, but are not limited to: the vehicle needs to travel in the direction indicated by the traffic sign. For example, if a traffic sign indicates that vehicles can only go straight at the intersection ahead, then the vehicle can only go straight at that intersection.
Vehicle steering constraints include, but are not limited to: the vehicle turns on the lane according to the corresponding guide arrow. For example, a vehicle travels on a left-turn lane and can only turn left after arriving at a front intersection. Vehicle lane change constraints include, but are not limited to: when the vehicle needs to change lanes, whether to change lanes needs to be determined according to the surrounding vehicle conditions and whether the lane boundary is a broken line. For example, before a front vehicle changes lanes to the left, it is necessary to confirm that the lane boundary on the left side of the vehicle is a dotted line, and after the relative safety distance between the front vehicle and a rear vehicle meets the lane change requirement, the front vehicle can change lanes to the left; or, when the vehicle is located on the leftmost lane in the lanes, the vehicle cannot change lanes to the left any more; alternatively, when the vehicle is located on the rightmost one of the lanes, the vehicle can no longer change lanes to the right.
The vehicle cut-in constraint includes, but is not limited to: when a vehicle needs to cut into a traffic flow, it determines whether to cut in according to the surrounding vehicle conditions. For example, before cutting into the traffic flow to the left, the vehicle can do so only after confirming that the relative safety distance to the vehicle behind meets the cut-in requirement. The vehicle cut-out constraint includes, but is not limited to: when a vehicle needs to cut out of a traffic flow, it determines whether to cut out according to the surrounding vehicle conditions. For example, before cutting out of the traffic flow to the right, the vehicle can do so only after confirming that the relative safety distance to the vehicle behind meets the cut-out requirement.
As shown in fig. 3, after the dynamic layer is constructed, the class, the object attribute of the class, the data attribute of the class, and the design rule of the class corresponding to the dynamic layer need to be defined according to the parameters in the dynamic layer. Thus, in one possible design scenario, in combination with fig. 3, in the above S202, building a dynamic ontology model based on a static ontology model may include the following steps:
step 21, defining the dynamic ontology class.
The dynamic ontology class is used to describe one or more of the following items corresponding to the dynamic ontology: a vehicle, a pedestrian, or an animal. In this step 21, vehicles need to be classified into motor vehicles and non-motor vehicles. Further, motor vehicles may include cars, buses, trucks, motorcycles, and the like; non-motor vehicles may include bicycles, human-powered tricycles, and the like.
Step 22, defining dynamic object properties.
The dynamic object attributes are used to describe the mapping relationships among dynamic ontology classes and the constraint relationships between the dynamic ontology and the static ontology. The mapping relationships among dynamic ontology classes may include the following relationship between vehicles, including but not limited to: the relative distance between vehicles. The constraint relationship between the dynamic ontology and the static ontology may include the vehicle-lane relationship, including but not limited to: whether the vehicle is located on a lane. The definition domain and value range of the dynamic object attributes are defined in conjunction with expert experience, and the dynamic object attributes are illustrated by table 2 below:
TABLE 2
Dynamic object attribute | Description | Definition domain | Value range
Located on / has vehicle | Relationship between vehicle and lane | Vehicle | Lane
Following / followed | Following relationship between vehicles | Vehicle | Vehicle
Lane-change motive | Lane-change motive of a vehicle | Vehicle | Lane
Lane-change scenario | Lane-change scenario of a vehicle | Scene | Lane
Cut-in scenario | Cut-in scenario of a vehicle | Scene | Lane
As shown in table 2, the dynamic object attribute "located on / has vehicle" means that the positional relationship between vehicle and lane may include: the vehicle is located on the lane, or the lane has a vehicle on it. The corresponding definition domain is the vehicle, and the corresponding value range is the traffic lane. The dynamic object attribute "following / followed" means that the following relationship between vehicles may include: the rear vehicle satisfies the following condition with respect to the front vehicle, or the rear vehicle is already following the front vehicle. The following condition is that the front vehicle and the rear vehicle are located on the same lane and the relative distance between them is less than or equal to a set value, where the set value can be chosen according to expert experience; for example, if the relative distance between a front vehicle and a rear vehicle on the same lane is less than 50 meters, the two vehicles can be considered to have a following relationship. The definition domain corresponding to "following / followed" is the vehicle, and the corresponding value range is the vehicle. Similarly, for the dynamic object attribute "lane-change motive", the corresponding definition domain is the vehicle, and the corresponding value range is the lane. The dynamic object attribute "lane-change scenario" refers to a lane-change scenario of the vehicle; its definition domain is the scene, and its value range is the lane. The dynamic object attribute "cut-in scenario" refers to a cut-in scenario of the vehicle; its definition domain is the scene, and its value range is the lane.
It should be appreciated that Table 2 is only a few examples of the dynamic object properties described above, and that the dynamic object properties may also include other relationships. For example, the relationship of the vehicle to the lane may also include the vehicle turning on the lane according to the corresponding steering arrow. For example, a vehicle travels on a left-turn lane and can only turn left after reaching a front intersection.
In addition, dynamic object attributes may also apply to relationships between the dynamic ontology and other static objects. For example, the relationship between vehicle and road may include the speed-limit requirements of different road segments: the lowest speed limit of the highway is 60 kilometers per hour and the highest is 120 kilometers per hour, and a vehicle traveling on the highway needs to obey the highway's speed-limit requirements. For another example, the positional relationship between the vehicle and the traffic sign may further include that the vehicle needs to travel in the direction indicated by the traffic sign, and so on, which is not described again here.
Step 23, defining dynamic data attributes.
And the dynamic data attribute is used for describing a parameter set of the dynamic ontology class. The static and dynamic data attributes are illustrated below by table 3:
TABLE 3
Class | Data attribute | Reference value range
Lane | Number of lanes | e.g., 1, 2, 3, …
Lane | Lane width | 2.75:0.05:3.75 (m)
Dividing strip | Width | 1:0.1:3 (m)
Dividing strip | Type | Hard, flexible, green
Vehicle | Lane coordinate | ≤ number of lanes
Vehicle | Longitudinal coordinate | ≤ lane length
Vehicle | Speed | 0:10:120 (km/h)
Vehicle | Lane-change motive | Yes / no
Traffic sign | Type | Speed-limit sign, indication sign
Traffic sign | Lateral position | ≥ road width
Traffic sign | Longitudinal position | ≤ lane length
As shown in table 3, when the class is a lane, the data attributes of the lane may include the number of lanes and the lane width. The number of lanes represents the number of lanes on one road; for example, when the number of lanes takes the value 1, there is only one lane on the road. The lane width value range "2.75:0.05:3.75" in table 3 means: the lane width ranges from 2.75 meters to 3.75 meters with a step of 0.05 meters, that is, the lane width can be 2.75 meters, 2.80 meters, …, 3.70 meters, or 3.75 meters.
In table 3, when the class is a dividing strip, the data attributes of the dividing strip may include width and type. The width value range "1:0.1:3" means: the width of the dividing strip ranges from 1 meter to 3 meters with a step of 0.1 meters, that is, the width can be 1 meter, 1.1 meters, …, 2.9 meters, or 3.0 meters. The types of dividing strip may include hard, flexible, and green dividing strips.
In table 3, when the class is a vehicle, the data attributes of the vehicle include the lane coordinate, the longitudinal coordinate, the speed, and the lane-change motive of the vehicle. The lane coordinate of the vehicle is the lane on which the vehicle is located, so the reference value range of the lane coordinate is less than or equal to the number of lanes. For example, if a road has 2 lanes, then from left to right the first lane has coordinate 1 and the second lane has coordinate 2; when the lane coordinate of a vehicle on the road is 1, the vehicle is on the first lane, and when it is 2, the vehicle is on the second lane. The longitudinal coordinate of the vehicle corresponds to the length of the lane, so its reference value range is less than or equal to the lane length. The speed value range "0:10:120 km/h" means: the speed of the vehicle ranges from 0 to 120 km/h with a step of 10 km/h, that is, the vehicle speed can be 0 km/h, 10 km/h, …, 110 km/h, or 120 km/h. When the vehicle has a lane-change motive, the lane-change motive takes the value yes; when the vehicle has no lane-change motive, it takes the value no.
In table 3, when the class is a traffic sign, the data attributes of the traffic sign may include the type, the lateral position, and the longitudinal position. The type of the traffic sign may be a speed-limit sign or an indication sign. Traffic signs are constrained by the static design rules and cannot be placed in the middle of a road; when a traffic sign is placed on either side of the road, its lateral position needs to be greater than or equal to the road width. The longitudinal position of the traffic sign corresponds to the length of the lane, so its reference value range is less than or equal to the lane length.
It should be noted that the above table 3 is only a few examples of the above static data attributes and dynamic data attributes. In practical applications, other examples of the static data attribute and the dynamic data attribute may be used, which are not listed here.
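The tables express parameter ranges in a "min:step:max" notation (e.g., a lane width of "2.75:0.05:3.75" meters). A small helper, assumed here purely for illustration, expands such a string into the concrete values a scene generator would traverse:

```python
# Sketch: expand a "min:step:max" range string into concrete values.
# Integer scaling avoids floating-point drift when stepping by 0.05.

def expand_range(spec: str, scale: int = 100):
    lo, step, hi = (float(x) for x in spec.split(":"))
    lo_i, step_i, hi_i = round(lo * scale), round(step * scale), round(hi * scale)
    return [i / scale for i in range(lo_i, hi_i + 1, step_i)]

lane_widths = expand_range("2.75:0.05:3.75")     # lane width in meters
speeds      = expand_range("0:10:120", scale=1)  # vehicle speed in km/h

print(lane_widths[0], lane_widths[-1], len(lane_widths))  # 2.75 3.75 21
print(len(speeds))                                        # 13
```

These expanded lists are exactly the per-attribute value sets that the traversal in S201/S202 iterates over.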
Step 24, defining dynamic design rules.
The dynamic design rules are used to describe the mapping relationships among dynamic ontology classes, dynamic object attributes, and dynamic data attributes, and are used to determine whether a generated test scene is reasonable.
After the dynamic ontology classes, dynamic object attributes, dynamic data attributes, and dynamic design rules are defined, a dynamic scene can be generated by traversing the dynamic data attributes. During the traversal, however, the process is constrained by the dynamic object attributes and the dynamic design rules. In this way, generation of unreasonable dynamic scenes can be avoided.
The dynamic object attributes and the dynamic design rules are defined according to the dynamic behavior constraints related to the driving behavior specification; that is, the dynamic behavior constraints of the dynamic ontology need to be designed according to the driving behavior specification. For example, if the driving behavior specification stipulates that motor vehicles cannot travel on non-motor lanes, the corresponding dynamic behavior constraint needs to include: motor vehicles are allowed to travel on motor lanes and prohibited from traveling on non-motor lanes. For another example, if the driving behavior specification stipulates that a vehicle cannot exceed the speed limit, the corresponding dynamic behavior constraint needs to include: the vehicle needs to travel at a speed no greater than that indicated by the speed-limit sign. If the speed-limit sign indicates a maximum speed of 60 km/h, the speed of the vehicle cannot exceed 60 km/h.
It should be noted that the dynamic object attributes and the dynamic design rules may be defined according to the scene design requirements, in addition to the dynamic behavior constraints associated with the driving behavior specifications. Wherein the scenario design requirements include special scenarios that do not comply with the behavioral constraints. For example, special scenarios that do not comply with behavioral constraints may include: there are situations where animals move in the vehicle's direction of travel, someone enters the motorway, etc. When a special scene which does not accord with the behavior constraint exists, the running operation of the vehicle can be controlled through the random disturbance model so as to match various conditions possibly met in the real traffic scene. For example, when there is animal activity in the driving direction of the vehicle, the random disturbance model may control the vehicle to brake to wait for the animal to leave the traffic lane.
Because the dynamic object attributes and the dynamic design rules are defined through the behavior constraints related to the driving behavior specification, when the driving behavior specification is updated, only the dynamic object attributes and dynamic design rules need to be redefined according to the new behavior constraints, making the constructed dynamic ontology model convenient to modify.
The following illustrates the dynamic design rules:
1) the vehicle needs to be located on the lane while traveling. Wherein the non-motorized vehicle is only allowed to travel on the non-motorized lane; motor vehicles are only allowed to travel on the motor vehicle lane.
2) Different vehicles are located in different positions. That is, different vehicles cannot be located at the same location.
3) For vehicles traveling in the same direction, a safety distance, such as a critical safety distance in the sense of responsibility-sensitive safety (RSS), needs to be satisfied between the front vehicle and the rear vehicle. The critical safety distance is the limit distance between the rear vehicle and the front vehicle when the front vehicle brakes suddenly: at this limit distance, the rear vehicle can brake to a stop without hitting the front vehicle, whereas if the actual distance between the two vehicles is smaller than this limit distance, the rear vehicle may collide with the front vehicle even if it brakes. The critical safety distance can be obtained by the following formula:

S_d = (v_2 * t_2 + d_2) - (v_1 * t_1 + d_1)

where S_d is the critical safety distance; d_1 = v_1^2 / (2 * a_1) is the braking distance of the front vehicle, and d_2 = v_2^2 / (2 * a_2) is the braking distance of the rear vehicle; t_1 is the reaction time of the front vehicle, and t_2 is the reaction time of the rear vehicle; v_1 is the front vehicle speed, v_2 is the rear vehicle speed, and a_1 and a_2 are the braking decelerations of the front and rear vehicles.
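For illustration, a critical safety distance of this kind can be computed as the difference between the rear vehicle's stopping distance (reaction travel plus braking distance v^2/(2a)) and the front vehicle's. This is a simplified sketch under that assumption, with invented parameter values; setting the front vehicle's reaction time to zero models its sudden braking.

```python
# Sketch: critical safety distance as a stopping-distance difference.
# All parameter values below are invented for the example.

def braking_distance(v_mps: float, decel: float) -> float:
    # distance covered while braking from speed v at constant deceleration
    return v_mps ** 2 / (2 * decel)

def critical_safety_distance(v1, v2, t1, t2, a1, a2):
    """v1/v2: front/rear speed (m/s); t1/t2: reaction times (s);
    a1/a2: braking decelerations (m/s^2)."""
    front_stop = v1 * t1 + braking_distance(v1, a1)
    rear_stop  = v2 * t2 + braking_distance(v2, a2)
    return max(0.0, rear_stop - front_stop)

# Rear car at 30 m/s, front at 20 m/s, 1 s rear reaction, 6 m/s^2 braking;
# front reaction time 0 because the front vehicle brakes without warning.
s_d = critical_safety_distance(v1=20, v2=30, t1=0.0, t2=1.0, a1=6.0, a2=6.0)
print(round(s_d, 2))  # 71.67 meters
```

A dynamic design rule would then reject any generated scene in which the actual gap between two same-direction vehicles is below this value.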
4) A restriction exists when the vehicle is in the leftmost or rightmost lane: a vehicle in the leftmost lane cannot change lanes to the left, and a vehicle in the rightmost lane cannot change lanes to the right.
5) When changing lanes, the vehicle first confirms the surrounding vehicle conditions and then determines whether to change lanes. For example, when there is no vehicle nearby, the lane-change condition is satisfied and the vehicle can change lanes; conversely, when a front vehicle intends to change lanes to the left and the vehicle behind it on the left lane is very close, the lane-change condition is not met and the vehicle cannot change lanes.
6) Whether a following relationship exists between two vehicles requires determining whether the following condition is satisfied. The following condition may include: the two vehicles are in the same lane, and the relative distance between them is within a manually set value. For example, if two vehicles are in the same lane and the relative distance between them is less than 50 meters, the two vehicles are determined to have a following relationship.
7) The scene where the vehicle is located is determined to be a cut-in scene when the following conditions are met: the vehicle tends to change lanes, and the surrounding vehicle conditions satisfy the lane-change condition, that is, the vehicle can safely cut into the target lane.
It should be noted that the dynamic design rule includes, but is not limited to, the above examples.
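As an illustration, rules 3) and 6) above can be written as simple predicates. The safe-gap formula below is a hypothetical reconstruction (the patent's own expression is rendered as an image): the rear vehicle covers distance during its reaction time, then both vehicles brake to a stop. All parameter names and the 2-meter standstill margin are assumptions; the 50-meter following threshold comes from the example in rule 6).

```python
def min_safe_gap(v1, v2, t2, b1, b2, s0=2.0):
    """Hypothetical safe gap between a rear vehicle (speed v2) and a front
    vehicle (speed v1): the rear vehicle covers v2*t2 during its reaction
    time t2, then both vehicles brake at decelerations b1 (front) and
    b2 (rear); s0 is an assumed standstill margin in meters."""
    d1 = v1 ** 2 / (2 * b1)   # braking distance of the front vehicle
    d2 = v2 ** 2 / (2 * b2)   # braking distance of the rear vehicle
    return max(s0, v2 * t2 + d2 - d1 + s0)

def has_following_relationship(lane_a, lane_b, pos_a, pos_b, threshold=50.0):
    """Rule 6): two vehicles follow each other when they share a lane and
    their relative distance is below a manually set threshold (50 m here)."""
    return lane_a == lane_b and abs(pos_a - pos_b) < threshold
```

With equal speeds and decelerations the gap reduces to the reaction-time distance plus the margin; when the front vehicle is much faster, only the standstill margin remains.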
Fig. 4 is a schematic diagram of a test scenario provided in an embodiment of the present application. In fig. 4, the dynamic ontology comprises a vehicle, and the static ontology comprises: road, traffic lane, divider, lane, roadside divider, and center divider.
FIG. 4 shows the relationships between the different ontologies in the test scenario. As shown in fig. 4, the arrow between the road and the traffic lanes indicates that the road includes the traffic lanes. The arrow between the road and the outside of the road indicates that a divider is provided outside the road. The arrow between the road and the center divider indicates that a divider is provided at the center of the road. The arrows between the traffic lanes indicate that a plurality of traffic lanes are provided on the road. The arrow between the vehicle and the lane indicates that the vehicle needs to travel on the lane. The arrow between the vehicle and the center divider indicates that the vehicle cannot cross the center divider while traveling. The arrow between the divider and the lanes indicates that the divider separates the different lanes.
By defining the attributes of the dynamic objects and the dynamic design rules, the behavior of the dynamic ontology is constrained, so that dynamic ontologies with mapping relationships are associated with one another, and dynamic ontologies are associated with static ontologies. Therefore, when the dynamic ontology model generates a dynamic scene, the associations between dynamic ontologies and between dynamic and static ontologies are taken into account, so that reasonable dynamic scenes with complete coverage are generated.
The static and dynamic scenarios are illustrated below by table 4:
TABLE 4
Road topology (static scene): functional scene - a three-lane road with a curve; logical scene - lane width 3 to 3.5 meters, curve radius 0.6 to 0.9 kilometers; specific scene - lane width 3.2 meters, curve radius 0.7 kilometers.
Road infrastructure (static scene): functional scene - a 100 km/h speed-limit sign; logical scene - sign position 0 to 200 meters; specific scene - sign position 150 meters.
Traffic control (static scene): blank (no control measures).
Environment (static scene): functional scene - summer, rainy; logical scene - temperature 10 to 40 degrees centigrade, rainfall 20 to 100 millimeters; specific scene - temperature 20 degrees centigrade, rainfall 30 millimeters.
Traffic participants and behaviors (dynamic scene): functional scene - the host vehicle travels in the center lane with a slowly moving congested section ahead; logical scene - congestion length 10 to 200 meters, congestion speed 0 to 30 km/h, distance to congestion 50 to 300 meters, host vehicle speed 80 to 130 km/h; specific scene - 40 meters, 30 km/h, 200 meters, 100 km/h.
As shown in table 4, a scene constructed from parameters such as road topology, road infrastructure, traffic control, and environment is a static scene, and a scene constructed from parameters such as traffic participants and behaviors is a dynamic scene. A functional scene is a scene expressed in natural language that can be understood by humans; a logical scene is a scene described in a way a computer can understand; a specific scene is a scene generated after each parameter of the logical scene takes a specific value.
As shown in table 4, in the functional scenario corresponding to the parameter such as road topology, the three-lane road with curvature indicates that the number of lanes of the road is three, and the road has a curve. In a logical scene corresponding to a parameter such as road topology, the lane width value range is 3 to 3.5 meters, and the curvature radius value range of the curve is 0.6 to 0.9 kilometer. In a specific scene corresponding to a parameter such as road topology, the value of the lane width is 3.2 meters, and the value of the curvature radius of the curve is 0.7 kilometer.
As shown in table 4, in the functional scenario corresponding to the road infrastructure, the speed limit 100 km/h flag indicates that there is a traffic sign on the road, and the traffic sign indicates that the maximum vehicle speed cannot exceed 100 km/h when the vehicle travels on the front road section. In a logic scene corresponding to the road infrastructure, the value range of the position of the traffic sign is 0-200 m, which means that the traffic sign is longitudinally arranged in the range of 200 m according to the traffic lane direction. In a specific scene corresponding to the traffic sign, the value of the position of the traffic sign is 150 meters, which means that the traffic sign is longitudinally arranged at the position of 150 meters according to the traffic lane direction.
As shown in table 4, the functional scenarios corresponding to the traffic participants and the behaviors include: when the host vehicle runs on the center lane, a congestion situation exists in the front road section, and the vehicle in the congestion situation moves slowly. In a logic scene corresponding to traffic participants and behaviors, the length value range of the front congestion road section is 10-200 meters; the moving speed of the vehicle in the congestion condition ranges from 0 to 30 km/h; the distance between the vehicle and the front congestion road section ranges from 50 meters to 300 meters; the vehicle speed of the vehicle ranges from 80 km/h to 130 km/h. In a specific scene corresponding to traffic participants and behaviors, the length of a front congestion road section is 40 meters; the moving speed of the vehicle in the congestion condition takes 30 kilometers per hour; the distance between the vehicle and the front congestion road section is 200 meters; the speed of the vehicle is 100 km/h.
As shown in table 4, the functional scenario corresponding to the environment includes: the season is summer and it is rainy. In the logical scene corresponding to the environment, the temperature ranges from 10 to 40 degrees centigrade, and the rainfall ranges from 20 to 100 millimeters. In the specific scene corresponding to the environment, the temperature is 20 degrees centigrade and the rainfall is 30 millimeters.
In table 4, each scene corresponding to traffic control is blank, indicating that no control measure is provided in the scene. In addition, the above table 4 is only a few examples of the above static scenes and dynamic scenes. In practical applications, other examples of static scenes and dynamic scenes may be used, which are not listed here.
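To illustrate the functional-logical-specific progression of Table 4, a specific scene can be derived by fixing each parameter of the logical scene to one value from its range. This is a minimal sketch; the parameter names are invented labels that echo the examples above.

```python
import random

# Logical scene: each parameter is a value range (units noted in the key).
logical_scenario = {
    "lane_width_m": (3.0, 3.5),        # road topology
    "curve_radius_km": (0.6, 0.9),     # road topology
    "sign_position_m": (0.0, 200.0),   # road infrastructure
    "ego_speed_kmh": (80.0, 130.0),    # traffic participants and behaviors
}

def instantiate(logical, rng=random):
    """Derive a specific scene by sampling one value from each logical range."""
    return {key: rng.uniform(lo, hi) for key, (lo, hi) in logical.items()}

concrete = instantiate(logical_scenario)
```

Sampling repeatedly yields many specific scenes from one logical scene, which is how a logical scene covers a family of test cases.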
And S203, determining the running track of the dynamic body according to the design constraint of the static body and the behavior constraint of the dynamic body by combining traffic flow simulation.
The traffic flow simulation refers to a traffic flow constructed in a simulation scene, or a simulation of the traffic flow in a real scene, and is used for constructing a dynamic simulation scene and testing the related functions and performance of the unmanned vehicle. As shown in fig. 3, after the dynamic ontology model is constructed, the static ontology and the dynamic ontology need to be combined through traffic flow simulation, and the driving trajectory of the dynamic ontology is then estimated. The term unmanned vehicle is used interchangeably with driverless vehicle, intelligent driving vehicle, and the like.
In a possible design scenario, in combination with fig. 3, in step S203, determining the driving trajectory of the dynamic ontology according to the design constraint of the static ontology and the behavior constraint of the dynamic ontology in combination with the traffic flow simulation may include the following steps:
and step 31, determining the initial state of the dynamic body according to the design constraint of the static body, the behavior constraint of the dynamic body and the scene design requirement.
Specifically, the determining the initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint of the dynamic ontology and the scene design requirement may include the following steps: and determining the initial state of the dynamic body according to the design constraint of the static body, the behavior constraint and the scene design requirement of the dynamic body, the test target and the behavior model of the dynamic body. The design constraints of the static ontology, the behavior constraints of the dynamic ontology and the scene design requirements are mentioned above, and the test targets and the behavior models of the dynamic ontology are described in detail below and are not described herein again. In addition, for an example of the value of the initial state of the dynamic body, reference may be made to the values of the traffic participants and the behaviors in table 4, which is not described herein again.
Fig. 5 is a third flowchart of a virtual test scenario construction method provided in the embodiment of the present application. As shown in fig. 3 and 5, the driving trajectory of the dynamic ontology needs to be determined by the initial state and the time-series trajectory of the dynamic ontology. The time sequence track refers to a running track of the dynamic body in a future period of time.
As shown in fig. 5, an initial state module may be disposed in the dynamic ontology, and the initial state module is used to generate an initial state of the dynamic ontology. The initial state module can be provided with a scene initialization state generation rule and a scene initialization case. The scene initialization state generation rule defines the initial state of the dynamic ontology.
The initial state may include behavior information of the dynamic ontology in addition to basic information such as a position, a speed, and a heading angle of the dynamic ontology. The behavior information includes parameters of actions performed by the dynamic ontology for a period of time in the future. For example, the action parameters may include, but are not limited to: lane changing, left turning, straight going, right turning and turning around. The behavior information may also include driving style parameters of the driver. For example, driving style parameters may include, but are not limited to: comfort deceleration, desired vehicle speed, maximum acceleration, reaction time, desired parking distance.
Wherein the behavior model of the dynamic ontology comprises one or more of the following items: the system comprises a following model, a lane changing model, a signal lamp reaction model, a random interference model, a public traffic and pedestrian behavior model, a clearance receiving model, an importing model, a cooperation model or a collision avoiding model. It should be noted that the behavior model of the dynamic ontology will be described in step 32 in conjunction with the traffic flow simulation.
And step 32, determining the running track of the dynamic body according to the initial state of the dynamic body by combining traffic flow simulation.
Illustratively, fig. 6 is a flow chart of a traffic flow simulation provided in an embodiment of the present application. As shown in fig. 6, the traffic flow simulation may include the steps of:
Step 321, initializing the road network according to models of the road network structure, path generation, and the like, and defining parameters such as the traffic flow and the origin and destination (OD) flows on the road network.
Step 322, updating the road network state.
The updating of the road network state refers to acquiring road network information around the vehicle and updating the state of traffic participants around the vehicle.
At step 323, the vehicle may be initialized according to the vehicle arrival model.
The vehicle arrival model means that when a traffic participant enters the road network, its vehicle state is initialized by the initialization module according to information such as the traffic flow and the origin/destination flows on the road network.
In step 324, vehicles are selected in order from downstream to upstream by road.
Selecting a vehicle means sequentially calculating the vehicle states in the road network and updating the calculated vehicle states.
Step 325, update the vehicle according to one or more of the following calculations: the system comprises a lane selection model, a following model, a lane changing model, a signal lamp reaction model, a random interference model, a public traffic and pedestrian behavior model, a gap acceptance model, an influx model, a cooperation model or a collision avoidance model.
At step 326, it is determined whether the update vehicle has reached the end of the lane.
The determination method may be to compare whether the position of the updated vehicle coincides with the position of the end of the lane. If it does, the updated vehicle has reached the end of the lane, and it is then determined whether the vehicle has reached its trip destination. For example, the vehicle position may be acquired in real time, and whether the vehicle has reached the set destination may be determined based on that position.
If the updated vehicle has reached its trip destination, it is deleted from the traffic flow. If it has not reached its trip destination, or its position does not coincide with the end of the lane, the vehicle continues to be updated.
Step 327, determine whether the vehicle is the last vehicle.
For example, the last vehicle whose state needs to be updated may be determined by obtaining the list of vehicles upstream and downstream in the road network and taking the last vehicle in the list. If the updated vehicle is not the last vehicle, return to step 324. If the updated vehicle is the last vehicle, the dynamic scene is output in an animated form through a visualization technology.
Step 328, determine if the simulation termination time has been reached.
If the simulation termination time has been reached, the simulation ends. If it has not been reached, the next simulation step is executed, returning to the step of updating the road network state.
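The loop of steps 321 to 328 can be sketched as follows. All class and method names here are illustrative placeholders, not from the patent.

```python
def run_traffic_simulation(network, t_end, dt=0.1):
    """Skeleton of the simulation loop in steps 321-328; `network` is any
    object exposing the (hypothetical) methods called below."""
    network.initialize()                       # step 321: road network, OD flows
    t = 0.0
    while t < t_end:                           # step 328: termination check
        network.update_state()                 # step 322: road network state
        network.spawn_arrivals()               # step 323: vehicle arrival model
        for veh in network.vehicles_downstream_to_upstream():  # step 324
            veh.update_behavior()              # step 325: following, lane change, ...
            # steps 326-327: remove vehicles that finished their trip
            if veh.at_lane_end() and veh.at_trip_destination():
                network.remove(veh)
        t += dt                                # advance one simulation step
    return network
```

Driving the loop with a stub network object is enough to test the control flow before plugging in real behavior models.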
The behavior model of the dynamic ontology will be described in detail with reference to fig. 7-12.
The following model can comprise an on-road following model, a lane changing following model, a converging following model and a diverging following model.
The same-lane following model is suitable for a scene that a following vehicle (namely, a controlled vehicle related to the following formula) does not perform lane change, and is used for simulating the behavior of the following vehicle and a preceding vehicle in the scene. In a scene that the following vehicle does not change lanes, the same-lane following model can control the behavior of the following vehicle according to the motion state information of the preceding vehicle. The motion state information may include information such as the position and speed of the leading vehicle. Alternatively, when the traffic state is changed from a congestion state to a non-congestion state and the following vehicle follows the preceding vehicle, the same-lane following model can adopt an intelligent driver model. The formula for the intelligent driver model is as follows:
dv/dt = a · [1 − (v/v0)^δ − (s*(v, Δv)/s)²]

s*(v, Δv) = s0 + s1 · sqrt(v/v0) + v·T + v·Δv / (2·sqrt(a·b))

In the above two formulas, dv/dt is the acceleration of the controlled vehicle, v is the speed of the controlled vehicle, v0 is the desired speed of the controlled vehicle, and Δv is the difference between the speed of the controlled vehicle and the speed of the preceding vehicle; s* is the desired distance between the controlled vehicle and the preceding vehicle, s0 is the standstill safety distance, s is the actual distance between the controlled vehicle and the preceding vehicle, and s1 is a speed-dependent safety distance parameter of the controlled vehicle; a is the maximum acceleration, b is the comfortable deceleration, δ is the acceleration exponent, and T is the reaction time.
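The intelligent driver model can be sketched directly from its two formulas. The default parameter values below are common illustrative choices, not taken from the patent.

```python
import math

def idm_acceleration(v, v0, dv, s, s0=2.0, s1=0.0, a=1.5, b=2.0, T=1.5, delta=4):
    """Intelligent Driver Model acceleration of the controlled vehicle.

    v: current speed; v0: desired speed; dv: speed difference to the leader;
    s: actual gap to the leader. Defaults (a, b, T, delta, s0, s1) are
    illustrative, not values from the patent."""
    # Desired gap: standstill distance + speed-dependent terms + braking term.
    s_star = s0 + s1 * math.sqrt(v / v0) + v * T + v * dv / (2 * math.sqrt(a * b))
    return a * (1 - (v / v0) ** delta - (s_star / s) ** 2)
```

On an empty road (huge gap, low speed) the model accelerates at nearly the maximum rate; with a short gap it returns a strong deceleration.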
Lane changing and following model: for a rear vehicle that has a mandatory lane change motivation but cannot complete the lane change because the gap is insufficient, the lane change following model controls the rear vehicle to stop at the end of the path and wait, and then to execute the lane change once a suitable gap appears. The lane change following model calculates both the acceleration of the rear vehicle with respect to the preceding vehicle and the acceleration (i.e., deceleration) of the rear vehicle braking toward the stopping point, and selects the more conservative of the two (the smaller acceleration, i.e., the stronger deceleration) as the final following acceleration.
Illustratively, fig. 7 is a schematic express way diagram provided in an embodiment of the present application. The expressway shown in fig. 7 includes a ramp, a first lane, a second lane, and a third lane. If a vehicle needs to change lanes into a ramp, there will be a minimum lane change distance between the vehicle and the ramp entrance in the direction of the lane. If the distance between the vehicle and the ramp port is smaller than the minimum lane changing distance, the vehicle enters the ramp after changing the lane, and the driving safety cannot be ensured.
Each lane corresponds to a minimum lane change distance. For example, in fig. 7, if the vehicle is located on the third lane, the minimum lane change distance between the vehicle and the ramp intersection when the vehicle needs to change lanes may be 80 meters. That is, if the vehicle is located on the third lane, the vehicle completes the lane change before the distance between the vehicle and the ramp port is more than 80 meters, so that the driving safety can be ensured. If the vehicle is located on the second lane, the minimum lane change distance between the vehicle and the ramp intersection when the vehicle needs to change lanes is 40 meters. That is, if the vehicle is located on the second lane, the vehicle completes the lane change before the distance between the vehicle and the ramp port is more than 40 meters, so that the driving safety can be ensured. If the vehicle is on the first lane, the lane change can be made to the ramp as long as the vehicle does not miss the ramp. It should be understood that the minimum lane change distance may be adjusted according to actual requirements.
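A minimal check of the per-lane minimum lane-change distances in this example (80 meters for the third lane, 40 meters for the second lane); the lane labels and function name are assumptions for illustration.

```python
# Minimum lane-change distance to the ramp entrance, per lane, in meters
# (illustrative values taken from the example above).
MIN_LANE_CHANGE_DISTANCE = {"second": 40.0, "third": 80.0}

def can_change_toward_ramp(lane, distance_to_ramp):
    """A lane change toward the ramp is safe only while the remaining distance
    exceeds the lane's minimum lane-change distance; a vehicle in the first
    lane only needs to not have passed the ramp entrance."""
    if lane == "first":
        return distance_to_ramp > 0.0
    return distance_to_ramp > MIN_LANE_CHANGE_DISTANCE[lane]
```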
It should be noted that when the vehicle has a motivation to enter the ramp, the first lane adjacent to the ramp junction is the desired lane, i.e., the vehicle needs to change lanes into the first lane. When vehicles are queuing to merge into the desired lane, vehicles in the second and third lanes also need to wait at their stopping points for a suitable gap before changing lanes. In this case, if the stopping points of the vehicles in the second and third lanes were located at the same place, the vehicles might fail to merge into the desired lane and cause congestion. Therefore, the stopping point of a vehicle in a lane farther from the desired lane should be located farther upstream. That is, the stopping point of a vehicle in the third lane is farther from the ramp than that of a vehicle in the second lane.
Exemplarily, fig. 8 is a schematic diagram of connections between different road segments provided in an embodiment of the present application. As shown in fig. 8, the link a includes a lane 0, a lane 1, and a lane 2, and the vehicle 1 travels on the lane 0 and the vehicle 2 travels on the lane 1. The road section B includes a lane 3, a lane 4, a lane 5, and a lane 6, the vehicle 3 travels on the lane 3, and the vehicle 4 travels on the lane 5. Route 10 is a route connecting lane 0 and lane 3, route 11 is a route connecting lane 0 and lane 4, route 12 is a route connecting lane 1 and lane 5, and route 13 is a route connecting lane 2 and lane 6. The road section D includes a lane 7, a lane 8, and a lane 9. Route 15 is a route connecting lane 3 and lane 7, route 16 is a route connecting lane 4 and lane 8, route 17 is a route connecting lane 5 and lane 9, route 14 is a route connecting lane 0 and link C, and route 18 is a route connecting lane 6 and link E.
As shown in fig. 8, on the road segment B, the expected lane of the right-turn vehicle is lane 3, the expected lane of the straight-going vehicle may include lanes 4, 5 and 6, and the expected lane of the left-turn vehicle is lane 6. On road section a, the desired lane to turn or to go straight needs to be determined by road section B downstream. For example, finding the desired lane for a right turn vehicle on road segment a requires finding the lane for a right turn on road segment B, i.e., lane 3. Then, the lane connected to lane 3 on road a, i.e., lane 0, is inferred, and lane 0 is the desired lane for the right-turn vehicle on road a. Similarly, the expected lanes of the straight-going vehicles on the road segment a are lane 0 and lane 1, and the expected lane of the left-turning vehicles on the road segment a is lane 2.
For the following model at confluence: the confluence refers to that a plurality of lanes on the upstream road section are connected with one lane on the downstream road section. Referring to fig. 9, fig. 9 is a schematic view of a car following under confluence according to an embodiment of the present application. In fig. 9, a link a includes a lane 0, a lane 1, a lane 2, and a lane 3, and a link B includes a lane 4, a lane 5, and a lane 6, and the vehicle 3 travels on the lane 4 and the vehicle 4 travels on the lane 6. Route 7 is a route connecting lane 0 and lane 4, route 8 is a route connecting lane 1 and lane 4, route 9 is a route connecting lane 2 and lane 5, and route 10 is a route connecting lane 3 and lane 6.
As shown in fig. 9, the vehicle 1 travels on route 8, and the vehicle 2 travels on route 7. In this case, if vehicle 1 and vehicle 2 both enter the merge range, it is necessary to determine which of vehicle 1 and vehicle 2 is the preceding vehicle in the following model. The merge range means that the distance between the vehicle and the end of its path is within a first preset value range. For example, if the distance between the vehicle and the end of the path is less than 30 meters, the vehicle can be considered to have entered the merge range. In the following model under merging, the preceding vehicle is determined as follows: obtain a first distance between vehicle 1 and the end of path 8 and a second distance between vehicle 2 and the end of path 7; compare the two distances, and the vehicle corresponding to the smaller distance is the preceding vehicle in the following model.
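The preceding-vehicle determination under merging reduces to picking the vehicle closest to the end of its path among those inside the merge range, for example:

```python
def merge_leader(vehicles, merge_range=30.0):
    """Among vehicles whose paths merge into one downstream lane, the vehicle
    closest to the end of its path is the leader in the following model.

    `vehicles` maps a vehicle id to its distance-to-path-end in meters;
    vehicles outside the merge range (30 m here) are ignored."""
    in_range = {vid: d for vid, d in vehicles.items() if d < merge_range}
    if not in_range:
        return None  # no vehicle has entered the merge range yet
    return min(in_range, key=in_range.get)
```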
For the following model under split: by diversion is meant that one lane on an upstream road segment has a connected relationship with multiple lanes on a downstream road segment. Referring to fig. 10, fig. 10 is a schematic following diagram under split flow according to an embodiment of the present application. In fig. 10, a link a includes lane 0, lane 1, and lane 2, and a link B includes lane 3, lane 4, lane 5, and lane 6. Route 7 is a route connecting lane 0 and lane 3, route 8 is a route connecting lane 0 and lane 4, route 9 is a route connecting lane 1 and lane 5, and route 10 is a route connecting lane 2 and lane 6.
As shown in fig. 10, if vehicle 3 enters the diverging range, it is necessary to determine which vehicle is the preceding vehicle under diverging. The diverging range means that the distance between the vehicle and the end of its path is within a second preset value range. For example, if the distance between the vehicle and the end of the path is less than 20 meters, the vehicle can be considered to have entered the diverging range. The preceding vehicle under diverging is a vehicle whose tail has not yet completely left the upstream lane that is connected with the plurality of lanes on the downstream road segment. In the following model under diverging, this preceding vehicle is taken as the following object. For example, vehicle 2 in fig. 10 is the preceding vehicle under diverging, and the following object of vehicle 3 is vehicle 2.
For the lane change model: lane change behavior is the process by which a driver changes lanes according to the driver's driving characteristics and the state information of the surrounding vehicles. Lane changes are classified into mandatory lane changing (MLC) and discretionary lane changing (DLC). Both can comprise the following four steps:
step 41, a lane change motivation is generated.
The generation conditions of the lane change motivation are listed in order from high priority to low priority.
(1) It is detected that the lane of the current vehicle is not the desired lane, and a mandatory lane change motivation is generated. Once generated, the mandatory lane change motivation does not disappear until the lane change is completed, even if no lane change gap is currently available.
(2) When the speed of the vehicle is greater than that of the preceding vehicle, the speed of the preceding vehicle on the target lane is greater than that of the preceding vehicle on the current lane, and the distance to the preceding vehicle on the target lane exceeds the distance to the preceding vehicle on the current lane by more than 2 vehicle lengths, a discretionary lane change motivation is generated. The target lane may be the desired lane, or a lane between the vehicle and the desired lane.
(3) When the speed of the vehicle is more than 20 km/h below the desired speed (or the lane speed limit), the speed of the preceding vehicle is more than 20% below the speed of the vehicle, the speed of the preceding vehicle on the target lane is more than 10 km/h above that of the preceding vehicle on the current lane, and the distance to the preceding vehicle on the target lane is more than 2 vehicle lengths, a discretionary lane change motivation is generated.
(4) If the preceding vehicle on the current lane is a large vehicle, the headway to the large vehicle is less than 2 times the desired time headway, the speed of the preceding vehicle on the target lane is higher than that of the large vehicle on the current lane, and the distance to the preceding vehicle on the target lane is more than 2 vehicle lengths, a discretionary lane change motivation is generated. Large vehicles include, but are not limited to: large goods vehicles, large buses, large trucks, and fire trucks.
It should be noted that if the vehicle is not on a desired lane, discretionary lane changes must be made toward the desired lane. If the vehicle is on a desired lane, it can only make discretionary lane changes between desired lanes. Whether the lane change is mandatory or discretionary, the preceding vehicle on the current lane needs to be in a non-lane-changing state.
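A simplified sketch of the motivation priorities in step 41, covering conditions (1) and (2) only; conditions (3) and (4), and the persistence of a mandatory motivation, are omitted, and all names are hypothetical.

```python
def lane_change_motivation(on_desired_lane, front_gap_target, front_gap_current,
                           vehicle_length, v_self, v_front, v_front_target):
    """Return the lane-change motivation, checking conditions in priority order.

    Condition (1): not on the desired lane -> mandatory motivation.
    Condition (2): faster than the leader, a faster leader on the target lane,
    and a gap advantage of more than 2 vehicle lengths -> discretionary."""
    if not on_desired_lane:
        return "mandatory"
    if (v_self > v_front and v_front_target > v_front
            and front_gap_target - front_gap_current > 2 * vehicle_length):
        return "discretionary"
    return None  # conditions (3) and (4) would be checked here
```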
Step 42, selecting a lane.
For the arbitrary lane change, if the current lane is not on the expected lane of the vehicle, the vehicle selects a direction close to the expected lane to change the lane; if the vehicle is located on the desired lane, the vehicle can only change lanes between the desired lanes. For example, if the left lane is an undesired lane, the vehicle cannot change lanes to the left. If the lanes on the two sides are expected lanes, the probability of lane changing to the left or lane changing to the right is 50 percent. If the clearance of the left lane does not meet the lane change requirement, the lane change to the right lane is considered.
For a forced lane change, the vehicle will change lanes towards the lane closer to the target lane.
Step 43, select a gap.
The vehicle that generates the lane change motivation needs to evaluate whether there is an appropriate gap in the adjacent lane that allows the host vehicle to perform the lane change safely; if there is no appropriate gap, the lane change cannot be performed. The TransModeler linear model is taken as an example:
G_i^g = exp(X_i · β_g + ε_i^g)

where G_i^g is the set minimum (critical) gap; g denotes the front (lead) gap or the rear (lag) gap; β_g is the column vector of acceptance parameters for the front and rear gaps; X_i is the row vector of variables corresponding to the calibrated β; ε_i^g is a random term with mean 0 and a variance set by the user. A gap g is acceptable when it is larger than G_i^g.
For the column vector β_g = {β_g^0, β_g^1, β_g^2, β_g^3}, the corresponding variables are as follows: β_g^0 multiplies a constant; β_g^1 multiplies max(0, ΔV_i^g); β_g^2 multiplies min(0, ΔV_i^g); β_g^3 multiplies V_g. If g = lead, ΔV_i^g = V_i^subject − V_i^lead; if g = lag, ΔV_i^g = V_i^lag − V_i^subject. V_g is V_lead or V_lag.
For mandatory lane changes, an additional term with parameter β_g^4 (corresponding to i = 4) is added. In this term, α is a distance influence coefficient, and d is the distance to the point by which the mandatory lane change must be completed, i.e., the location of the downstream decision point or incident point. If no such point exists (for example, when the motivation is merely to leave a prohibited lane), the d term is absent from the above equation.
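Assuming the usual log-linear form for the critical gap, G = exp(X·β + ε) with ε normally distributed with mean 0 and a user-set variance, gap acceptance can be sketched as follows. This is a sketch under that assumption, not the exact TransModeler implementation.

```python
import math
import random

def critical_gap(x, beta, sigma=0.1, rng=random):
    """Critical gap G = exp(X·beta + eps), eps ~ N(0, sigma^2).

    Per the text, x holds: a constant, max(0, dV), min(0, dV), and the
    lead/lag vehicle speed; beta holds the matching parameters."""
    eps = rng.gauss(0.0, sigma)
    return math.exp(sum(xi * bi for xi, bi in zip(x, beta)) + eps)

def accepts(gap, x, beta, sigma=0.1, rng=random):
    """A lead or lag gap is accepted when it exceeds the critical gap."""
    return gap > critical_gap(x, beta, sigma, rng)
```

With sigma set to 0 the model is deterministic, which is convenient for unit testing; in simulation the random term gives heterogeneous driver behavior.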
Step 44, lane change is performed.
After the vehicle generates the lane change motivation, the driving track of the vehicle when lane change is executed needs to be planned. A third order bezier curve may be used for the simulation. The third-order Bezier curve can ensure the continuity and the boundedness of the curve, and the vehicle can run according to the planned curve.
The formula of the third order bezier curve is:
B(t)=P0(1-t)3+3P1t(1-t)2+3P2t2(1-t)+P3t3
wherein t ∈ [0,1]; P0, P1, P2 and P3 are control points; B(t) is a third-order Bezier curve that starts at P0, initially heads toward P1, and arrives at P3 from the direction of P2.
Fig. 11 is a schematic diagram of a third-order Bezier curve provided in the embodiment of the present application. As shown in fig. 11, the vehicle 1 performs a lane change from lane 0 to lane 1. The initial position of the lane change is P0, and the end position of the lane change is P3. The control points P1 and P2 in the above formula of the third-order Bezier curve are selected as follows:
Step 441: calculate the end position P3 of the lane change according to the lane change distance of the vehicle and the initial position P0.
Step 442: from the initial position P0, draw ray 1 in the tangential direction; from the end position P3, draw ray 2 in the direction opposite to the tangent.
Step 443: connect the initial position P0 and the end position P3, and take trisection point 1 and trisection point 2 on the connecting line.
Step 444: drop perpendiculars from trisection point 1 to ray 1 and from trisection point 2 to ray 2. The foot of the perpendicular from trisection point 1 on ray 1 is the control point P1, and the foot of the perpendicular from trisection point 2 on ray 2 is the control point P2.
Step 445: generate the Bezier curve B(t) from the control points P1 and P2, the initial position P0, and the end position P3.
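Steps 441 to 445 can be sketched geometrically for a straight lane: project the trisection points of segment P0-P3 onto the two tangent rays to obtain P1 and P2, then evaluate the cubic Bezier curve. The coordinates and the unit-tangent convention are assumptions for illustration.

```python
def foot_on_ray(q, origin, direction):
    """Foot of the perpendicular from point q onto the ray (origin, direction);
    `direction` must be a unit vector."""
    t = (q[0] - origin[0]) * direction[0] + (q[1] - origin[1]) * direction[1]
    return (origin[0] + t * direction[0], origin[1] + t * direction[1])

def bezier_lane_change(p0, p3, tangent):
    """Control points per steps 441-445: trisection points of segment P0P3,
    projected onto ray 1 (from P0 along the lane tangent, a unit vector) and
    ray 2 (from P3 against the tangent)."""
    t1 = (p0[0] + (p3[0] - p0[0]) / 3, p0[1] + (p3[1] - p0[1]) / 3)
    t2 = (p0[0] + 2 * (p3[0] - p0[0]) / 3, p0[1] + 2 * (p3[1] - p0[1]) / 3)
    p1 = foot_on_ray(t1, p0, tangent)
    p2 = foot_on_ray(t2, p3, (-tangent[0], -tangent[1]))
    return p1, p2

def bezier_point(t, p0, p1, p2, p3):
    """Evaluate the third-order Bezier curve B(t) at parameter t in [0, 1]."""
    u = 1 - t
    x = u**3 * p0[0] + 3 * t * u**2 * p1[0] + 3 * t**2 * u * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * t * u**2 * p1[1] + 3 * t**2 * u * p2[1] + t**3 * p3[1]
    return (x, y)
```

For a 30-meter lane change into an adjacent 3.5-meter lane along the x axis, the construction places P1 and P2 on the two lane center lines, so the curve leaves P0 tangent to the old lane and arrives at P3 tangent to the new one.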
For the signal lamp reaction model: a signal lamp is placed at the end of the approach section. When the vehicle comes within a threshold distance upstream of the signal lamp, it makes a decision according to the color of the signal lamp. The threshold may be 50 meters or another value, which is not specifically limited in this application. The decision behaviors according to the signal color are described below:
(1) when the color of the signal lamp is green, the vehicle runs along with the front vehicle by adopting the same-lane following model.
(2) When the signal lamp is a red lamp, only the arriving vehicle closest to the signal lamp will react to the signal lamp, and the vehicle behind the arriving vehicle only needs to follow the front vehicle. At this time, the stop line can be assumed to be a stationary vehicle (the rear of the vehicle is level with the stop line).
(3) When the signal light is yellow, the first vehicle located downstream of the road section needs to judge whether the signal light can be passed at the current speed. If the judgment result is that the vehicle can not pass through the signal lamp, a static virtual vehicle is placed behind the yellow lamp by default, the first vehicle decelerates, and the subsequent vehicles can follow and queue; and if the judgment result is that the vehicle can pass, continuing to follow the front vehicle by the same-track following model.
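The three decision rules above can be sketched as a single dispatch function (a minimal illustration; the return labels and argument names are assumptions, not from the original):

```python
def signal_decision(color, is_first_vehicle, can_clear_on_yellow):
    """Decision behavior of a vehicle within the threshold range of a
    signal lamp, following rules (1)-(3)."""
    if color == "green":
        return "follow"                 # (1) same-lane car-following model
    if color == "red":
        # (2) only the closest vehicle reacts; it treats the stop line as
        # a stationary vehicle, while vehicles behind it simply follow.
        return "stop_at_line" if is_first_vehicle else "follow"
    if color == "yellow":
        if not is_first_vehicle:
            return "follow"             # (3) followers queue behind the leader
        # The first vehicle judges whether it can clear at its current speed;
        # if not, it decelerates behind a static virtual vehicle.
        return "follow" if can_clear_on_yellow else "decelerate"
    raise ValueError(f"unknown signal color: {color!r}")
```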
For the collision avoidance model: the collision avoidance model is used where vehicle paths intersect, such as at a junction or in a ramp merge-in or exit area. Referring to fig. 12, fig. 12 is a schematic diagram of collision avoidance of a vehicle according to an embodiment of the present application.
In fig. 12, a link a includes lane 0, lane 1, and lane 2, and a link B includes lane 3 and lane 4. The road section C includes lanes 5, 6 and 7, and the road section D includes lanes 8 and 9. Route 10 is a route connecting lane 3 and lane 8, route 11 is a route connecting lane 4 and lane 9, and route 12 is a route connecting lane 2 and lane 5. Vehicle 1 travels on route 12, vehicle 2 travels on route 10, vehicle 3 travels on route 11, and vehicle 4 travels on lane 4.
In fig. 12, the traveling vehicle 1 conflicts with the vehicles 2 and 3. The point where path 10 and path 12 intersect is conflict point 1, and the point where path 11 and path 12 intersect is conflict point 2. At this time, the method for controlling the vehicle by the collision avoidance model may include the following steps:
Step 51, determining the order in which the conflicting vehicles pass their corresponding conflict points.
And for each conflict point, calculating the time of the conflict vehicle passing through the conflict point according to the distance between the conflict vehicle and the conflict point and the current speed of each vehicle. As shown in fig. 12, assuming that the vehicle speeds of the vehicle 1, the vehicle 2, and the vehicle 3 are the same, it is necessary to calculate the time when the vehicle 1 passes through the conflict points 1 and 2, the time when the vehicle 2 passes through the conflict point 1, and the time when the vehicle 3 passes through the conflict point 2. For conflict point 1, vehicle 1 passes conflict point 1 before vehicle 2. For the conflict point 2, the vehicle 3 passes the conflict point 2 earlier than the vehicle 1.
And step 52, regarding each conflict point, taking the vehicle which passes through the conflict point firstly as a front vehicle, taking other vehicles as rear vehicles, and enabling the rear vehicles to avoid the front vehicle.
For each conflict point, the time difference between the front vehicle and the rear vehicle reaching the conflict point is calculated. If the time difference is less than or equal to a threshold value, the rear vehicle needs to make an avoidance action. That is, when the times at which the front and rear vehicles reach the corresponding conflict point are relatively close, the rear vehicle must yield so that the front vehicle passes through the conflict point first. The collision avoidance model can assume that a static front vehicle exists at the conflict point, and calculate the car-following acceleration of the rear vehicle accordingly. If the same vehicle has multiple collision avoidance demands, the minimum of the resulting car-following accelerations is taken.
As shown in fig. 12, since the vehicle 1 passes through conflict point 1 earlier than the vehicle 2, the vehicle 1 is the front vehicle and the vehicle 2 is the rear vehicle at conflict point 1. If the time difference between the vehicle 1 and the vehicle 2 passing through conflict point 1 is less than 2 s, the vehicle 2 needs to make an avoidance action. At conflict point 2, since the vehicle 3 passes through conflict point 2 earlier than the vehicle 1, the vehicle 3 is the front vehicle and the vehicle 1 is the rear vehicle. If the time difference between the vehicle 1 and the vehicle 3 passing through conflict point 2 is greater than 2 s, the vehicle 1 does not need to make an avoidance action.
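Steps 51 and 52 can be sketched for a single conflict point as follows (a minimal illustration assuming constant speeds; the function name is an assumption, and the 2 s default threshold is taken from the example above):

```python
def conflict_order(d_a, v_a, d_b, v_b, threshold=2.0):
    """For one conflict point, return (front_vehicle, rear_must_avoid).

    d_a, d_b: distances of vehicles a and b to the conflict point (m).
    v_a, v_b: current speeds of vehicles a and b (m/s).
    """
    t_a, t_b = d_a / v_a, d_b / v_b          # arrival times at the point
    front = "a" if t_a <= t_b else "b"       # step 51: pass order
    # Step 52: the rear vehicle avoids only if the arrival times are close.
    rear_must_avoid = abs(t_a - t_b) <= threshold
    return front, rear_must_avoid
```

With equal speeds of 10 m/s, a vehicle 20 m from the conflict point and one 30 m away arrive 1 s apart, so the rear vehicle must avoid; if the rear vehicle were 40 m away, it would arrive 3 s later and need not avoid.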
After the initial state of the dynamic ontology is determined, the dynamic ontology runs under the control of the corresponding behavior model in different dynamic scenes, so that its driving track can be determined. By determining the initial state and the driving track of the dynamic ontology, the complex dynamic interaction among all traffic participants can be embodied in the dynamic scene. Therefore, when the unmanned vehicle is tested in the virtual test scene, interactive test data meeting the design can be obtained, and the virtual test efficiency of the unmanned vehicle can be improved.
And S204, generating a test scene model.
The test scene model comprises a driving track, and the driving track is used for testing whether the driving behavior of the unmanned vehicle meets the design requirement.
Taking a static scene comprising 3 static ontologies and a dynamic scene comprising 1 dynamic ontology as examples:
the parameter space corresponding to 3 static ontologies in a static scene is defined as follows:
central dividing strip type: marking-line divider or physical median;
width of the central dividing strip: marking-line divider width: 0.5 m; physical median width: 1.5 m to 3 m, and the sampling interval is 0.1 m;
the number of lanes: 1 to 5;
lane width: 3.5 m to 4.5 m, and the sampling interval is 0.05 m;
roadside bank width: 1.5 m to 3 m, and the sampling interval is 0.05 m;
the parameter space of the dynamic ontology in the dynamic scene is defined as follows:
initial lane of vehicle: 1 to 3;
initial ordinate of vehicle: 0m to 50 m, and the sampling interval is 10 m;
initial speed of the vehicle: 10 m/s to 35 m/s, and the sampling interval is 5 m/s;
vehicle initial lane change intention: 0: no lane change; 1: change lanes to the left; 2: change lanes to the right.
After the parameter space is determined, all reasonable parameter combinations are traversed by computing power, finally yielding 9600 static test scenes and 6326 dynamic test scenes.
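The traversal of the parameter space can be sketched as follows (a minimal illustration using the static parameters above; the key names are assumptions, the median sub-widths are omitted for brevity, and the rule-based filtering of unreasonable combinations described in this application is also omitted, so the resulting count does not match the 9600 scenes of the full example):

```python
import itertools

import numpy as np

def grid(lo, hi, step):
    # Inclusive sampling grid, e.g. grid(3.5, 4.5, 0.05) -> 21 values.
    return np.round(np.arange(lo, hi + step / 2, step), 4).tolist()

# Static parameter space from the example above.
static_space = {
    "median_type": ["marking_line", "physical_median"],
    "lane_count": [1, 2, 3, 4, 5],
    "lane_width": grid(3.5, 4.5, 0.05),       # metres
    "roadside_width": grid(1.5, 3.0, 0.05),   # metres
}

def enumerate_scenes(space):
    # Cartesian product over all parameter values; a full system would
    # apply the static design rules here to reject unreasonable combos.
    keys = list(space)
    for combo in itertools.product(*(space[k] for k in keys)):
        yield dict(zip(keys, combo))
```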
In a possible design, the method for constructing a virtual test scenario shown in fig. 2 may further include the following steps:
Step 61, acquiring the driving behavior of the unmanned vehicle in a first test scene.
Based on the parameter space and the rules of each ontology in the constructed virtual test scene, the first test scene is a virtual test scene generated with a larger sampling interval over the parameter space. The unmanned vehicle is placed into the first test scene for testing to obtain its driving behavior in the first test scene.
Step 62, a second test scenario is determined.
The second test scene is a test scene in which the driving behavior of the unmanned vehicle in the first test scene does not meet the design requirement. That is, the search direction for key scenes is determined through a coarse-grained test, where a key scene is one in which the unmanned vehicle reacts poorly.
Step 63, performing densified sampling on the parameter interval corresponding to the second test scene to generate a third test scene.
For the key scenes, fine-grained densified sampling is performed on the corresponding parameter intervals to generate the third test scene. In this way, targeted testing can be performed, accelerating the test.
And step 64, testing the third test scene.
A first test scene can be constructed through coarse-grained parameter values, and according to the driving behavior of the unmanned vehicle in the first test scene, the parameter intervals in which the driving behavior does not meet the design requirement are densely resampled to generate the third test scene. In this way, the virtual test scenes in which the unmanned vehicle performs poorly are sampled and tested repeatedly in a directed manner. This avoids the problem that overly fine parameter values generate too many virtual test scenes and thus lower test efficiency; it improves the virtual test efficiency and the pertinence of the unmanned vehicle test, and is better suited to unmanned testing than traditional test scene design based on human driving behaviors.
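The coarse-to-fine procedure in steps 61 to 64 can be sketched as follows (a minimal illustration; the function name, the refinement factor, and the rounding are assumptions, not from the original):

```python
def refine_interval(lo, hi, coarse_step, factor=5):
    """Densify sampling inside a parameter interval flagged by the
    coarse test (steps 62-63): shrink the step by `factor` and resample."""
    fine_step = coarse_step / factor
    n = int(round((hi - lo) / fine_step)) + 1
    return [round(lo + i * fine_step, 6) for i in range(n)]
```

For example, if the unmanned vehicle performed poorly for initial speeds between 10 m/s and 15 m/s (a coarse interval sampled at 5 m/s), `refine_interval(10, 15, 5)` resamples that interval at 1 m/s to build the third test scene.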
Optionally, the method for constructing a virtual test scenario shown in fig. 2 may further include: and adding a label to the virtual test scene based on the interactive relation between the static ontology and the dynamic ontology. Wherein the label is used for managing the virtual test scenario.
The driving track of the dynamic ontology can provide a lot of information for the scene. For example, a desired test function of a scenario may be defined based on the interaction behavior between vehicles. Wherein the interaction behavior includes but is not limited to: following, changing lanes, entering and exiting. For example, a car-following label may be added to the virtual test scenario in which the interactive behavior is car-following, indicating that the virtual test scenario is a car-following scenario. In the merged following scenario shown in fig. 9, a merged following tag may be added. In the split-down follow-up scenario shown in fig. 10, a split-down follow-up tag may be added. In the collision avoidance scenario shown in fig. 12, collision avoidance tags may be added.
Furthermore, the danger degree of a virtual test scene can be defined according to the severity of the interaction behaviors, that is, whether the virtual test scene represents a dangerous working condition or a normal working condition. When the virtual test scene is a dangerous working condition, a danger label can be added to it.
By performing labeling processing on the virtual test scene, such as function classification of the virtual test scene and the risk degree of the virtual test scene, management in a virtual test scene library is facilitated, and a required virtual test scene is conveniently called for testing during testing.
For example, to test whether the driving behavior of the unmanned vehicle meets expectations in a collision avoidance scene, the tags can be queried for a collision avoidance label. When a collision avoidance tag is found in the scene library, the collision avoidance scene corresponding to that tag is called. Next, the simulated unmanned vehicle is placed in the collision avoidance scene. Then, the driving track of the unmanned vehicle in the collision avoidance scene is recorded. Finally, the driving track of the unmanned vehicle is analyzed to determine whether its driving behavior meets the driving behavior specification.
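The tag-based retrieval described above can be sketched as follows (the library layout and tag names are illustrative assumptions, not from the original):

```python
# Illustrative scene library: each scenario carries the labels added from
# the interaction behaviors between its ontologies and its danger degree.
scene_library = [
    {"id": 1, "tags": {"car_following"}},
    {"id": 2, "tags": {"merge", "car_following"}},
    {"id": 3, "tags": {"collision_avoidance", "danger"}},
]

def query_scenes(library, tag):
    # Retrieve every virtual test scenario carrying the requested label.
    return [scene for scene in library if tag in scene["tags"]]
```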
As shown in fig. 12, the unmanned vehicle may be any one of the vehicle 1, the vehicle 2, or the vehicle 3. Taking the vehicle 1 as the unmanned vehicle, the traveling unmanned vehicle conflicts with the vehicles 2 and 3. Assuming that the speeds of the unmanned vehicle, the vehicle 2, and the vehicle 3 are the same, it is necessary to calculate the times at which the unmanned vehicle passes through conflict points 1 and 2, the vehicle 2 passes through conflict point 1, and the vehicle 3 passes through conflict point 2.
For conflict point 1, if the unmanned vehicle passes conflict point 1 before the vehicle 2, the unmanned vehicle is the front vehicle and the vehicle 2 is the rear vehicle. For conflict point 2, if the vehicle 3 passes conflict point 2 before the unmanned vehicle, the vehicle 3 is the front vehicle and the unmanned vehicle is the rear vehicle. If the time difference between the unmanned vehicle and the vehicle 3 passing through conflict point 2 is less than 2 s, the unmanned vehicle needs to make an avoidance action. Whether the unmanned vehicle makes an avoidance action before conflict point 2 can be confirmed by analyzing its driving track. If the unmanned vehicle makes an avoidance action, for example stopping or decelerating, its driving behavior in the collision avoidance scene conforms to the driving behavior specification. If the unmanned vehicle does not make an avoidance action, its driving behavior in the collision avoidance scene does not conform to the driving behavior specification.
Based on the virtual test scenario construction method shown in fig. 2, the static ontology model constructed by combining expert experience and road specifications can embody the design constraints among the static ontologies, which avoids the problem that computer-traversed parameter combinations take unreasonable values and produce virtual test scenes that do not conform to road specifications. In addition, the dynamic ontology model constructed based on the static ontology model can embody the behavior constraints between the dynamic ontology and the static ontology, and the driving track of the dynamic ontology is then determined based on these behavior constraints together with traffic flow simulation. This solves the problem that a dynamic ontology in a virtual test scene extracted from natural driving data cannot dynamically interact with the unmanned vehicle, and eliminates unreasonable behaviors of the dynamic ontology. Therefore, a large number of reasonable and complete virtual test scenes capable of dynamically interacting with the unmanned vehicle can be constructed, and the test efficiency of the unmanned vehicle can be improved.
The virtual test scenario construction method provided by the embodiment of the present application is described in detail above with reference to fig. 2 to 12. The virtual test scenario constructing apparatus provided in the embodiment of the present application is described in detail below with reference to fig. 13 to 14.
Fig. 13 is a schematic structural diagram of a virtual test scenario constructing apparatus according to an embodiment of the present application. As shown in fig. 13, the virtual test scenario construction apparatus 1300 includes: a building module 1301, a determining module 1302 and a generating module 1303. For convenience of explanation, fig. 13 shows only main components of the virtual test scenario construction apparatus.
In one possible design, the virtual test scenario constructing apparatus 1300 may be adapted to the traffic scenario diagram shown in fig. 1, and execute the virtual test scenario constructing method shown in fig. 2.
The building module 1301 is used for building the static ontology model by combining road specifications and expert experience. Wherein the static ontology model is used to describe design constraints of the static ontology. The static ontology comprises one or more of: road topology, road infrastructure, traffic control, or environment.
The building module 1301 is further configured to build a dynamic ontology model based on the static ontology model. The dynamic ontology model is used for describing the behavior constraint of the dynamic ontology, and the dynamic ontology comprises one or more of the following items: a vehicle, a pedestrian, or an animal.
And the determining module 1302 is configured to determine the driving track of the dynamic ontology according to the design constraint of the static ontology and the behavior constraint of the dynamic ontology in combination with the traffic flow simulation.
And a generating module 1303, configured to generate a test scene model. The test scene model comprises a driving track, and the driving track is used for testing whether the driving behavior of the unmanned vehicle meets the design requirement.
In one possible design, the building module 1301 is further configured to define a static ontology class. The static ontology class is used for describing one or more of the following corresponding static ontologies: virtual test scenes, roads, banks, lanes, traffic signs, weather, or lighting conditions.
The building module 1301 is further configured to define static object properties. And the static object attributes are used for describing the mapping relation between the static ontology classes.
The building module 1301 is further configured to define static data attributes. The static data attribute is used for describing a parameter set of the static ontology class.
The building module 1301 is further used for defining static design rules. The static design rule is used for describing a mapping relation among the static ontology class, the static object attribute and the static data attribute, and the static design rule is used for determining the rationality of the generated virtual test scene.
Optionally, the design constraints of the static ontology include one or more of: the position relation among roads, the position relation among roads and lanes, the position relation among lanes and dividing strips, the position and type of a lane line and the constraint relation among lanes and/or dividing strips, and the position relation among traffic signs and lanes and/or dividing strips.
Further, the building module 1301 is also used for defining a dynamic ontology class. The dynamic ontology class is used for describing one or more items corresponding to the dynamic ontology: a vehicle, a pedestrian, or an animal.
The building module 1301 is further configured to define the dynamic object attribute. The dynamic object attributes are used for describing mapping relations among dynamic ontology classes and constraint relations among the dynamic ontologies and the static ontologies.
The building module 1301 is further configured to define the dynamic data attribute. And the dynamic data attribute is used for describing a parameter set of the dynamic ontology class.
The building module 1301 is further configured to define a dynamic design rule. The dynamic design rule is used for describing the mapping relation among the dynamic ontology class, the dynamic object attribute and the dynamic data attribute, and the dynamic design rule is used for determining the rationality of the generated test scene.
Optionally, the behavior constraint of the dynamic ontology includes one or more of the following items: the vehicle-to-lane position relation, the vehicle speed limit, the vehicle running direction constraint, the vehicle steering constraint, the vehicle lane change constraint, the vehicle cut-in constraint and the vehicle cut-out constraint.
Still further, the determining module 1302 is further configured to determine the initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint of the dynamic ontology, and the scene design requirement.
The determining module 1302 is further configured to determine a driving track of the dynamic ontology according to the initial state of the dynamic ontology in combination with the traffic flow simulation.
Optionally, the building module 1301 is further configured to determine an initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint and the scenario design requirement of the dynamic ontology, and the test target and the behavior model of the dynamic ontology.
Optionally, the behavior model of the dynamic ontology includes one or more of the following items: the system comprises a following model, a lane changing model, a signal lamp reaction model, a random interference model, a public traffic and pedestrian behavior model, a clearance receiving model, an importing model, a cooperation model or a collision avoiding model.
In a possible design, as shown in fig. 13, the virtual test scenario constructing apparatus 1300 may further include an obtaining module 1304 and a testing module 1305.
an obtaining module 1304 is configured to obtain a driving behavior of the unmanned vehicle in a first test scenario.
The determining module 1302 is further configured to determine a second test scenario. The second test scene is a test scene in which the driving behavior of the unmanned vehicle in the first test scene does not meet the design requirement.
The generating module 1303 is further configured to perform densified sampling on the parameter interval corresponding to the second test scenario to generate a third test scenario.
A testing module 1305, configured to perform a test for the third testing scenario.
In a possible design, as shown in fig. 13, the virtual test scenario constructing apparatus 1300 may further include an adding module 1306. An adding module 1306, configured to add a label to the virtual test scenario based on an interaction relationship between the static ontology and the dynamic ontology. Wherein the label is used for managing the virtual test scenario.
It should be noted that the building module 1301, the determining module 1302, the generating module 1303, the obtaining module 1304, the testing module 1305, and the adding module 1306 may be the same processing module. The processing module can execute the functions corresponding to the various modules.
Optionally, the virtual test scenario construction apparatus 1300 may further include a storage module (not shown in fig. 13) that stores programs or instructions. When the processing module executes the program or the instructions, the virtual test scenario construction apparatus 1300 may execute the virtual test scenario construction method shown in fig. 2.
It should be noted that the virtual test scenario constructing apparatus 1300 may be any network device, for example, a server, or may also be a chip (system) or other component or assembly disposed in the network device, which is not limited in this embodiment of the present application.
In addition, the technical effects of the virtual test scenario constructing apparatus 1300 may refer to the technical effects of the virtual test scenario constructing method shown in fig. 2, which are not described herein again.
Fig. 14 is a schematic structural diagram of a virtual test scenario constructing apparatus according to an embodiment of the present application. The virtual test scenario constructing apparatus may be a network device, such as a server, or may be a chip (system) or other component or assembly that can be installed in a terminal device or a network device. As shown in fig. 14, the virtual test scenario construction apparatus 1400 may include a processor 1401. Optionally, the virtual test scenario construction apparatus 1400 may further include a memory 1402 and/or a transceiver 1403. Wherein the processor 1401 is coupled to the memory 1402 and the transceiver 1403, such as may be connected by a communication bus.
The following describes each component of the virtual test scenario constructing apparatus 1400 in detail with reference to fig. 14:
the processor 1401 is a control center of the virtual test scenario constructing apparatus 1400, and may be a single processor or a collective term for a plurality of processing elements. For example, the processor 1401 is one or more central processing units (CPUs), or an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application, such as one or more digital signal processors (DSPs) or one or more field-programmable gate arrays (FPGAs).
Alternatively, the processor 1401 may execute various functions of the virtual test scenario construction apparatus 1400 by running or executing a software program stored in the memory 1402 and calling data stored in the memory 1402.
In particular implementations, processor 1401 may include one or more CPUs such as CPU0 and CPU1 shown in fig. 14 as one example.
In a specific implementation, as an embodiment, the virtual test scenario construction apparatus 1400 may also include a plurality of processors, such as the processor 1401 and the processor 1404 shown in fig. 14. Each of these processors may be a single-core processor or a multi-core processor. A processor herein may refer to one or more communication devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The memory 1402 is configured to store a software program for executing the scheme of the present application, and is controlled by the processor 1401 to execute the software program.
Alternatively, memory 1402 may be a read-only memory (ROM) or other type of static storage communication device that may store static information and instructions, a Random Access Memory (RAM) or other type of dynamic storage communication device that may store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a disk storage medium or other magnetic storage communication device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 1402 may be integrated with the processor 1401, or may exist independently, and is coupled to the processor 1401 through an input/output port (not shown in fig. 14) of the virtual test scenario constructing apparatus 1400, which is not specifically limited in this embodiment of the present application.
A transceiver 1403 for communication with other communication devices. For example, the virtual test scenario constructing apparatus 1400 is a network device, and the transceiver 1403 may be used for communicating with a terminal device or another network device.
Optionally, the transceiver 1403 may include a receiver and a transmitter (not separately shown in fig. 14). Wherein the receiver is configured to implement a receive function and the transmitter is configured to implement a transmit function.
Optionally, the transceiver 1403 may be integrated with the processor 1401, or may exist independently, and is coupled to the processor 1401 through an input/output port (not shown in fig. 14) of the virtual test scenario constructing apparatus 1400, which is not specifically limited in this embodiment of the present application.
It should be noted that the structure of the virtual test scenario construction apparatus 1400 shown in fig. 14 does not constitute a limitation on the communication apparatus; an actual virtual test scenario construction apparatus may include more or fewer components than those shown in the figure, combine some components, or arrange the components differently.
The embodiment of the application provides a virtual test scene construction system. The virtual test scenario construction system comprises one or more network devices.
It should be understood that the processor in the embodiments of the present application may be a Central Processing Unit (CPU), and the processor may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the subject application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an electrically Erasable EPROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as external cache memory. By way of example, but not limitation, many forms of Random Access Memory (RAM) are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct bus RAM (DR RAM).
The above embodiments may be implemented in whole or in part by software, hardware (e.g., circuitry), firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product comprises one or more computer instructions or computer programs. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer instructions or the computer program are loaded or executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means (e.g., infrared or microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that contains one or more collections of available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium. The semiconductor medium may be a solid state disk.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In addition, the "/" in this document generally indicates that the former and latter associated objects are in an "or" relationship, but may also indicate an "and/or" relationship, which may be understood with particular reference to the former and latter text.
In the present application, "at least one" means one or more, "a plurality" means two or more. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such understanding, the portions of the technical solutions of the present application that substantially contribute over the prior art may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (25)

1. A virtual test scene construction method is characterized by comprising the following steps:
constructing a static ontology model by combining road specifications and expert experience; wherein the static ontology model is used to describe design constraints of a static ontology, the static ontology comprising one or more of: road topology, road infrastructure, traffic control, or environment;
constructing a dynamic ontology model based on the static ontology model; wherein the dynamic ontology model is used for describing the behavior constraint of a dynamic ontology, and the dynamic ontology comprises one or more of the following items: a vehicle, pedestrian, or animal;
determining a driving track of the dynamic ontology according to the design constraints of the static ontology and the behavior constraints of the dynamic ontology in combination with traffic flow simulation;
generating a test scene model; wherein the test scene model comprises the driving track, and the driving track is used for testing whether the driving behavior of an unmanned vehicle meets a design requirement.
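For illustration only (not part of the claims): the four steps of claim 1 (static ontology model, dynamic ontology model, traffic-flow simulation, test scene model) might be sketched in Python as follows. All class and function names are hypothetical, and the constant-speed "simulation" is a stand-in for a real traffic-flow simulator.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the claim-1 pipeline: static ontology ->
# dynamic ontology -> traffic-flow simulation -> test scene model.

@dataclass
class StaticOntology:
    road_topology: str       # e.g., road network layout
    infrastructure: list     # road infrastructure elements
    traffic_control: list    # signs, signals, etc.

@dataclass
class DynamicOntology:
    kind: str                # "vehicle", "pedestrian", or "animal"
    constraints: dict        # behavior constraints derived from the static model

def simulate_trajectory(dyn: DynamicOntology, steps: int = 5) -> list:
    """Stand-in for traffic-flow simulation: constant-speed motion."""
    speed = dyn.constraints.get("speed_limit", 10.0)
    return [(t, speed * t) for t in range(steps)]

@dataclass
class TestScene:
    static: StaticOntology
    trajectories: dict = field(default_factory=dict)

static = StaticOntology("two-lane highway", ["guardrail"], ["speed limit sign"])
ego = DynamicOntology("vehicle", {"speed_limit": 20.0})
scene = TestScene(static, {"ego": simulate_trajectory(ego)})
print(scene.trajectories["ego"][1])   # position after one step: (1, 20.0)
```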
2. The virtual test scenario construction method of claim 1, wherein the construction of the static ontology model in combination with road specifications and expert experience comprises:
defining a static ontology class; wherein the static ontology class is used for describing one or more of the following items corresponding to the static ontology: virtual test scenes, roads, barriers, lanes, traffic signs, weather, or lighting conditions;
defining static object attributes; the static object attributes are used for describing mapping relations among the static ontology classes;
defining static data attributes; wherein the static data attribute is used for describing a parameter set of the static ontology class;
defining a static design rule; the static design rule is used for describing a mapping relation among the static ontology class, the static object attribute and the static data attribute, and the static design rule is used for determining the rationality of the generated virtual test scene.
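By way of illustration, the four definition steps of claim 2 (static ontology classes, object attributes, data attributes, design rules) could be encoded as plain Python data; a production system would more likely use an OWL ontology. All names and parameter ranges below are hypothetical.

```python
# Illustrative encoding of the four claim-2 definition steps.
static_ontology = {
    # (1) static ontology classes
    "classes": ["Scene", "Road", "Lane", "TrafficSign", "Weather"],
    # (2) static object attributes: mappings between classes
    "object_properties": {"hasLane": ("Road", "Lane"),
                          "hasSign": ("Lane", "TrafficSign")},
    # (3) static data attributes: parameter sets per class
    "data_properties": {"Lane": {"width_m": (2.5, 3.75)},
                        "Road": {"num_lanes": (1, 4)}},
    # (4) static design rules: checks on generated scenes for plausibility
    "rules": ["every Lane must belong to exactly one Road"],
}

def lane_width_valid(width_m: float) -> bool:
    """Check a candidate lane width against the data-attribute range."""
    lo, hi = static_ontology["data_properties"]["Lane"]["width_m"]
    return lo <= width_m <= hi

print(lane_width_valid(3.5))   # True
print(lane_width_valid(4.2))   # False
```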
3. The virtual test scenario construction method of claim 2, wherein the design constraints of the static ontology include one or more of: a positional relation between roads, a positional relation between a road and a lane, a positional relation between a lane and a dividing strip, a constraint relation between the position and type of a lane line and a lane and/or a dividing strip, or a positional relation between a traffic sign and a lane and/or a dividing strip.
4. The virtual test scenario construction method according to claim 2 or 3, wherein the constructing a dynamic ontology model based on the static ontology model comprises:
defining a dynamic ontology class; wherein the dynamic ontology class is used for describing one or more of the following items corresponding to the dynamic ontology: a vehicle, pedestrian, or animal;
defining dynamic object attributes; the dynamic object attribute is used for describing a mapping relation between the dynamic ontology classes and a constraint relation between the dynamic ontology and the static ontology;
defining dynamic data attributes; wherein the dynamic data attribute is used for describing a parameter set of the dynamic ontology class;
defining a dynamic design rule; the dynamic design rule is used for describing a mapping relation among the dynamic ontology class, the dynamic object attribute and the dynamic data attribute, and the dynamic design rule is used for determining the reasonability of the generated test scene.
5. The virtual test scenario construction method of claim 4, wherein the behavior constraints of the dynamic ontology include one or more of the following: a vehicle-to-lane positional relation, a vehicle speed limit, a vehicle driving-direction constraint, a vehicle steering constraint, a vehicle lane-change constraint, a vehicle cut-in constraint, or a vehicle cut-out constraint.
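As a hedged example of enforcing one claim-5 behavior constraint (the vehicle lane-change constraint), a simulator might gate lane changes on lane existence, adjacency, and lane-line type; the rule set below is illustrative, not taken from the patent.

```python
# Hypothetical lane-change constraint check: a lane change is permitted
# only when the target lane exists, is adjacent, and the lane line
# between the lanes allows crossing.
def lane_change_allowed(current_lane: int, target_lane: int,
                        num_lanes: int, line_type: str) -> bool:
    if not (0 <= target_lane < num_lanes):
        return False                      # no such lane
    if abs(target_lane - current_lane) != 1:
        return False                      # only adjacent-lane changes
    return line_type == "dashed"          # solid lines forbid crossing

print(lane_change_allowed(0, 1, 2, "dashed"))  # True
print(lane_change_allowed(0, 1, 2, "solid"))   # False
```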
6. The virtual test scenario construction method according to claim 5, wherein the determining the driving trajectory of the dynamic ontology according to the design constraint of the static ontology and the behavior constraint of the dynamic ontology in combination with traffic flow simulation includes:
determining an initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint of the dynamic ontology and the scene design requirement;
and determining the driving track of the dynamic ontology according to the initial state of the dynamic ontology in combination with traffic flow simulation.
7. The method for constructing virtual test scenarios according to claim 6, wherein the determining the initial state of the dynamic ontology according to the design constraints of the static ontology, the behavior constraints of the dynamic ontology and the scenario design requirements comprises:
and determining the initial state of the dynamic ontology according to the design constraints of the static ontology, the behavior constraints of the dynamic ontology, the scene design requirement, a test target, and a behavior model of the dynamic ontology.
8. The virtual test scenario construction method of claim 7, wherein the behavior model of the dynamic ontology comprises one or more of the following: a car-following model, a lane-changing model, a traffic-signal reaction model, a random disturbance model, a public transport and pedestrian behavior model, a gap acceptance model, a merging model, a cooperation model, or a collision avoidance model.
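One of the listed behavior models, the car-following model, is commonly realized as the Intelligent Driver Model (IDM); the sketch below uses the standard IDM equations with illustrative parameter values (the patent does not specify a particular formulation).

```python
import math

# IDM car-following model: acceleration as a function of own speed v,
# bumper-to-bumper gap to the leader, and approach rate dv = v - v_lead.
# Parameter defaults (desired speed v0, headway T, max accel a,
# comfortable decel b, jam distance s0) are illustrative.
def idm_acceleration(v, gap, dv, v0=33.3, T=1.5, a=1.0, b=1.5, s0=2.0):
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a * b))   # desired gap
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Stationary vehicle on a free road (huge gap): acceleration equals
# the maximum acceleration a.
print(round(idm_acceleration(v=0.0, gap=1e9, dv=0.0), 3))   # 1.0
```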
9. The virtual test scenario construction method of any one of claims 1-8, further comprising:
acquiring the driving behavior of the unmanned vehicle in a first test scene;
determining a second test scenario; the second test scene is a test scene in which the driving behavior of the unmanned vehicle in the first test scene does not meet the design requirement;
performing densified sampling on the parameter interval corresponding to the second test scene to generate a third test scene;
and testing the third test scene.
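The claim-9 step of densified sampling (re-sampling a failing parameter interval at finer granularity) might look like the following; the even-spacing strategy is an assumption, as the patent does not fix a particular sampling scheme.

```python
# Illustrative densified sampling: when a test scene fails, sample its
# parameter interval more densely to localize the failing region.
def densify(lo: float, hi: float, n: int) -> list:
    """Evenly sample n points in [lo, hi], endpoints included."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

coarse = densify(0.0, 10.0, 3)    # initial sweep: [0.0, 5.0, 10.0]
# Suppose the scene failed near 5.0; densify that sub-interval.
fine = densify(2.5, 7.5, 5)
print(fine)                        # [2.5, 3.75, 5.0, 6.25, 7.5]
```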
10. The virtual test scenario construction method of any one of claims 1-9, further comprising:
adding a label to the virtual test scene based on the interactive relationship between the static ontology and the dynamic ontology; wherein the label is used for managing a virtual test scenario.
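A minimal sketch of the claim-10 tagging step, deriving a scene label from a static-dynamic interaction pair for scene management; the label format is purely illustrative.

```python
# Hypothetical tagging: combine a static element and a dynamic action
# into a label used to index and manage virtual test scenes.
def scene_label(static_elem: str, dynamic_action: str) -> str:
    return f"{static_elem}/{dynamic_action}"

tags = {scene_label("highway_ramp", "cut_in"),
        scene_label("intersection", "pedestrian_crossing")}
print("highway_ramp/cut_in" in tags)   # True
```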
11. A virtual test scene construction apparatus, characterized by comprising a construction module, a determining module, and a generating module; wherein:
the construction module is used for constructing a static ontology model by combining road specifications and expert experience; wherein the static ontology model is used to describe design constraints of a static ontology, the static ontology comprising one or more of: road topology, road infrastructure, traffic control, or environment;
the building module is further used for building a dynamic ontology model based on the static ontology model; wherein the dynamic ontology model is used for describing the behavior constraint of a dynamic ontology, and the dynamic ontology comprises one or more of the following items: a vehicle, pedestrian, or animal;
the determining module is used for determining a driving track of the dynamic ontology according to the design constraints of the static ontology and the behavior constraints of the dynamic ontology in combination with traffic flow simulation;
the generating module is used for generating a test scene model; the test scene model comprises the driving track, and the driving track is used for testing whether the driving behavior of the unmanned vehicle meets the design requirement.
12. The virtual test scenario construction apparatus of claim 11,
the building module is also used for defining a static ontology class; wherein the static ontology class is used for describing one or more of the following items corresponding to the static ontology: virtual test scenes, roads, barriers, lanes, traffic signs, weather, or lighting conditions;
the building module is also used for defining static object attributes; the static object attributes are used for describing mapping relations among the static ontology classes;
the building module is also used for defining static data attributes; wherein the static data attribute is used for describing a parameter set of the static ontology class;
the construction module is also used for defining a static design rule; the static design rule is used for describing a mapping relation among the static ontology class, the static object attribute and the static data attribute, and the static design rule is used for determining the rationality of the generated virtual test scene.
13. The virtual test scenario construction apparatus of claim 12, wherein the design constraints of the static ontology include one or more of: a positional relation between roads, a positional relation between a road and a lane, a positional relation between a lane and a dividing strip, a constraint relation between the position and type of a lane line and a lane and/or a dividing strip, or a positional relation between a traffic sign and a lane and/or a dividing strip.
14. The virtual test scenario construction apparatus of claim 12 or 13,
the building module is also used for defining a dynamic ontology class; wherein the dynamic ontology class is used for describing one or more of the following items corresponding to the dynamic ontology: a vehicle, pedestrian, or animal;
the building module is also used for defining the dynamic object attribute; the dynamic object attribute is used for describing a mapping relation between the dynamic ontology classes and a constraint relation between the dynamic ontology and the static ontology;
the building module is also used for defining dynamic data attributes; wherein the dynamic data attribute is used for describing a parameter set of the dynamic ontology class;
the building module is also used for defining dynamic design rules; the dynamic design rule is used for describing a mapping relation among the dynamic ontology class, the dynamic object attribute and the dynamic data attribute, and the dynamic design rule is used for determining the reasonability of the generated test scene.
15. The virtual test scenario construction apparatus of claim 14, wherein the behavior constraints of the dynamic ontology include one or more of: a vehicle-to-lane positional relation, a vehicle speed limit, a vehicle driving-direction constraint, a vehicle steering constraint, a vehicle lane-change constraint, a vehicle cut-in constraint, or a vehicle cut-out constraint.
16. The virtual test scenario construction apparatus of claim 15,
the determining module is further configured to determine an initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint of the dynamic ontology, and a scene design requirement;
the determining module is further used for determining the driving track of the dynamic ontology according to the initial state of the dynamic ontology in combination with traffic flow simulation.
17. The virtual test scenario construction apparatus of claim 16,
the building module is further configured to determine an initial state of the dynamic ontology according to the design constraint of the static ontology, the behavior constraint and the scene design requirement of the dynamic ontology, a test target and a behavior model of the dynamic ontology.
18. The virtual test scenario construction apparatus of claim 17, wherein the behavior model of the dynamic ontology comprises one or more of: a car-following model, a lane-changing model, a traffic-signal reaction model, a random disturbance model, a public transport and pedestrian behavior model, a gap acceptance model, a merging model, a cooperation model, or a collision avoidance model.
19. The virtual test scenario construction apparatus of any one of claims 11-18, further comprising an acquisition module and a test module; wherein:
the acquisition module is used for acquiring the driving behavior of the unmanned vehicle in a first test scene;
the determining module is further configured to determine a second test scenario; the second test scene is a test scene in which the driving behavior of the unmanned vehicle in the first test scene does not meet the design requirement;
the generating module is further configured to perform densified sampling on the parameter interval corresponding to the second test scenario to generate a third test scenario;
and the test module is used for testing the third test scene.
20. The virtual test scenario construction apparatus of any one of claims 11-19, further comprising an adding module; wherein:
the adding module is used for adding a label to the virtual test scene based on the interactive relation between the static ontology and the dynamic ontology; wherein the label is used for managing a virtual test scenario.
21. A virtual test scenario construction apparatus, wherein the virtual test scenario construction apparatus is configured to execute the virtual test scenario construction method according to any one of claims 1 to 10.
22. A virtual test scenario construction apparatus, comprising a processor; wherein:
the processor configured to perform the virtual test scenario construction method of any one of claims 1-10.
23. A virtual test scenario construction apparatus, comprising: a processor coupled with a memory;
the memory for storing a computer program;
the processor configured to execute the computer program stored in the memory to cause the virtual test scenario construction apparatus to perform the virtual test scenario construction method of any one of claims 1-10.
24. A computer-readable storage medium, comprising a computer program or instructions which, when run on a computer, cause the computer to carry out the virtual test scenario construction method of any one of claims 1-10.
25. A computer program product, the computer program product comprising: computer program or instructions which, when run on a computer, cause the computer to perform the virtual test scenario construction method of any one of claims 1-10.
CN202010917524.2A 2020-09-03 2020-09-03 Virtual test scene construction method and device Pending CN114139329A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010917524.2A CN114139329A (en) 2020-09-03 2020-09-03 Virtual test scene construction method and device

Publications (1)

Publication Number Publication Date
CN114139329A true CN114139329A (en) 2022-03-04

Family

ID=80438407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010917524.2A Pending CN114139329A (en) 2020-09-03 2020-09-03 Virtual test scene construction method and device

Country Status (1)

Country Link
CN (1) CN114139329A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115048972A (en) * 2022-03-11 2022-09-13 北京智能车联产业创新中心有限公司 Traffic scene deconstruction classification method and virtual-real combined automatic driving test method
CN115048972B (en) * 2022-03-11 2023-04-07 北京智能车联产业创新中心有限公司 Traffic scene deconstruction classification method and virtual-real combined automatic driving test method
CN115098989A (en) * 2022-05-09 2022-09-23 北京智行者科技有限公司 Road environment modeling method and device, storage medium, terminal and mobile device
CN115168810A (en) * 2022-09-08 2022-10-11 南京慧尔视智能科技有限公司 Traffic data generation method and device, electronic equipment and storage medium
CN115168810B (en) * 2022-09-08 2022-11-29 南京慧尔视智能科技有限公司 Traffic data generation method and device, electronic equipment and storage medium
CN115359664A (en) * 2022-10-21 2022-11-18 深圳市城市交通规划设计研究中心股份有限公司 Traffic simulation method and device for three-dimensional composite highway
CN115687163A (en) * 2023-01-05 2023-02-03 中汽智联技术有限公司 Scene library construction method, device, equipment and storage medium
CN115687163B (en) * 2023-01-05 2023-04-07 中汽智联技术有限公司 Scene library construction method, device, equipment and storage medium
CN115855531A (en) * 2023-02-16 2023-03-28 中国汽车技术研究中心有限公司 Test scene construction method, device and medium for automatic driving automobile
CN115855531B (en) * 2023-02-16 2023-05-16 中国汽车技术研究中心有限公司 Method, equipment and medium for constructing test scene of automatic driving automobile
CN117393089A (en) * 2023-12-11 2024-01-12 西北工业大学 Crystal evolution simulation method based on single-mode Bessel crystal phase field model
CN117393089B (en) * 2023-12-11 2024-02-06 西北工业大学 Crystal evolution simulation method based on single-mode Bessel crystal phase field model

Similar Documents

Publication Publication Date Title
CN114139329A (en) Virtual test scene construction method and device
JP6443550B2 (en) Scene evaluation device, driving support device, and scene evaluation method
CN110506303B (en) Method for determining data of traffic scene
JP6443552B2 (en) Scene evaluation device, driving support device, and scene evaluation method
WO2017013749A1 (en) Driving plan device, travel support device, and driving plan method
Fellendorf et al. Microscopic traffic flow simulator VISSIM
Tengilimoglu et al. Implications of automated vehicles for physical road environment: A comprehensive review
CN109902899B (en) Information generation method and device
WO2017126250A1 (en) Driving assistance method and device
CN104298829A (en) Cellular automaton model based urban road network traffic flow simulation design method
US11935417B2 (en) Systems and methods for cooperatively managing mixed traffic at an intersection
Hsu et al. A modified cellular automaton model for accounting for traffic behaviors during signal change intervals
JP2022550058A (en) Safety analysis framework
Al-Dabbagh et al. The impact of road intersection topology on traffic congestion in urban cities
JP6443551B2 (en) Scene evaluation device, driving support device, and scene evaluation method
Gorodokin et al. Optimization of adaptive traffic light control modes based on machine vision
Choi et al. Framework for connected and automated bus rapid transit with sectionalized speed guidance based on deep reinforcement learning: Field test in Sejong city
EP3967978B1 (en) Detecting a construction zone by a lead autonomous vehicle (av) and updating routing plans for following autonomous vehicles (avs)
Bassan Evaluating the relationship between decision sight distance and stopping sight distance: open roads and road tunnels
Makarova et al. Improving the City’s Transport System Safety by Regulating Traffic and Pedestrian Flows
Gremmelmaier et al. Cyclist behavior: a survey on characteristics and trajectory modeling
Zhang et al. Research on construction method and application of autonomous driving test scenario database
US20230185993A1 (en) Configurable simulation test scenarios for autonomous vehicles
Sharma et al. Prediction of vehicle’s movement using convolutional neural network (cnn)
Li et al. Architecture Design of Vehicle-Road Collaborative Cloud Control Platform for Expressway Operation Management Unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination