CN116009556A - Scene generation method and device and electronic equipment - Google Patents



Publication number
CN116009556A
CN116009556A (application CN202310102416.3A)
Authority
CN
China
Prior art keywords
track
offset
sum
scene
displacement
Prior art date
Legal status
Pending
Application number
CN202310102416.3A
Other languages
Chinese (zh)
Inventor
于宁
孟琳
潘安
赵世杰
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202310102416.3A
Publication of CN116009556A
Legal status: Pending

Abstract

The disclosure provides a scene generation method, a scene generation device, and electronic equipment, and relates to the technical field of artificial intelligence, in particular to automatic driving. The specific implementation scheme is as follows: determine a first scene in which a target vehicle moves according to a first track and an obstacle moves according to a second track; divide the second track into a plurality of discrete path segments; determine the displacement offsets by which the path segments are respectively shifted in displacement, obtaining the sum of displacement offsets relative to the second track; determine the speed offsets by which the path segments are respectively shifted in speed, obtaining the sum of speed offsets relative to the second track; obtain a third track based on the second track, the sum of displacement offsets, and the sum of speed offsets; and generate a second scene based on the first track and the third track. In this way, a large number of accurate test scenes can be generated efficiently while keeping costs under control.

Description

Scene generation method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to a scene generation method, a scene generation device and electronic equipment in the field of automatic driving.
Background
In automatic driving, safety assessment of an autonomous vehicle is an important step, and testing and evaluating that safety requires a variety of possible safety scenarios. Scene diversity accurately reflects real driving conditions, allowing the safety of the autonomous vehicle to be tested and evaluated more precisely.
However, in the related art, scenes for test evaluation are collected from everyday real scenes, and scenes in which safety problems (e.g., collisions) occur are rare among the large volume of real road conditions.
Disclosure of Invention
The present disclosure provides a scene generation method, apparatus, electronic device, non-transitory computer-readable storage medium storing computer instructions, and computer program product.
According to an aspect of the present disclosure, there is provided a scene generation method, including: determining a first scene in which a target vehicle moves according to a first track and an obstacle moves according to a second track; dividing the second track into a plurality of discrete path segments; determining displacement offsets by which the path segments are respectively shifted in displacement, to obtain a sum of displacement offsets relative to the second track; determining speed offsets by which the path segments are respectively shifted in speed, to obtain a sum of speed offsets relative to the second track; obtaining a third track based on the second track, the sum of displacement offsets, and the sum of speed offsets; and generating a second scene based on the first track and the third track.
According to another aspect of the present disclosure, there is provided a scene generating apparatus including: the first determining module is used for determining a first scene, wherein the target vehicle moves according to a first track and the obstacle moves according to a second track in the first scene; a dividing module for dividing the second track into a plurality of discrete path segments; the second determining module is used for determining displacement offset amounts respectively offset from the path segments in displacement to obtain a sum of the displacement offset amounts offset from the second track; a third determining module, configured to determine velocity offsets that are respectively shifted in velocity from the plurality of path segments, and obtain a sum of velocity offsets that are shifted from the second track; the processing module is used for obtaining a third track based on the second track, the sum of displacement offset and the sum of velocity offset; and the generation module is used for generating a second scene based on the first track and the third track.
According to still another aspect of the present disclosure, there is provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any one of the methods described above.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method according to any one of the above.
According to a further aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements any one of the methods described above.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flow chart of a scenario generation method provided according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural view of an autopilot system provided in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a vehicle steering model provided in accordance with an embodiment of the present disclosure;
fig. 4 is a block diagram of a scene generating apparatus provided according to an embodiment of the present disclosure;
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, when testing and evaluating an autonomous vehicle, most driving systems are still trained and evaluated on natural scenes collected from daily life or on heuristically generated collision scenes.
In general, however, even with a large number of vehicles the collision rate is extremely low, which means that safety-critical scenarios are rare in the collected real data. Accumulating such scenes therefore requires many vehicles performing large numbers of tests and operations across multiple real environments, at very high cost. A method of artificially generating scenes is thus important for measuring risk and reducing cost. To build a realistic simulator for autonomous-vehicle testing, the various real-world situations that may occur near the vehicle must be simulated. However, manually editing scenes based on rules, as in the related art, has a drawback: the movement track of the obstacle is predefined and, constrained by the rules, cannot truly reflect the obstacle's motion in a real environment. Scene generation in the related art (for example, for automatic driving tests) therefore suffers from high cost and low quality.
In view of the above problems, the present disclosure provides a scene generation method. Fig. 1 is a flowchart of the scene generation method provided according to an embodiment of the present disclosure. As shown in fig. 1, the flow includes the following steps:
step S102, determining a first scene, wherein a target vehicle moves according to a first track and an obstacle moves according to a second track in the first scene;
As an alternative embodiment, the method of the present disclosure may be applied to any automatic driving test scenario, for example on a terminal or server that performs such tests. When applied to a terminal, the terminal may be provided with automatic driving test software through which the test is carried out; this allows the test to be handled conveniently, simply, and promptly. When applied to a server, the server may host an automatic driving test platform that performs the test by calling various data (such as fine map data, fine positioning data, fine sensing data, and the like); the called data can then be more comprehensive and accurate, so that a fine automatic driving test can be realized comprehensively and accurately.
It should be noted that the terminal may be of various types, for example a mobile terminal such as a mobile phone, an iPad, or a notebook, or a fixed computer device. The server may likewise be of various types, for example a local server or a virtual cloud server, and depending on its computing power may be a single computer device or a cluster integrating multiple computer devices.
As an alternative embodiment, the first scene may be a basic scene generated from real road data, typically a safety-critical scene. For example, when considering collision accidents in automatic driving, the target vehicle may collide with the obstacle in the first scene, or may not collide but still be at risk of collision. That is, in the first scene there may be a real collision between the target vehicle and the obstacle, or a potential collision risk.
As an alternative embodiment, the target vehicle may be an autonomous vehicle, also referred to as a host vehicle, and the obstacle may be an obstacle vehicle, or another person or object that may collide with the target vehicle or present a risk of collision.
It should be noted that in the first scene the target vehicle moves according to the first track and the obstacle moves according to the second track, and a safety event (i.e., some key scenario of interest), such as a collision, occurs between them. The collision is only an example; other safety-related key scenarios are also part of the present disclosure and are not enumerated here.
In addition, the first track and the second track may be determined in a predetermined coordinate system, for example a two-dimensional coordinate system of a plane. Once the coordinate system is determined, besides the first and second tracks of the first scene, the environment in which the target vehicle and the obstacle are located, for example interfering vehicles, traffic facilities, and road elements (lane lines, intersections, etc.) involved in the safety event, can also be represented in that coordinate system.
Step S104, dividing the second track into a plurality of discrete path segments;
As an alternative embodiment, when dividing the second track into discrete path segments, the curve of the second track from start point to end point is divided into several segments, each corresponding to a control point (or reference point). These control points serve as the units subsequently compared with the second track, and the overall offset relative to the second track is accumulated over them.
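A minimal sketch of this division, assuming the track is an array of sampled points and taking each segment's midpoint as its control point (both are illustrative assumptions, not details fixed by the disclosure):

```python
import numpy as np

def split_trajectory(points: np.ndarray, num_segments: int):
    """Split an (M, 2) array of track points into discrete path segments,
    returning the segments plus one control (reference) point per segment
    (here the segment midpoint)."""
    segments = np.array_split(points, num_segments)
    control_points = [seg[len(seg) // 2] for seg in segments]
    return segments, control_points

# Example: a straight 10-point track split into 5 segments.
track = np.stack([np.linspace(0.0, 9.0, 10), np.zeros(10)], axis=1)
segments, controls = split_trajectory(track, 5)
```

Each later offset computation can then run per segment and accumulate over the whole second track.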
Step S106, determining displacement offset amounts respectively offset from the plurality of path segments in displacement to obtain a sum of the displacement offset amounts offset from the second track;
As an alternative embodiment, taking the path segments as comparison objects, the displacement offset by which each corresponding path segment is shifted in displacement is determined; the offsets of the respective path segments are then accumulated, and the sum of displacement offsets relative to the second track is obtained by directly summing them. Note that a path segment has a length, so obtaining the displacement offset over that length requires summing or integrating over the position points along it. The displacement of a position point in a path segment can be represented in various ways, for example as a function of time or as a function of path length, and the representation can be chosen flexibly according to computational needs. In this alternative embodiment the displacement is expressed as a function of path length, so the displacement offset of each path segment is obtained by integrating that function over the segment's path length.
It should be noted that the displacement offset amounts that are offset from the plurality of path segments in displacement may be determined based on predetermined constraint conditions. The constraint condition may include various types, for example, a direct displacement constraint, a curvature constraint, and the like.
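Step S106 can be sketched as follows, assuming the lateral displacement is given as a function of cumulative path length and each segment is a sampled point array; the trapezoidal integration and the clipping used as a "displacement constraint" are illustrative choices:

```python
import numpy as np

def displacement_offset_sum(segments, offset_fn, max_offset=None):
    """Accumulate the displacement offset of each path segment. Each
    segment is an (m, 2) array of track points; offset_fn maps the
    cumulative path length s to a lateral displacement, and a segment's
    offset is the integral of that function over its arc length
    (trapezoidal rule). max_offset acts as a simple displacement
    constraint."""
    total, s0 = 0.0, 0.0
    for seg in segments:
        ds = np.linalg.norm(np.diff(seg, axis=0), axis=1)  # step lengths
        s = s0 + np.concatenate([[0.0], np.cumsum(ds)])    # cumulative length
        offsets = np.array([offset_fn(si) for si in s])
        if max_offset is not None:
            offsets = np.clip(offsets, -max_offset, max_offset)
        # trapezoidal rule over this segment's arc length
        total += float(np.sum(0.5 * (offsets[1:] + offsets[:-1]) * np.diff(s)))
        s0 = s[-1]
    return total

# Example: constant 1 m lateral offset along a 9 m straight track.
track = np.stack([np.linspace(0.0, 9.0, 10), np.zeros(10)], axis=1)
disp_sum = displacement_offset_sum([track], lambda s: 1.0)
```

Summing the per-segment integrals directly yields the sum of displacement offsets relative to the second track.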
Step S108, determining the speed offset amounts respectively offset in speed from the plurality of path segments to obtain the sum of the speed offset amounts offset from the second track;
As an alternative embodiment, taking the path segments as comparison objects, the speed offset by which each corresponding path segment is shifted in speed is determined; the offsets of the respective path segments are then accumulated, and the sum of speed offsets relative to the second track is obtained by directly summing them. Similarly, a path segment has a length, so obtaining the speed offset over that length requires summing or integrating over the position points along it. The speed of a position point in a path segment can likewise be represented as a function of time or as a function of path length, chosen flexibly according to computational needs. In this alternative embodiment the speed is expressed as a function of path length, so the speed of each path segment is determined from the segment's start and end speeds, represented as a function of path length, together with the segment's acceleration.
In addition, the above-described speed offset that is speed-offset from the plurality of path segments may also be determined based on predetermined constraints. The constraint condition may include various types, for example, a direct speed constraint, an acceleration constraint, and the like.
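Step S108 can be sketched under the assumption that each segment's speed varies linearly in path length between its start and end speeds; the constant-acceleration check (a = (v1² − v0²) / (2L)) stands in for the acceleration constraint mentioned above:

```python
def velocity_offset_sum(seg_lengths, v_ref, v_new, a_max=None):
    """Accumulate the speed offset of each path segment. v_ref and v_new
    hold each segment's (start, end) speeds for the reference and the
    shifted profiles; treating speed as linear in path length, a
    segment's offset is its mean speed difference times its length.
    a_max is a simple acceleration constraint."""
    total = 0.0
    for length, (v0r, v1r), (v0n, v1n) in zip(seg_lengths, v_ref, v_new):
        if a_max is not None:
            # constant-acceleration kinematics over a segment of this length
            accel = (v1n ** 2 - v0n ** 2) / (2.0 * length)
            if abs(accel) > a_max:
                raise ValueError("segment violates the acceleration constraint")
        total += ((v0n + v1n) / 2.0 - (v0r + v1r) / 2.0) * length
    return total

# Example: one 10 m segment whose speed is raised from 5 m/s to 6 m/s.
vel_sum = velocity_offset_sum([10.0], [(5.0, 5.0)], [(6.0, 6.0)], a_max=3.0)
```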
Step S110, obtaining a third track based on the second track, the sum of displacement offset and the sum of velocity offset;
As an alternative embodiment, the third track may be obtained by directly applying the sum of displacement offsets and the sum of speed offsets on the basis of the second track: offsetting the entire second track in displacement by the sum of displacement offsets yields the displacement of the third track, and offsetting it in speed by the sum of speed offsets yields its speed profile. By applying the offset sums directly in displacement and speed, a new track, i.e., the third track, can be obtained directly and quickly from the second track.
In a test scene, the displacement offset and the speed offset receive different degrees of attention, or influence the test differently; different weights can therefore be set for them to reflect this, and the different tracks generated under different weights can approach the real scene more accurately and expand into more distinct new scenes. For example, when obtaining the third track based on the second track, the sum of displacement offsets, and the sum of speed offsets, a first weight corresponding to the sum of displacement offsets and a second weight corresponding to the sum of speed offsets may be determined, and the third track obtained from the second track, the sum of displacement offsets, the first weight, the sum of speed offsets, and the second weight. Assigning weights to the displacement and speed offsets in this way represents the influence of displacement and speed on the generation of new tracks; it accounts both for the intuitive effect of displacement and for the effect of speed, making the generated scenes effectively more realistic.
It should be noted that the allocation of the first and second weights can be determined flexibly according to the needs of the scene under consideration. For example, the first weight may be set higher when displacement is of greater concern, and the second weight higher when speed is. By setting different weights for the displacement offset and the speed offset, the object of interest can be chosen per specific scene, so that different scenes can be distinguished by their concerns.
As an alternative embodiment, when obtaining the third track from the second track, the sum of displacement offsets, the first weight, the sum of speed offsets, and the second weight, different allocations of the two weights yield multiple different third tracks, i.e., new tracks deviating from the second track by different magnitudes. For example, the weight values of the first and second weights can be adjusted multiple times to obtain the weight values corresponding to each adjustment, and a third track determined for each adjustment based on the second track, the sum of displacement offsets, the sum of speed offsets, and the corresponding weight values. Thanks to the diversity of weight allocations, many third tracks are obtained, effectively improving the efficiency of expanding new tracks.
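The weight sweep can be sketched as follows; applying the weighted displacement sum as a uniform lateral (y) shift per point, and the weighted speed sum as a uniform speed shift, is an illustrative assumption (the disclosure does not fix the per-point application):

```python
import itertools
import numpy as np

def candidate_third_tracks(track_xy, track_v, disp_sum, vel_sum,
                           disp_weights, vel_weights):
    """Enumerate candidate third tracks by sweeping (first, second)
    weight pairs over the displacement- and speed-offset sums."""
    n = len(track_xy)
    candidates = []
    for w_d, w_v in itertools.product(disp_weights, vel_weights):
        xy = track_xy + np.array([0.0, w_d * disp_sum / n])  # lateral shift
        v = track_v + w_v * vel_sum / n                      # speed shift
        candidates.append(((w_d, w_v), xy, v))
    return candidates

# Example: sweep two weight pairs over a 3-point reference track.
cands = candidate_third_tracks(np.zeros((3, 2)), np.zeros(3),
                               disp_sum=3.0, vel_sum=3.0,
                               disp_weights=[1.0], vel_weights=[0.5, 2.0])
```

Each weight pair yields one candidate third track, so a single second track expands into a family of new tracks.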
It should be noted that obtaining the third track from the second track, the sum of displacement offsets, and the sum of speed offsets considers only the second track itself. Besides the original second track, in order to make the generated track blend better with the environment, or approach the real environment more easily, environment-related factors may also be considered in scene generation: people and objects other than the target vehicle and the obstacle, for example pedestrians, vehicles, or traffic facilities that interfere with them.
Based on this consideration of the environment in the real scene, this alternative embodiment considers the lane line along which the obstacle travels. Since vehicles normally travel according to traffic rules, driving along lane lines is more consistent with real driving behavior. Therefore, when obtaining the third track based on the second track, the sum of displacement offsets, and the sum of speed offsets, the center line of the target lane in the first scene may first be determined, and the third track then obtained based on that center line, the second track, the sum of displacement offsets, and the sum of speed offsets. In this way, when a new track is generated, the reference second track of the obstacle is constrained by the lane lines of basic traffic rules, so that the newly generated track not only fits the real second track but also conforms to normal driving behavior: anchored to the second track on one side and constrained by driving norms on the other, it remains reasonable.
As an alternative embodiment, when obtaining the third track based on the target lane center line, the second track, the sum of displacement offsets, and the sum of speed offsets, an initial track may first be determined that is offset in displacement from the second track by the sum of displacement offsets and offset in speed by the sum of speed offsets; the lateral offset between the initial track and the target lane center line is then determined, and if this lateral offset is smaller than a lateral offset threshold, the initial track is determined to be the third track. The lateral offset threshold may be a predetermined displacement offset in a predetermined direction.
For example, the width of the lane corresponding to the lane center line may be used as the lateral offset threshold. When the lateral offset is smaller than this threshold, the initial track is still within the lane corresponding to the target lane center line, i.e., the driving behavior remains in the lane and the initial track conforms to normal driving behavior. When the lateral offset between the initial track and the target lane center line exceeds the lane width, the obstacle following the initial track is considered to deviate seriously from the lane, i.e., the track does not conform to vehicle driving behavior, and the generated initial track must be corrected to meet normal driving behavior.
Therefore, after the lateral offset between the initial track and the target lane center line is determined, if the lateral offset is greater than or equal to the lateral offset threshold, the initial track is adjusted to obtain a target track whose lateral offset from the target lane center line is smaller than the threshold, and that target track is determined to be the third track. Through this processing, the third track expanded from the second track is expanded under two reference lines, ensuring that it better fits real driving behavior.
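The accept-or-adjust logic above can be sketched as follows; matching track and center-line points by index, measuring laterally along y, and clipping back inside the threshold as the "adjustment" are all simplifying assumptions:

```python
import numpy as np

def check_against_centerline(track_xy, centerline_xy, lateral_threshold):
    """Accept the initial track as the third track if its lateral offset
    from the target-lane center line stays below the threshold (e.g. the
    lane width); otherwise adjust by clipping the lateral deviation back
    inside the threshold."""
    lateral = track_xy[:, 1] - centerline_xy[:, 1]
    if np.all(np.abs(lateral) < lateral_threshold):
        return track_xy  # already a valid third track
    clipped = np.clip(lateral, -0.99 * lateral_threshold,
                      0.99 * lateral_threshold)
    return np.stack([track_xy[:, 0], centerline_xy[:, 1] + clipped], axis=1)

# Example: second point drifts 5 m from a straight center line (3.5 m lane).
centerline = np.stack([np.arange(2.0), np.zeros(2)], axis=1)
initial = np.stack([np.arange(2.0), np.array([0.5, 5.0])], axis=1)
third = check_against_centerline(initial, centerline, 3.5)
```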
Step S112, generating a second scene based on the first track and the third track.
As an alternative embodiment, the second scene may be generated from the first track and the third track in various ways. For example, the second scene may be constructed by keeping the first track of the target vehicle unchanged, replacing the second track of the obstacle with the expanded third track, and leaving the rest of the environment unchanged. Alternatively, a new scene may be expanded by exchanging the roles of the target vehicle and the obstacle, swapping their driving data, so that the target vehicle moves according to the third track and the obstacle according to the first track. The second scene obtained in these two ways thus takes one of two forms: in the second scene the target vehicle moves according to the first track and the obstacle according to the third track; or the target vehicle moves according to the third track and the obstacle according to the first track. When expanding test scenes, a combination of the two forms is also possible.
In addition, when expanding a new scene by role exchange, a new scene can be constructed not only from the obstacle's new track and the target vehicle's original track, but also directly from the obstacle's original track and the target vehicle's original track. For example, in the first scene illustrated above, where the target vehicle moves according to the first track and the obstacle vehicle according to the second track, a new second scene may have the target vehicle move according to the second track and the obstacle vehicle according to the first track.
Expanding a new scene through role interchange lets the target vehicle drive from a new angle, and direct role interchange requires only a simple data swap, without heavy computation or data operations, so it is more efficient while remaining realistic and accurate.
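The role-interchange expansion is indeed a pure data swap, as this sketch shows (the dict layout and key names are an assumed scene representation, not one defined by the disclosure):

```python
def swap_roles(scene: dict) -> dict:
    """Expand a scene by exchanging the tracks of the target (ego)
    vehicle and the obstacle -- a simple data conversion that needs no
    track recomputation."""
    return {**scene,
            "ego_track": scene["obstacle_track"],
            "obstacle_track": scene["ego_track"]}

# Example: the obstacle's track becomes the ego track and vice versa,
# while the rest of the scene (map, environment) is unchanged.
first_scene = {"map": "city_a", "ego_track": "first_track",
               "obstacle_track": "second_track"}
second_scene = swap_roles(first_scene)
```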
Through the above steps, since the first scene is a collected real scene, the second track of the obstacle is a real track, and the third track obtained by offsetting from it is generated on a real basis and can accurately approach reality; the first track is likewise real, so the second scene generated from the real first track and the near-real third track is itself closer to reality. Moreover, the displacement offset and the speed offset are considered relatively independently, i.e., the reference displacement and speed are decoupled from each other; this decoupling effectively simplifies the calculation and so improves scene generation efficiency. With this processing, on the one hand, there is no need to collect large amounts of naturally rare real scene data, avoiding the problem of high cost; on the other hand, the second scene is generated from the real first track and the near-real third track, and a large number of accurate scenes can be generated efficiently by varying the offsets. On both counts, the method ensures scene accuracy while reducing scene generation cost and comes closer to real test scenarios, thereby effectively meeting test requirements.
The present disclosure is illustrated below in connection with an autopilot scenario.
Fig. 2 is a schematic structural diagram of an automatic driving system provided according to an embodiment of the present disclosure. As shown in fig. 2, the system mainly includes a high-precision map module, a positioning module, a sensing module, a global navigation module, a prediction module, a planning module, and a control module. The high-precision map module provides high-precision map services; the positioning module provides high-precision (centimeter-level) positioning; the sensing module combines cameras, lidar, millimeter-wave radar, ultrasonic radar, and other equipment with advanced obstacle detection algorithms to give the autonomous vehicle all-around environment perception; the prediction module takes the upstream perception data as input, extracts the obstacle's historical motion parameters, and infers its future motion track by means such as Kalman filtering and neural networks, for use by the downstream planning and control modules; the global navigation module obtains, from the vehicle's initial and target positions combined with the road network topology, an optimal global navigation path meeting the performance evaluation index via a global path search algorithm; the planning module provides host-vehicle obstacle avoidance and lane-change decisions, path planning, and speed planning; and the control module performs longitudinal and lateral tracking control according to the driving track provided by the decision planning module.
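The module chain of fig. 2 can be sketched as a simple sequential pipeline; treating the chain as strictly sequential and single-frame, and the placeholder stage outputs, are simplifications of the real, concurrent system:

```python
def run_pipeline(frame: dict, modules) -> dict:
    """Run the described module chain in order. Each module reads the
    shared state dict and returns its output, which is stored under the
    module's name for downstream modules."""
    state = dict(frame)
    for name, module in modules:
        state[name] = module(state)
    return state

# Placeholder stages in the described order (outputs are stand-ins).
stages = [
    ("localization", lambda s: {"pose": (0.0, 0.0)}),        # cm-level pose
    ("perception", lambda s: {"obstacles": []}),             # detected obstacles
    ("prediction", lambda s: {"future_tracks": []}),         # obstacle futures
    ("navigation", lambda s: {"global_path": []}),           # route on road net
    ("planning", lambda s: {"path": [], "speed": []}),       # local plan
    ("control", lambda s: {"steer": 0.0, "throttle": 0.0}),  # tracking commands
]
result = run_pipeline({"sensors": {}}, stages)
```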
Based on the above-mentioned autopilot scenario, in order to provide an evaluation test for safety of autopilot behavior, more safety scenarios need to be simulated.
In the simulation methods adopted in the related art, for example those based on fixed, manually defined constraints, the simulated scene cannot reflect the actual motion state of an obstacle in a real scene, so the quality of the scenes obtained in this way is low, and such scenes may even be unusable for testing.
In view of the foregoing, the present disclosure provides an alternative embodiment in connection with an autopilot scenario.
This alternative embodiment mainly includes the following processes:
S1, data collection (test and operation data collection). The data may be collected from an automatic driving vehicle running on a real test road, from other devices with a collection function running on the real test road, or from collection devices fixedly installed along the real test road.
S2, generating a basic scene (corresponding to the first scene) based on the real road data, wherein the scene expression mode is as follows:
(A) Map name, version
(B) Raw scene data (obtained based on real scene data collection):
(1) Scene (Scenery): map elements (lanes, intersections, etc.), traffic facilities (traffic lights), temporary structures (cones, fences, etc.), etc.
(2) Environment (Environment): air temperature, weather, light, etc.
(3) Traffic participants: assume there are N traffic participants A = (A_1, A_2, …, A_n), with raw trajectory data G = (G_1, G_2, …, G_n) for the obstacles. Within the time window D given by the scene data (M track points), the track of each obstacle consists of a sequence of track points G_i = (G_i1, G_i2, …, G_im); for a static obstacle, the track points at the several sampling times within D coincide.
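As an illustration, the scene expression above might be organized as the following data structures. This is a minimal sketch; all class and field names are assumptions, not part of the original scheme.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TrackPoint:
    x: float      # position coordinate
    y: float
    t: float      # timestamp within the scene window D

@dataclass
class Participant:
    name: str                                               # A_i
    track: List[TrackPoint] = field(default_factory=list)   # G_i = (G_i1, ..., G_im)

    def is_static(self, eps: float = 1e-6) -> bool:
        # A static obstacle's track points coincide at every time within D.
        p0 = self.track[0]
        return all(abs(p.x - p0.x) < eps and abs(p.y - p0.y) < eps
                   for p in self.track)

@dataclass
class Scene:
    map_name: str
    map_version: str
    scenery: Dict[str, list]       # map elements, traffic facilities, temporary structures
    environment: Dict[str, str]    # air temperature, weather, light, ...
    participants: List[Participant] = field(default_factory=list)
```

A static obstacle is then detected simply by checking that all of its track points coincide over D.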
S3, based on the given basic scene S generated in S2, transform the obstacle tracks to expand into more simulation scenes. Two cases are described:
in the first case, the scene is generated based on a planning algorithm.
In this first case: the ADC (host vehicle) is kept unchanged, i.e. the initial position and the end position are unchanged and the navigation path is unchanged.
(1) Determine the list of obstacles to be transformed, O_s (obstacles whose paths overlap with the host vehicle's, or that may overlap, corresponding to the above-mentioned obstacles that collide or are at risk of colliding);
(2) Traverse O_s. For each obstacle O_i in the list, obtain from its original trajectory G_i the initial track point P_si and the end track point P_ei. With P_si as the starting point and P_ei as the end point, load the map, treat the other obstacles and the ADC (host vehicle) as obstacles, and generate several candidate trajectories using the same standard method as for the ADC: G_i → G'_i = (G'_i1, G'_i2, G'_i3, …, G'_ik).
(3) Traverse G'_i, keeping the motion trajectories of the other obstacles unchanged, to form a new scene S'_i.
(4) Complete the expansion into new scenes: S → S' = (S'_1, S'_2, S'_3, …, S'_n), where each participant contributes a plurality of trajectories.
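The four steps of this first case can be sketched as follows. This is a minimal sketch: `plan_candidates` is a hypothetical stand-in for the standard planning method mentioned above, and all names are illustrative.

```python
def expand_scene_case1(participants, obstacles_to_transform, plan_candidates):
    """Case 1: the ADC (host vehicle) is kept unchanged; each listed obstacle
    is replanned between its original start and end points to spawn new scenes.

    participants: {name: track}, where a track is a list of (x, y) points.
    plan_candidates(start, end, others): hypothetical planner returning a
    list of candidate tracks, treating the other participants as obstacles.
    """
    new_scenes = []
    for name in obstacles_to_transform:                     # traverse O_s
        track = participants[name]
        start, end = track[0], track[-1]                    # P_si, P_ei
        others = {n: t for n, t in participants.items() if n != name}
        for cand in plan_candidates(start, end, others):    # G_i -> G'_i
            scene = dict(participants)                      # others unchanged
            scene[name] = cand
            new_scenes.append(scene)                        # S -> S'
    return new_scenes
```

Each candidate trajectory of each listed obstacle yields one new scene, so the scene set grows multiplicatively with the number of candidates.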
The second case is to generate a new scene based on the host and obstacle role transformations:
in this second case: the ADC (host vehicle) concept is weakened, the ADC is regarded as a traffic participant, and the initial position and the final position of the obstacle are regarded as the initial position and the final position of the ADC to be planned.
(1) Determine the list of obstacles to be transformed, O_s (obstacles whose paths overlap).
(2) Traverse O_s. For each obstacle O_i in the list, obtain from its original trajectory G_i the initial track point P_si and the end track point P_ei.
(3) With the initial track point P_si and the end track point P_ei as the initial and end positions of the ADC (host vehicle), and the original trajectory G_i as the ADC reference trajectory, generate a new scene S'_i.
(4) Complete the expansion into new scenes: S → S' = (S'_1, S'_2, S'_3, …, S'_n).
In the above-described process of generating the candidate trajectory based on the initial position and the end position, various manners may be adopted. One of the implementations is given below.
For scenes from real road testing, two cases are of particular concern: first, the host vehicle and an obstacle vehicle actually collide; second, the host vehicle and an obstacle vehicle do not collide, but a potential collision risk exists.
(1) For both cases, determine the numbers of the obstacles that collide with the host vehicle or are at risk of colliding with it.
(2) Generate a new scene for each determined obstacle through the decision-planning algorithm.
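The obstacle filtering in step (1) can be sketched as follows. This is a minimal sketch; the distance thresholds and the function name are assumptions, not values from the original.

```python
import math

def at_risk_obstacles(host_track, obstacle_tracks, collision_dist=2.0, risk_dist=5.0):
    """Classify obstacles whose trajectory actually meets the host's
    (collision) or comes close enough to carry a potential collision risk.
    Tracks are time-aligned lists of (x, y) points; thresholds are assumed."""
    collided, risky = [], []
    for oid, track in obstacle_tracks.items():
        # Minimum host-obstacle distance over the shared time steps.
        d_min = min(math.hypot(hx - ox, hy - oy)
                    for (hx, hy), (ox, oy) in zip(host_track, track))
        if d_min <= collision_dist:
            collided.append(oid)
        elif d_min <= risk_dist:
            risky.append(oid)
    return collided, risky
```

Both lists together form the list O_s of obstacles to be transformed.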
The generation of a new scene by the decision-planning algorithm is described below, taking an obstacle vehicle as the example obstacle.
The lane center line is taken as a reference (reference line Ref1), and the original track of the obstacle vehicle is taken as a second reference (reference line Ref2); keeping the planned track similar to the original track keeps it as close to reality as possible. Based on Ref1 and Ref2, the offset of Ref2 relative to Ref1 at each longitudinal position point can be calculated.
Since the second reference line is itself offset relative to the lane center line, the lane center line also serves as a reference when a newly generated track is offset relative to the second reference line. A newly generated track therefore references both the obstacle vehicle's second reference line and the lane center line; that is, it stays close both to the obstacle vehicle's original driving and to the lane, which ensures the accuracy of the scene.
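Assuming both reference lines are sampled at the same longitudinal stations, the per-point offset of Ref2 relative to Ref1 might be computed as follows. This is an illustrative simplification that measures the signed lateral gap; the function name and sampling assumption are not from the original.

```python
def lateral_offsets(ref1, ref2):
    """Offset of the second reference line (original obstacle track, Ref2)
    relative to the lane center line (Ref1) at each longitudinal point.
    Both lines are given as (x, y) pairs sampled at the same stations; the
    offset is taken as the signed difference of the lateral coordinates."""
    return [y2 - y1 for (_, y1), (_, y2) in zip(ref1, ref2)]
```

A more general implementation would project each Ref2 point onto the Ref1 curve; the simplification holds when Ref1 runs roughly along the longitudinal axis.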
Fig. 3 is a schematic view of a vehicle steering model provided according to an embodiment of the present disclosure. As shown in fig. 3, each point on the new track is denoted by x, y, θ, k, v, a, t, corresponding respectively to the coordinates (x, y), heading θ, curvature k, speed v, acceleration a, and time t. Based on this model, the state equations of the obstacle vehicle are:

dx/dt = v·cos θ
dy/dt = v·sin θ
dθ/dt = v·k
dv/dt = a
Replacing time with the path length s as the independent variable gives the following differential equations:

dx/ds = cos θ(s)
dy/ds = sin θ(s)
dθ/ds = k(s)

Integrating yields the values of x, y, θ, and k corresponding to s:

x(s) = x_0 + ∫_0^s cos θ(r) dr
y(s) = y_0 + ∫_0^s sin θ(r) dr
θ(s) = θ_0 + ∫_0^s k(r) dr
k(s) = u(s)
The curvature k is expressed as a polynomial in s, so that x, y, θ, and k can all be expressed through the parameters a, b, c, d:

k(s) = a + b·s + c·s² + d·s³ + …
θ(s) = θ_0 + a·s + b·s²/2 + c·s³/3 + d·s⁴/4 + …
x(s) = x_0 + ∫_0^s cos θ(r) dr
y(s) = y_0 + ∫_0^s sin θ(r) dr
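The polynomial-curvature integration above can be sketched as follows. This is a minimal sketch: the function name, the truncation of k(s) to a cubic, and the midpoint-rule discretization are illustrative choices, not part of the original.

```python
import math

def spiral_pose(coeffs, s_end, x0=0.0, y0=0.0, theta0=0.0, n=1000):
    """Pose (x, y, theta, k) at path length s_end for a polynomial curvature
    k(s) = a + b*s + c*s^2 + d*s^3 (coeffs = (a, b, c, d)).
    theta(s) has the closed form theta0 + a*s + b*s^2/2 + c*s^3/3 + d*s^4/4;
    x(s) and y(s) are accumulated numerically with the midpoint rule."""
    a, b, c, d = coeffs

    def theta(s):
        return theta0 + a*s + b*s**2/2 + c*s**3/3 + d*s**4/4

    x, y = x0, y0
    ds = s_end / n
    for i in range(n):
        s_mid = (i + 0.5) * ds          # midpoint of the i-th sub-interval
        x += math.cos(theta(s_mid)) * ds
        y += math.sin(theta(s_mid)) * ds
    k_end = a + b*s_end + c*s_end**2 + d*s_end**3
    return x, y, theta(s_end), k_end
```

With all coefficients zero the result is a straight line; with only a nonzero constant term it traces a circular arc of radius 1/a.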
For the speed profile, since ds/dt = v and dv/dt = σ,

v·dv/ds = σ

so that

v(s) = sqrt(v_0² + 2σ·s),  t(s) = (v(s) − v_0)/σ

When σ = 0,

v(s) = v_0
t(s) = s/v_0

where v(s) denotes the speed corresponding to s, v_0 the initial speed, and σ the acceleration, which is taken as a fixed value.
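The constant-acceleration speed profile above can be sketched as follows; the function name is illustrative, and the formulas follow from v·dv/ds = σ.

```python
import math

def speed_profile(s, v0, sigma):
    """v(s) and t(s) along path length s under constant acceleration sigma:
    v(s) = sqrt(v0^2 + 2*sigma*s), t(s) = (v(s) - v0) / sigma.
    For sigma == 0 this degenerates to v(s) = v0, t(s) = s / v0."""
    if sigma == 0:
        return v0, s / v0
    v = math.sqrt(v0 * v0 + 2 * sigma * s)
    t = (v - v0) / sigma       # from v = v0 + sigma * t
    return v, t
```

These values give the v(s) and t(s) entries of each state point on the discretized track.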
The starting point is determined by the initial position, heading angle, initial curvature (default 0), and initial speed of the obstacle vehicle in the original scene.
The end point is determined by the end position, heading angle, end curvature (default 0), and end speed of the obstacle vehicle in the original scene.
That is, the starting-point and end-point information is determined by the start and end data of the obstacle vehicle's original track.
The deviation of the new trajectory from the original trajectory of the obstacle vehicle is expressed as an overall cost.
Total cost: C_total = W_1·C_1 + W_2·C_2

where W_1 and W_2 are weight coefficients, C_1 is the cost of displacement deviation from the second reference line, and C_2 is the cost of deviation from the original track's speed. The new track is discretized by a given step length into N path segments; for the path length s in each segment, the corresponding x(s), y(s), θ(s), k(s), v(s), t(s) are computed by the formulas above, the position offset and speed offset of each state point relative to the original track are calculated, and the results are summed:

C_1 = Σ_i [(x(s_i) − x_raw(s_i))² + (y(s_i) − y_raw(s_i))²]
C_2 = Σ_i (v(s_i) − v_raw(s_i))²

where the subscript raw denotes the corresponding path segment in the original track.
By controlling W_1 and W_2, new tracks that deviate from the original track by different amounts can be generated and used as new scene data.
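The overall cost can be sketched as follows. This is a minimal sketch; the use of squared offsets for the position and speed deviations is an assumed form, and the function name is illustrative.

```python
def total_cost(new_track, raw_track, w1, w2):
    """C_total = W1*C1 + W2*C2 over the N discretized path segments, where C1
    sums squared position offsets and C2 squared speed offsets relative to the
    original track. Track points are (x, y, v) tuples at matching stations."""
    c1 = sum((x - xr) ** 2 + (y - yr) ** 2
             for (x, y, _), (xr, yr, _) in zip(new_track, raw_track))
    c2 = sum((v - vr) ** 2
             for (_, _, v), (_, _, vr) in zip(new_track, raw_track))
    return w1 * c1 + w2 * c2
```

Sweeping (w1, w2) over several value pairs and selecting tracks by this cost yields candidates that deviate from the original by different amounts, each usable as new scene data.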
Based on the above-mentioned alternative embodiments, the following beneficial effects can be achieved:
a simulation scene is generated from real data, and new simulation scenes are obtained from it by transformation and expansion, greatly increasing the number of scenes and their coverage;
the original map and the host vehicle's initial position, target position, and navigation track are kept unchanged, while a new track of the obstacle vehicle is regenerated by planning and used as input to a new scene. Such a scene is closer to a real scene and can serve as a scene of diverse obstacle-vehicle motion in reality, verifying the rationality and reliability of the strategy;
without being limited to fixing the host vehicle's initial position, end position, and navigation track, the initial and end positions of the obstacle vehicle can be converted into the initial and target positions of the host vehicle (ADC), with the obstacle vehicle's original track as the ADC reference track, to form new scenes. These can be regarded as mirror images of the original scene, observing the host vehicle's automatic driving capability in the current scene from the perspective of the obstacle vehicle.
In an embodiment of the present disclosure, there is provided a scene generating apparatus, and fig. 4 is a block diagram of a scene generating apparatus provided according to an embodiment of the present disclosure, as shown in fig. 4, the apparatus includes: the first determining module 41, the dividing module 42, the second determining module 43, the third determining module 44, the processing module 45 and the generating module 46 are explained below.
A first determining module 41, configured to determine a first scene, where the target vehicle moves according to a first track and the obstacle moves according to a second track; a dividing module 42, connected to the first determining module 41, for dividing the second track into a plurality of discrete path segments; a second determining module 43, connected to the dividing module 42, for determining displacement offsets respectively offset from the plurality of path segments in displacement, so as to obtain a sum of displacement offsets offset from the second track; a third determining module 44, connected to the second determining module 43, for determining the speed offsets respectively shifted in speed from the plurality of path segments, to obtain a sum of the speed offsets shifted from the second track; a processing module 45, connected to the third determining module 44, for obtaining a third track based on the second track, the sum of displacement offsets, and the sum of velocity offsets; the generating module 46 is connected to the processing module 45 and is configured to generate the second scene based on the first track and the third track.
As an alternative embodiment, the processing module 45 includes: the device comprises a first determining unit and a first processing unit, wherein the first determining unit is used for determining a first weight corresponding to the sum of displacement offset and a second weight corresponding to the sum of velocity offset; the first processing unit is connected to the first determining unit and is used for obtaining a third track based on the second track, the sum of displacement offsets, the first weight, the sum of velocity offsets and the second weight.
As an optional embodiment, the first processing unit is configured to obtain, by adjusting the weight value of the first weight and the weight value of the second weight multiple times, the first weight value and the second weight value corresponding to the multiple times of adjustment respectively; and determining a third track corresponding to each of the plurality of adjustments based on the second track, the sum of displacement offsets, the sum of velocity offsets, and the first and second weight values corresponding to each of the plurality of adjustments.
As an alternative embodiment, the processing module 45 includes: a second determining unit and a second processing unit, wherein the second determining unit is used for determining a target lane center line in the first scene; the second processing unit is configured to obtain a third track based on the target lane center line, the second track, the sum of displacement offsets, and the sum of velocity offsets.
As an alternative embodiment, the second processing unit includes a first determining subunit, a second determining subunit, and a third determining subunit. The first determining subunit is configured to determine an initial track, where the initial track is offset in displacement from the second track by the sum of displacement offsets and offset in speed from the second track by the sum of speed offsets; the second determining subunit, connected to the first determining subunit, is configured to determine a lateral offset between the initial track and the target lane center line; and the third determining subunit, connected to the second determining subunit, is configured to determine the initial track as the third track when the lateral offset is smaller than a lateral offset threshold.
As an alternative embodiment, the second processing unit further includes: an adjusting subunit, connected to the second determining subunit, configured to, when the lateral offset is greater than or equal to the lateral offset threshold, adjust the initial track to obtain a target track whose lateral offset from the target lane center line is smaller than the lateral offset threshold; and a fourth determining subunit, connected to the adjusting subunit, configured to determine the target track as the third track.
As an alternative embodiment, in the second scene the target vehicle moves according to the first trajectory and the obstacle moves according to the third trajectory; alternatively, in the second scenario the target vehicle moves according to the third trajectory and the obstacle moves according to the first trajectory.
As an alternative embodiment, the target vehicle collides with the obstacle in the first scene, or the target vehicle does not collide with the obstacle in the first scene, but there is a risk of collision.
In the technical solution of the present disclosure, the collection, storage, and application of the user personal information involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Wherein, this electronic equipment includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the scene generating method of any of the above.
The readable storage medium may be a non-transitory computer readable storage medium storing computer instructions for causing a computer to execute the scene generating method of any one of the above.
The computer program product described above, comprising a computer program which, when executed by a processor, implements the scene generation method of any of the above.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the electronic device 500 includes a computing unit 501 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 may also be stored. The computing unit 501, ROM 502, and RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in electronic device 500 are connected to I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the electronic device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, for example, the scene generation method. For example, in some embodiments, the scene generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 500 via the ROM 502 and/or the communication unit 509. When a computer program is loaded into RAM 503 and executed by computing unit 501, one or more steps of the scene generation method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the scene generation method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (12)

1. A scene generation method, comprising:
determining a first scene, wherein a target vehicle moves according to a first track and an obstacle moves according to a second track in the first scene;
dividing the second trajectory into a discrete plurality of path segments;
determining displacement offset amounts respectively offset from the plurality of path segments in displacement to obtain a sum of the displacement offset amounts offset from the second track;
Determining speed offsets respectively offset in speed from the plurality of path segments to obtain a sum of the speed offsets offset from the second track;
based on the second track, the sum of displacement offset and the sum of velocity offset, a third track is obtained;
a second scene is generated based on the first track and the third track.
2. The method of claim 1, wherein the deriving a third trajectory based on the second trajectory, the displacement offset sum, and the velocity offset sum comprises:
determining a first weight corresponding to the displacement offset sum and a second weight corresponding to the velocity offset sum;
and obtaining the third track based on the second track, the displacement offset sum, the first weight, the velocity offset sum and the second weight.
3. The method of claim 2, wherein the deriving the third trajectory based on the second trajectory, the displacement offset sum, the first weight, the velocity offset sum, and the second weight comprises:
the first weight value and the second weight value corresponding to the multiple adjustments are obtained through multiple adjustments of the weight value of the first weight and the weight value of the second weight;
And determining third tracks corresponding to the multiple adjustments respectively based on the second tracks, the displacement offset sum, the velocity offset sum and the first weight value and the second weight value corresponding to the multiple adjustments respectively.
4. The method of claim 1, wherein the deriving a third trajectory based on the second trajectory, the displacement offset sum, and the velocity offset sum comprises:
determining a target lane centerline in the first scene;
and obtaining the third track based on the target lane center line, the second track, the displacement offset sum and the speed offset sum.
5. The method of claim 4, wherein the deriving the third track based on the target lane centerline, the second track, the displacement offset sum, and the speed offset sum comprises:
determining an initial trajectory, wherein the initial trajectory is offset in displacement from the second trajectory by the displacement offset sum, and is offset in speed from the second trajectory by the speed offset sum;
determining a lateral offset between the initial trajectory and the target lane line centerline;
And determining the initial track as the third track under the condition that the transverse offset is smaller than a transverse offset threshold value.
6. The method of claim 5, wherein the method further comprises:
under the condition that the transverse offset is larger than or equal to the transverse offset threshold, the initial track is adjusted to obtain a target track with the transverse offset smaller than the transverse offset threshold with the center line of the target lane line;
and determining the target track as the third track.
7. The method of claim 1, wherein,
in the second scene, the target vehicle moves according to the first track, and the obstacle moves according to the third track;
or,
in the second scene, the target vehicle moves according to the third track, and the obstacle moves according to the first track.
8. The method of any of claims 1-7, wherein the target vehicle collides with the obstacle in the first scenario or the target vehicle does not collide with the obstacle in the first scenario, but there is a risk of collision.
9. A scene generation apparatus comprising:
The first determining module is used for determining a first scene, wherein the target vehicle moves according to a first track and the obstacle moves according to a second track in the first scene;
a dividing module for dividing the second track into a plurality of discrete path segments;
the second determining module is used for determining displacement offset amounts respectively offset from the path segments in displacement to obtain a sum of the displacement offset amounts offset from the second track;
a third determining module, configured to determine velocity offsets that are respectively shifted in velocity from the plurality of path segments, and obtain a sum of velocity offsets that are shifted from the second track;
the processing module is used for obtaining a third track based on the second track, the sum of displacement offset and the sum of velocity offset;
and the generation module is used for generating a second scene based on the first track and the third track.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
11. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN202310102416.3A 2023-01-20 2023-01-20 Scene generation method and device and electronic equipment Pending CN116009556A (en)

Priority application: CN202310102416.3A, filed 2023-01-20.
Publication: CN116009556A, published 2023-04-25 (status: pending).
Family ID: 86034009.

CN117647258A (en) Path planning method, device, equipment and storage medium
CN117252296A (en) Track prediction method, device and system, electronic equipment and automatic driving vehicle
CN115649184A (en) Vehicle control instruction generation method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination