CN115544817B - Driving scene generation method and device, electronic equipment and computer readable medium - Google Patents


Info

Publication number
CN115544817B
CN115544817B
Authority
CN
China
Prior art keywords
scene
vehicle
information
segment
element information
Prior art date
Legal status
Active
Application number
CN202211535790.4A
Other languages
Chinese (zh)
Other versions
CN115544817A (en)
Inventor
李敏
张雄
洪炽杰
翁元祥
龙文
刘智睿
艾永军
王倩
申苗
Current Assignee
GAC Aion New Energy Automobile Co Ltd
Original Assignee
GAC Aion New Energy Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by GAC Aion New Energy Automobile Co Ltd
Priority to CN202211535790.4A
Publication of CN115544817A
Application granted
Publication of CN115544817B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)

Abstract

Embodiments of the present disclosure disclose a driving scene generation method and apparatus, an electronic device, and a computer-readable medium. One embodiment of the method comprises: acquiring a real scene data sequence corresponding to a target vehicle; performing element extraction processing on each piece of real scene data to obtain a scene element information sequence; for each piece of scene element information: constructing an initial scene segment according to the own vehicle element information, the obstacle vehicle element information set, and the road element information included in the scene element information, and, in response to detecting an editing operation on the initial scene segment, updating the initial scene segment according to the editing operation to obtain the updated initial scene segment as a scene segment; generating a simulated scene segment sequence from the obtained scene segments; and generating a driving scene from the simulated scene segment sequence. This embodiment reduces the difference between the generated driving scene and the real scene, and improves the safety of automated driving performed according to the tested automated-driving scheme.

Description

Driving scene generation method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a driving scene generation method and device, electronic equipment and a computer readable medium.
Background
Simulated driving scenarios may be used to run simulation tests on automated-driving technologies. At present, the approach generally adopted when constructing a driving scene is to manually edit the various elements of a simulation scene.
However, the inventors have found that constructing driving scenes in this way often suffers from the following technical problems:

First, elements of a real scene cannot be incorporated into the edited scene. The edited scene therefore differs greatly from the real scene, an automated-driving scheme tested against the edited scene is validated with poor accuracy, and automated driving performed according to that scheme is less safe.

Second, the difference between the simulation scene and the real scene cannot be quantified, so the degree of fit between the two cannot be checked. The simulation scene again diverges from the real scene, the automated-driving scheme tested against it is validated with poor accuracy, and automated driving performed according to that scheme is less safe.
The information disclosed in this Background section is provided only to aid understanding of the background of the inventive concept, and may therefore contain information that does not constitute prior art already known to a person of ordinary skill in the art in this country.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose driving scenario generation methods, apparatuses, electronic devices, and computer readable media to address one or more of the technical problems noted in the background section above.
In a first aspect, some embodiments of the present disclosure provide a driving scenario generation method, including: acquiring a real scene data sequence of a corresponding target vehicle within a preset time period, wherein the real scene data in the real scene data sequence comprises self vehicle real scene information, an obstacle vehicle real scene information set and road information; performing element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence, wherein the scene element information in the scene element information sequence comprises vehicle element information, obstacle vehicle element information set and road element information; for each scene element information in the sequence of scene element information, performing the steps of: constructing an initial scene segment corresponding to the scene element information according to the vehicle element information, the obstacle vehicle element information set and the road element information which are included in the scene element information, wherein the initial scene segment comprises a vehicle element, an obstacle vehicle element set and a road element; in response to the detection of the editing operation aiming at the initial scene segment, updating the initial scene segment according to the editing operation to obtain an updated initial scene segment as a scene segment; generating a simulated scene segment sequence according to each obtained scene segment; and generating a driving scene according to the simulated scene segment sequence.
In a second aspect, some embodiments of the present disclosure provide a driving scenario generation apparatus, comprising: the system comprises an acquisition unit, a display unit and a control unit, wherein the acquisition unit is configured to acquire a real scene data sequence of a corresponding target vehicle within a preset time period, and real scene data in the real scene data sequence comprises own vehicle real scene information, an obstacle vehicle real scene information set and road information; an extraction unit configured to perform element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence, wherein the scene element information in the scene element information sequence includes vehicle element information, obstacle vehicle element information set, and road element information; an execution unit configured to execute, for each scene element information in the scene element information sequence, the following steps: constructing an initial scene segment corresponding to the scene element information according to the vehicle element information, the obstacle vehicle element information set and the road element information which are included in the scene element information, wherein the initial scene segment comprises a vehicle element, an obstacle vehicle element set and a road element; in response to the detection of the editing operation aiming at the initial scene segment, updating the initial scene segment according to the editing operation to obtain an updated initial scene segment as a scene segment; a first generating unit configured to generate a sequence of simulated scene segments from the obtained individual scene segments; and the second generating unit is configured to generate the driving scene according to the simulated scene segment sequence.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following advantages: the driving scene generation method of some embodiments of the present disclosure reduces the difference between the generated driving scene and the real scene and improves the safety of automated driving performed according to an automated-driving scheme. Specifically, the reason edited scenes differ greatly from real scenes, and the reason automated driving according to the related scheme is less safe, is that elements of a real scene cannot be incorporated into an edited scene; an automated-driving scheme tested against such a scene is therefore validated with poor accuracy.

Based on this, the driving scene generation method of some embodiments of the present disclosure first acquires a real scene data sequence corresponding to the target vehicle within a preset time period. Each real scene data in the sequence includes own vehicle real scene information, an obstacle vehicle real scene information set, and road information, so the sequence can characterize each real scene at the target vehicle within the preset time period. Element extraction processing is then performed on each real scene data to obtain a scene element information sequence; each scene element information includes own vehicle element information, an obstacle vehicle element information set, and road element information, and can serve as the real-scene data source for constructing one scene segment. Next, for each scene element information, the following steps are performed. First, an initial scene segment corresponding to the scene element information is constructed from the own vehicle element information, the obstacle vehicle element information set, and the road element information; the initial scene segment includes an own vehicle element, an obstacle vehicle element set, and a road element, and so characterizes the reconstructed real scene. Second, in response to detecting an editing operation on the initial scene segment, the segment is updated according to the editing operation, and the updated segment is taken as a scene segment; this lets the user edit on top of the reconstructed real scene. A simulated scene segment sequence is then generated from the obtained scene segments, serving as the individual scene segments simulated on the basis of the real scenes. Finally, a driving scene is generated from the simulated scene segment sequence, so that the simulated driving scene is composed of the simulated scene segments.

Because the simulated scene segment sequence is generated from each real scene, elements of the real scene are incorporated into the driving scene, and the difference between the driving scene and the real scene is reduced. In turn, the accuracy with which an automated-driving scheme can be tested against the generated scene is improved, and the safety of automated driving performed according to that scheme is improved.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a flow diagram of some embodiments of a driving scenario generation method according to the present disclosure;
FIG. 2 is a schematic block diagram of some embodiments of a driving scenario generation apparatus according to the present disclosure;
FIG. 3 is a schematic block diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
For operations involving the collection, storage, or use of users' personal information (such as real scene data) in the present disclosure, the relevant organization or individual should, to the extent possible, fulfill obligations such as carrying out a personal information security impact assessment, notifying the personal information subject, and obtaining the subject's prior authorization and consent before performing the corresponding operation.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a driving scenario generation method in accordance with the present disclosure. The driving scene generation method comprises the following steps:
step 101, acquiring a real scene data sequence of a corresponding target vehicle within a preset time period.
In some embodiments, an executing subject of the driving scene generation method (e.g., a computing device) may acquire, through a wired or wireless connection, the real scene data sequence corresponding to the target vehicle within a preset time period from a database or from an in-vehicle terminal of the target vehicle. The target vehicle may be any vehicle for which a driving scene needs to be constructed. The preset time period may be set as needed; it may be a historical period or a future period, and its specific setting is not limited. The real scene data sequence may consist of the individual real scene data sampled at a preset interval within the preset time period, each real scene data characterizing the real scene at one time point. The real scene data in the sequence may include, but is not limited to: own vehicle real scene information, an obstacle vehicle real scene information set, and road information. The own vehicle real scene information may be real-scene-related information of the target vehicle and may include, but is not limited to, at least one of: vehicle type, vehicle size, speed, acceleration, lateral speed, longitudinal speed, and steering wheel torque. The obstacle vehicle real scene information set may be the real-scene-related information of each obstacle vehicle hindering the target vehicle's travel; each entry in the set may include, but is not limited to, at least one of: vehicle type, vehicle size, speed, acceleration, lateral speed, longitudinal speed, and obstacle vehicle relative position. The obstacle vehicle relative position is the obstacle vehicle's position with respect to the target vehicle; for example, it may be the right rear side of the target vehicle. The road information may be real-scene-related information of the road on which the target vehicle travels and may include, but is not limited to, at least one of: signal light information, road type, and obstacle information. The signal light information relates to the signal light ahead of the target vehicle and may include, but is not limited to: signal light type and flashing signal light identifier, where the flashing signal light identifier identifies the lit signal light. The obstacle information may be information on obstacles hindering the target vehicle's travel and may include, but is not limited to: obstacle type, obstacle bounding box information, and obstacle relative position. The obstacle relative position is the obstacle's position with respect to the target vehicle; for example, it may be the right front side of the target vehicle.
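To make the data layout above concrete, here is a minimal Python sketch of one real scene data record; all class and field names are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class EgoVehicleInfo:
    """Own vehicle real scene information (fields as enumerated above)."""
    vehicle_type: str
    vehicle_size: Tuple[float, float, float]  # assumed (length, width, height), meters
    speed: float
    acceleration: float
    lateral_speed: float
    longitudinal_speed: float
    steering_wheel_torque: float

@dataclass
class ObstacleVehicleInfo:
    """Real scene information for one obstacle vehicle."""
    vehicle_type: str
    vehicle_size: Tuple[float, float, float]
    speed: float
    acceleration: float
    lateral_speed: float
    longitudinal_speed: float
    relative_position: str  # position relative to the target vehicle, e.g. "right rear"

@dataclass
class RoadInfo:
    """Real scene information for the road the target vehicle travels on."""
    signal_light_info: Optional[dict]  # signal light type, flashing-light identifier
    road_type: str
    obstacles: List[dict] = field(default_factory=list)  # type, bounding box, relative position

@dataclass
class RealSceneData:
    """One element of the real scene data sequence: the scene at one time point."""
    timestamp: float
    ego: EgoVehicleInfo
    obstacle_set: List[ObstacleVehicleInfo]
    road: RoadInfo
    environment: Optional[dict] = None  # optional: illumination, humidity, temperature, visibility
```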
It is noted that the wireless connection means may include, but is not limited to, a 3G/4G connection, a WiFi connection, a bluetooth connection, a WiMAX connection, a Zigbee connection, a UWB (ultra wideband) connection, and other wireless connection means now known or developed in the future.
And 102, performing element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence.
In some embodiments, the executing entity may perform element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence. The scene element information in the scene element information sequence may include vehicle element information, obstacle vehicle element information set, and road element information.
Optionally, the real scene data may further include environmental information. The environment information may be information related to a real scene of an environment in which the target vehicle is located. The environmental information may include, but is not limited to, at least one of the following: illumination, humidity, temperature, visibility.
In some optional implementations of some embodiments, first, for each real scene data in the real scene data sequence, the executing body may perform the following steps:
First, the own vehicle real scene information included in the real scene data is determined as the own vehicle element information. Thus, the own vehicle element information can be used as source data for constructing the own vehicle element.
And secondly, determining the obstacle vehicle real scene information set included in the real scene data as an obstacle vehicle element information set. Thus, the respective obstacle vehicle element information can be used as source data for constructing the respective obstacle vehicle elements.
And thirdly, determining the road information included in the real scene data as the road element information. Thus, the road element information can be used as source data for constructing road elements.
And fourthly, determining the environmental information included in the real scene data as environmental element information. Thus, the environment element information can be used as source data for constructing the environment element.
And a fifth step of combining the own vehicle element information, the obstacle vehicle element information set, the road element information, and the environment element information into scene element information. In practice, the execution body may merge the own vehicle element information, the obstacle vehicle element information set, the road element information, and the environment element information into scene element information. Thus, the scene element information can be used as source data for constructing a scene segment.
Then, the respective scene element information may be combined into a scene element information sequence.
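A minimal sketch of this extraction step, reusing the record types from the earlier sketch (all names assumed). Since each kind of element information is taken over directly from the corresponding real scene information, the function is essentially a regrouping:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SceneElementInfo:
    """Merged element information: source data for building one scene segment."""
    ego_element: "EgoVehicleInfo"
    obstacle_elements: List["ObstacleVehicleInfo"]
    road_element: "RoadInfo"
    environment_element: Optional[dict]

def extract_scene_elements(sequence: List["RealSceneData"]) -> List[SceneElementInfo]:
    """Steps one to five above, applied to every real scene data in the sequence."""
    return [
        SceneElementInfo(
            ego_element=data.ego,                        # first step
            obstacle_elements=list(data.obstacle_set),   # second step
            road_element=data.road,                      # third step
            environment_element=data.environment,        # fourth step
        )
        for data in sequence                             # fifth step: merge, then sequence
    ]
```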
Step 103, for each scene element information in the scene element information sequence, executing the following steps:
and step 1031, constructing an initial scene segment corresponding to the scene element information according to the vehicle element information, the obstacle vehicle element information set and the road element information included in the scene element information.
In some embodiments, the execution subject may construct an initial scene segment corresponding to the scene element information, based on the vehicle element information, the obstacle vehicle element information set, and the road element information included in the scene element information. The initial scene segment may include a host vehicle element, an obstacle vehicle element set, and a road element.
In some optional implementations of some embodiments, the executing body may execute the following steps to construct an initial scene segment corresponding to the scene element information according to the vehicle element information, the set of obstacle vehicle element information, and the road element information included in the scene element information:
first, a vehicle model corresponding to the vehicle factor information is generated as a vehicle factor based on the vehicle factor information. In practice, the execution subject may construct, as the own vehicle element, a vehicle model in which the vehicle type is the vehicle type included in the own vehicle element information and the vehicle size is equal to or larger than the vehicle size included in the own vehicle element information. Here, the vehicle model may be a three-dimensional vehicle model. Each piece of information included in the vehicle element information may be set as element information of the vehicle element.
And a second step of generating an obstacle vehicle model corresponding to the obstacle vehicle element information as an obstacle vehicle element, based on each obstacle vehicle element information in the obstacle vehicle element information set. In practice, the executing body may construct, as the obstacle vehicle element, a vehicle model in which the vehicle type is the vehicle type included in the obstacle vehicle element information and the vehicle size is equal to the vehicle size included in the obstacle vehicle element information. Then, each piece of information included in the obstacle vehicle element information may be set as element information of the obstacle vehicle element.
And a third step of determining each of the generated obstacle vehicle elements as an obstacle vehicle element set.
And a fourth step of generating a road model corresponding to the road element information as a road element based on the road element information. In practice, first, the execution body may construct a traffic light model based on traffic light information included in the road element information. Specifically, the execution subject may construct a signal lamp model in which the signal lamp type is the signal lamp type included in the signal lamp information, and the lighted signal lamp corresponds to the blinking signal lamp identifier included in the signal lamp information. The signal lamp model may be a three-dimensional model. Then, a road model may be constructed based on the road type included in the above-described road element information. The road model may be a three-dimensional model. Next, an obstacle model may be constructed based on obstacle information included in the road element information. Specifically, an obstacle model may be constructed in which the obstacle type is the obstacle type included in the obstacle information, the obstacle bounding box is equal to the bounding box corresponding to the obstacle bounding box information included in the obstacle information, and the obstacle position is the obstacle relative position included in the obstacle information. The obstacle model may be a three-dimensional model. Then, the traffic light model, the road model, and the obstacle model may be combined into a road element. Then, each piece of information included in the road element information may be set as element information of the road element.
And a fifth step of generating an environment model corresponding to the environment element information as an environment element based on the environment element information. In practice, the executing entity may construct an environment model with illuminance, humidity, temperature, and visibility respectively being illuminance, humidity, temperature, and visibility included in the environment element information as an environment element. The environmental model may be a three-dimensional model.
And sixthly, performing superposition processing on the vehicle elements, the obstacle vehicle element set, the road elements and the environment elements to obtain an initial scene segment. Wherein the initial scene segment includes the host vehicle element, the set of obstacle vehicle elements, the road element, and the environment element. In practice, the executing body may superimpose the own vehicle element, the obstacle vehicle element set, the road element, and the environment element on the same map layer to obtain an initial scene segment.
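Under the same assumptions, the six construction steps can be sketched as below. The `build_*` helpers are hypothetical placeholders for whatever 3D model pipeline is actually used; only the overall wiring follows the description above.

```python
def build_initial_scene_segment(info: "SceneElementInfo") -> dict:
    """Build one initial scene segment from one piece of scene element information."""
    ego_element = build_vehicle_model(info.ego_element)             # first step
    obstacle_elements = [build_vehicle_model(o)                     # second step
                         for o in info.obstacle_elements]           # third step: the set
    # Fourth step: signal light model + road model + obstacle models, combined.
    road_element = build_road_model(info.road_element)
    environment_element = build_environment_model(info.environment_element)  # fifth step
    # Sixth step: superimpose all elements on the same layer.
    return {
        "ego": ego_element,
        "obstacles": obstacle_elements,
        "road": road_element,
        "environment": environment_element,
    }
```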
Step 1032, in response to detecting the editing operation for the initial scene segment, updating the initial scene segment according to the editing operation, and obtaining the updated initial scene segment as the scene segment.
In some embodiments, the executing entity may update the initial scene segment according to the editing operation in response to detecting the editing operation for the initial scene segment, so as to obtain the updated initial scene segment as the scene segment.
In some optional implementations of some embodiments, in response to detecting an element configuration operation on any scene element in the initial scene segment, the execution main body may update the element information set of the any scene element according to the element configuration information corresponding to the element configuration operation, so as to update the initial scene segment. The arbitrary scene element may be any element of the vehicle element, the obstacle vehicle element, the road element and the environment element included in the initial scene segment. The element arrangement operation may be an operation of arranging element information of the elements. For example, the element configuration operation described above may include, but is not limited to, at least one of: click, slide, enter, hover.
In some optional implementations of some embodiments, first, in response to detecting a road element adding operation on a road element in the initial scene segment, the executing body may add element information corresponding to the road element adding operation to the set of element information of the road element. The road element adding operation may be an operation of adding element information corresponding to the road element. For example, the element information corresponding to the road element adding operation may be lane speed limit information. The lane speed limit information may be a maximum value of the speed of travel on the current road element. Then, in response to detecting a road type element configuration operation on a road element in the initial scene segment, element information of a corresponding road type in the element information set of the road element may be updated to a configured road type corresponding to the road type element configuration operation. The above-described road type element configuration operation may be an operation of modifying a road type. In practice, the executing body may replace the element information of the corresponding road type in the set of element information of the road element with the configured road type. Alternatively, the execution body may update the road type of the road element to the post-configuration road type.
In some optional implementations of some embodiments, the executing entity may delete any scene element from the initial scene segment in response to detecting a deletion operation corresponding to the any scene element in the initial scene segment, so as to update the initial scene segment.
In some optional implementations of some embodiments, the executing entity may add, in response to detecting a component addition operation corresponding to the initial scene segment, an added component corresponding to the component addition operation to the initial scene segment to update the initial scene segment. The element addition operation may be, but is not limited to, at least one of the following: element pasting operation and element importing operation. The element pasting operation may be an operation of pasting elements after copying any of the own vehicle element, each obstacle vehicle element, the road element, and the environment element included in the initial scene segment. The element importing operation may be an operation of importing a new scene element from a scene element database. The scene element database may be a database for storing scene elements. Alternatively, the scene element database may be a database for storing scene elements of real scenes. The added elements may be pasted or imported scene elements.
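The editing operations above reduce to three updates on the segment's elements: reconfigure, delete, and add. A minimal sketch, continuing the dict-based segment from the previous sketch and assuming each element is stored as a dict carrying an "element_info" mapping; the operation encoding (`kind`, `target`, `payload`) is likewise an assumption for illustration.

```python
def apply_edit(segment: dict, kind: str, target: str, payload=None) -> dict:
    """Update an initial scene segment according to one detected editing operation."""
    if kind == "configure":
        # Element configuration: overwrite element information of the target element,
        # e.g. adding lane speed limit info or replacing the road type.
        segment[target]["element_info"].update(payload)
    elif kind == "delete":
        # Deletion operation on any scene element.
        segment.pop(target, None)
    elif kind == "add":
        # Element addition: a pasted copy, or an element imported
        # from the scene element database.
        segment[target] = payload
    return segment
```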
And 104, generating a simulated scene segment sequence according to the obtained scene segments.
In some embodiments, the execution agent may generate a sequence of simulated scene segments from the obtained scene segments.
In some optional implementations of some embodiments, first, the execution body may combine the scene segments into a scene segment sequence. In practice, the execution subject may combine the scene segments into a sequence of scene segments according to a chronological order. Then, each scene segment in the scene segment sequence may be input to a simulated scene segment generation model trained in advance, so as to obtain a simulated scene segment sequence. And the simulated scene segments in the simulated scene segment sequence correspond to the scene segments in the scene segment sequence. The corresponding relationship between the simulated scene segments in the simulated scene segment sequence and the scene segments in the scene segment sequence may be a one-to-one correspondence. The simulation scene segment generation model may be a generation model that takes a scene segment as an input and takes a simulation scene segment corresponding to the scene segment as an output. For example, the generative model may be a diffusion model.
Optionally, the simulation scene segment generating model may include an input layer, a first generating model, a second generating model, a third generating model, and a decision layer. The input layer described above may be used for feature extraction of input data. The first generative model, the second generative model, and the third generative model may be different types of generative models used to generate new simulated scene segments. The decision layer may be configured to select a scene segment that meets a preset condition from the scene segments generated by the first generation model, the second generation model, and the third generation model as a simulated scene segment. The preset condition may be that the similarity between the generated simulated scene segment and the input scene segment is within a preset range. The specific setting of the preset range is not limited. If a plurality of simulated scene segments meeting the preset condition exist, one of the simulated scene segments can be randomly selected as an output simulated scene segment. The input layer is connected to the first generative model, the second generative model, and the third generative model, respectively. The first generative model, the second generative model, and the third generative model are connected to the decision layer.
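A minimal sketch of the decision layer's selection rule, assuming a `similarity` scoring function in [0, 1] (the description does not fix a concrete metric, and the preset range below is an illustrative placeholder):

```python
import random

def decision_layer(input_segment, candidates, preset_range=(0.7, 0.95)):
    """Pick a simulated scene segment from the outputs of the first, second,
    and third generative models: keep candidates whose similarity to the input
    falls within the preset range, then choose one at random if several qualify."""
    low, high = preset_range
    qualifying = [c for c in candidates
                  if low <= similarity(c, input_segment) <= high]
    if not qualifying:
        return None  # no candidate met the preset condition
    return random.choice(qualifying)
```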
Optionally, the simulation scene segment generation model may be obtained by training the following steps:
firstly, a scene element set is obtained from the scene element database as a sample set. In practice, the scene elements corresponding to the target vehicle may be obtained from the scene element database. The execution subject for training the simulation scene segment generation model may be the execution subject, or may be another computing device.
Secondly, performing the following training steps based on the sample set:
the method comprises a first training step of inputting at least one sample in a sample set to an input layer of an initial simulation scene segment generation model respectively to obtain a feature vector corresponding to each sample in the at least one sample.
And a second training step, namely respectively inputting the feature vector corresponding to each sample in the at least one sample into a first generation model, a second generation model and a third generation model of the initial simulation scene segment generation model to obtain a first simulation scene segment, a second simulation scene segment and a third simulation scene segment corresponding to each sample in the at least one sample.
And a third training step, namely inputting the first simulated scene segment, the second simulated scene segment and the third simulated scene segment corresponding to each sample in the at least one sample into the decision layer of the initial simulation scene segment generation model, to obtain the simulated scene segment corresponding to each sample in the at least one sample.
And a fourth training step, comparing the simulated scene segment corresponding to each sample in the at least one sample with the sample. Here, the manner of comparison may be a manner of determining similarity of the simulated scene segment and the sample.
And a fifth training step, determining whether the initial simulation scene segment generation model reaches a preset optimization target according to the comparison result. The comparison result may include a similarity corresponding to each of the at least one sample. The optimization target may be that the ratio of the similarity in the target range is greater than a preset ratio. The setting of the target range and the preset ratio is not limited.
And a sixth training step, in response to determining that the initial simulation scene segment generation model reaches the optimization goal, taking the initial simulation scene segment generation model as a simulated scene segment generation model after training.
Optionally, the step of training to obtain the simulation scene segment generation model may further include:
a seventh training step of adjusting network parameters of the initial simulation scene segment generation model in response to determining that the initial simulation scene segment generation model does not meet the optimization goal, and forming a sample set using unused samples, using the adjusted initial simulation scene segment generation model as the initial simulation scene segment generation model, and performing the training step again. By way of example, a Back Propagation Algorithm (BP Algorithm) and a gradient descent method (e.g., a random small batch gradient descent Algorithm) may be used to adjust the network parameters of the initial simulation scene segment generation model.
The simulation scene segment generation model and its related content are an inventive point of the embodiments of the present disclosure, and they solve the second technical problem mentioned in the Background: the difference between the simulation scene and the real scene cannot be quantified, so the degree of fit between the two cannot be verified; the simulation scene then diverges from the real scene, the automated-driving scheme tested against it is validated with poor accuracy, and automated driving performed according to that scheme is less safe. If this factor is resolved, the difference between the simulation scene and the real scene can be reduced and the safety of automated driving according to the related scheme improved. To achieve this effect, the present disclosure introduces the simulation scene segment generation model: during training, the similarity between the simulated scene segment generated by the initial model and the input scene segment is compared to determine whether the initial model has reached the optimization target. This quantifies the difference between the simulation scene and the real scene, makes the degree of fit between them verifiable, and thereby reduces the difference between the generated driving scene and the real scene. Consequently, the accuracy with which an automated-driving scheme can be tested against the generated driving scene is improved, and the safety of automated driving performed according to that scheme is improved.
And 105, generating a driving scene according to the simulated scene segment sequence.
In some embodiments, the execution subject may generate a driving scene according to the simulated scene segment sequence. In practice, the execution subject may sequentially connect the simulated scene segments in the simulated scene segment sequence to obtain the driving scene. Here, the connection mode may be a mode of splicing multiple frames of the simulated scene segments into the simulated scene animation.
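The final assembly is a straightforward concatenation; a minimal sketch, with `render_frames` standing in for whatever per-segment rendering the simulator provides:

```python
def generate_driving_scene(simulated_segments: list) -> list:
    """Connect the simulated scene segments in sequence order, splicing their
    frames into one driving scene animation (a flat frame list here)."""
    frames = []
    for segment in simulated_segments:
        frames.extend(render_frames(segment))
    return frames
```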
Alternatively, the execution subject may store, in the scene element database, each vehicle element, each obstacle vehicle element set, each road element, and each environment element included in each constructed initial scene segment. In this way, the scene element of the constructed real scene can be stored in the scene element database.
The above embodiments of the present disclosure have the following beneficial effects: the driving scene generation method of some embodiments of the present disclosure reduces the difference between the generated driving scene and the real scene and improves the safety of automated driving performed according to an automated-driving scheme. Specifically, the reason edited scenes differ greatly from real scenes, and the reason automated driving according to the related scheme is less safe, is that elements of a real scene cannot be incorporated into an edited scene; an automated-driving scheme tested against such a scene is therefore validated with poor accuracy.

Based on this, the driving scene generation method of some embodiments of the present disclosure first acquires a real scene data sequence corresponding to the target vehicle within a preset time period. Each real scene data in the sequence includes own vehicle real scene information, an obstacle vehicle real scene information set, and road information, so the sequence can characterize each real scene at the target vehicle within the preset time period. Element extraction processing is then performed on each real scene data to obtain a scene element information sequence; each scene element information includes own vehicle element information, an obstacle vehicle element information set, and road element information, and can serve as the real-scene data source for constructing one scene segment. Next, for each scene element information, the following steps are performed. First, an initial scene segment corresponding to the scene element information is constructed from the own vehicle element information, the obstacle vehicle element information set, and the road element information; the initial scene segment includes an own vehicle element, an obstacle vehicle element set, and a road element, and so characterizes the reconstructed real scene. Second, in response to detecting an editing operation on the initial scene segment, the segment is updated according to the editing operation, and the updated segment is taken as a scene segment; this lets the user edit on top of the reconstructed real scene. A simulated scene segment sequence is then generated from the obtained scene segments, serving as the individual scene segments simulated on the basis of the real scenes. Finally, a driving scene is generated from the simulated scene segment sequence, so that the simulated driving scene is composed of the simulated scene segments.

Because the simulated scene segment sequence is generated from each real scene, elements of the real scene are incorporated into the driving scene, and the difference between the driving scene and the real scene is reduced. In turn, the accuracy with which an automated-driving scheme can be tested against the generated scene is improved, and the safety of automated driving performed according to that scheme is improved.
With further reference to fig. 2, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a driving scenario generation apparatus, which correspond to those method embodiments illustrated in fig. 1, and which may be applied in various electronic devices in particular.
As shown in fig. 2, the driving scenario generation apparatus 200 of some embodiments includes: an acquisition unit 201, an extraction unit 202, an execution unit 203, a first generation unit 204, and a second generation unit 205. The acquiring unit 201 is configured to acquire a real scene data sequence of a corresponding target vehicle within a preset time period, wherein the real scene data in the real scene data sequence comprises own vehicle real scene information, obstacle vehicle real scene information set and road information; the extracting unit 202 is configured to perform element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence, wherein the scene element information in the scene element information sequence includes vehicle element information, obstacle vehicle element information set, and road element information; the execution unit 203 is configured to, for each scene element information in the above-described sequence of scene element information, execute the steps of: constructing an initial scene segment corresponding to the scene element information according to the vehicle element information, the obstacle vehicle element information set and the road element information which are included in the scene element information, wherein the initial scene segment comprises a vehicle element, an obstacle vehicle element set and a road element; in response to the detection of the editing operation aiming at the initial scene segment, updating the initial scene segment according to the editing operation to obtain an updated initial scene segment as a scene segment; the first generating unit 204 is configured to generate a sequence of simulated scene segments from the obtained individual scene segments; the second generating unit 205 is configured to generate a driving scene from the simulated scene segment sequence described above.
It will be appreciated that the units described in the apparatus 200 correspond to the various steps in the method described with reference to figure 1. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 200 and the units included therein, and are not described herein again.
Referring now to FIG. 3, a block diagram of an electronic device 300 (e.g., a server) suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means 301 (e.g., a central processing unit, a graphics processor, etc.) that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate with other devices, wireless or wired, to exchange data. While fig. 3 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 3 may represent one device or may represent multiple devices, as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a real scene data sequence of a corresponding target vehicle within a preset time period, wherein the real scene data in the real scene data sequence comprises self vehicle real scene information, an obstacle vehicle real scene information set and road information; performing element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence, wherein the scene element information in the scene element information sequence comprises vehicle element information, obstacle vehicle element information set and road element information; for each scene element information in the sequence of scene element information, performing the steps of: constructing an initial scene segment corresponding to the scene element information according to the vehicle element information, the obstacle vehicle element information set and the road element information which are included in the scene element information, wherein the initial scene segment comprises a vehicle element, an obstacle vehicle element set and a road element; in response to the detection of the editing operation aiming at the initial scene segment, updating the initial scene segment according to the editing operation to obtain an updated initial scene segment as a scene segment; generating a simulated scene segment sequence according to each obtained scene segment; and generating a driving scene according to the simulated scene segment sequence.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented in software or in hardware. The described units may also be provided in a processor, which may be described as: a processor comprising an acquisition unit, an extraction unit, an execution unit, a first generation unit, and a second generation unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the acquisition unit may also be described as a "unit that acquires a real scene data sequence corresponding to a target vehicle within a preset time period".
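As a sketch only, and assuming a purely software realization (the disclosure equally permits hardware), the five units might be composed as a simple pipeline; the class and parameter names below are hypothetical:

    # Hypothetical composition of the five described units; each unit is any
    # callable implementing its step, and each unit's output feeds the next.
    class DrivingSceneProcessor:
        def __init__(self, acquisition, extraction, execution,
                     first_generation, second_generation):
            self.units = [acquisition, extraction, execution,
                          first_generation, second_generation]

        def run(self, request):
            result = request
            for unit in self.units:
                result = unit(result)
            return result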
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (8)

1. A driving scenario generation method, comprising:
acquiring a real scene data sequence corresponding to a target vehicle within a preset time period, wherein the real scene data in the real scene data sequence comprises own vehicle real scene information, an obstacle vehicle real scene information set, road information, and environment information, and the target vehicle is any vehicle for which a driving scene needs to be constructed;
performing element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence, wherein the scene element information in the scene element information sequence comprises own vehicle element information, an obstacle vehicle element information set, and road element information, and the performing element extraction processing on each real scene data in the real scene data sequence to obtain the scene element information sequence comprises:
for each real scene data in the sequence of real scene data, performing the steps of:
determining the own vehicle real scene information included in the real scene data as own vehicle element information, wherein the own vehicle real scene information is real scene related information of the target vehicle, and the own vehicle real scene information includes: vehicle type, vehicle size, speed, acceleration, lateral speed, longitudinal speed, steering wheel torque;
determining an obstacle vehicle real scene information set included in the real scene data as an obstacle vehicle element information set, wherein the obstacle vehicle real scene information set is real scene related information of each obstacle vehicle which hinders the target vehicle from running, and the obstacle vehicle real scene information in the obstacle vehicle real scene information set includes: vehicle type, vehicle size, speed, acceleration, lateral speed, longitudinal speed, obstacle vehicle relative position;
determining road information included in the real scene data as road element information, wherein the road information is real scene related information of a road on which the target vehicle travels, and the road information includes: signal light information, road type, obstacle information;
determining environmental information included in the real scene data as environmental element information, wherein the environmental information is real scene related information of an environment in which the target vehicle is located, and the environmental information includes: illuminance, humidity, temperature, visibility;
combining the own vehicle element information, the obstacle vehicle element information set, the road element information and the environment element information into scene element information;
combining the scene element information into a scene element information sequence;
for each scene element information in the sequence of scene element information, performing the steps of:
constructing an initial scene segment corresponding to the scene element information according to the own vehicle element information, the obstacle vehicle element information set, and the road element information included in the scene element information, wherein the initial scene segment comprises an own vehicle element, an obstacle vehicle element set, and a road element, and the constructing an initial scene segment corresponding to the scene element information comprises:
constructing, as an own vehicle element, a vehicle model whose vehicle type is the vehicle type included in the own vehicle element information and whose vehicle size is equal to or larger than the vehicle size included in the own vehicle element information, wherein the vehicle model is a three-dimensional vehicle model;
constructing, as an obstacle vehicle element, a vehicle model whose vehicle type is the vehicle type included in the obstacle vehicle element information and whose vehicle size is equal to or larger than the vehicle size included in the obstacle vehicle element information;
determining each of the generated obstacle vehicle elements as an obstacle vehicle element set;
generating a road model corresponding to the road element information as a road element according to the road element information;
generating an environment model corresponding to the environment element information as an environment element according to the environment element information;
performing superposition processing on the own vehicle element, the obstacle vehicle element set, the road element, and the environment element to obtain an initial scene segment, wherein the initial scene segment comprises the own vehicle element, the obstacle vehicle element set, the road element, and the environment element;
in response to detecting an editing operation for the initial scene segment, updating the initial scene segment according to the editing operation to obtain an updated initial scene segment as a scene segment;
generating a simulated scene segment sequence according to the obtained scene segments, wherein the generating of the simulated scene segment sequence according to the obtained scene segments comprises:
combining the scene segments into a scene segment sequence;
inputting each scene segment in the scene segment sequence into a pre-trained simulated scene segment generation model to obtain a simulated scene segment sequence, wherein the simulated scene segments in the simulated scene segment sequence correspond one-to-one to the scene segments in the scene segment sequence, the simulated scene segment generation model is a generation model that takes a scene segment as input and outputs the simulated scene segment corresponding to the scene segment, the simulated scene segment generation model comprises an input layer, a first generation model, a second generation model, a third generation model, and a decision layer, and the simulated scene segment generation model is obtained by training through the following steps:
acquiring a scene element set from a scene element database as a sample set;
performing the following training steps based on the sample set:
respectively inputting at least one sample in the sample set into an input layer of an initial simulated scene segment generation model to obtain a feature vector corresponding to each sample of the at least one sample;
respectively inputting the feature vector corresponding to each sample of the at least one sample into a first generation model, a second generation model, and a third generation model of the initial simulated scene segment generation model to obtain a first simulated scene segment, a second simulated scene segment, and a third simulated scene segment corresponding to each sample of the at least one sample;
inputting the first simulated scene segment, the second simulated scene segment, and the third simulated scene segment corresponding to each sample of the at least one sample into a decision layer of the initial simulated scene segment generation model to obtain a simulated scene segment corresponding to each sample of the at least one sample;
comparing the simulated scene segment corresponding to each sample of the at least one sample with that sample;
determining, according to the comparison results, whether the initial simulated scene segment generation model reaches a preset optimization target;
in response to determining that the initial simulated scene segment generation model reaches the optimization target, taking the initial simulated scene segment generation model as the trained simulated scene segment generation model;
in response to determining that the initial simulated scene segment generation model does not reach the optimization target, adjusting network parameters of the initial simulated scene segment generation model, forming a sample set from unused samples, taking the adjusted initial simulated scene segment generation model as the initial simulated scene segment generation model, and performing the training steps again;
and generating a driving scene according to the simulated scene segment sequence.
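For illustration of the training steps recited in claim 1 above, the following Python sketch assumes a model object exposing an input layer, three generation models, a decision layer, a compare() metric, and an adjust() hook for its network parameters; these interfaces, the averaged loss, and the stopping criteria are assumptions of this sketch, not limitations of the claim:

    # Hypothetical sketch of the claimed training steps; `model` and
    # `batch_stream` interfaces are assumed, not taken from the disclosure.
    def train(model, batch_stream, target_loss, max_rounds=1000):
        for rounds, batch in enumerate(batch_stream, start=1):
            # Input layer: one feature vector per sample.
            features = [model.input_layer(sample) for sample in batch]
            fused = []
            for vector in features:
                # The first, second, and third generation models each run on
                # the same feature vector; the decision layer fuses their
                # outputs into one simulated scene segment.
                candidates = [g(vector) for g in model.generators]
                fused.append(model.decision_layer(candidates))
            # Compare each simulated segment with its source sample.
            loss = sum(model.compare(sim, sample)
                       for sim, sample in zip(fused, batch)) / len(batch)
            if loss <= target_loss or rounds >= max_rounds:
                return model          # optimization target reached
            # Otherwise adjust the network parameters; the next batch from
            # batch_stream plays the role of a sample set formed from
            # previously unused samples of the scene element database.
            model.adjust(loss)
        return model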
2. The method of claim 1, wherein updating the initial scene segment according to the editing operation, in response to detecting an editing operation for the initial scene segment, to obtain an updated initial scene segment as a scene segment comprises:
in response to detecting an element configuration operation on any scene element in the initial scene segment, updating an element information set of that scene element according to element configuration information corresponding to the element configuration operation, so as to update the initial scene segment;
in response to detecting a deletion operation corresponding to any scene element in the initial scene segment, deleting that scene element from the initial scene segment to update the initial scene segment;
in response to detecting an element adding operation corresponding to the initial scene segment, adding an added element corresponding to the element adding operation to the initial scene segment to update the initial scene segment.
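By way of illustration of the three editing operations recited in claim 2 above, an initial scene segment could be represented as a mapping from element identifiers to element information sets, with a small dispatcher applying a detected operation; the segment representation and the operation encoding are assumptions of this sketch:

    # Hypothetical editing-operation dispatcher; `segment` maps element ids
    # to element information sets, and `op` encodes one detected operation.
    def apply_edit(segment: dict, op: dict) -> dict:
        kind = op["kind"]
        if kind == "configure":
            # Merge the operation's configuration info into the element's
            # information set.
            segment[op["element_id"]].update(op["config"])
        elif kind == "delete":
            segment.pop(op["element_id"], None)
        elif kind == "add":
            segment[op["element_id"]] = op["element"]
        else:
            raise ValueError(f"unknown editing operation: {kind!r}")
        return segment

For example, apply_edit(segment, {"kind": "delete", "element_id": "obstacle_3"}) would remove the hypothetical obstacle vehicle element "obstacle_3" from the segment.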
3. The method of claim 2, wherein the updating, in response to detecting an element configuration operation on any scene element in the initial scene segment, an element information set of that scene element according to element configuration information corresponding to the element configuration operation comprises:
in response to detecting a road element adding operation on a road element in the initial scene segment, adding element information corresponding to the road element adding operation to the element information set of the road element;
in response to detecting a road type element configuration operation on a road element in the initial scene segment, updating element information of the corresponding road type in the element information set of the road element to the configured road type corresponding to the road type element configuration operation.
4. The method of claim 1, wherein said generating a simulated scene segment sequence according to the obtained scene segments comprises:
combining the scene segments into a scene segment sequence;
and inputting each scene segment in the scene segment sequence into the pre-trained simulated scene segment generation model to obtain a simulated scene segment sequence, wherein the simulated scene segments in the simulated scene segment sequence correspond one-to-one to the scene segments in the scene segment sequence.
5. The method of claim 4, wherein the method further comprises:
and storing each own vehicle element, each obstacle vehicle element set, each road element, and each environment element included in each constructed initial scene segment into a scene element database.
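A minimal sketch of the storage recited in claim 5, assuming a simple in-memory store bucketed by element kind (the disclosure does not specify a database technology), might look as follows; all names are hypothetical:

    # Hypothetical in-memory scene element database; a real deployment would
    # likely use a persistent store.
    class SceneElementDatabase:
        def __init__(self):
            self.buckets = {"own_vehicle": [], "obstacle_vehicles": [],
                            "road": [], "environment": []}

        def store_segment_elements(self, segment: dict) -> None:
            # Store each element of a constructed initial scene segment
            # under its kind; such entries can later be drawn as sample
            # sets for training the simulated scene segment generation model.
            for kind, bucket in self.buckets.items():
                if kind in segment:
                    bucket.append(segment[kind])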
6. A driving scenario generation apparatus, comprising:
the system comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is configured to acquire a real scene data sequence corresponding to a target vehicle within a preset time period, the real scene data in the real scene data sequence comprises self vehicle real scene information, an obstacle vehicle real scene information set, road information and environment information, and the target vehicle is any vehicle needing to construct a driving scene;
an extraction unit configured to perform element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence, wherein the scene element information in the scene element information sequence comprises own vehicle element information, an obstacle vehicle element information set, and road element information, and the performing element extraction processing on each real scene data in the real scene data sequence to obtain a scene element information sequence comprises:
for each real scene data in the sequence of real scene data, performing the steps of:
determining the own vehicle real scene information included in the real scene data as own vehicle element information, wherein the own vehicle real scene information is real scene related information of the target vehicle, and the own vehicle real scene information includes: vehicle type, vehicle size, speed, acceleration, lateral speed, longitudinal speed, steering wheel torque;
determining an obstacle vehicle real scene information set included in the real scene data as an obstacle vehicle element information set, wherein the obstacle vehicle real scene information set is real scene related information of each obstacle vehicle which hinders the target vehicle from running, and the obstacle vehicle real scene information in the obstacle vehicle real scene information set includes: vehicle type, vehicle size, speed, acceleration, lateral speed, longitudinal speed, obstacle vehicle relative position;
determining road information included in the real scene data as road element information, wherein the road information is real scene related information of a road on which the target vehicle travels, and the road information includes: signal light information, road type, obstacle information;
determining environmental information included in the real scene data as environmental element information, wherein the environmental information is real scene related information of an environment in which the target vehicle is located, and the environmental information includes: illuminance, humidity, temperature, visibility;
combining the own vehicle element information, the obstacle vehicle element information set, the road element information and the environment element information into scene element information;
combining the scene element information into a scene element information sequence;
an execution unit configured to execute, for each scene element information in the scene element information sequence, the following steps: constructing an initial scene segment corresponding to the scene element information according to the own vehicle element information, the obstacle vehicle element information set, and the road element information included in the scene element information, wherein the initial scene segment comprises an own vehicle element, an obstacle vehicle element set, and a road element, and the constructing an initial scene segment corresponding to the scene element information comprises: constructing, as an own vehicle element, a vehicle model whose vehicle type is the vehicle type included in the own vehicle element information and whose vehicle size is equal to or larger than the vehicle size included in the own vehicle element information, wherein the vehicle model is a three-dimensional vehicle model; constructing, as an obstacle vehicle element, a vehicle model whose vehicle type is the vehicle type included in the obstacle vehicle element information and whose vehicle size is equal to or larger than the vehicle size included in the obstacle vehicle element information; determining each of the generated obstacle vehicle elements as an obstacle vehicle element set; generating, according to the road element information, a road model corresponding to the road element information as a road element; generating, according to the environment element information, an environment model corresponding to the environment element information as an environment element; performing superposition processing on the own vehicle element, the obstacle vehicle element set, the road element, and the environment element to obtain an initial scene segment, wherein the initial scene segment comprises the own vehicle element, the obstacle vehicle element set, the road element, and the environment element; and in response to detecting an editing operation for the initial scene segment, updating the initial scene segment according to the editing operation to obtain an updated initial scene segment as a scene segment;
a first generation unit configured to generate a simulated scene segment sequence according to the obtained scene segments, wherein the generating a simulated scene segment sequence according to the obtained scene segments comprises: combining the scene segments into a scene segment sequence; and inputting each scene segment in the scene segment sequence into a pre-trained simulated scene segment generation model to obtain a simulated scene segment sequence, wherein the simulated scene segments in the simulated scene segment sequence correspond one-to-one to the scene segments in the scene segment sequence, the simulated scene segment generation model is a generation model that takes a scene segment as input and outputs the simulated scene segment corresponding to the scene segment, the simulated scene segment generation model comprises an input layer, a first generation model, a second generation model, a third generation model, and a decision layer, and the simulated scene segment generation model is obtained by training through the following steps: acquiring a scene element set from a scene element database as a sample set; performing the following training steps based on the sample set: respectively inputting at least one sample in the sample set into an input layer of an initial simulated scene segment generation model to obtain a feature vector corresponding to each sample of the at least one sample; respectively inputting the feature vector corresponding to each sample of the at least one sample into a first generation model, a second generation model, and a third generation model of the initial simulated scene segment generation model to obtain a first simulated scene segment, a second simulated scene segment, and a third simulated scene segment corresponding to each sample of the at least one sample; inputting the first simulated scene segment, the second simulated scene segment, and the third simulated scene segment corresponding to each sample of the at least one sample into a decision layer of the initial simulated scene segment generation model to obtain a simulated scene segment corresponding to each sample of the at least one sample; comparing the simulated scene segment corresponding to each sample of the at least one sample with that sample; determining, according to the comparison results, whether the initial simulated scene segment generation model reaches a preset optimization target; in response to determining that the initial simulated scene segment generation model reaches the optimization target, taking the initial simulated scene segment generation model as the trained simulated scene segment generation model; and in response to determining that the initial simulated scene segment generation model does not reach the optimization target, adjusting network parameters of the initial simulated scene segment generation model, forming a sample set from unused samples, taking the adjusted initial simulated scene segment generation model as the initial simulated scene segment generation model, and performing the training steps again;
a second generation unit configured to generate a driving scene according to the simulated scene segment sequence.
7. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
8. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-5.
CN202211535790.4A 2022-12-02 2022-12-02 Driving scene generation method and device, electronic equipment and computer readable medium Active CN115544817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211535790.4A CN115544817B (en) 2022-12-02 2022-12-02 Driving scene generation method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN115544817A CN115544817A (en) 2022-12-30
CN115544817B true CN115544817B (en) 2023-03-28

Family

ID=84722101

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211535790.4A Active CN115544817B (en) 2022-12-02 2022-12-02 Driving scene generation method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN115544817B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115876493B * 2023-01-18 2023-05-23 HoloMatic Technology (Beijing) Co., Ltd. Test scene generation method, device, equipment and medium for automatic driving

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021047856A * 2019-09-18 2021-03-25 Guangzhou University Method, apparatus, medium, and equipment for creating vehicle road simulation scene
EP4063792A2 (en) * 2021-06-16 2022-09-28 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method and apparatus for generating a simulation scene, electronic device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694287B * 2020-05-14 2023-06-23 Apollo Intelligent Technology (Beijing) Co., Ltd. Obstacle simulation method and device in unmanned simulation scene
CN112380137A * 2020-12-04 2021-02-19 Suzhou Automotive Research Institute (Wujiang), Tsinghua University Method, device and equipment for determining automatic driving scene and storage medium
CN112965466B * 2021-02-18 2022-08-30 Beijing Baidu Netcom Science and Technology Co., Ltd. Reduction test method, device, equipment and program product of automatic driving system
CN113570727B * 2021-06-16 2024-04-16 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Scene file generation method and device, electronic equipment and storage medium
CN115240157B * 2022-08-05 2023-07-18 HoloMatic Technology (Beijing) Co., Ltd. Method, apparatus, device and computer readable medium for persistence of road scene data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant