CN113160427A - Virtual scene creating method, device, equipment and storage medium - Google Patents

Virtual scene creating method, device, equipment and storage medium

Info

Publication number
CN113160427A
CN113160427A
Authority
CN
China
Prior art keywords: scene, target, obstacle, behavior data, condition
Prior art date
Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion)
Application number
CN202110394125.7A
Other languages
Chinese (zh)
Inventor
刘谱
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN202110394125.7A
Publication of CN113160427A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T 2219/20 Indexing scheme for editing of 3D models
    • G06T 2219/2016 Rotation, translation, scaling

Abstract

The application provides a virtual scene creation method, apparatus, device, and storage medium, and belongs to the field of computer technologies. The method comprises the following steps: acquiring a template scene, wherein the template scene comprises scene elements and the scene elements have corresponding state parameters; adjusting the state parameters corresponding to the scene elements based on scene adjustment information; collecting behavior data while running the resulting candidate scene based on the adjusted state parameters, wherein the behavior data represents behaviors occurring in the candidate scene; and determining the candidate scene as the target scene under the condition that the behavior data meets a target condition. Based on the scene adjustment information, the method derives new candidate scenes from the template scene, which greatly improves the creation efficiency of virtual scenes. In addition, after a candidate scene is created, it is run and the behavior data collected during the run is used to decide whether it qualifies as a target scene, which ensures the validity of the determined target scenes.

Description

Virtual scene creating method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for creating a virtual scene.
Background
An autonomous vehicle, also called an unmanned vehicle, is an intelligent vehicle that drives itself by means of computer technology and has broad application prospects in the transportation field. To improve the safety of autonomous vehicles, extensive testing is essential. A common approach is simulation testing, and before a simulation test can be run, a virtual scene for testing the autonomous vehicle needs to be created. In the related art, virtual scenes are generally created manually, that is, the parameters in a virtual scene are set by hand, which makes the creation of virtual scenes inefficient.
Disclosure of Invention
The embodiment of the application provides a method, an apparatus, a device, and a storage medium for creating a virtual scene, which can improve the creation efficiency of virtual scenes. The technical solution is as follows:
in one aspect, a method for creating a virtual scene is provided, where the method includes:
acquiring a template scene, wherein the template scene comprises scene elements, and the scene elements have corresponding state parameters;
adjusting the state parameters corresponding to the scene elements based on scene adjustment information, wherein the scene adjustment information is used for indicating an adjustment manner of the state parameters corresponding to the scene elements;
collecting behavior data in the process of running a candidate scene based on the adjusted state parameters, wherein the behavior data represents behaviors occurring in the candidate scene;
and determining the candidate scene as a target scene under the condition that the behavior data meets a target condition.
In a possible implementation manner, the adjusting, based on the scene adjustment information, the state parameters corresponding to the scene elements includes:
adjusting the state parameters multiple times according to a reference adjustment step size of the state parameters corresponding to the scene elements to obtain multiple candidate scenes;
and the determining the candidate scene as a target scene under the condition that the behavior data meets a target condition includes:
determining any one of the candidate scenes as the target scene under the condition that the behavior data corresponding to that candidate scene meets the target condition.
In a possible implementation manner, the scene adjustment information includes adjustment information of state parameters of multiple dimensions; and the adjusting the state parameters corresponding to the scene elements based on the scene adjustment information includes:
adjusting the state parameters of each dimension according to the adjustment information corresponding to that dimension.
In a possible implementation manner, the scene adjustment information includes obstacle adjustment information, and the adjusting the state parameter corresponding to the scene element based on the scene adjustment information includes:
adjusting the state parameters corresponding to the obstacle according to the obstacle adjustment information;
wherein the state parameter corresponding to the obstacle comprises at least one of an initial position of the obstacle, an initial orientation of the obstacle, a moving speed of the obstacle, or a movement trajectory of the obstacle.
In a possible implementation manner, the state parameters corresponding to the obstacle include the initial position, the initial orientation, and the movement trajectory; the adjusting the state parameter corresponding to the obstacle according to the obstacle adjustment information includes:
in response to the initial pose of the obstacle not matching the movement trajectory, generating a transition trajectory from the initial pose to the movement trajectory based on the initial pose and the movement trajectory;
fitting the transition trajectory to the movement trajectory;
wherein the initial pose includes the initial position and the initial orientation.
In a possible implementation manner, the scene adjustment information includes signal lamp adjustment information, and the adjusting the state parameter corresponding to the scene element based on the scene adjustment information includes:
adjusting the state parameters corresponding to the traffic signal lamps according to the signal lamp adjustment information;
the state parameter corresponding to the traffic signal lamp comprises at least one of a duration of an initial state of the traffic signal lamp, a change frequency of the traffic signal lamp, or a working state of the traffic signal lamp, wherein the working state comprises at least one of normal operation, warning, or damage.
In one possible implementation, the target condition includes conditions corresponding to a plurality of behaviors; and the determining the candidate scene as a target scene under the condition that the behavior data meets a target condition includes:
determining the candidate scene as the target scene under the condition that the behavior data meets the condition corresponding to each behavior occurring in the candidate scene.
In one possible implementation, the scene elements include obstacles and autonomous vehicles, and the target conditions include: a first distance between the obstacle and an initial position of the autonomous vehicle is not less than a second distance;
before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behavior, the method further includes:
determining the second distance based on an initial speed of the obstacle and an initial speed of the autonomous vehicle, the second distance being in a positive correlation with the initial speed of the obstacle and the initial speed of the autonomous vehicle.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: a first angle difference between the orientation of the obstacle at any moment and a trajectory direction is not greater than a second angle difference, wherein the trajectory direction is the extending direction of the movement trajectory at the position of the obstacle at that moment;
before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behavior, the method further includes:
determining the second angle difference based on the moving speed of the obstacle at that moment, wherein the second angle difference is in a negative correlation with the moving speed.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: the obstacle neither changes lanes across a solid line nor changes lanes continuously.
In one possible implementation, the scene elements include traffic lights and obstacles, and the target conditions include: the behavior of the obstacle is in accordance with the indication of the traffic light.
In one possible implementation, the scene elements include obstacles and autonomous vehicles, and the target conditions include: the obstacle does not actively collide with the autonomous vehicle.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: the moving speed of the obstacle at any moment is within a reference speed range;
before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behavior, the method further includes:
and determining, based on the position of the obstacle at any moment, the reference speed range corresponding to that position.
In a possible implementation manner, the determining the candidate scene as the target scene when the behavior data meets the target condition includes:
and under the condition that the behavior data meets the target condition, adding the candidate scene, as a first-type target scene, to a scene database, wherein the scene database is used for storing target scenes for testing, and a first-type target scene is a candidate scene whose corresponding behavior data meets the target condition.
In one possible implementation, the method further includes:
determining a quantity ratio between the first-type target scenes and the second-type target scenes in the scene database if the behavior data does not meet the target condition;
under the condition that the quantity ratio is larger than a reference ratio, taking the candidate scene as a second-type target scene and adding it to the scene database;
wherein a second-type target scene is a candidate scene whose corresponding behavior data does not meet the target condition.
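The quantity-ratio rule above can be pictured with a short sketch; the class name, field names, and the reference ratio below are illustrative assumptions, not the patent's implementation:

```python
# A hedged sketch of admitting candidate scenes into the scene database
# according to the quantity ratio between first-type and second-type scenes.
class SceneDatabase:
    def __init__(self, reference_ratio: float = 4.0):
        self.first_type: list = []   # candidate scenes whose behavior data met the target condition
        self.second_type: list = []  # candidate scenes whose behavior data did not
        self.reference_ratio = reference_ratio

    def add_candidate(self, scene, meets_target_condition: bool) -> None:
        if meets_target_condition:
            self.first_type.append(scene)
            return
        # Quantity ratio between first-type and second-type target scenes.
        ratio = float("inf") if not self.second_type else len(self.first_type) / len(self.second_type)
        if ratio > self.reference_ratio:
            # Admit the failing scene so the database keeps a mix of both types.
            self.second_type.append(scene)
```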
In a possible implementation manner, the determining the candidate scene as the target scene when the behavior data meets the target condition includes:
under the condition that multiple behaviors occur in the candidate scene, determining a behavior score corresponding to each behavior based on the behavior data corresponding to that behavior;
and determining the candidate scene as the target scene under the condition that the sum of the behavior scores corresponding to the behaviors is not less than a threshold.
In another aspect, an apparatus for creating a virtual scene is provided, the apparatus including:
the template scene acquisition module is configured to acquire a template scene, wherein the template scene comprises scene elements, and the scene elements have corresponding state parameters;
a state parameter adjusting module configured to adjust a state parameter corresponding to the scene element based on scene adjustment information, where the scene adjustment information is used to indicate an adjustment manner of the state parameter corresponding to the scene element;
a behavior data collection module configured to collect, in the process of running a candidate scene based on the adjusted state parameters, behavior data representing behaviors occurring in the candidate scene;
a target scene determination module configured to determine the candidate scene as a target scene if the behavior data meets a target condition.
In a possible implementation manner, the state parameter adjusting module is configured to adjust the state parameters multiple times according to a reference adjustment step size of the state parameters corresponding to the scene elements, to obtain multiple candidate scenes;
the target scene determination module is configured to determine any one of the candidate scenes as the target scene when the behavior data corresponding to that candidate scene meets the target condition.
In a possible implementation manner, the scene adjustment information includes adjustment information of state parameters of multiple dimensions; and the state parameter adjusting module is configured to adjust the state parameters of each dimension according to the adjustment information corresponding to that dimension.
In a possible implementation manner, the scene adjustment information includes obstacle adjustment information, and the state parameter adjusting module is configured to adjust the state parameters corresponding to an obstacle according to the obstacle adjustment information; wherein the state parameter corresponding to the obstacle comprises at least one of an initial position of the obstacle, an initial orientation of the obstacle, a moving speed of the obstacle, or a movement trajectory of the obstacle.
In a possible implementation manner, the state parameters corresponding to the obstacle include the initial position, the initial orientation, and the movement trajectory;
the state parameter adjusting module is configured to, in response to the initial pose of the obstacle not matching the movement trajectory, generate a transition trajectory from the initial pose to the movement trajectory based on the initial pose and the movement trajectory, and fit the transition trajectory to the movement trajectory; wherein the initial pose includes the initial position and the initial orientation.
In a possible implementation manner, the scene adjustment information includes signal lamp adjustment information, and the state parameter adjusting module is configured to adjust the state parameters corresponding to a traffic signal lamp according to the signal lamp adjustment information; the state parameter corresponding to the traffic signal lamp comprises at least one of a duration of an initial state of the traffic signal lamp, a change frequency of the traffic signal lamp, or a working state of the traffic signal lamp, wherein the working state comprises at least one of normal operation, warning, or damage.
In one possible implementation, the target condition includes conditions corresponding to a plurality of behaviors; the target scene determination module is configured to determine the candidate scene as the target scene when the behavior data meets the condition corresponding to each behavior occurring in the candidate scene.
In one possible implementation, the scene elements include obstacles and autonomous vehicles, and the target conditions include: a first distance between the obstacle and an initial position of the autonomous vehicle is not less than a second distance;
the device further comprises:
a distance determination module configured to determine the second distance based on an initial speed of the obstacle and an initial speed of the autonomous vehicle, the second distance being in a positive correlation with the initial speed of the obstacle and the initial speed of the autonomous vehicle.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: a first angle difference between the orientation of the obstacle at any moment and a trajectory direction is not greater than a second angle difference, wherein the trajectory direction is the extending direction of the movement trajectory at the position of the obstacle at that moment;
the device further comprises:
an angle difference determination module configured to determine the second angle difference based on the moving speed of the obstacle at that moment, the second angle difference being in a negative correlation with the moving speed.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: the obstacle neither changes lanes across a solid line nor changes lanes continuously.
In one possible implementation, the scene elements include traffic lights and obstacles, and the target conditions include: the behavior of the obstacle is in accordance with the indication of the traffic light.
In one possible implementation, the scene elements include obstacles and autonomous vehicles, and the target conditions include: the obstacle does not actively collide with the autonomous vehicle.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: the moving speed of the obstacle at any moment is within a reference speed range;
the device further comprises:
a speed determination module configured to determine, based on the position of the obstacle at any moment, the reference speed range corresponding to that position.
In one possible implementation, the apparatus further includes:
and the first scene adding module is configured to add the candidate scene as a first type of target scene to a scene database when the behavior data meets the target condition, wherein the scene database is used for storing the target scene for testing, and the first type of target scene is the candidate scene of which the corresponding behavior data meets the target condition.
In one possible implementation, the apparatus further includes:
a second scene adding module configured to determine a quantity ratio between the first type of target scene and a second type of target scene in the scene database if the behavior data does not meet the target condition; under the condition that the quantity proportion is larger than a reference proportion, taking the candidate scene as the second type of target scene, and adding the second type of target scene into the scene database; and the second type of target scene is a candidate scene of which the corresponding behavior data does not meet the target condition.
In a possible implementation manner, the target scene determination module is configured to, when multiple behaviors occur in the candidate scene, determine a behavior score corresponding to each behavior based on the behavior data corresponding to that behavior, and determine the candidate scene as the target scene under the condition that the sum of the behavior scores corresponding to the behaviors is not less than a threshold.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one program code is stored in the memory, and the program code is loaded and executed by the processor to implement the operations performed in the virtual scene creation method in any one of the above possible implementation manners.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the program code is loaded and executed by a processor to implement the operations performed in the virtual scene creation method in any one of the above possible implementation manners.
In another aspect, a computer program product is provided, where the computer program product includes at least one program code, and the program code is loaded and executed by a processor to implement the operations performed in the virtual scene creation method in any one of the above possible implementation manners.
The beneficial effects of the technical solutions provided by the embodiments of the application include at least the following:
in the embodiment of the application, through setting of the scene adjustment information, when the virtual scene is created, only the template scene needs to be provided, and the state parameters of the scene elements in the template scene can be automatically adjusted according to the scene adjustment information, so that a new alternative scene is obtained, and the creation efficiency of the virtual scene is greatly improved. In addition, after the alternative scene is created, the target scene meeting the requirements is determined by operating the alternative scene and taking behavior data collected in the operation process as the basis, so that the effectiveness of the target scene is ensured.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a method for creating a virtual scene according to an embodiment of the present application;
fig. 3 is a schematic diagram of a virtual scene creation process provided in an embodiment of the present application;
fig. 4 is a block diagram of a virtual scene creation apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," "third," "fourth," and the like as used herein may be used to describe various concepts, but these concepts are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first virtual scene may be referred to as a second virtual scene, and similarly, a second virtual scene may be referred to as a first virtual scene, without departing from the scope of the present application.
As used herein, "at least one" includes one, two, or more; "a plurality" includes two or more; "each" refers to every one of a corresponding plurality; and "any" refers to any one of the plurality. For example, if a plurality of virtual scenes includes 3 virtual scenes, "each" refers to every one of the 3 virtual scenes, and "any" refers to any one of them, which may be the first, the second, or the third.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102. The terminal 101 and the server 102 are connected via a wireless or wired network. Optionally, the terminal 101 is a computer, a mobile phone, a tablet computer, or other terminal. Optionally, the server 102 is a background server or a cloud server providing services such as cloud computing and cloud storage.
Optionally, the terminal 101 has installed thereon a target application served by the server 102, and the terminal 101 can implement functions such as data transmission, message interaction, and the like through the target application. Optionally, the target application is a target application in an operating system of the terminal 101, or a target application provided by a third party. The target application has a function of creating a virtual scene, and optionally, of course, the target application may also have other functions, which is not limited in this application. Optionally, the target application is a simulation application or other applications, which is not limited in this application embodiment.
In the embodiment of the application, the terminal 101 is configured to provide a template scene to the server 102, and the server 102 is configured to adjust the template scene to obtain a candidate scene, run a simulation test based on the candidate scene, and determine, based on the behavior of the scene elements in the candidate scene during the simulation test, whether the candidate scene should be determined as a target scene.
It should be noted that the embodiment of the present application is described by taking an implementation environment that includes both the terminal 101 and the server 102 as an example; in other embodiments, the implementation environment includes only the terminal 101 or only the server 102, and the creation of the virtual scene is then performed by the terminal 101 or the server 102 alone.
The virtual scene creation method provided by the present application can be applied to autonomous driving simulation testing. For example, when an autonomous vehicle needs to be tested, a virtual scene can be created by the method provided by the present application, and the autonomous driving simulation test is then run on the vehicle in that scene. Alternatively, a large number of virtual scenes can be created by the method provided by the present application and placed in a scene database, and an interface for autonomous driving simulation testing is established to provide the simulation test service; when a user needs to run a simulation test on an autonomous vehicle, a virtual scene can be selected from the scene database by calling the interface, and the test is run based on the selected scene.
Fig. 2 is a flowchart of a method for creating a virtual scene according to an embodiment of the present application. Referring to fig. 2, this embodiment is described by taking the server as the execution subject. The embodiment comprises the following steps:
201. the server acquires a template scene, wherein the template scene comprises scene elements, and the scene elements have corresponding state parameters.
The template scene is a virtual scene and can be used for carrying out automatic driving simulation tests. The virtual scene includes scene elements, which are objects constituting the template scene. The scene elements include dynamic scene elements and static scene elements. The dynamic scene element refers to an element that can move in the scene, for example, a pedestrian, a vehicle, and the like on the road. Correspondingly, a static scene element refers to an element that cannot move in the scene, such as a barricade, a tree, etc. In fact, the types of scene elements are very rich, and any object that can be seen in a real driving scene can become a scene element in a template scene, which is not limited in the embodiment of the present application. The scene element has a corresponding state parameter indicating a state of the scene element. For example, for a scene element such as a traffic light, the state parameter can include a change frequency indicating how frequently the traffic light changes. As another example, for a scene element such as a vehicle, the state parameters can include a movement trajectory, which indicates how the vehicle moves.
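The patent does not prescribe a concrete data model, but the relationship between scene elements and their state parameters can be pictured with a minimal Python sketch; all class and field names below are illustrative assumptions:

```python
# A minimal, assumed representation of scene elements and state parameters.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObstacleState:
    initial_position: Tuple[float, float]   # (x, y) in scene coordinates
    initial_orientation: float               # heading in degrees
    moving_speed: float                      # m/s
    movement_trajectory: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class TrafficLightState:
    initial_state_duration: float            # seconds before the first change
    change_frequency: float                  # state changes per reference duration
    working_state: str = "normal"            # "normal", "warning", or "damaged"

@dataclass
class TemplateScene:
    obstacles: List[ObstacleState]
    traffic_lights: List[TrafficLightState]
```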
Due to the rich diversity of scene elements, the types of template scenes formed by the scene elements are also rich and diverse. For example, the template scene is a scene in which a plurality of vehicles pass through an intersection including traffic lights. For another example, the template scene is a scene of changing a unidirectional dual lane into a single lane. Of course, the template scene can also be other virtual scenes, which is not limited in the embodiment of the present application.
Optionally, the server selects a virtual scene from a scene database as the template scene, where the scene database is used for storing created virtual scenes. Optionally, the virtual scenes stored in the scene database are created by the method provided by the present application or created manually, which is not limited in this embodiment of the present application. Optionally, the template scene is made by simulation or from actually sampled data. Making the template scene by simulation means setting the scene elements and their state parameters based on experience; a real scene corresponding to such a template scene does not necessarily exist in the real world. Making the template scene from actually sampled data means collecting scene elements and corresponding state parameters in real driving scenes with various sensing devices, such as cameras and laser radars, and building the template scene from the collected data; a real scene corresponding to such a template scene exists in the real world.
202. The server adjusts the state parameters corresponding to the scene elements based on the scene adjustment information.
The scene adjustment information is used for indicating an adjustment manner of the state parameters corresponding to the scene elements. After the server adjusts the state parameters corresponding to the scene elements based on the scene adjustment information, a new candidate scene can be generated based on the adjusted state parameters. A candidate scene is not yet the target scene finally determined by the server; it can become the target scene only after passing the validity tests in steps 203 and 204.
In a possible implementation manner, the scene element includes an obstacle. Optionally, the obstacle is an automobile, a bicycle, a pedestrian, or another obstacle, which is not limited in this embodiment of the present application. The scene adjustment information includes obstacle adjustment information indicating an adjustment manner of the state parameters corresponding to the obstacle. Correspondingly, the server adjusting the state parameters corresponding to the scene elements based on the scene adjustment information includes: the server adjusting the state parameters corresponding to the obstacle according to the obstacle adjustment information. The state parameters corresponding to the obstacle comprise at least one of an initial position of the obstacle, an initial orientation of the obstacle, a moving speed of the obstacle, or a movement trajectory of the obstacle. It should be noted that the movement trajectory is the trajectory set for the obstacle, not the real trajectory the obstacle ends up following; that is, when a simulation test is run based on the scene, the obstacle does not necessarily move exactly along the set trajectory, and a deviation may exist between the real trajectory and the set one.
In the embodiment of the application, by setting the obstacle adjustment information, the state parameters of the obstacles can be adjusted based on the template scene when creating a virtual scene, so that a new candidate scene whose obstacle states differ from those of the template scene is obtained. The method is simple and efficient.
In one possible implementation, the scene element includes a traffic signal lamp, and the scene adjustment information includes signal lamp adjustment information indicating an adjustment manner of the state parameters corresponding to the traffic signal lamp. Correspondingly, the server adjusting the state parameters corresponding to the scene elements based on the scene adjustment information includes: the server adjusting the state parameters corresponding to the traffic signal lamp based on the signal lamp adjustment information. The state parameters corresponding to the traffic signal lamp comprise at least one of a duration of the initial state of the traffic signal lamp, a change frequency of the traffic signal lamp, or a working state of the traffic signal lamp. The duration of the initial state indicates how long the lamp stays in its state before the first change. The working state includes at least one of normal operation, warning, or damage. Normal operation means that the traffic signal lamp directs traffic through the alternation of red, green, and yellow lights. Warning means that the lamp flashes yellow continuously to warn people to observe the surroundings and decide for themselves how to pass through the intersection. Damage means that the traffic signal lamp fails to direct traffic at all.
In the embodiment of the application, by setting the signal lamp adjustment information, the state parameters of the traffic signal lamp can be adjusted based on the template scene when creating a virtual scene, so that a new candidate scene whose traffic light states differ from those of the template scene is obtained. The method is simple and efficient.
In a possible implementation manner, the server adjusting the state parameters corresponding to the scene elements based on the scene adjustment information includes: the server adjusting the state parameters corresponding to the scene elements multiple times according to a reference adjustment step size of those state parameters, to obtain multiple candidate scenes.
A state parameter corresponding to a scene element has a reference adjustment step size, which indicates the granularity at which the parameter is adjusted. Next, the way of adjusting a state parameter multiple times according to its reference adjustment step size is described by taking the state parameters corresponding to an obstacle as an example; a short sketch follows the list below.
(1) For the initial position of the obstacle, the corresponding reference adjustment step size is a reference distance. Correspondingly, the server adjusts the initial position as follows: starting from the initial position, the server moves it by the reference distance in a given direction multiple times.
Optionally, the direction of movement is randomly selected or is any determined direction, e.g., forward, backward, left, or right. The reference distance is any distance, such as 1 meter or 2 meters, which is not limited in this embodiment of the present application.
(2) For the initial orientation of the obstacle, the corresponding reference adjustment step size is a reference angle. Correspondingly, the server adjusts the initial orientation as follows: starting from the initial orientation, the server offsets it to the left or right by the reference angle multiple times. Optionally, the reference angle is an arbitrary angle, for example, 1 degree.
(3) For the moving speed of the obstacle, the corresponding reference adjustment step size is a reference speed. Correspondingly, the server adjusts the moving speed as follows: the server increases or decreases the current moving speed by the reference speed multiple times. Optionally, the reference speed is an arbitrary speed, for example, 1 m/s.
(4) For the movement trajectory of the obstacle, the corresponding reference adjustment step size is a reference distance. Correspondingly, the server adjusts the movement trajectory as follows: the server translates the movement trajectory by the reference distance in a given direction multiple times.
Optionally, the direction of movement is randomly selected or is any determined direction, e.g., forward, backward, left, or right. The reference distance is any distance, such as 1 meter or 2 meters, which is not limited in this embodiment of the present application. It should be noted that the reference distance for the initial position may be the same as or different from the reference distance for the movement trajectory, and likewise for the directions of movement, which is not limited in this embodiment of the application.
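The stepping in items (1) to (4) can be sketched as follows, reusing the illustrative ObstacleState structure above; the step sizes, direction handling, and function name are assumptions, not the patent's implementation:

```python
import math

def step_obstacle(state, direction_deg: float = 0.0,
                  ref_distance: float = 1.0,   # reference step for position/trajectory, metres
                  ref_angle: float = 1.0,      # reference step for orientation, degrees
                  ref_speed: float = 1.0):     # reference step for speed, m/s
    """Apply one reference-step adjustment to each state parameter of an obstacle."""
    dx = ref_distance * math.cos(math.radians(direction_deg))
    dy = ref_distance * math.sin(math.radians(direction_deg))
    # (1) shift the initial position by the reference distance
    x, y = state.initial_position
    state.initial_position = (x + dx, y + dy)
    # (2) offset the initial orientation by the reference angle
    state.initial_orientation += ref_angle
    # (3) increase the moving speed by the reference speed
    state.moving_speed += ref_speed
    # (4) translate every trajectory point by the reference distance
    state.movement_trajectory = [(px + dx, py + dy) for px, py in state.movement_trajectory]
    return state
```

Applying such a step repeatedly to copies of the template scene yields the multiple candidate scenes described above.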
In a possible implementation manner, trajectory linkage can further be set: after the initial pose of the obstacle is adjusted, if the initial pose no longer matches the current movement trajectory, the movement trajectory can be adjusted synchronously so that the two match. The initial pose comprises the initial position and the initial orientation. Correspondingly, the server adjusting the state parameters corresponding to the obstacle according to the obstacle adjustment information includes: in response to the initial pose of the obstacle not matching the movement trajectory, the server generating a transition trajectory from the initial pose to the movement trajectory based on the initial pose and the movement trajectory, and fitting the transition trajectory to the movement trajectory.
For example, if the initial orientation of the obstacle is west and the movement trajectory is a straight trajectory from north to south, then, in response to the mismatch, the server generates a transition trajectory based on the initial orientation "west" and the straight trajectory; for example, the transition trajectory turns from the initial position from west toward south until it reaches a position on the straight trajectory. The server then fits the transition trajectory to the straight trajectory, and the new movement trajectory of the obstacle is the fitted trajectory.
In the embodiment of the application, a trajectory linkage adjustment scheme is provided: when the initial pose of the obstacle does not match the movement trajectory, a transition trajectory from the initial pose to the movement trajectory is generated automatically and fitted to the movement trajectory. This prevents a created candidate scene from containing an obstacle whose initial pose does not match its movement trajectory, which would cause a fault in the scene during the autonomous driving simulation test, and therefore improves the reliability of the created candidate scenes. A minimal sketch of this idea follows.
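The sketch below assumes 2D waypoint lists and uses simple linear interpolation to illustrate "generate a transition trajectory, then fit it"; a real system would use a kinematically feasible curve (e.g. a clothoid), so this is only the shape of the idea:

```python
def fit_transition(initial_position, movement_trajectory, n_points: int = 10):
    """Blend from the initial position toward the first point of the set
    trajectory, then splice the transition onto the set trajectory.
    Assumes `movement_trajectory` is a non-empty list of (x, y) waypoints."""
    x0, y0 = initial_position
    x1, y1 = movement_trajectory[0]
    transition = [
        (x0 + (x1 - x0) * t / n_points, y0 + (y1 - y0) * t / n_points)
        for t in range(n_points)
    ]
    return transition + movement_trajectory  # the fitted trajectory
```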
Next, the way of adjusting a state parameter multiple times according to its reference adjustment step size is described by taking the state parameters corresponding to a traffic signal lamp as an example.
(1) For the duration of the initial state of the traffic signal lamp, the corresponding reference adjustment step size is a reference duration. Correspondingly, the server adjusts the duration of the initial state as follows: the server increases or decreases it by the reference duration multiple times. Optionally, the reference duration is an arbitrary duration, for example, 1 second.
(2) For the change frequency of the traffic signal lamp, the corresponding reference adjustment step size is a reference count. Correspondingly, the server adjusts the change frequency as follows: the server increases or decreases, multiple times, the number of state changes within a reference duration by the reference count. Optionally, the reference duration is any duration and the reference count is any count; for example, the reference duration is 10 minutes and the reference count is 1.
(3) For the working state of the traffic signal lamp, the corresponding reference adjustment step size is a reference duration. Correspondingly, the server adjusts the working state as follows: the server increases or decreases the change cycle of the working state by the reference duration multiple times. Optionally, the reference duration is an arbitrary duration, for example, 5 minutes.
For example, if the original change cycle of the working state of the traffic signal lamp is 20 minutes and the change cycle is increased by 5 minutes, the traffic signal lamp then changes its working state every 25 minutes.
It should be noted that, when a state parameter corresponding to a scene element is adjusted multiple times, an adjustment threshold can be set for the parameter. For example, the offset angle of the initial orientation of the obstacle cannot exceed a certain angle threshold, or the change frequency of the traffic signal lamp cannot exceed a certain frequency threshold. This ensures that the state parameters in the adjusted scene remain reasonable; the sketch below illustrates such clamping.
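A sketch of one such threshold, reusing the illustrative TrafficLightState above: the step is applied and the result is clamped to an assumed reasonable range (all numeric values are assumptions):

```python
def step_light_duration(light, ref_duration: float = 1.0,
                        min_duration: float = 3.0, max_duration: float = 120.0):
    """Increase the duration of the light's initial state by one reference step,
    clamped so the adjusted value stays within an assumed reasonable range."""
    light.initial_state_duration = min(
        max(light.initial_state_duration + ref_duration, min_duration),
        max_duration,
    )
    return light
```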
In the embodiment of the application, a reference adjustment step size is set for the state parameters corresponding to the scene elements, and the state parameters are adjusted according to that step size to obtain candidate scenes that differ from the template scene.
In one possible implementation, the scene adjustment information includes adjustment information of state parameters of multiple dimensions. Correspondingly, the server adjusting the state parameters corresponding to the scene elements based on the scene adjustment information includes: the server adjusting the state parameters of each dimension according to the adjustment information corresponding to that dimension.
In the embodiment of the application, the dimensions of the state parameters can be divided from any perspective. Optionally, the dimensions are divided by scene element: the dimensions then include a traffic light dimension, an obstacle dimension, and the like, and correspondingly the scene adjustment information includes signal lamp adjustment information for the traffic light dimension, obstacle adjustment information for the obstacle dimension, and the like. Optionally, the dimensions are divided by the type of state parameter: the dimensions then include the initial pose of the obstacle, the moving speed of the obstacle, the movement trajectory of the obstacle, the working state of the traffic signal lamp, the change frequency of the traffic signal lamp, and the like, and the scene adjustment information includes the corresponding adjustment information for each, which is not limited in the embodiment of the present application.
In the embodiment of the application, adjustment information is set for state parameters of multiple dimensions, and the state parameters of each dimension are adjusted according to the corresponding adjustment information, which enriches the variety of the created virtual scenes. One possible way to organise this is sketched below.
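One assumed way to organise per-dimension adjustment, reusing the hypothetical helpers from the earlier sketches, is a dispatch table keyed by dimension name; this is an illustration, not the patent's wording:

```python
# Map each dimension to its own adjuster (dimension names are assumptions).
adjusters = {
    "obstacle": lambda scene: [step_obstacle(ob) for ob in scene.obstacles],
    "traffic_light": lambda scene: [step_light_duration(tl) for tl in scene.traffic_lights],
}

def adjust_by_dimensions(scene, scene_adjustment_info):
    # e.g. scene_adjustment_info = ["obstacle", "traffic_light"]
    for dimension in scene_adjustment_info:
        adjusters[dimension](scene)
    return scene
```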
203. The server collects behavior data in the process of running the candidate scene based on the adjusted state parameters, wherein the behavior data represents behaviors occurring in the candidate scene.
Optionally, the server can collect behavior data representing any one or more behaviors. For example, the collected behavior data represents the interaction behavior of an obstacle with a traffic light, which reflects whether the obstacle complies with the light's indication. Or the behavior data represents lane change behaviors of the obstacle, which reflects whether the obstacle exhibits behaviors that violate traffic rules, such as continuous lane changes or changing lanes across a solid line. Or the behavior data represents overspeed driving behavior of the obstacle, and so on; the behavior data may also represent other behaviors, which is not limited in this embodiment.
Optionally, in the process of running the candidate scene based on the adjusted state parameters, the server collects the behavior data as follows: the server runs a simulation test based on the candidate scene and collects the behavior data during the simulation test, as sketched below.
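A schematic collection loop for this step; `simulate_step` and the record fields are hypothetical stand-ins for whatever interface the simulator actually exposes:

```python
def run_and_collect(candidate_scene, simulate_step, n_steps: int = 1000):
    """Run the candidate scene tick by tick and record per-tick behavior data."""
    behavior_data = []
    for t in range(n_steps):
        snapshot = simulate_step(candidate_scene)   # advance the simulation one tick
        behavior_data.append({
            "time": t,
            "obstacle_speeds": snapshot["obstacle_speeds"],
            "lane_changes": snapshot["lane_changes"],
            "light_interactions": snapshot["light_interactions"],
        })
    return behavior_data
```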
204. The server determines the candidate scene as the target scene under the condition that the behavior data meets the target condition.
The target condition is used for judging whether the candidate scene is a virtual scene that meets the requirements of the autonomous driving simulation test, and it can be set to any condition according to the scene requirements, which is not limited by the embodiment of the application. The target scene is the finally determined virtual scene that meets the requirements of the autonomous driving simulation test.
In one possible implementation, the target condition includes conditions corresponding to a plurality of behaviors. Correspondingly, the server determining the candidate scene as the target scene when the behavior data meets the target condition includes: the server determining the candidate scene as the target scene under the condition that the behavior data meets the condition corresponding to each behavior occurring in the candidate scene.
In the embodiment of the application, multiple behaviors may occur while a candidate scene runs, and the autonomous driving simulation test may place different requirements on each behavior in the virtual scene. Conditions corresponding to the multiple behaviors are therefore set, and target scenes are screened through those conditions, which ensures the validity of the determined target scenes.
In one possible implementation, the scene elements include an obstacle and an autonomous vehicle, the behavior data includes a first distance between the obstacle and the initial position of the autonomous vehicle as well as the initial speeds of the obstacle and the autonomous vehicle, and the target condition includes: the first distance between the obstacle and the initial position of the autonomous vehicle is not less than a second distance. Correspondingly, before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behaviors occurring in the candidate scene, the method further includes: the server determining the second distance based on the initial speed of the obstacle and the initial speed of the autonomous vehicle, the second distance being in a positive correlation with both initial speeds.
To ensure the safety of the autonomous vehicle and the obstacle, the obstacle should keep a certain distance from the initial position of the autonomous vehicle, and the larger the initial speeds of the obstacle and the autonomous vehicle are, the larger that distance should be; this effectively prevents the obstacle from colliding with the autonomous vehicle.
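The patent only states a positive correlation; a weighted linear sum is one assumed form (the base distance and coefficient are illustrative):

```python
def second_distance(obstacle_speed: float, vehicle_speed: float,
                    base: float = 5.0, k: float = 1.5) -> float:
    """Minimum allowed initial separation in metres: grows with both initial speeds."""
    return base + k * (obstacle_speed + vehicle_speed)

# The corresponding condition check would then be:
# first_distance >= second_distance(obstacle_speed, vehicle_speed)
```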
In one possible implementation manner, the scene element includes an obstacle, the behavior data includes the orientation, moving speed, and trajectory direction of the obstacle at any moment, where the trajectory direction is the extending direction of the movement trajectory at the position of the obstacle at that moment, and the target condition includes: a first angle difference between the orientation of the obstacle at any moment and the trajectory direction is not greater than a second angle difference. Correspondingly, before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behaviors occurring in the candidate scene, the method further includes: the server determining the second angle difference based on the moving speed of the obstacle at that moment, the second angle difference being in a negative correlation with the moving speed.
When the deviation between the orientation of the obstacle and the set trajectory direction is large, the obstacle needs to change its orientation to continue moving along the set movement trajectory; in this situation, a high moving speed carries a high risk. Therefore, the second angle difference is determined from the moving speed of the obstacle at each moment: the larger the moving speed, the smaller the second angle difference. Limiting the angle difference between the orientation and the trajectory direction by the second angle difference ensures that the obstacle moves safely and improves the reliability of the created virtual scenes.
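The patent likewise only states a negative correlation; an inverse-proportional form with a cap is one assumed choice (all constants are illustrative):

```python
def second_angle_difference(moving_speed: float,
                            max_diff: float = 30.0, k: float = 50.0) -> float:
    """Maximum allowed heading deviation in degrees: shrinks as the speed grows."""
    return min(max_diff, k / max(moving_speed, 1.0))
```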
In one possible implementation, the scene element includes an obstacle, the behavior data includes lane change data of the obstacle, and the target condition includes: the obstacle neither changes lanes across a solid line nor changes lanes continuously.
Changing lanes across a solid line and continuous lane changes both violate traffic regulations and create safety hazards. To ensure safe driving of the obstacle and the autonomous vehicle, the lane change data of the obstacle is used as an evaluation index for the candidate scene, and the candidate scene is determined as the target scene only when the obstacle neither changes lanes across a solid line nor changes lanes continuously, which improves the reliability of the created virtual scenes.
In one possible implementation, the scene elements include a traffic light and an obstacle, the behavior data includes interaction data of the obstacle with the traffic light, and the target condition includes: the behavior of the obstacle complies with the indication of the traffic light. Since ignoring the indication of a traffic light violates traffic rules and creates safety hazards, the interaction data of the obstacle with the traffic light is used as an evaluation index for screening target scenes, which ensures the reliability of the created virtual scenes.
In one possible implementation, the scene elements include an obstacle and an autonomous vehicle, the behavior data includes interaction data of the obstacle with the autonomous vehicle, and the target condition includes: the obstacle does not actively collide with the autonomous vehicle. Because an obstacle actively colliding with the autonomous vehicle is an abnormal situation in driving and a serious safety problem, the interaction data of the obstacle with the autonomous vehicle is used as an evaluation index for determining target scenes, which ensures the reliability of the created virtual scenes. Schematic checks for these rule-based conditions are sketched below.
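Schematic checks for the three rule-based conditions above, using the hypothetical record fields from the collection sketch earlier; the field names are assumptions:

```python
def rule_conditions_met(behavior_data) -> bool:
    """Return True only if no rule-violating behavior appears in the run."""
    for record in behavior_data:
        for change in record["lane_changes"]:
            if change["crosses_solid_line"] or change["is_consecutive"]:
                return False    # solid-line or continuous lane change occurred
        for interaction in record["light_interactions"]:
            if not interaction["obeys_signal"]:
                return False    # behavior violates the traffic light's indication
        if record.get("active_collision_with_ego", False):
            return False        # obstacle actively hit the autonomous vehicle
    return True
```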
In one possible implementation, the scene element includes an obstacle, the behavior data includes the moving speed of the obstacle at any moment, and the target condition includes: the moving speed of the obstacle at any moment is within a reference speed range. Correspondingly, before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behaviors occurring in the candidate scene, the method further includes: the server determining, based on the position of the obstacle at any moment, the reference speed range corresponding to that position.
In a virtual scene, the requirements on the speed of the obstacle differ with its position. For example, on a highway the moving speed of the obstacle must not be less than a certain speed threshold, while on a bridge it must not exceed a certain speed threshold. Therefore, to ensure the safety of the obstacle and the autonomous vehicle in the autonomous driving simulation test, the speed of the obstacle at each moment is collected, and whether that speed lies within the reference speed range for the corresponding position is used as an evaluation index for the candidate scene, which ensures the reliability of the created virtual scenes. A sketch of a position-dependent range follows.
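A sketch of a position-dependent reference speed range; the road-type lookup callback and the numeric bounds are assumptions for illustration:

```python
def reference_speed_range(position, road_type_at) -> tuple:
    """Return the (min, max) allowed speed in m/s for the road at `position`.
    `road_type_at` is a hypothetical map lookup, e.g. returning "highway"."""
    road_type = road_type_at(position)
    ranges = {
        "highway": (16.7, 33.3),   # roughly 60-120 km/h: a floor as well as a cap
        "bridge": (0.0, 13.9),     # capped at roughly 50 km/h
        "urban": (0.0, 16.7),
    }
    return ranges.get(road_type, (0.0, 16.7))
```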
In a possible implementation manner, if in step 202 the server adjusts the state parameters multiple times according to the reference adjustment step size to obtain multiple candidate scenes, the server collects behavior data for each candidate scene and determines any candidate scene as a target scene when its corresponding behavior data meets the target condition. In this way, the server can create a large number of reliable virtual scenes quickly and efficiently.
In one possible implementation manner, the determining, by the server, the candidate scenario as the target scenario when the behavior data meets the target condition includes: the method comprises the steps that under the condition that multiple behaviors occur in an alternative scene, a server determines a behavior score corresponding to each behavior based on behavior data corresponding to each behavior; and under the condition that the sum of the behavior scores corresponding to the behaviors is not less than the threshold value, determining the alternative scene as the target scene.
For example, if the behaviors occurring in the candidate scene include the moving speed of the obstacle falling outside the reference speed range, the server determines the speed deviation between the moving speed and the reference speed range, and derives the score for that behavior from the deviation. Likewise, if the first distance between the obstacle and the initial position of the autonomous vehicle is less than the second distance, the score is derived from the distance deviation between the two; if the obstacle fails to comply with the indication of a traffic light, the score is derived from the number of non-compliant behaviors; and if the first angle difference between the orientation of the obstacle and the trajectory direction is greater than the second angle difference, the score is derived from the angle deviation between the two. In each case the score is negatively correlated with the deviation (or count): the larger the deviation, the smaller the score, and the smaller the deviation, the larger the score.
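A minimal sketch of this deviation-based scoring follows; the score scale, weight, and threshold are illustrative assumptions chosen only to demonstrate the negative correlation and the threshold comparison.

```python
# Sketch of the per-behavior scoring and threshold check of this embodiment.
# max_score, weight, and threshold values are illustrative assumptions.
def behavior_score(deviation: float, max_score: float = 10.0,
                   weight: float = 1.0) -> float:
    """Score negatively correlated with deviation: zero deviation scores
    max_score; larger deviations score less, floored at 0."""
    return max(0.0, max_score - weight * deviation)

def is_target_scene(deviations: list[float], threshold: float = 25.0) -> bool:
    """Sum the per-behavior scores and compare against the threshold."""
    return sum(behavior_score(d) for d in deviations) >= threshold

# e.g. speed deviation 2.0 m/s, distance deviation 1.5 m, angle deviation 0.2 rad:
# is_target_scene([2.0, 1.5, 0.2])  # scores 8.0 + 8.5 + 9.8 = 26.3 -> True
```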
In the embodiment of the application, when the behaviors occurring in the candidate scene are used as the evaluation index for determining the target scene, converting the behavior data into scores quantifies the evaluation index, which makes it more convenient to determine, from the candidate scenes, the target scenes that meet the requirements.
Optionally, when certain behaviors of a scene element influence the effectiveness of the simulation test more strongly than others, larger scores are assigned to those behaviors and smaller scores to the rest. When the score is then used as the evaluation index for screening virtual scenes, the finally determined virtual scenes are guaranteed to satisfy the requirements for the behaviors that matter most to the simulation test, which preserves the test effectiveness of the created virtual scenes.
In the embodiment of the application, corresponding scores are set for the behaviors of scene elements during the simulation test, and the sum of the scores for the behaviors is used as the evaluation index of the candidate scene. This quantifies the influence that the behaviors of scene elements have on the validity of the candidate scene, making it more convenient to determine the virtual scenes that meet the requirements.
Fig. 3 is a schematic diagram of a process for creating a virtual scene. Referring to fig. 3, a template scene is first selected from a template database that stores template scenes, and scene adjustment information, such as obstacle adjustment information or traffic-signal adjustment information, is selected from an information database that stores scene adjustment information. It is then determined whether the number of derived scenes meets the requirement. If not, the template scene is adjusted according to the selected scene adjustment information to generate a candidate scene, for example by adjusting the state parameters of an obstacle according to the obstacle adjustment information, or the state parameters of a traffic signal according to the traffic-signal adjustment information. Here, a derived scene is a target scene obtained by adjusting the state parameters of scene elements in the template scene. After the candidate scene is obtained, a validity score is computed automatically: a simulation test is run on the basis of the candidate scene, and the score is determined from the behavior data collected during the test. If the score is qualified, the candidate scene is placed into the scene database and the count of derived scenes is increased; if not, the candidate scene is discarded. The process then again checks whether the number of derived scenes meets the requirement and, if it does not, adjusts the template scene according to the selected scene adjustment information to generate another candidate scene, repeating these steps until a sufficient number of derived scenes is obtained.
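The loop below sketches the fig. 3 pipeline under stated assumptions: apply_adjustment and run_and_score are hypothetical stand-ins for the derivation step and the simulation-based validity scoring, and the databases are modeled as plain Python containers rather than the databases shown in the figure.

```python
# Sketch of the fig. 3 derivation loop; all names and the random score
# are illustrative stand-ins, not the disclosed implementation.
import random

def apply_adjustment(template: dict, info: dict) -> dict:
    candidate = dict(template)
    candidate.update(info)  # overwrite the adjusted state parameters
    return candidate

def run_and_score(candidate: dict) -> float:
    # Stand-in for running the simulation test and scoring behavior data.
    return random.uniform(0.0, 100.0)

def derive_scenes(template: dict, infos: list[dict],
                  required: int, passing: float) -> list[dict]:
    scene_db: list[dict] = []
    while len(scene_db) < required:   # "number of derived scenes meets requirement?"
        info = random.choice(infos)   # pick scene adjustment information
        candidate = apply_adjustment(template, info)
        if run_and_score(candidate) >= passing:
            scene_db.append(candidate)  # qualified candidate becomes a derived scene
        # failing candidates are discarded and the loop continues
    return scene_db
```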
In one possible implementation, the server determining the candidate scene as the target scene when the behavior data meets the target condition includes: when the behavior data meets the target condition, the server adds the candidate scene to a scene database as a first-type target scene. The scene database stores target scenes for testing, and a first-type target scene is a candidate scene whose corresponding behavior data meets the target condition.
In a possible implementation, when the behavior data corresponding to the candidate scene does not meet the target condition, the server determines the quantity ratio between the first-type target scenes and the second-type target scenes in the scene database; when the quantity ratio is greater than a reference ratio, the candidate scene is added to the scene database as a second-type target scene. A second-type target scene is a candidate scene whose corresponding behavior data does not meet the target condition.
Optionally, the reference ratio can be set to any value. For example, the server determines, from real-world conditions, the number of obstacles whose behavior meets the target condition and the number of obstacles whose behavior does not, and takes the ratio of the two as the reference ratio.
In the embodiment of the application, considering that in actual driving scenes obstacles sometimes behave in ways that do not meet the target condition, a certain number of second-type target scenes are retained to test the autonomous vehicle's ability to react to unusual situations. Controlling the ratio of first-type to second-type target scenes keeps the virtual scenes in the scene database consistent with real conditions, and thus preserves the effectiveness of simulation tests run against the database.
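A sketch of this ratio-controlled retention follows, assuming a simple two-list scene database and an illustrative reference ratio; both are assumptions for demonstration.

```python
# Sketch of ratio-controlled retention of failing candidates. The database
# layout and the default reference ratio are illustrative assumptions.
def maybe_keep_failing_scene(scene_db: dict, candidate: dict,
                             reference_ratio: float = 9.0) -> bool:
    """Keep a failing candidate as a second-type target scene only while
    first-type scenes outnumber second-type scenes by more than the
    reference ratio, mirroring real-world behavior proportions."""
    n_first = len(scene_db["first_type"])
    n_second = len(scene_db["second_type"])
    ratio = n_first / n_second if n_second else float("inf")
    if ratio > reference_ratio:
        scene_db["second_type"].append(candidate)
        return True
    return False  # otherwise the failing candidate is discarded
```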
In the embodiment of the application, once the scene adjustment information is set, only a template scene needs to be provided when creating virtual scenes: the state parameters of the scene elements in the template scene are adjusted automatically according to the scene adjustment information to obtain new candidate scenes, which greatly improves the efficiency of virtual scene creation. In addition, after a candidate scene is created, it is run, and the behavior data collected during the run is used as the basis for determining the target scenes that meet the requirements, which ensures the validity of the target scenes.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
Fig. 4 is a block diagram of a virtual scene creation apparatus according to an embodiment of the present application. Referring to fig. 4, the embodiment includes:
a template scene obtaining module 401 configured to obtain a template scene, where the template scene includes scene elements, and the scene elements have corresponding state parameters;
a state parameter adjusting module 402, configured to adjust a state parameter corresponding to a scene element based on scene adjustment information, where the scene adjustment information is used to indicate an adjustment manner of the state parameter corresponding to the scene element;
a behavior data collection module 403 configured to collect behavior data during the running of the adjusted candidate scenario based on the adjusted state parameter, the behavior data representing behaviors occurring in the candidate scenario;
and a target scene determining module 404 configured to determine the candidate scene as the target scene if the behavior data meets the target condition.
In a possible implementation manner, the state parameter adjusting module 402 is configured to adjust the state parameter multiple times according to a reference adjustment step size of the state parameter corresponding to the scene element, so as to obtain multiple candidate scenes;
and the target scene determining module 404 is configured to determine any candidate scene as the target scene when the behavior data corresponding to any candidate scene meets the target condition.
In one possible implementation, the scene adjustment information includes adjustment information of state parameters of multiple dimensions; the state parameter adjusting module 402 is configured to adjust the state parameters of the corresponding dimensions according to the adjustment information corresponding to each dimension.
In a possible implementation manner, the scene adjustment information includes obstacle adjustment information, and the state parameter adjustment module 402 is configured to adjust a state parameter corresponding to an obstacle according to the obstacle adjustment information; the state parameter corresponding to the obstacle comprises at least one of an initial position of the obstacle, an initial orientation of the obstacle, a moving speed of the obstacle or a moving track of the obstacle.
In one possible implementation, the state parameters corresponding to the obstacle include an initial position, an initial orientation, and a movement trajectory;
a state parameter adjustment module 402 configured to, in response to a mismatch between the initial pose of the obstacle and the movement trajectory, generate a transition trajectory from the initial pose to the movement trajectory based on the initial pose and the movement trajectory, and fit the transition trajectory with the movement trajectory; the initial pose includes the initial position and the initial orientation. One possible construction of the transition trajectory is sketched below.
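The sketch linearly interpolates position and heading from the initial pose to the first pose of the movement trajectory; the interpolation scheme and sample count are assumptions for illustration, not the disclosed method.

```python
# Minimal sketch of generating a transition trajectory when an obstacle's
# initial pose does not lie on its movement trajectory. Linear interpolation
# and the fixed sample count are illustrative assumptions.
import math

def transition_trajectory(initial_pose: tuple[float, float, float],
                          track_start: tuple[float, float, float],
                          samples: int = 10) -> list[tuple[float, float, float]]:
    """Interpolate (x, y, heading) from the initial pose to the first pose
    of the movement trajectory; the result can then be joined to the track."""
    x0, y0, h0 = initial_pose
    x1, y1, h1 = track_start
    dh = math.atan2(math.sin(h1 - h0), math.cos(h1 - h0))  # shortest turn
    return [
        (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, h0 + dh * t)
        for t in (i / samples for i in range(samples + 1))
    ]

# full_track = transition_trajectory(pose, track[0]) + track  # fit together
```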
In one possible implementation, the scene adjustment information includes signal lamp adjustment information, and the state parameter adjustment module 402 is configured to adjust a state parameter corresponding to a traffic signal lamp according to the signal lamp adjustment information; the state parameter corresponding to the traffic signal lamp comprises at least one of duration of an initial state of the traffic signal lamp, change frequency of the traffic signal lamp or a working state of the traffic signal lamp, and the working state comprises at least one of normal working, warning or damage.
In one possible implementation, the target condition includes conditions corresponding to a plurality of behaviors; and the target scene determining module 404 is configured to determine the candidate scene as the target scene if the behavior data meets the condition corresponding to the behavior.
In one possible implementation, the scene elements include obstacles and autonomous vehicles, and the target conditions include: a first distance between the obstacle and an initial position of the autonomous vehicle is not less than a second distance;
the device still includes:
a distance determination module configured to determine a second distance based on the initial speed of the obstacle and the initial speed of the autonomous vehicle, the second distance being in a positive correlation with the initial speed of the obstacle and the initial speed of the autonomous vehicle.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: a first angle difference between the orientation of the obstacle at any moment and a track direction is not larger than a second angle difference, and the track direction is the extending direction of a moving track corresponding to the position of the obstacle at any moment;
the device still includes:
and the angle difference determination module is configured to determine a second angle difference based on the moving speed of the obstacle at any moment, and the second angle difference is in a negative correlation relation with the moving speed.
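The two modules above compute thresholds that scale with speed; the sketch below uses assumed coefficients, chosen only to exhibit the stated positive and negative correlations.

```python
# Sketch of the speed-dependent thresholds computed by the distance and
# angle-difference modules; coefficients are illustrative assumptions.
def second_distance(obstacle_v0: float, av_v0: float,
                    k: float = 1.5) -> float:
    """Positively correlated with both initial speeds: faster actors
    require a larger initial separation (meters)."""
    return k * (obstacle_v0 + av_v0)

def second_angle_difference(obstacle_speed: float,
                            base: float = 0.5, k: float = 0.02) -> float:
    """Negatively correlated with moving speed: a fast obstacle must stay
    better aligned with its trajectory direction (radians)."""
    return max(0.05, base - k * obstacle_speed)
```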
In one possible implementation, the scene element includes an obstacle, and the target condition includes: the obstacle has no compaction line lane change and continuous lane change.
In one possible implementation, the scene elements include traffic lights and obstacles, and the target conditions include: the behavior of the obstacle corresponds to the indication of the traffic light.
In one possible implementation, the scene elements include obstacles and autonomous vehicles, and the target conditions include: no active impact of the obstacle to the autonomous vehicle occurs.
In one possible implementation, the scene element includes an obstacle, and the target condition includes: the moving speed of the obstacle at any moment is within the reference speed range;
the device still includes:
and the speed determination module is configured to determine a reference speed range corresponding to the position based on the position of the obstacle at any moment.
In one possible implementation, the apparatus further includes:
and the first scene adding module is configured to add the candidate scene as a first type of target scene to a scene database under the condition that the behavior data meets the target conditions, wherein the scene database is used for storing the target scene for testing, and the first type of target scene is the candidate scene of which the corresponding behavior data meets the target conditions.
In one possible implementation, the apparatus further includes:
the second scene adding module is configured to determine the quantity proportion between the first type of target scenes and the second type of target scenes in the scene database under the condition that the behavior data do not accord with the target conditions; under the condition that the quantity proportion is larger than the reference proportion, the alternative scene is used as a second type of target scene and is added into the scene database; and the second type of target scene is a candidate scene of which the corresponding behavior data does not meet the target condition.
In one possible implementation, the target scenario determining module 404 is configured to determine, when multiple behaviors occur in the alternative scenario, a behavior score corresponding to each behavior based on behavior data corresponding to each behavior; and under the condition that the sum of the behavior scores corresponding to the behaviors is not less than the threshold value, determining the alternative scene as the target scene.
In the embodiment of the application, once the scene adjustment information is set, only a template scene needs to be provided when creating virtual scenes: the state parameters of the scene elements in the template scene are adjusted automatically according to the scene adjustment information to obtain new candidate scenes, which greatly improves the efficiency of virtual scene creation. In addition, after a candidate scene is created, it is run, and the behavior data collected during the run is used as the basis for determining the target scenes that meet the requirements, which ensures the validity of the target scenes.
It should be noted that: the virtual scene creating apparatus provided in the foregoing embodiment is only illustrated by the division of the functional modules when creating a virtual scene, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules to complete all or part of the functions described above. In addition, the virtual scene creation apparatus provided in the above embodiments and the virtual scene creation method embodiment belong to the same concept, and specific implementation processes thereof are described in the method embodiment and are not described herein again.
The embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one program code, and the at least one program code is loaded and executed by the processor, so as to implement the operations executed in the virtual scene creation method of the foregoing embodiment.
Optionally, the computer device is provided as a terminal. Fig. 5 shows a block diagram of a terminal 500 according to an exemplary embodiment of the present application. The terminal 500 may be a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 500 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
The terminal 500 includes: a processor 501 and a memory 502.
The processor 501 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed by the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 502 is used to store at least one program code for execution by the processor 501 to implement the method of creating a virtual scene provided by the method embodiments herein.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502 and peripheral interface 503 may be connected by a bus or signal lines. Each peripheral may be connected to the peripheral interface 503 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, display screen 505, camera assembly 506, audio circuitry 507, positioning assembly 508, and power supply 509.
The peripheral interface 503 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 501 and the memory 502. In some embodiments, the processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 501, the memory 502, and the peripheral interface 503 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 504 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 504 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 504 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 504 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 505 is a touch display screen, it can also capture touch signals on or over its surface. A touch signal may be input to the processor 501 as a control signal for processing. In this case, the display screen 505 may also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 505, forming the front panel of the terminal 500; in other embodiments, there may be at least two display screens 505, disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved or folded surface of the terminal 500. The display screen 505 may even be arranged in an irregular, non-rectangular shape, i.e., a shaped screen. The display screen 505 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode).
The camera assembly 506 is used to capture images or video. Optionally, camera assembly 506 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 506 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuitry 507 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 501 for processing, or to the radio frequency circuit 504 for voice communication. The speaker converts electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves; it may be a traditional thin-film speaker or a piezoelectric ceramic speaker.

The positioning component 508 is used to determine the current geographic location of the terminal 500 for navigation or LBS (Location Based Service). The positioning component 508 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 509 is used to power the various components in terminal 500. The power source 509 may be alternating current, direct current, disposable or rechargeable. When power supply 509 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 500 also includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: acceleration sensor 511, gyro sensor 512, pressure sensor 513, fingerprint sensor 514, optical sensor 515, and proximity sensor 516.
The acceleration sensor 511 may detect the magnitude of acceleration on three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 501 may control the display screen 505 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 511. The acceleration sensor 511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may cooperate with the acceleration sensor 511 to acquire a 3D motion of the user on the terminal 500. The processor 501 may implement the following functions according to the data collected by the gyro sensor 512: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 513 may be disposed on a side frame of the terminal 500 and/or underneath the display screen 505. When the pressure sensor 513 is disposed on the side frame of the terminal 500, a user's holding signal of the terminal 500 may be detected, and the processor 501 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the display screen 505, the processor 501 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 505. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 514 is used for collecting a fingerprint of the user, and the processor 501 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the identity of the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the display screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the ambient light intensity is high, the display brightness of the display screen 505 is increased; when the ambient light intensity is low, the display brightness of the display screen 505 is reduced. In another embodiment, processor 501 may also dynamically adjust the shooting parameters of camera head assembly 506 based on the ambient light intensity collected by optical sensor 515.
A proximity sensor 516, also called a distance sensor, is provided on the front panel of the terminal 500. The proximity sensor 516 collects the distance between the user and the front of the terminal 500. In one embodiment, when the proximity sensor 516 detects that this distance is gradually decreasing, the processor 501 controls the display screen 505 to switch from the screen-on state to the screen-off state; when it detects that the distance is gradually increasing, the processor 501 controls the display screen 505 to switch from the screen-off state back to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 5 is not intended to be limiting of terminal 500 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Optionally, the computer device is provided as a server. Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application. The server 600 may vary greatly depending on configuration or performance, and may include one or more processors (CPUs) 601 and one or more memories 602, where the memory 602 stores at least one program code that is loaded and executed by the processor 601 to implement the virtual scene creation method provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, where at least one program code is stored in the computer-readable storage medium, and the at least one program code is loaded and executed by a processor, so as to implement the operations executed in the virtual scene creating method according to the foregoing embodiments.
The embodiment of the present application further provides a computer program including at least one program code, where the at least one program code is loaded and executed by a processor to implement the operations executed in the virtual scene creation method of the foregoing embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method for creating a virtual scene, the method comprising:
acquiring a template scene, wherein the template scene comprises scene elements, and the scene elements have corresponding state parameters;
adjusting the state parameters corresponding to the scene elements based on scene adjustment information, wherein the scene adjustment information is used for expressing the adjustment mode of the state parameters corresponding to the scene elements;
acquiring behavior data in the process of running the adjusted alternative scene based on the adjusted state parameters, wherein the behavior data represents behaviors occurring in the alternative scene;
and determining the candidate scene as a target scene under the condition that the behavior data meets a target condition.
2. The method of claim 1, wherein the adjusting the state parameter corresponding to the scene element based on the scene adjustment information comprises:
adjusting the state parameters for multiple times according to the reference adjustment step length of the state parameters corresponding to the scene elements to obtain multiple alternative scenes;
determining the candidate scene as a target scene under the condition that the behavior data meets a target condition, including:
and under the condition that the behavior data corresponding to any alternative scene meets the target condition, determining the any alternative scene as the target scene.
3. The method of claim 1, wherein the scene adjustment information comprises adjustment information of state parameters of a plurality of dimensions; the adjusting the state parameters corresponding to the scene elements based on the scene adjustment information includes:
and respectively adjusting the state parameters of the corresponding dimensionalities according to the adjustment information corresponding to each dimensionality.
4. The method of claim 1, wherein the scene adjustment information comprises obstacle adjustment information, and wherein adjusting the state parameter corresponding to the scene element based on the scene adjustment information comprises:
adjusting the state parameters corresponding to the obstacles according to the obstacle adjustment information;
wherein the state parameter corresponding to the obstacle comprises at least one of an initial position of the obstacle, an initial orientation of the obstacle, a moving speed of the obstacle or a moving track of the obstacle.
5. The method of claim 4, wherein the state parameters corresponding to the obstacle comprise the initial position, the initial orientation, and the movement trajectory; the adjusting the state parameter corresponding to the obstacle according to the obstacle adjustment information includes:
in response to the initial pose of the obstacle not matching the movement trajectory, generating a transition trajectory from the initial pose to the movement trajectory based on the initial pose and the movement trajectory;
fitting the transition track with the movement track;
wherein the initial pose includes the initial position and the initial orientation.
6. The method of claim 1, wherein the scene adjustment information comprises signal lamp adjustment information, and wherein adjusting the state parameter corresponding to the scene element based on the scene adjustment information comprises:
adjusting the state parameters corresponding to the traffic signal lamps according to the signal lamp adjustment information;
the state parameter corresponding to the traffic signal lamp comprises at least one of duration of an initial state of the traffic signal lamp, change frequency of the traffic signal lamp or a working state of the traffic signal lamp, wherein the working state comprises at least one of normal working, warning or damage.
7. The method of claim 1, wherein the target condition comprises conditions corresponding to a plurality of behaviors; determining the candidate scene as a target scene under the condition that the behavior data meets a target condition, including:
and under the condition that the behavior data accords with the condition corresponding to the behavior, determining the alternative scene as the target scene.
8. The method of claim 7, wherein the scene elements include obstacles and autonomous vehicles, and the target conditions include: a first distance between the obstacle and an initial position of the autonomous vehicle is not less than a second distance;
before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behavior, the method further includes:
determining the second distance based on an initial speed of the obstacle and an initial speed of the autonomous vehicle, the second distance being in a positive correlation with the initial speed of the obstacle and the initial speed of the autonomous vehicle.
9. The method of claim 7, wherein the scene element comprises an obstacle, and wherein the target condition comprises: a first angle difference between the orientation of the obstacle at any moment and a track direction is not larger than a second angle difference, and the track direction is the extending direction of a moving track corresponding to the position of the obstacle at any moment;
before determining the candidate scene as the target scene when the behavior data meets the condition corresponding to the behavior, the method further includes:
determining the second angle difference based on the moving speed of the obstacle at any one time, wherein the second angle difference and the moving speed are in a negative correlation relationship.
10. The method of claim 1, wherein determining the alternative scene as a target scene if the behavior data meets a target condition comprises:
and under the condition that the behavior data meet the target conditions, adding the candidate scenes serving as first-type target scenes into a scene database, wherein the scene database is used for storing target scenes for testing, and the first-type target scenes are candidate scenes of which the corresponding behavior data meet the target conditions.
11. The method of claim 10, further comprising:
determining a quantity ratio between the first type of target scene and a second type of target scene in the scene database if the behavior data does not meet the target condition;
under the condition that the quantity proportion is larger than a reference proportion, taking the candidate scene as the second type of target scene, and adding the second type of target scene into the scene database;
and the second type of target scene is a candidate scene of which the corresponding behavior data does not meet the target condition.
12. The method of claim 1, wherein determining the alternative scene as a target scene if the behavior data meets a target condition comprises:
under the condition that multiple behaviors occur in the alternative scene, determining a behavior score corresponding to each behavior based on behavior data corresponding to each behavior;
and determining the candidate scene as the target scene under the condition that the sum of the behavior scores corresponding to the behaviors is not less than a threshold value.
13. An apparatus for creating a virtual scene, the apparatus comprising:
the template scene acquisition module is configured to acquire a template scene, wherein the template scene comprises scene elements, and the scene elements have corresponding state parameters;
a state parameter adjusting module configured to adjust a state parameter corresponding to the scene element based on scene adjustment information, where the scene adjustment information is used to indicate an adjustment manner of the state parameter corresponding to the scene element;
a behavior data collection module configured to collect behavior data representing behaviors occurring in the alternative scene during operation of the adjusted alternative scene based on the adjusted state parameters;
a target scene determination module configured to determine the candidate scene as a target scene if the behavior data meets a target condition.
14. A computer device, characterized in that it comprises a processor and a memory in which at least one program code is stored, which is loaded and executed by the processor to implement the operations performed by the creation method of a virtual scene according to any one of claims 1 to 12.
15. A computer-readable storage medium, wherein at least one program code is stored in the storage medium, and the program code is loaded and executed by a processor to implement the operations performed by the method for creating a virtual scene according to any one of claims 1 to 12.
CN202110394125.7A 2021-04-13 2021-04-13 Virtual scene creating method, device, equipment and storage medium Withdrawn CN113160427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110394125.7A CN113160427A (en) 2021-04-13 2021-04-13 Virtual scene creating method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110394125.7A CN113160427A (en) 2021-04-13 2021-04-13 Virtual scene creating method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113160427A true CN113160427A (en) 2021-07-23

Family

ID=76890081

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110394125.7A Withdrawn CN113160427A (en) 2021-04-13 2021-04-13 Virtual scene creating method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113160427A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102160006A (en) * 2008-07-15 2011-08-17 空中侦察辨识和避免技术有限责任公司 System and method for preventing a collis
WO2020052421A1 (en) * 2018-09-13 2020-03-19 腾讯科技(深圳)有限公司 Method for configuring virtual scene, device, storage medium, and electronic device
CN111091739A (en) * 2018-10-24 2020-05-01 百度在线网络技术(北京)有限公司 Automatic driving scene generation method and device and storage medium
CN110795818A (en) * 2019-09-12 2020-02-14 腾讯科技(深圳)有限公司 Method and device for determining virtual test scene, electronic equipment and storage medium
CN111694287A (en) * 2020-05-14 2020-09-22 北京百度网讯科技有限公司 Obstacle simulation method and device in unmanned simulation scene
CN111714886A (en) * 2020-07-24 2020-09-29 腾讯科技(深圳)有限公司 Virtual object control method, device, equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113715817A (en) * 2021-11-02 2021-11-30 腾讯科技(深圳)有限公司 Vehicle control method, vehicle control device, computer equipment and storage medium
CN113918475A (en) * 2021-12-15 2022-01-11 腾讯科技(深圳)有限公司 Test processing method and device, computer equipment and storage medium
CN116358902A (en) * 2023-06-02 2023-06-30 中国第一汽车股份有限公司 Vehicle function testing method and device, electronic equipment and storage medium
CN116358902B (en) * 2023-06-02 2023-08-22 中国第一汽车股份有限公司 Vehicle function testing method and device, electronic equipment and storage medium
CN116822259A (en) * 2023-08-30 2023-09-29 北京国网信通埃森哲信息技术有限公司 Evaluation information generation method and device based on scene simulation and electronic equipment
CN116822259B (en) * 2023-08-30 2023-11-24 北京国网信通埃森哲信息技术有限公司 Evaluation information generation method and device based on scene simulation and electronic equipment

Similar Documents

Publication Publication Date Title
CN111126182B (en) Lane line detection method, lane line detection device, electronic device, and storage medium
CN113160427A (en) Virtual scene creating method, device, equipment and storage medium
CN112307642B (en) Data processing method, device, system, computer equipment and storage medium
CN111125442B (en) Data labeling method and device
CN110920631B (en) Method and device for controlling vehicle, electronic equipment and readable storage medium
CN110864913B (en) Vehicle testing method and device, computer equipment and storage medium
CN111192341A (en) Method and device for generating high-precision map, automatic driving equipment and storage medium
CN112669464A (en) Method and equipment for sharing data
CN113205515A (en) Target detection method, device and computer storage medium
WO2022213733A1 (en) Method and apparatus for acquiring flight route, and computer device and readable storage medium
CN112130945A (en) Gift presenting method, device, equipment and storage medium
CN115269097A (en) Navigation interface display method, navigation interface display device, navigation interface display equipment, storage medium and program product
CN112269939B (en) Automatic driving scene searching method, device, terminal, server and medium
CN113343457B (en) Automatic driving simulation test method, device, equipment and storage medium
CN112991439B (en) Method, device, electronic equipment and medium for positioning target object
CN111754564B (en) Video display method, device, equipment and storage medium
CN111147738A (en) Police vehicle-mounted panoramic and coma system, device, electronic equipment and medium
CN111223311B (en) Traffic flow control method, device, system, control equipment and storage medium
CN112734346B (en) Method, device and equipment for determining lane coverage and readable storage medium
CN110399688B (en) Method and device for determining environment working condition of automatic driving and storage medium
CN112699906B (en) Method, device and storage medium for acquiring training data
CN111259252A (en) User identification recognition method and device, computer equipment and storage medium
CN112863168A (en) Traffic grooming method and device, electronic equipment and medium
CN113239901B (en) Scene recognition method, device, equipment and storage medium
CN113361386B (en) Virtual scene processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210723