CN113361386A - Virtual scene processing method, device, equipment and storage medium - Google Patents

Virtual scene processing method, device, equipment and storage medium

Info

Publication number
CN113361386A
Authority
CN
China
Prior art keywords
scene
virtual
category
label
running
Prior art date
Legal status
Granted
Application number
CN202110619626.0A
Other languages
Chinese (zh)
Other versions
CN113361386B (en)
Inventor
董国勇
林辰
戈洋洋
朱峻林
Current Assignee
Suzhou Zhijia Technology Co Ltd
Original Assignee
Suzhou Zhijia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhijia Technology Co Ltd
Priority to CN202110619626.0A
Publication of CN113361386A
Application granted
Publication of CN113361386B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a virtual scene processing method, apparatus, device, and storage medium, belonging to the field of computer technology. The method includes: displaying a plurality of category labels in a label display interface, where the category labels are obtained by classifying a plurality of virtual scenes used for autonomous driving tests, and each category label corresponds to the virtual scenes belonging to one category; acquiring a category label selected in the label display interface; and running the virtual scene corresponding to the selected category label. According to the method, the virtual scenes used for autonomous driving tests are classified to obtain a category label for each category of virtual scenes, so that during simulation testing the category labels can be used to quickly screen out the virtual scenes a test requires and then run them. Even when the number of virtual scenes is large, this improves the efficiency of obtaining the virtual scenes required by a simulation test and therefore the efficiency of the simulation test itself.

Description

Virtual scene processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing a virtual scene.
Background
An autonomous vehicle, also called a driverless vehicle, is an intelligent vehicle that achieves unmanned driving through computer technology and has broad application prospects in the transportation field. To improve the safety of autonomous vehicles, extensive testing is essential. Simulation testing based on virtual scenes has great advantages in test efficiency, test cost, and other respects; it is an important means of testing autonomous vehicles and has become a current research focus. In the related art, however, the number of virtual scenes available for simulation testing is huge, and it is inconvenient to select the virtual scenes a test requires, so simulation testing is inefficient.
Disclosure of Invention
The embodiments of this application provide a virtual scene processing method, apparatus, device, and storage medium, which make it convenient to obtain the virtual scenes required by a simulation test and can improve the efficiency of the simulation test. The technical solution is as follows:
in one aspect, a method for processing a virtual scene is provided, where the method includes:
displaying a plurality of category labels in a label display interface, wherein the category labels are obtained by classifying a plurality of virtual scenes for carrying out automatic driving tests, and each category label corresponds to a virtual scene belonging to the same category;
acquiring a category label selected from the label display interface;
and operating the virtual scene corresponding to the selected category label.
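For illustration only, the following minimal Python sketch (not part of the patent; names such as ScenarioStore and run_scene are hypothetical) shows the core flow described above: present the category labels, take a selected label, and run the virtual scenes that belong to that category.

from dataclasses import dataclass, field

@dataclass
class ScenarioStore:
    # maps a category label to the virtual scenes belonging to that category
    scenes_by_label: dict[str, list[str]] = field(default_factory=dict)

    def labels(self) -> list[str]:
        return sorted(self.scenes_by_label)

    def scenes_for(self, label: str) -> list[str]:
        return self.scenes_by_label.get(label, [])

def run_scene(scene_id: str) -> None:
    # placeholder for launching one simulation run of the scene
    print(f"running virtual scene {scene_id}")

store = ScenarioStore({"cut-in": ["scene_001", "scene_002"], "rainy_day": ["scene_003"]})
print("category labels:", store.labels())      # label display interface
selected = "cut-in"                            # category label selected by the user
for scene_id in store.scenes_for(selected):    # run the scenes of that category
    run_scene(scene_id)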
In one possible implementation manner, before displaying the plurality of category labels in the label display interface, the method further includes:
determining scene elements for constituting a virtual scene;
determining a state parameter of the scene element;
generating the virtual scene based on the scene elements and the state parameters of the scene elements.
In one possible implementation, the determining scene elements for constituting a virtual scene includes:
displaying driving conditions to be met by an autonomous vehicle in the virtual scene, wherein the driving conditions comprise scene elements related to the autonomous vehicle;
acquiring the scene element input based on the driving condition.
In one possible implementation, the determining the state parameter of the scene element includes:
selecting a numerical value from a reference numerical value range at every sampling interval and determining the numerical value as the current state parameter of the scene element;
the generating the virtual scene based on the scene element and the state parameter of the scene element includes:
and generating the virtual scene based on the scene element and the current state parameter of the scene element.
In one possible implementation, the determining scene elements for constituting a virtual scene includes:
at least one of scene elements acquired by an automatic driving vehicle in a historical driving process, scene elements acquired by a non-automatic driving vehicle in the historical driving process, or scene elements in a virtual scene stored in a scene database is acquired.
In one possible implementation, the determining the state parameter of the scene element includes:
acquiring at least one of state parameters of scene elements acquired by an automatic driving vehicle in a historical driving process, state parameters of scene elements acquired by a non-automatic driving vehicle in the historical driving process, or state parameters of scene elements in a virtual scene acquired in a virtual scene running process in a scene database.
In one possible implementation, the generating the virtual scene based on the scene element and the state parameter of the scene element includes:
generating a candidate virtual scene corresponding to a target time period based on the state parameter of the scene element at each moment in the target time period;
intercepting scene segments which accord with scene intercepting conditions from the alternative virtual scene;
and acquiring the virtual scene based on the scene segment.
In a possible implementation manner, before the intercepting a scene segment meeting a scene intercepting condition from the alternative virtual scene, the method further includes:
displaying driving safety information, wherein the driving safety information comprises driving conditions which need to be met by an automatic driving vehicle in a target scene;
acquiring the scene interception condition input based on the target scene.
In a possible implementation manner, the obtaining the virtual scene based on the scene segment includes:
displaying the intercepted scene segments in a virtual scene display interface;
in response to a selection operation on any scene segment, determining the scene segment as the virtual scene.
In one possible implementation manner, before displaying the plurality of category labels in the label display interface, the method further includes:
acquiring scene labels corresponding to the plurality of virtual scenes respectively;
classifying the virtual scenes based on the scene labels respectively corresponding to the virtual scenes to obtain multiple categories, and acquiring the category label input for each category of virtual scenes.
In a possible implementation manner, the obtaining scene tags respectively corresponding to the multiple virtual scenes includes:
for any virtual scene, acquiring scene labels of multiple dimensions of the virtual scene;
the classifying the plurality of virtual scenes based on the scene tags respectively corresponding to the plurality of virtual scenes to obtain a plurality of categories, and obtaining the category tag input for each category of virtual scenes includes:
classifying the plurality of virtual scenes based on scene tags of a first dimension of the plurality of virtual scenes to obtain a plurality of categories on the first dimension, and acquiring a first category tag input for each category of virtual scenes;
wherein the first dimension is any one of the plurality of dimensions.
In a possible implementation manner, after the classifying the plurality of virtual scenes based on the scene tags of the first dimension of the plurality of virtual scenes to obtain a plurality of categories in the first dimension and acquiring the first category tags input for the virtual scenes of each category, the method further includes:
classifying a plurality of virtual scenes with the same first class labels based on second-dimension scene labels of the virtual scenes to obtain a plurality of classes on the second dimension, and acquiring second class labels input for the virtual scenes of each class;
wherein the second dimension is any one of the plurality of dimensions except the first dimension.
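As a hedged illustration of the two-level classification described above (not the patent's implementation; the dimension names "weather" and "road" and the helper names are assumptions), a Python sketch might group scenes by a first-dimension tag and then sub-group each first category by a second-dimension tag:

from collections import defaultdict

scene_tags = {
    "scene_001": {"weather": "rainy", "road": "highway"},
    "scene_002": {"weather": "rainy", "road": "urban"},
    "scene_003": {"weather": "sunny", "road": "highway"},
}

def classify(tags_by_scene: dict[str, dict[str, str]], dimension: str) -> dict[str, list[str]]:
    # group scene ids by their tag value in one dimension; each key acts as a category label
    groups: dict[str, list[str]] = defaultdict(list)
    for scene_id, tags in tags_by_scene.items():
        groups[tags[dimension]].append(scene_id)
    return dict(groups)

first_level = classify(scene_tags, "weather")   # categories in the first dimension
second_level = {
    label: classify({s: scene_tags[s] for s in ids}, "road")  # sub-categories in the second dimension
    for label, ids in first_level.items()
}
# e.g. {"rainy": {"highway": ["scene_001"], "urban": ["scene_002"]}, "sunny": {"highway": ["scene_003"]}}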
In a possible implementation manner, the tag presentation interface includes a plurality of category tags with different dimensions, and the obtaining of the category tag selected from the tag presentation interface includes:
obtaining a plurality of category labels with different dimensions selected from the label display interface;
the running of the virtual scene corresponding to the selected category label comprises the following steps:
and operating the virtual scene corresponding to the selected multiple category labels.
In one possible implementation manner, the displaying a plurality of category labels in the label display interface includes:
displaying a first category label of a plurality of first dimensions in the label display interface;
and responding to the triggering operation of a pull-down control corresponding to any first category label, and displaying a plurality of second-dimension second category labels under the first category labels in the label display interface.
In a possible implementation manner, the tag presentation interface further includes a scene tag of each virtual scene corresponding to any category tag, and the method further includes:
acquiring a scene label selected from the label display interface;
and operating the virtual scene corresponding to the selected scene label.
In a possible implementation manner, the running of the virtual scene corresponding to the selected category label includes:
acquiring the operation condition of the virtual scene corresponding to the selected category label, wherein the operation condition is a condition which needs to be met by behavior data of scene elements in the virtual scene in the operation process of the virtual scene;
acquiring behavior data of the scene elements in the process of running the virtual scene, wherein the behavior data represents the behavior of the scene elements in the virtual scene;
and generating an operation result based on the behavior data and the operation condition.
In one possible implementation, the generating a running result based on the behavior data and the running condition includes:
determining that the running result is a pass when the behavior data satisfies the running condition;
and determining that the running result is a fail when the behavior data does not satisfy the running condition.
In a possible implementation manner, the determining that the running result is a pass when the behavior data satisfies the running condition includes:
determining that the running result is a pass when the virtual scene has multiple running conditions and the behavior data for each of the running conditions satisfies that condition;
the determining that the running result is a fail when the behavior data does not satisfy the running condition includes:
determining that the running result is a fail when the virtual scene has multiple running conditions and the behavior data for any one of the running conditions does not satisfy that condition.
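The pass/fail rule above can be sketched in a few lines of Python; this is only an assumed illustration (the condition functions and behavior-data fields are invented, not from the patent): a run passes only when the behavior data for every running condition satisfies that condition, and fails as soon as any condition is not met.

from typing import Callable

RunCondition = Callable[[dict], bool]  # takes behavior data, returns True if the condition is satisfied

def evaluate_run(behavior_data: dict, conditions: list[RunCondition]) -> bool:
    # pass only if every running condition is satisfied by the behavior data
    return all(condition(behavior_data) for condition in conditions)

# hypothetical conditions: keep a minimum gap and never exceed a speed limit
conditions = [
    lambda d: d["min_gap_m"] >= 2.0,
    lambda d: d["max_speed_kmh"] <= 120.0,
]
behavior = {"min_gap_m": 3.4, "max_speed_kmh": 95.0}
print("pass" if evaluate_run(behavior, conditions) else "fail")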
In a possible implementation manner, after generating the operation result based on the behavior data and the operation condition, the method further includes:
displaying a running result display interface, where the running result display interface includes at least one of: the number of virtual scenes whose runs passed; the number of virtual scenes whose runs failed; for each virtual scene that was run, the running conditions satisfied by the behavior data corresponding to the virtual scene; for each virtual scene that was run, the running conditions not satisfied by the behavior data corresponding to the virtual scene; the run pass rate of the virtual scenes of any category; the pass rate of any running condition; or the ratio of the number of virtual scenes that were run to the total number of virtual scenes.
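A short sketch of how the summary statistics listed above could be computed follows; the record layout is an assumption for illustration and is not prescribed by the patent.

results = [
    {"scene": "scene_001", "category": "cut-in", "passed": True},
    {"scene": "scene_002", "category": "cut-in", "passed": False},
    {"scene": "scene_003", "category": "rainy_day", "passed": True},
]

passed = sum(r["passed"] for r in results)
print(f"runs passed: {passed}, runs failed: {len(results) - passed}")

# run pass rate per category label
for category in sorted({r["category"] for r in results}):
    runs = [r for r in results if r["category"] == category]
    rate = sum(r["passed"] for r in runs) / len(runs)
    print(f"{category}: pass rate {rate:.0%}")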
In a possible implementation manner, the obtaining of the operating condition of the virtual scene corresponding to the selected category label includes:
displaying an operation condition setting interface;
and acquiring the input running condition of the virtual scene based on the running condition setting interface.
In another aspect, an apparatus for processing a virtual scene is provided, the apparatus including:
the label display module is configured to display a plurality of category labels in a label display interface, wherein the category labels are obtained by classifying a plurality of virtual scenes for automatic driving test, and each category label corresponds to a virtual scene belonging to the same category;
the label obtaining module is configured to obtain a category label selected from the label display interface;
and the scene operation module is configured to operate the virtual scene corresponding to the selected category label.
In one possible implementation, the apparatus further includes:
an element determination module configured to determine scene elements for constituting a virtual scene;
a parameter determination module configured to determine a state parameter of the scene element;
a scene generation module configured to generate the virtual scene based on the scene elements and state parameters of the scene elements.
In one possible implementation, the element determination module is configured to present driving conditions to be met by an autonomous vehicle in the virtual scene, the driving conditions including scene elements related to the autonomous vehicle;
acquiring the scene element input based on the driving condition.
In a possible implementation manner, the parameter determining module is configured to select a value from a reference value range at every sampling interval and determine the value as the current state parameter of the scene element;
the scene generation module is configured to generate one virtual scene based on the scene element and the current state parameter of the scene element.
In one possible implementation, the element determination module is configured to obtain at least one of scene elements collected by an autonomous vehicle during historical driving, scene elements collected by a non-autonomous vehicle during historical driving, or scene elements in a virtual scene stored in a scene database.
In one possible implementation, the parameter determination module is configured to obtain at least one of a state parameter of a scene element acquired by an autonomous vehicle during historical driving, a state parameter of a scene element acquired by a non-autonomous vehicle during historical driving, or a state parameter of a scene element in a virtual scene acquired during running of the virtual scene in a scene database.
In one possible implementation manner, the scene generation module includes:
the scene generating unit is configured to generate a candidate virtual scene corresponding to a target time period based on the state parameter of the scene element at each moment in the target time period;
a segment intercepting unit configured to intercept scene segments meeting scene intercepting conditions from the candidate virtual scene;
a scene acquisition unit configured to acquire the virtual scene based on the scene segment.
In a possible implementation manner, the scene generation module further includes:
the intercepting condition acquisition unit is configured to display driving safety information, and the driving safety information comprises driving conditions which need to be met by the automatic driving vehicle in a target scene; acquiring the scene interception condition input based on the target scene.
In a possible implementation manner, the scene obtaining unit is configured to display the intercepted scene segment in a virtual scene display interface;
in response to a selection operation on any scene segment, determining the scene segment as the virtual scene.
In one possible implementation, the apparatus further includes:
a scene tag obtaining module configured to obtain scene tags corresponding to the plurality of virtual scenes, respectively;
the category label obtaining module is configured to classify the plurality of virtual scenes based on the scene labels respectively corresponding to the plurality of virtual scenes to obtain a plurality of categories, and obtain a category label input for each category of virtual scenes.
In a possible implementation manner, the scene tag obtaining module is configured to obtain, for any virtual scene, scene tags of multiple dimensions of the virtual scene;
the category label obtaining module is configured to classify the plurality of virtual scenes based on scene labels of a first dimension of the plurality of virtual scenes to obtain a plurality of categories in the first dimension, and obtain a first category label input for each category of virtual scenes; wherein the first dimension is any one of the plurality of dimensions.
In a possible implementation manner, the category label obtaining module is further configured to, for a plurality of virtual scenes having the same first category label, classify the plurality of virtual scenes based on scene labels of a second dimension of the plurality of virtual scenes to obtain a plurality of categories in the second dimension, and obtain a second category label input for each category of virtual scenes; wherein the second dimension is any one of the plurality of dimensions except the first dimension.
In a possible implementation manner, the tag display interface includes a plurality of category tags with different dimensions, and the tag obtaining module is configured to obtain the plurality of category tags with different dimensions selected from the tag display interface;
the scene operation module is configured to operate virtual scenes corresponding to the selected multiple category labels.
In one possible implementation manner, the tag display module is configured to display a plurality of first-dimension first category tags in the tag display interface; and responding to the triggering operation of a pull-down control corresponding to any first category label, and displaying a plurality of second-dimension second category labels under the first category labels in the label display interface.
In a possible implementation manner, the tag presentation interface further includes a scene tag of each virtual scene corresponding to any category tag, and the scene operation module is further configured to acquire a scene tag selected from the tag presentation interface; and operating the virtual scene corresponding to the selected scene label.
In one possible implementation manner, the scene running module includes:
the operation condition acquisition unit is configured to acquire an operation condition of a virtual scene corresponding to the selected category label, wherein the operation condition is a condition which needs to be met by behavior data of scene elements in the virtual scene in the operation process of the virtual scene;
a behavior data acquisition unit configured to acquire behavior data of the scene element during the operation of the virtual scene, the behavior data representing a behavior of the scene element in the virtual scene;
an operation result generation unit configured to generate an operation result based on the behavior data and the operation condition.
In one possible implementation manner, the operation result generation unit includes:
a first determining subunit configured to determine that the operation result is a pass operation if the behavior data satisfies the operation condition;
a second determination subunit configured to determine that the operation result is not passed if the behavior data does not satisfy the operation condition.
In a possible implementation manner, the first determining subunit is configured to determine that the running result passes when the running condition of the virtual scene is multiple and the behavior data for the multiple running conditions respectively satisfy the multiple running conditions;
the second determining subunit is configured to determine that the operation result is not passed when the operation condition of the virtual scene is multiple and the behavior data targeted by any operation condition does not satisfy any operation condition.
In one possible implementation, the apparatus further includes:
the interface display module is configured to display a running result display interface, where the running result display interface includes at least one of: the number of virtual scenes whose runs passed; the number of virtual scenes whose runs failed; for each virtual scene that was run, the running conditions satisfied by the behavior data corresponding to the virtual scene; for each virtual scene that was run, the running conditions not satisfied by the behavior data corresponding to the virtual scene; the run pass rate of the virtual scenes of any category; the pass rate of any running condition; or the ratio of the number of virtual scenes that were run to the total number of virtual scenes.
In a possible implementation manner, the operating condition obtaining unit is configured to display an operating condition setting interface; and acquiring the input running condition of the virtual scene based on the running condition setting interface.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one program code is stored in the memory, and the program code is loaded by the processor and executed to implement the operations executed in the processing method of the virtual scene in any one of the above possible implementation manners.
In another aspect, a computer-readable storage medium is provided, in which at least one program code is stored, and the program code is loaded and executed by a processor to implement the operations performed in the processing method of the virtual scene in any one of the above possible implementation manners.
In another aspect, a computer program product is provided, where the computer program product includes at least one program code, and the program code is loaded and executed by a processor to implement the operations performed in the processing method of the virtual scene in any one of the above possible implementation manners.
The beneficial effects brought by the technical solutions provided in the embodiments of this application include at least the following:
In the embodiments of this application, the virtual scenes used for autonomous driving tests are classified to obtain a category label for each category of virtual scenes, so that during simulation testing the category labels can be used to quickly screen out the virtual scenes a test requires and then run them. Even when the number of virtual scenes is large, this improves the efficiency of obtaining the virtual scenes required by a simulation test and therefore the efficiency of the simulation test itself.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a processing method of a virtual scene according to an embodiment of the present application;
fig. 3 is a flowchart of a processing method of a virtual scene according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a tag display interface provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of an operation result display interface provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a processing procedure of a virtual scene according to an embodiment of the present disclosure;
fig. 7 is a block diagram of a processing apparatus for a virtual scene according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a server provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The terms "first," "second," "third," "fourth," and the like as used herein may be used herein to describe various concepts, but these concepts are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first dimension may be referred to as a dimension, and similarly, a second dimension may be referred to as a first dimension, without departing from the scope of the present application.
As used herein, "at least one" includes one, two, or more than two; "a plurality" includes two or more than two; "each" refers to every one of the corresponding plurality; and "any" refers to any one of the plurality. For example, if the plurality of virtual scenes includes 3 virtual scenes, "each" refers to every one of the 3 virtual scenes, and "any" refers to any one of the 3 virtual scenes, which may be the first, the second, or the third.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102. The terminal 101 and the server 102 are connected via a wireless or wired network. Optionally, the terminal 101 is a computer, a mobile phone, a tablet computer, or other terminal. Optionally, the server 102 is a background server or a cloud server providing services such as cloud computing and cloud storage.
Optionally, the terminal 101 has installed thereon a target application served by the server 102, and the terminal 101 can implement functions such as data transmission, message interaction, and the like through the target application. Optionally, the target application is a target application in an operating system of the terminal 101, or a target application provided by a third party. The target application has a function of processing a virtual scene, for example, the target application can create a virtual scene, classify the virtual scene, search the virtual scene, execute the virtual scene, generate an execution result of the virtual scene, and the like. Optionally, of course, the target application can also have other functions, and the embodiment of the present application does not limit this. Optionally, the target application is a simulation application or a management application of a virtual scene, which is not limited in this embodiment of the application.
In the embodiment of the present application, the server 102 classifies a plurality of virtual scenes for performing an automated driving simulation test to obtain a plurality of category labels, and transmits the plurality of category labels to the terminal 101. The terminal 101 is configured to display the plurality of category labels, determine a category label selected from the plurality of category labels, acquire a virtual scene corresponding to the selected category label from the server 102, and perform a simulation test based on the virtual scene.
It should be noted that the embodiments of the present application are described by taking as an example an implementation environment that includes both the terminal 101 and the server 102; in other embodiments, the implementation environment includes only the terminal 101 or only the server 102, and the virtual scene is processed by the terminal 101 or the server 102 accordingly.
The processing method of the virtual scene can be applied to a scene of an automatic driving simulation test, for example, when the automatic driving simulation test is required to be carried out on an automatic driving vehicle, the required virtual scene can be searched out through the method provided by the application, and then the automatic driving simulation test is carried out on the automatic driving vehicle based on the virtual scene. Of course, the processing method of the virtual scene provided by the present application can also be applied to other scenes, and the embodiment of the present application does not limit this.
Fig. 2 is a flowchart of a processing method of a virtual scene according to an embodiment of the present application. The execution subject is a computer device. Referring to fig. 2, the embodiment includes:
201. Displaying a plurality of category labels in a label display interface, where the category labels are obtained by classifying a plurality of virtual scenes used for autonomous driving tests, and each category label corresponds to the virtual scenes belonging to one category.
202. Acquiring the category label selected from the label display interface.
203. Running the virtual scene corresponding to the selected category label.
In the embodiments of this application, the virtual scenes used for autonomous driving tests are classified to obtain a category label for each category of virtual scenes, so that during simulation testing the category labels can be used to quickly screen out the virtual scenes a test requires and then run them. Even when the number of virtual scenes is large, this improves the efficiency of obtaining the virtual scenes required by a simulation test and therefore the efficiency of the simulation test itself.
In one possible implementation, before displaying the plurality of category labels in the label display interface, the method further includes:
determining scene elements for constituting a virtual scene;
determining state parameters of the scene elements;
generating a virtual scene based on the scene elements and the state parameters of the scene elements.
In one possible implementation, determining scene elements for composing a virtual scene includes:
displaying driving conditions to be met by the automatic driving vehicle in the virtual scene, wherein the driving conditions comprise scene elements related to the automatic driving vehicle;
scene elements input based on driving conditions are acquired.
In one possible implementation, determining the state parameter of the scene element includes:
selecting a numerical value from the reference numerical value range at every sampling interval and determining the numerical value as the current state parameter of the scene element;
generating a virtual scene based on the scene elements and the state parameters of the scene elements, including:
and generating a virtual scene based on the scene elements and the current state parameters of the scene elements.
In one possible implementation, determining scene elements for composing a virtual scene includes:
at least one of scene elements acquired by an automatic driving vehicle in a historical driving process, scene elements acquired by a non-automatic driving vehicle in the historical driving process, or scene elements in a virtual scene stored in a scene database is acquired.
In one possible implementation, determining the state parameter of the scene element includes:
acquiring at least one of state parameters of scene elements acquired by an automatic driving vehicle in a historical driving process, state parameters of scene elements acquired by a non-automatic driving vehicle in the historical driving process, or state parameters of scene elements in a virtual scene acquired in a virtual scene in a process of running the virtual scene in a scene database.
In one possible implementation, generating a virtual scene based on scene elements and state parameters of the scene elements includes:
generating a candidate virtual scene corresponding to the target time period based on the state parameters of the scene elements at each moment in the target time period;
intercepting scene fragments meeting scene intercepting conditions from the alternative virtual scene;
based on the scene segments, a virtual scene is obtained.
In a possible implementation manner, before the scene segment meeting the scene cut condition is cut from the candidate virtual scene, the method further includes:
displaying driving safety information, wherein the driving safety information comprises driving conditions which need to be met by the automatic driving vehicle in a target scene;
and acquiring a scene interception condition input based on the target scene.
In one possible implementation, acquiring a virtual scene based on a scene segment includes:
displaying the intercepted scene segments in a virtual scene display interface;
in response to a selection operation for any of the scene segments, the scene segment is determined as a virtual scene.
In one possible implementation, before displaying the plurality of category labels in the label display interface, the method further includes:
acquiring scene labels corresponding to a plurality of virtual scenes respectively;
the method comprises the steps of classifying a plurality of virtual scenes based on scene labels respectively corresponding to the virtual scenes to obtain a plurality of categories, and obtaining category labels input for the virtual scenes of each category.
In a possible implementation manner, obtaining scene tags corresponding to a plurality of virtual scenes respectively includes:
for any virtual scene, scene labels of multiple dimensions of the virtual scene are obtained;
classifying the plurality of virtual scenes based on the scene tags respectively corresponding to the plurality of virtual scenes to obtain a plurality of categories, and acquiring the category tag input for each category of virtual scenes, the method comprises the following steps:
classifying the plurality of virtual scenes based on scene tags of a first dimension of the plurality of virtual scenes to obtain a plurality of categories on the first dimension, and acquiring a first category tag input for each category of virtual scenes;
wherein the first dimension is any one of a plurality of dimensions.
In a possible implementation manner, after classifying the plurality of virtual scenes based on scene tags of a first dimension of the plurality of virtual scenes to obtain a plurality of categories in the first dimension and acquiring a first category tag input for each category of virtual scenes, the method further includes:
classifying a plurality of virtual scenes with the same first class labels based on second-dimension scene labels of the plurality of virtual scenes to obtain a plurality of classes on a second dimension, and acquiring second class labels input for the virtual scenes of each class;
the second dimension is any one of the dimensions except the first dimension.
In a possible implementation manner, the tag display interface includes a plurality of category tags with different dimensions, and the obtaining of the category tag selected from the tag display interface includes:
obtaining a plurality of category labels with different dimensions selected from a label display interface;
operating the virtual scene corresponding to the selected category label, including:
and operating the virtual scene corresponding to the selected multiple category labels.
In one possible implementation, displaying a plurality of category labels in a label presentation interface includes:
displaying a plurality of first dimension first category labels in a label display interface;
and responding to the triggering operation of the pull-down control corresponding to any first category label, and displaying a plurality of second category labels of a second dimension under the first category labels in a label display interface.
In a possible implementation manner, the tag display interface further includes a scene tag of each virtual scene corresponding to any category tag, and the method further includes:
acquiring a scene label selected from a label display interface;
and operating the virtual scene corresponding to the selected scene label.
In a possible implementation manner, running the virtual scene corresponding to the selected category label includes:
acquiring the operation condition of the virtual scene corresponding to the selected category label, wherein the operation condition is a condition which is required to be met by behavior data of scene elements in the virtual scene in the operation process of the virtual scene;
in the process of running the virtual scene, behavior data of scene elements are collected, and the behavior data represent behaviors of the scene elements in the virtual scene;
based on the behavior data and the operating conditions, an operating result is generated.
In one possible implementation, generating the operation result based on the behavior data and the operation condition includes:
determining that the operation result is a pass when the behavior data satisfies the operation condition;
and determining that the operation result is a fail when the behavior data does not satisfy the operation condition.
In one possible implementation, the determining that the operation result is a pass when the behavior data satisfies the operation condition includes:
determining that the operation result is a pass when the virtual scene has multiple operation conditions and the behavior data for each of the operation conditions satisfies that condition;
the determining that the operation result is a fail when the behavior data does not satisfy the operation condition includes:
determining that the operation result is a fail when the virtual scene has multiple operation conditions and the behavior data for any one of the operation conditions does not satisfy that condition.
In one possible implementation, after generating the operation result based on the behavior data and the operation condition, the method further includes:
displaying a running result display interface, where the running result display interface includes at least one of: the number of virtual scenes whose runs passed; the number of virtual scenes whose runs failed; for each virtual scene that was run, the running conditions satisfied by the behavior data corresponding to the virtual scene; for each virtual scene that was run, the running conditions not satisfied by the behavior data corresponding to the virtual scene; the run pass rate of the virtual scenes of any category; the pass rate of any running condition; or the ratio of the number of virtual scenes that were run to the total number of virtual scenes.
In a possible implementation manner, obtaining the operating condition of the virtual scene corresponding to the selected category label includes:
displaying an operation condition setting interface;
and setting an interface based on the running conditions, and acquiring the running conditions of the input virtual scene.
Fig. 3 is a flowchart of a processing method of a virtual scene according to an embodiment of the present application. Referring to fig. 3, the embodiment includes:
301. The terminal acquires a plurality of virtual scenes for automatic driving tests.
The virtual scene is a virtual three-dimensional scene obtained by modeling. The virtual scene includes scene elements, which are the objects that make up the virtual scene. Scene elements include dynamic scene elements and static scene elements. A dynamic scene element is an element that can move in the scene, for example, a pedestrian or a vehicle on the road. Correspondingly, a static scene element is an element that cannot move in the scene, such as a roadblock or a tree. In practice, the types of scene elements are very rich: any object that can be seen in a real driving scene can become a scene element in a virtual scene, and the embodiment of the present application does not limit this. Weather, for example, is also a scene element, optionally including rain, fog, snow, light, and the like. A scene element has corresponding state parameters indicating the state of the scene element. For example, for a scene element such as a traffic light, the state parameters can include a change frequency indicating how frequently the traffic light changes. As another example, for a scene element such as a vehicle, the state parameters can include a moving speed and a moving trajectory, indicating how fast the vehicle moves and along what path. As another example, for a scene element such as snow, the state parameters can include the depth of snow on the ground. As another example, for a scene element such as fog, the state parameters can include the intensity of the fog.
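For concreteness, a minimal data-model sketch of scene elements and their state parameters is given below; the patent does not prescribe a schema, so the class and field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class SceneElement:
    name: str                       # e.g. "traffic_light", "vehicle", "snow"
    dynamic: bool                   # dynamic elements can move in the scene, static ones cannot
    state: dict[str, float] = field(default_factory=dict)  # state parameters of the element

elements = [
    SceneElement("traffic_light", dynamic=False, state={"change_frequency_s": 30.0}),
    SceneElement("vehicle", dynamic=True, state={"speed_kmh": 60.0}),
    SceneElement("snow", dynamic=False, state={"depth_cm": 5.0}),
]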
In a possible implementation manner, the acquiring, by the terminal, the virtual scene includes: the terminal determines scene elements for forming a virtual scene; determining a state parameter of the scene element; and the terminal generates a virtual scene based on the scene element and the state parameter of the scene element.
In one possible implementation, the terminal determining scene elements for constituting the virtual scene includes: the terminal displays driving conditions to be met by the automatic driving vehicle in the virtual scene, wherein the driving conditions comprise scene elements related to the automatic driving vehicle; the terminal acquires a scene element input based on the driving condition.
The driving conditions to be met by the autonomous vehicle in the virtual scene are the driving conditions in any piece of driving safety information, where the driving safety information includes the driving conditions that the autonomous vehicle needs to meet in various scenes. The driving safety information includes driving-related law and regulation information, functional safety information, expected functional safety information, functional specification information, and the like. The functional safety information includes how the autonomous vehicle should respond when a certain component of the vehicle fails, the degree of damage to the autonomous vehicle when that component fails, and the like. The expected functional safety information includes performance conditions that certain components of the autonomous vehicle need to meet. The functional specification information includes conditions that the autonomous vehicle needs to meet during the development and testing phases.
In the embodiment of the application, the driving conditions required to be met by the automatic driving vehicle in the virtual scene are displayed, so that a user can analyze the driving conditions, scene elements related to the automatic driving vehicle are determined from the driving conditions, and then the scene elements are input, so that the scene elements for forming the virtual scene can be determined by human experience, and the required virtual scene can be designed.
Optionally, the terminal displays a scene element setting interface, the scene element setting interface includes driving conditions that need to be met by an autonomous vehicle in the virtual scene, and the terminal acquires the input scene elements based on the scene element setting interface. Optionally, the scene element setting interface includes a scene element input control in which the user can input the scene element. Correspondingly, the terminal acquires the scene elements input in the scene element input control. Optionally, the scene element setting interface includes a plurality of scene elements and a selection control corresponding to each scene element. The user can select the scene element by triggering the selection control corresponding to the scene element, and correspondingly, the terminal acquires the scene element corresponding to the triggered selection control.
In another possible implementation manner, the determining, by the terminal, scene elements for constituting the virtual scene includes: the terminal acquires at least one of scene elements acquired by an automatic driving vehicle in the historical driving process, scene elements acquired by a non-automatic driving vehicle in the historical driving process, or scene elements in a virtual scene stored in a scene database.
The scene elements collected by the autonomous vehicle during historical driving include: the scene elements in the autonomous driving road test data, that is, the scene elements acquired by the autonomous vehicle while driving in an open field (for example, an open road), and the scene elements in the closed field test data, that is, the scene elements acquired by the autonomous vehicle while driving in a closed field. The scene elements collected by a non-autonomous vehicle during historical driving are the scene elements in natural driving data. The virtual scenes stored in the scene database include virtual scenes created by the party to which the scene database belongs as well as virtual scenes provided by a third party. Optionally, the data that includes traffic accident scenes in the autonomous driving road test data, the closed field test data, the natural driving data, and the like is referred to as traffic accident data; accordingly, the scene elements acquired by the terminal include the scene elements in the traffic accident data.
Optionally, the terminal acquires the scene element through various sensors on an autonomous driving vehicle or a non-autonomous driving vehicle, where the sensors include various sensors such as a camera and a lidar, which is not limited in this embodiment of the present application.
In the embodiment of the application, various methods for determining scene elements for forming a virtual scene are provided, including acquiring scene elements acquired by an autonomous vehicle, scene elements acquired by a non-autonomous vehicle, and scene elements in a generated virtual scene, so that the acquired scene elements are richer, and the virtual scene generated based on the scene elements is richer. Moreover, scene elements acquired by the automatic driving vehicle and the non-automatic driving vehicle in the historical driving process are scene elements existing in the real environment, so that the virtual scene generated based on the scene elements is more consistent with the real environment, and the effect of carrying out simulation test based on the virtual scene can be ensured.
In one possible implementation manner, the determining, by the terminal, the state parameter of the scene element includes: selecting a value by the terminal from the reference value range at every sampling interval and determining the value as the current state parameter of the scene element; the terminal generates a virtual scene based on the scene element and the state parameter of the scene element, and the method comprises the following steps: and the terminal generates a virtual scene based on the scene element and the current state parameter of the scene element.
The reference value range is an arbitrary value range, and for example, the reference value range is 0 km/h to 120 km/h for the state parameter of the speed of the vehicle. The sampling interval is an arbitrary numerical interval, for example, 5 km/h for a state parameter of the speed of the vehicle.
In the embodiment of the application, a plurality of different state parameters can be quickly acquired by using the reference value range and the sampling interval, so that a plurality of virtual scenes with different state parameters are generated, the number of the virtual scenes can be increased, and the coverage rate of the virtual scenes is improved.
Optionally, an implementation manner of the terminal obtaining the reference value range and the sampling interval includes: and the terminal displays a state parameter setting interface, wherein the state parameter setting interface comprises a state parameter range input control and a sampling interval input control. The user can enter a reference value range in a range input control and a sampling interval in a sampling interval input control. Correspondingly, the terminal obtains the reference value range input in the range input control and obtains the sampling interval input in the sampling interval input control.
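As an assumed illustration of the sampling described above (the names and the scene representation are not from the patent), a Python sketch can sweep a state parameter over the reference value range at the sampling interval and spawn one virtual scene per sampled value:

def sample_values(low: float, high: float, step: float) -> list[float]:
    # walk from the lower bound of the reference value range to the upper bound in steps of the sampling interval
    values, value = [], low
    while value <= high:
        values.append(value)
        value += step
    return values

# e.g. vehicle speed swept from 0 km/h to 120 km/h at a 5 km/h sampling interval -> 25 values
speeds = sample_values(0.0, 120.0, 5.0)
scenes = [{"element": "vehicle", "speed_kmh": speed} for speed in speeds]
print(len(scenes), "virtual scenes generated")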
In another possible implementation manner, the determining, by the terminal, the state parameter of the scene element includes: the terminal acquires at least one of the state parameters of the scene elements acquired by the automatic driving vehicle in the historical driving process, the state parameters of the scene elements acquired by the non-automatic driving vehicle in the historical driving process, or the state parameters of the scene elements in the virtual scene acquired in the process of running the virtual scene in the scene database.
The state parameters of the scene elements collected by the autonomous vehicle during historical driving include: the state parameters of the scene elements in the autonomous driving road test data, that is, the state parameters of the scene elements acquired by the autonomous vehicle while driving in an open field (for example, an open road), and the state parameters of the scene elements in the closed field test data, that is, the state parameters of the scene elements acquired by the autonomous vehicle while driving in a closed field. The state parameters of the scene elements collected by a non-autonomous vehicle during historical driving are the state parameters of the scene elements in natural driving data. The virtual scenes stored in the scene database include virtual scenes created by the party to which the scene database belongs as well as virtual scenes provided by a third party. Optionally, the data that includes traffic accident scenes in the autonomous driving road test data, the closed field test data, the natural driving data, and the like is referred to as traffic accident data; accordingly, the state parameters acquired by the terminal include the state parameters of the scene elements in the traffic accident data.
Optionally, the terminal acquires the state parameters of the scene elements through various sensors on an autonomous driving vehicle or a non-autonomous driving vehicle, where the sensors include various sensors such as a camera and a laser radar, which is not limited in this embodiment of the present application.
In the embodiment of the application, various methods for determining the state parameters of the scene elements are provided, including acquiring the state parameters of the scene elements acquired by the automatic driving vehicle, the state parameters of the scene elements acquired by the non-automatic driving vehicle, and the state parameters of the scene elements in the generated virtual scene, so that the acquired state parameters of the scene elements are richer, and the virtual scene generated based on the state parameters is richer. In addition, because the state parameters of the scene elements acquired by the automatic driving vehicle and the non-automatic driving vehicle in the historical driving process are the state parameters of the scene elements existing in the real environment, the virtual scene generated based on the state parameters of the scene elements is more consistent with the real environment, and the effect of carrying out simulation test based on the virtual scene can be ensured.
In a possible implementation manner, the generating, by the terminal, the virtual scene based on the scene element and the state parameter of the scene element includes: the terminal generates a candidate virtual scene corresponding to the target time period based on the state parameters of the scene elements at each moment in the target time period; the terminal intercepts scene segments which accord with scene intercepting conditions from the alternative virtual scene; and the terminal acquires a virtual scene based on the scene segment. Optionally, the target time period is an arbitrary time period. Optionally, the implementation manner of the terminal acquiring the virtual scene based on the scene segment is as follows: and the terminal determines the intercepted scene segment as a virtual scene.
The virtual scene corresponds to a time period in which state parameters of scene elements in the virtual scene may change at different times, for example, a speed of a vehicle in the virtual scene is 50 km/h at a previous time, and the speed of the vehicle is 60 km/h at a current time. As another example, at a previous time, another vehicle in the virtual scene and the autonomous vehicle respectively travel in two lanes of the dual lane, while at a current time, the other vehicle merges from the front of the autonomous vehicle into the lane in which the autonomous vehicle is located. Since the state parameter of the scene element acquired by the terminal may be a state parameter corresponding to each time in a longer time period, the generated candidate virtual scene may correspond to the longer time period, and the virtual scene required by the simulation test may be a certain scene segment in the candidate virtual scene, so that the required scene segment needs to be intercepted from the candidate virtual scene. And the scene intercepting condition is used for intercepting the required scene segment from the alternative virtual scene.
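The interception step can be illustrated with an assumed sketch (the frame structure and the gap-based condition are invented for illustration): the candidate virtual scene is treated as a sequence of frames over the target time period, and the segments in which a scene interception condition holds are cut out for use as virtual scenes.

# one frame per moment in the target time period; "gap_m" is the distance to another vehicle
candidate_scene = [{"t": t, "gap_m": gap} for t, gap in enumerate([40, 35, 28, 12, 8, 9, 25, 41])]

def intercept_segments(frames: list[dict], condition) -> list[list[dict]]:
    # return the maximal consecutive runs of frames for which the interception condition holds
    segments, current = [], []
    for frame in frames:
        if condition(frame):
            current.append(frame)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

close_approach = intercept_segments(candidate_scene, lambda f: f["gap_m"] < 15)
# -> one segment covering t = 3..5, which can then be kept as a virtual scene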
In a possible implementation manner, before the terminal intercepts, from the candidate virtual scene, a scene segment that meets the scene interception condition, the terminal first acquires the scene interception condition in the following manner: the terminal displays driving safety information, where the driving safety information includes driving conditions that the autonomous vehicle needs to meet in a target scene; the terminal then acquires the scene interception condition input based on the target scene. Optionally, the driving safety information includes driving-related law and regulation information, functional safety information, expected functional safety information, functional specification information, and the like. The target scene is any scene, which is not limited in the embodiment of the present application.
In the embodiment of the application, after the terminal displays the driving safety information, the user can analyze it to determine the driving conditions that the autonomous vehicle needs to meet in the target scene, and design a scene interception condition for the target scene accordingly, so that the terminal can intercept, from the candidate virtual scene, the scene segments corresponding to the target scene according to the scene interception condition. A virtual scene is subsequently obtained based on these scene segments, and the autonomous vehicle is simulation-tested based on the virtual scene, which helps ensure that the autonomous vehicle complies with the driving conditions specified in the driving safety information.
In a possible implementation manner, the acquiring, by the terminal, the virtual scene based on the scene segment includes: the terminal displays the intercepted scene segments in the virtual scene display interface; the terminal responds to the selection operation of any scene segment, and determines the scene segment as the virtual scene.
In the embodiment of the application, it is considered that the scene interception condition may contain errors, so that a scene segment intercepted according to it may not be the scene segment required by the simulation test. The intercepted scene segments are therefore displayed so that the user can check them and screen out the scene segments actually required by the simulation test, which ensures the accuracy of the obtained virtual scenes.
It should be noted that the above-provided method for acquiring a virtual scene is only an exemplary illustration, and a virtual scene can also be acquired in other ways, which is not limited in this embodiment of the application. Optionally, the server obtains a plurality of virtual scenes in any manner described above, and the terminal obtains the plurality of virtual scenes from the server.
302. The terminal acquires scene labels corresponding to the plurality of virtual scenes respectively.
Each virtual scene has a corresponding scene tag, which is used to distinguish different virtual scenes. In a possible implementation manner, the acquiring, by a terminal, scene tags corresponding to a plurality of virtual scenes respectively includes: for any virtual scene, the terminal acquires scene tags of multiple dimensions of the virtual scene.
Optionally, the terminal acquires a scene tag of the virtual scene in any dimension. For example, the scene tags of multiple dimensions acquired by the terminal include a scene tag of the driving safety information dimension, which indicates for which kind of driving safety information the virtual scene is designed. Assuming that the driving safety information includes law and regulation information, functional safety information, expected functional safety information, and functional specification information, then, optionally, the scene tag of the driving safety information dimension acquired by the terminal for the virtual scene is law and regulation information, functional safety information, expected functional safety information, or functional specification information. As another example, the scene tags of multiple dimensions acquired by the terminal include a scene tag of the weather dimension; optionally, the scene tag of the weather dimension acquired for the virtual scene is rainy day, foggy day, snowy day, or sunny day. As another example, the scene tags of multiple dimensions include a scene tag of the road dimension; optionally, the scene tag of the road dimension acquired for the virtual scene is national road, provincial road, county road, special road, and the like. Certainly, the terminal can also acquire scene tags of the virtual scene in other dimensions, which is not limited in the embodiment of the present application.
Optionally, the terminal obtains the scene labels corresponding to the plurality of virtual scenes in the following manner: the terminal displays a scene label setting interface and acquires, through the scene label setting interface, the scene label input for each virtual scene. In this way, the user can customize the scene labels of the virtual scenes.
303. The terminal classifies the virtual scenes based on the scene labels respectively corresponding to the virtual scenes to obtain various categories, and acquires the category label input for each category of virtual scenes.
In one possible implementation, the virtual scenes have scene labels of multiple dimensions. Correspondingly, this step includes: the terminal classifies the plurality of virtual scenes based on the scene labels of a first dimension of the plurality of virtual scenes to obtain a plurality of categories in the first dimension, and acquires a first category label input for each category of virtual scenes, where the first dimension is any one of the plurality of dimensions.
Optionally, the terminal classifies the plurality of virtual scenes based on the scene labels of the first dimension in the following manner: the terminal puts into the same category the virtual scenes whose scene labels of the first dimension contain the same characters. For example, if the scene label of the first dimension is the scene label of the driving safety information dimension, the terminal determines the virtual scenes whose scene labels include "law and regulation information" as one category, the virtual scenes whose scene labels include "functional safety information" as one category, the virtual scenes whose scene labels include "expected functional safety information" as one category, and the virtual scenes whose scene labels include "functional specification information" as one category. Optionally, the first category label input for each category of virtual scenes is an arbitrary label. For example, the category label input for the virtual scenes whose scene labels include "law and regulation information" is "law and regulation information" or "scenes related to law and regulation information", and the like, which is not limited in the embodiment of the present application.
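As a minimal, non-authoritative sketch of this classification step, assuming each virtual scene carries a free-text scene label per dimension, scenes whose label for the chosen dimension contains the same key phrase can be grouped into one category, and the key phrase then doubles as a default first category label. The scene records, dimension names, and key phrases below are illustrative assumptions, not data from the embodiment.

```python
from collections import defaultdict
from typing import Dict, List

# Hypothetical scene records: scene id -> {dimension: free-text scene label}.
scenes: Dict[str, Dict[str, str]] = {
    "scene_001": {"safety": "law and regulation information, overtaking"},
    "scene_002": {"safety": "functional safety information, braking"},
    "scene_003": {"safety": "law and regulation information, lane keeping"},
}

def classify_by_dimension(scenes: Dict[str, Dict[str, str]],
                          dimension: str,
                          key_phrases: List[str]) -> Dict[str, List[str]]:
    """Group scene ids whose label for `dimension` contains the same key phrase;
    the key phrase is reused as a default first category label."""
    categories = defaultdict(list)
    for scene_id, tags in scenes.items():
        for phrase in key_phrases:
            if phrase in tags.get(dimension, ""):
                categories[phrase].append(scene_id)
    return dict(categories)

print(classify_by_dimension(scenes, "safety",
                            ["law and regulation information",
                             "functional safety information"]))
# {'law and regulation information': ['scene_001', 'scene_003'],
#  'functional safety information': ['scene_002']}
```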
In the embodiment of the application, the virtual scene has the scene labels with multiple dimensions, so that the terminal can classify the virtual scene according to the scene labels with any dimension, and the classification mode is more flexible.
In a possible implementation manner, after the terminal classifies the plurality of virtual scenes based on the scene labels of the first dimension to obtain the plurality of categories in the first dimension and acquires the first category label input for each category of virtual scenes, the method further includes: for a plurality of virtual scenes having the same first category label, the terminal classifies these virtual scenes based on their scene labels of a second dimension to obtain a plurality of categories in the second dimension, and acquires a second category label input for each category of virtual scenes, where the second dimension is any one of the plurality of dimensions other than the first dimension. The manner in which the terminal classifies, for the virtual scenes having the same first category label, these virtual scenes based on their scene labels of the second dimension to obtain the categories in the second dimension is the same as the manner in which the terminal classifies the plurality of virtual scenes based on their scene labels of the first dimension to obtain the categories in the first dimension.
In the embodiment of the application, after the virtual scenes are classified according to the scene labels of the first dimension, the virtual scenes of each category can be further classified according to the scene labels of the second dimension, so that the virtual scenes are classified in multiple dimensions, the category labels of the multiple dimensions are provided for the virtual scenes, and the virtual scenes required by simulation testing can be screened out accurately.
Optionally, after the terminal classifies the plurality of virtual scenes based on scene tags of a second dimension of the plurality of virtual scenes to obtain a plurality of categories in the second dimension and obtains a second category tag input for each category of virtual scenes, the method further includes: for a plurality of virtual scenes with the same second class label, the terminal classifies the plurality of virtual scenes based on the third-dimension scene label of the plurality of virtual scenes to obtain a plurality of classes on the third dimension, and obtains the third class label input for each class of virtual scene; wherein the third dimension is any one of the plurality of dimensions except the first dimension and the second dimension. And repeating the steps until the terminal finishes classification according to the scene label of each dimension of the virtual scene.
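Extending the previous sketch (still purely illustrative), the dimension-by-dimension classification described above can be expressed as a recursion: within every category obtained for one dimension, the scenes are re-grouped by the label of the next dimension until all dimensions have been used, yielding a nested tree of category labels with scene lists at the leaves. Reusing the label text itself as the category label of each level is an assumption of the example.

```python
from collections import defaultdict
from typing import Dict, List, Union

Tags = Dict[str, str]
Tree = Union[List[str], Dict[str, "Tree"]]

def classify_recursive(scenes: Dict[str, Tags], dimensions: List[str]) -> Tree:
    """Nested category tree: one level per dimension, leaves are scene-id lists."""
    if not dimensions:
        return sorted(scenes)
    head, rest = dimensions[0], dimensions[1:]
    groups: Dict[str, Dict[str, Tags]] = defaultdict(dict)
    for scene_id, tags in scenes.items():
        groups[tags.get(head, "unlabeled")][scene_id] = tags
    return {label: classify_recursive(members, rest)
            for label, members in groups.items()}

scenes = {
    "scene_001": {"safety": "law and regulation information", "weather": "rainy day"},
    "scene_002": {"safety": "law and regulation information", "weather": "sunny day"},
    "scene_003": {"safety": "functional safety information", "weather": "rainy day"},
}
print(classify_recursive(scenes, ["safety", "weather"]))
```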
It should be noted that, when the virtual scenes have scene labels of only one dimension, the manner in which the terminal classifies the virtual scenes according to the scene labels of that dimension and acquires the category label of each category of virtual scenes is the same as the manner in which, when the virtual scenes have scene labels of multiple dimensions, the terminal classifies the virtual scenes according to the scene labels of the first dimension and acquires the first category label of each category of virtual scenes, and is not described herein again.
304. The terminal displays a plurality of category labels in a label display interface, and each category label corresponds to a virtual scene belonging to the same category.
After the terminal displays the category label in the label display interface, the user can search the virtual scene required by the simulation test by means of the category label, so that the efficiency of the simulation test can be improved.
In one possible implementation manner, the displaying, by the terminal, of the plurality of category labels in the label display interface includes: the terminal displays a plurality of first category labels of the first dimension in the label display interface; and in response to a trigger operation on the pull-down control corresponding to any first category label, the terminal displays, in the label display interface, a plurality of second category labels of the second dimension under that first category label.
Optionally, a plurality of first category labels of a first dimension in the label display interface are arranged and displayed in an arbitrary manner, and a plurality of second category labels of a second dimension under the first category labels are arranged and displayed in an arbitrary manner. For example, the first category label and the second category label are displayed in a vertical row, or in a horizontal row. Optionally, the pull-down control corresponding to the first category label is displayed around the first category label, for example, on the left side of the first category label.
Optionally, in response to a trigger operation on the pull-down control corresponding to any second category label, the terminal displays, in the label display interface, a plurality of third category labels of the third dimension under that second category label, and so on, until no category label of another dimension exists under a category label of a certain dimension, in which case that category label has no corresponding pull-down control. The pull-down control corresponding to any category label is used to indicate to the user that category labels of another dimension exist under that category label.
In the embodiment of the application, the plurality of first category labels of the first dimension are displayed first in the label display interface, and the plurality of second category labels of the second dimension under a first category label are displayed only when the user triggers the pull-down control corresponding to that first category label. This avoids displaying second category labels under first category labels that the user does not need to check, reduces the interference of redundant category labels, and helps the user quickly screen out the category label corresponding to the virtual scene required by the simulation test. Moreover, by displaying the category labels of multiple dimensions step by step in the label display interface, the user can, even when the number of virtual scenes is huge, narrow down the range of selectable virtual scenes step by step through the category labels of multiple dimensions, and thus accurately screen out the category labels corresponding to the virtual scenes required by the simulation test.
Fig. 4 is a schematic diagram of a label display interface. Referring to fig. 4, a scene directory is shown on the left side of fig. 4 and includes a plurality of category labels, namely an "automatic driving function specification scene", a "regulation scene", a "safety scene", an "operable design domain scene", and an "accident scene". A pull-down control, the triangular control in the figure, is arranged on the left side of each category label; when the pull-down control is triggered, the terminal displays at least one category label of another dimension below the corresponding category label. The right side of fig. 4 is a pie chart corresponding to the scene database, which represents the proportion of the number of virtual scenes of each category to the total number of virtual scenes in the scene database.
305. And the terminal acquires the category label selected from the label display interface.
After the terminal displays the plurality of category labels in the label display interface, a user can select the category label corresponding to the virtual scene required by the simulation test, and correspondingly, the terminal obtains the category label selected from the label display interface.
Optionally, the user selects at least one first-class label from the label display interface, and correspondingly, the terminal acquires the at least one first-class label. Optionally, the user selects at least one second category label from the label display interface, and correspondingly, the terminal obtains the at least one second category label. Optionally, the user selects at least one first category label and at least one second category label from the label display interface, and correspondingly, the terminal obtains the at least one first category label and the at least one second category label. Certainly, under the condition that the tag display interface includes category tags of other dimensions, the user can also select the category tags of other dimensions from the tag display interface, and then the terminal can also obtain the category tags of other dimensions selected by the user.
306. And the terminal operates the virtual scene corresponding to the selected category label.
The terminal runs the virtual scene corresponding to the selected category label, that is, the terminal performs an automatic driving simulation test based on the virtual scene corresponding to the selected category label.
In a possible implementation manner, the tag display interface includes a plurality of category tags with different dimensions, and the obtaining of the category tag selected from the tag display interface includes: the terminal obtains a plurality of category labels with different dimensions selected from the label display interface. For example, the tag presentation interface includes a category tag for a driving safety information dimension, a category tag for a weather dimension, and a category tag for a road dimension. The user selects a category label of a driving safety information dimension, namely 'law and regulation information', a category label of a weather dimension, namely 'rainy day', and a category label of a road dimension, namely 'special road', from a label display interface, and the category labels of a plurality of different dimensions acquired by the terminal are 'law and regulation information', 'rainy day' and 'special road'.
The terminal operates the virtual scene corresponding to the selected category label, and the method comprises the following steps: and the terminal operates the virtual scene corresponding to the selected multiple category labels. In connection with the above example, the terminal runs a virtual scene corresponding to "law and regulatory information", "rainy day", and "special road".
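A hedged sketch of this selection step follows: given a mapping from each scene to its category label in every dimension, the virtual scenes to run are those whose labels match the user's selection in every dimension chosen on the label display interface. The mapping, dimension names, and label values below are placeholders rather than data from the embodiment.

```python
from typing import Dict, List

# Hypothetical mapping: scene id -> {dimension: category label}.
scene_labels: Dict[str, Dict[str, str]] = {
    "scene_001": {"safety": "law and regulation information",
                  "weather": "rainy day", "road": "special road"},
    "scene_002": {"safety": "law and regulation information",
                  "weather": "sunny day", "road": "national road"},
}

def select_scenes(scene_labels: Dict[str, Dict[str, str]],
                  selected: Dict[str, str]) -> List[str]:
    """Keep scenes whose label matches the selected category label in every
    dimension the user picked on the label display interface."""
    return [scene_id for scene_id, labels in scene_labels.items()
            if all(labels.get(dim) == label for dim, label in selected.items())]

to_run = select_scenes(scene_labels, {"safety": "law and regulation information",
                                      "weather": "rainy day",
                                      "road": "special road"})
print(to_run)  # ['scene_001']
```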
In the embodiment of the application, the plurality of different-dimension category labels are displayed on the label display interface, so that a user can accurately screen out virtual scenes required by simulation tests from a large number of virtual scenes through the different-dimension category labels.
In a possible implementation manner, the running, by the terminal, of the virtual scene corresponding to the selected category label includes: the terminal acquires the running condition of the virtual scene corresponding to the selected category label, where the running condition is a condition that the behavior data of the scene elements in the virtual scene needs to satisfy during the running of the virtual scene; during the running of the virtual scene, the terminal collects the behavior data of the scene elements, where the behavior data represents the behavior of the scene elements in the virtual scene; and the terminal generates a running result based on the behavior data and the running condition.

When the selected category label corresponds to a plurality of virtual scenes, the terminal acquires the running conditions of each virtual scene, and the running conditions corresponding to the plurality of virtual scenes are the same or different. Each running condition has corresponding behavior data. For example, if the running condition requires that the autonomous vehicle must not collide with an obstacle, the behavior data for this running condition includes the distance between the autonomous vehicle and the obstacle during the running of the virtual scene. For another example, if the running condition requires that the speed of the autonomous vehicle stay within a reference speed range, the behavior data for this running condition includes the speed of the autonomous vehicle during the running of the virtual scene.
In the embodiment of the application, by setting the running condition corresponding to the virtual scene and generating the running result based on the running condition and the behavior data for the running condition, the running result can reflect the relationship between the behavior data and the running condition of the scene element in the virtual scene.
In a possible implementation manner, the acquiring, by the terminal, of the running condition of the virtual scene corresponding to the selected category label includes: the terminal displays a running condition setting interface, and acquires, through the running condition setting interface, the input running condition of the virtual scene.
Optionally, the running condition setting interface includes the at least one virtual scene corresponding to the category label and a running condition input control corresponding to each virtual scene, and the user can enter the running condition of a virtual scene in its running condition input control; correspondingly, the terminal acquires the running condition entered in the running condition input control. Optionally, the running condition setting interface includes a plurality of candidate running conditions, and the user can select the running condition corresponding to each virtual scene from the interface; correspondingly, the terminal acquires, through the running condition setting interface, the running condition selected for each virtual scene. Optionally, the running condition setting interface further includes other information, which is not limited in this embodiment of the application.
In a possible implementation manner, the generating, by the terminal, of the running result based on the behavior data and the running condition includes: the terminal determines that the running result is a pass when the behavior data satisfies the running condition, and determines that the running result is a failure when the behavior data does not satisfy the running condition.

In a possible implementation manner, when the virtual scene has a plurality of running conditions, the terminal determines that the running result is a pass only when the behavior data for each of the running conditions satisfies that running condition, and determines that the running result is a failure when the behavior data for any one of the running conditions does not satisfy that running condition.
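The pass/fail logic above can be illustrated with the following sketch, in which a running condition is modeled as a named predicate over the behavior data collected while the virtual scene runs, and the running result is a pass only when every running condition is satisfied. The concrete representation of running conditions and behavior data is an assumption of the example, not prescribed by the embodiment.

```python
from typing import Callable, Dict, List, Tuple

# A running condition pairs a name with a predicate over the collected behavior
# data (illustrative representation only).
RunCondition = Tuple[str, Callable[[Dict[str, List[float]]], bool]]

def evaluate_run(behavior: Dict[str, List[float]],
                 conditions: List[RunCondition]) -> Dict[str, object]:
    """Running result is a pass only when every running condition is satisfied
    by the behavior data collected while the virtual scene was running."""
    per_condition = {name: check(behavior) for name, check in conditions}
    return {"passed": all(per_condition.values()), "per_condition": per_condition}

conditions: List[RunCondition] = [
    ("no collision",       lambda b: min(b["obstacle_distance_m"]) > 0.5),
    ("speed within range", lambda b: max(b["speed_kmh"]) <= 60.0),
]
behavior = {"obstacle_distance_m": [12.0, 4.2, 1.8], "speed_kmh": [48.0, 55.0, 59.0]}
print(evaluate_run(behavior, conditions))
# {'passed': True, 'per_condition': {'no collision': True, 'speed within range': True}}
```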
In a possible implementation manner, the tag display interface further includes a scene tag of each virtual scene corresponding to any category tag, and the method further includes: the terminal acquires a scene label selected from the label display interface; and operating the virtual scene corresponding to the selected scene label. Therefore, the user can not only screen the virtual scene required by the simulation test according to the category label, but also screen the virtual scene required by the simulation test by combining the scene label of the virtual scene, so that the screening precision of the virtual scene can be further improved.
307. And displaying an operation result display interface by the terminal.
In a possible implementation manner, the operation result display interface includes at least one of the number of virtual scenes that have passed the operation, the number of virtual scenes that have not passed the operation, the operation condition that is satisfied by the behavior data corresponding to each virtual scene that has been run, the operation condition that is not satisfied by the behavior data corresponding to each virtual scene that has been run, the operation passing rate of any kind of virtual scenes, the operation passing rate of any operation condition, or the ratio of the number of virtual scenes that have been run to the total number of virtual scenes.
The terminal runs the selected virtual scenes, and after the running results are generated, further statistics can be performed on them. For example, the coverage rate of the run virtual scenes, that is, the ratio of the number of run virtual scenes to the total number of virtual scenes, is determined from the total number of virtual scenes in the scene database and the number of virtual scenes that were run. For another example, the running pass rate of the virtual scenes of any category, that is, the ratio of the number of virtual scenes of that category that passed to the number of virtual scenes of that category that were run, is counted. For another example, the running pass rate of any running condition, that is, the ratio of the number of run virtual scenes corresponding to that running condition that passed to the number of run virtual scenes corresponding to that running condition, is counted. In fact, the terminal can perform statistics from various angles based on the running results to obtain various statistical results, so as to help the user better understand the running of the virtual scenes, which is not limited in the embodiment of the present application.
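As an illustrative sketch of these statistics (not part of the original disclosure), the coverage rate, the per-category running pass rate, and the per-running-condition pass rate can be computed from per-scene, per-condition results roughly as follows; the data layout and names are assumed for the example.

```python
from typing import Dict, List

def run_statistics(results: Dict[str, Dict[str, bool]],
                   category_of: Dict[str, str],
                   total_scene_count: int) -> Dict[str, object]:
    """Aggregate run results.
    results: scene id -> {running condition name: satisfied?}
    category_of: scene id -> category label of that scene (illustrative)."""
    ran = len(results)
    passed = {sid: all(conds.values()) for sid, conds in results.items()}

    per_category: Dict[str, List[bool]] = {}
    for sid, ok in passed.items():
        per_category.setdefault(category_of[sid], []).append(ok)

    per_condition: Dict[str, List[bool]] = {}
    for conds in results.values():
        for name, ok in conds.items():
            per_condition.setdefault(name, []).append(ok)

    rate = lambda flags: sum(flags) / len(flags)
    return {
        "coverage": ran / total_scene_count,  # run scenes / total scenes
        "category_pass_rate": {c: rate(v) for c, v in per_category.items()},
        "condition_pass_rate": {c: rate(v) for c, v in per_condition.items()},
    }

results = {
    "straight-cut scene_001": {"collision": True, "maximum speed error": True},
    "straight-cut scene_005": {"collision": False, "maximum speed error": True},
}
print(run_statistics(results, {"straight-cut scene_001": "cut-in",
                               "straight-cut scene_005": "cut-in"}, 100))
```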
Optionally, the terminal displays the various statistical data on the operation result display interface in the form of charts or tables; optionally, the charts include fan charts, bar charts, pie charts, line charts, and the like.
In the embodiment of the application, after the virtual scene is operated, various statistical data are displayed in the operation result display interface, so that the operation condition of the virtual scene can be conveniently and intuitively known.
It should be noted that, after the terminal runs the virtual scene and generates the running result according to the running condition input by the user, the user can adjust the running condition according to the running result to make the running condition more reasonable, and then the terminal generates the running result according to the adjusted running condition. Thus, problems due to unreasonable setting of the operating conditions can be avoided.
Fig. 5 is a schematic diagram of a running result display interface. Referring to fig. 5, the total number of run virtual scenes is 5, including "straight-cut scene _ 001", "straight-cut scene _ 002", "straight-cut scene _ 003", "straight-cut scene _ 004", and "straight-cut scene _ 005". The running results of the first four virtual scenes are success, indicating that these four virtual scenes passed, while the running result of the fifth virtual scene is failure, indicating that it did not pass. Each virtual scene corresponds to 5 running conditions: maximum speed error, maximum deceleration error, maximum acceleration error, collision, and maximum deceleration change rate. The result for every running condition of the first 4 virtual scenes is success, that is, a pass, whereas the result for the running condition "collision" of the fifth virtual scene is failure, indicating that the behavior data of the fifth virtual scene does not satisfy the running condition "collision". The running pass rate of each running condition is indicated below it: the pass rates of "maximum speed error", "maximum deceleration error", "maximum acceleration error", and "maximum deceleration change rate" are all 100%, and the pass rate of "collision" is 80%. To the right of the running conditions is a detailed information interface for the running status of the virtual scenes, in which a plurality of information viewing controls are used to trigger the display of data generated during the running of the corresponding virtual scene; the user triggers the corresponding information viewing control to view the corresponding simulation data.
Fig. 6 is a schematic diagram of a process of processing a virtual scene. Referring to fig. 6, first, scene requirement analysis is performed according to the driving safety information, that is, the plurality of virtual scenes that need to be created is determined according to each kind of driving safety information. Then, systematic scene identification is performed, that is, the virtual scenes that need to be created are screened and sorted, and the virtual scenes that finally need to be created are determined, that is, the scene elements in each virtual scene that finally needs to be created are determined. Then, the data space of the scene is sampled: a reference value range and a sampling interval are determined, values are selected from the reference value range at the sampling interval, and the selected values are determined as the state parameters of the scene elements. A virtual scene is then generated according to the scene elements and their corresponding state parameters. In addition, a candidate virtual scene can also be generated from the scene elements and corresponding state parameters included in the various kinds of collected data, the required scene segments are intercepted from the candidate virtual scene according to the scene interception condition, the intercepted scene segments are then checked, the accurate scene segments are selected, and the selected scene segments are determined as virtual scenes. Then, running conditions are determined for the acquired virtual scenes, and the running conditions and the virtual scenes are stored in the scene database. The virtual scenes in the scene database are classified to obtain virtual scenes of various categories, and a category label is set for each category of virtual scenes. When a simulation test is performed, the virtual scenes for testing are retrieved from the scene database based on the category labels, and the running condition of each virtual scene is selected. A simulation test is then performed based on the selected virtual scenes, and a test result is generated according to the behavior data collected during the simulation test and the running conditions. The test results can then be statistically analyzed, and automatic driving risk assessment can be performed according to the results of the statistical analysis.
It should be noted that the present application provides various methods for acquiring virtual scenes, including generating virtual scenes based on a large amount of drive test data, thereby enriching the scene database. The method provided by the embodiment of the application can provide comprehensive and feasible guidance for establishing a complete simulation scene database, provide a systematic framework and classification system for long-term comprehensive scene coverage, and provide data statistics and analysis functions for the test results of large-scale automatic driving simulation tests.
In the embodiment of the application, the virtual scenes for automatic driving test are classified to obtain the category labels corresponding to the virtual scenes of each category, so that the virtual scenes required by the simulation test can be rapidly screened out by using the category labels when the simulation test is performed, and the screened virtual scenes are operated, therefore, under the condition of large number of the virtual scenes, the efficiency of acquiring the virtual scenes required by the simulation test can be improved, and the efficiency of the simulation test can be improved.
Fig. 7 is a block diagram of a processing apparatus for a virtual scene according to an embodiment of the present application. Referring to fig. 7, the apparatus includes:
the label display module 701 is configured to display a plurality of category labels in a label display interface, wherein the category labels are obtained by classifying a plurality of virtual scenes for performing an automatic driving test, and each category label corresponds to a virtual scene belonging to the same category;
a tag obtaining module 702 configured to obtain a category tag selected from the tag display interface;
and a scene running module 703 configured to run the virtual scene corresponding to the selected category label.
In one possible implementation, the apparatus further includes:
an element determination module configured to determine scene elements for constituting a virtual scene;
a parameter determination module configured to determine a state parameter of a scene element;
a scene generation module configured to generate a virtual scene based on the scene elements and the state parameters of the scene elements.
In one possible implementation, the element determination module is configured to present driving conditions to be met by the autonomous vehicle in the virtual scene, the driving conditions including scene elements related to the autonomous vehicle;
scene elements input based on driving conditions are acquired.
In one possible implementation manner, the parameter determining module is configured to select a value from the reference value range at sampling intervals and determine the value as the current state parameter of the scene element;
and the scene generation module is configured to generate a virtual scene based on the scene elements and the current state parameters of the scene elements.
In one possible implementation, the element determination module is configured to obtain at least one of scene elements collected by an autonomous vehicle during historical driving, scene elements collected by a non-autonomous vehicle during historical driving, or scene elements in a virtual scene stored in a scene database.
In one possible implementation, the parameter determination module is configured to obtain at least one of a state parameter of a scene element acquired by an autonomous vehicle during historical driving, a state parameter of a scene element acquired by a non-autonomous vehicle during historical driving, or a state parameter of a scene element in a virtual scene acquired during running of the virtual scene in the scene database.
In one possible implementation, the scene generation module includes:
the scene generating unit is configured to generate a candidate virtual scene corresponding to the target time period based on the state parameters of the scene elements at each moment in the target time period;
a segment intercepting unit configured to intercept scene segments meeting scene intercepting conditions from the candidate virtual scene;
a scene acquisition unit configured to acquire a virtual scene based on the scene segments.
In a possible implementation manner, the scene generation module further includes:
the intercepting condition acquisition unit is configured to display driving safety information, and the driving safety information comprises driving conditions which need to be met by the automatic driving vehicle in a target scene; and acquiring a scene interception condition input based on the target scene.
In a possible implementation manner, the scene obtaining unit is configured to display the intercepted scene segments in the virtual scene display interface;
in response to a selection operation for any of the scene segments, the scene segment is determined as a virtual scene.
In one possible implementation, the apparatus further includes:
a scene tag obtaining module 702 configured to obtain scene tags corresponding to a plurality of virtual scenes, respectively;
the category label obtaining module 702 is configured to classify the plurality of virtual scenes based on the scene labels respectively corresponding to the plurality of virtual scenes to obtain a plurality of categories, and obtain a category label input for each category of virtual scenes.
In one possible implementation, the scene tag obtaining module 702 is configured to obtain, for any virtual scene, scene tags of multiple dimensions of the virtual scene;
a category label obtaining module 702, configured to classify the multiple virtual scenes based on scene labels of a first dimension of the multiple virtual scenes, to obtain multiple categories in the first dimension, and to obtain a first category label input for each category of virtual scenes; wherein the first dimension is any one of a plurality of dimensions.
In a possible implementation manner, the category label obtaining module 702 is further configured to, for a plurality of virtual scenes with the same first category label, classify the plurality of virtual scenes based on a second-dimensional scene label of the plurality of virtual scenes to obtain a plurality of categories in the second dimension, and obtain a second category label input for each category of virtual scene; the second dimension is any one of the dimensions except the first dimension.
In a possible implementation manner, the tag display interface includes a plurality of category tags with different dimensions, and the tag obtaining module 702 is configured to obtain the plurality of category tags with different dimensions selected from the tag display interface;
a scene running module 703 configured to run virtual scenes corresponding to the selected plurality of category labels.
In one possible implementation manner, the label display module 701 is configured to display a plurality of first category labels of a first dimension in a label display interface; and responding to the triggering operation of the pull-down control corresponding to any first category label, and displaying a plurality of second category labels of a second dimension under the first category labels in a label display interface.
In a possible implementation manner, the tag display interface further includes a scene tag of each virtual scene corresponding to any category tag, and the scene operation module 703 is further configured to obtain a scene tag selected from the tag display interface; and operating the virtual scene corresponding to the selected scene label.
In one possible implementation, the scene running module 703 includes:
the operation condition acquisition unit is configured to acquire an operation condition of the virtual scene corresponding to the selected category label, wherein the operation condition is a condition which needs to be met by behavior data of scene elements in the virtual scene in the operation process of the virtual scene;
the behavior data acquisition unit is configured to acquire behavior data of scene elements in the process of running the virtual scene, wherein the behavior data represents behaviors of the scene elements in the virtual scene;
and the operation result generation unit is configured to generate an operation result based on the behavior data and the operation condition.
In one possible implementation, the operation result generation unit includes:
a first determining subunit configured to determine that the operation result is a pass operation if the behavior data satisfies the operation condition;
and the second determining subunit is configured to determine the operation result as not-passed-operation in the case that the behavior data does not satisfy the operation condition.
In a possible implementation manner, the first determining subunit is configured to determine that the running result passes when the running condition of the virtual scene is multiple and the behavior data for the multiple running conditions respectively satisfy the multiple running conditions;
and the second determining subunit is configured to determine that the operation result is not passed in the case that the operation conditions of the virtual scene are multiple and the behavior data for any one of the operation conditions does not satisfy any one of the operation conditions.
In one possible implementation, the apparatus further includes:
the interface display module is configured to display a running result display interface, and the running result display interface comprises at least one of the number of virtual scenes which run through, the number of virtual scenes which do not run through, running conditions which are met by behavior data corresponding to the virtual scenes for each running virtual scene, running conditions which are not met by the behavior data corresponding to the virtual scenes for each running virtual scene, running passing rates of virtual scenes of any category, running passing rates of any running conditions, or a ratio of the number of running virtual scenes to the total number of virtual scenes.
In one possible implementation manner, the operation condition acquisition unit is configured to display an operation condition setting interface and acquire, through the operation condition setting interface, the input operation conditions of the virtual scene.
In the embodiment of the application, the virtual scenes for automatic driving test are classified to obtain the category labels corresponding to the virtual scenes of each category, so that the virtual scenes required by the simulation test can be rapidly screened out by using the category labels when the simulation test is performed, and the screened virtual scenes are operated, therefore, under the condition of large number of the virtual scenes, the efficiency of acquiring the virtual scenes required by the simulation test can be improved, and the efficiency of the simulation test can be improved.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
It should be noted that: in the processing apparatus for virtual scenes provided in the above embodiments, only the division of the functional modules is illustrated when processing the virtual scenes, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the computer device may be divided into different functional modules to complete all or part of the above described functions. In addition, the processing apparatus of a virtual scene and the processing method of a virtual scene provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments and are not described herein again.
The embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one program code, and the at least one program code is loaded and executed by the processor, so as to implement the operations executed in the processing method of the virtual scene in the foregoing embodiment.
Optionally, the computer device is provided as a terminal. Fig. 8 shows a block diagram of a terminal 800 according to an exemplary embodiment of the present application. The terminal 800 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, etc.
The terminal 800 includes: a processor 801 and a memory 802.
The processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 801 may be integrated with a GPU (Graphics Processing Unit) which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 802 is used to store at least one program code for execution by the processor 801 to implement the processing method of the virtual scene provided by the method embodiments herein.
In some embodiments, the terminal 800 may further include: a peripheral interface 803 and at least one peripheral. The processor 801, memory 802 and peripheral interface 803 may be connected by bus or signal lines. Various peripheral devices may be connected to peripheral interface 803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 804, a display screen 805, a camera assembly 806, an audio circuit 807, a positioning assembly 808, and a power supply 809.
The peripheral interface 803 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 801 and the memory 802. In some embodiments, the processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 804 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 804 converts an electrical signal into an electromagnetic signal to be transmitted, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to capture touch signals on or above the surface of the display 805. The touch signal may be input to the processor 801 as a control signal for processing. At this point, the display 805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 805 may be one, providing the front panel of the terminal 800; in other embodiments, the display 805 may be at least two, respectively disposed on different surfaces of the terminal 800 or in a folded design; in other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 800. Even further, the display 805 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The Display 805 can be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and other materials.
The camera assembly 806 is used to capture images or video. Optionally, camera assembly 806 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 801 for processing or inputting the electric signals to the radio frequency circuit 804 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 800. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 807 may also include a headphone jack.
The positioning component 808 is used to locate the current geographic position of the terminal 800 for navigation or LBS (Location Based Service). The positioning component 808 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
Power supply 809 is used to provide power to various components in terminal 800. The power supply 809 can be ac, dc, disposable or rechargeable. When the power source 809 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyro sensor 812, pressure sensor 813, fingerprint sensor 814, optical sensor 815 and proximity sensor 816.
The acceleration sensor 811 may detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 800. For example, the acceleration sensor 811 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 801 may control the display 805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 811. The acceleration sensor 811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 812 may detect a body direction and a rotation angle of the terminal 800, and the gyro sensor 812 may cooperate with the acceleration sensor 811 to acquire a 3D motion of the user with respect to the terminal 800. From the data collected by the gyro sensor 812, the processor 801 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 813 may be disposed on the side frames of terminal 800 and/or underneath display 805. When the pressure sensor 813 is disposed on the side frame of the terminal 800, the holding signal of the user to the terminal 800 can be detected, and the processor 801 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at a lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 814 is used for collecting a fingerprint of the user, and the processor 801 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 814, or the fingerprint sensor 814 identifies the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, the processor 801 authorizes the user to perform relevant sensitive operations including unlocking a screen, viewing encrypted information, downloading software, paying for and changing settings, etc. Fingerprint sensor 814 may be disposed on the front, back, or side of terminal 800. When a physical button or a vendor Logo is provided on the terminal 800, the fingerprint sensor 814 may be integrated with the physical button or the vendor Logo.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, processor 801 may control the display brightness of display 805 based on the ambient light intensity collected by optical sensor 815. Specifically, when the ambient light intensity is high, the display brightness of the display screen 805 is increased; when the ambient light intensity is low, the display brightness of the display 805 is reduced. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera assembly 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also called a distance sensor, is provided on the front panel of the terminal 800. The proximity sensor 816 is used to collect the distance between the user and the front surface of the terminal 800. In one embodiment, when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually decreases, the processor 801 controls the display 805 to switch from the bright-screen state to the off-screen state; when the proximity sensor 816 detects that the distance between the user and the front surface of the terminal 800 gradually increases, the processor 801 controls the display 805 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting of terminal 800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Optionally, the computer device is provided as a server. Fig. 9 is a schematic structural diagram of a server provided in this embodiment of the present application, where the server 900 may generate relatively large differences due to different configurations or performances, and may include one or more processors (CPUs) 901 and one or more memories 902, where the memory 902 stores at least one program code, and the at least one program code is loaded and executed by the processors 901 to implement the processing method of the virtual scene provided in the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium, where at least one program code is stored in the computer-readable storage medium, and the at least one program code is loaded and executed by a processor, so as to implement the operations executed in the processing method of a virtual scene in the foregoing embodiment.
An embodiment of the present application further provides a computer program, where the computer program includes at least one program code, and the at least one program code is loaded and executed by a processor to implement the operations performed in the processing method of the virtual scene in the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (23)

1. A method for processing a virtual scene, the method comprising:
displaying a plurality of category labels in a label display interface, wherein the category labels are obtained by classifying a plurality of virtual scenes for carrying out automatic driving tests, and each category label corresponds to a virtual scene belonging to the same category;
acquiring a category label selected from the label display interface;
and running the virtual scene corresponding to the selected category label.
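By way of a non-limiting illustration of the steps recited in claim 1, the following Python sketch indexes virtual scenes by category label and runs the scenes for a selected label; the data layout, field names, and the stubbed run step are hypothetical assumptions, not the application's actual implementation.

```python
# Illustrative sketch of claim 1 (hypothetical names; not the claimed implementation).
from collections import defaultdict
from typing import Dict, List

def build_label_index(scenes: List[dict]) -> Dict[str, List[dict]]:
    """Group virtual scenes by their category label for display in a label interface."""
    index = defaultdict(list)
    for scene in scenes:
        index[scene["category_label"]].append(scene)
    return dict(index)

def run_scenes_for_label(index: Dict[str, List[dict]], selected_label: str) -> List[str]:
    """Run every virtual scene associated with the selected category label."""
    results = []
    for scene in index.get(selected_label, []):
        # A real system would hand the scene to the simulation engine; stubbed here.
        results.append(f"ran scene {scene['id']}")
    return results

if __name__ == "__main__":
    scenes = [
        {"id": "s1", "category_label": "cut-in"},
        {"id": "s2", "category_label": "cut-in"},
        {"id": "s3", "category_label": "rainy-night"},
    ]
    index = build_label_index(scenes)
    print(sorted(index))                          # category labels shown in the interface
    print(run_scenes_for_label(index, "cut-in"))  # scenes run for the selected label
```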
2. The method of claim 1, wherein prior to displaying the plurality of category labels in the label display interface, the method further comprises:
determining scene elements for constituting a virtual scene;
determining a state parameter of the scene element;
generating the virtual scene based on the scene elements and the state parameters of the scene elements.
3. The method of claim 2, wherein determining scene elements for composing the virtual scene comprises:
displaying driving conditions to be met by an autonomous vehicle in the virtual scene, wherein the driving conditions comprise scene elements related to the autonomous vehicle;
acquiring the scene element input based on the driving condition.
4. The method of claim 2, wherein determining the state parameter of the scene element comprises:
selecting a numerical value from a reference numerical value range at every sampling interval and determining the numerical value as the current state parameter of the scene element;
the generating the virtual scene based on the scene element and the state parameter of the scene element includes:
and generating the virtual scene based on the scene element and the current state parameter of the scene element.
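A minimal sketch of the sampling recited in claim 4, assuming a hypothetical speed parameter swept across a reference numerical value range at a fixed sampling interval; the element name, parameter, and scene layout are illustrative assumptions.

```python
# Illustrative sketch of claim 4: sample a state parameter from a reference range
# at a fixed sampling interval and generate one candidate virtual scene per value.
from typing import List

def sample_parameter(low: float, high: float, step: float) -> List[float]:
    """Take one value every `step` across the reference range [low, high]."""
    values, v = [], low
    while v <= high + 1e-9:
        values.append(round(v, 6))
        v += step
    return values

def generate_scenes(element: str, low: float, high: float, step: float) -> List[dict]:
    """Build one scene per sampled value of the element's state parameter."""
    return [{"element": element, "state": {"speed_mps": value}}
            for value in sample_parameter(low, high, step)]

if __name__ == "__main__":
    # e.g. a lead vehicle whose speed is swept from 5 m/s to 20 m/s in 5 m/s steps
    for scene in generate_scenes("lead_vehicle", 5.0, 20.0, 5.0):
        print(scene)
```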
5. The method of claim 2, wherein determining scene elements for composing the virtual scene comprises:
acquiring at least one of: scene elements acquired by an autonomous vehicle in a historical driving process, scene elements acquired by a non-autonomous vehicle in the historical driving process, or scene elements in a virtual scene stored in a scene database.
6. The method of claim 2, wherein determining the state parameter of the scene element comprises:
acquiring at least one of: state parameters of scene elements acquired by an autonomous vehicle in a historical driving process, state parameters of scene elements acquired by a non-autonomous vehicle in the historical driving process, or state parameters of scene elements in a virtual scene stored in a scene database, the state parameters being acquired during running of the virtual scene.
7. The method of claim 2, wherein generating the virtual scene based on the scene element and the state parameter of the scene element comprises:
generating a candidate virtual scene corresponding to a target time period based on the state parameter of the scene element at each moment in the target time period;
intercepting, from the candidate virtual scene, scene segments that meet a scene interception condition;
and acquiring the virtual scene based on the scene segments.
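One possible reading of claim 7 in code: build a candidate scene as a sequence of per-moment state parameters over the target time period, then intercept the segments that meet a scene interception condition. The frame fields and the time-to-collision condition below are illustrative assumptions only.

```python
# Illustrative sketch of claim 7: intercept, from a candidate scene built over a
# target time period, the segments whose frames meet an interception condition
# (here: time-to-collision below a threshold). Field names are hypothetical.
from typing import Callable, List, Tuple

def intercept_segments(frames: List[dict],
                       condition: Callable[[dict], bool]) -> List[Tuple[int, int]]:
    """Return (start, end) index pairs of maximal runs of frames meeting the condition."""
    segments, start = [], None
    for i, frame in enumerate(frames):
        if condition(frame) and start is None:
            start = i
        elif not condition(frame) and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(frames) - 1))
    return segments

if __name__ == "__main__":
    candidate_scene = [{"t": t, "ttc_s": ttc} for t, ttc in
                       enumerate([9.0, 8.5, 2.4, 1.9, 2.1, 7.0, 6.5])]
    print(intercept_segments(candidate_scene, lambda f: f["ttc_s"] < 3.0))  # [(2, 4)]
```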
8. The method according to claim 7, wherein before the scene segments meeting the scene interception condition are intercepted from the candidate virtual scene, the method further comprises:
displaying driving safety information, wherein the driving safety information comprises driving conditions that need to be met by an autonomous vehicle in a target scene;
acquiring the scene interception condition input based on the target scene.
9. The method of claim 7, wherein the obtaining the virtual scene based on the scene segments comprises:
displaying the intercepted scene segments in a virtual scene display interface;
in response to a selection operation on any scene segment, determining the scene segment as the virtual scene.
10. The method of claim 1, wherein prior to displaying the plurality of category labels in the label display interface, the method further comprises:
acquiring scene labels corresponding to the plurality of virtual scenes respectively;
classifying the virtual scenes based on the scene labels respectively corresponding to the virtual scenes to obtain multiple categories, and acquiring the category label input for each category of virtual scenes.
11. The method according to claim 10, wherein the acquiring scene labels corresponding to the plurality of virtual scenes respectively comprises:
for any virtual scene, acquiring scene labels of multiple dimensions of the virtual scene;
the classifying the plurality of virtual scenes based on the scene labels respectively corresponding to the plurality of virtual scenes to obtain a plurality of categories, and acquiring the category label input for each category of virtual scenes comprises:
classifying the plurality of virtual scenes based on scene labels of a first dimension of the plurality of virtual scenes to obtain a plurality of categories in the first dimension, and acquiring a first category label input for each category of virtual scenes;
wherein the first dimension is any one of the plurality of dimensions.
12. The method of claim 11, wherein after the classifying the plurality of virtual scenes based on the scene labels of the first dimension of the plurality of virtual scenes to obtain a plurality of categories in the first dimension and acquiring the first category label input for each category of virtual scenes, the method further comprises:
classifying a plurality of virtual scenes having a same first category label based on scene labels of a second dimension of the virtual scenes to obtain a plurality of categories in the second dimension, and acquiring a second category label input for each category of virtual scenes;
wherein the second dimension is any one of the plurality of dimensions except the first dimension.
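An illustrative sketch of the two-level classification of claims 10-12, assuming each virtual scene carries scene labels in several hypothetical dimensions (e.g. road type, weather); the dimension names and values are assumptions for illustration.

```python
# Illustrative sketch of claims 10-12: classify scenes by a first-dimension scene
# label, then sub-classify scenes sharing the same first category label by a
# second-dimension label. Dimension names are hypothetical.
from collections import defaultdict
from typing import Dict, List

def classify_by_dimension(scenes: List[dict], dimension: str) -> Dict[str, List[dict]]:
    """Group scenes by the value of one scene-label dimension."""
    groups = defaultdict(list)
    for scene in scenes:
        groups[scene["labels"][dimension]].append(scene)
    return dict(groups)

if __name__ == "__main__":
    scenes = [
        {"id": "s1", "labels": {"road_type": "highway", "weather": "rain"}},
        {"id": "s2", "labels": {"road_type": "highway", "weather": "clear"}},
        {"id": "s3", "labels": {"road_type": "urban",   "weather": "rain"}},
    ]
    first_level = classify_by_dimension(scenes, "road_type")       # first category labels
    for label, group in first_level.items():
        second_level = classify_by_dimension(group, "weather")     # second category labels
        print(label, {k: [s["id"] for s in v] for k, v in second_level.items()})
```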
13. The method according to claim 1, wherein the label display interface comprises a plurality of category labels of different dimensions, and the acquiring the category label selected from the label display interface comprises:
obtaining a plurality of category labels with different dimensions selected from the label display interface;
the running of the virtual scene corresponding to the selected category label comprises the following steps:
and running the virtual scene corresponding to the selected plurality of category labels.
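A minimal sketch of the multi-dimension selection of claim 13: only scenes whose labels match every selected (dimension, category label) pair are kept for running. Label keys and values are hypothetical.

```python
# Illustrative sketch of claim 13: filter scenes by category labels selected in
# several different dimensions, then run only the scenes matching all of them.
from typing import Dict, List

def scenes_matching_labels(scenes: List[dict], selected: Dict[str, str]) -> List[dict]:
    """Keep scenes whose labels contain every selected (dimension, label) pair."""
    return [s for s in scenes
            if all(s["labels"].get(dim) == label for dim, label in selected.items())]

if __name__ == "__main__":
    scenes = [
        {"id": "s1", "labels": {"road_type": "highway", "weather": "rain"}},
        {"id": "s2", "labels": {"road_type": "highway", "weather": "clear"}},
    ]
    print(scenes_matching_labels(scenes, {"road_type": "highway", "weather": "rain"}))
```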
14. The method of claim 1, wherein the displaying a plurality of category labels in a label display interface comprises:
displaying a first category label of a plurality of first dimensions in the label display interface;
and responding to the triggering operation of a pull-down control corresponding to any first category label, and displaying a plurality of second-dimension second category labels under the first category labels in the label display interface.
15. The method of claim 1, wherein the label display interface further comprises a scene label of each virtual scene corresponding to any category label, and the method further comprises:
acquiring a scene label selected from the label display interface;
and operating the virtual scene corresponding to the selected scene label.
16. The method of claim 1, wherein the running the virtual scene corresponding to the selected category label comprises:
acquiring a running condition of the virtual scene corresponding to the selected category label, wherein the running condition is a condition that needs to be met by behavior data of scene elements in the virtual scene during running of the virtual scene;
acquiring behavior data of the scene elements in the process of running the virtual scene, wherein the behavior data represents the behavior of the scene elements in the virtual scene;
and generating a running result based on the behavior data and the running condition.
17. The method of claim 16, wherein the generating a running result based on the behavior data and the running condition comprises:
determining the running result as a pass when the behavior data meets the running condition;
and determining the running result as a failure when the behavior data does not meet the running condition.
18. The method of claim 17, wherein the determining the running result as a pass when the behavior data meets the running condition comprises:
determining the running result as a pass when the virtual scene has a plurality of running conditions and the behavior data corresponding to each of the plurality of running conditions meets that running condition;
and the determining the running result as a failure when the behavior data does not meet the running condition comprises:
determining the running result as a failure when the virtual scene has a plurality of running conditions and the behavior data for any one of the running conditions does not meet that running condition.
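The pass/fail logic of claims 16-18 can be illustrated as follows; the behavior-data fields and running conditions below are hypothetical examples, not conditions required by the application.

```python
# Illustrative sketch of claims 16-18: a run passes only if the behavior data
# recorded while the scene runs satisfies every running condition; otherwise it fails.
from typing import Callable, Dict

RunningCondition = Callable[[dict], bool]

def evaluate_run(behavior_data: dict, conditions: Dict[str, RunningCondition]) -> dict:
    """Return the per-condition outcome and the overall running result."""
    per_condition = {name: cond(behavior_data) for name, cond in conditions.items()}
    return {"conditions": per_condition,
            "result": "pass" if all(per_condition.values()) else "fail"}

if __name__ == "__main__":
    behavior_data = {"min_ttc_s": 2.8, "max_decel_mps2": 3.1, "collided": False}
    conditions = {
        "no_collision":  lambda d: not d["collided"],
        "ttc_above_2s":  lambda d: d["min_ttc_s"] > 2.0,
        "decel_below_5": lambda d: d["max_decel_mps2"] < 5.0,
    }
    print(evaluate_run(behavior_data, conditions))
```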
19. The method of claim 16, wherein after the generating a running result based on the behavior data and the running condition, the method further comprises:
displaying a running result display interface, wherein the running result display interface comprises at least one of: the number of virtual scenes whose running result is a pass, the number of virtual scenes whose running result is a failure, for each run virtual scene, the running conditions met by the behavior data corresponding to the virtual scene, for each run virtual scene, the running conditions not met by the behavior data corresponding to the virtual scene, the running pass rate of the virtual scenes of any category, the pass rate of any running condition, or the ratio of the number of run virtual scenes to the total number of virtual scenes.
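An illustrative aggregation matching the statistics listed in claim 19 (pass/fail counts, per-category pass rate, ratio of run scenes to total); the field names and categories are hypothetical assumptions.

```python
# Illustrative sketch of claim 19: aggregate per-scene running results into the
# statistics a result interface could display.
from collections import defaultdict
from typing import List

def summarize_results(runs: List[dict], total_scene_count: int) -> dict:
    """Compute pass/fail counts, per-category pass rate, and the run-coverage ratio."""
    passed = sum(1 for r in runs if r["result"] == "pass")
    by_category = defaultdict(lambda: [0, 0])            # category -> [passed, run]
    for r in runs:
        by_category[r["category"]][1] += 1
        if r["result"] == "pass":
            by_category[r["category"]][0] += 1
    return {
        "passed": passed,
        "failed": len(runs) - passed,
        "pass_rate_by_category": {c: p / n for c, (p, n) in by_category.items()},
        "run_ratio": len(runs) / total_scene_count if total_scene_count else 0.0,
    }

if __name__ == "__main__":
    runs = [
        {"scene_id": "s1", "category": "cut-in",      "result": "pass"},
        {"scene_id": "s2", "category": "cut-in",      "result": "fail"},
        {"scene_id": "s3", "category": "rainy-night", "result": "pass"},
    ]
    print(summarize_results(runs, total_scene_count=10))
```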
20. The method according to claim 16, wherein the acquiring a running condition of the virtual scene corresponding to the selected category label comprises:
displaying a running condition setting interface;
and acquiring the running condition of the virtual scene that is input based on the running condition setting interface.
21. An apparatus for processing a virtual scene, the apparatus comprising:
the label display module is configured to display a plurality of category labels in a label display interface, wherein the category labels are obtained by classifying a plurality of virtual scenes for automatic driving test, and each category label corresponds to a virtual scene belonging to the same category;
the label obtaining module is configured to obtain a category label selected from the label display interface;
and the scene operation module is configured to operate the virtual scene corresponding to the selected category label.
22. A computer device, characterized in that it comprises a processor and a memory, in which at least one program code is stored, which is loaded and executed by the processor to implement the operations performed by the processing method of a virtual scene according to any one of claims 1 to 20.
23. A computer-readable storage medium, wherein at least one program code is stored in the storage medium, and the program code is loaded and executed by a processor to implement the operations performed by the processing method of a virtual scene according to any one of claims 1 to 20.
CN202110619626.0A 2021-06-03 2021-06-03 Virtual scene processing method, device, equipment and storage medium Active CN113361386B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110619626.0A CN113361386B (en) 2021-06-03 2021-06-03 Virtual scene processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113361386A true CN113361386A (en) 2021-09-07
CN113361386B CN113361386B (en) 2022-11-15

Family

ID=77531623

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110619626.0A Active CN113361386B (en) 2021-06-03 2021-06-03 Virtual scene processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113361386B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111199087A (en) * 2018-10-31 2020-05-26 百度在线网络技术(北京)有限公司 Scene recognition method and device
CN109657355A (en) * 2018-12-20 2019-04-19 安徽江淮汽车集团股份有限公司 A kind of emulation mode and system of road vehicle virtual scene
CN110188482A (en) * 2019-05-31 2019-08-30 初速度(苏州)科技有限公司 A kind of test scene creation method and device based on intelligent driving

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984792A (en) * 2022-09-30 2023-04-18 北京瑞莱智慧科技有限公司 Countermeasure test method, system and storage medium
CN115984792B (en) * 2022-09-30 2024-04-30 北京瑞莱智慧科技有限公司 Countermeasure test method, system and storage medium

Also Published As

Publication number Publication date
CN113361386B (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN111182453B (en) Positioning method, positioning device, electronic equipment and storage medium
CN111125442B (en) Data labeling method and device
CN110095128B (en) Method, device, equipment and storage medium for acquiring missing road information
CN110865756B (en) Image labeling method, device, equipment and storage medium
CN110864913B (en) Vehicle testing method and device, computer equipment and storage medium
CN110991491A (en) Image labeling method, device, equipment and storage medium
CN113205515B (en) Target detection method, device and computer storage medium
CN113378705B (en) Lane line detection method, device, equipment and storage medium
CN110955972A (en) Virtual scene generation method and device, computer equipment and storage medium
CN113160427A (en) Virtual scene creating method, device, equipment and storage medium
CN114332821A (en) Decision information acquisition method, device, terminal and storage medium
CN112802369A (en) Method and device for acquiring flight route, computer equipment and readable storage medium
CN110457571B (en) Method, device and equipment for acquiring interest point information and storage medium
CN111782950A (en) Sample data set acquisition method, device, equipment and storage medium
WO2022142713A1 (en) Method and apparatus for monitoring vehicle driving information
CN112269939B (en) Automatic driving scene searching method, device, terminal, server and medium
CN113361386B (en) Virtual scene processing method, device, equipment and storage medium
CN112053360A (en) Image segmentation method and device, computer equipment and storage medium
CN111754564B (en) Video display method, device, equipment and storage medium
CN112235609A (en) Content item data playing method and device, computer equipment and storage medium
CN115965936A (en) Edge position marking method and equipment
CN110399688B (en) Method and device for determining environment working condition of automatic driving and storage medium
CN112560612B (en) System, method, computer device and storage medium for determining business algorithm
CN114598992A (en) Information interaction method, device, equipment and computer readable storage medium
CN113936240A (en) Method, device and equipment for determining sample image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant