CN112513951A - Scene file acquisition method and device - Google Patents

Scene file acquisition method and device

Info

Publication number
CN112513951A
CN112513951A (application CN202080004249.3A)
Authority
CN
China
Prior art keywords
traffic
scene
vehicle
environment data
around
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080004249.3A
Other languages
Chinese (zh)
Inventor
朱杰
夏子为
杨帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN112513951A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00: Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001: Planning or execution of driving tasks
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108: Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137: Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/09: Arrangements for giving variable traffic instructions
    • G08G1/0962: Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967: Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708: Systems involving transmission of highway information, e.g. weather, speed limits, where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725: Systems involving transmission of highway information, e.g. weather, speed limits, where the received information generates an automatic action on the vehicle control

Abstract

Embodiments of this application disclose a scene file acquisition method and device, applied to the autonomous driving field within the field of artificial intelligence. The method comprises the following steps: acquiring traffic environment data around a host vehicle, where the data indicates position information of traffic-participating entities around the host vehicle and is obtained based on data collected by entities on traffic roads; and, when it is determined from the traffic environment data that a traffic-participating entity satisfying a first preset condition exists around the host vehicle, determining a first start point and a first end point, in the traffic environment data, of a first scene file for showing a first traffic scene. Because the scene file is acquired automatically, the method is more labor-saving and convenient; because the scene file is derived from real road data, subsequent simulation training of autonomous vehicles is closer to reality; and because the first traffic scene is configurable, scene files of the traffic scenes the user actually wants can be screened out automatically.

Description

Scene file acquisition method and device
Technical Field
This application relates to the field of automobiles, and in particular to a scene file acquisition method and device.
Background
To drive an autonomous vehicle safely and reliably, a large number of scene data sets from road traffic, i.e. scene files for showing traffic scenes, are required in order to perform simulation evaluation and model training on the path planning model, the vehicle control model, or other models configured in the autonomous vehicle.
At present, a simulation scene editor is used to manually edit the road attributes around the host vehicle and the behavior of the traffic participants around the host vehicle so as to generate a desired scene file.
However, the types of scenes that this method can cover are limited, and the method is very laborious, so a more convenient method of acquiring scene files is urgently needed.
Disclosure of Invention
This application provides a scene file acquisition method and device, offering a scheme for automatically acquiring scene files for showing traffic scenes, which is more labor-saving and convenient. Because the scene file comes from the traffic environment data around the host vehicle, and that data is obtained based on data collected by entities on traffic roads, the resulting scene file is more realistic, so subsequent simulation training of autonomous vehicles is closer to reality and the safety of the trained autonomous vehicle is improved. Moreover, the first traffic scene is configurable, so scene files of the traffic scenes the user actually wants can be screened out automatically.
To solve the above technical problem, this application provides the following technical solutions:
In a first aspect, this application provides a scene file acquisition method, which can be used in the autonomous driving field within the field of artificial intelligence. The method comprises the following steps: an electronic device acquires traffic environment data around a host vehicle, where the data indicates position information of traffic-participating entities around the host vehicle and is obtained based on data collected by entities on traffic roads. The traffic-participating entities around the host vehicle are the entities around it that are directly or indirectly related to traffic, including but not limited to all entities participating in traffic, such as autonomous vehicles, pedestrians, cyclists, obstacles, and traffic directors around the host vehicle. The electronic device acquires at least one first preset condition in one-to-one correspondence with at least one first traffic scene, where the first traffic scene is configurable; a first traffic scene is a traffic scene in which the host vehicle and the traffic-participating entities around it interact directly or indirectly. For any first traffic scene among the at least one traffic scene, if the electronic device determines from the traffic environment data that a traffic-participating entity satisfying the first preset condition exists around the host vehicle, it determines a first start point and a first end point corresponding to the first traffic scene. The first start point indicates the start, in the traffic environment data, of a first scene file for showing the first traffic scene, and the first end point indicates the end of that scene file in the traffic environment data.
This embodiment provides a scheme for automatically acquiring scene files for showing traffic scenes, which is more labor-saving and convenient than manually editing and generating scene files. Because the scene file comes from the traffic environment data around the host vehicle, and that data is obtained based on data collected by entities on traffic roads, the resulting scene file is more realistic, so subsequent simulation training of autonomous vehicles is closer to reality and the safety of the trained autonomous vehicle is improved. Research by technical staff shows that most of the traffic environment data collected at the road end belongs to ordinary traffic scenes, and only a small portion belongs to traffic scenes that deserve close attention; because the first traffic scene in this scheme is configurable, scene files of the traffic scenes the user actually wants can be screened out automatically by configuring the first traffic scene, further improving the efficiency of the scene file generation process.
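The first-aspect flow above can be sketched in a few lines of code. This is an illustrative reading only: the names (`Frame`, `find_scene_span`) and the per-moment data layout are assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    t: float        # timestamp within the recorded traffic environment data
    entities: dict  # entity id -> (x, y) position relative to the host vehicle

def find_scene_span(frames, preset_condition):
    """Scan the traffic environment data for a traffic-participating entity
    satisfying the first preset condition; return the (first start point,
    first end point) timestamps of the scene file, or None if never satisfied."""
    start = None
    for frame in frames:
        if preset_condition(frame):
            if start is None:
                start = frame.t   # first start point of the first scene file
            end = frame.t         # keep extending the first end point
        elif start is not None:
            return start, end
    return (start, end) if start is not None else None
```

For example, with a condition such as "some entity is within 2 m laterally of the host vehicle", the function returns the span of frames during which that interaction holds.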
In one possible implementation of the first aspect, acquiring the traffic environment data around the host vehicle comprises: the electronic device acquires original traffic environment data around the host vehicle, inputs it into a target tracking model, and performs target tracking on the traffic-participating entities around the host vehicle through the model to obtain an inference result output by the model. The target tracking model may be embodied as a neural network, including but not limited to an IOU tracking model, a Kalman filter tracking model, or another model for tracking targets. The inference result indicates the position information of the traffic-participating entities around the host vehicle, and this position information can be used to derive their running tracks; a running track can indicate either that an entity's position changes or that it does not. More specifically, the inference result comprises the identification information, predicted type, and coordinate information (i.e., the concrete form of the position information) of each traffic-participating entity around the host vehicle at each moment. Both the original traffic environment data and the inference result belong to the traffic environment data around the host vehicle.
Determining the first start point and the first end point corresponding to the first traffic scene when it is determined, from the traffic environment data, that a traffic-participating entity satisfying the first preset condition exists around the host vehicle comprises: the electronic device determines the first start point and the first end point when it determines, according to the inference result, that a traffic-participating entity whose running track satisfies the first preset condition exists around the host vehicle.
In this embodiment, the traffic environment data around the host vehicle is input into the target tracking model, which outputs the position information of the surrounding traffic-participating entities at each moment; their running tracks are derived from that information, and it is then judged whether any surrounding entity's running track satisfies the first preset condition. That is, the target tracking model is used to generate the running tracks of the traffic-participating entities around the host vehicle, improving the accuracy of the track generation process.
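Assembling running tracks from the per-moment inference results (identification information, predicted type, coordinates) described above could look like the following sketch. The record layout `(time, entity_id, predicted_type, (x, y))` is an assumption made for illustration.

```python
def build_tracks(inference_results):
    """inference_results: list of (time, entity_id, pred_type, (x, y)) records,
    one per traffic-participating entity per moment.
    Returns {entity_id: [(time, (x, y)), ...]} -- the running track of each
    traffic-participating entity around the host vehicle."""
    tracks = {}
    for t, eid, _pred_type, xy in inference_results:
        tracks.setdefault(eid, []).append((t, xy))
    for track in tracks.values():
        track.sort()  # order each running track by time
    return tracks
```

A preset condition on a running track (rather than on a single frame) can then be evaluated against each entry of the returned dictionary.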
In one possible implementation of the first aspect, the electronic device acquires control data of the host vehicle, which indicates the running track of the host vehicle; the control data is collected by the control system of the host vehicle and may include the driving direction, steering angle, speed, acceleration, and other information of the host vehicle. The electronic device acquires at least one second preset condition corresponding to at least one second traffic scene, where the second traffic scene is configurable and is a traffic scene for judging the driving behavior of the host vehicle. When the electronic device determines, according to the control data, that the running track of the host vehicle satisfies the second preset condition, it determines a second start point and a second end point corresponding to the second traffic scene; the second start point indicates the start, in the traffic environment data, of a second scene file for showing the second traffic scene, and the second end point indicates its end.
In this embodiment, both scene files showing traffic scenes in which the host vehicle directly or indirectly interacts with surrounding traffic-participating entities and scene files showing traffic scenes defined by the host vehicle's own running track can be collected, which enriches the traffic scenes the scheme can cover and improves the richness of the obtained scene files.
In one possible implementation of the first aspect, the electronic device is a server, the entities on the traffic road include the host vehicle and/or road-end collection devices, the first start point is a first start time, and the first end point is a first end time. Acquiring the original traffic environment data around the host vehicle comprises: the server receives original traffic environment data around the host vehicle within a first time period, sent by the host vehicle and/or the road-end collection device. If the original traffic environment data is collected by radar, it is expressed as point cloud data; further, the point cloud data over the first time period may include multiple point cloud subsets, each presenting the original traffic environment data around the host vehicle at one moment within the first time period. If the original traffic environment data is collected by a camera, it is represented as a video or the images constituting that video. The method further comprises: the server acquires the first scene file from the traffic environment data around the host vehicle within the first time period according to the first start time and the first end time, and labels the first scene file with the first traffic scene.
In this embodiment, the host vehicle or road-end collection device is responsible for collecting the traffic environment data around the host vehicle, and the server is responsible for selecting from that data the scene file showing the first traffic scene. This avoids occupying the computing resources of the host vehicle or road-end collection device, keeps the scheme as imperceptible as possible to the autonomous vehicle's user, and avoids degrading that user's experience; and because the server's computing resources are relatively abundant, having the server perform the selection also helps improve the efficiency of the selection process.
In one possible implementation of the first aspect, for any one set of a first start time and a first end time among at least one such set, the server acquiring the first scene file from the traffic environment data around the host vehicle within the first time period comprises: the server may cut the traffic environment data within the first time period according to that set of first start time and first end time to obtain a scene file for showing the first traffic scene. Alternatively, if the original traffic environment data is expressed as point cloud data, the server acquires, from the point cloud data set, the point cloud subsets corresponding to the moments between the first start time and the first end time to obtain the scene file corresponding to that set of first start time and first end time. Alternatively, since the inference result comprises the identification information, predicted type, and coordinate information of each traffic-participating entity around the host vehicle at each of N moments (i.e., the N moments included in the first time period), the server acquires, from the inference results corresponding to the N moments, those corresponding to the moments between the first start time and the first end time and regards them as one first scene file. Alternatively, the server acquires both the inference results and the original traffic environment data corresponding to the moments between the first start time and the first end time, and takes their combination as the first scene file.
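Whichever representation is used (point cloud subsets, inference results, or both), the cutting step reduces to selecting the per-moment records whose timestamps fall in the span. A minimal sketch, assuming the data is keyed by timestamp (an assumption for illustration):

```python
def cut_scene_file(data_by_time, start_time, end_time):
    """Cut a scene file out of first-time-period data: keep only the
    subsets (point cloud subsets or inference results) whose timestamps
    fall within [start_time, end_time]."""
    return {t: d for t, d in sorted(data_by_time.items())
            if start_time <= t <= end_time}
```

The same function applies unchanged to both data types, which is why the patent can describe the alternatives interchangeably.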
In one possible implementation of the first aspect, the position information of the traffic-participating entities around the host vehicle can be used to derive their running tracks, and the first preset condition includes a first trigger condition and a first confirmation condition. Determining the first start point and the first end point when it is determined that a traffic-participating entity satisfying the first preset condition exists around the host vehicle comprises: the server generates a first start time according to the traffic environment data and the first trigger condition, where the first start time is determined according to the moment at which the running track of a first traffic-participating entity around the host vehicle satisfies the first trigger condition. Specifically, the server may directly take the moment at which the first trigger condition is satisfied as the first start time, or take the moment a second duration before it as the first start time. If the running track of the first traffic-participating entity satisfies the first confirmation condition within a first preset duration, a first end time is generated, determined according to the moment at which the first confirmation condition is satisfied; the first preset duration is configurable.
Specifically, the server may directly take the moment at which the first confirmation condition is satisfied as the first end time, or take the moment a third duration after it as the first end time.
This embodiment provides a concrete way of generating the first start time and the first end time, improving the operability of the scheme.
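The trigger/confirmation logic above is essentially a small state machine: the start is optionally offset a second duration before the trigger moment, the end a third duration after the confirmation moment, and the candidate is discarded if confirmation does not arrive within the first preset duration. The following is a hedged sketch; all names and the sample conditions in the test are assumptions, not values from the patent.

```python
def detect_scene(track, trigger, confirm, max_wait, pre=0.0, post=0.0):
    """track: list of (time, value) samples of a running track.
    trigger/confirm: predicates on a sample value (the first trigger
    condition and the first confirmation condition).
    max_wait: the first preset duration; pre/post: the second and third
    duration offsets. Returns (first_start_time, first_end_time) or None."""
    trig_t = None
    for t, v in track:
        if trig_t is None:
            if trigger(v):
                trig_t = t                 # trigger condition satisfied
        else:
            if t - trig_t > max_wait:      # confirmation window expired
                return None
            if confirm(v):
                return trig_t - pre, t + post
    return None
```

Returning `None` when the window expires mirrors the host-vehicle variant described below, where the recorded data is deleted if the confirmation condition is never met.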
In one possible implementation of the first aspect, the electronic device is the host vehicle, the entity on the traffic road is the host vehicle, and the first preset condition includes a first trigger condition and a first confirmation condition. The host vehicle acquiring the original traffic environment data around it comprises: the host vehicle collects that data in real time. Determining the first start point and the first end point when it is determined that a traffic-participating entity satisfying the first preset condition exists around the host vehicle comprises: when the host vehicle determines, according to the inference result, that a first traffic-participating entity whose running track satisfies the first trigger condition exists around it, it starts recording the traffic environment data around it. If the running track of the first traffic-participating entity satisfies the first confirmation condition within a first preset duration, the host vehicle determines the moment at which to stop recording, determines the recorded traffic environment data as the first scene file for showing the first traffic scene, adds a label to the first scene file, and sends the labeled first scene file to the server; the recording stop moment is determined according to the moment at which the first confirmation condition is satisfied. If the first confirmation condition is not satisfied within the first preset duration, the host vehicle deletes the recorded traffic environment data.
In this embodiment, the host vehicle collects the traffic environment data around it in real time and itself selects the scene files showing traffic scenes, so it sends the server only those scene files. This avoids sending unnecessary files to the server, reducing the use of communication resources and the waste of the server's storage resources.
In one possible implementation of the first aspect, the original traffic environment data around the host vehicle and the inference result (i.e., the traffic environment data after secondary processing) are two different types of traffic environment data around the host vehicle. The host vehicle starting to record the traffic environment data around it comprises: the host vehicle starts recording the inference result corresponding to the original traffic environment data; or it starts recording the original traffic environment data; or it starts recording both.
In one possible implementation of the first aspect, the method further comprises: the electronic device acquires map data matching the traffic environment data around the host vehicle; the map data may be a high-precision map, a navigation map, or another type of map, and may be obtained based on the absolute position information of the host vehicle. Determining the first start point and the first end point when it is determined that a traffic-participating entity satisfying the first preset condition exists around the host vehicle comprises: after acquiring the matching map data, the electronic device may align the traffic environment data collected by the host vehicle with the map data, and determine the first start point and the first end point when it determines, according to both the traffic environment data and the map data, that a traffic-participating entity satisfying the first preset condition exists around the host vehicle.
In this embodiment, because of factors such as weather and obstacles around the host vehicle, the accuracy of the traffic environment data collected by the host vehicle may not be high enough; using map data that matches the traffic environment data to compensate for it can improve the accuracy of the scene file acquisition process.
In one possible implementation of the first aspect, the position information of the traffic-participating entities around the host vehicle is used to reflect their running tracks. The method further comprises: the electronic device acquires control data of the host vehicle, which indicates the running track of the host vehicle; and the electronic device acquires a first screening condition corresponding to the first traffic scene. The first screening condition is used to screen, a second time, scene files that satisfy the first preset condition, so as to avoid erroneously acquiring scene files; the specific content of the first screening condition depends on the scene type of the first traffic scene. Determining the first start point and the first end point when it is determined that a traffic-participating entity satisfying the first preset condition exists around the host vehicle comprises: the electronic device determines the first start point and the first end point when it determines, according to the traffic environment data and the control data of the host vehicle, that a first traffic-participating entity whose running track satisfies the first preset condition exists around the host vehicle, and that the running track of the host vehicle and/or of the first traffic-participating entity satisfies the first screening condition.
In this embodiment, different interactions between surrounding traffic-participating entities and the host vehicle can produce the same running track relative to the host vehicle, so judging with the first preset condition alone risks misjudgment. Performing a secondary check with the first screening condition after a traffic-participating entity satisfying the first preset condition is found reduces the probability of erroneously acquiring a scene file.
In one possible implementation of the first aspect, if the first traffic scene is a leading vehicle cutting out of the host vehicle's lane, the first screening condition may include any one or a combination of the following: the lateral speed of the leading vehicle is within a first speed range, the absolute longitudinal displacement of the leading vehicle is greater than or equal to a third preset distance, the steering angle of the host vehicle is less than or equal to a first preset angle, and the speed of the host vehicle is within a second speed range. If the first traffic scene is a side vehicle cutting into the host vehicle's lane, the first screening condition may include any one or a combination of the following: the lateral speed of the first traffic-participating entity satisfying the first preset condition is within a third speed range, the absolute longitudinal displacement of that entity is positive, and that entity is the vehicle nearest in front of the host vehicle.
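The cut-out screening condition above is a conjunction of simple range checks. A sketch follows; the threshold values are placeholders chosen for illustration, not values disclosed in the patent.

```python
def passes_cut_out_screen(lead, host,
                          lat_speed_range=(0.5, 3.0),    # m/s, first speed range
                          min_long_disp=5.0,             # m, third preset distance
                          max_steer_angle=5.0,           # deg, first preset angle
                          host_speed_range=(5.0, 40.0)): # m/s, second speed range
    """Secondary screen for the 'leading vehicle cuts out of host lane' scene:
    all four range checks from the first screening condition must hold."""
    return (lat_speed_range[0] <= lead["lat_speed"] <= lat_speed_range[1]
            and lead["long_disp"] >= min_long_disp
            and abs(host["steer_angle"]) <= max_steer_angle
            and host_speed_range[0] <= host["speed"] <= host_speed_range[1])
```

The steering-angle bound on the host vehicle is what distinguishes a genuine cut-out from the host itself changing lanes, which is the kind of misjudgment the secondary screen exists to filter.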
In one possible implementation manner of the first aspect, the first preset condition includes a first trigger condition and a first confirmation condition, and the second preset condition includes a second trigger condition and a second confirmation condition. The first traffic scene is a traffic scene cut by a front vehicle from a lane of the main vehicle, the first trigger condition comprises that a first bypass vehicle which is the closest to the longitudinal distance of the main vehicle in front of the main vehicle drives out of a first trigger line, the first confirmation condition comprises that the first bypass vehicle drives out of a first confirmation line, and the transverse distance between the first confirmation line and the main vehicle is larger than the transverse distance between the first trigger line and the main vehicle. Or the first traffic scene is a traffic scene in which the side car cuts into the main car lane, the first trigger condition comprises that the side car is located in a first preset range in front of the main car, the transverse distance between the side car and the main car is changed from being larger than a first preset distance to being smaller than or equal to the first preset distance, the first confirmation condition comprises that the transverse distance between the side car and the main car is changed from being larger than a second preset distance to being smaller than or equal to a second preset distance, and the second preset distance is smaller than the first preset distance. Or the second traffic scene is a traffic scene in which the main vehicle makes a U-shaped turn, the second trigger condition is that the left-turn direction angle of the main vehicle is greater than the first angle and the vehicle speed of the main vehicle is greater than the first vehicle speed, and the second confirmation condition is that the left-turn direction angle of the main vehicle is less than the second angle.
The embodiments of the application disclose a plurality of specific traffic scenes, and disclose the specific forms that the first preset condition or the second preset condition takes in different traffic scenes, which enhances the implementation flexibility of the solution.
In a second aspect, an embodiment of the present application provides a method for acquiring a traffic scene file, which may be used in the automatic driving field within the artificial intelligence field. The method includes: the electronic device acquires traffic environment data around the host vehicle and control data of the host vehicle, where the traffic environment data around the host vehicle is obtained based on data collected by entities in the traffic road, and the control data of the host vehicle is used to indicate the travel trajectory of the host vehicle; the electronic device obtains a second preset condition corresponding to a second traffic scene, where the second traffic scene is configurable; and when the electronic device determines, according to the control data of the host vehicle, that the travel trajectory of the host vehicle satisfies the second preset condition, the electronic device determines a second start point and a second end point corresponding to the second traffic scene. The second start point indicates the start point of a second scene file for showing the second traffic scene in the traffic environment data, and the second end point indicates the end point of that second scene file.
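For illustration only, the second aspect's check that the host trajectory satisfies a second preset condition, using the U-turn trigger and confirmation conditions described elsewhere in this application, might look like the following; the threshold values and the control-sample layout are assumptions:

```python
# Hypothetical sketch of locating a U-turn segment in the host-vehicle
# control data: the trigger fires when the left-turn steering angle
# exceeds a first angle at sufficient speed; the confirmation fires when
# the angle drops below a second angle. All thresholds are assumed.
FIRST_ANGLE = 90.0   # degrees, trigger threshold (assumed)
SECOND_ANGLE = 10.0  # degrees, confirmation threshold (assumed)
FIRST_SPEED = 1.0    # m/s, minimum speed at trigger time (assumed)

def find_u_turn(control_frames):
    """control_frames: list of (left_turn_angle_deg, speed_mps) samples.
    Returns (start_index, end_index) of the U-turn segment, or None."""
    start = None
    for i, (angle, speed) in enumerate(control_frames):
        if start is None:
            if angle > FIRST_ANGLE and speed > FIRST_SPEED:
                start = i          # second trigger condition met
        elif angle < SECOND_ANGLE:
            return start, i        # second confirmation condition met
    return None
```

The returned indices play the role of the second start point and second end point: they delimit the span of traffic environment data to be cut out as the second scene file.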
In one possible implementation of the second aspect, the traffic environment data around the host vehicle is used to indicate position information of traffic-participating entities around the host vehicle, and the method further includes: the electronic device obtains a first preset condition corresponding to a first traffic scene, where the first traffic scene is configurable; and when the electronic device determines, according to the traffic environment data around the host vehicle, that a traffic-participating entity satisfying the first preset condition exists around the host vehicle, the electronic device determines a first start point and a first end point corresponding to the first traffic scene. The first start point indicates the start point of a first scene file for showing the first traffic scene in the traffic environment data, and the first end point indicates the end point of that first scene file.
In the second aspect of the embodiments of the present application, the electronic device may further perform the steps performed by the electronic device in the various possible implementations of the first aspect. For the meanings of terms, specific implementation steps, and beneficial effects of the second aspect and its possible implementations, reference may be made to the descriptions of the various possible implementations of the first aspect; details are not repeated here.
In a third aspect, an embodiment of the present application provides a method for acquiring a traffic scene file, which may be used in the automatic driving field within the artificial intelligence field. The method includes: the electronic device receives, through a display interface, a configuration operation performed by a user on a first traffic scene, and obtains a first preset condition corresponding to the first traffic scene. Specifically, in one implementation, a plurality of traffic scenes of different types may be displayed on the display interface of the electronic device, so that the user may select one or more traffic scenes to input the configuration operation on the first traffic scene. In another implementation, the display interface of the electronic device may display traffic scene constraint conditions; the user may select constraint conditions to complete the configuration of one first traffic scene, and may repeat the foregoing operation to configure a plurality of first traffic scenes. The electronic device acquires traffic environment data around the host vehicle within a first time period, where the traffic environment data is used to indicate position information of traffic-participating entities around the host vehicle and is obtained based on data collected by entities in the traffic road. When it is determined, according to the traffic environment data around the host vehicle within the first time period, that a traffic-participating entity satisfying the first preset condition exists around the host vehicle, a first start time and a first end time corresponding to the first traffic scene are determined from within the first time period, and the first start time and the first end time are presented to the user.
The first starting time indicates the starting time of the first scene file for showing the first traffic scene in the traffic environment data, and the first ending time indicates the ending time of the first scene file for showing the first traffic scene in the traffic environment data.
In the third aspect of the embodiments of the present application, the electronic device may further perform the steps performed by the electronic device in the various possible implementations of the first aspect. For the meanings of terms, specific implementation steps, and beneficial effects of the third aspect and its possible implementations, reference may be made to the descriptions of the various possible implementations of the first aspect; details are not repeated here.
In a fourth aspect, an embodiment of the present application provides an apparatus for acquiring a traffic scene file, which may be used in the automatic driving field within the artificial intelligence field. The apparatus includes an acquisition module and a determining module. The acquisition module is configured to acquire traffic environment data around a host vehicle, where the traffic environment data is used to indicate position information of traffic-participating entities around the host vehicle and is obtained based on data collected by entities in the traffic road; the acquisition module is further configured to obtain a first preset condition corresponding to a first traffic scene, where the first traffic scene is configurable. The determining module is configured to determine a first start point and a first end point corresponding to the first traffic scene when it is determined, according to the traffic environment data around the host vehicle, that a traffic-participating entity satisfying the first preset condition exists around the host vehicle. The first start point indicates the start point of a first scene file for showing the first traffic scene in the traffic environment data, and the first end point indicates the end point of that first scene file.
In the fourth aspect of the embodiments of the present application, the apparatus for acquiring a traffic scene file may further perform the steps performed by the electronic device in the first aspect and its various possible implementations. For the meanings of terms, specific implementation steps, and beneficial effects of the fourth aspect and its possible implementations, reference may be made to the descriptions of the various possible implementations of the first aspect; details are not repeated here.
In a fifth aspect, an embodiment of the present application provides an apparatus for acquiring a traffic scene file, which may be used in the automatic driving field within the artificial intelligence field. The apparatus includes an acquisition module and a determining module. The acquisition module is configured to acquire traffic environment data around a host vehicle and control data of the host vehicle, where the traffic environment data around the host vehicle is obtained based on data collected by entities in the traffic road, and the control data of the host vehicle is used to indicate the travel trajectory of the host vehicle; the acquisition module is further configured to obtain a second preset condition corresponding to a second traffic scene, where the second traffic scene is configurable. The determining module is configured to determine a second start point and a second end point corresponding to the second traffic scene when it is determined, according to the control data of the host vehicle, that the travel trajectory of the host vehicle satisfies the second preset condition. The second start point indicates the start point of a second scene file for showing the second traffic scene in the traffic environment data, and the second end point indicates the end point of that second scene file.
In the fifth aspect of the embodiments of the present application, the apparatus for acquiring a traffic scene file may further perform the steps performed by the electronic device in the second aspect and its various possible implementations. For the meanings of terms, specific implementation steps, and beneficial effects of the fifth aspect and its possible implementations, reference may be made to the descriptions of the various possible implementations of the second aspect; details are not repeated here.
In a sixth aspect, an embodiment of the present application provides an apparatus for acquiring a traffic scene file, which may be used in the automatic driving field within the artificial intelligence field. The apparatus includes a receiving module, an acquisition module, and a display module. The receiving module is configured to receive, through a display interface, a configuration operation performed by a user on a first traffic scene, and to obtain a first preset condition corresponding to the first traffic scene. The acquisition module is configured to acquire traffic environment data around a host vehicle within a first time period, where the traffic environment data is used to indicate position information of traffic-participating entities around the host vehicle and is obtained based on data collected by entities in the traffic road. The display module is configured to: when it is determined, according to the traffic environment data around the host vehicle within the first time period, that a traffic-participating entity satisfying the first preset condition exists around the host vehicle, determine a first start time and a first end time corresponding to the first traffic scene from within the first time period, and present the first start time and the first end time to the user. The first start time indicates the start time of the first scene file for showing the first traffic scene in the traffic environment data, and the first end time indicates the end time of that first scene file.
In the sixth aspect of the embodiments of the present application, the apparatus for acquiring a traffic scene file may further perform the steps performed by the electronic device in the third aspect and its various possible implementations. For the meanings of terms, specific implementation steps, and beneficial effects of the sixth aspect and its possible implementations, reference may be made to the descriptions of the various possible implementations of the third aspect; details are not repeated here.
In a seventh aspect, an embodiment of the present application provides a computer program, which when running on a computer, causes the computer to execute the method for acquiring a scene file according to the foregoing aspects.
In an eighth aspect, an embodiment of the present application provides an electronic device, including a processor coupled to a memory. The memory is configured to store a program; the processor is configured to execute the program in the memory, so that the electronic device executes the method for acquiring a scene file according to the foregoing aspects.
In a ninth aspect, an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method for acquiring a scene file according to the foregoing aspects.
In a tenth aspect, an embodiment of the present application provides a circuit system, where the circuit system includes a processing circuit configured to execute the scene file obtaining method according to the foregoing aspects.
In an eleventh aspect, embodiments of the present application provide a chip system, which includes a processor, and is configured to support implementation of the functions referred to in the foregoing aspects, for example, sending or processing data and/or information referred to in the foregoing methods. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the server or the communication device. The chip system may be formed by a chip, or may include a chip and other discrete devices.
Drawings
Fig. 1 is a system architecture diagram of a system for acquiring a traffic scene file according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for acquiring a scene file according to an embodiment of the present application;
fig. 3 is another schematic flow chart of a scene file acquiring method according to an embodiment of the present application;
fig. 4 is two schematic diagrams illustrating first preset conditions in a scene file obtaining method according to an embodiment of the present application;
fig. 5 is a schematic diagram of a first start time and a second start time in the method for acquiring a scene file according to the embodiment of the present application;
fig. 6 is a schematic diagram of a second traffic scene in the scene file acquiring method according to the embodiment of the present application;
fig. 7 is a schematic flowchart of another method for acquiring a scene file according to an embodiment of the present application;
fig. 8 is a schematic flowchart of a further method for acquiring a scene file according to an embodiment of the present application;
fig. 9 is a schematic flowchart of another method for acquiring a scene file according to an embodiment of the present application;
fig. 10 is a schematic diagram of a first traffic scenario provided by an embodiment of the present application;
fig. 11 is a schematic flowchart of another scene file acquiring method according to an embodiment of the present application;
fig. 12 is a schematic view of a configuration interface of a traffic scene in the scene file acquiring method according to the embodiment of the present application;
fig. 13 is another schematic diagram of a configuration interface of a traffic scene in the scene file acquisition method according to the embodiment of the present application;
fig. 14 is a schematic structural diagram of an apparatus for acquiring a scene file according to an embodiment of the present application;
fig. 15 is another schematic structural diagram of an apparatus for acquiring a scene file according to an embodiment of the present application;
fig. 16 is still another schematic structural diagram of an apparatus for acquiring a scene file according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 18 is a schematic structural diagram of an autonomous vehicle according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description, claims, and drawings of the embodiments of the application are used to distinguish between similar objects and are not necessarily intended to describe a particular order or sequence. It should be understood that terms used in this way are interchangeable under appropriate circumstances, and are merely a way of distinguishing objects of the same nature in describing the embodiments of the application. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such a process, method, article, or apparatus.
Embodiments of the present application will be described below with reference to the accompanying drawings. As can be known to those skilled in the art, with the development of technology and the emergence of new scenarios, the technical solution provided in the embodiments of the present application is also applicable to similar technical problems.
The embodiments of the application can be applied to various scenarios that require traffic scene files. For example, when simulation evaluation is performed, using simulation software, on a path planning model configured in an autonomous vehicle, scene files showing traffic scenes need to be input into the simulation software, so that the performance of the path planned by the path planning model for the autonomous vehicle can be evaluated in simulation. As another example, when a vehicle control model configured in an autonomous vehicle is trained, a large number of scene files showing traffic scenes may be used to iteratively train the vehicle control model to obtain a high-performance vehicle control model. That is, scene files showing traffic scenes may be used during model training, simulation evaluation, anomaly detection, compliance supervision, and the like for the various models configured in an autonomous vehicle; the application scenarios of the embodiments of the present application are not exhaustively listed here.
In order to acquire traffic scene files automatically, the embodiments of the present application provide a method for acquiring a traffic scene file, by which an electronic device can automatically acquire a scene file showing a traffic scene from real traffic environment data around a host vehicle. Referring to fig. 1, fig. 1 is a system architecture diagram of a system for acquiring a traffic scene file according to an embodiment of the present application. The system may include a cloud server 10, an autonomous vehicle 20, and a road-side collecting device 30. The autonomous vehicle 20 and the road-side collecting device 30 may be regarded as entities located in the traffic road that can collect real traffic environment data. Both the autonomous vehicle 20 and the road-side collecting device 30 are equipped with sensors capable of collecting real traffic environment data, including but not limited to radar, laser rangefinders, cameras, or other types of sensors. The sensors configured in the autonomous vehicle 20 may further include a positioning system (for example, the Global Positioning System (GPS), the BeiDou (COMPASS) system, or another positioning system) and an inertial measurement unit (IMU); the inertial measurement unit may be embodied as a gyroscope, an accelerometer, an odometer, a compass, and the like.
In one implementation, a scene file for showing a first traffic scene is obtained by the server 10 from the traffic environment data around the host vehicle. Specifically, the autonomous vehicle 20 and/or the road-side collecting device 30 is configured to collect traffic environment data around the host vehicle within a first time period and to transmit that data to the server 10; the traffic environment data around the host vehicle within the first time period indicates the travel trajectories of the traffic-participating entities around the host vehicle. The server 10 obtains a first preset condition corresponding to a first traffic scene, and judges, according to the traffic environment data around the host vehicle, whether a traffic-participating entity satisfying the first preset condition exists around the host vehicle. If so, the server 10 determines a first start time and a first end time corresponding to the first traffic scene, and obtains, according to the first start time and the first end time, a scene file showing the first traffic scene from the traffic environment data within the first time period. It should be noted that, in this implementation, the system may be configured with both the autonomous vehicle 20 and the road-side collecting device 30, with only the autonomous vehicle 20, or with only the road-side collecting device 30.
In another implementation, a scene file showing a first traffic scene is obtained by the autonomous vehicle (also referred to as the host vehicle) 20 from the traffic environment data around the host vehicle. Specifically, the autonomous vehicle 20 acquires the surrounding traffic environment data in real time and obtains a first preset condition corresponding to a first traffic scene, where the first preset condition includes a first trigger condition and a first confirmation condition. When the autonomous vehicle 20 confirms that a traffic-participating entity satisfying the first trigger condition exists around the host vehicle, it may start recording the traffic environment data around the host vehicle, determine the time at which to stop recording according to the time at which the traffic-participating entity satisfies the first confirmation condition, determine the recorded traffic environment data as a first scene file for showing the first traffic scene, and then transmit the first scene file to the server 10.
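The on-vehicle recording flow just described can be sketched as follows (an assumed design for illustration, not the claimed implementation): a short history buffer supplies pre-trigger context, recording starts when the trigger condition fires, and the scene file is closed when the confirmation condition fires.

```python
# Illustrative on-vehicle scene recorder. The buffer length and the
# per-frame trigger/confirm flags are assumptions made for the sketch.
from collections import deque

class SceneRecorder:
    def __init__(self, history_len=50):
        self.history = deque(maxlen=history_len)  # pre-trigger context
        self.recording = None                     # None until triggered

    def on_frame(self, frame, triggered, confirmed):
        """frame: one sample of traffic environment data.
        Returns the finished scene file (list of frames) or None."""
        self.history.append(frame)
        if self.recording is None:
            if triggered:
                # include the buffered history so the scene has a lead-in
                self.recording = list(self.history)
        else:
            self.recording.append(frame)
            if confirmed:
                scene, self.recording = self.recording, None
                return scene  # ready to transmit to the server
        return None
```

In practice the trigger/confirm flags would be computed per frame from the tracked traffic-participating entities, and the finished scene would be uploaded rather than returned.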
It should be understood that although fig. 1 shows one server 10 communicatively connected to only two autonomous vehicles 20 and two road-side collecting devices 30, in practice one server 10 may be communicatively connected to a plurality of autonomous vehicles 20 and/or a plurality of road-side collecting devices 30; the example in fig. 1 is merely provided for ease of understanding the system architecture of the solution.
As can be seen from the above description, the method for acquiring a scene file provided in the embodiments of the present application has two implementations. Because the two implementations have different flows, they are described separately below.
Firstly, the server obtains a scene file showing a traffic scene from the traffic environment data within the first time period
Specifically, referring to fig. 2, fig. 2 is a schematic flow chart of a method for acquiring a scene file according to an embodiment of the present application, where the method for acquiring a scene file according to the embodiment of the present application may include:
201. The server obtains raw traffic environment data around the host vehicle.
In the embodiment of the application, since the entities located in the traffic road (including the host vehicle, the road-side collecting device, or other devices with data collection capability) are equipped with sensors such as radar and cameras, the raw traffic environment data around the host vehicle within the first time period can be collected. If the collecting entity is the host vehicle, the host vehicle may collect the surrounding raw traffic environment data during its operation and send the raw traffic environment data around the host vehicle within the first time period to the server; correspondingly, the server receives that data. If the collecting entity is a road-side collecting device, the road-side collecting device collects the raw traffic environment data, and after the server obtains the raw traffic environment data within the first time period, it may determine a certain vehicle in that data as the host vehicle, so that the raw traffic environment data collected by the road-side collecting device can be regarded as the raw traffic environment data around the determined host vehicle. It should be noted that the following embodiments take the host vehicle as the collecting entity for detailed description; the case where the collecting entity is a road-side collecting device can be understood by analogy.
Specifically, the host vehicle may send the raw traffic environment data around it to the server once every fixed time interval; as an example, the fixed interval may be 5 hours, 8 hours, 12 hours, 24 hours, or the like. The host vehicle may also send the raw traffic environment data to the server at a fixed time point, for example at 2 a.m. each day. The host vehicle may also send the raw traffic environment data around it to the server when a fixed event occurs, for example vehicle shutdown, vehicle startup, or vehicle software upgrade. The examples here are merely provided for ease of understanding; in an actual product, the manner of sending the raw traffic environment data to the server may be determined in combination with the actual application scenario, and is not limited here.
The duration of the first time period may be determined in combination with the interval at which the host vehicle sends the raw traffic environment data to the server; for example, it may be 5 hours, 8 hours, 12 hours, or the like, and the specific value may be determined in combination with the actual application environment, which is not limited here. If the host vehicle collects the surrounding raw traffic environment data through a radar, the raw traffic environment data is expressed in the form of point cloud data; further, the point cloud data over the first time period may include a plurality of point cloud data subsets, each subset presenting the raw traffic environment data around the host vehicle at one moment within the first time period. If the host vehicle collects the surrounding raw traffic environment data through a camera, the raw traffic environment data is represented as a video or as the plurality of images forming a video.
Optionally, since the host vehicle may also be configured with a positioning system, the host vehicle may also send the position information of the host vehicle within the first time period to the server; correspondingly, the server receives that information. The position information may be absolute position information of the host vehicle, such as longitude and latitude obtained through GPS measurement, or relative position information of the host vehicle, and may be determined flexibly according to the needs of the actual scenario. Further, the position information within the first time period may include a plurality of pieces of position information indicating the change in the position of the host vehicle within the first time period.
Optionally, the server may also obtain map data that matches the raw traffic environment data around the host vehicle. The map data may be a high-definition map (HD map), a navigation map, or another type of map data, which improves the server's understanding of the traffic environment around the host vehicle.
Specifically, in one implementation, after obtaining the absolute position information of the host vehicle, the server may obtain map data matching the environment data around the host vehicle directly according to that absolute position information. In another implementation, when sending the surrounding raw traffic environment data to the server, the host vehicle may also send the map data that matches that data.
202. The server acquires control data of the host vehicle.
In some embodiments of the present application, the host vehicle may also send control data of the host vehicle within the first time period to the server; the control data of the host vehicle indicates the travel trajectory of the host vehicle. The control data of the host vehicle is generated by the control system of the host vehicle and may include information such as the travel direction, steering angle, speed, and acceleration of the host vehicle, which together indicate the travel trajectory of the host vehicle.
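Since the control data indicates the travel trajectory, one illustrative way to reconstruct that trajectory is dead-reckoning integration of per-sample heading and speed; the sample layout and the 0.1 s sample period below are assumptions, not part of the application:

```python
# Hypothetical dead-reckoning sketch: integrate heading and speed
# samples from the control data into (x, y) trajectory points.
import math

def integrate_trajectory(samples, dt=0.1):
    """samples: list of (heading_rad, speed_mps). Returns (x, y) points,
    starting from the origin, one point per control sample."""
    x = y = 0.0
    points = [(x, y)]
    for heading, speed in samples:
        x += speed * math.cos(heading) * dt  # advance along the heading
        y += speed * math.sin(heading) * dt
        points.append((round(x, 6), round(y, 6)))
    return points
```

A real system would fuse this with the positioning system's measurements rather than rely on integration alone, since dead-reckoning error accumulates over time.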
It should be noted that the present application does not limit the execution order of steps 201 and 202. Steps 201 and 202 may be executed simultaneously, that is, the host vehicle sends the traffic environment data around it and its control data within the first time period to the server at the same time; alternatively, step 201 may be executed first and then step 202, or step 202 may be executed first and then step 201.
203. The server inputs the raw traffic environment data around the host vehicle into a target tracking model to obtain an inference result output by the target tracking model, where the inference result indicates the position information of the traffic-participating entities around the host vehicle.
In the embodiment of the application, after the server acquires the raw traffic environment data around the host vehicle within the first time period, it inputs that data into a target tracking model, so that the traffic-participating entities around the host vehicle are tracked by the target tracking model, and an inference result (i.e., the processed traffic environment data around the host vehicle) output by the target tracking model is obtained.
The target tracking model may be embodied as, but is not limited to, an IOU tracking model, a Kalman-filter tracking model, a neural network, or another model for tracking targets. A traffic-participating entity around the host vehicle is an entity around the host vehicle that has a direct or indirect relationship with traffic, including but not limited to all entities participating in traffic, such as autonomous vehicles, pedestrians, cyclists, obstacles, and traffic directors around the host vehicle.
The inference result indicates the position information of the traffic-participating entities around the host vehicle and can therefore reflect their running tracks; a running track may indicate either that an entity's position changed or that it did not. The original traffic environment data around the host vehicle in the first time period comprise the original traffic environment data at N moments within that period, with one or more traffic-participating entities around the host vehicle at each of the N moments; the inference result is the identification information, prediction type, and coordinate information (a concrete form of the position information) of each traffic-participating entity at each of the N moments, where N is an integer greater than 1. Further, the coordinate information included in the inference result may be expressed in a host-vehicle coordinate system, a world coordinate system, or an image coordinate system; because the host vehicle is configured with a positioning system, these coordinate systems can be freely converted into one another. The running track of a given traffic-participating entity in the first time period can therefore be obtained from its coordinate information at the N moments.
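The per-moment structure of the inference result described above, and the recovery of a running track from it, can be sketched as follows (the field names `t`, `id`, `type`, and `xy` are hypothetical; the scheme only requires identification information, a prediction type, and coordinate information per entity per moment):

```python
from collections import defaultdict

# One inference record per traffic-participating entity per moment
# (hypothetical field names: the scheme only fixes identification
# information, prediction type, and coordinate information).
inference_result = [
    {"t": 0, "id": "A", "type": "vehicle", "xy": (0.0, 1.8)},
    {"t": 0, "id": "B", "type": "pedestrian", "xy": (5.0, -3.0)},
    {"t": 1, "id": "A", "type": "vehicle", "xy": (2.0, 2.1)},
    {"t": 2, "id": "A", "type": "vehicle", "xy": (4.0, 2.6)},
]

def trajectories(records):
    """Group the per-moment coordinates by entity id, recovering
    each entity's running track over the N moments."""
    tracks = defaultdict(list)
    for r in sorted(records, key=lambda r: r["t"]):
        tracks[r["id"]].append(r["xy"])
    return dict(tracks)

tracks = trajectories(inference_result)
```

Sorting by moment before grouping guarantees each recovered track is in time order regardless of how the records arrive.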
204. The server acquires a first preset condition corresponding to the first traffic scene.
In some embodiments of the present application, the server may obtain at least one first preset condition, each in one-to-one correspondence with one of one or more first traffic scenes. A first traffic scene is a traffic scene in which the host vehicle interacts, directly or indirectly, with the traffic-participating entities around it. As an example, the first traffic scene may be: a preceding vehicle cutting out of the host lane; a bypass vehicle cutting into the host lane; driving according to the guidance of a police officer or traffic director; following a vehicle in the lane ahead; a low-speed or stationary obstacle appearing ahead during driving; an obstacle appearing suddenly ahead; a vehicle driving illegally or dangerously on the road; yielding to a vehicle in the target lane and merging into the traffic flow; an obstacle in an adjacent lane; an obstacle occupying part of a lane on the road; borrowing a lane to pass an obstacle ahead; or a faulty or emergency vehicle with a warning indication on the road. Any scene in which the traffic-participating entities around the host vehicle interact with the host vehicle qualifies as a first traffic scene; the specific selection is not exhaustively enumerated here.
Specifically, the server may perform one or more jobs in parallel, where each job executes the acquisition task for the scene file of one traffic scene; that is, each job determines whether a scene file showing a certain traffic scene exists in the traffic environment data around the host vehicle in the first time period. The server can thus judge, through one or more parallel jobs, whether scene files showing one or more traffic scenes exist in the first time period. As an example, if the server acquires three first traffic scenes, namely driving according to the guidance of a traffic director, a preceding vehicle cutting out of the host lane, and a bypass vehicle cutting into the host lane, the server may acquire the three preset conditions corresponding to these three scenes respectively; it then determines through a first job whether the traffic environment data around the host vehicle in the first time period contain a scene file showing driving according to the guidance of a traffic director, through a second job whether they contain a scene file showing a preceding vehicle cutting out of the host lane, and through a third job whether they contain a scene file showing a bypass vehicle cutting into the host lane. It should be understood that this example is only for ease of understanding and is not intended to limit the present disclosure.
For a more intuitive understanding of the present disclosure, please refer to fig. 3, which is a schematic flowchart of a method for acquiring a scene file according to an embodiment of the present disclosure. Fig. 3 takes as an example a server that acquires, through three parallel jobs, scene files showing traffic scene one, traffic scene two, and traffic scene three from the traffic environment data around the host vehicle. The server inputs the traffic environment data around the host vehicle in the first time period into the target tracking model to obtain the inference result output by the model, and executes three tasks through the three jobs. Through the first job, the preset condition corresponding to traffic scene one (that is, its trigger condition and confirmation condition) is acquired, and it is judged from the inference result whether any traffic-participating entity around the host vehicle satisfies that preset condition, so as to obtain the positioning information (that is, the start time and the end time) corresponding to traffic scene one, and then acquire, according to that positioning information, a scene file showing traffic scene one from the traffic environment data around the host vehicle. Through the second job, the preset condition corresponding to traffic scene two is acquired, it is judged from the inference result whether any traffic-participating entity around the host vehicle satisfies that condition, the positioning information corresponding to traffic scene two is obtained, and a scene file showing traffic scene two is acquired from the traffic environment data around the host vehicle according to that positioning information. The processing of traffic scene three is similar to that of traffic scenes one and two and is not repeated here. It should be understood that the example in fig. 3 is only intended to illustrate that the server can collect scene files showing different traffic scenes in parallel, and is not used to limit the scheme.
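The parallel-job arrangement of fig. 3 can be sketched with a thread pool, one job per configured traffic scene; the detector functions here are hypothetical stubs standing in for the trigger/confirmation logic described in the steps that follow:

```python
from concurrent.futures import ThreadPoolExecutor

# Stub detectors, one per configured first traffic scene; each would
# apply its own trigger and confirmation conditions to the inference
# result and return (start time, end time) pairs. Names and return
# values are hypothetical.
def detect_front_cut_out(result):
    return [(10.0, 25.0)]

def detect_bypass_cut_in(result):
    return []

def detect_traffic_director(result):
    return [(40.0, 55.0)]

detectors = {
    "front vehicle cuts out of host lane": detect_front_cut_out,
    "bypass vehicle cuts into host lane": detect_bypass_cut_in,
    "drive per traffic director guidance": detect_traffic_director,
}

def run_jobs(inference_result):
    """One parallel job per traffic scene, all reading the same
    inference result, mirroring the fig. 3 arrangement."""
    with ThreadPoolExecutor(max_workers=len(detectors)) as pool:
        futures = {name: pool.submit(fn, inference_result)
                   for name, fn in detectors.items()}
        return {name: f.result() for name, f in futures.items()}

spans = run_jobs(None)  # stubs ignore their input
```

Because every job only reads the shared inference result, the jobs need no synchronization beyond collecting their results.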
The first traffic scene is configurable. Further, the server may be configured with a presentation interface through which a technician configures the one or more first traffic scenes. Alternatively, the technician may send a configuration instruction to the server, the configuration instruction including configuration information of the one or more first traffic scenes.
The first preset condition includes a first trigger condition and a first confirmation condition. Further, in some cases the collected traffic scene is generated by interaction between the host vehicle and traffic-participating entities moving around it, and the first preset condition is used to judge whether the running tracks of those entities satisfy the first trigger condition and the first confirmation condition. For example, where the first traffic scene is a preceding vehicle cutting out of the host lane (cut-out), the first trigger condition is that a first bypass vehicle in front of the host vehicle, closest to the host vehicle in longitudinal distance, crosses a first trigger line, and the first confirmation condition is that the first bypass vehicle crosses a first confirmation line, the lateral distance of the first confirmation line from the host vehicle being greater than that of the first trigger line. For a more intuitive understanding, please refer to fig. 4, which shows two schematic diagrams of the first preset condition in the method for acquiring a scene file according to the embodiment of the present application. Fig. 4 comprises a left sub-diagram and a right sub-diagram. In the left sub-diagram, E1 and E2 represent first trigger lines, i.e., the trigger lines of the traffic scene in which a preceding vehicle cuts out of the host lane; E3 and E4 represent first confirmation lines, i.e., the corresponding confirmation lines. Objects A and B in the left sub-diagram both represent traffic-participating entities around the host vehicle, and the running track of object A shows the scene of cutting out of the host lane.
As another example, where the first traffic scene is a bypass vehicle cutting into the host lane (cut-in), the first trigger condition is that the bypass vehicle is located within a first preset range ahead of the host vehicle and its lateral distance from the host vehicle changes from greater than a first preset distance to less than or equal to it; the first confirmation condition is that the lateral distance changes from greater than a second preset distance to less than or equal to it, the second preset distance being less than the first. For a more intuitive understanding, please refer to the right sub-diagram of fig. 4, where F1 and F2 represent the first preset distance, i.e., the trigger distance of the cut-in scene, and F3 and F4 represent the second preset distance, i.e., the confirmation distance. Object A in the right sub-diagram represents a traffic-participating entity around the host vehicle, and its running track shows the scene of cutting into the host lane.
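A minimal sketch of the cut-in trigger and confirmation test on a single bypass vehicle's lateral-distance sequence; the threshold values and function name are illustrative, since the scheme leaves the first and second preset distances configurable:

```python
def detect_cut_in(lateral_gaps, d_trigger=1.5, d_confirm=0.5):
    """Scan a bypass vehicle's lateral distance to the host vehicle
    over successive moments.
    Trigger: the gap crosses from > d_trigger to <= d_trigger.
    Confirm: the gap later crosses from > d_confirm to <= d_confirm.
    Returns (trigger_index, confirm_index), or None if the entity
    never completes the cut-in (thresholds are illustrative)."""
    trigger = None
    for i in range(1, len(lateral_gaps)):
        prev, cur = lateral_gaps[i - 1], lateral_gaps[i]
        if trigger is None and prev > d_trigger >= cur:
            trigger = i
        elif trigger is not None and prev > d_confirm >= cur:
            return trigger, i
    return None

# A bypass vehicle drifting steadily toward the host lane:
hit = detect_cut_in([2.0, 1.8, 1.4, 1.0, 0.6, 0.4])
```

Requiring a crossing (previous value above the threshold, current value at or below it) rather than a plain comparison matches the "changes from greater than ... to less than or equal to" wording above.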
In other traffic scenes, the collected traffic scene is generated by interaction between the host vehicle and static traffic-participating entities around it. For example, if the first traffic scene is the host vehicle driving past a construction site, the first trigger condition may be that the construction site lies within a first range ahead of the host vehicle, and the first confirmation condition may be that the construction site lies outside a second range behind the host vehicle, and so on; these are not exhaustively enumerated here.
Optionally, the server may further obtain a first screening condition corresponding to the first traffic scene, the first screening condition being used to perform a secondary screening of the scene files that satisfy the first preset condition, so as to avoid erroneous acquisition of scene files. The specifics of the first screening condition depend on the scene type of the first traffic scene.
If the first traffic scene is a preceding vehicle cutting out of the host lane, the first screening condition may include any one or a combination of the following: the lateral speed of the preceding vehicle is within a first speed range, the absolute longitudinal displacement of the preceding vehicle is greater than or equal to a third preset distance, the steering angle of the host vehicle is less than or equal to a first preset angle, and the speed of the host vehicle is within a second speed range.
Further, the conditions that the absolute longitudinal displacement of the preceding vehicle be greater than or equal to the third preset distance, or that the speed of the host vehicle be within the second speed range, serve to exclude a third traffic scene, in which the host vehicle is parked while the preceding vehicle drives away in front of it. The third preset distance and the second speed range may be set flexibly for the actual scene; for example, the third preset distance may be 5 meters, 6 meters, or another value, and the second speed range may be greater than or equal to 5 meters per second, greater than or equal to 6 meters per second, and so on, which is not limited here.
The conditions that the lateral speed of the preceding vehicle be within the first speed range, or that the steering angle of the host vehicle be less than or equal to the first preset angle, serve to exclude a fourth traffic scene, in which the host vehicle is turning. The values of the first speed range and the first preset angle may be set flexibly for the actual scene and are not limited here.
If the first traffic scene is a bypass vehicle cutting into the host lane, the first screening condition may include any one or a combination of the following: the lateral speed of the first traffic-participating entity satisfying the first preset condition is within a third speed range, the absolute longitudinal displacement of that entity is positive, and that entity is the vehicle closest to the host vehicle in front. The value of the third speed range may be set flexibly for the actual situation and is not limited here.
Optionally, the first preset condition may further include a second screening condition, which is used to pre-filter the traffic-participating entities around the host vehicle so as to exclude objects whose type does not fit the first traffic scene. As an example, if the first traffic scene is a bypass vehicle cutting into the host lane, the second screening condition is whether the traffic-participating entity is a motor vehicle. As another example, if the first traffic scene is a pedestrian crossing the road, the second screening condition is whether the traffic-participating entity is a person; the specific cases of the second screening condition are not exhaustively enumerated here.
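The second screening condition amounts to a simple type filter applied before any trajectory analysis; a sketch (the entity records and the `type` field are hypothetical):

```python
def prefilter(entities, allowed_types):
    """Second screening condition: drop entities whose prediction
    type cannot occur in the first traffic scene, e.g. only motor
    vehicles can cut into the host lane."""
    return [e for e in entities if e["type"] in allowed_types]

entities = [
    {"id": "A", "type": "vehicle"},
    {"id": "B", "type": "pedestrian"},
    {"id": "C", "type": "vehicle"},
]

# For a cut-in scene, only motor vehicles survive the pre-filter.
kept = prefilter(entities, {"vehicle"})
```

Filtering by prediction type first shrinks the set of candidates before the costlier trigger/confirmation checks run.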
205. The server judges, according to the inference result, whether a first traffic-participating entity satisfying the first preset condition exists around the host vehicle; if so, step 206 is entered, and if not, it is confirmed that no scene file showing the first traffic scene exists in the traffic environment data around the host vehicle.
In this embodiment of the application, for any one of the at least one first traffic scene, after obtaining the corresponding first preset condition and the inference result, the server may judge according to the inference result whether a first traffic-participating entity satisfying the first preset condition exists around the host vehicle; if so, step 206 is entered, and if not, it is confirmed that no scene file showing the first traffic scene exists in the traffic environment data around the host vehicle.
In some cases, the collected traffic scene is generated by interaction between the host vehicle and traffic-participating entities moving around it, and step 205 judges whether a first traffic-participating entity whose running track satisfies the first preset condition exists around the host vehicle. Specifically, after obtaining the inference result, the server may obtain the coordinate information of each traffic-participating entity around the host vehicle at each of the N moments. For any one of those entities, the server may judge from its coordinate information at the N moments whether its running track satisfies the first trigger condition; if so, the server judges whether the running track satisfies the first confirmation condition within a first preset duration, and if it does, confirms that the entity is a first traffic-participating entity. If the running track does not satisfy the first trigger condition, or satisfies the first trigger condition but not the first confirmation condition, the server proceeds to the next entity. The first preset duration may be fixed or configurable; for the configuration method, refer to the configuration of the first traffic scene in step 204, which is not repeated here. It should be noted that within the first time period, the movement track of the same traffic-participating entity may satisfy the first preset condition more than once.
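The per-entity judgment described above — trigger first, then confirmation within the first preset duration — can be sketched generically, with the two conditions passed in as predicates (the predicates, sample data, and gap bound are illustrative):

```python
def satisfies_preset(track, trigger, confirm, max_gap):
    """Walk one entity's track, moment by moment: once the trigger
    condition fires, the confirmation condition must fire within
    max_gap moments (standing in for the first preset duration),
    otherwise the candidate is rejected. A simplified sketch of the
    judgment in step 205."""
    t_idx = None
    for i, sample in enumerate(track):
        if t_idx is None:
            if trigger(sample):
                t_idx = i
        elif confirm(sample):
            return i - t_idx <= max_gap
    return False

# Lateral offsets of one entity: trigger at <= 1.5, confirm at <= 0.5.
ok = satisfies_preset([2.0, 1.4, 0.9, 0.4],
                      trigger=lambda x: x <= 1.5,
                      confirm=lambda x: x <= 0.5,
                      max_gap=5)
```

A real implementation would rescan the remaining track after a match, since the same entity may satisfy the first preset condition more than once within the first time period.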
The server can perform the above operations on every traffic-participating entity around the host vehicle, and thereby determine whether a first traffic-participating entity whose running track satisfies the first trigger condition exists around the host vehicle; if so, it can determine how many such entities there are and what the identification information of each one is.
In other cases, the collected traffic scene is generated by interaction between the host vehicle and static traffic-participating entities around it, and step 205 judges whether a first traffic-participating entity whose position information satisfies the first preset condition exists around the host vehicle. The implementation is similar to the moving case, except that the running track of the entity is replaced by the relative position between the entity and the host vehicle. Whether the coordinate information included in the inference result is expressed in the host-vehicle coordinate system, the world coordinate system, or the image coordinate system, the relative position between a traffic-participating entity and the host vehicle can be inferred from the inference result.
In this embodiment of the application, the traffic environment data around the host vehicle are input into the target tracking model, the position information of the traffic-participating entities at each moment is obtained through the model so as to derive their running tracks, and it is then judged whether an entity whose running track satisfies the first preset condition exists around the host vehicle. That is, the target tracking model generates the running tracks of the traffic-participating entities around the host vehicle, improving the accuracy of the track-generation process.
Alternatively, if the server also acquired, in step 201, map data matching the traffic environment data around the host vehicle, then in step 205 the server may align the traffic environment data with the map data and judge, according to both, whether a first traffic-participating entity satisfying the first preset condition exists around the host vehicle; if so, step 206 is entered, and if not, it is confirmed that no scene file showing the first traffic scene exists in the traffic environment data around the host vehicle. Owing to factors such as weather and obstacles around the host vehicle, the traffic environment data collected by the host vehicle may not be accurate enough; the matching map data compensate for them, thereby improving the accuracy of the scene file acquisition process.
Optionally, if the server also obtained a second screening condition corresponding to the first traffic scene, then before judging from the inference result whether a first traffic-participating entity whose running track satisfies the first trigger condition exists, the server may first perform a preliminary screening of the traffic-participating entities around the host vehicle according to each entity's prediction type and the second screening condition, filtering out entities whose prediction type does not fit the first traffic scene and obtaining the screened set of entities. It then determines whether a first traffic-participating entity whose running track satisfies the first trigger condition exists among the screened entities; for the actual judging step, refer to the description above, which is not repeated here.
206. The server judges whether the running track of the host vehicle and/or the running track of the first traffic-participating entity satisfy the first screening condition; if so, step 207 is entered; if not, step 205 is re-entered.
In some embodiments of the present application, if the server also obtained a first screening condition corresponding to the first traffic scene, then after determining that the running track of a first traffic-participating entity around the host vehicle satisfies the first preset condition, the server may further judge whether the running track of the host vehicle and/or that of the first traffic-participating entity satisfy the first screening condition. If so, it is determined that no misjudgment occurred in step 205, and step 207 is entered to acquire the scene file; if not, step 205 is re-entered, that is, the server continues to judge from the remaining traffic environment data whether an entity satisfying the first preset condition exists around the host vehicle. For the meaning of the first screening condition, refer to the description in step 204, which is not repeated here.
In this embodiment of the application, different traffic scenes may produce similar running tracks of the surrounding traffic-participating entities relative to the host vehicle, so judging with the first preset condition alone risks misjudgment. Therefore, after an entity satisfying the first preset condition is found around the host vehicle, a secondary check is performed with the first screening condition, reducing the probability of erroneously acquiring a scene file.
It should be noted that step 206 is optional; if step 206 is not executed, step 207 may be executed directly after step 205.
207. The server determines a first start time according to the time at which the first traffic-participating entity satisfies the first trigger condition.
In some embodiments of the present application, the server may determine the first start time according to the time at which the first traffic-participating entity satisfies the first trigger condition, either after determining that at least one first traffic-participating entity satisfying the first preset condition exists around the host vehicle, or after additionally determining that the running track of the host vehicle and/or that of the first traffic-participating entity satisfy the first screening condition. Specifically, the server may take the time at which the first traffic-participating entity satisfies the first trigger condition directly as the first start time, or take the time a second duration before that moment as the first start time. The second duration may be 10 seconds, 20 seconds, 30 seconds, and so on, which is not limited here. As an example, if the first traffic-participating entity satisfies the first trigger condition at 10 minutes 20 seconds, the first start time may be 10 minutes 0 seconds; this example is given only for ease of understanding and is not used to limit the scheme.
208. The server determines a first termination time according to the time at which the first traffic-participating entity satisfies the first confirmation condition.
In some embodiments of the application, after determining that at least one first traffic-participating entity satisfying the first preset condition exists around the host vehicle, the server may determine the first termination time according to the time at which that entity satisfies the first confirmation condition. Specifically, the server may take that time directly as the first termination time, or take the time a third duration after it as the first termination time. The third duration may be 10 seconds, 15 seconds, 20 seconds, 30 seconds, and so on, which is not limited here. As an example, if the first traffic-participating entity satisfies the first confirmation condition at 15 minutes 30 seconds, the first termination time may be 15 minutes 40 seconds; this example is given only for ease of understanding and is not used to limit the scheme.
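The determination of the first start time and first termination time from the trigger and confirmation moments can be sketched as follows, using the 20-second and 10-second offsets from the examples above (times in seconds; both offsets are configurable in the scheme, and the function name is illustrative):

```python
def locate_scene(trigger_time, confirm_time, pre_roll=20.0, post_roll=10.0):
    """First start time = trigger time minus a second duration
    (pre_roll); first termination time = confirmation time plus a
    third duration (post_roll). Clamped so the start never falls
    before the beginning of the first time period."""
    start = max(0.0, trigger_time - pre_roll)
    end = confirm_time + post_roll
    return start, end

# Trigger at 10 min 20 s (620 s), confirmation at 15 min 30 s (930 s):
start, end = locate_scene(620.0, 930.0)
```

With the defaults above this reproduces the worked example: a start of 10 minutes 0 seconds and a termination of 15 minutes 40 seconds.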
This embodiment of the application thus provides a specific way of generating the first start time and the first termination time, improving the operability of the scheme.
For a more intuitive understanding of the present solution, please refer to fig. 5, which is a schematic diagram of the first start time and the first termination time in the method for acquiring a scene file according to the embodiment of the present application. Fig. 5 takes as an example the surrounding traffic environment data collected on January 22, 2019 by a host vehicle with license plate number shangXXXXX, and judges whether the traffic environment data in the first time period contain scene files of four first traffic scenes: the first line indicates the positions of traffic scene one in the traffic environment data, the second line those of traffic scene two, the third line those of traffic scene three, and the fourth line those of traffic scene four. It should be understood that the example in fig. 5 is merely for ease of understanding and is not used to limit the scheme.
It should be noted that the server may execute steps 207 and 208 after executing step 206, or may execute steps 207 and 208 once each time it determines that the running track of a traffic-participating entity around the host vehicle satisfies the first preset condition; that is, step 206 and steps 207 and 208 may be executed in an interleaved manner. In addition, this embodiment of the present application does not limit the execution order of steps 207 and 208: step 207 may be executed first and step 208 second, or step 208 first and step 207 second.
209. The server acquires a first scene file from the traffic environment data in the first time period according to the first start time and the first termination time.
In some embodiments of the present application, for any one of the one or more first traffic scenes, after obtaining at least one pair of a first start time and a first termination time corresponding to that scene, the server may acquire a first scene file from the traffic environment data within the first time period.
Specifically, in one implementation the server acquires the first scene file from the original traffic environment data of the first time period. More specifically, if the traffic environment data take the form of video, the server may cut the traffic environment data of the first time period according to the at least one pair of first start time and first termination time, obtaining one or more scene files showing the first traffic scene.
If the traffic environment data instead take the form of point cloud data, then, because each point cloud subset corresponds to one moment, for each pair of first start time and first termination time the server may obtain, from the point cloud data set, the point cloud subsets corresponding to the moments between that start time and termination time, yielding the scene file corresponding to that pair; performing this operation for every pair of first start time and first termination time yields one or more scene files showing the first traffic scene.
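Slicing a timestamped point cloud set between a first start time and a first termination time can be sketched as follows (the `(time, points)` tuple layout and placeholder payloads are hypothetical):

```python
def slice_scene(point_cloud_set, start, end):
    """Each point cloud subset corresponds to one moment; a scene
    file is the subsets whose moments fall within [start, end]."""
    return [pts for t, pts in point_cloud_set if start <= t <= end]

# Placeholder payloads stand in for per-moment point cloud subsets.
clouds = [(0.0, "pc0"), (1.0, "pc1"), (2.0, "pc2"), (3.0, "pc3")]
scene_file = slice_scene(clouds, 1.0, 2.5)
```

The same interval filter applies per pair of start and termination times, so repeated occurrences of a scene within the first time period each yield their own scene file.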
In another implementation, the server acquires the first scene file from the inference result of the first time period. Since the inference result comprises the identification information, prediction type, and coordinate information of each traffic-participating entity around the host vehicle at each of the N moments (i.e., the N moments included in the first time period), the server obtains, from the inference results corresponding to the N moments, those corresponding to the moments between the first start time and the first termination time, and treats them as one first scene file.
In another implementation, the server may obtain the inference results corresponding to the plurality of times between the first start time and the first end time from the inference results corresponding to the N times, obtain the original traffic environment data corresponding to those times from the original traffic environment data of the first time period, and determine the combination of the two as the first scene file.
Optionally, after obtaining each first scene file, the server may further add a label to each first scene file, that is, mark each obtained first scene file with the corresponding first traffic scene, so as to facilitate the subsequent formation of a scene data set.
210. The server acquires a second preset condition corresponding to the second traffic scene.
In this embodiment of the application, the server may further obtain a second preset condition corresponding to the second traffic scene. The second traffic scene may also be configurable, and the specific configuration manner may refer to the configuration manner of the first traffic scene in step 204, which is not described herein again. The second traffic scene differs from the first traffic scene in that the second traffic scene is a traffic scene judged from the driving behavior of the host vehicle itself. By way of example, the second traffic scene may be: a host-vehicle U-turn, driving through a curve, driving on a slope, entering and exiting a main road or side road, driving through a roundabout, driving over an overpass, driving through a tunnel, passing a toll station, entering and exiting a ramp, passing a railroad crossing, driving in a high-rise building area, passing a deceleration strip, passing a pedestrian crossing, entering and exiting a parking lot, automatic parking, lane changing, curbside parking and starting, temporary parking and starting, driving under a speed limit, identifying and correctly using a restricted lane, identifying and passing through an area without lane markings, identifying and responding to lane guidance instructions, correctly using lights and the horn, detecting temporary traffic facilities, responding to temporary traffic control facilities, detecting and responding to road potholes, detecting temporary traffic signals, detecting obstacles around a destination, curbside parking, detecting and responding to foreign objects on the road, and the like; the second traffic scenes are not exhaustively listed here.
The second preset condition may include a second trigger condition and a second confirmation condition, and their specific forms can be set flexibly in combination with the specific second traffic scene. As an example, if the second traffic scene is that the host vehicle makes a U-turn, the second trigger condition may be that the left-turn steering angle of the host vehicle is greater than a first angle and the speed of the host vehicle is greater than a first speed, which shows that the host vehicle has started turning while in a traveling state; the second confirmation condition may be that the left-turn steering angle of the host vehicle is less than a second angle, which shows that the turning process of the host vehicle has been completed. In this embodiment of the application, a plurality of specific traffic scenes are disclosed, and the specific forms of the first preset condition or the second preset condition in different traffic scenes are disclosed, which enhances the implementation flexibility of the scheme.
Optionally, the server may further obtain a third filtering condition corresponding to the second traffic scene, where the third filtering condition is used to perform secondary filtering on the scene files that meet the second preset condition, so as to avoid erroneously obtaining scene files. As an example, if the second traffic scene is that the host vehicle makes a U-turn, the third filtering condition may be that the duration of the whole turn is longer than a fourth duration and that the difference between the initial direction angle and the ending direction angle of the host vehicle is between 160 degrees and 220 degrees, and the like.
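The U-turn trigger, confirmation, and third filtering conditions above can be sketched as follows; all threshold values (the first angle, first speed, second angle, fourth duration) are illustrative assumptions, and only the 160-220 degree band comes from the text:

```python
# Illustrative thresholds (not from the patent).
FIRST_ANGLE = 20.0       # degrees: left-turn steering angle that triggers
FIRST_SPEED = 5.0        # km/h: minimum speed while turning
SECOND_ANGLE = 5.0       # degrees: steering angle that confirms the turn ended
MIN_TURN_SECONDS = 8.0   # "fourth duration": minimum length of the whole turn

def trigger(steer_left_deg: float, speed_kmh: float) -> bool:
    # Second trigger condition: turning left while in a traveling state.
    return steer_left_deg > FIRST_ANGLE and speed_kmh > FIRST_SPEED

def confirm(steer_left_deg: float) -> bool:
    # Second confirmation condition: steering has returned toward center.
    return steer_left_deg < SECOND_ANGLE

def third_filter(turn_seconds: float, start_heading: float,
                 end_heading: float) -> bool:
    # Secondary filtering: the turn took long enough and the heading
    # changed by roughly 180 degrees (between 160 and 220).
    diff = abs(end_heading - start_heading) % 360.0
    return turn_seconds > MIN_TURN_SECONDS and 160.0 <= diff <= 220.0
```

Under these assumptions, a fast left-turn trips `trigger`, the steering wheel returning to center trips `confirm`, and `third_filter` discards recordings that were too short or did not actually reverse the heading.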
For a more intuitive understanding of the present disclosure, please refer to fig. 6, and fig. 6 is a schematic diagram of a second traffic scene in the method for acquiring a scene file according to the embodiment of the present disclosure. In fig. 6, a second traffic scenario is taken as an example of a main car performing U-turn, where fig. 6 shows an initial direction angle of the main car, an end direction angle of the main car, and a running direction of a traveling process of the main car, and it should be understood that the example in fig. 6 is only for convenience of understanding the scheme, and is not used to limit the scheme.
211. The server judges whether the running track of the main vehicle meets a second preset condition or not according to the control data of the main vehicle, and if so, the step 212 is carried out; and if not, confirming that the scene file for displaying the second traffic scene does not exist in the traffic environment data around the host vehicle.
In this embodiment of the application, after obtaining a second preset condition corresponding to one of the at least one second traffic scene, the server may determine whether the running trajectory of the host vehicle meets the second preset condition according to control data of the host vehicle in the first time period, and if so, enter step 212; and if not, confirming that the scene file for displaying the second traffic scene does not exist in the traffic environment data around the host vehicle.
Optionally, if the server further obtains the position information of the host vehicle in the first time period, the server may generate a first travel track of the host vehicle in the first time period according to that position information, and determine, according to the first travel track, whether the travel track of the host vehicle meets the requirement of the second traffic scene, so as to improve the accuracy of the determination process for the second traffic scene.
Optionally, if the server further obtains a third filtering condition corresponding to the second traffic scene, then after determining that the travel track of the host vehicle satisfies the second preset condition, the server further determines whether the travel track satisfies the third filtering condition, and step 212 is entered only if the travel track of the host vehicle satisfies both the second preset condition and the third filtering condition; if the travel track of the host vehicle does not satisfy the third filtering condition, the server continues to judge, according to the remaining control data, whether the travel track of the host vehicle satisfies the second preset condition.
212. The server determines a second start time and a second end time corresponding to the second traffic scenario.
In some embodiments of the application, the server determines a second starting time according to the time when the running track of the host vehicle meets a second trigger condition; and determining a second termination time according to the time when the running track of the host vehicle meets a second confirmation condition.
Specifically, the server may directly determine the time when the travel trajectory of the host vehicle satisfies the second trigger condition as the second start time, or may determine a time a fifth time period before the time when the travel trajectory of the host vehicle satisfies the second trigger condition as the second start time. The value of the fifth time period may be 10 seconds, 20 seconds, 30 seconds, and the like, which is not limited herein.
The server may directly determine a time when the travel trajectory of the host vehicle satisfies the second confirmation condition as the second termination time, or may determine a time that is a sixth duration after the time when the travel trajectory of the host vehicle satisfies the second confirmation condition as the second termination time. The value of the sixth time period may be 10 seconds, 15 seconds, 20 seconds, 30 seconds, and the like, which is not limited herein.
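Combining the two paragraphs above, the second start and end times can be derived from the trigger and confirmation times with optional padding; the 10-second and 15-second defaults are merely example values of the fifth and sixth durations:

```python
def second_time_window(trigger_time: float, confirm_time: float,
                       pad_before: float = 10.0, pad_after: float = 15.0):
    """Second start time: the trigger time, optionally moved earlier by the
    'fifth' duration; second end time: the confirmation time, optionally
    moved later by the 'sixth' duration. Times are in seconds."""
    return trigger_time - pad_before, confirm_time + pad_after
```

Setting both pads to zero recovers the case where the trigger and confirmation times are used directly.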
It should be noted that, since the trajectory of the host vehicle may satisfy the second preset condition in more than one segment of the first time period, steps 211 and 212 may be executed in an interleaved manner, or step 211 may be executed multiple times and then a plurality of sets of second start time and second end time generated at one time through step 212.
213. The server acquires a second scene file from the traffic environment data in the first time period according to the second start time and the second end time.
In this embodiment of the application, a specific implementation manner of step 213 is similar to that of step 209, and the difference is that the first start time in step 209 is replaced by the second start time in step 213, and the first end time in step 209 is replaced by the second end time in step 213, which is not described herein again.
Optionally, corresponding to step 209, after obtaining each second scene file, the server may add a label to each second scene file, that is, mark each obtained second scene file with the corresponding second traffic scene, so as to facilitate the subsequent formation of a scene data set.
It should be noted that steps 204 to 209 and steps 210 to 213 are optional steps; if steps 210 to 213 are executed, the embodiment of the present application does not limit the execution order of steps 204 to 209 and steps 210 to 213, and they may be executed in parallel or in an interleaved manner, which is not limited herein.
In this embodiment of the application, scene files can be collected for showing traffic scenes in which the host vehicle actually interacts, directly or indirectly, with the traffic participants around it, and scene files can also be collected for showing traffic scenes reflected in the travel track of the host vehicle itself; that is, the scheme can collect scene files for more types of traffic scenes, which improves the richness of the obtained scene files.
The host vehicle or the road-end acquisition device is responsible for acquiring the traffic environment data around the host vehicle, while the server is responsible for selecting, from that data, the scene files for showing the first traffic scene. This avoids occupying the computing resources of the host vehicle or the road-end acquisition device, minimizes the impact of the scheme perceptible to the user of the autonomous vehicle, and thus avoids a poor user experience; and because the computing resources of the server are relatively abundant, having the server perform the selection operation also helps to improve the efficiency of the selection process.
Second, the host vehicle acquires a scene file for showing a traffic scene from the traffic environment data around the host vehicle
Specifically, referring to fig. 7, fig. 7 is a schematic flow chart of a method for acquiring a scene file according to an embodiment of the present application, where the method for acquiring a scene file according to the embodiment of the present application may include:
701. The host vehicle collects the surrounding original traffic environment data in real time.
In the embodiment of the present application, the host vehicle can acquire the surrounding original traffic environment data in real time, and the acquisition mode and the specific representation form of the original traffic environment data can refer to the description in step 201 in the embodiment corresponding to fig. 2, which is not described herein again.
702. The host vehicle obtains control data of the host vehicle in real time.
In some embodiments of the application, the host vehicle may further acquire its own control data in real time, and the specific form of the control data may refer to the description in step 202 in the embodiment corresponding to fig. 2, which is not described herein again.
703. The main vehicle obtains a first preset condition corresponding to the first traffic scene.
In this embodiment of the application, the server may send the first preset conditions corresponding to one or more first traffic scenes to the host vehicle in advance, so that the host vehicle obtains at least one first preset condition sent by the server and can execute, in parallel, the task of obtaining scene files for at least one traffic scene. For the specific first traffic scene, the first preset condition, and the specific implementation of step 703, refer to the description of step 204 in the embodiment shown in fig. 2. Note, however, that the host vehicle starts recording only at the moment the first trigger condition is satisfied, whereas the start time of a scene file clipped by the server may be earlier than the moment the first trigger condition is satisfied. Therefore, compared with the first trigger condition shown in step 204 (i.e., the first trigger condition configured in the server), the specific value of the first trigger condition configured in the host vehicle may be adjusted so that its trigger range is larger. For example, in the case that the first traffic scene is a traffic scene in which a side vehicle cuts into the host vehicle's lane, the first trigger condition includes that the side vehicle is located in a first preset range ahead and that the lateral distance between the side vehicle and the host vehicle changes from being greater than a first preset distance to being less than or equal to the first preset distance; if the value of the first preset distance in the server is 4 meters, the value of the first preset distance in the host vehicle may be 4.5 meters. It should be understood that this example is only for convenience of understanding the scheme and is not intended to limit it.
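The widened on-board trigger threshold in the example above can be sketched like this; the constant and function names are hypothetical, and only the 4 m / 4.5 m values come from the text:

```python
SERVER_FIRST_PRESET_DISTANCE = 4.0    # meters, used by the server when clipping
VEHICLE_FIRST_PRESET_DISTANCE = 4.5   # meters, looser on-board value so that
                                      # recording starts early enough

def lateral_crossing(prev_distance: float, curr_distance: float,
                     threshold: float) -> bool:
    """True when the lateral distance to the side vehicle changes from greater
    than the threshold to less than or equal to it (part of the first
    trigger condition for the cut-in scene)."""
    return prev_distance > threshold and curr_distance <= threshold
```

With these values, a side vehicle drifting from 4.6 m to 4.4 m already starts on-board recording, while the server's stricter 4 m threshold later determines the clipped start time.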
704. The host vehicle inputs the original traffic environment data around it into the target tracking model to obtain an inference result output by the target tracking model, the inference result indicating the position information of the traffic participant entities around the host vehicle.
705. The main vehicle judges whether a first traffic participation entity meeting a first trigger condition exists around the main vehicle according to the reasoning result, if so, the step 706 is carried out; if not, step 704 is re-entered.
In this embodiment of the application, the following applies to any one of the one or more first traffic scenes. Since the host vehicle collects the original traffic environment data around it in real time, it can input the data obtained in real time into the target tracking model to obtain the position information of the surrounding traffic participant entities at each moment, which reflects their travel tracks. The host vehicle can therefore judge in real time whether there is a first traffic participant entity around it that satisfies the first trigger condition; if so, step 706 is entered; if not, step 704 is re-entered, that is, the newly acquired original traffic environment data is input into the target tracking model. It should be noted that steps 704 and 705 are performed in an interleaved manner until it is determined that there is a first traffic participant entity around the host vehicle whose travel track satisfies the first trigger condition.
For the specific meanings of the nouns in steps 704 and 705 and the specific implementation manners of steps 704 and 705, reference may be made to the descriptions in steps 203 and 205 in the corresponding embodiment of fig. 2, which are not described herein again.
706. The host vehicle starts recording the traffic environment data around it.
In some embodiments of the present application, after determining that there is a first traffic participant entity around the host vehicle that satisfies the first trigger condition, the host vehicle starts recording the traffic environment data around it (i.e., storing that data); both the original traffic environment data around the host vehicle and the inference results count as traffic environment data around the host vehicle. Specifically, in one implementation, the host vehicle may store the inference results generated in real time in step 704; in another implementation, it may store the surrounding original traffic environment data collected in real time in step 701; in yet another implementation, it may store both.
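Steps 706 to 708 amount to a small record/keep/discard state machine. A minimal sketch, assuming the class and method names (which are not from the patent):

```python
class TrafficRecorder:
    """Start buffering on the trigger condition; keep the buffer as a scene
    file if the confirmation condition is later met, otherwise discard it.
    Which payloads are stored (raw data, inference results, or both) is a
    configuration choice, matching the three implementations described."""

    def __init__(self, store_raw: bool = True, store_inference: bool = True):
        self.store_raw = store_raw
        self.store_inference = store_inference
        self.recording = False
        self.buffer = []

    def start(self):
        # First trigger condition met: begin recording.
        self.recording = True
        self.buffer = []

    def push(self, raw_frame, inference_frame):
        if not self.recording:
            return
        entry = {}
        if self.store_raw:
            entry["raw"] = raw_frame
        if self.store_inference:
            entry["inference"] = inference_frame
        self.buffer.append(entry)

    def confirm(self):
        # First confirmation condition met: keep the buffer as a scene file.
        self.recording = False
        scene_file, self.buffer = self.buffer, []
        return scene_file

    def abort(self):
        # Confirmation not met within the preset duration: delete the data.
        self.recording = False
        self.buffer = []
```

A vehicle-side loop would call `start()` when the trigger fires, `push()` every frame, then either `confirm()` or `abort()` depending on the confirmation check of step 707.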
707. The host vehicle judges whether the first traffic participant entity meets the first confirmation condition; if so, step 708 is entered; if not, the recorded traffic environment data is deleted.
In this embodiment of the application, after determining that a first traffic participant entity meeting the first trigger condition exists around the host vehicle, the host vehicle may determine whether the first traffic participant entity meets the first confirmation condition within a first preset duration; if so, step 708 is entered; if not, the recorded traffic environment data is deleted. For the meaning of the first confirmation condition and the specific implementation of step 707, refer to the description of step 205 in the embodiment corresponding to fig. 2, which is not described herein again.
708. The main vehicle determines the recording stopping time of the traffic environment data around the main vehicle, wherein the recording stopping time is determined according to the time when the first traffic participant entity meets the first confirmation condition.
In some embodiments of the application, after determining that the running trajectory of the first traffic participant entity satisfies the first confirmation condition, the host vehicle may determine a recording stop time of the traffic environment data around the host vehicle according to a time when the first traffic participant entity satisfies the first confirmation condition, and stop recording the traffic environment data around the host vehicle at the recording stop time, where a specific confirmation manner may refer to the description in step 208 in the embodiment corresponding to fig. 2, and details of the confirmation manner are not repeated here.
709. The main vehicle judges whether the running track of the main vehicle and/or the running track of the first traffic participant entity meet the first screening condition, if so, the step 710 is carried out; if not, step 704 is re-entered.
In the embodiment of the present application, a specific implementation manner of step 709 may refer to the description in step 206 in the embodiment corresponding to fig. 2, which is not described herein again.
710. The recorded traffic environment data around the host vehicle is determined by the host vehicle as a first scene file for showing the first traffic scene.
In this embodiment of the application, step 709 is an optional step; if step 709 is not executed, step 710 may be executed directly after step 708. After the host vehicle determines the recorded traffic environment data around it as the first scene file corresponding to the first traffic scene, it may add a label to the first scene file and send the labeled first scene file to the server.
711. The host vehicle obtains a second preset condition corresponding to the second traffic scene.
712. The host vehicle judges whether its travel track meets the second trigger condition according to the control data of the host vehicle; if so, step 713 is entered; if not, the judgment continues according to the newly acquired control data.
In this embodiment of the application, the specific implementation of steps 711 and 712 may refer to the descriptions in steps 210 and 211 in the embodiment corresponding to fig. 2, which is not described herein again. It should be noted that, since the host vehicle starts recording only at the moment the second trigger condition is satisfied, whereas the start time of a scene file clipped by the server may be earlier than the moment the second trigger condition is satisfied, the specific value of the second trigger condition configured in the host vehicle is adjusted so that its trigger range is larger than that of the second trigger condition configured in the server.
713. The host vehicle starts recording the traffic environment data around it.
714. The host vehicle judges whether its travel track meets the second confirmation condition; if so, step 715 is entered; if not, the recorded traffic environment data is deleted.
In the embodiment of the present application, the specific implementation manner of step 714 may refer to the description in step 211 in the corresponding embodiment of fig. 2, which is not described herein again.
715. The main vehicle determines the recording stopping time of the traffic environment data around the main vehicle, wherein the recording stopping time is determined according to the time when the running track of the main vehicle meets the second confirmation condition.
In this embodiment of the application, a specific implementation manner of determining the recording stopping time according to the time when the moving trajectory of the host vehicle meets the second confirmation condition in step 715 may refer to the description of determining the second termination time in step 212 in the embodiment corresponding to fig. 2, which is not described herein again. And stopping recording the surrounding traffic environment data according to the determined recording stopping time so as to obtain a second scene file.
Optionally, if the host vehicle further obtains a third screening condition corresponding to the second traffic scene, then after determining that its travel track meets the second confirmation condition, the host vehicle further determines whether its travel track meets the third screening condition; only if the travel track meets both the second preset condition and the third screening condition is step 715 executed. If the travel track of the host vehicle does not meet the third screening condition, the recorded traffic environment data is deleted, and step 712 is re-entered to continue judging, according to the remaining control data, whether the travel track of the host vehicle meets the second preset condition.
716. The host vehicle determines the recorded traffic environment data around the host vehicle as a second scene file for showing the second traffic scene.
In this embodiment of the application, after determining the recorded traffic environment data around it as the second scene file corresponding to the second traffic scene, the host vehicle may further add a label to the second scene file and send the labeled second scene file to the server.
It should be noted that steps 711 to 716 are optional steps; if steps 711 to 716 are executed, the execution order of steps 704 to 710 and steps 711 to 716 is not limited in the embodiment of the present application, and they may be executed in parallel or in an interleaved manner, which is not limited herein.
For a more intuitive understanding of the present disclosure, please refer to fig. 8; fig. 8 is a schematic flow chart of a method for acquiring a scene file according to an embodiment of the present disclosure. The vehicle end (i.e., the host vehicle) is configured with a vehicle data acquisition module and a traffic scene configuration module. The host vehicle acquires the traffic environment data around it through the vehicle data acquisition module and inputs the acquired data into the target tracking model in real time to obtain the inference result output by the target tracking model. The traffic scene configuration module judges, according to the output inference result, whether the trigger condition and the confirmation condition of the target traffic scene are met, so that when the trigger condition of the target traffic scene is met, the traffic environment data for showing the target traffic scene is recorded in a directional manner, and after the confirmation condition of the target traffic scene is met, the recording is stopped, thereby obtaining a directionally recorded scene file (namely, a scene file for showing the target traffic scene). The vehicle end sends the directionally recorded scene files to the cloud end (that is, the server end); the cloud end is configured with a data set conversion module and forms a scenarized data set from the directionally recorded scene files sent by a plurality of vehicle ends. Specific implementations of the steps in fig. 8 may refer to the description in the embodiment corresponding to fig. 7. It should be understood that the example in fig. 8 is only for facilitating understanding of the architecture of the method for obtaining scene files provided in this embodiment of the application, and is not intended to limit the scheme.
In this embodiment of the application, the host vehicle collects the traffic environment data around it in real time and selects from that data the scene files for showing traffic scenes, so the host vehicle only sends scene files for showing traffic scenes to the server; this avoids sending unnecessary files to the server, reduces the use of communication resources, and also reduces the waste of server storage resources.
This scheme automatically acquires scene files for showing traffic scenes, which is more labor-saving and convenient than manually editing and generating scene files. Because the scene files come from the traffic environment data around the host vehicle, which is in turn obtained from data collected from entities on real traffic roads, the resulting scene files are more realistic, so that the subsequent simulation training of the autonomous vehicle is closer to reality and the safety of the trained autonomous vehicle is improved. Research has found that most of the traffic environment data collected at the road end corresponds to ordinary traffic scenes, and only a small portion corresponds to traffic scenes that need special attention; since the first traffic scene in this scheme is configurable, scene files for the traffic scenes the user actually wants can be automatically screened out by configuring the first traffic scene, further improving the efficiency of the scene file generation process.
In order to understand the scheme more intuitively, the method for acquiring a scene file provided in the embodiment of the application is introduced below by taking the traffic scene in which a side vehicle cuts into the host vehicle's lane as an example.
Specifically, referring to fig. 9, fig. 9 is a schematic flowchart of a method for acquiring a scene file according to an embodiment of the present disclosure, where in fig. 9, a server acquires a scene file for displaying a traffic scene from traffic environment data in a first time period as an example, the method for acquiring a scene file according to an embodiment of the present disclosure may include:
901. The server obtains the original traffic environment data around the host vehicle within 3 hours.
902. The server obtains control data of the host vehicle within 3 hours.
903. The server inputs the original traffic environment data around the host vehicle within the 3 hours into the target tracking model to obtain an inference result output by the target tracking model, the inference result indicating the position information of the traffic participant entities around the host vehicle at each moment within the 3 hours.
904. The server obtains a first preset condition corresponding to a first traffic scene, where the first traffic scene is that a side vehicle cuts into the host vehicle's lane, and the first preset condition includes a first trigger condition and a first confirmation condition. The first trigger condition includes that the side vehicle is located in a first preset range in front of the host vehicle and that the lateral distance between the side vehicle and the host vehicle changes from being greater than a first preset distance to being less than or equal to the first preset distance; the first confirmation condition includes that the lateral distance between the side vehicle and the host vehicle changes from being greater than a second preset distance to being less than or equal to the second preset distance, where the second preset distance is smaller than the first preset distance.
In the embodiment of the present application, specific meanings of the terms in steps 901 to 904 and specific implementation manners of the steps can be referred to the descriptions of steps 201 to 204 in the embodiment corresponding to fig. 2, which are not described herein again.
905. The server judges, according to the inference result, whether there is a first traffic participant entity around the host vehicle whose motion track meets the first trigger condition; if so, step 906 is entered; if not, it is confirmed that no scene file for showing the first traffic scene exists in the traffic environment data around the host vehicle.
906. The server judges whether the running track of the first traffic participant entity meets the first confirmation condition within 10 seconds according to the reasoning result, and if so, the server enters a step 907; if not, step 905 is re-entered.
In the embodiment of the present application, the specific meaning of the nouns in steps 905 and 906 and the specific implementation manner of the steps can refer to the description of step 205 in the embodiment corresponding to fig. 2, which is not described herein again.
For a more intuitive understanding of the present disclosure, please refer to fig. 10; fig. 10 is a schematic diagram of a first traffic scene provided in the embodiment of the present disclosure. As shown in the figure, the broken-line circle is the range perceived by the host vehicle (i.e., the range covered by the traffic environment data around the host vehicle). Before executing step 905, the server may filter the traffic participant objects around the host vehicle according to their object types, so as to keep only the objects that need attention in the current scene task. Since the current first traffic scene is a traffic scene in which a side vehicle cuts into the host vehicle's lane, the objects that need attention are motor vehicles, and the traffic participant objects C and D, whose object types are not motor vehicles, are filtered out. The server acquires the position information of traffic participant objects A and B at each moment from the inference result, so as to obtain their movement tracks; in fig. 10, the movement tracks of objects A and B are displayed visually (the visual tracks are generated according to the position information of each traffic participant object at each moment). The distance between the two lines denoted G1 and G2 in fig. 10 and the center line of the host vehicle is the first preset distance, which may be 1.5 meters, 2 meters, or another distance value; the distance between the two lines denoted G3 and G4 and the center line of the host vehicle is the second preset distance, which may be 0.5 meters, 1 meter, or another distance value.
It should be noted that the distance between a traffic participant around the host vehicle and the host vehicle may also refer to the distance between the centerline of the traffic participant and the centerline of the host vehicle, the distance between the left edge line of the traffic participant and the left edge line of the host vehicle, the distance between the right edge line of the traffic participant and the right edge line of the host vehicle, and the like.
First, consider traffic participant A around the host vehicle: its movement trajectory satisfies the first trigger condition, and within a first preset duration (which may be 5 seconds, 9 seconds, 30 seconds, or another value) its movement trajectory also satisfies the first confirmation condition, so traffic participant A is determined to be a traffic participant around the host vehicle that satisfies the first preset condition. Now consider traffic participant B around the host vehicle: its movement trajectory satisfies the first trigger condition, but does not satisfy the first confirmation condition, so traffic participant B is determined to be a traffic participant around the host vehicle that does not satisfy the first preset condition.
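The trigger-then-confirm logic in this example can be sketched in code. The following is a minimal illustration only; the function name, the trajectory layout as (timestamp, lateral-distance) pairs, and the 1.5 m / 0.5 m / 5 s default values are assumed placeholders for the first preset distance, the second preset distance, and the first preset duration, not values fixed by the embodiment:

```python
# Sketch of the trigger/confirmation check for the side-vehicle cut-in scene.
# A trajectory is a list of (timestamp_s, lateral_distance_m) samples giving
# the lateral distance between a traffic participant and the host centerline.

def find_cut_in(trajectory, first_dist=1.5, second_dist=0.5, confirm_window=5.0):
    """Return (trigger_time, confirm_time) if the trajectory satisfies the
    first trigger condition (lateral distance changes from > first_dist to
    <= first_dist) and, within confirm_window seconds, the first confirmation
    condition (lateral distance reaches <= second_dist); otherwise None."""
    trigger_time = None
    prev_dist = None
    for t, dist in trajectory:
        if trigger_time is None:
            # Trigger: distance crosses the first preset distance inward.
            if prev_dist is not None and prev_dist > first_dist and dist <= first_dist:
                trigger_time = t
        else:
            if t - trigger_time > confirm_window:
                return None  # confirmation did not occur within the preset duration
            if dist <= second_dist:  # confirmation: within the second preset distance
                return (trigger_time, t)
        prev_dist = dist
    return None

# Participant A triggers at t=1 and confirms at t=3; participant B triggers
# but never confirms, mirroring the example in the text.
print(find_cut_in([(0, 2.0), (1, 1.4), (2, 0.9), (3, 0.4)]))  # (1, 3)
print(find_cut_in([(0, 2.0), (1, 1.2), (2, 1.0), (3, 0.9)]))  # None
```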
907. The server judges whether the steering angle of the host vehicle is smaller than or equal to a first preset angle and whether the speed of the host vehicle is larger than or equal to a second speed; if both judgment results are yes, step 908 is entered; if not, step 905 is re-entered.
908. The server determines the first starting time as the 9th second before the time when the first traffic participant entity meets the first trigger condition, and determines the first ending time as the 9th second after the time when the first traffic participant entity meets the first confirmation condition.
909. The server acquires a first scene file from the traffic environment data in the first time period according to the first starting time and the first ending time.
In the embodiment of the present application, specific meanings of the terms in steps 907 to 909 and specific implementation manners of the steps can be referred to the description in the corresponding embodiment of fig. 2, which is not described herein again.
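Steps 908 and 909 amount to padding the trigger/confirmation interval by a fixed margin and then slicing the stored data. A minimal sketch under assumed names (the 9-second margin follows step 908; representing the traffic environment data as timestamped frames is an assumption of this illustration):

```python
def clip_scene_file(frames, trigger_time, confirm_time, margin=9.0):
    """Given traffic-environment frames as (timestamp, data) pairs, return the
    frames between the first starting time (margin seconds before the trigger
    time) and the first ending time (margin seconds after the confirmation
    time), i.e. the first scene file."""
    start = trigger_time - margin
    end = confirm_time + margin
    return [(t, d) for t, d in frames if start <= t <= end]

# One frame per second over a 40-second first time period; trigger at t=15,
# confirmation at t=20, so the scene file spans t=6 through t=29 inclusive.
frames = [(t, {"frame": t}) for t in range(40)]
scene = clip_scene_file(frames, trigger_time=15, confirm_time=20)
print(scene[0][0], scene[-1][0], len(scene))  # 6 29 24
```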
Referring to fig. 11, fig. 11 is a schematic flowchart of a scene file acquiring method according to an embodiment of the present application; the method may include:
1101. The electronic device receives a configuration operation of a user on a first traffic scene through a display interface.
In the embodiment of the application, the electronic device may receive configuration operations of the user on at least one first traffic scene through the display interface. Specifically, in one implementation, a plurality of different types of traffic scenes may be displayed in the display interface of the electronic device, so that the user may perform a selection operation on one or more traffic scenes to input a configuration operation on a first traffic scene. In another implementation, the display interface of the electronic device may display traffic scene limiting conditions; the user may perform selection operations on the limiting conditions to complete the configuration of one first traffic scene, and may repeat the foregoing operations to complete the configuration of a plurality of first traffic scenes.
For a more intuitive understanding of the present disclosure, please refer to fig. 12, which is a schematic diagram of a configuration interface of a traffic scene in the scene file acquiring method according to the embodiment of the present application. Fig. 12 takes the limiting conditions of a traffic scene shown in the display interface as an example: "by-pass cut in", "forward cut out", "motorcycle crossing", "dawn", and "tunnel" are all limiting conditions of a traffic scene, and the user may select among them to configure a particular traffic scene. After the user completes the configuration of a traffic scene, the user may perform a selection operation on the "confirm" button to commit that configuration. It should be understood that the example in fig. 12 is only for convenience of understanding of the present solution, and is not used to limit the present solution.
1102. The electronic device obtains raw traffic environment data surrounding the host vehicle over a first time period.
In the embodiment of the application, in one implementation, the original traffic environment data around the host vehicle obtained by the electronic device in the first time period is preset; that is, the user cannot select the original traffic environment data autonomously. In another implementation, the electronic device obtains, through the display interface, a configuration operation of the user on the original traffic environment data around the host vehicle in the first time period, and then obtains the original traffic environment data pointed to by the configuration operation. The original traffic environment data around the host vehicle is collected by sensors configured on entities located in the traffic road.
For a more intuitive understanding of the present disclosure, please refer to fig. 13, and fig. 13 is a schematic diagram of a configuration interface of a traffic scene in the scene file acquiring method according to the embodiment of the present application. Fig. 13 is to be understood in conjunction with the above description of fig. 12, and unlike fig. 12, an input interface for raw traffic environment data is added to fig. 13. In one implementation, a user may input a storage address of the original traffic environment data through the interface to implement configuration of the original traffic environment data. In another implementation, when the user clicks on the interface, a pop-up pull-down menu may be triggered, in which a number for each piece of original traffic environment data is shown, so that the user may configure the original traffic environment data by clicking on the corresponding number. In another implementation, when the user clicks the interface, an access function to the data stored in the electronic device may be triggered, so that the user may select the original traffic environment data from the data stored in the electronic device to implement configuration of the original traffic environment data, and the like.
1103. The electronic equipment inputs the original traffic environment data around the main vehicle into the target tracking model to obtain an inference result output by the target tracking model, and the inference result is used for indicating the position information of the traffic participant entities around the main vehicle.
1104. The electronic equipment acquires a first preset condition corresponding to a first traffic scene.
1105. The electronic device judges, according to the inference result, whether a first traffic participant entity meeting a first preset condition exists around the host vehicle; if so, step 1106 is entered; if not, it is confirmed that no scene file for showing the first traffic scene exists in the traffic environment data around the host vehicle.
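Step 1105 scans the inference result for any participant whose trajectory meets the first preset condition. The sketch below is illustrative only: the assumption that the inference result maps each participant ID to a list of timestamped positions, and the predicate used here, are placeholders rather than the format fixed by the embodiment.

```python
def find_first_participant(inference_result, meets_condition):
    """inference_result: {participant_id: [(timestamp, position), ...]}.
    Return the ID of the first traffic participant entity whose trajectory
    satisfies the supplied first preset condition, or None if none does."""
    for participant_id, trajectory in inference_result.items():
        if meets_condition(trajectory):
            return participant_id
    return None

# Illustrative inference result: position = (longitudinal_m, lateral_m).
result = {
    "A": [(0, (30.0, 2.0)), (1, (28.0, 1.2))],
    "B": [(0, (50.0, 3.5)), (1, (49.0, 3.4))],
}
# Placeholder predicate: lateral offset drops to within 1.5 m of the host.
cut_in = lambda traj: any(pos[1] <= 1.5 for _, pos in traj)
print(find_first_participant(result, cut_in))  # A
```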
1106. The electronic equipment judges whether the running track of the main vehicle and/or the running track of the first traffic participant entity meet the first screening condition, and if so, the step 1107 is carried out; if not, step 1105 is re-entered.
1107. The electronic equipment determines a first starting time and a first ending time corresponding to the first traffic scene, and displays the first starting time and the first ending time to a user.
1108. The electronic equipment acquires a first scene file from traffic environment data in a first time period.
In the embodiment of the present application, for the specific implementation manner of steps 1103 to 1108, reference may be made to the description of the corresponding steps in the embodiment corresponding to fig. 2, which is not described herein again.
1109. And the electronic equipment receives configuration operation of the user on the second traffic scene through the display interface.
1110. The electronic device obtains control data for the host vehicle over a first time period.
In the embodiment of the present application, the specific implementation manner of steps 1109 and 1110 may refer to the description of steps 1101 and 1102 described above, and details are not described herein again. It should be noted that the execution sequence between steps 1101 and 1102 and steps 1109 and 1110 is not limited in the embodiments of the present application, and the specific execution sequence may depend on whether the user configures the second traffic scenario or the first traffic scenario first.
1111. The electronic device obtains a second preset condition corresponding to a second traffic scene.
1112. The electronic equipment judges whether the running track of the main vehicle meets a second preset condition or not according to the control data of the main vehicle, and if so, the step 1113 is carried out; and if not, confirming that the scene file for displaying the second traffic scene does not exist in the traffic environment data around the host vehicle.
1113. And the electronic equipment determines a second starting time and a second ending time corresponding to the second traffic scene, and displays the second starting time and the second ending time to the user.
1114. The electronic equipment acquires a second scene file from the traffic environment data in the first time period.
In the embodiment of the present application, the specific implementation manner of steps 1111 to 1114 may refer to the description in steps 210 to 213 in the embodiment corresponding to fig. 2, which is not described herein again.
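As a sketch of how step 1112 might evaluate the second preset condition for the U-turn scene from the control data of the host vehicle: the second trigger condition holds when the left-turn direction angle exceeds the first angle while the speed exceeds the first speed, and the second confirmation condition holds when the angle later falls below the second angle. The numeric thresholds below are placeholders, not values fixed by the embodiment.

```python
def find_u_turn(control_data, first_angle=90.0, first_speed=5.0, second_angle=10.0):
    """control_data: list of (timestamp_s, left_turn_angle_deg, speed_mps).
    Returns (trigger_time, confirm_time) when the second trigger condition
    (angle > first_angle and speed > first_speed) and then the second
    confirmation condition (angle < second_angle) are met; otherwise None."""
    trigger_time = None
    for t, angle, speed in control_data:
        if trigger_time is None:
            if angle > first_angle and speed > first_speed:
                trigger_time = t  # second trigger condition met
        elif angle < second_angle:
            return (trigger_time, t)  # second confirmation condition met
    return None

# Host swings left past 90 degrees at speed, then straightens out.
data = [(0, 0.0, 8.0), (1, 120.0, 8.0), (2, 150.0, 6.0), (3, 5.0, 7.0)]
print(find_u_turn(data))  # (1, 3)
```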
It should be noted that steps 1111 to 1114 are optional steps, and if steps 1111 to 1114 are executed, the embodiment of the present application does not limit the sequence between steps 1111 to 1113 and steps 1103 to 1108, and steps 1111 to 1113 and steps 1103 to 1108 may be executed in parallel; or, the steps 1103 to 1108 may be executed first, and then the steps 1111 to 1113 may be executed; alternatively, steps 1111 to 1113 may be performed first, and then steps 1103 to 1108 may be performed.
On the basis of the embodiments corresponding to fig. 1 to fig. 13, in order to better implement the above-mentioned scheme of the embodiments of the present application, related devices for implementing the scheme are also provided below. Specifically, referring to fig. 14, fig. 14 is a schematic structural diagram of an apparatus for acquiring a scene file according to an embodiment of the present application. The scene file acquiring apparatus 1400 may include an acquiring module 1401 and a determining module 1402. The acquiring module 1401 is configured to acquire traffic environment data around the host vehicle, where the traffic environment data around the host vehicle is used to indicate position information of traffic participant entities around the host vehicle and is obtained based on data collected by entities in the traffic road. The acquiring module 1401 is further configured to acquire a first preset condition corresponding to a first traffic scene, where the first traffic scene is configurable. The determining module 1402 is configured to determine a first start point and a first end point corresponding to the first traffic scene if it is determined, according to the traffic environment data around the host vehicle, that a traffic participant entity satisfying the first preset condition exists around the host vehicle. The first start point indicates the start point, in the traffic environment data, of a first scene file for showing the first traffic scene, and the first end point indicates the end point, in the traffic environment data, of that first scene file.
In one possible design, the obtaining module 1401 is specifically configured to: acquire original traffic environment data around the host vehicle; and input the original traffic environment data around the host vehicle into a target tracking model to obtain an inference result output by the target tracking model, where the inference result is used to indicate the position information of the traffic participant entities around the host vehicle, the position information can be used to determine the travel trajectories of the traffic participant entities around the host vehicle, and both the original traffic environment data around the host vehicle and the inference result belong to the traffic environment data around the host vehicle. The determining module 1402 is specifically configured to determine the first start point and the first end point corresponding to the first traffic scene when it is determined, according to the inference result, that a traffic participant entity whose travel trajectory satisfies the first preset condition exists around the host vehicle.
In one possible design, the obtaining module 1401 is further configured to obtain control data of the host vehicle, where the control data of the host vehicle is used for indicating a traveling trajectory of the host vehicle; the obtaining module 1401 is further configured to obtain a second preset condition corresponding to a second traffic scene; the determining module 1402 is further configured to determine a second starting point and a second ending point corresponding to the second traffic scene if it is determined that the trajectory of the host vehicle satisfies a second preset condition according to the control data of the host vehicle. Wherein the second starting point indicates a starting point of a second scene file for showing the second traffic scene in the traffic environment data, and the second ending point indicates an ending point of the second scene file for showing the second traffic scene in the traffic environment data.
In one possible design, the scene file obtaining device is configured in the server, the entity located in the traffic road includes a main vehicle and/or a road end collecting device, the first starting point is a first starting time, and the first ending point is a first ending time; an obtaining module 1401, specifically configured to receive original traffic environment data around the host vehicle in a first time period sent by the host vehicle and/or the road-end collecting device; the obtaining module 1401 is further configured to obtain a first scene file from traffic environment data around the host vehicle in the first time period according to the first starting time and the first ending time, and mark a label of the first scene file as a first traffic scene.
In one possible design, the position information of the traffic participant entities around the host vehicle can be used to determine the travel trajectories of the traffic participant entities around the host vehicle, and the first preset condition includes a first trigger condition and a first confirmation condition. The determining module 1402 is specifically configured to: generate a first starting time according to the traffic environment data around the host vehicle and the first trigger condition, where the first starting time is determined according to the time at which the travel trajectory of a first traffic participant entity around the host vehicle meets the first trigger condition; and, if the travel trajectory of the first traffic participant entity meets the first confirmation condition within a first preset duration, generate a first termination time, where the first termination time is determined according to the time at which the travel trajectory of the first traffic participant entity meets the first confirmation condition.
In one possible design, the scene file acquiring device is configured in the host vehicle, the entity located in the traffic road is the host vehicle, and the first preset condition includes a first trigger condition and a first confirmation condition. The acquiring module 1401 is specifically configured to acquire original traffic environment data around the host vehicle in real time. The determining module 1402 is specifically configured to: when it is determined, according to the inference result, that a first traffic participant entity whose travel trajectory meets the first trigger condition exists around the host vehicle, start recording the traffic environment data around the host vehicle; and, if the travel trajectory of the first traffic participant entity meets the first confirmation condition within a first preset duration, determine the time to stop recording the traffic environment data around the host vehicle and determine the recorded traffic environment data as the first scene file corresponding to the first traffic scene, where the stop-recording time is determined according to the time at which the travel trajectory of the first traffic participant entity meets the first confirmation condition.
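The in-vehicle design above records online rather than slicing stored data afterwards. A minimal sketch of that state machine follows; the class name, the per-frame predicates standing in for the first trigger and first confirmation conditions, and the 5-second window are all illustrative assumptions:

```python
class SceneRecorder:
    """Records traffic-environment frames from the first trigger condition
    until the first confirmation condition, discarding the buffer if
    confirmation does not arrive within the first preset duration."""

    def __init__(self, triggered, confirmed, confirm_window=5.0):
        self.triggered = triggered        # predicate on a single frame
        self.confirmed = confirmed        # predicate on a single frame
        self.confirm_window = confirm_window
        self.trigger_time = None
        self.buffer = []

    def on_frame(self, timestamp, frame):
        """Feed one frame; returns the recorded scene file when complete."""
        if self.trigger_time is None:
            if self.triggered(frame):
                self.trigger_time = timestamp   # start recording
                self.buffer = [(timestamp, frame)]
            return None
        if timestamp - self.trigger_time > self.confirm_window:
            self.trigger_time, self.buffer = None, []  # confirmation too late
            return None
        self.buffer.append((timestamp, frame))
        if self.confirmed(frame):                # stop-recording time reached
            scene, self.buffer = self.buffer, []
            self.trigger_time = None
            return scene
        return None

# Frames here are just lateral distances; placeholder 1.5 m / 0.5 m thresholds.
rec = SceneRecorder(lambda d: d <= 1.5, lambda d: d <= 0.5)
for t, d in [(0, 2.0), (1, 1.4), (2, 0.9), (3, 0.4)]:
    scene = rec.on_frame(t, d)
print(scene)  # [(1, 1.4), (2, 0.9), (3, 0.4)]
```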
In one possible design, the obtaining module 1401 is further configured to obtain map data that matches traffic environment data around the host vehicle; the determining module 1402 is specifically configured to determine a first start point and a first end point corresponding to the first traffic scene if it is determined that a traffic participant entity satisfying a first preset condition exists around the host vehicle according to the traffic environment data and the map data around the host vehicle.
In one possible design, the position information of the traffic participant entities around the host vehicle is used for reflecting the running tracks of the traffic participant entities around the host vehicle; an obtaining module 1401, further configured to obtain control data of the host vehicle, where the control data of the host vehicle is used for indicating a running track of the host vehicle; an obtaining module 1401, configured to obtain a first screening condition corresponding to a first traffic scene; the determining module 1402 is specifically configured to determine a first start point and a first end point corresponding to the first traffic scene when it is determined that a first traffic participant entity exists around the host, and the running trajectory of the host and/or the running trajectory of the first traffic participant entity meet the first screening condition, according to the traffic environment data around the host and the control data of the host.
In one possible design, the first preset condition includes a first trigger condition and a first confirmation condition, and the second preset condition includes a second trigger condition and a second confirmation condition. The first traffic scene may be a traffic scene in which a front vehicle cuts out of the host vehicle lane: the first trigger condition includes that a first side vehicle, which is in front of the host vehicle and closest to the host vehicle in longitudinal distance, drives out across a first trigger line, and the first confirmation condition includes that the first side vehicle drives out across a first confirmation line, where the lateral distance between the first confirmation line and the host vehicle is larger than the lateral distance between the first trigger line and the host vehicle. Alternatively, the first traffic scene may be a traffic scene in which a side vehicle cuts into the host vehicle lane: the first trigger condition includes that a side vehicle is located within a first preset range in front of the host vehicle and the lateral distance between the side vehicle and the host vehicle changes from larger than a first preset distance to smaller than or equal to the first preset distance, and the first confirmation condition includes that the lateral distance between the side vehicle and the host vehicle changes from larger than a second preset distance to smaller than or equal to the second preset distance, where the second preset distance is smaller than the first preset distance. Alternatively, the second traffic scene may be a traffic scene in which the host vehicle makes a U-turn: the second trigger condition is that the left-turn direction angle of the host vehicle is greater than a first angle and the speed of the host vehicle is greater than a first speed, and the second confirmation condition is that the left-turn direction angle of the host vehicle is less than a second angle.
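For the cut-out scene in the first alternative, the trigger and confirmation lines can likewise be expressed as lateral-distance crossings, mirroring the cut-in case. The sketch below is illustrative; the 0.5 m / 1.5 m line positions are placeholders, with the confirmation line farther from the host than the trigger line as the design requires:

```python
def find_cut_out(trajectory, trigger_line=0.5, confirm_line=1.5):
    """trajectory: (timestamp_s, lateral_distance_m) samples for the front
    vehicle. First trigger condition: the vehicle drives out across the first
    trigger line (lateral distance grows beyond trigger_line). First
    confirmation condition: it then crosses the first confirmation line
    (confirm_line > trigger_line). Returns (trigger_time, confirm_time) or None."""
    trigger_time = None
    for t, dist in trajectory:
        if trigger_time is None:
            if dist > trigger_line:
                trigger_time = t
        elif dist > confirm_line:
            return (trigger_time, t)
    return None

# The front vehicle drifts out of the lane: trigger at t=1, confirm at t=3.
print(find_cut_out([(0, 0.2), (1, 0.8), (2, 1.2), (3, 1.8)]))  # (1, 3)
```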
It should be noted that, the contents of information interaction, execution process, and the like between the modules/units in the apparatus 1400 for acquiring a scene file are based on the same concept as the method embodiments corresponding to fig. 2 to fig. 10 in the present application, and specific contents may refer to the description in the foregoing method embodiments in the present application, and are not repeated herein.
Referring to fig. 15, fig. 15 is a schematic structural diagram of an apparatus for acquiring a scene file according to an embodiment of the present application. The scene file acquiring apparatus 1500 may include an acquiring module 1501 and a determining module 1502. The acquiring module 1501 is configured to acquire traffic environment data around the host vehicle and control data of the host vehicle, where the traffic environment data around the host vehicle is obtained based on data collected by entities in the traffic road, and the control data of the host vehicle is used to indicate the travel trajectory of the host vehicle. The acquiring module 1501 is further configured to acquire a second preset condition corresponding to a second traffic scene, where the second traffic scene is configurable. The determining module 1502 is configured to determine a second starting point and a second ending point corresponding to the second traffic scene if it is determined, according to the control data of the host vehicle, that the trajectory of the host vehicle satisfies the second preset condition. The second starting point indicates the starting point, in the traffic environment data, of a second scene file for showing the second traffic scene, and the second ending point indicates the ending point, in the traffic environment data, of that second scene file.
In one possible design, traffic environment data surrounding the host vehicle is used to indicate position information of traffic-participating entities surrounding the host vehicle; the acquiring module 1501 is further configured to acquire a first preset condition corresponding to a first traffic scene, where the first traffic scene is configurable; the determining module 1502 is further configured to determine a first start point and a first end point corresponding to the first traffic scene if it is determined that a traffic participant entity meeting a first preset condition exists around the host according to the traffic environment data around the host. Wherein the first start point indicates a start point of a first scene file for showing the first traffic scene in the traffic environment data, and the first end point indicates an end point of the first scene file for showing the first traffic scene in the traffic environment data.
It should be noted that, the contents of information interaction, execution process, and the like between the modules/units in the apparatus 1500 for acquiring a scene file are based on the same concept as the method embodiments corresponding to fig. 2 to fig. 10 in the present application, and specific contents may refer to the description in the foregoing method embodiments in the present application, and are not described herein again.
Referring to fig. 16, fig. 16 is a schematic structural diagram of an apparatus for acquiring a scene file according to an embodiment of the present application. The scene file acquiring device 1600 may include: the receiving module 1601 is configured to receive a configuration operation of a user on a first traffic scene through a display interface, and acquire a first preset condition corresponding to the first traffic scene; an obtaining module 1602, configured to obtain traffic environment data around the host vehicle in a first time period, where the traffic environment data around the host vehicle is used to indicate position information of traffic-participating entities around the host vehicle, and the traffic environment data around the host vehicle is obtained based on data collected by entities in a traffic road; the presentation module 1603 is configured to, in a case where it is determined that there are traffic participating entities satisfying a first preset condition around the host according to traffic environment data around the host in a first time period, determine a first start time and a first end time corresponding to a first traffic scene from within the first time period, and present the first start time and the first end time to a user.
It should be noted that, the contents of information interaction, execution process, and the like between the modules/units in the apparatus 1600 for acquiring a scene file are based on the same concept as the method embodiments corresponding to fig. 11 to 13 in the present application, and specific contents may refer to the description in the foregoing method embodiments in the present application, and are not described herein again.
An embodiment of the present application further provides an electronic device, which may be specifically embodied as a server or a vehicle; the scene file acquiring apparatus 1400 described in fig. 14, the scene file acquiring apparatus 1500 described in fig. 15, or the scene file acquiring apparatus 1600 described in fig. 16 may be deployed on the electronic device. When the electronic device is in the form of a server, please refer to fig. 17, which is a schematic structural diagram of the server according to the embodiment of the present disclosure. The server 1700 is implemented by one or more servers and may vary considerably with configuration and performance; it may include one or more Central Processing Units (CPUs) 1722 (e.g., one or more processors), a memory 1732, and one or more storage media 1730 (e.g., one or more mass storage devices) for storing applications 1742 or data 1744. The memory 1732 and the storage media 1730 may be transitory storage or persistent storage. The program stored in the storage medium 1730 may include one or more modules (not shown), each of which may include a sequence of instruction operations on the server. Further, the central processing unit 1722 may be configured to communicate with the storage medium 1730 to execute, on the server 1700, the series of instruction operations in the storage medium 1730.
The server 1700 may also include one or more power supplies 1726, one or more wired or wireless network interfaces 1750, one or more input-output interfaces 1758, and/or one or more operating systems 1741, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
When the electronic device is in the form of a vehicle, please refer to fig. 18, which is a schematic structural diagram of an autonomous vehicle according to an embodiment of the present application; the vehicle 100 may be configured in an autonomous driving mode. For example, while in the autonomous driving mode, the vehicle 100 may control itself; it may determine, through human operation, the current state of the vehicle and its surroundings, determine whether there is an obstacle in the surroundings, and control the vehicle 100 based on information about the obstacle. The vehicle 100 may also be placed into operation without human interaction while in the autonomous driving mode.
The vehicle 100 may include various subsystems such as a travel system 102, a sensor system 104, a control system 106, one or more peripherals 108, as well as a power supply 110, a computer system 112, and a user interface 116. Alternatively, vehicle 100 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 100 may be interconnected by wire or wirelessly.
The travel system 102 may include components that provide powered motion to the vehicle 100. In one embodiment, the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine composed of a gasoline engine and an electric motor, and a hybrid engine composed of an internal combustion engine and an air compression engine. The engine 118 converts the energy source 119 into mechanical energy. Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 119 may also provide energy to other systems of the vehicle 100. The transmission 120 may transmit mechanical power from the engine 118 to the wheels 121. The transmission 120 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 120 may also include other devices, such as a clutch. Wherein the drive shaft may comprise one or more shafts that may be coupled to one or more wheels 121.
The sensor system 104 may include a number of sensors that sense information about the environment surrounding the vehicle 100. For example, the sensor system 104 may include a positioning system 122 (which may be a global positioning (GPS) system, a compass system, or another positioning system), an Inertial Measurement Unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130. The sensor system 104 may also include sensors that monitor internal systems of the vehicle 100 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). The sensory data from one or more of these sensors may be used to detect the traffic participant entities and their respective characteristics (position, shape, direction, speed, etc.). Such detection and identification is a critical function for the safe operation of the autonomous vehicle 100. The sensor mentioned in the following embodiments of the present application may be the radar 126, the laser rangefinder 128, the camera 130, or the like.
The positioning system 122 may be used, among other things, to estimate the geographic location of the vehicle 100. The IMU 124 is used to sense position and orientation changes of the vehicle 100 based on inertial acceleration. In one embodiment, the IMU 124 may be a combination of an accelerometer and a gyroscope. The radar 126 may utilize radio signals to sense objects within the surrounding environment of the vehicle 100, and may be embodied as a millimeter-wave radar or a lidar. In some embodiments, in addition to sensing objects, the radar 126 may also be used to sense the speed and/or heading of an object. The laser rangefinder 128 may use laser light to sense objects in the environment in which the vehicle 100 is located. In some embodiments, the laser rangefinder 128 may include one or more laser sources, a laser scanner, and one or more detectors, among other system components. The camera 130 may be used to capture multiple images of the surrounding environment of the vehicle 100. The camera 130 may be a still camera or a video camera.
The control system 106 controls the operation of the vehicle 100 and its components. The control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
The steering system 132 is operable to adjust the heading of the vehicle 100; for example, in one embodiment it may be a steering wheel system. The throttle 134 is used to control the operating speed of the engine 118 and thus the speed of the vehicle 100. The braking unit 136 is used to control the deceleration of the vehicle 100, and may use friction to slow the wheels 121. In other embodiments, the braking unit 136 may convert the kinetic energy of the wheels 121 into an electric current. The braking unit 136 may also take other forms to slow the rotational speed of the wheels 121 and thereby control the speed of the vehicle 100. The computer vision system 140 may process and analyze images captured by the camera 130 to identify objects and/or features in the environment surrounding the vehicle 100. The objects and/or features may include traffic signals, road boundaries, and obstacles. The computer vision system 140 may use object recognition algorithms, Structure From Motion (SFM) algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system 140 may be used to map an environment, track objects, estimate the speed of objects, and so forth. The route control system 142 is used to determine a travel route and a travel speed for the vehicle 100. In some embodiments, the route control system 142 may include a lateral planning module 1421 and a longitudinal planning module 1422, which determine the travel route and the travel speed for the vehicle 100, respectively, in conjunction with data from the obstacle avoidance system 144, the positioning system 122, and one or more predetermined maps. The obstacle avoidance system 144 is used to identify, evaluate, and avoid or otherwise negotiate obstacles in the environment of the vehicle 100, which may be embodied as actual obstacles or as virtual moving objects that may collide with the vehicle 100.
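As an illustrative aside (not part of the disclosed embodiments), the division of labor between the lateral planning module 1421 (travel route) and the longitudinal planning module 1422 (travel speed) might be sketched as follows; the function names, the naive straight-line interpolation, and the triangular speed profile are all invented for this sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Plan:
    route: List[Point]    # output of the lateral planning module
    speeds: List[float]   # output of the longitudinal planning module

def plan(current: Point, goal: Point, max_speed: float, n: int = 5) -> Plan:
    # Lateral planning: straight-line waypoints from current to goal
    # (a real module would also consult the obstacle avoidance system and maps).
    route = [
        (current[0] + (goal[0] - current[0]) * i / (n - 1),
         current[1] + (goal[1] - current[1]) * i / (n - 1))
        for i in range(n)
    ]
    # Longitudinal planning: ramp the speed up to max_speed, then back down.
    speeds = [max_speed * min(i, n - 1 - i) / (n // 2) for i in range(n)]
    return Plan(route, speeds)
```

For example, planning from (0, 0) to (4, 0) at a 10 m/s cap yields five evenly spaced waypoints with the speed peaking at the midpoint, mirroring the route/speed split described above.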
In one example, the control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The vehicle 100 interacts with external sensors, other vehicles, other computer systems, or users through the peripherals 108. The peripheral devices 108 may include a wireless data transmission system 146, an in-vehicle computer 148, a microphone 150, and/or a speaker 152. In some embodiments, the peripheral devices 108 provide a means for a user of the vehicle 100 to interact with the user interface 116. For example, the in-vehicle computer 148 may provide information to a user of the vehicle 100, and the user interface 116 may also operate the in-vehicle computer 148 to receive user input. The in-vehicle computer 148 may be operated via a touch screen. In other cases, the peripheral devices 108 may provide a means for the vehicle 100 to communicate with other devices located within the vehicle. For example, the microphone 150 may receive audio (e.g., voice commands or other audio input) from a user of the vehicle 100. Similarly, the speaker 152 may output audio to a user of the vehicle 100. The wireless data transmission system 146 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless data transmission system 146 may use 3G cellular communications such as CDMA, EVDO, or GSM/GPRS, 4G cellular communications such as LTE, or 5G cellular communications. The wireless data transmission system 146 may also communicate using a wireless local area network (WLAN). In some embodiments, the wireless data transmission system 146 may utilize an infrared link, Bluetooth, or ZigBee to communicate directly with a device. Other wireless protocols may also be used; for example, the wireless data transmission system 146 may include one or more Dedicated Short Range Communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The power supply 110 may provide power to various components of the vehicle 100. In one embodiment, power source 110 may be a rechargeable lithium ion or lead acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to various components of the vehicle 100. In some embodiments, the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
Some or all of the functionality of the vehicle 100 is controlled by the computer system 112. The computer system 112 may include at least one processor 113, which executes instructions 115 stored in a non-transitory computer-readable medium such as the memory 114. The computer system 112 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 100 in a distributed manner. The processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor 113 may be a dedicated device such as an application-specific integrated circuit (ASIC) or another hardware-based processor. Although fig. 1 functionally illustrates the processor, memory, and other components of the computer system 112 in the same block, those skilled in the art will appreciate that the processor or memory may actually comprise multiple processors or memories that are not located within the same physical housing. For example, the memory 114 may be a hard drive or other storage medium located in a different enclosure than the computer system 112. Thus, references to the processor 113 or the memory 114 are to be understood as including references to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computations related to the component-specific functions.
In various aspects described herein, the processor 113 may be located remotely from the vehicle 100 and in wireless communication with the vehicle 100. In other aspects, some of the processes described herein are executed on a processor 113 disposed within the vehicle 100 while others are executed by the remote processor 113, including taking the steps necessary to execute a single maneuver.
In some embodiments, the memory 114 may include instructions 115 (e.g., program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the vehicle 100, including those described above. The memory 114 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 102, the sensor system 104, the control system 106, and the peripheral devices 108. In addition to the instructions 115, the memory 114 may also store data such as road maps, route information, and the location, direction, and speed of the vehicle, among other vehicle data and information. Such information may be used by the vehicle 100 and the computer system 112 during operation of the vehicle 100 in autonomous, semi-autonomous, and/or manual modes. The user interface 116 is used to provide information to and receive information from a user of the vehicle 100. Optionally, the user interface 116 may include one or more input/output devices within the collection of peripheral devices 108, such as the wireless data transmission system 146, the in-vehicle computer 148, the microphone 150, or the speaker 152.
The computer system 112 may control the functions of the vehicle 100 based on inputs received from various subsystems (e.g., the travel system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may communicate with other systems or components within the vehicle 100 using a CAN bus; for instance, the computer system 112 may utilize input from the control system 106 to control the steering system 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control over many aspects of the vehicle 100 and its subsystems.
Alternatively, one or more of these components described above may be mounted or associated separately from the vehicle 100. For example, the memory 114 may exist partially or completely separate from the vehicle 100. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in an actual application, components in the above modules may be added or removed according to actual needs, and fig. 18 should not be construed as limiting the embodiments of the present application. The method for acquiring a scene file provided by the present application may be executed by the computer system 112, the radar 126, the laser range finder 128, or a peripheral device such as the in-vehicle computer 148 or another vehicle-mounted terminal. For example, the method for acquiring a scene file provided by the present application may be executed by the in-vehicle computer 148: the in-vehicle computer 148 may plan a driving path and a corresponding speed curve for the vehicle, generate a control instruction according to the driving path, and send the control instruction to the computer system 112, and the computer system 112 may control the steering system 132, the throttle 134, the braking unit 136, the computer vision system 140, the route control system 142, or the obstacle avoidance system 144, etc. in the control system 106 of the vehicle, thereby implementing automatic driving of the vehicle.
The vehicle 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, an amusement vehicle, a playground vehicle, construction equipment, a golf cart, a train, a tram, etc.; the embodiments of the present invention are not particularly limited in this respect.
In this embodiment of the application, the central processing unit 1722 in the server 1700 or the processor 113 in the vehicle 100 is configured to execute the method for acquiring the scene file executed by the electronic device in the embodiment corresponding to fig. 2 to fig. 10. Or, the central processing unit 1722 in the server 1700 is configured to execute the method for acquiring the scene file executed by the electronic device in the embodiment corresponding to fig. 11 to fig. 13. It should be noted that, for specific implementation manners and advantageous effects brought by the central processing unit 1722 or the processor 113 executing the method for obtaining a scene file, reference may be made to descriptions in each method embodiment corresponding to fig. 2 to fig. 13, and details are not repeated here.
Also provided in an embodiment of the present application is a computer-readable storage medium having stored therein a program for generating a vehicle travel speed, which when run on a computer causes the computer to perform the steps performed by the electronic device in the method described in the foregoing embodiment shown in fig. 2 to 10, or causes the computer to perform the steps performed by the electronic device in the method described in the foregoing embodiment shown in fig. 11 to 13.
Embodiments of the present application also provide a computer program product, which when executed on a computer, causes the computer to execute the steps performed by the electronic device in the method described in the foregoing embodiments shown in fig. 2 to 10, or causes the computer to execute the steps performed by the electronic device in the method described in the foregoing embodiments shown in fig. 11 to 13.
Further provided in the embodiments of the present application is a circuit system, where the circuit system includes a processing circuit, and the processing circuit is configured to execute the steps executed by the electronic device in the method described in the foregoing embodiments shown in fig. 2 to 10, or the processing circuit is configured to execute the steps executed by the electronic device in the method described in the foregoing embodiments shown in fig. 11 to 13.
The device for acquiring a scene file or the electronic device provided by the embodiment of the application may specifically be a chip, and the chip includes: a processing unit, which may be for example a processor, and a communication unit, which may be for example an input/output interface, a pin or a circuit, etc. The processing unit may execute the computer-executable instructions stored in the storage unit to enable the chip to execute the method for acquiring a scene file described in the embodiments shown in fig. 2 to 10, or to execute the method for acquiring a scene file described in the embodiments shown in fig. 11 to 13. Optionally, the storage unit is a storage unit in the chip, such as a register, a cache, and the like, and the storage unit may also be a storage unit located outside the chip in the wireless access device, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a Random Access Memory (RAM), and the like.
Wherein any of the aforementioned processors may be a general purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits configured to control the execution of the programs of the method of the first aspect.
It should be noted that the above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiments of the apparatus provided in the present application, the connection relationship between the modules indicates that there is a communication connection therebetween, and may be implemented as one or more communication buses or signal lines.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by special-purpose hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components, and the like. Generally, functions performed by computer programs can be easily implemented by corresponding hardware, and the specific hardware structures for implementing the same function may be various, such as analog circuits, digital circuits, or dedicated circuits. However, for the present application, implementation by a software program is often preferable. Based on such understanding, the technical solutions of the present application may be substantially embodied in the form of a software product, which is stored in a readable storage medium, such as a floppy disk, a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center integrated with one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.

Claims (26)

1. A method for acquiring a scene file, applied to an electronic device, the method comprising:
acquiring traffic environment data around the host vehicle, wherein the traffic environment data around the host vehicle is used for indicating position information of traffic participating entities around the host vehicle, and the traffic environment data around the host vehicle is obtained based on data acquired by entities in traffic roads;
acquiring a first preset condition corresponding to a first traffic scene, wherein the first traffic scene is configurable;
determining a first starting point and a first ending point corresponding to the first traffic scene in the case that it is determined, according to the traffic environment data around the host vehicle, that traffic-participating entities meeting the first preset condition exist around the host vehicle;
wherein the first start point indicates a start point in the traffic environment data for a first scene file for presenting the first traffic scene, and the first end point indicates an end point in the traffic environment data for a first scene file for presenting the first traffic scene.
2. The method of claim 1,
the acquiring of traffic environment data around the host vehicle includes:
acquiring original traffic environment data around the main vehicle;
inputting the original traffic environment data around the host vehicle into a target tracking model to obtain a reasoning result output by the target tracking model, wherein the reasoning result is used for indicating the position information of traffic-participating entities around the host vehicle, the position information of the traffic-participating entities around the host vehicle can be used to determine the running tracks of the traffic-participating entities around the host vehicle, and the original traffic environment data around the host vehicle and the reasoning result both belong to the traffic environment data around the host vehicle;
the determining a first start point and a first end point corresponding to the first traffic scene in the case that it is determined that there are traffic participant entities around the host that satisfy the first preset condition according to the traffic environment data around the host comprises:
and under the condition that the traffic participating entities with running tracks meeting the first preset condition exist around the host vehicle according to the reasoning result, determining a first starting point and a first ending point corresponding to the first traffic scene.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring control data of a host vehicle, wherein the control data of the host vehicle is used for indicating the running track of the host vehicle;
acquiring a second preset condition corresponding to a second traffic scene;
determining a second starting point and a second ending point corresponding to the second traffic scene under the condition that the running track of the host vehicle is determined to meet the second preset condition according to the control data of the host vehicle;
wherein the second starting point indicates a starting point of a second scene file for showing the second traffic scene in the traffic environment data, and the second ending point indicates an ending point of the second scene file for showing the second traffic scene in the traffic environment data.
4. The method of claim 2, wherein the electronic device is a server, the entity located in the traffic road comprises a host vehicle and/or a wayside collection device, the first start point is a first start time, the first end point is a first end time, the obtaining raw traffic environment data around the host vehicle comprises:
receiving original traffic environment data around the main vehicle in a first time period sent by the main vehicle and/or the road end acquisition equipment;
the method further comprises the following steps:
and acquiring the first scene file from the traffic environment data around the host vehicle in the first time period according to the first starting time and the first ending time, and marking the label of the first scene file as the first traffic scene.
5. The method of claim 4, wherein the position information of the traffic-participating entities around the host vehicle is usable for determining the running tracks of the traffic-participating entities around the host vehicle, the first preset condition comprises a first trigger condition and a first confirmation condition, and the determining a first start point and a first end point corresponding to the first traffic scene in the case that it is determined, according to the traffic environment data around the host vehicle, that traffic-participating entities satisfying the first preset condition exist around the host vehicle comprises:
generating the first starting time according to the traffic environment data around the host vehicle and the first trigger condition, wherein the first starting time is determined according to the time when the running track of the first traffic participant entity around the host vehicle meets the first trigger condition;
and if the running track of the first traffic participant entity meets the first confirmation condition within a first preset time length, generating the first ending time, wherein the first ending time is determined according to the time when the running track of the first traffic participant entity meets the first confirmation condition.
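As a non-normative illustration of the timing logic in claim 5 — the helper name `find_scene_window`, the data layout, and the thresholds below are invented for this sketch, not taken from the disclosure — a server-side pass over a recorded trajectory could derive the first starting time and first ending time like this:

```python
from typing import Callable, List, Optional, Tuple

# Each sample: (timestamp, lateral_offset_from_host) for one surrounding vehicle.
Track = List[Tuple[float, float]]

def find_scene_window(track: Track,
                      trigger: Callable[[float], bool],
                      confirm: Callable[[float], bool],
                      max_gap: float) -> Optional[Tuple[float, float]]:
    """Return (start_time, end_time) if the trajectory first meets the
    trigger condition and then meets the confirmation condition within
    max_gap seconds; otherwise return None."""
    start = None
    for t, offset in track:
        if start is None:
            if trigger(offset):
                start = t                # running track meets trigger condition
        else:
            if t - start > max_gap:
                return None              # confirmation did not arrive in time
            if confirm(offset):
                return (start, t)        # scene window found
    return None
```

For example, with a trigger at a 1.5 m lateral offset and confirmation at 3.0 m, the trajectory `[(0.0, 0.5), (1.0, 1.8), (2.0, 2.2), (3.0, 3.4)]` yields the window `(1.0, 3.0)`.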
6. The method of claim 2, wherein the electronic device is a host vehicle, the entity located in the traffic road is a host vehicle, the first preset condition comprises a first trigger condition and a first confirmation condition, and the obtaining raw traffic environment data around the host vehicle comprises:
acquiring original traffic environment data around the main vehicle in real time;
the determining a first start point and a first end point corresponding to the first traffic scene in the case that it is determined that there are traffic participant entities around the host that satisfy the first preset condition according to the traffic environment data around the host comprises:
when determining that a first traffic participation entity with a running track meeting the first trigger condition exists around the main vehicle according to the reasoning result, starting recording traffic environment data around the main vehicle;
if the running track of the first traffic participant entity meets the first confirmation condition within a first preset time length, determining the recording stopping time of the traffic environment data around the main vehicle, and determining the recorded traffic environment data around the main vehicle as a first scene file for displaying the first traffic scene, wherein the recording stopping time is determined according to the time when the running track of the first traffic participant entity meets the first confirmation condition.
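The on-vehicle flow of claim 6 — start recording when the trigger condition is met, then stop and emit a scene file if the confirmation condition arrives within the first preset time length, or discard otherwise — can be sketched as follows. This is a minimal illustration with invented names; frames are abstracted to single numeric values standing in for traffic environment data.

```python
class SceneRecorder:
    """Hypothetical sketch of real-time scene recording: trigger starts the
    recording, confirmation within `timeout` seconds finalizes the scene file."""

    def __init__(self, trigger, confirm, timeout):
        self.trigger, self.confirm, self.timeout = trigger, confirm, timeout
        self.start_time = None
        self.buffer = []

    def feed(self, t, frame):
        """Feed one timestamped frame; return a finished scene file or None."""
        if self.start_time is None:
            if self.trigger(frame):
                self.start_time = t
                self.buffer = [frame]                 # start recording
            return None
        if t - self.start_time > self.timeout:
            self.start_time, self.buffer = None, []   # no confirmation: discard
            return None
        self.buffer.append(frame)
        if self.confirm(frame):
            scene, self.buffer = self.buffer, []
            self.start_time = None
            return scene                              # recorded scene file
        return None
```

Feeding the same toy trajectory used above (trigger above 1.5, confirm above 3.0) returns the recorded frames `[1.8, 2.2, 3.4]` at the confirmation instant.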
7. The method according to any one of claims 1 to 6, further comprising:
acquiring map data matched with traffic environment data around the host vehicle;
the determining a first start point and a first end point corresponding to the first traffic scene in the case that it is determined that there are traffic participant entities around the host that satisfy the first preset condition according to the traffic environment data around the host comprises:
determining a first start point and a first end point corresponding to the first traffic scene if it is determined that there are traffic-participating entities satisfying the first preset condition around the host vehicle according to the traffic environment data around the host vehicle and the map data.
8. The method of any one of claims 1 to 7, wherein the position information of the traffic participant entities around the host vehicle is used for reflecting the traveling locus of the traffic participant entities around the host vehicle, and the method further comprises:
acquiring control data of a host vehicle, wherein the control data of the host vehicle is used for indicating the running track of the host vehicle;
acquiring a first screening condition corresponding to the first traffic scene;
the determining a first start point and a first end point corresponding to the first traffic scene in the case that it is determined that there are traffic participant entities around the host that satisfy the first preset condition according to the traffic environment data around the host comprises:
and under the condition that a first traffic participant entity with a running track meeting the first preset condition exists around the host vehicle and the running track of the host vehicle and/or the running track of the first traffic participant entity meet the first screening condition is determined according to the traffic environment data around the host vehicle and the control data of the host vehicle, determining a first starting point and a first ending point corresponding to the first traffic scene.
9. The method according to claim 3, wherein the first preset condition comprises a first trigger condition and a first confirmation condition, and the second preset condition comprises a second trigger condition and a second confirmation condition;
the first traffic scene is a traffic scene in which a front vehicle cuts out of the host vehicle's lane, the first trigger condition comprises that a first bypass vehicle in front of the host vehicle and closest to the host vehicle in the longitudinal direction drives across a first trigger line, the first confirmation condition comprises that the first bypass vehicle drives across a first confirmation line, and the lateral distance between the first confirmation line and the host vehicle is greater than the lateral distance between the first trigger line and the host vehicle; or,
the first traffic scene is a traffic scene in which a side vehicle cuts into the host vehicle's lane, the first trigger condition comprises that the side vehicle is located within a first preset range in front of the host vehicle and the lateral distance between the side vehicle and the host vehicle changes from being greater than a first preset distance to being less than or equal to the first preset distance, the first confirmation condition comprises that the lateral distance between the side vehicle and the host vehicle changes from being greater than a second preset distance to being less than or equal to the second preset distance, and the second preset distance is less than the first preset distance; or,
the second traffic scene is a traffic scene in which the host vehicle makes a U-turn, the second trigger condition is that the left-turn steering angle of the host vehicle is greater than a first angle and the speed of the host vehicle is greater than a first speed, and the second confirmation condition is that the left-turn steering angle of the host vehicle is less than a second angle.
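Each scene definition in claim 9 pairs a trigger condition with a confirmation condition. As a hedged illustration only — the numeric thresholds below (trigger/confirmation lines, gaps, angles, speeds) are placeholders, not values from the disclosure, and the instantaneous predicates simplify the claim's "changes from ... to ..." wording — the pairs could be encoded as predicates over a sample of vehicle state:

```python
# Each scene maps to a (trigger, confirm) pair of predicates over a state sample.
# All numeric thresholds are illustrative placeholders.
SCENES = {
    "front_vehicle_cut_out": (
        lambda s: s["lateral_offset"] > 1.5,   # bypass vehicle crosses trigger line
        lambda s: s["lateral_offset"] > 3.0,   # bypass vehicle crosses confirmation line
    ),
    "side_vehicle_cut_in": (
        lambda s: s["lateral_gap"] <= 2.0,     # gap drops below first preset distance
        lambda s: s["lateral_gap"] <= 1.0,     # gap drops below second preset distance
    ),
    "host_u_turn": (
        lambda s: s["steer_angle"] > 90.0 and s["speed"] > 5.0,  # second trigger condition
        lambda s: s["steer_angle"] < 10.0,                       # second confirmation condition
    ),
}

def evaluate(scene: str, sample: dict) -> dict:
    """Report whether a state sample meets the scene's trigger and/or
    confirmation condition."""
    trigger, confirm = SCENES[scene]
    return {"triggered": trigger(sample), "confirmed": confirm(sample)}
```

For instance, a bypass vehicle at a 2.0 m lateral offset has crossed the (placeholder) trigger line but not yet the confirmation line, so a cut-out scene is triggered but not confirmed.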
10. A method for acquiring a scene file is characterized by comprising the following steps:
acquiring traffic environment data around the main vehicle and control data of the main vehicle, wherein the traffic environment data around the main vehicle is obtained based on data acquired by entities in traffic roads, and the control data of the main vehicle is used for indicating the running track of the main vehicle;
acquiring a second preset condition corresponding to a second traffic scene, wherein the second traffic scene is configurable;
determining a second starting point and a second ending point corresponding to the second traffic scene under the condition that the running track of the host vehicle is determined to meet the second preset condition according to the control data of the host vehicle;
wherein the second starting point indicates a starting point of a second scene file for showing the second traffic scene in the traffic environment data, and the second ending point indicates an ending point of the second scene file for showing the second traffic scene in the traffic environment data.
11. The method of claim 10, wherein the traffic environment data surrounding the host vehicle is used to indicate location information of traffic-participating entities surrounding the host vehicle, the method further comprising:
acquiring a first preset condition corresponding to a first traffic scene, wherein the first traffic scene is configurable;
determining a first starting point and a first ending point corresponding to the first traffic scene in the case that it is determined, according to the traffic environment data around the host vehicle, that traffic-participating entities meeting the first preset condition exist around the host vehicle;
wherein the first start point indicates a start point in the traffic environment data for a first scene file for presenting the first traffic scene, and the first end point indicates an end point in the traffic environment data for a first scene file for presenting the first traffic scene.
12. A method for acquiring a scene file is characterized by comprising the following steps:
receiving a configuration operation performed by a user on a first traffic scene through a display interface, and acquiring a first preset condition corresponding to the first traffic scene;
acquiring traffic environment data around the host vehicle in a first time period, wherein the traffic environment data around the host vehicle is used for indicating position information of traffic participant entities around the host vehicle, and the traffic environment data around the host vehicle is obtained based on data acquired by entities in traffic roads;
determining a first starting time and a first ending time corresponding to the first traffic scene from the first time period, and displaying the first starting time and the first ending time to the user, in the case that it is determined, according to the traffic environment data around the host vehicle in the first time period, that traffic-participating entities meeting the first preset condition exist around the host vehicle;
wherein the first start time indicates a start time of a first scene file for showing the first traffic scene in the traffic environment data, and the first end time indicates an end time of the first scene file for showing the first traffic scene in the traffic environment data.
13. An apparatus for acquiring a scene file, the apparatus comprising:
an acquisition module, configured to acquire traffic environment data around a host vehicle, wherein the traffic environment data around the host vehicle indicates position information of traffic-participating entities around the host vehicle, and the traffic environment data around the host vehicle is obtained based on data collected by an entity located on the traffic road;
wherein the acquisition module is further configured to acquire a first preset condition corresponding to a first traffic scene, and the first traffic scene is configurable; and
a determining module, configured to determine a first start point and a first end point corresponding to the first traffic scene when it is determined, according to the traffic environment data around the host vehicle, that a traffic-participating entity meeting the first preset condition exists around the host vehicle;
wherein the first start point indicates a start point, in the traffic environment data, of a first scene file for presenting the first traffic scene, and the first end point indicates an end point, in the traffic environment data, of the first scene file for presenting the first traffic scene.
14. The apparatus of claim 13, wherein the acquisition module is specifically configured to:
acquire original traffic environment data around the host vehicle; and
input the original traffic environment data around the host vehicle into a target tracking model to obtain an inference result output by the target tracking model, wherein the inference result indicates position information of the traffic-participating entities around the host vehicle, the position information of the traffic-participating entities around the host vehicle can be used to determine travel trajectories of the traffic-participating entities around the host vehicle, and both the original traffic environment data around the host vehicle and the inference result belong to the traffic environment data around the host vehicle;
wherein the determining module is specifically configured to determine a first start point and a first end point corresponding to the first traffic scene when it is determined, according to the inference result, that a traffic-participating entity whose travel trajectory meets the first preset condition exists around the host vehicle.
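One way to picture claim 14's pipeline — raw sensor data goes through a target tracking model, the per-frame inference results are grouped into per-entity travel trajectories, and a trajectory-level condition is then evaluated — is the grouping step below. The `(t, entity_id, x, y)` tuple layout and the displacement predicate are assumptions made for the example, not a format defined by the patent.

```python
from collections import defaultdict

def build_trajectories(detections):
    """Group per-frame tracker outputs (t, entity_id, x, y) into per-entity
    travel trajectories, each ordered by time."""
    tracks = defaultdict(list)
    for t, eid, x, y in sorted(detections):
        tracks[eid].append((t, x, y))
    return dict(tracks)

def any_track_matches(tracks, predicate):
    """True if any entity's trajectory satisfies the trajectory-level predicate."""
    return any(predicate(traj) for traj in tracks.values())
```

A cut-in-style predicate might check net lateral displacement, e.g. `lambda traj: abs(traj[-1][2] - traj[0][2]) > 2.0`.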
15. The apparatus of claim 13 or 14,
the acquisition module is further configured to acquire control data of the host vehicle, wherein the control data of the host vehicle indicates the travel trajectory of the host vehicle;
the acquisition module is further configured to acquire a second preset condition corresponding to a second traffic scene; and
the determining module is further configured to determine a second starting point and a second ending point corresponding to the second traffic scene when it is determined, according to the control data of the host vehicle, that the travel trajectory of the host vehicle meets the second preset condition;
wherein the second starting point indicates a starting point, in the traffic environment data, of a second scene file for presenting the second traffic scene, and the second ending point indicates an ending point, in the traffic environment data, of the second scene file for presenting the second traffic scene.
16. The apparatus according to claim 14, wherein the apparatus for acquiring the scene file is deployed in a server, the entity located on the traffic road comprises the host vehicle and/or a roadside collection device, the first start point is a first starting time, and the first end point is a first ending time;
the acquisition module is specifically configured to receive original traffic environment data around the host vehicle in a first time period, sent by the host vehicle and/or the roadside collection device; and
the acquisition module is further configured to obtain the first scene file from the traffic environment data around the host vehicle in the first time period according to the first starting time and the first ending time, and to label the first scene file with the first traffic scene.
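The server-side step in claim 16 — cutting the first scene file out of the buffered environment data using the starting/ending times and tagging it with the traffic-scene label — might look like the minimal sketch below. The dict layout is an illustrative choice for the example, not a scene file format defined by the patent.

```python
def extract_scene_file(records, t_start, t_end, label):
    """records: time-sorted (t, data) pairs covering the first time period.
    Slice the closed interval [t_start, t_end] into a labelled scene file
    (represented here as a plain dict, purely for illustration)."""
    clip = [(t, d) for t, d in records if t_start <= t <= t_end]
    return {"label": label, "start": t_start, "end": t_end, "frames": clip}
```

In a real system the slice would be serialized to a scenario format; the point here is only that the start/end times fully determine which recorded frames belong to the scene file.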
17. The apparatus according to claim 16, wherein the position information of the traffic-participating entities around the host vehicle can be used to determine travel trajectories of the traffic-participating entities around the host vehicle, the first preset condition comprises a first trigger condition and a first confirmation condition, and the determining module is specifically configured to:
generate the first starting time according to the traffic environment data around the host vehicle and the first trigger condition, wherein the first starting time is determined according to the time at which the travel trajectory of a first traffic-participating entity around the host vehicle meets the first trigger condition; and
if the travel trajectory of the first traffic-participating entity meets the first confirmation condition within a first preset time length, generate the first ending time, wherein the first ending time is determined according to the time at which the travel trajectory of the first traffic-participating entity meets the first confirmation condition.
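Claim 17's two-stage timing — the starting time comes from the first sample where the trigger condition holds, and the ending time is only generated if the confirmation condition then holds within a preset time length — can be sketched as below. The `(t, state)` sample layout and the predicates are assumptions for the example.

```python
def trigger_confirm_window(samples, trigger, confirm, confirm_timeout):
    """samples: time-sorted (t, state) pairs. Find the first time the
    trigger condition holds; the window is only valid if the confirmation
    condition holds within `confirm_timeout` of the trigger.
    Returns (start_time, end_time), or None if never triggered/confirmed."""
    t_start = next((t for t, s in samples if trigger(s)), None)
    if t_start is None:
        return None
    for t, s in samples:
        if t_start < t <= t_start + confirm_timeout and confirm(s):
            return (t_start, t)
    return None  # trigger fired but was never confirmed in time
```

The timeout is what prevents a stray trigger (e.g. a brief lane wobble) from producing a scene file with no matching confirmation.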
18. The apparatus according to claim 14, wherein the apparatus for acquiring the scene file is deployed in the host vehicle, the entity located on the traffic road is the host vehicle, the first preset condition comprises a first trigger condition and a first confirmation condition, and the acquisition module is specifically configured to collect original traffic environment data around the host vehicle in real time;
the determining module is specifically configured to:
start recording the traffic environment data around the host vehicle when it is determined, according to the inference result, that a first traffic-participating entity whose travel trajectory meets the first trigger condition exists around the host vehicle; and
if the travel trajectory of the first traffic-participating entity meets the first confirmation condition within a first preset time length, determine a recording stop time of the traffic environment data around the host vehicle, and determine the recorded traffic environment data around the host vehicle as the first scene file for presenting the first traffic scene, wherein the recording stop time is determined according to the time at which the travel trajectory of the first traffic-participating entity meets the first confirmation condition.
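The on-vehicle variant in claim 18 differs from the server variant in that recording starts and stops in real time rather than being sliced out of a buffer afterwards. A minimal sketch, under the assumption that each frame is a single scalar "state" and the trigger/confirmation conditions are simple predicates on it:

```python
class SceneRecorder:
    """On-vehicle sketch: start buffering frames when the trigger condition
    fires, and emit the buffered frames as a scene file when the confirmation
    condition is met within `timeout` seconds of the trigger; otherwise the
    buffer is discarded (trigger never confirmed)."""
    def __init__(self, trigger, confirm, timeout):
        self.trigger, self.confirm, self.timeout = trigger, confirm, timeout
        self.buf, self.t0 = None, None

    def feed(self, t, frame):
        if self.buf is None:
            if self.trigger(frame):          # trigger fires: start recording
                self.buf, self.t0 = [(t, frame)], t
            return None
        self.buf.append((t, frame))
        if self.confirm(frame):              # confirmed: emit the scene file
            scene, self.buf = self.buf, None
            return scene
        if t - self.t0 > self.timeout:       # not confirmed in time: drop
            self.buf = None
        return None
```

Feeding frames one by one, `feed` returns `None` until the confirmation frame arrives, at which point it returns everything recorded since the trigger.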
19. The apparatus of any one of claims 13 to 18,
the acquisition module is further configured to acquire map data matched with the traffic environment data around the host vehicle; and
the determining module is specifically configured to determine a first start point and a first end point corresponding to the first traffic scene when it is determined, according to the traffic environment data around the host vehicle and the map data, that a traffic-participating entity meeting the first preset condition exists around the host vehicle.
20. The apparatus according to any one of claims 13 to 19, wherein the position information of the traffic-participating entities around the host vehicle is used to reflect travel trajectories of the traffic-participating entities around the host vehicle;
the acquisition module is further configured to acquire control data of the host vehicle, wherein the control data of the host vehicle indicates the travel trajectory of the host vehicle;
the acquisition module is further configured to acquire a first screening condition corresponding to the first traffic scene; and
the determining module is specifically configured to determine a first start point and a first end point corresponding to the first traffic scene when it is determined, according to the traffic environment data around the host vehicle and the control data of the host vehicle, that a first traffic-participating entity exists around the host vehicle and that the travel trajectory of the host vehicle and/or the travel trajectory of the first traffic-participating entity meets the first screening condition.
21. The apparatus according to claim 15, wherein the first preset condition comprises a first trigger condition and a first confirmation condition, and the second preset condition comprises a second trigger condition and a second confirmation condition;
the first traffic scene is a traffic scene in which a preceding vehicle cuts out of the lane of the host vehicle, the first trigger condition comprises that a first nearby vehicle, which is in front of the host vehicle and closest to the host vehicle in longitudinal distance, drives across a first trigger line, the first confirmation condition comprises that the first nearby vehicle drives across a first confirmation line, and the lateral distance between the first confirmation line and the host vehicle is greater than the lateral distance between the first trigger line and the host vehicle; or
the first traffic scene is a traffic scene in which a side vehicle cuts into the lane of the host vehicle, the first trigger condition comprises that the side vehicle is located within a first preset range in front of the host vehicle and the lateral distance between the side vehicle and the host vehicle changes from greater than a first preset distance to less than or equal to the first preset distance, the first confirmation condition comprises that the lateral distance between the side vehicle and the host vehicle changes from greater than a second preset distance to less than or equal to the second preset distance, and the second preset distance is less than the first preset distance; or
the second traffic scene is a traffic scene in which the host vehicle makes a U-turn, the second trigger condition is that the leftward steering angle of the host vehicle is greater than a first angle and the speed of the host vehicle is greater than a first speed, and the second confirmation condition is that the leftward steering angle of the host vehicle is less than a second angle.
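The cut-in variant in claim 21 is essentially two ordered threshold crossings on the side vehicle's lateral gap: first past a trigger distance, then past a smaller confirmation distance. A sketch of that detection, with invented threshold values (the claims only require that the second preset distance be less than the first):

```python
def crossed_below(prev, curr, threshold):
    """True when a signal crosses from above `threshold` to at or below it."""
    return prev > threshold and curr <= threshold

# Illustrative thresholds, not values from the claims:
D_TRIGGER, D_CONFIRM = 2.0, 1.0   # lateral gaps in metres; D_CONFIRM < D_TRIGGER

def classify_cut_in(lateral_gaps):
    """Scan a side vehicle's lateral-gap series and return
    (trigger_index, confirm_index) if both crossings occur in order;
    unmatched stages are returned as None."""
    trig = conf = None
    for i in range(1, len(lateral_gaps)):
        if trig is None and crossed_below(lateral_gaps[i-1], lateral_gaps[i], D_TRIGGER):
            trig = i
        elif trig is not None and crossed_below(lateral_gaps[i-1], lateral_gaps[i], D_CONFIRM):
            conf = i
            break
    return (trig, conf)
```

Requiring the confirmation crossing after the trigger crossing is what distinguishes a committed cut-in from a vehicle that merely drifts toward the lane boundary and back.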
22. An apparatus for acquiring a scene file, the apparatus comprising:
an acquisition module, configured to acquire traffic environment data around a host vehicle and control data of the host vehicle, wherein the traffic environment data around the host vehicle is obtained based on data collected by an entity located on the traffic road, and the control data of the host vehicle indicates the travel trajectory of the host vehicle;
wherein the acquisition module is further configured to acquire a second preset condition corresponding to a second traffic scene, and the second traffic scene is configurable; and
a determining module, configured to determine a second starting point and a second ending point corresponding to the second traffic scene when it is determined, according to the control data of the host vehicle, that the travel trajectory of the host vehicle meets the second preset condition;
wherein the second starting point indicates a starting point, in the traffic environment data, of a second scene file for presenting the second traffic scene, and the second ending point indicates an ending point, in the traffic environment data, of the second scene file for presenting the second traffic scene.
23. The apparatus of claim 22, wherein the traffic environment data around the host vehicle indicates position information of traffic-participating entities around the host vehicle;
the acquisition module is further configured to acquire a first preset condition corresponding to a first traffic scene, wherein the first traffic scene is configurable; and
the determining module is further configured to determine a first start point and a first end point corresponding to the first traffic scene when it is determined, according to the traffic environment data around the host vehicle, that a traffic-participating entity meeting the first preset condition exists around the host vehicle;
wherein the first start point indicates a start point, in the traffic environment data, of a first scene file for presenting the first traffic scene, and the first end point indicates an end point, in the traffic environment data, of the first scene file for presenting the first traffic scene.
24. An apparatus for acquiring a scene file, the apparatus comprising:
a receiving module, configured to receive, through a display interface, a configuration operation performed by a user on a first traffic scene, and to acquire a first preset condition corresponding to the first traffic scene;
an acquisition module, configured to acquire traffic environment data around a host vehicle in a first time period, wherein the traffic environment data around the host vehicle indicates position information of traffic-participating entities around the host vehicle, and the traffic environment data around the host vehicle is obtained based on data collected by an entity located on the traffic road; and
a display module, configured to determine a first start time and a first end time corresponding to the first traffic scene from the first time period, and to display the first start time and the first end time to the user, when it is determined, according to the traffic environment data around the host vehicle in the first time period, that a traffic-participating entity meeting the first preset condition exists around the host vehicle;
wherein the first start time indicates a start time, in the traffic environment data, of a first scene file for presenting the first traffic scene, and the first end time indicates an end time, in the traffic environment data, of the first scene file for presenting the first traffic scene.
25. An electronic device, comprising a processor and a communication interface, wherein the processor obtains program instructions through the communication interface, and the program instructions, when executed by the processor, implement the method of any one of claims 1 to 9; or, when executed by the processor, implement the method of claim 10 or 11; or, when executed by the processor, implement the method of claim 12.
26. A computer program, which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 9, or causes the computer to perform the method of claim 10 or 11, or causes the computer to perform the method of claim 12.
CN202080004249.3A 2020-10-28 2020-10-28 Scene file acquisition method and device Pending CN112513951A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/124319 WO2022087879A1 (en) 2020-10-28 2020-10-28 Method and apparatus for acquiring scene file

Publications (1)

Publication Number Publication Date
CN112513951A true CN112513951A (en) 2021-03-16

Family

ID=74953159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080004249.3A Pending CN112513951A (en) 2020-10-28 2020-10-28 Scene file acquisition method and device

Country Status (2)

Country Link
CN (1) CN112513951A (en)
WO (1) WO2022087879A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115148028B (en) * 2022-06-30 2023-12-15 北京小马智行科技有限公司 Method and device for constructing vehicle drive test scene according to historical data and vehicle
CN114863689B (en) * 2022-07-08 2022-09-30 中汽研(天津)汽车工程研究院有限公司 Method and system for collecting, identifying and extracting data of on-off ramp behavior scene

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108062864A (en) * 2016-11-09 2018-05-22 奥迪股份公司 A kind of traffic scene visualization system and method and vehicle for vehicle
CN108931927A (en) * 2018-07-24 2018-12-04 百度在线网络技术(北京)有限公司 The creation method and device of unmanned simulating scenes
CN109446371A (en) * 2018-11-09 2019-03-08 苏州清研精准汽车科技有限公司 A kind of intelligent automobile emulation testing scene library generating method and test macro and method
CN110276783A (en) * 2019-04-23 2019-09-24 上海高重信息科技有限公司 A kind of multi-object tracking method, device and computer system
CN110795818A (en) * 2019-09-12 2020-02-14 腾讯科技(深圳)有限公司 Method and device for determining virtual test scene, electronic equipment and storage medium
WO2020060480A1 (en) * 2018-09-18 2020-03-26 Sixan Pte Ltd System and method for generating a scenario template
CN111047901A (en) * 2019-11-05 2020-04-21 珠海格力电器股份有限公司 Parking management method, parking management device, storage medium and computer equipment
CN111062241A (en) * 2019-10-17 2020-04-24 武汉光庭信息技术股份有限公司 Method and system for automatically acquiring test scene based on natural driving original data
CN111429484A (en) * 2020-03-31 2020-07-17 电子科技大学 Multi-target vehicle track real-time construction method based on traffic monitoring video
CN111578951A (en) * 2020-04-30 2020-08-25 北京百度网讯科技有限公司 Method and device for generating information
CN111599181A (en) * 2020-07-22 2020-08-28 中汽院汽车技术有限公司 Typical natural driving scene recognition and extraction method for intelligent driving system test

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2990991A1 (en) * 2014-08-29 2016-03-02 Honda Research Institute Europe GmbH Method and system for using global scene context for adaptive prediction and corresponding program, and vehicle equipped with such system
CN109213134B (en) * 2017-07-03 2020-04-28 百度在线网络技术(北京)有限公司 Method and device for generating automatic driving strategy
CN109413572A (en) * 2018-10-31 2019-03-01 惠州市德赛西威汽车电子股份有限公司 Vehicle collision prewarning and the optimization method and system of speed guidance
CN111816003B (en) * 2019-04-12 2022-05-06 广州汽车集团股份有限公司 Vehicle early warning method and device and computer equipment


Also Published As

Publication number Publication date
WO2022087879A1 (en) 2022-05-05

Similar Documents

Publication Publication Date Title
WO2022027304A1 (en) Testing method and apparatus for autonomous vehicle
WO2021102955A1 (en) Path planning method for vehicle and path planning apparatus for vehicle
WO2021135371A1 (en) Automatic driving method, related device and computer-readable storage medium
CN113968216B (en) Vehicle collision detection method and device and computer readable storage medium
EP4029750A1 (en) Data presentation method and terminal device
CN112544071B (en) Video splicing method, device and system
CN112512887B (en) Driving decision selection method and device
CN113631452B (en) Lane change area acquisition method and device
EP4307251A1 (en) Mapping method, vehicle, computer readable storage medium, and chip
CN111094097A (en) Method and system for providing remote assistance for a vehicle
WO2022087879A1 (en) Method and apparatus for acquiring scene file
CN112810603B (en) Positioning method and related product
CN113885045A (en) Method and device for detecting lane line
EP4134769A1 (en) Method and apparatus for vehicle to pass through boom barrier
CN114813157A (en) Test scene construction method and device
CN115398272A (en) Method and device for detecting passable area of vehicle
EP4293630A1 (en) Method for generating lane line, vehicle, storage medium and chip
CN112829762A (en) Vehicle running speed generation method and related equipment
CN115100630B (en) Obstacle detection method, obstacle detection device, vehicle, medium and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN113859265A (en) Reminding method and device in driving process
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN113022573B (en) Road structure detection method and device
CN114092898A (en) Target object sensing method and device
CN113741384A (en) Method and device for detecting automatic driving system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20210316