CN116466685A - Evaluation method, device, equipment and medium for automatic driving perception algorithm - Google Patents

Evaluation method, device, equipment and medium for automatic driving perception algorithm

Info

Publication number
CN116466685A
CN116466685A CN202310451494.4A
Authority
CN
China
Prior art keywords
traffic participant
perception
data
size
determining
Prior art date
Legal status
Pending
Application number
CN202310451494.4A
Other languages
Chinese (zh)
Inventor
肖洪
陶胜召
Current Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Original Assignee
Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Intelligent Connectivity Beijing Technology Co Ltd filed Critical Apollo Intelligent Connectivity Beijing Technology Co Ltd
Priority to CN202310451494.4A priority Critical patent/CN116466685A/en
Publication of CN116466685A publication Critical patent/CN116466685A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00Testing or monitoring of control systems or parts thereof
    • G05B23/02Electric testing or monitoring
    • G05B23/0205Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0208Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system
    • G05B23/0213Modular or universal configuration of the monitoring system, e.g. monitoring system having modules that may be combined to build monitoring program; monitoring system that can be applied to legacy systems; adaptable monitoring system; using different communication protocols
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/24Pc safety
    • G05B2219/24065Real time diagnostics
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides an evaluation method, an evaluation device, electronic equipment, a computer readable storage medium and a computer program product for an automatic driving perception algorithm, and relates to the field of artificial intelligence, in particular to the technical field of automatic driving. The implementation scheme is as follows: acquiring scene data and an actual measurement result, wherein the actual measurement result indicates whether a first traffic participant can cause driving risk to a host vehicle in a target scene; determining a target area based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first time period; processing the scene data by using an autopilot perception algorithm to obtain simulation test data; determining, based on the simulation test data, at least one second traffic participant falling into the target area within the first time period and at least one perception test result corresponding to the at least one second traffic participant; and evaluating the perception capability of the autopilot perception algorithm based on the road actual measurement result and the at least one perception test result.

Description

Evaluation method, device, equipment and medium for automatic driving perception algorithm
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to the field of autopilot technology, and more particularly, to an evaluation method, apparatus, electronic device, computer readable storage medium, and computer program product for an autopilot awareness algorithm.
Background
In the related art, when evaluating an autopilot perception algorithm, whether the perception result of the algorithm is consistent with the actual road test result is usually determined by manually reviewing simulation test video data, which is costly and inefficient.
Disclosure of Invention
The present disclosure provides an evaluation method, apparatus, electronic device, computer-readable storage medium and computer program product for an autopilot awareness algorithm.
According to an aspect of the present disclosure, there is provided an evaluation method for an autopilot awareness algorithm, including: acquiring scene data and an actual measurement result, wherein the scene data is acquired when a main vehicle runs in a target scene comprising a first traffic participant, and the actual measurement result indicates whether the first traffic participant can cause driving risk to the main vehicle in the target scene; determining a target area based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first time period; processing the scene data by using the automatic driving perception algorithm to obtain simulation test data output by the automatic driving perception algorithm; determining at least one second traffic participant falling within the target area within the first time period and at least one perception test result corresponding to the at least one second traffic participant based on the simulation test data, wherein each perception test result indicates whether the corresponding second traffic participant is perceived to pose a driving risk to the host vehicle; and evaluating the perception capability of the autopilot perception algorithm based on the road actual measurement result and the at least one perception test result.
According to another aspect of the present disclosure, there is provided an evaluation device for an autopilot awareness algorithm, including: the system comprises an acquisition unit, a first traffic participant and a second traffic participant, wherein the acquisition unit is configured to acquire scene data and an actual measurement result, the scene data is acquired when a main vehicle runs in a target scene comprising the first traffic participant, and the actual measurement result indicates whether the first traffic participant can cause driving risk to the main vehicle in the target scene; a first determination unit configured to determine a target area based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first period of time; the processing unit is configured to process the scene data by utilizing the automatic driving perception algorithm so as to obtain simulation test data output by the automatic driving perception algorithm; a second determining unit configured to determine, based on the simulation test data, at least one second traffic participant that falls within the target area within the first time period and at least one perception test result corresponding to the at least one second traffic participant, wherein each perception test result indicates whether the respective second traffic participant is perceived to pose a driving risk to the host vehicle; and an evaluation unit configured to evaluate a perception capability of the autopilot perception algorithm based on the road actual measurement result and the at least one perception test result.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the above-described evaluation method for the autopilot awareness algorithm.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the above-described evaluation method for an autopilot awareness algorithm.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described evaluation method for an autopilot awareness algorithm.
According to one or more embodiments of the present disclosure, a target area through which the traffic participant used for evaluation may pass is determined based on scene data generated by an actual road test, and the perception capability of the autopilot perception algorithm with respect to that target area is scored in a simulation test. The traffic participant can thus be replaced, for evaluation purposes, by the target area associated with it, which overcomes the defect that the same traffic participant cannot be tracked across multiple tests because its ID differs between the road-test scene data and the simulation test data. This effectively improves the evaluation efficiency for the autopilot perception algorithm and greatly reduces the evaluation cost.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates an exemplary flow chart of an evaluation method for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure;
FIG. 3 illustrates a partial exemplary flow chart of an evaluation method for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure;
FIG. 4 illustrates another partial exemplary flowchart of an evaluation method for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of determining a target area from a first location and a second location according to an embodiment of the present disclosure;
FIG. 6 illustrates yet another portion of an exemplary flowchart of an evaluation method for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure;
FIG. 7 illustrates yet another portion of an exemplary flowchart of an evaluation method for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure;
FIG. 8 illustrates a block diagram of an evaluation device for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure; and
fig. 9 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another element. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes a motor vehicle 110, a server 120, and one or more communication networks 130 coupling the motor vehicle 110 to the server 120.
In an embodiment of the present disclosure, motor vehicle 110 may include a computing device in accordance with an embodiment of the present disclosure and/or be configured to perform a method in accordance with an embodiment of the present disclosure.
The server 120 may run one or more services or software applications that enable the evaluation method for the autopilot awareness algorithm. In some embodiments, server 120 may also provide other services or software applications, which may include non-virtual environments and virtual environments. In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user of motor vehicle 110 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture that involves virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from motor vehicle 110. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of motor vehicle 110.
Network 130 may be any type of network known to those skilled in the art that may support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, the one or more networks 130 may be a satellite communications network, a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a blockchain network, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (including, for example, Bluetooth, WiFi), and/or any combination of these with other networks.
The system 100 may also include one or more databases 150. In some embodiments, these databases may be used to store data and other information. For example, one or more of databases 150 may be used to store information such as audio files and video files. The data store 150 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 150 may be of different types. In some embodiments, the data store used by server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the databases in response to commands.
In some embodiments, one or more of databases 150 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
Motor vehicle 110 may include a sensor 111 for sensing the surrounding environment. The sensors 111 may include one or more of the following: visual cameras, infrared cameras, ultrasonic sensors, millimeter wave radar, and laser radar (LiDAR). Different sensors may provide different detection accuracy and range. The camera may be mounted in front of, behind or other locations on the vehicle. The vision cameras can capture the conditions inside and outside the vehicle in real time and present them to the driver and/or passengers. In addition, by analyzing the captured images of the visual camera, information such as traffic light indication, intersection situation, other vehicle running state, etc. can be acquired. The infrared camera can capture objects under night vision. The ultrasonic sensor can be arranged around the vehicle and is used for measuring the distance between an object outside the vehicle and the vehicle by utilizing the characteristics of strong ultrasonic directivity and the like. The millimeter wave radar may be installed in front of, behind, or other locations of the vehicle for measuring the distance of an object outside the vehicle from the vehicle using the characteristics of electromagnetic waves. Lidar may be mounted in front of, behind, or other locations on the vehicle for detecting object edges, shape information for object identification and tracking. The radar apparatus may also measure a change in the speed of the vehicle and the moving object due to the doppler effect.
Motor vehicle 110 may also include a communication device 112. The communication device 112 may include a satellite positioning module capable of receiving satellite positioning signals (e.g., Beidou, GPS, GLONASS, and GALILEO) from satellites 141 and generating coordinates based on these signals. The communication device 112 may also include a module for communicating with the mobile communication base station 142, and the mobile communication network may implement any suitable communication technology, such as GSM/GPRS, CDMA, LTE, or current or evolving wireless communication technologies (e.g., 5G). The communication device 112 may also have a Vehicle-to-Everything (V2X) module configured to enable, for example, Vehicle-to-Vehicle (V2V) communication with other vehicles 143 and Vehicle-to-Infrastructure (V2I) communication with infrastructure 144. In addition, the communication device 112 may also have a module configured to communicate with a user terminal 145 (including but not limited to a smart phone, tablet computer, or wearable device such as a watch), for example, by using a wireless local area network of the IEEE 802.11 standard or Bluetooth. With the communication device 112, the motor vehicle 110 can also access the server 120 via the network 130.
Motor vehicle 110 may also include a control device 113. The control device 113 may include a processor, such as a Central Processing Unit (CPU) or a Graphics Processing Unit (GPU), or other special purpose processor, etc., in communication with various types of computer readable storage devices or mediums. The control device 113 may include an autopilot system for automatically controlling various actuators in the vehicle. The autopilot system is configured to control a powertrain, steering system, braking system, etc. of a motor vehicle 110 (not shown) via a plurality of actuators in response to inputs from a plurality of sensors 111 or other input devices to control acceleration, steering, and braking, respectively, without human intervention or limited human intervention. Part of the processing functions of the control device 113 may be implemented by cloud computing. For example, some of the processing may be performed using an onboard processor while other processing may be performed using cloud computing resources. The control device 113 may be configured to perform a method according to the present disclosure. Furthermore, the control means 113 may be implemented as one example of a computing device on the motor vehicle side (client) according to the present disclosure.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
In the related art, when evaluating an autopilot perception algorithm, whether the perception result of the algorithm is consistent with the actual road test result is usually determined by manually reviewing the video data of the simulation test, which is costly and inefficient.
To this end, fig. 2 shows an exemplary flowchart of an evaluation method for an autopilot awareness algorithm according to an embodiment of the present disclosure.
As shown in fig. 2, an evaluation method 200 for an autopilot awareness algorithm is provided according to an embodiment of the present disclosure, including: acquiring scene data and an actual measurement result, wherein the scene data is acquired when a host vehicle runs in a target scene comprising a first traffic participant, and the actual measurement result indicates whether the first traffic participant can cause driving risk to the host vehicle in the target scene (step 210); determining a target area based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first time period (step 220); processing the scene data using the autopilot awareness algorithm to obtain simulated test data output by the autopilot awareness algorithm (step 230); determining, based on the simulation test data, at least one second traffic participant falling within the target area within the first time period and at least one perception test result corresponding to the at least one second traffic participant, wherein each perception test result indicates whether the respective second traffic participant is perceived to cause driving risk to the host vehicle (step 240); and evaluating the perception capability of the autopilot awareness algorithm based on the road actual measurement result and the at least one perception test result (step 250).
According to the evaluation method for the automatic driving perception algorithm, a target area through which the traffic participant used for evaluation may pass is determined based on scene data generated by an actual road test, and the perception capability of the automatic driving perception algorithm with respect to that target area is scored in a simulation test. The traffic participant can thus be replaced, for evaluation purposes, by the target area associated with it, which overcomes the defect that the same traffic participant cannot be tracked across multiple tests because its ID differs between the road-test scene data and the simulation test data. This effectively improves the evaluation efficiency for the automatic driving perception algorithm and greatly reduces the evaluation cost.
At step 210, scene data acquired when a host vehicle travels in a target scene including a first traffic participant and an actual measurement result indicating whether the first traffic participant would pose a driving risk to the host vehicle in the target scene are acquired.
In some embodiments, the scene data may be video data and/or point cloud data, which is not limited.
In some embodiments, the scene data and the actual measurement result may be obtained by directly performing an actual road test on the host vehicle in the target scene, or may be scene data and the actual measurement result of a historical actual road test obtained from a database, which is not limited.
In some embodiments, the first traffic participant includes, but is not limited to, motor vehicles, non-motor vehicles, pedestrians, and other dynamic obstacles that may be present in the target scene.
In one example, the first traffic participant is a motor vehicle, and the target scene is a scene in which the host vehicle and the motor vehicle travel toward each other along a lane line; in another example, the first traffic participant is a falling leaf, and the target scene is a scene in which the leaf is blown toward the host vehicle by the wind. It should be understood that the above examples are for illustrative purposes only and are not limiting.
at step 220, a target area is determined based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first time period.
FIG. 3 illustrates a partial exemplary flow chart of an evaluation method for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure.
According to some embodiments, as shown in fig. 3, step 220 comprises: determining a first location of the first traffic participant at a start time of the first time period and a second location at an end time of the first time period based on the scene data (step 321); and determining a target area based on the first location and the second location (step 322).
The positions of the first traffic participant at the start time and the end time can be used to obtain its position change within the first time period, so that the area it passes through can be at least partially determined based on that position change and used as the target area associated with the first traffic participant. This effectively strengthens the association between the target area and the first traffic participant and improves the accuracy of the evaluation of the algorithm's perception capability obtained based on the first traffic participant and the target area.
At step 321, a first location of the first traffic participant at a start time of the first time period and a second location at an end time of the first time period are determined based on the scene data.
In some embodiments, a start frame corresponding to the start time of the first time period and an end frame corresponding to the end time of the first time period may be obtained from the scene data, so as to determine the first position of the first traffic participant from the start frame and the second position of the first traffic participant from the end frame.
In some embodiments, the first location and the second location may be based on actual geographic location coordinates of a high-precision map, or may be based on location coordinates of other specific reference frames, which is not limited.
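To make step 321 concrete, the following Python sketch shows one way to extract the two positions. It is illustrative only: the frame representation (a dict with a timestamp and a map from participant ID to an (x, y) position) and the function name are assumptions, not structures defined by the present disclosure.

```python
def first_and_second_positions(scene_frames, participant_id, t_start, t_end):
    """Sketch of step 321: read the first traffic participant's positions at
    the start and end of the first time period from the scene data."""
    # Pick the scene frames closest in time to the start and end moments.
    start_frame = min(scene_frames, key=lambda f: abs(f["timestamp"] - t_start))
    end_frame = min(scene_frames, key=lambda f: abs(f["timestamp"] - t_end))
    first_position = start_frame["participants"][participant_id]  # at start time
    second_position = end_frame["participants"][participant_id]   # at end time
    return first_position, second_position
```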
In step 322, a target area is determined based on the first location and the second location.
In some embodiments, the first location and the second location may both fall within the range of the target area, may both fall outside it, or one may fall within the range while the other falls outside, which is not limited.
In some embodiments, the shape of the target area may be regular or irregular, without limitation.
Fig. 4 illustrates another portion of an exemplary flowchart of an evaluation method for an autopilot awareness algorithm according to an embodiment of the present disclosure, applicable to scenarios in which the first traffic participant is a motor vehicle, a non-motor vehicle, or the like traveling along a lane line.
According to some embodiments, as shown in fig. 4, step 322 comprises: moving the first position and the second position to the left a first distance along a direction perpendicular to the connecting line by taking the connecting line of the first position and the second position as a reference to obtain a first corner point and a second corner point (step 4221); moving the first position and the second position to the right a second distance along a direction perpendicular to the connecting line by taking the connecting line of the first position and the second position as a reference to obtain a third corner point and a fourth corner point (step 4222); and taking a rectangular area surrounded by the first corner point, the second corner point, the third corner point and the fourth corner point as a target area (step 4223).
Obtaining the four corner points to enclose a rectangle in this way effectively adapts to the rule that the first traffic participant travels along the lane line, so that the area the first traffic participant may pass through can be more accurately included in the target area. This effectively strengthens the association between the target area and the first traffic participant, and improves the evaluation efficiency while ensuring the accuracy of the evaluation of the automatic driving perception algorithm.
Fig. 5 illustrates a schematic diagram of determining a target area from a first location and a second location according to an embodiment of the present disclosure. As shown in fig. 5, the first position A has coordinates (x_a, y_a) and the second position B has coordinates (x_b, y_b).
In step 4221, the first position A and the second position B are moved leftward by a first distance d_1 along the direction perpendicular to the connecting line AB, taking the connecting line AB of the first position A and the second position B as a reference, to obtain a first corner point C and a second corner point D.
In step 4222, the first position A and the second position B are moved rightward by a second distance d_2 along the direction perpendicular to the connecting line AB to obtain a third corner point E and a fourth corner point F.
In step 4223, a rectangular region CDEF surrounded by the first, second, third and fourth corner points C, D, E and F is taken as the target region.
In some embodiments, the first distance d_1 and the second distance d_2 may be the same or different. In one example, the lane traveled by the first traffic participant is 5 meters wide, and the first distance d_1 and the second distance d_2 may both be set to 5 meters; the target area CDEF is then twice the lane width, so that the route traveled by the first traffic participant is included in the target area CDEF regardless of whether the first traffic participant changes lanes within the first time period.
It should be noted that the above examples are for illustration only and are not limiting; for example, the first distance d_1 may be 0.5 meters, 3 meters, or 10 meters, and the second distance d_2 may likewise be 0.5 meters, 3 meters, or 10 meters, which is not limited.
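As an illustration of steps 4221 through 4223, the following Python sketch computes the four corner points from the positions A and B and the offsets d_1 and d_2. The 2D coordinate convention and all names are assumptions for illustration; the disclosure does not prescribe an implementation.

```python
import math

def target_area_corners(a, b, d1, d2):
    """Sketch of steps 4221-4223: corners C, D (left of AB) and E, F (right
    of AB) of the rectangular target area. Assumes a != b."""
    (xa, ya), (xb, yb) = a, b
    length = math.hypot(xb - xa, yb - ya)
    ux, uy = (xb - xa) / length, (yb - ya) / length  # unit vector along AB
    lx, ly = -uy, ux                                 # left unit normal to AB
    c = (xa + d1 * lx, ya + d1 * ly)  # first corner: A moved left by d1
    d = (xb + d1 * lx, yb + d1 * ly)  # second corner: B moved left by d1
    e = (xa - d2 * lx, ya - d2 * ly)  # third corner: A moved right by d2
    f = (xb - d2 * lx, yb - d2 * ly)  # fourth corner: B moved right by d2
    return c, d, e, f

# Example from the text: a straight 30-meter segment with d1 = d2 = 5 meters,
# giving a target area twice as wide as a 5-meter lane.
corners = target_area_corners((0.0, 0.0), (0.0, 30.0), 5.0, 5.0)
```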
In step 230, the scene data is processed using the autopilot awareness algorithm to obtain simulated test data output by the autopilot awareness algorithm.
In some embodiments, after the simulation test data is obtained, the data frames in the simulation test data may be filtered based on their timestamps, so as to preserve the data frames corresponding to the first time period and delete the other data frames, thereby reducing storage space occupancy.
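A minimal sketch of this timestamp-based filtering, assuming the same frame representation as above; the key name "timestamp" is an illustrative assumption.

```python
def frames_in_first_period(sim_frames, t_start, t_end):
    """Keep only the simulation data frames whose timestamps fall within the
    first time period; the remaining frames are dropped to save storage."""
    return [f for f in sim_frames if t_start <= f["timestamp"] <= t_end]
```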
At step 240, at least one second traffic participant that falls within the target area during the first time period and at least one perception test result corresponding to the at least one second traffic participant are determined based on the simulation test data, wherein each perception test result indicates whether the respective second traffic participant is perceived to pose a driving risk to the host vehicle.
The perception test result of the automatic driving algorithm for the target area within the first time period is obtained based on the perception test results of the second traffic participants that intersect the target area within that period. The evaluation can therefore be performed without establishing a one-to-one correspondence between the traffic participants in the actual road test and those in the simulation test.
However, when the target area covers a larger range, some traffic participants farther from the host vehicle may be present in the target area during the first time period and yet pose no driving risk to the host vehicle. Based on this, fig. 6 shows yet another partial exemplary flowchart of an evaluation method for an autopilot awareness algorithm according to an embodiment of the present disclosure.
According to some embodiments, as shown in fig. 6, step 240 includes: acquiring a plurality of data frames corresponding to the first time period from the simulation test data (step 641); determining, for each data frame, at least one third traffic participant in the data frame having a distance to the host vehicle less than a collision threshold (step 6421); and regarding a third traffic participant, among the at least one third traffic participant, falling into the target area as a second traffic participant (step 6422); and traversing the plurality of data frames to obtain at least one second traffic participant (step 643).
By determining the distance between each third traffic participant and the host vehicle, third traffic participants that pass through the target area but are far from the host vehicle and pose no collision risk can be excluded from the second traffic participants, which further improves the efficiency and accuracy of the evaluation of the automatic driving perception algorithm.
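The following Python sketch combines steps 641 through 643. The collision threshold value, the host-position lookup, and the simplified axis-aligned containment test are illustrative assumptions; an oriented target rectangle would need a projection test instead.

```python
import math

def point_in_rectangle(p, corners):
    # Simplified axis-aligned containment test over the corner points; a
    # rectangle not aligned with the axes would need a projection test.
    xs, ys = [c[0] for c in corners], [c[1] for c in corners]
    return min(xs) <= p[0] <= max(xs) and min(ys) <= p[1] <= max(ys)

def find_second_participants(sim_frames, host_positions, corners, collision_threshold):
    """Sketch of steps 641-643: third traffic participants closer to the host
    vehicle than the collision threshold (step 6421) that also fall within the
    target area (step 6422), collected over all frames (step 643)."""
    second_ids = set()
    for frame in sim_frames:  # frames already filtered to the first time period
        host = host_positions[frame["timestamp"]]
        for pid, pos in frame["participants"].items():
            if (math.dist(pos, host) < collision_threshold
                    and point_in_rectangle(pos, corners)):
                second_ids.add(pid)
    return second_ids
```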
Fig. 7 illustrates yet another exemplary flowchart of a portion of an evaluation method for an autopilot awareness algorithm in accordance with an embodiment of the present disclosure.
According to some embodiments, as shown in fig. 7, step 240 includes: for each second traffic participant, obtaining a first size perceived by the second traffic participant in a first data frame, wherein the first data frame is a data frame of a plurality of data frames having a smallest size perceived by the second traffic participant, the first size including at least one of a first length, a first width, and a first height of the second traffic participant (step 7411); acquiring a second size corresponding to the first size perceived by the second traffic participant in a second data frame, wherein the second data frame is a data frame of the plurality of data frames having a largest perceived size by the second traffic participant, the second size including at least one of a second length, a second width, and a second height of the second traffic participant (step 7412); determining a size change value for the second traffic participant based on the first size and the second size (step 7413); and determining a perception test result corresponding to the second traffic participant based on the time sequence relation of the first data frame and the second data frame and the size change value (step 7414); and traversing the at least one second traffic participant to obtain at least one perception test result (step 742).
For steps 7411 and 7412, the second size needs to correspond to the first size in order to be able to obtain the size change value for the second traffic participant. For example, when the first dimension includes a first height, the second dimension includes at least a second height; as another example, where the first dimension includes a first width and a first length, the second dimension includes at least a second width and a second length, and so on.
In some embodiments, the area or volume of the second traffic participant may also be calculated based on its acquired length, width, and height, so that the area or volume serves as the first size and the second size.
For steps 7413 and 7414, the size change value includes, but is not limited to, the difference between the second size and the first size, the absolute value of that difference, the rate of change from the first size to the second size, and the like.

In one example, the size change value is the difference between the second size and the first size. When the first moment corresponding to the first data frame is earlier than the second moment corresponding to the second data frame, the size change value is positive, and the larger it is, the greater the distance the second traffic participant has moved toward the host vehicle, and the greater the likelihood that the second traffic participant is perceived to pose a driving risk to the host vehicle. When the first moment corresponding to the first data frame is later than the second moment corresponding to the second data frame, the size change value is negative, and the smaller it is, the greater the distance the second traffic participant has moved away from the host vehicle, and the smaller the likelihood that the second traffic participant is perceived to pose a driving risk to the host vehicle. On this basis, the size change value is always positively correlated with the likelihood that the second traffic participant is perceived to pose a driving risk to the host vehicle.
For dynamic traffic participants, the perceived change in size can effectively characterize the likelihood of driving risk to the host vehicle. For example, in a scenario where a vehicle travels along an adjacent lane toward the host vehicle, the greater the perceived size change of the vehicle, the higher the likelihood of it encroaching on the host vehicle's current travel lane, and correspondingly the higher the likelihood of driving risk to the host vehicle. Therefore, based on the perceived size change, the perception test result for a traffic participant can be determined simply and efficiently, improving the evaluation efficiency.
In some embodiments, the size change value may be a length change value, a width change value, and/or a height change value of the second traffic participant; in other embodiments, the area or volume of the second traffic participant may be calculated based on its acquired length, width, and height, and the area change value or the volume change value of the second traffic participant may be used as the size change value, which is not limited.
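A Python sketch of steps 7411 through 7414 for a single second traffic participant, using the perceived volume as the size; the observation tuple layout is an assumed structure, and the sign convention follows the timing relation described above.

```python
def size_change_value(observations):
    """observations: list of (timestamp, length, width, height) tuples
    perceived for one second traffic participant across the data frames."""
    def volume(obs):
        return obs[1] * obs[2] * obs[3]
    first = min(observations, key=volume)   # smallest perceived size (step 7411)
    second = max(observations, key=volume)  # largest perceived size (step 7412)
    change = volume(second) - volume(first)  # size change value (step 7413)
    # Step 7414: apply the timing relation. Perceived small first and large
    # later means the participant appears to approach the host vehicle
    # (positive value); the reverse means it appears to move away (negative).
    return change if first[0] < second[0] else -change
```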
According to some embodiments, step 7414 comprises: performing risk scoring on the second traffic participant based on the size change value, so as to take the score of the risk scoring as the perception test result corresponding to the second traffic participant, wherein the size change value is positively correlated with the score of the risk scoring. And step 250 includes: determining a risk perception score of the automatic driving perception algorithm according to the at least one perception test result, wherein the risk perception score is positively correlated with the scores of the at least one risk scoring corresponding to the at least one perception test result.
Based on the size change of the second traffic participant, the automatic driving perception algorithm can be evaluated in a quantitative mode, so that the difficulty in realizing the evaluation is reduced, and the evaluation efficiency is improved.
In some embodiments, the risk perception score may be an average, a weighted average, a maximum, or a minimum of the at least one risk score, which is not limited.
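A sketch of the quantitative scoring; the linear mapping, the clamping to [0, 1], and the plain average are illustrative choices (the disclosure equally allows a weighted average, maximum, or minimum as the aggregate).

```python
def risk_score(size_change, scale=0.01):
    """Map a size change value to a risk score in [0, 1]; the score is
    positively correlated with the size change value."""
    return max(0.0, min(1.0, 0.5 + scale * size_change))

def risk_perception_score(scores):
    """Aggregate the per-participant risk scores into the risk perception
    score of the autopilot perception algorithm, here as a plain average."""
    return sum(scores) / len(scores)
```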
According to some embodiments, the method 200 further comprises: and returning the scene data, the actual measurement result, the simulation test data and the evaluation result of the perception capability of the automatic driving perception algorithm to the automatic driving perception algorithm as training data so as to update the network model parameters of the automatic driving perception algorithm.
By returning these data and the evaluation result to the automatic driving perception algorithm, the direction of the next update iteration of the algorithm's network model can be effectively determined for adjusting the network parameters, thereby better improving the iteration efficiency and the iteration effect of the algorithm model.
Fig. 8 shows a block diagram of a configuration of an evaluation apparatus for an autopilot awareness algorithm according to an embodiment of the present disclosure.
As shown in fig. 8, according to an embodiment of the present disclosure, there is provided an evaluation apparatus 800 for an autopilot awareness algorithm, including: an obtaining unit 810 configured to obtain scene data and an actual measurement result, where the scene data is obtained when the host vehicle travels in a target scene including the first traffic participant, and the actual measurement result indicates whether the first traffic participant causes a driving risk to the host vehicle in the target scene; a first determination unit 820 configured to determine a target area based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first time period; a processing unit 830 configured to process the scene data using the autopilot awareness algorithm to obtain simulation test data output by the autopilot awareness algorithm; a second determining unit 840 configured to determine, based on the simulation test data, at least one second traffic participant falling within the target area within the first time period and at least one perception test result corresponding to the at least one second traffic participant, wherein each perception test result indicates whether the respective second traffic participant is perceived to pose a driving risk to the host vehicle; and an evaluation unit 850 configured to evaluate the perception capability of the autopilot awareness algorithm based on the road actual measurement result and the at least one perception test result.
Here, the operations of the above units 810 to 850 of the evaluation device 800 for the autopilot sensing algorithm are similar to the operations of the steps 210 to 250 described above, respectively, and are not repeated here.
According to embodiments of the present disclosure, there is also provided an electronic device, a readable storage medium and a computer program product.
Referring to fig. 9, a block diagram of an electronic device 900 that may be a server or a client of the present disclosure, which is an example of a hardware device that may be applied to aspects of the present disclosure, will now be described. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic device 900 includes a computing unit 901 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 902 or a computer program loaded from a storage unit 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data required for the operation of the electronic device 900 can also be stored. The computing unit 901, the ROM 902, and the RAM 903 are connected to each other by a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
A number of components in the electronic device 900 are connected to the I/O interface 905, including: an input unit 906, an output unit 907, a storage unit 908, and a communication unit 909. The input unit 906 may be any type of device capable of inputting information to the electronic device 900, the input unit 906 may receive input numeric or character information and generate key signal inputs related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 907 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, video/audio output terminals, vibrators, and/or printers. Storage unit 908 may include, but is not limited to, magnetic disks, optical disks. The communication unit 909 allows the electronic device 900 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth devices, 802.11 devices, wiFi devices, wiMax devices, cellular communication devices, and/or the like.
The computing unit 901 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 901 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning network algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 901 performs the various methods and processes described above, such as the method 200. In some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 908. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 900 via the ROM 902 and/or the communication unit 909. When the computer program is loaded into the RAM 903 and executed by the computing unit 901, one or more of the steps of the method 200 described above may be performed. Alternatively, in other embodiments, the computing unit 901 may be configured to perform the method 200 by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples, but is defined only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in a different order than described in the present disclosure. Further, various elements of the embodiments or examples may be combined in various ways. It should be appreciated that as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (17)

1. An evaluation method for an autopilot awareness algorithm, comprising:
acquiring scene data and an actual measurement result, wherein the scene data is acquired when a main vehicle runs in a target scene comprising a first traffic participant, and the actual measurement result indicates whether the first traffic participant can cause driving risk to the main vehicle in the target scene;
determining a target area based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first time period;
processing the scene data by using the automatic driving perception algorithm to obtain simulation test data output by the automatic driving perception algorithm;
determining at least one second traffic participant falling within the target area within the first time period and at least one perception test result corresponding to the at least one second traffic participant based on the simulation test data, wherein each perception test result indicates whether the corresponding second traffic participant is perceived to pose a driving risk to the host vehicle; and
evaluating the perception capability of the automatic driving perception algorithm based on the road actual measurement result and the at least one perception test result.
2. The method of claim 1, wherein the determining a target region based on the scene data comprises:
determining a first location of the first traffic participant at a start time of the first time period and a second location at an end time of the first time period based on the scene data; and
determining the target area based on the first location and the second location.
3. The method of claim 2, wherein the determining the target area based on the first location and the second location comprises:
translating the first location and the second location leftward by a first distance in a direction perpendicular to a line connecting the first location and the second location, to obtain a first corner point and a second corner point;
translating the first location and the second location rightward by a second distance in the direction perpendicular to the connecting line, to obtain a third corner point and a fourth corner point; and
taking the rectangular area bounded by the first corner point, the second corner point, the third corner point, and the fourth corner point as the target area.
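Geometrically, claims 2 and 3 sweep the first traffic participant's start and end locations sideways to span a rectangle around its likely path. A minimal sketch, assuming planar (x, y) coordinates and hypothetical names:

import numpy as np

def target_area(start: np.ndarray, end: np.ndarray,
                first_distance: float, second_distance: float) -> np.ndarray:
    """Corner points of the rectangular target area obtained by offsetting
    the start/end locations to either side of their connecting line."""
    direction = end - start
    length = np.linalg.norm(direction)
    if length == 0.0:
        raise ValueError("start and end locations coincide")
    unit = direction / length
    left = np.array([-unit[1], unit[0]])       # unit normal to the left
    corner1 = start + first_distance * left    # first corner point
    corner2 = end + first_distance * left      # second corner point
    corner3 = start - second_distance * left   # third corner point
    corner4 = end - second_distance * left     # fourth corner point
    return np.stack([corner1, corner2, corner3, corner4])

For example, target_area(np.array([0.0, 0.0]), np.array([10.0, 0.0]), 2.0, 2.0) spans a 10 x 4 rectangle centered on the segment from (0, 0) to (10, 0).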
4. The method of any of claims 1-3, wherein the determining at least one second traffic participant that falls within the target area within the first time period comprises:
acquiring a plurality of data frames corresponding to the first time period from the simulation test data;
for each data frame:
determining at least one third traffic participant in the data frame whose distance to the host vehicle is less than a collision threshold; and
taking, from the at least one third traffic participant, each third traffic participant that falls within the target area as a second traffic participant; and
traversing the plurality of data frames to obtain the at least one second traffic participant.
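Claim 4 applies a two-stage filter per frame: a distance gate against the collision threshold, then an area gate against the target area. A sketch under assumed frame and attribute names:

import math
from dataclasses import dataclass

@dataclass
class Perceived:
    track_id: int
    x: float
    y: float

def find_second_participants(frames, host_positions, in_target_area,
                             collision_threshold: float):
    """frames: per-frame lists of Perceived objects from the simulation test
    data; host_positions: per-frame (x, y) of the host vehicle;
    in_target_area: point-in-rectangle test for the target area."""
    found = {}
    for participants, (hx, hy) in zip(frames, host_positions):
        for p in participants:
            # Third traffic participant: closer to the host than the threshold.
            if math.hypot(p.x - hx, p.y - hy) >= collision_threshold:
                continue
            # Second traffic participant: additionally inside the target area.
            if in_target_area(p.x, p.y):
                found[p.track_id] = p
    return list(found.values())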
5. The method of claim 4, wherein the determining at least one perception test result corresponding to the at least one second traffic participant comprises:
for each second traffic participant:
acquiring a first size of the second traffic participant as perceived in a first data frame, wherein the first data frame is the data frame, among the plurality of data frames, in which the perceived size of the second traffic participant is smallest, and the first size comprises at least one of a first length, a first width, and a first height of the second traffic participant;
acquiring a second size of the second traffic participant as perceived in a second data frame, the second size corresponding to the first size, wherein the second data frame is the data frame, among the plurality of data frames, in which the perceived size of the second traffic participant is largest, and the second size comprises at least one of a second length, a second width, and a second height of the second traffic participant;
determining a size change value for the second traffic participant based on the first size and the second size; and
determining a perception test result corresponding to the second traffic participant based on a temporal relationship between the first data frame and the second data frame and on the size change value; and
traversing the at least one second traffic participant to obtain the at least one perception test result.
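One reading of claim 5: rank the participant's perceived sizes across the frames, take the difference between the extremes, and record which extreme occurred first. The volume-based ordering below is an assumption; the claim leaves the size-ordering criterion open:

def size_change(perceived_sizes: dict[int, tuple[float, float, float]]):
    """perceived_sizes maps frame index -> (length, width, height) of one
    second traffic participant. Returns the per-dimension size change and
    whether the smallest-size frame precedes the largest-size frame."""
    volume = lambda lwh: lwh[0] * lwh[1] * lwh[2]
    first_frame, first_size = min(perceived_sizes.items(),
                                  key=lambda kv: volume(kv[1]))
    second_frame, second_size = max(perceived_sizes.items(),
                                    key=lambda kv: volume(kv[1]))
    change = tuple(b - a for a, b in zip(first_size, second_size))
    growing = first_frame < second_frame  # temporal relation of the extremes
    return change, growing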
6. The method of claim 5, wherein the determining a perception test result corresponding to the second traffic participant based on the size change value comprises:
scoring the risk posed by the second traffic participant based on the size change value, and taking the resulting risk score as the perception test result corresponding to the second traffic participant, wherein the size change value is positively correlated with the risk score;
and wherein the evaluating the perception capability of the automatic driving perception algorithm based on the actual measurement result and the at least one perception test result comprises:
determining a risk perception score of the automatic driving perception algorithm according to the at least one perception test result, wherein the risk perception score is positively correlated with the at least one risk score corresponding to the at least one perception test result.
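Claim 6 constrains only monotonicity: the risk score must rise with the size change value, and the algorithm-level risk perception score must rise with the individual scores. A saturating curve and a mean are one hypothetical choice satisfying both:

def risk_score(size_change_value: float, scale: float = 1.0) -> float:
    """Monotonically increasing map from a non-negative size change value
    to a score in [0, 1)."""
    return size_change_value / (size_change_value + scale)

def risk_perception_score(scores: list[float]) -> float:
    """Aggregate per-participant risk scores; the mean preserves the
    positive correlation required by the claim."""
    return sum(scores) / len(scores) if scores else 0.0

Under this choice, a larger size change value always yields a larger risk score, as the claim requires.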
7. The method of any of claims 1-6, further comprising:
feeding the scene data, the actual measurement result, the simulation test data, and the evaluation result of the perception capability of the automatic driving perception algorithm back to the automatic driving perception algorithm as training data, so as to update network model parameters of the automatic driving perception algorithm.
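Claim 7 closes the loop by recycling the evaluation artifacts as training data. A minimal sketch, assuming the perception algorithm exposes a hypothetical update_parameters hook:

def feed_back(algorithm, scene_data, actual_result, sim_data, evaluation):
    """Bundle the evaluation artifacts into one training sample and hand it
    back to the perception algorithm to update its network model parameters."""
    sample = {
        "scene_data": scene_data,
        "actual_measurement_result": actual_result,
        "simulation_test_data": sim_data,
        "evaluation_result": evaluation,
    }
    algorithm.update_parameters([sample])  # hypothetical training hook
    return sample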
8. An evaluation apparatus for an automatic driving perception algorithm, comprising:
an acquisition unit configured to acquire scene data and an actual measurement result, wherein the scene data is acquired while a host vehicle travels in a target scene comprising a first traffic participant, and the actual measurement result indicates whether the first traffic participant poses a driving risk to the host vehicle in the target scene;
a first determining unit configured to determine a target area based on the scene data, wherein the target area indicates an area that the first traffic participant may pass through within a first time period;
a processing unit configured to process the scene data using the automatic driving perception algorithm to obtain simulation test data output by the automatic driving perception algorithm;
a second determining unit configured to determine, based on the simulation test data, at least one second traffic participant that falls within the target area within the first time period and at least one perception test result corresponding to the at least one second traffic participant, wherein each perception test result indicates whether the respective second traffic participant is perceived to pose a driving risk to the host vehicle; and
an evaluation unit configured to evaluate the perception capability of the automatic driving perception algorithm based on the actual measurement result and the at least one perception test result.
9. The apparatus of claim 8, wherein the first determining unit comprises:
a first determining subunit configured to determine, based on the scene data, a first location of the first traffic participant at a start time of the first time period and a second location of the first traffic participant at an end time of the first time period; and
a second determining subunit configured to determine the target area based on the first location and the second location.
10. The apparatus of claim 9, wherein the second determining subunit comprises:
a first corner subunit configured to translate the first location and the second location leftward by a first distance in a direction perpendicular to a line connecting the first location and the second location, to obtain a first corner point and a second corner point;
a second corner subunit configured to translate the first location and the second location rightward by a second distance in the direction perpendicular to the connecting line, to obtain a third corner point and a fourth corner point; and
a third determining subunit configured to take the rectangular area bounded by the first corner point, the second corner point, the third corner point, and the fourth corner point as the target area.
11. The apparatus of any of claims 8-10, wherein the second determining unit comprises:
an obtaining subunit configured to obtain a plurality of data frames corresponding to the first time period from the simulation test data;
a first processing subunit configured to, for each data frame:
determine at least one third traffic participant in the data frame whose distance to the host vehicle is less than a collision threshold; and
take, from the at least one third traffic participant, each third traffic participant that falls within the target area as a second traffic participant; and
a first traversing subunit configured to traverse the plurality of data frames to obtain the at least one second traffic participant.
12. The apparatus of claim 11, wherein the second determining unit further comprises:
a second processing subunit configured to, for each second traffic participant:
acquire a first size of the second traffic participant as perceived in a first data frame, wherein the first data frame is the data frame, among the plurality of data frames, in which the perceived size of the second traffic participant is smallest, and the first size comprises at least one of a first length, a first width, and a first height of the second traffic participant;
acquire a second size of the second traffic participant as perceived in a second data frame, the second size corresponding to the first size, wherein the second data frame is the data frame, among the plurality of data frames, in which the perceived size of the second traffic participant is largest, and the second size comprises at least one of a second length, a second width, and a second height of the second traffic participant;
determine a size change value for the second traffic participant based on the first size and the second size; and
determine a perception test result corresponding to the second traffic participant based on a temporal relationship between the first data frame and the second data frame and on the size change value; and
traverse the at least one second traffic participant to obtain the at least one perception test result.
13. The apparatus of claim 12, wherein the second processing subunit comprises:
a scoring subunit configured to score the risk posed by the second traffic participant based on the size change value, and to take the resulting risk score as the perception test result corresponding to the second traffic participant, wherein the size change value is positively correlated with the risk score;
and wherein the evaluation unit comprises:
a third determining subunit configured to determine a risk perception score of the automatic driving perception algorithm according to the at least one perception test result, wherein the risk perception score is positively correlated with the at least one risk score corresponding to the at least one perception test result.
14. The apparatus of any of claims 8-13, further comprising:
an updating unit configured to feed the scene data, the actual measurement result, the simulation test data, and the evaluation result of the perception capability of the automatic driving perception algorithm back to the automatic driving perception algorithm as training data, so as to update network model parameters of the automatic driving perception algorithm.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions enabling the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-7.
CN202310451494.4A 2023-04-24 2023-04-24 Evaluation method, device, equipment and medium for automatic driving perception algorithm Pending CN116466685A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310451494.4A CN116466685A (en) 2023-04-24 2023-04-24 Evaluation method, device, equipment and medium for automatic driving perception algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310451494.4A CN116466685A (en) 2023-04-24 2023-04-24 Evaluation method, device, equipment and medium for automatic driving perception algorithm

Publications (1)

Publication Number Publication Date
CN116466685A (en) 2023-07-21

Family

ID=87182202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310451494.4A Pending CN116466685A (en) 2023-04-24 2023-04-24 Evaluation method, device, equipment and medium for automatic driving perception algorithm

Country Status (1)

Country Link
CN (1) CN116466685A (en)

Similar Documents

Publication Publication Date Title
CN114179832B (en) Lane changing method for automatic driving vehicle
CN113887400B (en) Obstacle detection method, model training method and device and automatic driving vehicle
CN114758502B (en) Dual-vehicle combined track prediction method and device, electronic equipment and automatic driving vehicle
CN115082690B (en) Target recognition method, target recognition model training method and device
CN115366920A (en) Decision method and apparatus, device and medium for autonomous driving of a vehicle
CN115556769A (en) Obstacle state quantity determination method and device, electronic device and medium
CN115019060A (en) Target recognition method, and training method and device of target recognition model
CN114047760B (en) Path planning method and device, electronic equipment and automatic driving vehicle
CN115164936A (en) Global pose correction method and device for point cloud splicing in high-precision map manufacturing
CN117724361A (en) Collision event detection method and device applied to automatic driving simulation scene
CN114394111B (en) Lane changing method for automatic driving vehicle
CN115861953A (en) Training method of scene coding model, and trajectory planning method and device
CN113850909B (en) Point cloud data processing method and device, electronic equipment and automatic driving equipment
CN115675528A (en) Automatic driving method and vehicle based on similar scene mining
CN115235487A (en) Data processing method and device, equipment and medium
CN114970112A (en) Method and device for automatic driving simulation, electronic equipment and storage medium
CN116466685A (en) Evaluation method, device, equipment and medium for automatic driving perception algorithm
CN116311943B (en) Method and device for estimating average delay time of intersection
CN115583243B (en) Method for determining lane line information, vehicle control method, device and equipment
CN115019278B (en) Lane line fitting method and device, electronic equipment and medium
CN114333368B (en) Voice reminding method, device, equipment and medium
CN114179834B (en) Vehicle parking method, device, electronic equipment, medium and automatic driving vehicle
CN116580367A (en) Data processing method, device, electronic equipment and storage medium
CN116469069A (en) Scene coding model training method, device and medium for automatic driving
CN114670839A (en) Method and device for evaluating driving behavior of automatic driving vehicle and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination