CN115923820A - Scene data collection method and device for automatic driving system of vehicle - Google Patents

Scene data collection method and device for automatic driving system of vehicle

Info

Publication number
CN115923820A
CN115923820A (application CN202310102124.XA)
Authority
CN
China
Prior art keywords
vehicle
scene
scene data
condition
data collection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310102124.XA
Other languages
Chinese (zh)
Inventor
邓勇章
林龙贤
王嘉浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Weilai Automobile Technology Anhui Co Ltd
Original Assignee
Weilai Automobile Technology Anhui Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Weilai Automobile Technology Anhui Co Ltd filed Critical Weilai Automobile Technology Anhui Co Ltd
Priority to CN202310102124.XA priority Critical patent/CN115923820A/en
Publication of CN115923820A publication Critical patent/CN115923820A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application relates to a scene data collection method and device for an automatic driving system of a vehicle. The scene data collection method comprises the following steps: acquiring scene data; determining whether the acquired scene data meets a scene trigger condition; if the scene trigger condition is determined to be met, determining whether a dangerous scene occurs based on the acquired scene data; and if the dangerous scene is determined to occur, storing the associated scene data for use by the automatic driving system. The scene data collection method and device can convert unknown dangerous scenes into known safe scenes, thereby continuously improving the intended functional safety characteristics of the automatic driving system.

Description

Scene data collection method and device for automatic driving system of vehicle
Technical Field
The present application relates to the field of vehicles, and in particular, to a scene data collection method and apparatus for an automatic driving system of a vehicle.
Background
The advent of autonomous vehicles will reshape future transportation and travel, giving people greater freedom of movement. Driver assistance and unmanned driving technologies are regarded as powerful means of reducing traffic accidents, and in recent years numerous companies have joined the race to explore automatic driving technology. Implementing automatic driving requires rich intelligence capabilities, including algorithms fed by inputs from multiple sensors, intercommunication among the multiple electronic control units within the vehicle, and complex scene-based algorithms.
Autonomous vehicles typically perceive most of the external environment in 360 degrees through multiple cameras, lidar, radar, and the like, and use various deep learning or deep neural network (DNN) algorithms. Even so, system failures remain possible due to "unknown, unsafe" scenes. The safety of the intended functionality (SOTIF) of an automatic driving system helps avoid unreasonable risks and hazards caused by functional insufficiencies of the actually intended function or by reasonably foreseeable human mishandling.
Disclosure of Invention
Embodiments of the present application provide a scene data collection method and apparatus for an autonomous driving system of a vehicle, for converting unknown dangerous scenes into known safe scenes, thereby continuously improving the intended functional safety characteristics of the autonomous driving system.
According to an aspect of the present application, there is provided a scene data collection method for an automatic driving system of a vehicle, the scene data collection method including: acquiring scene data; determining whether the acquired scene data meets a scene trigger condition; under the condition that the scene triggering condition is determined to be met, determining whether a dangerous scene occurs or not based on the acquired scene data; and storing associated scene data for use by the autonomous driving system in the event that the hazardous scene is determined to be present.
In some embodiments of the present application, optionally, the scene data comprises one or more of: the running speed of the vehicle, the communication situation between the vehicle and other vehicles, the communication situation between the vehicle and a remote vehicle management system, the interconnection situation between the vehicle and a road section sensor, the road situation where the vehicle is located, the sign situation around the vehicle, the obstacle situation around the vehicle, the traffic flow limitation situation where the vehicle is located, the weather situation where the vehicle is located, the illumination situation around the vehicle, and the location situation where the vehicle is located.
In some embodiments of the present application, optionally, the scenario trigger condition includes an indicator of one or more of:
the running speed of the vehicle, the communication condition between the vehicle and other vehicles, the communication condition between the vehicle and a remote vehicle management system, the interconnection condition between the vehicle and a road section sensor, the road condition of the vehicle, the sign condition of the periphery of the vehicle, the obstacle condition of the periphery of the vehicle, the traffic flow limit condition of the vehicle, the weather condition of the vehicle, the illumination condition of the periphery of the vehicle and the position condition of the position of the vehicle.
In some embodiments of the present application, optionally, determining whether a dangerous scene occurs based on the acquired scene data comprises: determining whether the operating behavior of the automatic driving system is consistent with that of a human driver, and determining that the dangerous scene occurs if they are judged inconsistent; and/or determining whether the vehicle is at risk of collision, and determining that the dangerous scene occurs if it is judged that the collision risk exists.
In some embodiments of the present application, optionally, determining whether the automated driving system is consistent with the operational behavior of the human driver comprises: determining whether a deviation between a lateral control command of a human driver and a lateral control command of the autonomous driving system exceeds a lateral control threshold; determining whether a deviation between a longitudinal control command of a human driver and a longitudinal control command of the autonomous driving system exceeds a longitudinal control threshold; and in the event that it is determined that the lateral control threshold or the longitudinal control threshold is exceeded, determining that the operating behavior of the autonomous driving system and the human driver are inconsistent.
In some embodiments of the application, optionally, the lateral control command comprises a lateral angle value for the vehicle and the longitudinal control command comprises a longitudinal acceleration value for the vehicle.
In some embodiments of the present application, optionally, determining whether the vehicle is at risk of collision comprises: determining, based on the distance between the vehicle and a preceding vehicle and on the speed and acceleration of the vehicle, whether the vehicle is at risk of collision with the preceding vehicle under the current driving conditions.
In some embodiments of the application, optionally, determining whether a dangerous scene occurs based on the acquired scene data comprises: uploading the acquired scene data to a cloud, so that the cloud determines whether a dangerous scene occurs.
According to another aspect of the present application, there is provided a scene data collection device for an autonomous driving system of a vehicle, the scene data collection device comprising a memory configured to store instructions; and a processor configured to execute the instructions to cause the scene data collection apparatus to perform any one of the scene data collection methods as described in the foregoing.
According to a further aspect of the application, there is provided a vehicle comprising a scene data collection device as described in any of the foregoing.
According to yet a further aspect of the present application, there is provided a computer readable storage medium having stored therein instructions which, when executed by a processor, cause the processor to perform any of the scene data collection methods as described hereinbefore.
The method and apparatus of the present application use the shadow mode function in the automatic driving controller to collect data on unknown dangerous scenes automatically and flexibly, and improve the safety of the automatic driving system in the corresponding scenes by identifying those unknown dangerous scenes.
Drawings
The above and other objects and advantages of the present application will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, in which like or similar elements are designated by like reference numerals.
FIG. 1 shows a schematic diagram of relevant scenarios and configurations for implementing a scenario data collection method according to an embodiment of the present application;
FIG. 2 shows a flow diagram of a method of scene data collection according to an embodiment of the present application;
fig. 3 shows a schematic view of a scene data collection apparatus according to an embodiment of the application.
Detailed Description
For the purposes of brevity and explanation, the principles of the present application are described herein primarily with reference to exemplary embodiments. However, those skilled in the art will readily recognize that the same principles are equally applicable to all types of scene data collection methods and apparatus for automatic driving systems of vehicles, and that these same or similar principles may be implemented therein, with any such variations not departing from the true spirit and scope of the present application.
The core criterion in designing for the safety of the intended functionality of an automatic driving system is to make the probability of an unknown dangerous scene occurring sufficiently small. Therefore, how to efficiently collect unknown dangerous scenes and convert them into known safe scenes is key to improving the intended functional safety characteristics of the automatic driving system. The scene data collection method 200 (see fig. 2) of the present application can flexibly and automatically transform unknown dangerous scenes into known safe scenes, thereby improving the intended functional safety characteristics of the autopilot system. A related scenario and configuration for implementing the scene data collection method 200 of one embodiment of the present application is described below in conjunction with fig. 1.
FIG. 1 shows a schematic diagram of relevant scenarios and configurations for implementing a scene data collection method 200 according to one embodiment of the present application. As shown in fig. 1, the relevant scenarios and configurations for implementing the scene data collection method 200 include a vehicle 100, an external driving environment 110, a cloud 120, and a human driver 170. In an embodiment of the present application, the human driver 170 is seated in the cockpit of the vehicle 100 and can thus observe and gather information about the external driving environment 110 by sight and hearing. For ease of illustration, fig. 1 shows the human driver 170 inside the vehicle 100.
The vehicle 100 may include an autopilot system 130, a steering system 140, a braking system 150, and an onboard communication module 160. The steering system 140 is used to change or restore the driving direction of the vehicle 100 and may include a steering operating mechanism (including a steering wheel, a steering shaft, etc.), a steering gear, and a steering transmission mechanism. The braking system 150 is used to control the traveling speed of the vehicle 100 and may include power supply devices, control devices, transmission devices, and brakes. That is, the steering system 140 and the braking system 150 may control the lateral and longitudinal movement of the vehicle 100, respectively.
The autopilot system 130 may include a perception sensor 132 and an autopilot controller 134 for implementing autopilot and/or driving assist functions. The perception sensor 132 may collect the state of the external driving environment 110 and the vehicle 100 to acquire scene data. Autopilot controller 134 may include an autopilot module 135, a condition trigger module 136, and a shadow module 137. The autopilot module 135 may control the travel state of the vehicle 100 by controlling the execution of the steering system 140 and the braking system 150.
In some embodiments, the vehicle 100 may be driven in a manual driving mode or an automatic driving mode. During manual driving of the vehicle 100, the human driver 170 may control the driving of the vehicle 100 according to the driving task and the environmental information collected by the human driver 170. During the execution of the autopilot function by the autopilot system 130, the autopilot module 135 of the autopilot controller 134 may control lateral and longitudinal movement of the vehicle 100 based on the scene data collected by the perception sensors 132 and the dynamic driving tasks.
The scene data may be data regarding the state of the external driving environment 110 and the vehicle 100. In some embodiments, the scene data collected by the perception sensors 132 includes one or more of: the running speed of the vehicle 100, the communication situation between the vehicle 100 and other vehicles, the communication situation between the vehicle 100 and a remote vehicle management system, the interconnection situation between the vehicle 100 and road section sensors on the road section on which it is running, the road situation where the vehicle 100 is located, the sign situation around the vehicle 100, the obstacle situation around the vehicle 100, the traffic flow limitation situation where the vehicle 100 is located, the weather situation where the vehicle 100 is located, the lighting situation around the vehicle 100, the location situation where the vehicle 100 is located, and the like. Depending on the types of perception sensors 132 installed, the vehicle 100 may acquire different categories of scene data. Table 1 shows the scene data types of some embodiments and the types of perception sensors 132 required to acquire the corresponding scene data.
TABLE 1 scene data and corresponding perception sensors
(Table 1 is reproduced as images in the original publication; its categories, subdivision types, and corresponding perception sensors are summarized in the following paragraph.)
As shown in table 1, the vehicle 100 may include one or more types of perception sensors 132: a Global Navigation Satellite System (GNSS), a high-precision map, a vehicle speed sensor, cameras, lidar, a rain/light sensor, a temperature sensor, and the wireless communication (V2X) module of the vehicle 100. Correspondingly, the acquired scene data may fall into one or more of the following categories: infrastructure, surrounding objects, driving restrictions, interconnection conditions, environmental conditions, and regional location. Each category of scene data may include a plurality of subdivision types, and each subdivision type may in turn include a plurality of detailed types. For example, as shown in table 1, scene data of the infrastructure category may be acquired through perception sensors 132 such as GNSS and the high-precision map, where a subdivision type of that scene data may be the type of road on which the vehicle 100 is traveling. That is, through GNSS and the high-precision map, the type of road on which the vehicle 100 is currently traveling is known, such as an expressway, an overpass, a national road, a provincial road, a county road, and the like. As the external driving environment 110 changes with the travel of the vehicle 100, the perception sensors 132 can acquire continuously updated scene data in real time.
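For illustration only (not part of the original disclosure), the grouping of Table 1 can be modeled as a plain record with one field per scene-data category; all field names and types in this sketch are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SceneSample:
    """One time-stamped scene-data sample; one field per Table 1 category (illustrative)."""
    timestamp: float                         # acquisition time, seconds
    speed_kmh: float                         # vehicle speed, from the vehicle speed sensor
    road_type: Optional[str] = None          # infrastructure, e.g. "expressway" (GNSS + HD map)
    obstacles: List[str] = field(default_factory=list)   # surrounding objects (camera, lidar)
    speed_limit_kmh: Optional[float] = None  # driving restrictions (camera, HD map)
    v2x_connected: bool = False              # interconnection with road-segment sensors (V2X)
    weather: Optional[str] = None            # environmental conditions (rain/light, temperature)
    region: Optional[str] = None             # regional location (GNSS)
```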
In some embodiments, the autopilot controller 134 may implement a shadow mode function through the shadow module 137. The term "shadow mode function" as used herein means that, while the human driver 170 is driving the vehicle 100, the autonomous driving system 130 of the vehicle 100 keeps running but does not participate in controlling the vehicle 100; it only verifies the decision algorithm running in the autonomous driving module 135. That is, under the shadow mode function of the autopilot system 130, the autopilot module 135 may continue to make simulated decisions through its algorithms, so that the shadow module 137 can compare the results of those simulated decisions with the operating behavior of the human driver 170.
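A minimal sketch of the shadow-mode idea follows; the object and method names (perception.read, autopilot.decide) are assumptions, not the actual interfaces of the autopilot controller 134:

```python
def shadow_mode_step(autopilot, perception, human_command, log):
    """One shadow-mode cycle: the decision algorithm runs but never actuates (sketch)."""
    scene = perception.read()             # scene data from the perception sensors
    simulated = autopilot.decide(scene)   # simulated decision, e.g. steering angle + acceleration
    # Nothing is sent to the steering or braking systems here; the simulated
    # decision is only recorded next to the command the human driver actually
    # executed, so the two operating behaviors can be compared later.
    log.append({"scene": scene, "autopilot": simulated, "human": human_command})
```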
In some embodiments, autopilot controller 134 may determine, via condition trigger module 136, whether the scene data acquired via perception sensor 132 satisfies the scene trigger condition. Alternatively, the scene trigger condition may be preset in the automatic driving controller 134 (e.g., the shadow module 137 therein) by the manufacturer at the time of vehicle shipment, or may be reset therein by the user or the manufacturer as needed. It should be appreciated that the context trigger condition may correspond to context data acquired by the perception sensor 132. In some embodiments, the indicators of the scenario trigger conditions may be selected from the type indicators of the possible scenario data listed in table 1.
Alternatively, the scene trigger condition may relate to only one type indicator, or to multiple type indicators at the same time. The scene data satisfies the scene trigger condition when all the type indicators in the condition are satisfied. Each type indicator of a scene trigger condition may be acquired by a respective perception sensor 132. For example, the scene trigger condition may be: the regional location is Beijing, the weather is heavy rain, the vehicle speed is greater than 80 km/h, and the road type is expressway. Accordingly, whether the vehicle 100 is operating in Beijing may be determined by the GNSS positioning system; rainfall data may be collected by the rain/light sensor and judged against the ODD (operational design domain) indicators to decide whether heavy-rain weather currently exists; whether the vehicle speed is greater than 80 km/h may be judged by the vehicle speed sensor; and whether the vehicle 100 is on an expressway may be judged jointly by the GNSS positioning system and the high-precision map. That is, for each combination of indicators in the scene trigger condition, the corresponding perception sensors 132 can be identified.
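As a sketch, the combined example condition above (regional location Beijing, heavy rain, speed above 80 km/h, expressway) reduces to a conjunction of per-indicator predicates over a scene sample; this reuses the illustrative SceneSample record from the earlier sketch:

```python
def meets_trigger(sample: SceneSample) -> bool:
    """Scene trigger condition from the example: all type indicators must hold (sketch)."""
    return (
        sample.region == "Beijing"              # GNSS positioning
        and sample.weather == "heavy_rain"      # rain/light sensor, judged against the ODD
        and sample.speed_kmh > 80.0             # vehicle speed sensor
        and sample.road_type == "expressway"    # GNSS + high-precision map
    )
```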
In some embodiments, the scene trigger condition may be determined according to the user's usage habits for the vehicle 100 and/or the performance improvement requirements for the intended functional safety of the autonomous driving system 130, so as to purposefully supplement scene data and improve the intended functional safety characteristics of the autonomous driving system 130. In terms of the user's usage requirements, for example, for a user who frequently drives to a school, the scene trigger condition may be set as: the geofence of the area where the vehicle 100 is located is a school zone. Based on this scene trigger condition, when the scene data acquired by the perception sensors 132 of the vehicle 100 indicates that the geofence of the area where the vehicle 100 is located is a school zone, it may be determined that the scene data satisfies the scene trigger condition. In terms of performance improvement requirements for the intended functional safety of the autopilot system 130, for example, to supplement data on the interconnection between the vehicle 100 and road segment sensors under snowy conditions, the scene trigger condition may be set as: the weather outside the vehicle 100 is snow and the vehicle 100 is interconnected with road segment sensors. Based on this scene trigger condition, when the scene data acquired by the perception sensors 132 includes data indicating that the weather is snow and that the vehicle 100 is interconnected with road segment sensors, it may be determined that the current scene data satisfies the scene trigger condition. Setting the scene trigger condition to a combination of the vehicle interconnection situation and a specific weather situation associates the two, so that the autopilot system 130 learns how the interconnection operates under that weather condition, improving the intended functional safety performance of the autopilot system 130.
Scene data regarding known safety scenarios is a key factor that the autopilot system 130 uses to improve the intended functional safety characteristics of the autopilot of the vehicle 100. For scene data of a particular scene that satisfies the scene trigger condition, the method 200 for implementing scene data collection of the present application may supplement the database of scene data of the autopilot system 130 by screening out scene data therein regarding unknown hazardous scenes. That is, the method 200 for implementing scene data collection of the present application may continuously accumulate unknown hazardous scenes to transform the unknown hazardous scenes into known safe scenes, thereby expanding the scene data of the known safe scenes. As referred to herein, a "hazardous scene" refers to a related scene that the autopilot system 130 has not previously identified. By accumulating scene data for unknown hazardous scenes, the autopilot system 130 may translate the hazardous scene into a safe scene based on machine learning.
The vehicle 100 may communicate with the cloud 120 via the in-vehicle communication module 160. Optionally, the cloud 120 may be embodied in the form of a cloud server. As shown in fig. 1, cloud 120 may include a data storage module 122 and a scene filtering module 124. The scene filtering module 124 may be configured to determine whether a dangerous scene occurs in the scene data meeting the scene trigger condition, and the data storage module 122 may be configured to store the scene data for the automatic driving system 130 to use. Some embodiments of the present application may determine, through the cloud 120 (e.g., the scene filtering module 124), whether a dangerous scene occurs in the scene data that satisfies the scene triggering condition. In other embodiments, it may also be determined by the autopilot controller 134 whether a dangerous scene occurs in the scene data that satisfies the scene trigger condition.
Optionally, for the scene data of a specific scene satisfying the scene trigger condition, it may be determined whether a dangerous scene occurs therein based on the following filtering conditions:
(i) under the shadow mode function, determining whether the operating behavior of the automatic driving system 130 is consistent with that of the human driver 170, and determining that a dangerous scene occurs in the collected scene data if they are judged inconsistent; and/or
(ii) determining whether the vehicle 100 is at risk of collision, and determining that a dangerous scene occurs if it is judged that such a risk exists.
In some embodiments, the scene screening module 124 of the cloud 120 may screen the scene data to be screened based on the above two screening conditions, and as long as the scene data to be screened satisfies any one of the screening conditions, it may be determined that a dangerous scene occurs in the scene data to be screened.
In some embodiments, for filtering condition (i) above, whether the operating behaviors of the automatic driving system 130 and the human driver 170 are consistent may be determined by separately monitoring whether the lateral control commands and the longitudinal control commands that the automatic driving system 130 and the human driver 170 issue for the vehicle 100 are consistent. Alternatively, the lateral control command may include a control command for the steering system 140 of the vehicle 100 (e.g., a lateral angle value), and the longitudinal control command may include a control command for the braking system 150 of the vehicle 100 (e.g., a longitudinal acceleration value).
For example, the following equation (1) may be used to determine whether the lateral control commands for the vehicle 100 by the autopilot system 130 and the human driver 170 are consistent:
(|M_h - N_a| / M_h) * 100% > x%   (1)
where M_h is the lateral angle value executed by the human driver 170, N_a is the lateral angle value determined by the autopilot system 130, and x% is the lateral control threshold. The lateral control threshold may be adjusted according to the actual situation; optionally, it is any value from 10% to 40%, e.g., 20%, 25%, etc. If it is determined, based on the scene data to be screened, that formula (1) holds (i.e., the deviation between the lateral angle values of the automatic driving system 130 and the human driver 170 is greater than the lateral control threshold), it may be determined that a dangerous scene occurs in the scene data to be screened.
For example, the following equation (2) may be used to determine whether the longitudinal control commands of the autopilot system 130 and the human driver 170 for the vehicle 100 are consistent:
(|L_h - L_a| / L_h) * 100% > y%   (2)
where L_h is the longitudinal acceleration value executed by the human driver 170, L_a is the longitudinal acceleration value determined by the autopilot system 130, and y% is the longitudinal control threshold. The longitudinal control threshold may be adjusted according to the actual situation; optionally, it is any value from 10% to 40%, e.g., 20%, 25%, etc. If it is determined, based on the scene data to be screened, that formula (2) holds (i.e., the deviation between the longitudinal acceleration values of the automatic driving system 130 and the human driver 170 is greater than the longitudinal control threshold), it may be determined that a dangerous scene occurs in the scene data to be screened.
In some embodiments, for the above filtering condition (ii), it may be determined whether the vehicle 100 is at risk of collision based on the distance between the vehicle 100 and the preceding vehicle, which is acquired from the scene data to be filtered, and the speed and acceleration of the vehicle 100.
For example, the following equation (3) may be used to determine whether the vehicle 100 is at risk of collision:
V^2 / (2a) > D   (3)
where V is the speed of the vehicle 100 at that moment, a is the maximum longitudinal deceleration of the vehicle 100, and D is the detected distance between the vehicle 100 and the preceding vehicle. If it is determined, based on the scene data to be screened, that formula (3) holds (i.e., the distance between the vehicle 100 and the preceding vehicle is so short that the vehicle 100 cannot brake in time, so there is a risk of collision), it may be determined that a dangerous scene occurs in the scene data to be screened.
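The three screening tests (1)-(3) can be gathered into a single check, sketched below under assumed record fields (human_steer_deg, auto_steer_deg, human_accel, auto_accel, speed_kmh, max_decel, headway_m) and the example 20% thresholds; a production version would also guard against zero denominators:

```python
def is_dangerous(rec) -> bool:
    """Screening sketch: a scene is dangerous if any of conditions (1)-(3) holds."""
    x, y = 0.20, 0.20   # example lateral/longitudinal thresholds from the 10%-40% range

    # (1) lateral deviation: |M_h - N_a| / M_h > x
    lateral = abs(rec.human_steer_deg - rec.auto_steer_deg) / abs(rec.human_steer_deg) > x

    # (2) longitudinal deviation: |L_h - L_a| / L_h > y
    longitudinal = abs(rec.human_accel - rec.auto_accel) / abs(rec.human_accel) > y

    # (3) collision risk: braking distance V^2 / (2a) exceeds the detected headway D
    v_ms = rec.speed_kmh / 3.6                          # km/h -> m/s
    collision = v_ms ** 2 / (2 * rec.max_decel) > rec.headway_m

    return lateral or longitudinal or collision
```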
For embodiments in which the scene screening module 124 of the cloud 120 determines whether a dangerous scene occurs in the scene data satisfying the scene trigger condition, the scene screening module 124 may, upon determining that a dangerous scene has occurred, further store the associated scene data in the data storage module 122 for use by the autopilot system 130 (e.g., via network communication).
In some embodiments, the "associated context data" referred to herein may refer to context data for a period of time before or after determining that the context trigger condition is satisfied. Optionally, the period of time is selected from any period of time from 2 seconds to 20 seconds, for example, the period of time is 10 seconds. Accordingly, in the case where the cloud 120 determines that a dangerous scene occurs in the scene data satisfying the scene trigger condition, scene data (e.g., acquired via the perception sensor 132) of a period of time before and after the scene trigger condition is determined to be satisfied may be marked as dangerous scene data, and further stored.
The dangerous scene data stored in the cloud 120 can be acquired by the autopilot system 130, so that the autopilot system 130 can convert the dangerous scene into a safe scene by learning from the dangerous scene data, thereby improving the intended functional safety characteristics of the autopilot system 130.
Next, a scene data collection method 200 for the autopilot system 130 of the vehicle 100 of one embodiment of the present application will be described in conjunction with fig. 2.
Fig. 2 shows a flow diagram of a scene data collection method 200 for the autopilot system 130 of the vehicle 100 according to an embodiment of the application. As shown in fig. 2, the scene data collection method 200 includes steps S202 to S212. Step S202 is a start step. In some embodiments, step S202 may be entered after the autopilot controller 134 starts the shadow mode function, triggering execution of the subsequent steps S204 to S212 of the scene data collection method 200.
In step S204, a scene trigger condition is set (e.g., in the shadow module 137). In some embodiments, the setting manner and the specific content of the scene trigger condition may be as described above, and are not described herein again. After step S204, step S206 is further performed.
In step S206, scene data is acquired (e.g., via the perception sensor 132). In some embodiments, the manner and specific content of acquiring the scene data may be as described above, and are not described herein again. After step S206, step S208 is further performed.
In step S208, it is determined (e.g., via the condition trigger module 136) whether the scene data acquired in step S206 satisfies the scene trigger condition set in step S204. In some embodiments, the manner of determining whether the acquired scene data satisfies the scene trigger condition may be as described above and is not repeated here. If it is determined that the scene trigger condition is satisfied, step S210 is further performed; if not, the method returns to step S206 to acquire scene data of the next scene based on the current running condition of the vehicle.
In step S210, it is determined (e.g., via the cloud 120) whether a dangerous scene occurs based on the acquired scene data. In some embodiments, the manner of determining whether a dangerous scene occurs may be as described above and is not repeated here. For example, the scene data satisfying the scene trigger condition may be uploaded to the cloud 120, so that the cloud 120 can screen the scene data for dangerous scenes. If it is determined that a dangerous scene occurs, step S212 is further performed; if it is determined that no dangerous scene occurs, the method returns to step S206 to acquire scene data of the next scene based on the traveling condition of the vehicle.
In step S212, the corresponding scene data is stored (e.g., via the cloud 120) for use by the autopilot system 130. In some embodiments, the manner of storing the corresponding scene data may be as described above, and will not be described herein again. For example, the associated scene data may be first marked as dangerous scene data and then stored, where "associated scene data" may refer to scene data for a period of time (e.g., selected from 2 seconds to 20 seconds, such as 10 seconds) before and after determining that the scene trigger condition is satisfied (e.g., determined based on step S208). That is, when it is determined based on step S210 that a dangerous scene occurs in the scene data satisfying the scene trigger condition, scene data (i.e., associated scene data) of a period of time before and after it may be acquired based on the time point, and the associated scene data may be stored for use by the automatic driving system 130. After step S212, step S206 is further executed to acquire scene data for the next scene along with the travel of the vehicle 100. That is, the scene data collection method 200 may be continuously performed for the next scene after completing the corresponding operations based on the current scene.
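Putting steps S206 to S212 together, the collection flow reduces to the loop sketched below; meets_trigger and the buffer are the illustrative helpers from the earlier sketches, and cloud.is_dangerous / cloud.store stand in for the cloud 120 interfaces (assumed names):

```python
def scene_data_collection_loop(perception, buffer, cloud):
    """Sketch of steps S206-S212 as one loop (S204, setting the trigger, done beforehand)."""
    while True:
        sample = perception.read()                      # S206: acquire scene data
        fired = meets_trigger(sample)                   # S208: trigger condition met?
        window = buffer.push(sample, triggered=fired)   # gather the associated window
        if window is not None:
            if cloud.is_dangerous(window):              # S210: screen for a dangerous scene
                cloud.store(window)                     # S212: persist for the autopilot system
```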
The embodiment of the scene data collection method 200 shown in fig. 2 performs steps S202 to S212 in a specific order; in some other embodiments, steps S202 to S212 may not be performed in the order shown in fig. 2, and steps may be added to or deleted from the scene data collection method 200 as needed. For example, the scene trigger condition may be reset during execution of the scene data collection method 200 rather than only at its initial execution. Resetting the scene trigger condition based on the driving characteristics, habits, and needs of the user of the vehicle 100 helps continuously reduce the unknown dangerous scenes in the intended functional safety design, providing continued improvement to the autopilot system 130.
Next, a scene data collection apparatus 300 of an automatic driving system for a vehicle of an embodiment of the present application will be described with reference to fig. 3.
Fig. 3 shows a schematic diagram of a scene data collection apparatus 300 for an autonomous driving system of a vehicle according to an embodiment of the application. As shown in fig. 3, the scene data collection apparatus 300 includes a communication unit 310, a memory 320 (e.g., a non-volatile memory such as a flash memory, a ROM, a hard disk drive, a magnetic disk, or an optical disk), and a processor 330. The communication unit 310, the memory 320, and the processor 330 are communicatively coupled to each other. The communication unit 310 serves as a communication interface configured to establish a communication connection between the scene data collection apparatus 300 and an external device or network (e.g., a cloud, road segment sensors, or another vehicle). The memory 320 stores instructions executable by the processor 330. The processor 330 is configured to execute the instructions to implement the scene data collection method 200 according to one or more embodiments of the present application. In some embodiments, the scene data collection apparatus 300 may be provided in the vehicle 100.
According to another aspect of the present application, there is provided a computer-readable storage medium having instructions stored therein which, when executed by a processor, cause the processor to perform any one of the scene data collection methods 200 described above. Computer-readable media as referred to in the present application include various types of computer storage media and can be any available media that can be accessed by a general-purpose or special-purpose computer. By way of example, computer-readable media may include RAM, ROM, EPROM, E²PROM, registers, a hard disk, a removable disk, a CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other transitory or non-transitory medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer or processor. A disk, as used herein, typically reproduces data magnetically, whereas a disc reproduces data optically with a laser; combinations of the above should also be included within the scope of computer-readable media. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
The present application presets a scene trigger condition in the shadow mode by setting a shadow mode function in the autopilot controller 134. When the scene data (e.g., the environmental data and the vehicle data of the vehicle 100 in the operation scene) acquired by the vehicle 100 satisfies the scene trigger condition, the automatic driving controller 134 records the scene data and transmits the scene data to the server platform (e.g., the cloud 120) through the vehicle-mounted communication module 160, so that the server platform can analyze the scene data to select an unknown dangerous scene according to the judgment condition and provide the dangerous scene to the automatic driving system 130 for learning processing. By continuously accumulating such unknown hazardous scenarios, more and more unknown hazardous scenarios may become known safe scenarios, effectively increasing the expected functional safety of the autopilot system 130.
According to the present application, the scene trigger conditions can be flexibly set according to the driving requirements of users and/or the development requirements of vehicle manufacturers, so that scene data in special scenes can be collected in a targeted manner. The present application can collect and upload the scene data automatically, effectively reducing the cost of collecting special scenes. In addition, the present application can identify the special scenes in the cloud server or in the autopilot controller 134, pick out the dangerous scenes among them, and, as autonomous-driving mileage accumulates, continuously accumulate unknown dangerous scenes, thereby gradually expanding the known safe scenes and continuously improving the intended functional safety design of the autopilot system 130.
The above are only specific embodiments of the present application, but the scope of protection of the present application is not limited thereto. Other variations or alternatives that may occur to those skilled in the art within the technical scope disclosed in the present application are all covered by the scope of the present application. In the present application, the embodiments and the features of the embodiments may be combined with each other in the absence of conflict. The scope of protection of the present application is subject to the claims.

Claims (11)

1. A scene data collection method for an automatic driving system of a vehicle, characterized by comprising:
acquiring scene data;
determining whether the acquired scene data meets a scene trigger condition;
under the condition that the scene triggering condition is determined to be met, determining whether a dangerous scene appears based on the acquired scene data; and
in the event that the hazardous scene is determined to be present, storing associated scene data for use by the autonomous driving system.
2. The method of claim 1, wherein the scene data comprises one or more of:
the running speed of the vehicle, the communication condition between the vehicle and other vehicles, the communication condition between the vehicle and a remote vehicle management system, the interconnection condition between the vehicle and a road section sensor, the road condition of the vehicle, the sign condition of the periphery of the vehicle, the obstacle condition of the periphery of the vehicle, the traffic flow limit condition of the vehicle, the weather condition of the vehicle, the illumination condition of the periphery of the vehicle and the position condition of the position of the vehicle.
3. The method of claim 1, wherein the scenario trigger condition comprises an indicator of one or more of:
the running speed of the vehicle, the communication situation between the vehicle and other vehicles, the communication situation between the vehicle and a remote vehicle management system, the interconnection situation between the vehicle and a road section sensor, the road situation where the vehicle is located, the sign situation around the vehicle, the obstacle situation around the vehicle, the traffic flow limitation situation where the vehicle is located, the weather situation where the vehicle is located, the illumination situation around the vehicle, and the location situation where the vehicle is located.
4. The method of claim 1, wherein determining whether a dangerous scene is present based on the acquired scene data comprises:
determining whether the operating behavior of the automatic driving system is consistent with that of a human driver, and determining that the dangerous scene occurs if they are judged inconsistent; and/or
determining whether the vehicle is at risk of collision, and determining that the dangerous scene occurs if it is judged that the collision risk exists.
5. The scene data collection method according to claim 4, wherein determining whether the automated driving system is consistent with the operational behavior of the human driver comprises:
determining whether a deviation between a lateral control command of a human driver and a lateral control command of the autonomous driving system exceeds a lateral control threshold;
determining whether a deviation between a longitudinal control command of a human driver and a longitudinal control command of the autonomous driving system exceeds a longitudinal control threshold; and
in the event that it is determined that the lateral control threshold or the longitudinal control threshold is exceeded, it is determined that the operating behavior of the autonomous driving system and the human driver is not consistent.
6. The scene data collection method according to claim 5, wherein the lateral control command comprises a lateral angle value for the vehicle and the longitudinal control command comprises a longitudinal acceleration value for the vehicle.
7. The method of claim 4, wherein determining whether the vehicle is at risk of collision comprises:
and determining whether the vehicle and the front vehicle have collision risks under the current driving condition based on the distance between the vehicle and the front vehicle and the speed and the acceleration of the vehicle.
8. The method of claim 1, wherein determining whether a dangerous scene is present based on the acquired scene data comprises:
and uploading the acquired scene data to a cloud end so as to determine whether a dangerous scene occurs through the cloud end.
9. A scene data collection device for an autonomous driving system of a vehicle, characterized in that the scene data collection device comprises:
a memory configured to store instructions; and
a processor configured to execute the instructions to cause the scene data collection apparatus to perform the scene data collection method of any one of claims 1-8.
10. A vehicle characterized in that it comprises the scene data collection device of claim 9.
11. A computer-readable storage medium having instructions stored therein, which when executed by a processor, cause the processor to perform the scene data collection method of any one of claims 1-8.
CN202310102124.XA 2023-01-19 2023-01-19 Scene data collection method and device for automatic driving system of vehicle Pending CN115923820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310102124.XA CN115923820A (en) 2023-01-19 2023-01-19 Scene data collection method and device for automatic driving system of vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310102124.XA CN115923820A (en) 2023-01-19 2023-01-19 Scene data collection method and device for automatic driving system of vehicle

Publications (1)

Publication Number Publication Date
CN115923820A true CN115923820A (en) 2023-04-07

Family

ID=86557863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310102124.XA Pending CN115923820A (en) 2023-01-19 2023-01-19 Scene data collection method and device for automatic driving system of vehicle

Country Status (1)

Country Link
CN (1) CN115923820A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023226733A1 (en) * 2022-05-27 2023-11-30 中国第一汽车股份有限公司 Vehicle scene data acquisition method and apparatus, storage medium and electronic device
CN117041916A (en) * 2023-09-27 2023-11-10 创意信息技术股份有限公司 Mass data processing method, device, system and storage medium
CN117041916B (en) * 2023-09-27 2024-01-09 创意信息技术股份有限公司 Mass data processing method, device, system and storage medium

Similar Documents

Publication Publication Date Title
JP6677765B2 (en) Map creation of working and non-working construction areas for autonomous driving
US11726493B2 (en) Modifying behavior of autonomous vehicles based on sensor blind spots and limitations
AU2020203517B2 (en) Dynamic routing for autonomous vehicles
US11938967B2 (en) Preparing autonomous vehicles for turns
EP4184476A1 (en) Method and device for controlling switching of vehicle driving mode
CN115923820A (en) Scene data collection method and device for automatic driving system of vehicle
JP2016212905A (en) Engaging and disengaging for autonomous driving
KR102376122B1 (en) A method of generating an overtaking probability collecting unit, a method of operating a control device of a vehicle, an overtaking probability collecting device and a control device
EP4141816A1 (en) Scene security level determination method, device and storage medium
CN115516276A (en) Road section evaluation method
CN114222689A (en) Method for quantifying extreme traffic behavior
US11590978B1 (en) Assessing perception of sensor using known mapped objects
CN114179812A (en) Control method and device for assisting driving
KR102499056B1 (en) Method, apparatus and server to monitor driving status based vehicle route
CN117968710A (en) Travel path planning method, related device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination