WO2022246852A1 - Test method, test system and storage medium for an automatic driving system based on aerial survey data - Google Patents

Test method, test system and storage medium for an automatic driving system based on aerial survey data (Download PDF)

Info

Publication number
WO2022246852A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
test
target
elements
vehicle
Prior art date
Application number
PCT/CN2021/096994
Other languages
English (en)
French (fr)
Inventor
张玉新
俞瑞林
杜昕一
吴优
王璐瑶
Original Assignee
吉林大学
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 吉林大学 and 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/096994 priority Critical patent/WO2022246852A1/zh
Priority to CN202180087995.8A priority patent/CN116829919A/zh
Publication of WO2022246852A1 publication Critical patent/WO2022246852A1/zh

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01M: TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M17/00: Testing of vehicles
    • G01M17/007: Wheeled or endless-tracked vehicles

Definitions

  • the present application relates to the technical field of automatic driving, and in particular to a testing method, testing system and storage medium for an automatic driving system based on aerial survey data.
  • Autonomous driving means that the driver does not need to operate the vehicle; instead, the vehicle automatically collects environmental information through its on-board sensors and drives automatically according to that information.
  • At present, the performance of an automatic driving system is mainly tested with data collected by a manually driven test vehicle or by cameras installed on the road.
  • A manually driven test vehicle must carry a large number of sensors, which makes it conspicuous and affects the naturalness of the collected data; cameras installed on the road are likewise easily noticed by drivers and taken for traffic surveillance cameras, which also affects the naturalness of the collected data.
  • Therefore, the current performance test methods cannot test the automatic driving system in a natural state, which can lead to failures of the automatic driving system and cause traffic accidents.
  • Accordingly, embodiments of the present application provide a test method, test system and storage medium for an automatic driving system based on aerial survey data, and more specifically provide a test system, a test device, an unmanned aerial vehicle, test methods and storage media for testing the performance of an automatic driving system, so as to improve the safety of automatic driving systems.
  • An embodiment of the present application provides a test system for testing the performance of an automatic driving system, the test system comprising:
  • an unmanned aerial vehicle, which can hover over a target road section and collect the traffic scene data of the target road section; and
  • a test device in communication with the unmanned aerial vehicle, the test device being used to determine a target test scene according to the traffic scene data and to test the automatic driving system according to the traffic scene data corresponding to the target test scene.
  • the embodiment of the present application also provides a test method for an automatic driving system, the test method comprising:
  • collecting traffic scene data by hovering a drone over a target road section; and
  • determining a target test scene according to the traffic scene data, and testing the automatic driving system according to the traffic scene data corresponding to the target test scene.
  • the embodiment of the present application also provides an unmanned aerial vehicle, the unmanned aerial vehicle includes:
  • a body;
  • a gimbal arranged on the body; and
  • a photographing device arranged on the gimbal for capturing images.
  • the drone also includes a processor and a memory, the memory is used to store a computer program, and the processor is used to execute the computer program and realize the following operations when executing the computer program:
  • a target test scene is determined according to the traffic scene data, and the automatic driving system is tested according to the traffic scene data corresponding to the target test scene.
  • the embodiment of the present application further provides a test device, the test device includes a processor and a memory;
  • the memory is used to store computer programs
  • the processor is configured to execute the computer program and implement any one of the automatic driving system testing methods provided in the embodiments of the present application when executing the computer program.
  • An embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement any one of the testing methods for an automatic driving system provided in the embodiments of the present application.
  • The test system, test device, unmanned aerial vehicle, test method and storage medium disclosed in the embodiments of the present application use the aerial survey data of an unmanned aerial vehicle to realize the performance test of the automatic driving system in a natural state, thereby improving the safety of the automatic driving system in practical applications.
  • FIG. 1 is a schematic diagram of a test system provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
  • FIG. 3 is a schematic block diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another test system provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a traffic scene provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another traffic scene provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a test scenario framework provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a target test scenario framework provided by an embodiment of the present application.
  • FIG. 9 is a schematic flow chart of a testing method for an automatic driving system provided in an embodiment of the present application.
  • FIG. 10 is a schematic block diagram of a testing device provided by an embodiment of the present application.
  • Automatic driving is commonly divided into six levels, L0 to L5: No Automation (L0), Driver Assistance (L1), Partial Automation (L2), Conditional Automation (L3), High Automation (L4) and Full Automation (L5).
  • At present, the performance of an automatic driving system is mainly tested using data collected by a manually driven test vehicle or by cameras installed on the road.
  • When a manually driven test vehicle collects data for testing, a large number of sensors are mounted on its exterior (such as the roof), making it noticeably different from ordinary vehicles, so drivers of surrounding vehicles can perceive the particularity of the test vehicle, which affects the naturalness of the collected data; that is, the collected data is not driving data in a natural state. Testing the automatic driving system with such data does not match its future real-world applications, so the performance of the automatic driving system in a natural state cannot be accurately measured.
  • In addition, although the road sections a test vehicle travels are sufficiently random, the vehicle cannot dwell on a particular target road section, such as a certain segment or roundabout segment, for long-term measurement; since it only passes through such road sections briefly, it cannot provide enough data to support the test.
  • Moreover, the sensors installed on the test vehicle have a limited measurement range and low collection efficiency: they can only collect data from vehicles around the test vehicle, and they focus only on the safety performance of the automatic driving system, without considering the improvement the automatic driving system could bring to traffic efficiency.
  • Installing cameras on the road to collect data for testing usually involves setting up fixed poles beside the road. A camera on a fixed pole is easily noticed by drivers and taken for a traffic monitoring camera, which affects driving behavior and thus the naturalness of the measured data. Because the cameras are fixed and limited in viewing angle and accuracy, there are requirements on the surrounding environment of the road section to be measured, and an arbitrary road section cannot be selected. In addition, owing to the limited viewing angle of each camera, a large number of cameras are required to cover the road section under test, so the installation cost is high; once installed, the cameras cannot be moved conveniently, making later use even more costly. Moreover, the data collected by roadside cameras may include faces, license plates and other personal privacy data, which is a limitation for privacy protection.
  • Therefore, the embodiments of the present application provide a test method, test system and storage medium for an automatic driving system based on aerial survey data, and more specifically provide a test system, a test device, an unmanned aerial vehicle, test methods and storage media for testing an automatic driving system, so as to complete the performance test of the automatic driving system in a natural state, thereby improving the safety of the automatic driving system and reducing traffic accidents.
  • FIG. 1 shows a schematic diagram of a testing system provided by an embodiment of the present application.
  • the testing system 100 includes a UAV 10 and a testing device 20 , and the testing device 20 is connected to the UAV 10 in communication.
  • The UAV 10 may include a body 11, a gimbal 12, a photographing device 13, a power system 14, a control system 15, and the like.
  • Airframe 11 may include a fuselage and an undercarriage (also referred to as landing gear).
  • the fuselage may include a center frame and one or more arms connected to the center frame, and the one or more arms extend radially from the center frame.
  • The landing gear is connected to the fuselage and supports the UAV 10 when it lands.
  • The gimbal 12 is installed on the body 11 for carrying the photographing device 13.
  • The gimbal 12 may include three motors, that is, the gimbal 12 is a three-axis gimbal; under the control of the control system 15 of the drone 10, the shooting angle of the photographing device 13 can be adjusted. The shooting angle can be understood as the angle, relative to the horizontal or vertical direction, of the lens of the photographing device 13 toward the target to be photographed.
  • The gimbal 12 may further include a controller, which controls the movement of the gimbal 12 by controlling its motors, thereby adjusting the shooting angle of the photographing device 13.
  • the gimbal 12 may be independent of the UAV 10 , or may be a part of the UAV 10 .
  • the motor may be a DC motor or an AC motor; or, the motor may be a brushless motor or a brushed motor.
  • the photographing device 13 can be, for example, a camera or a video camera, etc., which is used to capture images.
  • the photographing device 13 can communicate with the control system 15 and take pictures under the control of the control system 15.
  • The photographing device 13 is mounted on the body 11 of the UAV 10 through the gimbal 12. It can be understood that the photographing device 13 may also be fixed directly on the body 11 of the UAV 10, in which case the gimbal 12 can be omitted.
  • the photographing device 13 can be controlled to photograph the target road segment from a bird's-eye view to obtain video data of the target road segment, which can be used as traffic scene data of the target road segment.
  • The bird's-eye view means that the optical axis of the lens of the photographing device 13 is perpendicular, or approximately perpendicular, to the target road section to be photographed.
  • An approximately perpendicular angle is, for example, 88 degrees or 92 degrees; other angle values may of course also be used, which is not limited here.
  • The photographing device 13 may include a monocular camera and/or a binocular camera for different functions; for example, the monocular camera captures images of the target road section, while the binocular camera can obtain a depth image of target objects on the target road section.
  • The depth image includes the target objects on the target road section, such as vehicles or pedestrians, together with their distance information.
  • The depth image can also be used as a kind of traffic scene data.
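The distance information in the binocular depth image rests on the standard stereo relation depth = focal length x baseline / disparity; a minimal sketch, with assumed camera parameters not taken from the application:

```python
# Illustrative stereo-depth relation behind a binocular depth image:
# for a calibrated stereo pair, Z = f * B / d, where f is the focal
# length in pixels, B the baseline in metres, and d the disparity in
# pixels between the two views. All numbers below are assumptions.

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point from its disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A vehicle seen with 25 px disparity, f = 1000 px, 0.5 m baseline:
print(stereo_depth(1000.0, 0.5, 25.0))  # 20.0
```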
  • The power system 14 may include one or more electronic speed controllers (ESCs), one or more propellers, and one or more motors corresponding to the propellers, wherein each motor is connected between an ESC and a propeller, and the motors and propellers are arranged on the arms of the UAV 10.
  • The ESC receives the drive signal generated by the control system 15 and provides a drive current to the motor according to the drive signal, controlling the motor speed and thereby driving the propeller to rotate, which provides power for the flight of the UAV 10.
  • The power enables the UAV 10 to move in one or more degrees of freedom. In some embodiments, the UAV 10 may rotate about one or more axes of rotation.
  • The rotation axes may include a roll axis (Roll), a yaw axis (Yaw) and a pitch axis (Pitch).
  • the motor may be a DC motor or an AC motor.
  • the motor can be a brushless motor or a brushed motor.
  • Control system 15 may include a controller and a sensing system. Wherein, the controller is used to control the flight of the UAV 10, for example, the flight of the UAV 10 can be controlled according to the attitude information measured by the sensor system. It should be understood that the controller can control the UAV 10 according to pre-programmed instructions, or can control the UAV 10 by responding to one or more control instructions from the control terminal.
  • the sensing system is used to measure the attitude information of the UAV 10, that is, the position information and status information of the UAV 10 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration and three-dimensional angular velocity, etc.
  • the sensing system may include, for example, at least one of sensors such as a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (Inertial Measurement Unit, IMU), a visual sensor, a global navigation satellite system, and a barometer.
  • the global navigation satellite system may be the Global Positioning System (GPS).
  • Based on the position information and attitude information of the UAV 10 measured by the sensing system, together with the position of the target object in the captured image, the location information of the target object can be calculated through coordinate transformation and triangular relationships.
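For a nadir (bird's-eye) shot, the coordinate transformation and triangular relationship reduce to a similar-triangles projection through the pinhole camera model; a minimal sketch, with assumed camera parameters not given in the application:

```python
# Illustrative pinhole-camera ground projection for a nadir shot:
# given the drone's altitude and a pixel offset from the image centre,
# similar triangles give the target's ground offset from the point
# directly below the drone. Parameters are assumed values.

def pixel_to_ground(u, v, cx, cy, f_px, altitude_m):
    """Map an image pixel (u, v) to a ground offset in metres,
    assuming the optical axis points straight down."""
    x = altitude_m * (u - cx) / f_px  # lateral offset
    y = altitude_m * (v - cy) / f_px  # longitudinal offset
    return x, y

# A vehicle seen 200 px right of centre from 100 m up with f = 1000 px:
x, y = pixel_to_ground(1160, 540, 960, 540, 1000.0, 100.0)
print(round(x, 1), round(y, 1))  # 20.0 0.0
```

Adding the drone's own GNSS position to this offset yields the target's absolute location.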
  • a controller may include one or more processors and memory.
  • the processor may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), etc.
  • The memory can be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive or a removable hard disk.
  • The UAV 10 may also include a radar device installed on the UAV 10, specifically on the body 11 of the UAV 10; during flight, it measures the surrounding environment of the UAV 10, such as obstacles, to ensure flight safety.
  • In some embodiments, the radar device is installed on the landing gear of the UAV 10 and communicates with the control system 15, transmitting the collected observation data to the control system 15 for processing.
  • The drone 10 may include two or more landing-gear legs, and the radar device is mounted on one of them.
  • the radar device can also be mounted on other positions of the UAV 10, which is not specifically limited.
  • The radar device may specifically be a lidar, which can also collect point cloud data of the target road section; the point cloud data can be used as traffic scene data.
  • When the UAV is used to collect the traffic scene data of the target road section, the data may take the form of video data, point cloud data, depth images, and the position information and attitude information of the UAV, and may also include weather information queried from the Internet.
  • The unmanned aerial vehicle 10 may be a rotary-wing UAV, such as a quad-rotor, hexa-rotor or octo-rotor UAV, or a fixed-wing UAV, or a combination of a rotary-wing and a fixed-wing UAV, which is not limited here.
  • the test device 20 can be a server, a terminal device or an automatic driving simulator, wherein the terminal device can be, for example, a desktop computer, a notebook, a tablet or a smart phone, etc., and the automatic driving simulator can be a software simulation platform or a simulation simulator.
  • the software simulation platform is used to simulate the automatic driving system.
  • the software simulation platform can be designed based on the software platform, such as Matlab.
  • The data input to the software simulation platform is the traffic scene data corresponding to the target test scene, and its output is the response result of the automatic driving system, that is, how the automatic driving system responds to the target test scene.
  • the traffic scene data corresponding to the vehicle cut-in scene is input to the software simulation platform, and the output of the software simulation platform is the response result for the vehicle cut-in scene.
  • the simulation simulator can use a compact driving simulator, a vehicle-level driving simulator or a high-performance driving simulator, etc.
  • The compact driving simulator uses a simulated driving position and is generally equipped with an LCD screen, a manual or automatic transmission, a handbrake and a force-feedback steering wheel.
  • The vehicle-level driving simulator uses a real vehicle, is equipped with a six-degree-of-freedom motion platform, and uses a surround display system.
  • The high-performance driving simulator uses a real vehicle or parts of one, is equipped with a high-speed motion system to simulate vehicle movement, and uses a surround display system.
  • the test system 100 also includes an automatic driving simulator 30, and the automatic driving simulator 30 is used to determine the response result of the automatic driving system for the target test scenario according to the traffic scene data of the target test scenario .
  • the automatic driving simulator 30 may be a software simulation platform or a simulation simulator. If it is a software simulation platform, the software simulation platform can be installed in the test device 20, or in other electronic equipment different from the test device;
  • The UAV 10 can fly to the target road section and hover over it, collect the traffic scene data of the target road section, and send the collected traffic scene data to the test device 20.
  • the test device 20 receives the traffic scene data collected by the UAV 10, determines the target test scene according to the traffic scene data, and tests the automatic driving system according to the traffic scene data corresponding to the target test scene.
  • the target road segment may be any road segment that the autonomous driving vehicle may travel on, such as any road segment among expressways, urban roads, and urban and rural roads.
  • the target road section may also be a road section with frequent traffic accidents, such as a long downhill road section, an urban-suburban junction road section, a sharp turning road section, an "S"-shaped road section or a circular island road section.
  • the target road section may also be a special road section, such as any road section in a tunnel, a sea-crossing bridge, or a viaduct. It can be seen that the test system can measure the traffic scene data of any road section, and is not limited by the terrain, and the measurement cost is low.
  • the traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data and traffic participant information data.
  • the road information data includes road facility information and/or road type information.
  • the road facility information may include, for example, traffic signs, traffic markings, traffic lights and/or road auxiliary facilities.
  • The road type information is, for example, urban road or highway; urban roads may include arterial roads, expressways, secondary arterial roads and/or branch roads.
  • the vehicle information data includes at least one of the following: vehicle type information, vehicle location information, vehicle speed information, vehicle driving direction information and vehicle size information.
  • The vehicle type information is, for example, M1 passenger car, N1 goods vehicle, category O trailer, two-wheeled vehicle or tricycle.
  • the environment information data includes weather information and/or road surrounding environment information, such as daytime, night, sunny, rainy, snowy or foggy weather information, and road surrounding environment information such as buildings or flowers and trees around the road.
  • Traffic participant information data includes pedestrian information and/or non-motor vehicle information.
  • Pedestrian information includes, for example, the walking speed, direction and location of children, adults or the elderly; non-motor vehicle information includes, for example, the speed, direction and location of bicycles and electric two-wheelers.
  • the above-mentioned traffic scene data can be obtained by measuring the sensors carried by the UAV 10.
  • Vehicle information data, road information data, environment information data and traffic participant information data can be obtained from the images captured by the photographing device carried on the UAV 10 and/or the point cloud data collected by the radar device.
  • The weather information in the environment information data can be queried from weather information released on the Internet and added to the collected traffic scene data.
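The four categories of traffic scene data listed above can be pictured as a simple container; the field names and values below are assumptions for illustration, not defined by the application:

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative container for the traffic scene data categories:
# road, vehicle, environment and traffic-participant information.

@dataclass
class VehicleInfo:
    vehicle_type: str    # e.g. "M1 passenger car"
    position: tuple      # (x, y) in metres on the road plane
    speed_mps: float
    heading_deg: float
    size_m: tuple        # (length, width)

@dataclass
class TrafficSceneData:
    road_type: str                           # e.g. "expressway"
    road_facilities: list = field(default_factory=list)
    vehicles: list = field(default_factory=list)
    weather: Optional[str] = None            # e.g. queried from the Internet
    participants: list = field(default_factory=list)

scene = TrafficSceneData(
    road_type="expressway",
    road_facilities=["traffic sign", "lane marking"],
    vehicles=[VehicleInfo("M1 passenger car", (12.0, 3.5), 22.0, 90.0, (4.5, 1.8))],
    weather="sunny",
)
print(scene.vehicles[0].speed_mps)  # 22.0
```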
  • the target test scene is a scene where the performance of the automatic driving system is to be measured, such as a vehicle cut-in scene, a vehicle cut-out scene, or a vehicle emergency braking scene.
  • The vehicle cut-in scene is used to test how the automatic driving system responds when another vehicle cuts in;
  • the vehicle cut-out scene is used to test how the automatic driving system responds when the vehicle ahead cuts out;
  • the vehicle emergency braking scene is used to test how the automatic driving system responds when the vehicle ahead brakes urgently.
  • The target test scene may also include any other scene that may be encountered during automatic driving, such as a car-following scene, which is not detailed here.
  • The automatic driving system is tested according to the traffic scene data corresponding to the target test scene; specifically, the automatic driving system takes the place of the target vehicle in the target test scene, and the response result of the automatic driving system is compared with the response of the driver of the target vehicle. The driver's response is used as the ground truth against which the response of the automatic driving system is verified, completing the performance test of the automatic driving system.
  • This test method can test not only the safety of the automatic driving system but also its anthropomorphism, making the automatic driving system behave more like a human driver and thereby improving the user experience.
  • In other words, the automatic driving system replaces the target vehicle in the target test scene, and its response results are compared with the driver's behavior in natural traffic, so that both the anthropomorphic driving and the safe driving performance of the automatic driving system are tested.
  • The target vehicle in the target test scene may also be called a dangerous vehicle, such as the vehicle that is cut in on by a preceding vehicle in the vehicle cut-in scene.
  • As shown in Figure 5, if there is a pedestrian in front of vehicle a, vehicle a needs to brake urgently to avoid the pedestrian, so vehicle a is a dangerous vehicle; if this scene is used as the target test scene, vehicle a is the target vehicle.
  • the automatic driving system is tested according to the traffic scene data corresponding to the target test scene.
  • Specifically, the driving result of the target vehicle in the target test scene is determined according to the traffic scene data of the target test scene; then the response result of the automatic driving system to the target test scene is determined according to the same traffic scene data, and the difference between the driver's driving result and the response result of the automatic driving system is compared to determine the performance of the automatic driving system.
  • the driving result of the target vehicle in the target test scene is determined according to the traffic scene data of the target test scene.
  • That is, the driver's response to the target test scene is specifically reflected in the driving data of the target vehicle.
  • For example, the driving result of the driver of target vehicle 2 can be determined from the collected traffic scene data corresponding to the target test scene, such as decelerating to avoid danger; the magnitude of the deceleration can be determined from the multi-frame video images of target vehicle 2 in the collected traffic scene data.
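Estimating the driver's speed and deceleration from the per-frame positions of target vehicle 2 can be sketched with finite differences; the frame rate and positions below are made-up values, not measurements from the application:

```python
# Minimal sketch: recover speed and mean deceleration of a target
# vehicle from its per-frame along-lane positions in the aerial video.

def speeds_from_positions(positions_m, fps):
    """Finite-difference speeds (m/s) between consecutive frames,
    rounded to suppress floating-point noise."""
    return [round((b - a) * fps, 9)
            for a, b in zip(positions_m, positions_m[1:])]

def mean_deceleration(positions_m, fps):
    """Average deceleration (m/s^2) over the observed frame window."""
    v = speeds_from_positions(positions_m, fps)
    duration_s = (len(v) - 1) / fps   # time between first and last speed
    return round((v[0] - v[-1]) / duration_s, 9)

# Target vehicle 2 slowing down: positions sampled at 10 fps (assumed).
pos = [0.0, 2.0, 3.98, 5.94, 7.88]        # metres along the lane
print(speeds_from_positions(pos, 10))     # [20.0, 19.8, 19.6, 19.4]
print(mean_deceleration(pos, 10))         # 2.0
```

Pixel positions would first be mapped to metres via the ground projection described earlier.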
  • the response result of the automatic driving system to the target test scene can be determined.
  • Specifically, the target vehicle in the target test scene can be replaced by the automatic driving system, or by a vehicle equipped with the automatic driving system, to determine the response result of the automatic driving system in this environment (the target test scene).
  • For example, the traffic scene data of the vehicle cut-in scene corresponding to target vehicle 2 is input to the automatic driving simulator to obtain the response result of the automatic driving system; the difference between the driver's driving result and the response result of the automatic driving system is then compared to determine the performance of the automatic driving system.
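The comparison step can be sketched as a simple scoring function in which the driver's observed reaction is the ground truth; the action labels, fields and tolerance below are illustrative assumptions:

```python
# Hedged sketch of the verification step: the driver's reaction in the
# natural scene is the ground truth, and the automatic driving system's
# simulated response is scored against it.

def compare_responses(driver, system, decel_tol=1.0):
    """Score agreement for one target test scene. Both arguments are
    dicts with an 'action' label and a 'decel_mps2' magnitude."""
    if driver["action"] != system["action"]:
        return {"match": False, "reason": "different action"}
    decel_err = abs(driver["decel_mps2"] - system["decel_mps2"])
    return {"match": decel_err <= decel_tol, "decel_error": decel_err}

driver_truth = {"action": "decelerate", "decel_mps2": 2.5}  # from aerial data
system_resp  = {"action": "decelerate", "decel_mps2": 3.0}  # from simulator
print(compare_responses(driver_truth, system_resp))
# {'match': True, 'decel_error': 0.5}
```

Aggregating such scores over many target test scenes would yield the performance measures (safety, anthropomorphism, and so on) discussed below.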
  • testing the performance of the automatic driving system includes not only safety, but also traffic efficiency, comfort, anthropomorphism, and trust.
  • The target vehicle may comprise one vehicle; or the target vehicle may comprise a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein both the first target vehicle and the plurality of second target vehicles are used to test the performance of the automated driving system.
  • The first target vehicle is a vehicle that needs to respond to the target test scene, for example a dangerous vehicle, and the plurality of second target vehicles are vehicles related to the first target vehicle.
  • FIG. 6 shows a target test scenario, which may be a highway driving scenario, where the speeds of vehicles increase from right to left according to the lane they are in, and the speeds of vehicles in the same lane are equal.
  • the vehicle 1 originally driving in the rightmost lane suddenly changes lanes to the second right lane, forming a vehicle cut-in scene for the vehicle 2 originally driving in the second right lane.
  • If the automatic driving system replaces only vehicle 2 in this cut-in scene, then because there are vehicles on its left and right (vehicle 4 and vehicle 5 respectively), vehicle 2 cannot change lanes and can only decelerate.
  • If the relevant vehicles are simultaneously replaced by the automatic driving system, or simultaneously replaced by vehicles loaded with the automatic driving system, then vehicle 3 can change lanes to the left at the same time through inter-vehicle communication, not only avoiding an accident but also avoiding deceleration, thereby improving traffic efficiency.
  • vehicle 2 is the first target vehicle
  • vehicles 3, 4 and 5 are the second target vehicles.
  • the UAV 10 can fly to the target road section according to the control instruction, and can hover over the target road section to collect traffic scene data of the target road section.
  • the control instruction may be issued by the test device 20 , or other equipment, such as a remote controller for controlling the flight of the UAV 10 .
  • The convenience and mobility of drones make it possible to collect traffic scene data across different locations and times, so that the automatic driving system can be tested comprehensively and its test coverage improved.
  • Specifically, the UAV 10 can hover above the target road section and collect, from a bird's-eye view, the traffic scene data of the target road section.
  • the bird's-eye view can be understood as taking an image of the target road section in an orthographic manner. Since the shooting angle of the UAV is from directly above the target road section, there will be no situation where large vehicles block small vehicles, and the data error rate is low, which can improve the performance test accuracy of the automatic driving system.
• The UAV 10 can adjust its flight attitude and/or the shooting angle of its mounted camera device according to the road information of the target road section while collecting traffic scene data of the target road section. For example, if there are traffic signs on the target road section, then to prevent the signs from covering part of the traffic scene, the flight attitude of the drone and/or the shooting angle of the shooting device can be adjusted accordingly: for example, the flight height can be raised or lowered, or the shooting angle of the shooting device can be adjusted to avoid obstruction by the traffic signs, thereby improving the quality of the collected traffic scene data.
  • the hovering position and/or hovering height of the UAV 10 over the target road section are related to the traffic scene of the target road section.
  • the traffic scene may be a vehicle scene or a facility scene on the target road.
• When the UAV 10 recognizes a passing vehicle whose height exceeds a preset threshold (such as a truck), it can increase its hovering height to prevent that vehicle from blocking other vehicles, thereby allowing higher-quality traffic scene data to be collected.
• When the UAV 10 recognizes that a traffic jam has occurred on one side of a two-way road while almost no vehicles pass on the other side, it can adjust its hovering position to collect the traffic scene data of the congested side, so that the collected traffic scene data includes more target test scenes, thereby improving the test efficiency of the automatic driving system.
  • the UAV 10 can hover over the target road segment within a specific period of time to collect traffic scene data of the target road segment.
• For example, the wind during the specific time period is weaker than in other time periods, and/or the light intensity during the specific time period is greater than in other time periods.
• By collecting during a specific period, such as 8:00 am to 5:00 pm in sunny, windless weather, the image shake caused by drone movement can be minimized, thereby improving the test accuracy of the automatic driving system.
  • the target test scene is determined according to the traffic scene data.
  • the scene elements corresponding to the target road section can be obtained, and the element parameters of the scene elements are determined according to the traffic scene data.
• The test scenes are determined according to the scene elements and their element parameters, and the target test scene is determined from among the test scenes.
  • the scene element is a factor that can affect the automatic driving system, and may be related to the target road segment.
  • Different target road segments may include different scene elements.
  • the scene elements corresponding to the target road segment in the expressway and the urban road are not completely the same.
  • the scene elements include one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements and surrounding environment elements.
• To determine the element parameters of the scene elements according to the traffic scene data: for example, first determine that the traffic participant elements include pedestrians and/or non-motor vehicles, and then determine element parameters such as the speed, direction and position of the pedestrians and/or non-motor vehicles according to the traffic scene data. For another example, determine from the traffic scene data that the vehicle type is a trailer, along with element parameters such as the vehicle's position, speed, direction and outline size.
• The target test scene can also be called a key driving scene. Specifically, the scene determination conditions corresponding to the target test scene can be obtained, and the target test scene is determined from the test scenes according to the scene determination conditions, wherein the scene determination conditions are determined according to the scene elements of the target test scene, and different target test scenes correspond to different scene determination conditions.
• The scene determination conditions express the target test scene and include extraction criteria, a start condition and an end condition.
  • the target test scene is a vehicle cut-in scene.
• The extraction criteria of the vehicle cut-in scene are: the direction of the cut-in vehicle's lateral speed remains constant, the cut-in vehicle leaves its own lane and cuts in from an adjacent lane, and there is no other vehicle between the cut-in vehicle and the own car (target vehicle).
• The start condition and end condition of the cut-in scene are, respectively, that the lateral speed of the cut-in vehicle increases from zero and that the lateral speed of the cut-in vehicle decreases back to zero. It should be noted that other target test scenes can also be defined according to the ISO 34502 standard.
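• The extraction criteria and the start/end conditions above can be sketched as a simple scan over a cut-in vehicle's lateral-speed trace. The function below is an illustrative assumption, not the patent's actual algorithm: a scene opens when the lateral speed leaves zero, closes when it returns to zero, and is kept only if the speed direction stayed constant throughout.

```python
def extract_cut_in_events(lateral_speeds, eps=0.05):
    """Find cut-in scene spans (start, end frame indices) in a
    lateral-speed trace sampled once per video frame.

    A scene starts when |lateral speed| rises above eps (start
    condition: speed increases from zero) and ends when it falls back
    below eps (end condition: speed decreases to zero). The span is
    kept only if the speed direction never changed (extraction
    criterion). An unfinished manoeuvre at the end of the trace is
    ignored in this sketch.
    """
    events, start = [], None
    for i, v in enumerate(lateral_speeds):
        moving = abs(v) > eps
        if start is None and moving:
            start = i
        elif start is not None and not moving:
            span = lateral_speeds[start:i]
            if all(s > 0 for s in span) or all(s < 0 for s in span):
                events.append((start, i))
            start = None
    return events

# 10 Hz trace: the vehicle drifts left (positive lateral speed), settles
trace = [0.0, 0.0, 0.2, 0.6, 0.9, 0.8, 0.4, 0.1, 0.0, 0.0]
print(extract_cut_in_events(trace))  # prints [(2, 8)]
```

A direction reversal inside the span (the vehicle aborting the lane change) would cause the candidate to be rejected, matching the criterion that the lateral-speed direction remains constant.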
• Key scene elements related to the functional modules can be selected from the scene elements of the target road section, and the element parameters of the key scene elements can be determined according to the traffic scene data; the target test scene is then obtained according to the key scene elements and their corresponding element parameters.
• For example, if the functional module to be tested is an Autonomous Emergency Braking (AEB) module, then scene elements related to the emergency braking module are selected, such as traffic participant elements, vehicle elements, traffic auxiliary facility elements, weather elements and surrounding environment elements. The element parameters of these key scene elements are determined according to the traffic scene data, and the element parameters are then filled into the corresponding key scene elements to obtain the target test scene.
  • the target test scenario can also be determined by building a test scenario framework, and the performance of the automatic driving system can be verified by using the target test scenario.
• A method for building a test scene framework specifically includes: obtaining scene elements that have an impact on the automatic driving system, and classifying the scene elements to obtain a test scene frame; extracting the element parameters of the scene elements from the traffic scene data according to the scene elements in the test scene frame, wherein the extraction includes identifying scene elements and their corresponding element parameters in the traffic scene data using algorithms such as a target recognition algorithm, a vehicle speed calculation algorithm, or a neural network model; and filling the obtained element parameters into the corresponding scene elements of the test scene frame to obtain the target test scene frame, wherein the scene elements and element parameters included in the target test scene frame constitute the traffic scene data of the target test scene.
• In this way, the data processing speed can be accelerated, and the element parameters of key scene elements can be used in a targeted manner to verify the performance of the automatic driving system, thereby improving the accuracy of the verification.
• For example, the test scene framework may include traffic participant elements, vehicle elements, road elements, environment elements, and the like. A target recognition algorithm, a vehicle speed calculation algorithm, or a neural network model is then used to identify these scene elements and their corresponding element parameters from the collected traffic scene data, and the identified element parameters are filled into the corresponding scene elements of the scene frame to obtain the target test scene framework, as specifically shown in Figure 8.
• In this example, no traffic participants are identified, and the relevant vehicle elements include a trailer and a light truck.
• The element parameters of the vehicle elements include: the trailer is behind the light truck, 40 m from the light truck in front, together with the speed of the trailer; the speed of the light truck is 40 km/h and its deceleration is 4 m/s².
• The element parameters of the road element are: white dotted lane line, secondary road, road guardrail, one-way three-lane road.
• The element parameters of the environment element are: daytime (morning), office buildings and apartments nearby, trees on both sides, no noise, and the like.
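• Filling element parameters into the scene frame can be pictured as updating a nested data table. The sketch below mirrors the Figure 8 example; the category and field names are illustrative assumptions, not the patent's actual schema.

```python
# Test scene framework as a nested data table: one dict per
# scene-element category, filled from the identified element parameters.
scene_frame = {
    "traffic_participants": {},  # none identified in this example
    "vehicle": {},               # to be filled from the aerial data
    "road": {},
    "environment": {},
}

def fill(frame, category, params):
    """Fill extracted element parameters into one scene-element category."""
    frame[category].update(params)
    return frame

fill(scene_frame, "vehicle", {
    "ego": "trailer", "lead": "light truck",
    "gap_m": 40, "lead_speed_kmh": 40, "lead_decel_ms2": 4.0,
})
fill(scene_frame, "road", {
    "lane_marking": "white dotted line", "class": "secondary road",
    "guardrail": True, "layout": "one-way three-lane",
})
fill(scene_frame, "environment", {
    "time": "morning", "surroundings": "offices and apartments",
    "roadside": "trees on both sides", "noise": None,
})
print(scene_frame["vehicle"]["gap_m"])  # prints 40
```

A dict-of-dicts keeps the frame easy to serialize into the data-table form mentioned later, and missing categories simply stay empty, as with the traffic participants here.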
• When classifying the scene elements to obtain the test scene framework, the classification can follow the type relationships of the scene elements. For example, as shown in Figure 7, pedestrian elements and non-motor-vehicle elements are grouped under traffic participants, while traffic facility elements and road information elements are grouped under the road category.
• Another test scene framework construction method specifically includes: obtaining scene elements that have an impact on the automatic driving system; obtaining, from those scene elements, the key scene elements related to the functional modules to be tested in the automatic driving system, and classifying the key scene elements to obtain a test scene frame; extracting the corresponding element parameters from the traffic scene data according to the key scene elements in the test scene frame, and filling the extracted element parameters into the corresponding key scene elements of the test scene frame to obtain a target test scene frame, where the scene elements and element parameters included in the target test scene frame constitute the traffic scene data of the target test scene. In this way, the data processing speed can be accelerated, and the element parameters of key scene elements can be used in a targeted manner to verify the performance of the automatic driving system, thereby improving the accuracy of the verification.
  • an emergency braking (AEB) module in an automatic driving system needs to be tested.
  • the UAV collects traffic scene data on urban roads
  • all scene elements that affect the function of the automatic driving system are obtained.
  • select the corresponding scene elements according to the functions of the AEB module such as: traffic participant elements, vehicle elements, traffic auxiliary facility elements, weather elements, surrounding environment elements, and build a test scene framework.
  • the element parameters of the corresponding scene elements are extracted from the traffic scene data collected by the UAV (specifically, each frame of video data), and correspondingly filled in the test scene frame to obtain the target test scene frame.
• The trailer follows, at a speed of 50 km/h, the light truck driving at the same speed in front of it. Both vehicles are driving in the same direction in the middle lane of a one-way three-lane road, with a longitudinal distance of 40 m between them. The light truck in front of the trailer then decelerates at 4 m/s².
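• As a sanity check on this scene, elementary kinematics give the constant deceleration the following trailer would need in order to stop without reaching the light truck's final position. The sketch assumes zero reaction time and a lead vehicle braking to a standstill; it is an illustration, not part of the patent's method.

```python
def required_deceleration(v_kmh, gap_m, lead_decel_ms2):
    """Minimum constant deceleration for a follower (same initial speed
    as the lead) to stop without passing the lead vehicle's final rear
    position; reaction time is idealized to zero."""
    v = v_kmh / 3.6                               # km/h -> m/s
    lead_stop = v ** 2 / (2 * lead_decel_ms2)     # lead's stopping distance
    available = gap_m + lead_stop                 # room before contact
    return v ** 2 / (2 * available)

# This scene: both at 50 km/h, 40 m gap, lead brakes at 4 m/s^2
a = required_deceleration(50, 40, 4.0)
print(f"{a:.2f} m/s^2")  # ~1.50 m/s^2, a gentle braking demand
```

The margin is comfortable here, which is consistent with using this scene as a baseline AEB test rather than a near-collision case.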
  • test scenario framework can be embodied in the form of a data table, that is, the test scenario framework can be a test scenario data table, and the test scenario data table includes all the contents of the test scenario framework.
  • the test scenario framework may also be in the form of other structural data types, which is not limited here.
• Although the UAV has a flight stability control function, many factors still prevent the collected traffic scene data from being used directly for performance verification of the autopilot system. Owing to various influences, the pixel matrix of the collected video data suffers from problems such as unclear targets, weak contrast and a high rate of brightness overexposure, which increases the difficulty of detection. Therefore, the collected traffic scene data also needs to be preprocessed, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing and setting a region of interest; the specific processing method can be selected according to the actual processing effect. It can be understood that in the above performance testing process of the automatic driving system, the traffic scene data used are the preprocessed traffic scene data.
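• Two of the preprocessing steps named above, image binarization and setting a region of interest, can be sketched in a few lines of NumPy. The threshold value and the tiny 3×3 "frame" are illustrative only.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Image binarization: map a grayscale frame to {0, 255}."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

def apply_roi(img, top, bottom, left, right):
    """Keep only the region of interest (e.g. the road surface seen
    from the hovering UAV); pixels outside it are zeroed out."""
    mask = np.zeros_like(img)
    mask[top:bottom, left:right] = img[top:bottom, left:right]
    return mask

frame = np.array([[ 10, 200,  90],
                  [250,  40, 180],
                  [ 70, 130,  60]], dtype=np.uint8)
bw = binarize(frame)             # bright pixels (lane marks etc.) -> 255
roi = apply_roi(bw, 0, 2, 1, 3)  # keep only the top-right 2x2 patch
print(bw.tolist())
print(roi.tolist())
```

In practice the thresholds, morphology kernels and ROI bounds would be tuned per road section, which is why the text says the processing method is selected according to the actual effect.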
• For example, when testing the automatic driving system, it is determined that the pedestrian detection module in the automatic driving system does not meet the performance standard.
• For instance, the pedestrian detection module failed to recognize a child ahead and an accident occurred. Therefore, the pedestrian detection model in the pedestrian detection module is optimized, where the pedestrian detection model is a model obtained by neural network training.
• The embodiment of the present application also provides a method for optimizing the pedestrian detection model. It should be understood that the embodiment of the present application uses the pedestrian detection model as an example; of course, this optimization method can also be used to optimize other detection models, which is not limited here.
• The optimization method of the model mainly includes: obtaining training sample data, and training the classifier in the pedestrian detection model according to the training sample data; verifying whether the accuracy of the classifier in the pedestrian detection model meets the requirements, and, when it does, saving the parameters of the classifier in the pedestrian detection model to complete the optimization of the pedestrian detection model.
• Before obtaining the training sample data, the training samples need to be collected. Specifically: determine the pedestrian scene data corresponding to the pedestrians that the pedestrian detection model failed to recognize, where the pedestrian scene data includes at least one frame of video image containing pedestrians collected by the drone; control the UAV to collect traffic scene data related to the pedestrian scene data, and process that traffic scene data according to the image information of the video image, for example by contour sampling, size cropping and sample normalization, to obtain sample data.
• The sample data is used as the positive samples of the training set, and samples excluding pedestrians are split from the sample data as negative samples; the positive samples and negative samples together constitute the training samples.
  • the optimized pedestrian detection model can have a higher recognition accuracy for this scene.
  • the classifier in the pedestrian detection model is trained according to the training sample data.
  • a cascaded classifier based on an LBP operator can be used to train the classifier of the pedestrian detection model.
• The LBP operator is an operator that describes the local texture features of an image. Its rotation invariance and gray-scale invariance mitigate the influence of lighting and other environmental conditions on the image, so it is well suited to automatic driving systems.
• Positive samples can be used to train the classifier. After the positive samples in the training set are normalized, a file that meets the training requirements is output; at the same time, a file that reads the negative samples is output and put into training. By training the classifier in this way over multiple rounds, the model recognition rate under the optimal parameters can be obtained.
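• The LBP operator itself is easy to state. The sketch below, an illustration rather than the cascade classifier's actual implementation, computes the 8-neighbour LBP code of an interior pixel; the second call demonstrates the gray-scale invariance mentioned above, since uniformly brightening the image leaves the code unchanged.

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code of pixel (r, c).

    Each neighbour brighter than or equal to the centre contributes a
    1 bit; bits are read clockwise from the top-left neighbour (bit
    ordering conventions vary between implementations). The 0-255
    code describes local texture and is unchanged by any uniform
    brightness shift, which is the invariance that makes LBP features
    robust to lighting.
    """
    center = img[r][c]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dr, dc in neighbours:
        code = (code << 1) | (1 if img[r + dr][c + dc] >= center else 0)
    return code

img = [[30, 80, 30],
       [80, 50, 80],
       [30, 80, 30]]
print(lbp_code(img, 1, 1))  # 0b01010101 = 85: edges >= centre, corners below
brighter = [[v + 100 for v in row] for row in img]
print(lbp_code(brighter, 1, 1))  # same code, 85, after brightening
```

A cascade classifier then trains stage-wise on histograms of such codes computed over image windows; the cascade training itself is typically done with an external tool rather than hand-written code.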
• The test system provided by the above embodiments can use the aerial survey data of the UAV, specifically the collected traffic scene data, to complete the performance test of the automatic driving system. Since the UAV hovers above the target road section, it is not easily noticed by drivers, so the performance test of the automatic driving system can be completed in a natural state, thereby improving the safety of the automatic driving system in practical applications.
  • FIG. 9 shows a schematic flowchart of a testing method for an automatic driving system provided by an embodiment of the present application.
  • the testing method of the automatic driving system can be applied to the unmanned aerial vehicle or the testing device provided in the above embodiment to complete the performance test of the automatic driving system.
  • the testing method of the automatic driving system includes step S101 and step S102.
  • the traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data and traffic participant information data.
  • the road information data includes road facility information and/or road type information; the vehicle information data includes at least one of the following: vehicle type information, vehicle position information, vehicle speed information, vehicle direction information and vehicle size information.
  • the environment information data includes weather information and/or road surrounding environment information.
  • Traffic participant information data includes pedestrian information and/or non-motor vehicle information.
  • Target test scenarios include vehicle cut-in scenarios, vehicle cut-out scenarios, or vehicle emergency braking scenarios. Of course, other target test scenarios may also be included, such as car-following scenarios, which are not limited here.
  • the automatic driving system is tested according to the traffic scene data corresponding to the target test scene.
• The driving result of the driver of the target vehicle in the target test scene can be determined according to the traffic scene data of the target test scene; the response result of the automatic driving system to the target test scene is then determined according to the same traffic scene data, and the difference between the driver's driving result and the automatic driving system's response result is compared to determine the performance of the automatic driving system. This not only tests the safety of the automatic driving system but also makes its behaviour more anthropomorphic.
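• The comparison between the driver's recorded behaviour and the system's simulated response can be made concrete with any profile distance; the mean absolute speed difference below is one illustrative choice, not the metric the patent specifies.

```python
def response_difference(driver_profile, system_profile):
    """Mean absolute difference between the human driver's speed
    profile (recovered from the aerial data) and the automatic driving
    system's simulated speed profile for the same target test scene.
    A smaller value suggests more human-like (anthropomorphic) behaviour."""
    assert len(driver_profile) == len(system_profile)
    n = len(driver_profile)
    return sum(abs(d - s) for d, s in zip(driver_profile, system_profile)) / n

# Speeds in m/s, sampled once per second during a cut-in response
driver = [13.9, 13.0, 11.5, 10.0, 10.0]   # human eases off smoothly
system = [13.9, 13.9, 10.5,  9.0, 10.0]   # system reacts later, brakes harder
print(round(response_difference(driver, system), 2))  # prints 0.58
```

A safety check (e.g. minimum gap maintained) would be evaluated separately; the profile distance only scores how closely the system mimics the human driving result.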
• The target vehicle includes one vehicle, or the target vehicle includes a first target vehicle and a plurality of second target vehicles related to the first target vehicle, where both the first target vehicle and the second target vehicles are used to test the performance of the automatic driving system. In this way, the improvement the automatic driving system brings to traffic flow efficiency can also be tested, which in turn improves traffic flow efficiency.
• The hovering position and/or hovering height of the UAV over the target road section are related to the traffic scene of the target road section, and the traffic scene can be a vehicle scene or a facility scene of the target road, so that higher-quality traffic scene data can be collected.
  • the unmanned aerial vehicle can hover over the target road segment within a specific time period, and collect traffic scene data of the target road segment.
• For example, the wind during the specific time period is weaker than in other time periods, and/or the light intensity during the specific time period is greater than in other time periods.
• By collecting during a specific period, such as 8:00 am to 5:00 pm in sunny, windless weather, the image shake caused by drone movement can be minimized, thereby improving the test accuracy of the automatic driving system.
  • the target test scene is determined according to the traffic scene data.
  • the scene element corresponding to the target road section can be obtained.
  • the scene element is a factor that can affect the automatic driving system.
  • the element parameters of the scene element are determined according to the traffic scene data.
• The test scenes are determined according to the scene elements and their element parameters, and the target test scene is determined from among the test scenes. This can improve the accuracy and efficiency of automatic driving system performance testing.
  • the scene elements include: one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements and surrounding environment elements.
• To determine the target test scene from the test scenes: specifically, the scene determination conditions corresponding to the target test scene can be obtained, and the target test scene is determined from the test scenes according to the scene determination conditions, where the scene determination conditions are determined according to the scene elements of the target test scene and different target test scenes correspond to different scene determination conditions.
• Key scene elements related to the functional modules can be selected from the scene elements of the target road section, and the element parameters of the key scene elements can be determined according to the traffic scene data; the target test scene is then obtained according to the key scene elements and their corresponding element parameters.
  • the functional module to be tested is an emergency braking (Autonomous Emergency Braking, AEB) module.
• In order to quickly determine the target test scene and improve the test accuracy of the automatic driving system, it is also possible to obtain scene elements that have an impact on the automatic driving system, classify the scene elements to obtain a test scene frame, extract the corresponding element parameters from the traffic scene data according to the scene elements in the test scene frame, and fill the element parameters into the corresponding scene elements of the test scene frame to obtain the target test scene, as specifically shown in Figure 7 and Figure 8.
  • in order to quickly determine the target test scenario to improve the test accuracy of the automatic driving system it is also possible to obtain scene elements that have an impact on the automatic driving system, obtain key scene elements related to the functional modules to be tested in the automatic driving system from the scene elements, and classify the key scene elements to obtain the test scene framework.
• The corresponding element parameters are extracted from the traffic scene data according to the key scene elements; the element parameters are filled into the corresponding key scene elements of the test scene frame to obtain the target test scene.
  • the coping result of the automatic driving system for the target test scene can be determined according to the traffic scene data of the target test scene.
  • the traffic scene data corresponding to the target test scene can be input to the automatic driving simulator, and the output of the automatic driving simulator is the response result for the target test scene.
  • the automatic driving simulator may include a software simulation platform or a simulation simulator.
• The collected traffic scene data can also be preprocessed first, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing and setting a region of interest; the specific processing method can be selected according to the actual processing effect. It can be understood that in the above performance testing process of the automatic driving system, the traffic scene data used are the preprocessed traffic scene data.
  • the testing method can also determine the functional modules in the automatic driving system that do not meet the performance standard when testing the automatic driving system, and optimize the functional modules, thereby improving the performance of the automatic driving system.
• By obtaining training sample data, the classifier in the pedestrian detection model can be optimized and trained according to the training sample data; whether the accuracy of the classifier in the pedestrian detection model meets the requirements is then verified, and when it does, the parameters of the classifier in the pedestrian detection model are saved.
• The test method provided by the above embodiment can use the traffic scene data collected by the UAV to complete the performance test of the automatic driving system. Since the UAV hovers above the target road section, it is not easily noticed by drivers, so the performance test of the automatic driving system can be completed in a natural state, thereby improving the safety of the automatic driving system in practical applications.
• The unmanned aerial vehicle 10 comprises: a body 11, a gimbal 12 and a photographing device 13; the gimbal 12 is arranged on the body 11, the photographing device 13 is arranged on the gimbal 12, and the photographing device 13 is used for capturing images.
  • the drone 10 also includes a processor and a memory, the memory is used to store a computer program, and the processor is used to execute the computer program and realize the following operations when executing the computer program:
• The traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data; the road information data includes road facility information and/or road type information; the vehicle information data includes at least one of the following: vehicle type information, vehicle location information, vehicle speed information, vehicle travel direction information and vehicle size information; the environment information data includes weather information and/or road surrounding environment information; the traffic participant information data includes pedestrian information and/or non-motor vehicle information.
  • the target test scenario includes a vehicle cut-in scenario, a vehicle cut-out scenario or a vehicle emergency braking scenario.
  • the testing of the automatic driving system according to the traffic scene data corresponding to the target test scene includes:
• The target vehicle includes one vehicle; alternatively, the target vehicle includes a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein both the first target vehicle and the second target vehicles are used to test the performance of the automatic driving system.
  • the UAV can adjust its flight attitude and/or the shooting angle of its onboard camera device according to the road information of the target road section, and collect traffic scene data of the target road section.
  • the hovering position and/or hovering height of the UAV over the target road section is related to the traffic scene of the target road section.
  • the unmanned aerial vehicle can hover over the target road section for a specific period of time to collect traffic scene data of the target road section.
  • the wind force in the specific time period is smaller than that in other time periods, and/or, the light intensity in the specific time period is greater than the light intensity in other time periods.
  • the determining the target test scenario according to the traffic scenario data includes:
• The scene element is a factor that can affect the automatic driving system; the element parameters of the scene element are determined according to the traffic scene data; the test scene is determined according to the scene element and its element parameters, and the target test scene is determined from among the test scenes.
  • the scene elements include: one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements and surrounding environment elements.
  • the processor is further configured to: acquire the scene determination conditions corresponding to the target test scene, and determine the target test scene from the test scenes according to the scene determination conditions; wherein, the scene determination conditions are based on The scene elements of the target test scene are determined, and different target test scenes correspond to different scene determination conditions.
• The processor is configured to: select key scene elements related to the functional module from the scene elements of the target road section according to the functional module to be tested in the automatic driving system; determine the element parameters of the key scene elements according to the traffic scene data; and obtain the target test scene according to the key scene elements and their corresponding element parameters.
• The processor is configured to: obtain scene elements that have an impact on the automatic driving system, and classify the scene elements to obtain a test scene frame; extract the corresponding element parameters from the traffic scene data according to the scene elements in the test scene frame; and fill the element parameters into the corresponding scene elements of the test scene frame to obtain the target test scene.
  • the processor is configured to:
• Obtain scene elements that have an impact on the automatic driving system; obtain, from those scene elements, the key scene elements related to the functional modules to be tested in the automatic driving system, and classify the key scene elements to obtain a test scene framework; extract the corresponding element parameters from the traffic scene data according to the key scene elements in the test scene frame; and fill the element parameters into the corresponding key scene elements of the test scene frame to obtain the target test scene.
  • the processor is configured to: determine the response result of the automatic driving system for the target test scenario according to the traffic scene data of the target test scenario based on an automatic driving simulator.
  • the automatic driving simulator may include a software simulation platform or a simulation simulator.
  • the processor is further configured to: perform preprocessing on the traffic scene data; wherein the preprocessing includes foreground extraction, color space conversion, image binarization, edge detection, morphological processing and Set at least one item in the region of interest.
  • the processor is further configured to: determine a functional module in the automatic driving system that does not meet a performance standard, and optimize the functional module.
• The functional modules include a pedestrian detection module; the processor is configured to: obtain training sample data and perform optimization training on the classifier in the pedestrian detection model according to the training sample data; verify whether the accuracy of the classifier in the pedestrian detection model meets the requirements, and save the parameters of the classifier in the pedestrian detection model when its accuracy meets the requirements.
• FIG. 10 is a schematic block diagram of a testing device provided by an embodiment of the present application. As shown in FIG. 10, the test device 200 includes one or more processors 201 and a memory 202.
  • the processor 201 may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU) or a digital signal processor (Digital Signal Processor, DSP), etc.
• The memory 202 can be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a U disk, or a mobile hard disk.
  • the memory 202 is used to store a computer program; the processor 201 is used to execute the computer program and, when executing the computer program, to perform any one of the automatic driving system testing methods provided in the embodiments of the present application, so as to realize the performance test of the automatic driving system in the natural state, thereby improving the safety of the automatic driving system in practical applications.
  • the processor 201 is configured to execute the computer program and implement the following operations when executing the computer program:
  • the traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data; wherein the road information data includes road facility information and/or road type information; the vehicle information data includes at least one of the following: vehicle type information, vehicle location information, vehicle speed information, vehicle travel direction information, and vehicle size information; the environment information data includes weather information and/or road surrounding environment information; and the traffic participant information data includes pedestrian information and/or non-motor vehicle information.
  • the target test scenario includes a vehicle cut-in scenario, a vehicle cut-out scenario or a vehicle emergency braking scenario.
  • the testing of the automatic driving system according to the traffic scene data corresponding to the target test scene includes: determining the driving result of a target vehicle in the target test scene according to the traffic scene data of the target test scene; determining the response result of the automatic driving system for the target test scene according to the traffic scene data of the target test scene; and comparing the difference between the driving result and the response result to determine the performance of the automatic driving system.
  • the target vehicle includes one vehicle; alternatively, the target vehicle includes a first target vehicle and a plurality of second target vehicles related to the first target vehicle; wherein both the first target vehicle and the second target vehicles are used to test the performance of the automatic driving system.
  • the UAV can adjust its flight attitude and/or the shooting angle of its onboard camera device according to the road information of the target road section, and collect traffic scene data of the target road section.
  • the hovering position and/or hovering height of the UAV over the target road section is related to the traffic scene of the target road section.
  • the UAV can hover over the target road segment within a specific period of time to collect traffic scene data of the target road segment.
  • the wind force in the specific time period is smaller than that in other time periods, and/or, the light intensity in the specific time period is greater than the light intensity in other time periods.
  • the determining the target test scenario according to the traffic scenario data includes:
  • acquiring scene elements corresponding to the target road section, where a scene element is a factor that can affect the automatic driving system; determining the element parameters of the scene elements according to the traffic scene data; determining test scenes according to the scene elements and their element parameters, and determining the target test scene from the test scenes.
  • the scene elements include: one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements and surrounding environment elements.
  • the processor is further configured to: acquire the scene determination conditions corresponding to the target test scene, and determine the target test scene from the test scenes according to the scene determination conditions; wherein, the scene determination conditions are based on The scene elements of the target test scene are determined, and different target test scenes correspond to different scene determination conditions.
  • the processor is configured to: select key scene elements related to the functional module from the scene elements of the target road section according to the functional module to be tested in the automatic driving system; determine element parameters of the key scene elements according to the traffic scene data; and obtain a target test scene according to the key scene elements and the corresponding element parameters.
  • the processor is configured to: obtain scene elements that have an impact on the automatic driving system, and classify the scene elements to obtain a test scene frame; extract corresponding element parameters from the traffic scene data according to the scene elements in the test scene frame; and fill the element parameters into the corresponding scene elements of the test scene frame to obtain the target test scene.
  • the processor is configured to:
  • obtaining scene elements that have an impact on the automatic driving system; obtaining key scene elements related to the functional modules to be tested in the automatic driving system from the scene elements, and classifying the key scene elements to obtain a test scene framework; extracting corresponding element parameters from the traffic scene data according to the key scene elements in the test scene framework; and filling the element parameters into the corresponding key scene elements in the test scene framework to obtain a target test scene.
  • the processor is configured to: determine, based on an automatic driving simulator, the response result of the automatic driving system for the target test scenario according to the traffic scene data of the target test scenario.
  • the automatic driving simulator may include a software simulation platform or a driving simulator.
  • the processor is further configured to: perform preprocessing on the traffic scene data; wherein the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and region-of-interest setting.
  • the processor is further configured to: determine a functional module in the automatic driving system that does not meet a performance standard, and optimize the functional module.
  • the functional modules include a pedestrian detection module; the processor is configured to: obtain training sample data, and perform optimization training on the classifier in the pedestrian detection model according to the training sample data; verify whether the accuracy of the classifier in the pedestrian detection model meets the requirements, and save the parameters of the classifier in the pedestrian detection model when its accuracy meets the requirements.
  • an embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes program instructions, and the processor executes the program instructions to implement The steps of any one of the automatic driving system testing methods provided in the above-mentioned embodiments.
  • the computer-readable storage medium may be the internal storage unit of the test device or the drone described in any of the foregoing embodiments, such as the memory or internal memory of the test device.
  • the computer-readable storage medium may also be an external storage device of the test device, such as a plug-in hard disk provided on the test device, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.


Abstract

A testing method, testing system, and storage medium for an automatic driving system based on aerial survey data. The testing system (100) includes an unmanned aerial vehicle (UAV) (10) and a testing device (20). The UAV (10) can hover above a target road section and collect traffic scene data of the target road section. The testing device (20) is communicatively connected to the UAV (10), and is used to determine a target test scene according to the traffic scene data and to test the automatic driving system according to the traffic scene data corresponding to the target test scene.

Description

Testing Method, Testing System, and Storage Medium for an Automatic Driving System Based on Aerial Survey Data

Technical Field

The present application relates to the technical field of automatic driving, and in particular to a testing method, testing system, and storage medium for an automatic driving system based on aerial survey data.

Background

Automatic driving means that a vehicle drives itself without operation by a driver: sensors on the vehicle automatically collect environmental information, and the vehicle drives automatically according to that information. To avoid traffic accidents caused by failures of the automatic driving system, the system must be performance-tested in the natural state and may be put into use only after its performance meets the requirements. Current performance tests of automatic driving systems mainly rely on data collected by manually driven test vehicles or by cameras installed along the road. A manually driven test vehicle must carry a large number of sensors, so drivers of other vehicles can notice that the test vehicle is special, which affects the naturalness of the collected data. Cameras installed along the road are likewise easily noticed by drivers and mistaken for traffic surveillance cameras, which also affects the naturalness of the collected data. Therefore, neither of the current performance-testing methods can test the automatic driving system in the natural state, which may lead to failures of the automatic driving system and, in turn, to traffic accidents.
Summary of the Invention

To this end, embodiments of the present application provide a testing method, testing system, and storage medium for an automatic driving system based on aerial survey data, and more specifically a testing system, testing device, UAV, testing method, and storage medium for testing the performance of an automatic driving system, so as to improve the safety of the automatic driving system.

In a first aspect, an embodiment of the present application provides a testing system for testing the performance of an automatic driving system, the testing system including:

a UAV capable of hovering above a target road section and collecting traffic scene data of the target road section;

a testing device communicatively connected to the UAV, the testing device being used to determine a target test scene according to the traffic scene data and to test the automatic driving system according to the traffic scene data corresponding to the target test scene.

In a second aspect, an embodiment of the present application further provides a testing method for an automatic driving system, the testing method including:

acquiring traffic scene data of a target road section, the traffic scene data being collected by a UAV hovering above the target road section;

determining a target test scene according to the traffic scene data, and testing the automatic driving system according to the traffic scene data corresponding to the target test scene.

In a third aspect, an embodiment of the present application further provides a UAV, the UAV including:

a body;

a gimbal disposed on the body;

a photographing device disposed on the gimbal and used to capture images;

wherein the UAV further includes a processor and a memory, the memory being used to store a computer program and the processor being used to execute the computer program and, when executing the computer program, to perform the following operations:

controlling the UAV to hover above a target road section and collecting traffic scene data of the target road section;

determining a target test scene according to the traffic scene data, and testing the automatic driving system according to the traffic scene data corresponding to the target test scene.

In a fourth aspect, an embodiment of the present application further provides a testing device, the testing device including a processor and a memory;

the memory is used to store a computer program;

the processor is used to execute the computer program and, when executing the computer program, to implement any one of the testing methods for an automatic driving system provided in the embodiments of the present application.

In a fifth aspect, an embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement any one of the testing methods for an automatic driving system provided in the embodiments of the present application.

The testing system, testing device, UAV, testing method, and storage medium disclosed in the embodiments of the present application use the aerial survey data of a UAV to test the performance of an automatic driving system in the natural state, thereby improving the safety of the automatic driving system in practical applications.

It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present application, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.

FIG. 1 is a schematic diagram of a testing system provided by an embodiment of the present application;

FIG. 2 is a schematic structural diagram of a UAV provided by an embodiment of the present application;

FIG. 3 is a schematic block diagram of a UAV provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of another testing system provided by an embodiment of the present application;

FIG. 5 is a schematic diagram of a traffic scene provided by an embodiment of the present application;

FIG. 6 is a schematic diagram of another traffic scene provided by an embodiment of the present application;

FIG. 7 is a schematic diagram of a test scene framework provided by an embodiment of the present application;

FIG. 8 is a schematic diagram of a target test scene framework provided by an embodiment of the present application;

FIG. 9 is a schematic flowchart of a testing method for an automatic driving system provided by an embodiment of the present application;

FIG. 10 is a schematic block diagram of a testing device provided by an embodiment of the present application.
Detailed Description

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.

The flowcharts shown in the drawings are merely illustrative; they need not include all contents and operations/steps, nor need they be executed in the described order. For example, some operations/steps may be decomposed, combined, or partially merged, so the actual execution order may change according to the actual situation.
With the development of science and technology and the application of artificial intelligence, automatic driving technology has developed rapidly and been widely applied. Based on the level of driving automation of a vehicle, the existing SAE J3016 standard divides driving automation into six levels, L0 to L5: No Automation (L0), Driver Assistance (L1), Partial Automation (L2), Conditional Automation (L3), High Automation (L4), and Full Automation (L5). As the level of driving automation increases, human involvement in the driving activity decreases.

It is foreseeable that more vehicles using automatic driving systems will travel on the roads in the future, so that autonomous vehicles and manually driven vehicles will share the roads.

At present, to avoid traffic accidents caused by failures of the automatic driving system, the system must be performance-tested in the natural state and may be put into use only after its performance meets the requirements. Current performance tests of automatic driving systems mainly rely on data collected by manually driven test vehicles or by cameras installed along the road.

When a manually driven test vehicle collects data for testing, the large number of sensors mounted on its exterior (for example, on the roof) makes it look quite different from an ordinary vehicle, so drivers of ordinary vehicles can notice its special nature. This affects the naturalness of the collected data: the data are not driving data in the natural state. Testing an automatic driving system with such data therefore differs from the system's future practical application, and the system's performance in the natural state cannot be measured accurately.

Testing with test vehicles is sufficiently random, but a test vehicle cannot test a particular target road section for a long time, such as an accident-prone road section, for example a long downhill section, a suburban-urban transition section, a sharp-turn section, an S-shaped section, or a roundabout section. Since a test vehicle cannot test these sections for a long time while passing through them, it cannot provide sufficient supporting data.

In addition, the sensors installed on a test vehicle have a limited measurement range and low collection efficiency; they can only collect data about the vehicles around the test vehicle. This approach focuses only on the safety performance of the automatic driving system and does not consider the improvement the automatic driving system can bring to traffic efficiency.

When cameras are installed along the road to collect data, a fixed pole is usually erected beside the road and a camera is mounted on it. Such cameras are easily noticed by drivers and mistaken for traffic surveillance cameras, which likewise affects driving behavior and hence the naturalness of the measured data. Because the cameras are fixed and limited in viewing angle and accuracy, there are requirements on the environment around the road section to be measured, and arbitrary road sections cannot be chosen for measurement. Moreover, because of the limited viewing angle, measuring a road section requires many cameras working together, so the installation cost is high; once installed, a camera cannot be conveniently moved, making later use even more costly. Furthermore, the data collected by cameras may contain personally private data such as faces and license plates, which is a limitation in terms of privacy protection.

To this end, embodiments of the present application provide a testing method, testing system, and storage medium for an automatic driving system based on aerial survey data, and more specifically a testing system, testing device, UAV, testing method, and storage medium for testing an automatic driving system, so as to complete the performance test of the automatic driving system in the natural state, thereby improving the safety of the automatic driving system and reducing traffic accidents.
Referring to FIG. 1, FIG. 1 shows a schematic diagram of a testing system provided by an embodiment of the present application. As shown in FIG. 1, the testing system 100 includes a UAV 10 and a testing device 20, and the testing device 20 is communicatively connected to the UAV 10.

As shown in FIG. 2 and FIG. 3, the UAV 10 may include a body 11, a gimbal 12, a photographing device 13, a power system 14, a control system 15, and so on.

The body 11 may include a fuselage and landing gear. The fuselage may include a center frame and one or more arms connected to the center frame, the arms extending radially from the center frame. The landing gear is connected to the fuselage and supports the UAV 10 when it lands.

The gimbal 12 is mounted on the body 11 and carries the photographing device 13. The gimbal 12 may include three motors, i.e., it is a three-axis gimbal. Under the control of the control system 15 of the UAV 10, the shooting angle of the photographing device 13 can be adjusted; the shooting angle can be understood as the angle of the lens of the photographing device 13, oriented toward the target to be photographed, relative to the horizontal or vertical direction.

In some embodiments, the gimbal 12 may further include a controller for controlling the movement of the gimbal 12 by controlling its motors, thereby adjusting the shooting angle of the photographing device 13. It should be understood that the gimbal 12 may be independent of the UAV 10 or may be part of the UAV 10. It should also be understood that the motors may be DC motors or AC motors, and may be brushless motors or brushed motors.

The photographing device 13 may be, for example, a device for capturing images such as a still camera or a video camera; it can communicate with the control system 15 and shoot under its control. In the embodiments of the present application, the photographing device 13 is carried on the body 11 of the UAV 10 via the gimbal 12. It is understood that the photographing device 13 may also be fixed directly on the body 11 of the UAV 10, in which case the gimbal 12 can be omitted.

In some embodiments, the photographing device 13 can be controlled to photograph the target road section from a top-down angle, obtaining video data of the target road section, which can serve as traffic scene data of the target road section. In the top-down angle, the optical axis of the lens of the photographing device 13 is perpendicular, or approximately perpendicular, to the target road section to be photographed; "approximately perpendicular" means, for example, 88 or 92 degrees, and other angle values are of course also possible without limitation here.

In some embodiments, the photographing device 13 may include a monocular camera or a binocular camera for different shooting functions. For example, the monocular camera captures images of the target road section, while the binocular camera can obtain depth images of objects on the target road section. A depth image includes the objects on the target road section, such as vehicles or pedestrians, together with their distance information, and can also serve as a kind of traffic scene data.

The power system 14 may include one or more electronic speed controllers (ESCs), one or more propellers, and one or more motors corresponding to the propellers, the motors being connected between the ESCs and the propellers, with the motors and propellers disposed on the arms of the UAV 10. The ESCs receive drive signals generated by the control system 15 and supply drive currents to the motors according to the drive signals, controlling the motor speeds and thereby driving the propellers to rotate, providing power for the flight of the UAV 10. This power enables the UAV 10 to move with one or more degrees of freedom. In some embodiments, the UAV 10 can rotate about one or more rotation axes.

For example, the rotation axes may include a roll axis, a yaw axis, and a pitch axis. It should be understood that the motors may be DC motors or AC motors, and may be brushless motors or brushed motors.
The control system 15 may include a controller and a sensing system. The controller is used to control the flight of the UAV 10; for example, it can control the flight according to attitude information measured by the sensing system. It should be understood that the controller may control the UAV 10 according to pre-programmed instructions, or by responding to one or more control instructions from a control terminal.

The sensing system is used to measure the attitude information of the UAV 10, i.e., its position and state information in space, for example its three-dimensional position, attitude, velocity, acceleration, and angular velocity.

The sensing system may include, for example, at least one of sensors such as a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a global navigation satellite system receiver, and a barometer. For example, the global navigation satellite system may be the Global Positioning System (GPS).

Using the position and state information of the UAV 10 in space, combined with the images captured by the UAV 10, the position information of objects in the images can be calculated. Illustratively, based on the flight altitude of the UAV, the field of view of the captured image, and the position information of the UAV, the position of an object in the image can be calculated through coordinate transformation and trigonometric relations.
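The trigonometric relation above can be sketched as follows. This is our own minimal illustration (the function name, parameters, and numbers are hypothetical, not from the patent); it assumes a nadir-pointing camera, flat ground, and square pixels:

```python
# Map a pixel in a nadir (top-down) image to a ground offset, using only the
# flight altitude and the camera's horizontal field of view.
import math

def pixel_to_ground(px, py, img_w, img_h, altitude_m, hfov_deg):
    """Return the (east, north) ground offset in metres of pixel (px, py)
    relative to the point directly beneath the drone."""
    # Ground width covered by the image at this altitude (simple trigonometry).
    ground_w = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    m_per_px = ground_w / img_w          # metres per pixel (square pixels)
    dx = (px - img_w / 2.0) * m_per_px   # right of image centre -> +east
    dy = (img_h / 2.0 - py) * m_per_px   # up in the image -> +north
    return dx, dy
```

For example, at 100 m altitude with a 90-degree horizontal field of view, a 1000-pixel-wide image covers about 200 m of ground, i.e., 0.2 m per pixel; adding the drone's own GNSS position to the offset gives the object's absolute position.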
The controller may include one or more processors and a memory. The processor may be, for example, a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP). The memory may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk.

In some embodiments, the UAV 10 may further include a radar device installed on the UAV 10, specifically on the body 11, which during flight measures the surroundings of the UAV 10, such as obstacles, to ensure flight safety.

The radar device may be installed on the landing gear of the UAV 10 and communicatively connected to the control system 15; it transmits the collected observation data to the control system 15 for processing. The UAV 10 may include two or more landing-gear legs, with the radar device carried on one of them. The radar device may also be carried at other positions on the UAV 10, which is not specifically limited here.

In the embodiments of the present application, the radar device may specifically be a lidar, which can also collect point cloud data of the target road, and that point cloud data can serve as traffic scene data.

It is understood that the UAV is used to collect traffic scene data of the target road section. The data forms of the traffic scene data may include video data, point cloud data, depth images, and the position and attitude information of the UAV, and may of course also include weather information queried from the Internet.

The UAV 10 may be a rotary-wing UAV, for example a quadrotor, hexarotor, or octorotor UAV, or a fixed-wing UAV, or a combination of rotary-wing and fixed-wing UAVs, which is not limited here.
The testing device 20 may be a server, a terminal device, or an automatic driving simulator, where the terminal device may be, for example, a desktop computer, laptop, tablet, or smartphone, and the automatic driving simulator may be a software simulation platform or a driving simulator.

The software simulation platform is used to simulate the automatic driving system and may be built on a software platform, for example Matlab. Its input is the traffic scene data corresponding to the target test scene, and its output is the response result of the automatic driving system, i.e., how the automatic driving system should respond to the target test scene.

Illustratively, the traffic scene data corresponding to a vehicle cut-in scene is input to the software simulation platform, and the platform outputs the response result for that cut-in scene.

The driving simulator may be a compact driving simulator, a full-vehicle driving simulator, or a high-performance driving simulator. A compact driving simulator uses a mock driver's seat, generally equipped with an LCD screen, a manual or automatic gearbox, a handbrake, and a force-feedback steering wheel; a full-vehicle driving simulator uses a real vehicle, equipped with a six-degree-of-freedom motion platform and a surround display system; a high-performance driving simulator uses a real vehicle or part of one, equipped with a high-speed motion system to simulate vehicle motion and a surround display system.

In some embodiments, as shown in FIG. 4, the testing system 100 further includes an automatic driving simulator 30, which is used to determine, according to the traffic scene data of the target test scene, the response result of the automatic driving system for the target test scene.

It should be noted that in the testing system 100 of FIG. 4, the automatic driving simulator 30 may be a software simulation platform or a driving simulator. If it is a software simulation platform, the platform may be installed in the testing device 20 or in another electronic device different from the testing device; if it is a driving simulator, the simulator is communicatively connected to the testing device 20.

When the testing system 100 is used to test the performance of the automatic driving system, specifically, the UAV 10 can fly to the target road section and hover above it, collect the traffic scene data of the target road section, and send the collected data to the testing device 20; the testing device 20 receives the traffic scene data collected by the UAV 10, determines the target test scene according to the traffic scene data, and tests the automatic driving system according to the traffic scene data corresponding to the target test scene.

The target road section may be any road section on which an autonomous vehicle may travel, for example any section of a highway, urban road, or rural road. It may specifically be a road section where traffic accidents occur frequently, for example a long downhill section, a suburban-urban transition section, a sharp-turn section, an S-shaped section, or a roundabout section. It may also be a special road section, such as any section of a tunnel, a cross-sea bridge, or a viaduct. It can thus be seen that the testing system can measure the traffic scene data of any road section, is not limited by terrain, and has a relatively low measurement cost.
In the embodiments of the present application, the traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data.

The road information data includes road facility information and/or road type information. The road facility information may include, for example, traffic signs, traffic markings, traffic lights, and/or auxiliary road facilities; the road type information is, for example, urban road or highway, where urban roads may include arterial roads, expressways, secondary arterial roads, and/or branch roads. The vehicle information data includes at least one of the following: vehicle type information, vehicle position information, vehicle speed information, vehicle travel direction information, and vehicle size information; the vehicle type is, for example, an M1 passenger vehicle, an N1 vehicle, an O-category trailer, a two-wheeler, or a three-wheeler. The environment information data includes weather information and/or road surrounding environment information; the weather information is, for example, daytime, night, sunny, rainy, snowy, or foggy, and the road surrounding environment information is, for example, buildings or vegetation around the road. The traffic participant information data includes pedestrian information and/or non-motor vehicle information; the pedestrian information includes, for example, the walking speed, direction, and position of children, adults, or the elderly, and the non-motor vehicle information includes, for example, the speed, direction, and position of bicycles and electric two-wheelers.

All of the above traffic scene data can be obtained through the sensors carried by the UAV 10; for example, the vehicle information data, road information data, environment information data, and traffic participant information data can all be obtained from images captured by the photographing device carried on the UAV 10 and/or point cloud data collected by the radar device.

It should be noted that the weather information in the environment information data may be obtained by querying weather information published on the Internet and adding it to the collected traffic scene data.

The target test scene is the scene in which the performance of the automatic driving system is to be measured, for example a vehicle cut-in scene, a vehicle cut-out scene, or a vehicle emergency braking scene. The vehicle cut-in scene tests how the automatic driving system responds when a vehicle cuts in; the vehicle cut-out scene tests how it responds when a vehicle cuts out; and the vehicle emergency braking scene tests its response when a vehicle ahead brakes urgently.

It should be noted that besides the vehicle cut-in, cut-out, and emergency braking scenes, the target test scene may also include any scene that may be encountered during automatic driving, such as a car-following scene, which will not be described in detail here.

Testing the automatic driving system according to the traffic scene data corresponding to the target test scene specifically means substituting the automatic driving system for the target vehicle in the target test scene and comparing the response result of the automatic driving system with the response result of the target vehicle's driver, i.e., using the driver's response result as the ground truth to verify the corresponding result of the automatic driving system, thereby completing the performance test of the automatic driving system. It can thus be seen that this testing method can test not only the safety of the automatic driving system but also its human-likeness, making the automatic driving system more human-like and thereby improving the user experience.
Thus, for the performance test of the automatic driving system, the automatic driving system can replace the target vehicle in the target test scene, and the response result of the automatic driving system is compared with the driving result of the driver in natural traffic, thereby testing the human-like driving and safe driving performance of the automatic driving system.

It should be noted that the target vehicle in the target test scene may also be called the endangered vehicle, such as the vehicle being cut in on by a vehicle ahead in a cut-in scene. As another example, as shown in FIG. 5, if a pedestrian appears in front of vehicle a, vehicle a needs to brake urgently to avoid the pedestrian, so vehicle a is the endangered vehicle; if this scene serves as the target test scene, vehicle a is the target vehicle.

In some embodiments, testing the automatic driving system according to the traffic scene data corresponding to the target test scene may specifically include first determining the driving result of the target vehicle in the target test scene according to the traffic scene data of the target test scene, then determining the response result of the automatic driving system for the target test scene according to the same traffic scene data, and comparing the difference between the driver's driving result and the automatic driving system's response result to determine the performance of the automatic driving system.

The driving result of the target vehicle determined according to the traffic scene data of the target test scene is the operation the target vehicle performed after the driver's subjective judgment of the target test environment, i.e., the driver's response to that environment, which can be reflected in the driving data of the target vehicle.

Illustratively, in the cut-in scene of FIG. 6, the driving result of the driver of target vehicle 2 can be determined from the collected traffic scene data of the target test scene; for example, the deceleration performed to avoid danger, and its magnitude, can be determined from multiple video frames of target vehicle 2 in the collected traffic scene data.

Determining the response result of the automatic driving system for the target test scene according to its traffic scene data specifically means replacing the target vehicle in the target test scene with the automatic driving system, or with a vehicle including the automatic driving system, to determine the response result of the automatic driving system in that environment (the target test environment).

Illustratively, the traffic scene data of the cut-in scene corresponding to target vehicle 2 is input to the automatic driving simulator to obtain the response result of the automatic driving system; then the difference between the driver's driving result and the automatic driving system's response result is compared to determine the performance of the automatic driving system.
It should be noted that in the embodiments of the present application, testing the performance of the automatic driving system covers not only safety, but may also cover traffic efficiency, comfort, human-likeness, and trustworthiness.

In some embodiments, the target vehicle may include one vehicle; alternatively, the target vehicle includes a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein the first target vehicle and the second target vehicles are all used to test the performance of the automatic driving system.

The first target vehicle is the vehicle that needs to react in the target test scene; for example, the first target vehicle is the endangered vehicle, and the second target vehicles are vehicles close to the first target vehicle.

Illustratively, FIG. 6 shows a target test scene, which may be a highway driving scene in which vehicle speed increases lane by lane from right to left and vehicles in the same lane travel at equal speeds. Vehicle 1, originally traveling in the rightmost lane, suddenly changes lanes into the second lane from the right, creating a cut-in scene for vehicle 2, which was originally traveling in that lane. The automatic driving system is substituted for vehicle 2 in this cut-in scene. Since there are vehicles on both its left and right (vehicle 4 and vehicle 5, respectively), vehicle 2 cannot change lanes and can only decelerate. If the deceleration is too large, vehicle 3 may easily rear-end vehicle 2; if it is too small, vehicle 2 may easily rear-end vehicle 1. Therefore, whether the automatic driving system can safely get through this cut-in scene, and the minimum distances it keeps from the vehicles ahead and behind during the whole deceleration, can all serve as performance evaluation indicators for the automatic driving system. However, in this scene, no matter how good the automatic driving system is, vehicle 2's choosing to decelerate inevitably affects vehicle 3 and reduces traffic efficiency. If vehicle 2, vehicle 3, and vehicle 4 are all replaced by the automatic driving system, or by vehicles loaded with the automatic driving system, then through inter-vehicle communication vehicle 3 can change lanes to the left at the same time; this not only avoids the accident but also involves no deceleration at all, thereby improving traffic efficiency.
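The deceleration trade-off in this example can be made concrete with a toy kinematic check. This is our own sketch (function name and numbers are hypothetical, not from the patent); it assumes the cut-in vehicle ahead and the follower behind hold constant speeds while the ego vehicle applies a constant deceleration:

```python
# Simulate a constant ego deceleration and report the minimum gaps to the
# vehicle ahead (the cut-in vehicle) and the vehicle behind.
def min_gaps(gap_ahead_m, gap_behind_m, v_ego, v_ahead, v_behind,
             a_ego, dt=0.05, horizon_s=10.0):
    """Speeds in m/s, a_ego is a positive deceleration; simple Euler stepping."""
    x_ahead, x_ego, x_behind = gap_ahead_m, 0.0, -gap_behind_m
    min_front, min_rear = gap_ahead_m, gap_behind_m
    t = 0.0
    while t < horizon_s:
        x_ahead += v_ahead * dt
        x_behind += v_behind * dt
        v_ego = max(0.0, v_ego - a_ego * dt)   # brake, never reverse
        x_ego += v_ego * dt
        min_front = min(min_front, x_ahead - x_ego)
        min_rear = min(min_rear, x_ego - x_behind)
        t += dt
    return min_front, min_rear
```

With a 20 m gap on each side, a gentle 1 m/s² deceleration keeps a positive front gap but lets an inattentive constant-speed follower close the rear gap completely, which is exactly the two-sided risk the scene describes; the minimum-gap values are the kind of indicator mentioned above.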
It should be noted that in this scene, vehicle 2 is the first target vehicle, and vehicles 3, 4, and 5 are second target vehicles.
In some embodiments, the UAV 10 can fly to the target road section according to a control instruction and hover above it to collect the traffic scene data of the target road section. The control instruction may be issued by the testing device 20 or by another device, such as a remote controller for controlling the flight of the UAV 10. The convenience and mobility of the UAV can thus be used to collect traffic scene data in different spaces and at different times, enabling comprehensive testing of the automatic driving system and improving its test coverage.

In the embodiments of the present application, the UAV 10 can hover above the target road section and collect its traffic scene data; when the photographing device is used to collect traffic scene data, the data are specifically collected from a top-down angle, which can be understood as photographing the target road section in orthographic projection. Since the UAV photographs the road section from directly above, a large vehicle cannot block a small one, so the data error rate is low, which improves the accuracy of the performance test of the automatic driving system.

In some embodiments, the UAV 10 can adjust its flight attitude and/or the shooting angle of its onboard photographing device according to the road information of the target road section and collect the traffic scene data of the target road section. For example, if the target road section has a traffic sign, the flight attitude of the UAV and/or the shooting angle of the photographing device can be adjusted accordingly to prevent the sign from blocking part of the traffic scene, for example by raising or lowering the flight altitude or adjusting the shooting angle, thereby improving the quality of the collected traffic scene data.

In some embodiments, the hovering position and/or hovering height of the UAV 10 above the target road section is related to the traffic scene of the target road section. The traffic scene may be a vehicle scene or a facility scene of the target road.

Illustratively, when the UAV 10 recognizes a vehicle whose height exceeds a preset threshold (for example, the height of a truck) passing by, it can raise its hovering height to prevent the truck from blocking other vehicles, thereby collecting higher-quality traffic scene data.

Illustratively, when the UAV 10 recognizes that one side of a two-way road is congested while almost no vehicles pass on the other side, it can adjust its hovering position to collect the traffic scene data of the congested side, so that the collected data can include more target test scenes, thereby improving the testing efficiency of the automatic driving system.

In some embodiments, the UAV 10 can hover above the target road section within a specific time period to collect the traffic scene data of the target road section. The wind force in the specific time period is smaller than in other time periods, and/or the light intensity in the specific time period is greater than in other time periods. Illustratively, the specific time period is, for example, 8 a.m. to 5 p.m. on a sunny, windless day, which minimizes image shake caused by UAV movement and thereby improves the testing accuracy of the automatic driving system.
In some embodiments, to improve the efficiency of the performance test of the automatic driving system, determining the target test scene according to the traffic scene data may specifically include acquiring the scene elements corresponding to the target road section, determining the element parameters of the scene elements according to the traffic scene data, determining test scenes according to the scene elements and their element parameters, and determining the target test scene from the test scenes.

A scene element is a factor that can affect the automatic driving system and may be related to the target road section; different target road sections may include different scene elements. For example, the scene elements corresponding to target road sections on a highway and on an urban road are not entirely the same.

In the embodiments of the present application, the scene elements include one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements, and surrounding environment elements.

For example, to determine the element parameters of the scene elements according to the traffic scene data, it may first be determined that the traffic participant elements include pedestrians and/or non-motor vehicles, and then element parameters such as the speed, direction, and position of the pedestrians and/or non-motor vehicles are determined from the traffic scene data. As another example, it may be determined from the traffic scene data that a vehicle is of trailer type, along with element parameters such as its position, speed, direction, and outline dimensions.

To determine the target test scene, which may also be called a critical driving scene, from the test scenes, the scene determination conditions corresponding to the target test scene may be acquired, and the target test scene is determined from the test scenes according to these conditions, where the scene determination conditions are determined according to the scene elements of the target test scene, and different target test scenes correspond to different scene determination conditions.

To extract the target test scene, conditions expressing the target test scene must first be defined, specifically including an extraction criterion, a start condition, and an end condition.

Illustratively, when the target test scene is a vehicle cut-in scene, the ISO 34502 standard can optionally be followed: the extraction criterion for the cut-in scene is that the lateral velocity of the cutting-in vehicle remains in the same direction, the cutting-in vehicle moves out of its own lane from an adjacent lane, and there is no other vehicle between the cutting-in vehicle and the ego (target) vehicle. The start and end conditions of the cut-in scene are, respectively, that the lateral velocity of the cutting-in vehicle increases from zero and that it decreases back to zero. It should be noted that other target test scenes can also be defined according to the ISO 34502 standard.
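The start/end conditions above can be sketched as a simple trace scan. This is our own simplified illustration (function name and threshold are hypothetical; it covers only the lateral-velocity rules, not the lane-geometry checks):

```python
# Find candidate cut-in intervals in a lateral-velocity trace of the
# cutting-in vehicle: a scene starts when lateral speed rises from ~zero,
# ends when it returns to ~zero, and the direction must not reverse.
def extract_cut_ins(lat_v, eps=0.05):
    """lat_v: per-frame lateral velocity (m/s). Returns (start, end) index pairs."""
    events, start, sign = [], None, 0
    for i, v in enumerate(lat_v):
        if start is None:
            if abs(v) > eps:                  # start: lateral speed leaves zero
                start, sign = i, 1 if v > 0 else -1
        else:
            if abs(v) <= eps:                 # end: lateral speed back to zero
                events.append((start, i))
                start = None
            elif (v > 0) != (sign > 0):       # direction reversed: discard
                start = None
    return events
```

In a full extractor, each candidate interval would additionally be checked against the lane-crossing and no-intervening-vehicle criteria before being accepted as a cut-in scene.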
In some embodiments, according to the functional module to be tested in the automatic driving system, key scene elements related to the functional module may also be selected from the scene elements of the target road section, their element parameters determined according to the traffic scene data, and the target test scene obtained according to the key scene elements and the corresponding element parameters.

Illustratively, when the functional module to be tested is, for example, an Autonomous Emergency Braking (AEB) module, the scene elements related to the emergency braking module are selected, such as traffic participant elements, vehicle elements, auxiliary traffic facility elements, weather elements, and surrounding environment elements; the element parameters of the key scene elements are determined from the traffic scene data, and the parameters are filled into the corresponding key scene elements to obtain the target test scene.

Because the traffic scene data collected by the UAV contains too many scene elements, including many elements irrelevant to the traffic scene, verifying the performance of the automatic driving system directly with the raw traffic scene data is slow and the verification effect is not ideal. Therefore, the target test scene can also be determined by building a test scene framework, and the target test scene is then used to verify the performance of the automatic driving system.
In some embodiments, a method of building a test scene framework is provided, which specifically includes: acquiring scene elements that affect the automatic driving system and classifying them to obtain a test scene framework; extracting the element parameters of the scene elements from the traffic scene data according to the scene elements in the test scene framework, where the extraction includes recognizing the scene elements and their element parameters in the traffic scene data using algorithms such as object recognition algorithms, vehicle speed calculation algorithms, or neural network models; and filling the obtained element parameters into the corresponding scene elements of the test scene framework to obtain a target test scene framework, where the scene elements and element parameters included in the target test scene framework constitute the traffic scene data of the target test scene. This speeds up data processing while extracting the element parameters of the key scene elements in a targeted way to verify the performance of the automatic driving system, thereby also improving the accuracy of the verification.

Illustratively, a test scene framework is shown in FIG. 7 and may include traffic participant elements, vehicle elements, road elements, environment elements, and so on. Algorithms such as object recognition, vehicle speed calculation, or neural network models are then used to recognize these scene elements and their element parameters from the collected traffic scene data, and the recognized element parameters are filled into the corresponding scene elements of the framework to obtain a target test scene framework, as shown in FIG. 8. For example, it is recognized that there are no traffic participants; the relevant vehicle elements include a trailer and a light truck, with element parameters indicating that the trailer is behind the light truck at a distance of 40 m from it, the trailer's speed is 50 km/h, and the light truck's speed is 40 km/h while decelerating at 4 m/s²; the element parameters of the road elements are white dashed markings, a secondary arterial road, guardrails, and three one-way lanes; and the element parameters of the environment elements are daytime morning, office buildings and apartments nearby, trees on both sides, no noise, etc.

It should be noted that classifying the scene elements to obtain the test scene framework can be done according to the type relations of the scene elements; for example, as shown in FIG. 7, pedestrian elements and non-motor vehicle elements are classified as traffic participants, and traffic facilities and road information are classified under the road category.

In some embodiments, another method of building a test scene framework is provided, which specifically includes: acquiring scene elements that affect the automatic driving system; acquiring, from the scene elements, key scene elements related to the functional module to be tested in the automatic driving system, and classifying the key scene elements to obtain a test scene framework; extracting the corresponding element parameters from the traffic scene data according to the key scene elements in the test scene framework; and filling the extracted element parameters into the corresponding key scene elements of the framework to obtain a target test scene framework, whose scene elements and element parameters constitute the traffic scene data of the target test scene. This speeds up data processing while extracting the element parameters of the key scene elements in a targeted way to verify the performance of the automatic driving system, thereby also improving the accuracy of the verification.

Illustratively, suppose the AEB module of the automatic driving system needs to be tested and the UAV collects traffic scene data on an urban road. All scene elements that affect the functions of the automatic driving system are acquired. The scene elements relevant to the function of the AEB module, such as traffic participant elements, vehicle elements, auxiliary traffic facility elements, weather elements, and surrounding environment elements, are then selected to build the test scene framework. Finally, the corresponding element parameters of the scene elements are extracted from the traffic scene data collected by the UAV (specifically, from each frame of video data) and filled into the framework to obtain the target test scene framework.

For example, in a certain frame of video data: a trailer travels at 50 km/h following a light truck ahead traveling at the same speed; both vehicles travel in the same direction in the middle lane of a three-lane one-way road with a longitudinal distance of 40 m, and the light truck ahead of the trailer decelerates at 4 m/s².

It should be noted that in specific applications, the test scene framework may be embodied in the form of a data table, i.e., the test scene framework may be a test scene data table that includes all the contents of the framework; of course, the framework may also take the form of other structured data types, which is not limited here.
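The framework-and-fill idea above can be sketched with a small structured type. This is our own illustration (the class, method, and field names are hypothetical, not the patent's data layout), populated with the FIG. 8 example values:

```python
# A test scene framework: scene elements grouped into categories, with
# extracted element parameters filled into the matching category slots.
from dataclasses import dataclass, field

@dataclass
class SceneFrame:
    participants: dict = field(default_factory=dict)  # pedestrians, bicycles...
    vehicles: dict = field(default_factory=dict)      # type, speed, gap...
    road: dict = field(default_factory=dict)          # lanes, markings...
    environment: dict = field(default_factory=dict)   # time of day, weather...

    def fill(self, category, params):
        """Fill extracted element parameters into one category of the frame."""
        getattr(self, category).update(params)

frame = SceneFrame()
frame.fill("vehicles", {"trailer": {"speed_kmh": 50, "gap_m": 40},
                        "light_truck": {"speed_kmh": 40, "decel_ms2": 4}})
frame.fill("road", {"lanes": 3, "marking": "white dashed", "guardrail": True})
frame.fill("environment", {"time": "morning", "noise": "none"})
```

Serializing such a frame row by row yields exactly the "test scene data table" form mentioned above.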
Although the UAV has flight stabilization, many factors can still prevent the collected traffic scene data from being used directly for performance verification of the automatic driving system. For example, during data collection the UAV is affected by factors such as airframe vibration and sunlight, causing problems in the pixel matrices of the collected video data such as unclear targets, weak contrast, and overexposure, which increase the difficulty of detection. Therefore, the collected traffic scene data also needs to be preprocessed, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and region-of-interest setting; the specific processing method can be chosen according to the actual processing effect. It is understood that the traffic scene data used in the performance test of the automatic driving system described above are all preprocessed traffic scene data.
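Two of the listed steps can be sketched in a few lines. This is our own numpy-only stand-in (a real pipeline would typically use an image library such as OpenCV; the names and threshold are hypothetical), showing region-of-interest setting followed by image binarization:

```python
# Crop a region of interest from a grayscale frame, then binarize it
# with a fixed threshold (1 = bright foreground, 0 = background).
import numpy as np

def preprocess(gray, roi, thresh=128):
    """gray: 2-D uint8 array; roi: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    patch = gray[r0:r1, c0:c1]                   # set the region of interest
    binary = (patch >= thresh).astype(np.uint8)  # image binarization
    return binary

img = np.zeros((8, 8), dtype=np.uint8)
img[2:4, 2:4] = 200                              # a bright "vehicle" blob
mask = preprocess(img, (0, 8, 0, 8))
```

The other listed steps (foreground extraction, edge detection, morphological opening/closing) would operate on the same kind of mask before element parameters are extracted.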
In some embodiments, when testing the performance of the automatic driving system, if a functional module that does not meet the performance standard is found, the functional module is optimized; for example, sensors or other hardware with higher precision can be selected, or the software algorithms can be optimized.

For example, suppose the pedestrian detection module of the automatic driving system fails to recognize a child ahead in a certain test scene and an accident occurs. The pedestrian detection model in the pedestrian detection module is then optimized, where the pedestrian detection model is a model trained based on a neural network.

To this end, an embodiment of the present application further provides a method for optimizing a pedestrian detection model. It should be understood that the embodiment takes the pedestrian detection model as an example; of course, the optimization method can also be used to optimize other detection models, which is not limited here.

The model optimization method mainly includes: acquiring training sample data and training the classifier in the pedestrian detection model according to the training sample data; verifying whether the accuracy of the classifier in the pedestrian detection model meets the requirements; and, when it does, saving the parameters of the classifier in the pedestrian detection model, completing the optimization of the pedestrian detection model.

Before the training sample data are acquired, the training samples need to be collected, specifically: determining the pedestrian scene data in which the pedestrian detection model failed to recognize a pedestrian, where the pedestrian scene data include at least one video frame containing a pedestrian collected by the UAV; controlling the UAV to collect traffic scene data related to the pedestrian scene data, and processing the traffic scene data according to the image information of the video frames, for example through contour sampling, size cropping, and sample normalization, to obtain sample data. These sample data serve as the positive samples of the training set, and samples not containing pedestrians are split out from the sample data as negative samples; the positive and negative samples together constitute the training samples. Using these training samples, the optimized pedestrian detection model can achieve a high recognition accuracy for this scene.

Training the classifier in the pedestrian detection model according to the training sample data may specifically use a cascade classifier based on the LBP operator. The LBP operator is an operator describing the local texture features of an image; its rotation invariance and gray-scale invariance can mitigate the effects of illumination and similar conditions on the image, making it well suited to automatic driving systems. Specifically, the classifier can be trained with the positive samples: after the positive samples in the training set are normalized, a file conforming to the training format is output; at the same time, a file for reading the negative samples is output and put into training, and the classifier is trained in this loop several times to obtain the model recognition rate under the best parameters.
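The basic 3x3 LBP code mentioned above can be computed as follows. This is our own compact sketch (neighbour ordering and names are ours; the patent only names the operator, not an implementation), and it illustrates the gray-scale invariance: only the sign of each neighbour-minus-centre difference matters.

```python
# Compute the 8-bit LBP code of the centre pixel of a 3x3 patch.
import numpy as np

def lbp_code(patch):
    """patch: 3x3 array of intensities; returns an integer in [0, 255]."""
    c = patch[1, 1]
    # clockwise neighbours starting at top-left
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, n in enumerate(nbrs):
        if n >= c:               # threshold against the centre: sign only
            code |= 1 << bit
    return code

flat = np.full((3, 3), 7)        # uniform patch: every neighbour >= centre
```

Because adding a constant brightness offset to the whole patch leaves every comparison unchanged, the code is invariant to global illumination shifts, which is the property the text credits for robustness under varying light.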
Some samples are drawn from the training set to verify whether the accuracy of the classifier in the pedestrian detection model meets the requirements; for example, when the accuracy reaches 99% or more, the accuracy of the classifier is determined to meet the requirements. When the accuracy of the classifier in the pedestrian detection model meets the requirements, the parameters of the classifier are saved, completing the optimization of the pedestrian detection model.
The testing system provided by the above embodiments can use the aerial survey data of the UAV, specifically the collected traffic scene data, to complete the performance test of the automatic driving system. Since the UAV hovers above the target road section, it is not easily noticed by drivers, so the performance test of the automatic driving system in the natural state can be completed, thereby improving the safety of the automatic driving system in practical applications.
Referring to FIG. 9, FIG. 9 shows a schematic flow of a testing method for an automatic driving system provided by an embodiment of the present application. The testing method can be applied to the UAV or the testing device of the testing system provided by the above embodiments, to complete the performance test of the automatic driving system.

As shown in FIG. 9, the testing method for the automatic driving system includes steps S101 and S102.

S101: acquiring traffic scene data of a target road section, the traffic scene data being collected by a UAV hovering above the target road section;

S102: determining a target test scene according to the traffic scene data, and testing the automatic driving system according to the traffic scene data corresponding to the target test scene.

The traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data.

The road information data includes road facility information and/or road type information; the vehicle information data includes at least one of the following: vehicle type information, vehicle position information, vehicle speed information, vehicle travel direction information, and vehicle size information. The environment information data includes weather information and/or road surrounding environment information. The traffic participant information data includes pedestrian information and/or non-motor vehicle information.

The target test scene includes a vehicle cut-in scene, a vehicle cut-out scene, or a vehicle emergency braking scene. Of course, other target test scenes, such as a car-following scene, may also be included, which is not limited here.

In some embodiments, testing the automatic driving system according to the traffic scene data corresponding to the target test scene may specifically include first determining the driving result of the target vehicle in the target test scene according to the traffic scene data of the target test scene, then determining the response result of the automatic driving system for the target test scene according to the same traffic scene data, and comparing the difference between the driver's driving result and the automatic driving system's response result to determine the performance of the automatic driving system. This can test not only the safety of the automatic driving system but also make its performance more human-like.

In some embodiments, the target vehicle includes one vehicle; alternatively, the target vehicle includes a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein the first target vehicle and the second target vehicles are all used to test the performance of the automatic driving system. The improvement of traffic efficiency by the automatic driving system can thus be tested, thereby improving traffic efficiency.

In some embodiments, the UAV can adjust its flight attitude and/or the shooting angle of its onboard photographing device according to the road information of the target road section and collect the traffic scene data of the target road section, so that high-quality traffic scene data can be captured.

In some embodiments, the hovering position and/or hovering height of the UAV above the target road section is related to the traffic scene of the target road section; the traffic scene may be a vehicle scene or a facility scene of the target road, so that higher-quality traffic scene data can be collected.

In some embodiments, the UAV can hover above the target road section within a specific time period to collect the traffic scene data. The wind force in the specific time period is smaller than in other time periods, and/or the light intensity in the specific time period is greater than in other time periods. Illustratively, the specific time period is, for example, 8 a.m. to 5 p.m. on a sunny, windless day, which minimizes image shake caused by UAV movement and thereby improves the testing accuracy of the automatic driving system.

In some embodiments, determining the target test scene according to the traffic scene data may specifically include acquiring the scene elements corresponding to the target road section, a scene element being a factor that can affect the automatic driving system; determining the element parameters of the scene elements according to the traffic scene data; determining test scenes according to the scene elements and their element parameters; and determining the target test scene from the test scenes. This improves the accuracy and efficiency of the performance test of the automatic driving system.

The scene elements include one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements, and surrounding environment elements.

To determine the target test scene from the test scenes, the scene determination conditions corresponding to the target test scene may be acquired, and the target test scene determined from the test scenes according to these conditions, where the scene determination conditions are determined according to the scene elements of the target test scene, and different target test scenes correspond to different scene determination conditions.

In some embodiments, according to the functional module to be tested in the automatic driving system, key scene elements related to the functional module may also be selected from the scene elements of the target road section, their element parameters determined according to the traffic scene data, and the target test scene obtained according to the key scene elements and the corresponding element parameters. Illustratively, the functional module to be tested is, for example, an Autonomous Emergency Braking (AEB) module.

In some embodiments, to determine the target test scene quickly and improve the testing accuracy of the automatic driving system, scene elements that affect the automatic driving system may be acquired and classified to obtain a test scene framework; the corresponding element parameters are extracted from the traffic scene data according to the scene elements in the framework, and the element parameters are filled into the corresponding scene elements of the framework to obtain the target test scene, as shown in FIG. 7 and FIG. 8.

In other embodiments, to determine the target test scene quickly and improve the testing accuracy of the automatic driving system, scene elements that affect the automatic driving system may be acquired; key scene elements related to the functional module to be tested in the automatic driving system are acquired from the scene elements and classified to obtain a test scene framework; the corresponding element parameters are extracted from the traffic scene data according to the key scene elements in the framework; and the element parameters are filled into the corresponding key scene elements of the framework to obtain the target test scene.

In the embodiments of the testing method provided by the present application, the response result of the automatic driving system for the target test scene can be determined based on an automatic driving simulator according to the traffic scene data of the target test scene. Specifically, the traffic scene data corresponding to the target test scene can be input to the automatic driving simulator, which outputs the response result for that target test scene.

In some embodiments, the automatic driving simulator may include a software simulation platform or a driving simulator.

In some embodiments, the collected traffic scene data may also first be preprocessed, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and region-of-interest setting; the specific processing method can be chosen according to the actual processing effect. It is understood that the traffic scene data used in the performance test of the automatic driving system described above are all preprocessed traffic scene data.

In some embodiments, the testing method may further include, when testing the automatic driving system, determining the functional module of the automatic driving system that does not meet the performance standard and optimizing that functional module, thereby improving the performance of the automatic driving system.

Illustratively, when the functional module that does not meet the performance standard is the pedestrian detection module, training sample data may be acquired and the classifier in the pedestrian detection model optimized through training according to the training sample data; whether the accuracy of the classifier in the pedestrian detection model meets the requirements is verified, and when it does, the parameters of the classifier in the pedestrian detection model are saved.

The testing method provided by the above embodiments can use the traffic scene data collected by the UAV to complete the performance test of the automatic driving system. Since the UAV hovers above the target road section, it is not easily noticed by drivers, so the performance test of the automatic driving system in the natural state can be completed, thereby improving the safety of the automatic driving system in practical applications.
An embodiment of the present application further provides a UAV, specifically as shown in FIG. 2. The UAV 10 includes a body 11, a gimbal 12, and a photographing device 13; the gimbal 12 is disposed on the body 11, and the photographing device 13 is disposed on the gimbal 12 and is used to capture images.

The UAV 10 further includes a processor and a memory; the memory is used to store a computer program, and the processor is used to execute the computer program and, when executing the computer program, to perform the following operations:

controlling the UAV to hover above a target road section and collecting the traffic scene data of the target road section; determining a target test scene according to the traffic scene data, and testing the automatic driving system according to the traffic scene data corresponding to the target test scene.

In some embodiments, the traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data; wherein the road information data includes road facility information and/or road type information; the vehicle information data includes at least one of the following: vehicle type information, vehicle position information, vehicle speed information, vehicle travel direction information, and vehicle size information; the environment information data includes weather information and/or road surrounding environment information; and the traffic participant information data includes pedestrian information and/or non-motor vehicle information.

In some embodiments, the target test scene includes a vehicle cut-in scene, a vehicle cut-out scene, or a vehicle emergency braking scene.

In some embodiments, testing the automatic driving system according to the traffic scene data corresponding to the target test scene includes:

determining the driving result of the target vehicle in the target test scene according to the traffic scene data of the target test scene; determining the response result of the automatic driving system for the target test scene according to the traffic scene data of the target test scene; and comparing the difference between the driving result and the response result to determine the performance of the automatic driving system.

In some embodiments, the target vehicle includes one vehicle; alternatively, the target vehicle includes a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein the first target vehicle and the second target vehicles are all used to test the performance of the automatic driving system.

In some embodiments, the UAV can adjust its flight attitude and/or the shooting angle of its onboard photographing device according to the road information of the target road section and collect the traffic scene data of the target road section.

In some embodiments, the hovering position and/or hovering height of the UAV above the target road section is related to the traffic scene of the target road section.

In some embodiments, the UAV can hover above the target road section within a specific time period to collect the traffic scene data of the target road section.

In some embodiments, the wind force in the specific time period is smaller than in other time periods, and/or the light intensity in the specific time period is greater than in other time periods.

In some embodiments, determining the target test scene according to the traffic scene data includes:

acquiring the scene elements corresponding to the target road section, a scene element being a factor that can affect the automatic driving system; determining the element parameters of the scene elements according to the traffic scene data; determining test scenes according to the scene elements and their element parameters, and determining the target test scene from the test scenes.

In some embodiments, the scene elements include one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements, and surrounding environment elements.

In some embodiments, the processor is further configured to: acquire the scene determination conditions corresponding to the target test scene, and determine the target test scene from the test scenes according to the scene determination conditions; wherein the scene determination conditions are determined according to the scene elements of the target test scene, and different target test scenes correspond to different scene determination conditions.

In some embodiments, the processor is configured to: select, according to the functional module to be tested in the automatic driving system, key scene elements related to the functional module from the scene elements of the target road section; determine the element parameters of the key scene elements according to the traffic scene data; and obtain the target test scene according to the key scene elements and the corresponding element parameters.

In some embodiments, the processor is configured to: acquire scene elements that affect the automatic driving system, and classify the scene elements to obtain a test scene framework; extract the corresponding element parameters from the traffic scene data according to the scene elements in the test scene framework; and fill the element parameters into the corresponding scene elements of the test scene framework to obtain the target test scene.

In some embodiments, the processor is configured to:

acquire scene elements that affect the automatic driving system; acquire, from the scene elements, key scene elements related to the functional module to be tested in the automatic driving system, and classify the key scene elements to obtain a test scene framework; extract the corresponding element parameters from the traffic scene data according to the key scene elements in the test scene framework; and fill the element parameters into the corresponding key scene elements of the test scene framework to obtain the target test scene.

In some embodiments, the processor is configured to: determine, based on an automatic driving simulator, the response result of the automatic driving system for the target test scene according to the traffic scene data of the target test scene.

In some embodiments, the automatic driving simulator may include a software simulation platform or a driving simulator.

In some embodiments, the processor is further configured to: preprocess the traffic scene data; wherein the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and region-of-interest setting.

In some embodiments, the processor is further configured to: determine the functional module of the automatic driving system that does not meet the performance standard, and optimize that functional module.

In some embodiments, the functional module includes a pedestrian detection module; the processor is configured to: acquire training sample data and perform optimization training on the classifier in the pedestrian detection model according to the training sample data; verify whether the accuracy of the classifier in the pedestrian detection model meets the requirements, and, when it does, save the parameters of the classifier in the pedestrian detection model.
Referring to FIG. 10, FIG. 10 is a schematic block diagram of a testing device provided by an embodiment of the present application. As shown in FIG. 10, the testing device 200 includes one or more processors 201 and a memory 202.

The processor 201 may be, for example, a micro-controller unit (MCU), a central processing unit (CPU), or a digital signal processor (DSP).

The memory 202 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, or a removable hard disk.

The memory 202 is used to store a computer program; the processor 201 is used to execute the computer program and, when executing the computer program, to perform any one of the testing methods for an automatic driving system provided in the embodiments of the present application, so as to realize the performance test of the automatic driving system in the natural state, thereby improving the safety of the automatic driving system in practical applications.

Illustratively, the processor 201 is used to execute the computer program and, when executing the computer program, to perform the following operations:

controlling the UAV to hover above a target road section and collecting the traffic scene data of the target road section; determining a target test scene according to the traffic scene data, and testing the automatic driving system according to the traffic scene data corresponding to the target test scene.

In some embodiments, the traffic scene data includes at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data; wherein the road information data includes road facility information and/or road type information; the vehicle information data includes at least one of the following: vehicle type information, vehicle position information, vehicle speed information, vehicle travel direction information, and vehicle size information; the environment information data includes weather information and/or road surrounding environment information; and the traffic participant information data includes pedestrian information and/or non-motor vehicle information.

In some embodiments, the target test scene includes a vehicle cut-in scene, a vehicle cut-out scene, or a vehicle emergency braking scene.

In some embodiments, testing the automatic driving system according to the traffic scene data corresponding to the target test scene includes:

determining the driving result of the target vehicle in the target test scene according to the traffic scene data of the target test scene; determining the response result of the automatic driving system for the target test scene according to the traffic scene data of the target test scene; and comparing the difference between the driving result and the response result to determine the performance of the automatic driving system.

In some embodiments, the target vehicle includes one vehicle; alternatively, the target vehicle includes a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein the first target vehicle and the second target vehicles are all used to test the performance of the automatic driving system.

In some embodiments, the UAV can adjust its flight attitude and/or the shooting angle of its onboard photographing device according to the road information of the target road section and collect the traffic scene data of the target road section.

In some embodiments, the hovering position and/or hovering height of the UAV above the target road section is related to the traffic scene of the target road section.

In some embodiments, the UAV can hover above the target road section within a specific time period to collect the traffic scene data of the target road section.

In some embodiments, the wind force in the specific time period is smaller than in other time periods, and/or the light intensity in the specific time period is greater than in other time periods.

In some embodiments, determining the target test scene according to the traffic scene data includes:

acquiring the scene elements corresponding to the target road section, a scene element being a factor that can affect the automatic driving system; determining the element parameters of the scene elements according to the traffic scene data; determining test scenes according to the scene elements and their element parameters, and determining the target test scene from the test scenes.

In some embodiments, the scene elements include one or more of traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements, and surrounding environment elements.

In some embodiments, the processor is further configured to: acquire the scene determination conditions corresponding to the target test scene, and determine the target test scene from the test scenes according to the scene determination conditions; wherein the scene determination conditions are determined according to the scene elements of the target test scene, and different target test scenes correspond to different scene determination conditions.

In some embodiments, the processor is configured to: select, according to the functional module to be tested in the automatic driving system, key scene elements related to the functional module from the scene elements of the target road section; determine the element parameters of the key scene elements according to the traffic scene data; and obtain the target test scene according to the key scene elements and the corresponding element parameters.

In some embodiments, the processor is configured to: acquire scene elements that affect the automatic driving system, and classify the scene elements to obtain a test scene framework; extract the corresponding element parameters from the traffic scene data according to the scene elements in the test scene framework; and fill the element parameters into the corresponding scene elements of the test scene framework to obtain the target test scene.

In some embodiments, the processor is configured to:

acquire scene elements that affect the automatic driving system; acquire, from the scene elements, key scene elements related to the functional module to be tested in the automatic driving system, and classify the key scene elements to obtain a test scene framework; extract the corresponding element parameters from the traffic scene data according to the key scene elements in the test scene framework; and fill the element parameters into the corresponding key scene elements of the test scene framework to obtain the target test scene.

In some embodiments, the processor is configured to: determine, based on an automatic driving simulator, the response result of the automatic driving system for the target test scene according to the traffic scene data of the target test scene.

In some embodiments, the automatic driving simulator may include a software simulation platform or a driving simulator.

In some embodiments, the processor is further configured to: preprocess the traffic scene data; wherein the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and region-of-interest setting.

In some embodiments, the processor is further configured to: determine the functional module of the automatic driving system that does not meet the performance standard, and optimize that functional module.

In some embodiments, the functional module includes a pedestrian detection module; the processor is configured to: acquire training sample data and perform optimization training on the classifier in the pedestrian detection model according to the training sample data; verify whether the accuracy of the classifier in the pedestrian detection model meets the requirements, and, when it does, save the parameters of the classifier in the pedestrian detection model.

In addition, an embodiment of the present application further provides a computer-readable storage medium storing a computer program; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of any one of the testing methods for an automatic driving system provided in the above embodiments.

The computer-readable storage medium may be an internal storage unit of the testing device or the UAV described in any of the foregoing embodiments, for example the memory of the testing device. The computer-readable storage medium may also be an external storage device of the testing device, for example a plug-in hard disk provided on the testing device, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
The above are only specific implementations of the present application, but the scope of protection of the present application is not limited thereto. Any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and these modifications or replacements shall all fall within the scope of protection of the present application. Therefore, the scope of protection of the present application shall be subject to the scope of protection of the claims.

Claims (62)

  1. 一种测试系统,其特征在于,用于测试自动驾驶系统的性能,所述测试系统包括:
    无人机,所述无人机能够悬停在目标路段的上方,并采集所述目标路段的交通场景数据;
    测试装置,用于与所述无人机通信连接,所述测试装置用于根据所述交通场景数据确定目标测试场景,并根据所述目标测试场景对应的交通场景数据对所述自动驾驶系统进行测试。
  2. 根据权利要求1所述的测试系统,其特征在于,所述交通场景数据包括如下至少一种:道路信息数据、车辆信息数据、环境信息数据和交通参与者信息数据;
    其中,所述道路信息数据包括道路设施信息和/或道路类型信息;所述车辆信息数据包括如下至少一种:车型信息、车辆位置信息、车辆行驶速度信息、车辆行驶方向信息和车辆尺寸信息;所述环境信息数据包括天气信息和/或道路周围环境信息;所述交通参与者信息数据包括行人信息和/非机动车信息。
  3. 根据权利要求1所述的测试系统,其特征在于,所述目标测试场景包括车辆切入场景、车辆切出场景或车辆紧急制动场景。
  4. 根据权利要求1所述的测试系统,其特征在于,所述根据所述目标测试场景对应的交通场景数据对所述自动驾驶系统进行测试,包括:
    根据所述目标测试场景的交通场景数据确定所述目标测试场景中目标车辆的行驶结果;
    根据所述目标测试场景的交通场景数据,确定所述自动驾驶系统针对所述目标测试场景的应对结果;
    比对所述驾驶结果和所述应对结果差异,确定所述自动驾驶系统的性能。
  5. 根据权利要求4所述的测试系统,其特征在于,所述目标车辆包括一个车辆;或者,
    所述目标车辆包括一个第一目标车辆和多个与所述第一目标车辆相关的第二目标车辆,其中,所述第一目标车辆和第二目标车辆均用于测试所述自动驾 驶系统的性能。
  6. 根据权利要求1所述的测试系统,其特征在于,所述无人机能够根据目标路段的道路信息调整其飞行姿态和/或其搭载的拍摄装置的拍摄角度,采集所述目标路段的交通场景数据。
  7. 根据权利要求1所述的测试系统,其特征在于,所述无人机在所述目标路段上空的悬停位置和/或悬停高度与所述目标路段的交通场景相关。
  8. 根据权利要求1所述的测试系统,其特征在于,所述无人机能够在特定时间段悬停在所述目标路段的上方,采集所述目标路段的交通场景数据。
  9. 根据权利要求8所述的测试系统,其特征在于,所述特定时间段的风力小于其他时间段的风力,和/或,所述特定时间段的光线强度大于其他时间段的光线强度。
  10. 根据权利要求1-9任一项所述的测试系统,其特征在于,所述根据所述交通场景数据确定目标测试场景,包括:
    获取所述目标路段对应的场景元素,所述场景元素为能够影响所述自动驾驶系统的因素;
    根据所述交通场景数据确定所述场景元素的元素参数;
    根据所述场景元素和所述场景元素的元素参数确定测试场景,以及从所述测试场景中确定目标测试场景。
  11. 根据权利要求10所述的测试系统,其特征在于,所述场景元素包括:交通参与者元素、车辆信息元素、交通设施元素、道路信息元素、天气元素和周围环境元素中的一项或多项。
  12. 根据权利要求10所述的测试系统,其特征在于,所述测试装置还用于:
    获取目标测试场景对应的场景确定条件,根据所述场景确定条件从所述测试场景中确定目标测试场景;
    其中,所述场景确定条件是根据所述目标测试场景的场景元素确定的,不同的所述目标测试场景对应不同的场景确定条件。
  13. 根据权利要求1-9任一项所述的测试系统,其特征在于,所述测试装置用于:
    根据所述自动驾驶系统中需要测试的功能模块,从所述目标路段的场景元素中选择与所述功能模块相关的关键场景元素;
    根据所述交通场景数据确定所述关键场景元素的元素参数;以及
    根据所述关键场景元素和对应的元素参数,得到目标测试场景。
  14. 根据权利要求1-9任一项所述的测试系统,其特征在于,所述测试装置用于:
    获取对所述自动驾驶系统有影响的场景元素,对所述场景元素进行分类得到测试场景框架;
    根据所述测试场景框架中的场景元素从所述交通场景数据中提取相应的元素参数;
    将所述元素参数填充到所述测试场景框架的对应场景元素中,得到目标测试场景。
  15. 根据权利要求1-9任一项所述的测试系统,其特征在于,所述测试装置用于:
    获取对所述自动驾驶系统有影响的场景元素;
    从所述场景元素中获取与所述自动驾驶系统中待测功能模块相关的关键场景元素,对所述关键场景元素进行分类得到测试场景框架;
    根据所述测试场景框架中的关键场景元素从所述交通场景数据中提取对应的元素参数;
    将所述元素参数填充到所述测试场景框架的对应关键场景元素中,得到目标测试场景。
  16. 根据权利要求1所述的测试系统,其特征在于,所述测试系统包括:
    自动驾驶仿真器,所述自动驾驶仿真器用于根据所述目标测试场景的交通场景数据确定所述自动驾驶系统针对所述目标测试场景的应对结果。
  17. 根据权利要求16所述的测试系统,其特征在于,所述测试装置包括所述自动驾驶仿真器。
  18. 根据权利要求1所述的测试系统,其特征在于,所述测试装置还用于:
    对所述交通场景数据进行预处理;
    其中,所述预处理包括前景提取、色彩空间转换、影像二值化、边缘检测、形态学处理和设定感兴趣区域中的至少一项。
  19. 根据权利要求1所述的测试系统,其特征在于,所述测试装置还用于:
    确定所述自动驾驶系统中不满足性能标准的功能模块,优化所述功能模块。
  20. 根据权利要求19所述的测试系统,其特征在于,所述功能模块包括行人检测模块;所述测试装置用于:
    获取训练样本数据,根据所述训练样本数据对所述行人检测模型中的分类器进行优化训练;
    验证所述行人检测模型中的分类器的准确率是否满足要求,在所述行人检测模型中的分类器的准确率满足要求时,保存所述行人检测模型中的分类器的参数。
  21. A test method for an automatic driving system, wherein the test method comprises:
    obtaining traffic scene data of a target road section, the traffic scene data being collected by an unmanned aerial vehicle hovering above the target road section;
    determining a target test scenario according to the traffic scene data, and testing the automatic driving system according to the traffic scene data corresponding to the target test scenario.
  22. The test method according to claim 21, wherein the traffic scene data comprises at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data;
    wherein the road information data comprises road facility information and/or road type information; the vehicle information data comprises at least one of the following: vehicle model information, vehicle position information, vehicle speed information, vehicle driving direction information, and vehicle size information; the environment information data comprises weather information and/or road surrounding environment information; and the traffic participant information data comprises pedestrian information and/or non-motor-vehicle information.
  23. The test method according to claim 21, wherein the target test scenario comprises a vehicle cut-in scenario, a vehicle cut-out scenario, or a vehicle emergency braking scenario.
  24. The test method according to claim 21, wherein testing the automatic driving system according to the traffic scene data corresponding to the target test scenario comprises:
    determining, according to the traffic scene data of the target test scenario, a driving result of a target vehicle in the target test scenario;
    determining, according to the traffic scene data of the target test scenario, a response result of the automatic driving system for the target test scenario;
    comparing the difference between the driving result and the response result to determine the performance of the automatic driving system.
  25. The test method according to claim 24, wherein the target vehicle comprises one vehicle; or
    the target vehicle comprises a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein both the first target vehicle and the second target vehicles are used to test the performance of the automatic driving system.
  26. The test method according to claim 21, wherein the unmanned aerial vehicle is capable of adjusting its flight attitude and/or the shooting angle of the photographing device it carries according to road information of the target road section, so as to collect the traffic scene data of the target road section.
  27. The test method according to claim 21, wherein the hovering position and/or hovering height of the unmanned aerial vehicle above the target road section is related to the traffic scene of the target road section.
  28. The test method according to claim 21, wherein the unmanned aerial vehicle is capable of hovering above the target road section during a specific time period to collect the traffic scene data of the target road section.
  29. The test method according to claim 28, wherein the wind force during the specific time period is lower than that during other time periods, and/or the light intensity during the specific time period is higher than that during other time periods.
  30. The test method according to any one of claims 21-29, wherein determining the target test scenario according to the traffic scene data comprises:
    obtaining scene elements corresponding to the target road section, the scene elements being factors capable of affecting the automatic driving system;
    determining element parameters of the scene elements according to the traffic scene data;
    determining test scenarios according to the scene elements and their element parameters, and determining the target test scenario from the test scenarios.
  31. The test method according to claim 30, wherein the scene elements comprise one or more of: traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements, and surrounding environment elements.
  32. The test method according to claim 30, wherein the test method further comprises:
    obtaining a scenario determination condition corresponding to the target test scenario, and determining the target test scenario from the test scenarios according to the scenario determination condition;
    wherein the scenario determination condition is determined according to the scene elements of the target test scenario, and different target test scenarios correspond to different scenario determination conditions.
  33. The test method according to any one of claims 21-29, wherein the test method comprises:
    selecting, according to a functional module of the automatic driving system that needs to be tested, key scene elements related to the functional module from the scene elements of the target road section;
    determining element parameters of the key scene elements according to the traffic scene data; and
    obtaining the target test scenario according to the key scene elements and the corresponding element parameters.
  34. The test method according to any one of claims 21-29, wherein the test method comprises:
    obtaining scene elements that affect the automatic driving system, and classifying the scene elements to obtain a test scenario framework;
    extracting corresponding element parameters from the traffic scene data according to the scene elements in the test scenario framework;
    filling the element parameters into the corresponding scene elements of the test scenario framework to obtain the target test scenario.
  35. The test method according to any one of claims 21-29, wherein the test method comprises:
    obtaining scene elements that affect the automatic driving system;
    obtaining, from the scene elements, key scene elements related to a functional module of the automatic driving system to be tested, and classifying the key scene elements to obtain a test scenario framework;
    extracting corresponding element parameters from the traffic scene data according to the key scene elements in the test scenario framework;
    filling the element parameters into the corresponding key scene elements of the test scenario framework to obtain the target test scenario.
  36. The test method according to claim 21, wherein the test method comprises:
    determining, based on an automatic driving simulator and according to the traffic scene data of the target test scenario, the response result of the automatic driving system for the target test scenario.
  37. The test method according to claim 36, wherein the automatic driving simulator may comprise a software simulation platform or a simulation emulator.
  38. The test method according to claim 21, wherein the test method comprises:
    preprocessing the traffic scene data;
    wherein the preprocessing comprises at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and setting a region of interest.
  39. The test method according to claim 21, wherein the test method comprises:
    determining a functional module of the automatic driving system that does not meet a performance standard, and optimizing the functional module.
  40. The test method according to claim 39, wherein the functional module comprises a pedestrian detection module; the test method comprises:
    obtaining training sample data, and performing optimization training on a classifier of the pedestrian detection model according to the training sample data;
    verifying whether the accuracy of the classifier of the pedestrian detection model meets a requirement, and saving the parameters of the classifier of the pedestrian detection model when the accuracy meets the requirement.
  41. An unmanned aerial vehicle, wherein the unmanned aerial vehicle comprises:
    a body;
    a gimbal, the gimbal being disposed on the body;
    a photographing device, disposed on the gimbal and configured to capture images;
    wherein the unmanned aerial vehicle further comprises a processor and a memory, the memory being configured to store a computer program, and the processor being configured to execute the computer program and, when executing the computer program, perform the following operations:
    controlling the unmanned aerial vehicle to hover above a target road section, and collecting traffic scene data of the target road section;
    determining a target test scenario according to the traffic scene data, and testing an automatic driving system according to the traffic scene data corresponding to the target test scenario.
  42. The unmanned aerial vehicle according to claim 41, wherein the traffic scene data comprises at least one of the following: road information data, vehicle information data, environment information data, and traffic participant information data;
    wherein the road information data comprises road facility information and/or road type information; the vehicle information data comprises at least one of the following: vehicle model information, vehicle position information, vehicle speed information, vehicle driving direction information, and vehicle size information; the environment information data comprises weather information and/or road surrounding environment information; and the traffic participant information data comprises pedestrian information and/or non-motor-vehicle information.
  43. The unmanned aerial vehicle according to claim 41, wherein the target test scenario comprises a vehicle cut-in scenario, a vehicle cut-out scenario, or a vehicle emergency braking scenario.
  44. The unmanned aerial vehicle according to claim 41, wherein testing the automatic driving system according to the traffic scene data corresponding to the target test scenario comprises:
    determining, according to the traffic scene data of the target test scenario, a driving result of a target vehicle in the target test scenario;
    determining, according to the traffic scene data of the target test scenario, a response result of the automatic driving system for the target test scenario;
    comparing the difference between the driving result and the response result to determine the performance of the automatic driving system.
  45. The unmanned aerial vehicle according to claim 44, wherein the target vehicle comprises one vehicle; or
    the target vehicle comprises a first target vehicle and a plurality of second target vehicles related to the first target vehicle, wherein both the first target vehicle and the second target vehicles are used to test the performance of the automatic driving system.
  46. The unmanned aerial vehicle according to claim 41, wherein the unmanned aerial vehicle is capable of adjusting its flight attitude and/or the shooting angle of the photographing device it carries according to road information of the target road section, so as to collect the traffic scene data of the target road section.
  47. The unmanned aerial vehicle according to claim 41, wherein the hovering position and/or hovering height of the unmanned aerial vehicle above the target road section is related to the traffic scene of the target road section.
  48. The unmanned aerial vehicle according to claim 41, wherein the unmanned aerial vehicle is capable of hovering above the target road section during a specific time period to collect the traffic scene data of the target road section.
  49. The unmanned aerial vehicle according to claim 48, wherein the wind force during the specific time period is lower than that during other time periods, and/or the light intensity during the specific time period is higher than that during other time periods.
  50. The unmanned aerial vehicle according to any one of claims 41-49, wherein determining the target test scenario according to the traffic scene data comprises:
    obtaining scene elements corresponding to the target road section, the scene elements being factors capable of affecting the automatic driving system;
    determining element parameters of the scene elements according to the traffic scene data;
    determining test scenarios according to the scene elements and their element parameters, and determining the target test scenario from the test scenarios.
  51. The unmanned aerial vehicle according to claim 50, wherein the scene elements comprise one or more of: traffic participant elements, vehicle information elements, traffic facility elements, road information elements, weather elements, and surrounding environment elements.
  52. The unmanned aerial vehicle according to claim 50, wherein the processor is further configured to:
    obtain a scenario determination condition corresponding to the target test scenario, and determine the target test scenario from the test scenarios according to the scenario determination condition;
    wherein the scenario determination condition is determined according to the scene elements of the target test scenario, and different target test scenarios correspond to different scenario determination conditions.
  53. The unmanned aerial vehicle according to any one of claims 41-49, wherein the processor is configured to:
    select, according to a functional module of the automatic driving system that needs to be tested, key scene elements related to the functional module from the scene elements of the target road section;
    determine element parameters of the key scene elements according to the traffic scene data; and
    obtain the target test scenario according to the key scene elements and the corresponding element parameters.
  54. The unmanned aerial vehicle according to any one of claims 41-49, wherein the processor is configured to:
    obtain scene elements that affect the automatic driving system, and classify the scene elements to obtain a test scenario framework;
    extract corresponding element parameters from the traffic scene data according to the scene elements in the test scenario framework;
    fill the element parameters into the corresponding scene elements of the test scenario framework to obtain the target test scenario.
  55. The unmanned aerial vehicle according to any one of claims 41-49, wherein the processor is configured to:
    obtain scene elements that affect the automatic driving system;
    obtain, from the scene elements, key scene elements related to a functional module of the automatic driving system to be tested, and classify the key scene elements to obtain a test scenario framework;
    extract corresponding element parameters from the traffic scene data according to the key scene elements in the test scenario framework;
    fill the element parameters into the corresponding key scene elements of the test scenario framework to obtain the target test scenario.
  56. The unmanned aerial vehicle according to claim 41, wherein the processor is configured to:
    determine, based on an automatic driving simulator and according to the traffic scene data of the target test scenario, the response result of the automatic driving system for the target test scenario.
  57. The unmanned aerial vehicle according to claim 56, wherein the automatic driving simulator may comprise a software simulation platform or a simulation emulator.
  58. The unmanned aerial vehicle according to claim 41, wherein the processor is further configured to:
    preprocess the traffic scene data;
    wherein the preprocessing comprises at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and setting a region of interest.
  59. The unmanned aerial vehicle according to claim 41, wherein the processor is further configured to:
    determine a functional module of the automatic driving system that does not meet a performance standard, and optimize the functional module.
  60. The unmanned aerial vehicle according to claim 59, wherein the functional module comprises a pedestrian detection module; the processor is configured to:
    obtain training sample data, and perform optimization training on a classifier of the pedestrian detection model according to the training sample data;
    verify whether the accuracy of the classifier of the pedestrian detection model meets a requirement, and save the parameters of the classifier of the pedestrian detection model when the accuracy meets the requirement.
  61. A test apparatus, wherein the test apparatus comprises a processor and a memory;
    the memory is configured to store a computer program; and
    the processor is configured to execute the computer program and, when executing the computer program, implement the test method for an automatic driving system according to any one of claims 21-40.
  62. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the test method for an automatic driving system according to any one of claims 21-40.
PCT/CN2021/096994 2021-05-28 2021-05-28 Automatic driving system test method based on aerial survey data, test system and storage medium WO2022246852A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/096994 WO2022246852A1 (zh) 2021-05-28 2021-05-28 Automatic driving system test method based on aerial survey data, test system and storage medium
CN202180087995.8A CN116829919A (zh) 2021-05-28 2021-05-28 Automatic driving system test method based on aerial survey data, test system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/096994 WO2022246852A1 (zh) 2021-05-28 2021-05-28 Automatic driving system test method based on aerial survey data, test system and storage medium

Publications (1)

Publication Number Publication Date
WO2022246852A1 true WO2022246852A1 (zh) 2022-12-01

Family

ID=84229471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096994 WO2022246852A1 (zh) 2021-05-28 2021-05-28 Automatic driving system test method based on aerial survey data, test system and storage medium

Country Status (2)

Country Link
CN (1) CN116829919A (zh)
WO (1) WO2022246852A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115979679A (zh) * 2023-03-22 2023-04-18 中国汽车技术研究中心有限公司 Actual road test method and device for automatic driving system, and storage medium
CN116125953A (zh) * 2023-02-22 2023-05-16 吉林大学 Aircraft-based vehicle monitoring system and method
CN116183242A (zh) * 2023-02-22 2023-05-30 吉林大学 Automatic driving test method and device based on simulated rainfall environment, and storage medium
CN116204441A (zh) * 2023-03-17 2023-06-02 百度时代网络技术(北京)有限公司 Performance test method, apparatus and device for index data structure, and storage medium
CN117171290A (zh) * 2023-11-03 2023-12-05 安徽蔚来智驾科技有限公司 Method and system for determining a safe driving area, and automatic driving method and system
CN117744366A (zh) * 2023-12-19 2024-03-22 万物镜像(北京)计算机系统有限公司 Method, apparatus and device for generating edge-case simulation test scenarios for automatic driving

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180354512A1 (en) * 2017-06-09 2018-12-13 Baidu Online Network Technology (Beijing) Co., Ltd. Driverless Vehicle Testing Method and Apparatus, Device and Storage Medium
CN110059393A (zh) * 2019-04-11 2019-07-26 东软睿驰汽车技术(沈阳)有限公司 Simulation test method, apparatus and system for a vehicle
CN112634610A (zh) * 2020-12-14 2021-04-09 北京智能车联产业创新中心有限公司 Natural driving data collection method and apparatus, electronic device, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180354512A1 (en) * 2017-06-09 2018-12-13 Baidu Online Network Technology (Beijing) Co., Ltd. Driverless Vehicle Testing Method and Apparatus, Device and Storage Medium
CN109032102A (zh) * 2017-06-09 2018-12-18 Baidu Online Network Technology (Beijing) Co., Ltd. Driverless vehicle test method, apparatus, device and storage medium
CN110059393A (zh) * 2019-04-11 2019-07-26 东软睿驰汽车技术(沈阳)有限公司 Simulation test method, apparatus and system for a vehicle
CN112634610A (zh) * 2020-12-14 2021-04-09 北京智能车联产业创新中心有限公司 Natural driving data collection method and apparatus, electronic device, and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116125953A (zh) * 2023-02-22 2023-05-16 吉林大学 Aircraft-based vehicle monitoring system and method
CN116183242A (zh) * 2023-02-22 2023-05-30 吉林大学 Automatic driving test method and device based on simulated rainfall environment, and storage medium
CN116204441A (zh) * 2023-03-17 2023-06-02 百度时代网络技术(北京)有限公司 Performance test method, apparatus and device for index data structure, and storage medium
CN115979679A (zh) * 2023-03-22 2023-04-18 中国汽车技术研究中心有限公司 Actual road test method and device for automatic driving system, and storage medium
CN117171290A (zh) * 2023-11-03 2023-12-05 安徽蔚来智驾科技有限公司 Method and system for determining a safe driving area, and automatic driving method and system
CN117171290B (zh) * 2023-11-03 2024-04-16 安徽蔚来智驾科技有限公司 Method and system for determining a safe driving area, and automatic driving method and system
CN117744366A (zh) * 2023-12-19 2024-03-22 万物镜像(北京)计算机系统有限公司 Method, apparatus and device for generating edge-case simulation test scenarios for automatic driving

Also Published As

Publication number Publication date
CN116829919A (zh) 2023-09-29

Similar Documents

Publication Publication Date Title
WO2022246852A1 (zh) Automatic driving system test method based on aerial survey data, test system and storage medium
CN110785718B (zh) Vehicle-mounted automatic driving test system and test method
US20160252905A1 (en) Real-time active emergency vehicle detection
US11574462B1 (en) Data augmentation for detour path configuring
WO2021057344A1 (zh) Data presentation method and terminal device
CN110415544B (zh) Disaster weather early-warning method and automotive AR-HUD system
CN112055806A (zh) Augmenting navigation instructions with landmarks under difficult driving conditions
CN113261274A (zh) Image processing method and related terminal apparatus
US11754719B2 (en) Object detection based on three-dimensional distance measurement sensor point cloud data
EP4145409A1 (en) Pipeline architecture for road sign detection and evaluation
CN113126075A (zh) Real-time dynamic localization using an active Doppler sensing system for vehicles
US20240005642A1 (en) Data Augmentation for Vehicle Control
El-Hassan Experimenting with sensors of a low-cost prototype of an autonomous vehicle
US20230196619A1 (en) Validation of virtual camera models
US20210405651A1 (en) Adaptive sensor control
US20240290199A1 (en) Traffic monitoring method based on aerial survey data
US20240001849A1 (en) Data Augmentation for Driver Monitoring
WO2022246851A1 (zh) Test method and system for automatic driving perception system based on aerial survey data, and storage medium
CN111077893B (zh) Navigation method based on multiple vanishing points, electronic device, and storage medium
CN114299715A (zh) Expressway information detection system based on video, lidar and DSRC
CN113188568A (zh) Real-time dynamic calibration of active sensors using angle-resolved Doppler information for vehicles
CN209447330U (zh) Expressway toll gate overspeed warning system based on AR technology
US20230251384A1 (en) Augmentation of sensor data under various weather conditions to train machine-learning systems
US20240010233A1 (en) Camera calibration for underexposed cameras using traffic signal targets
JP7295320B1 (ja) Information processing device, program, system, and information processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21942423

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180087995.8

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21942423

Country of ref document: EP

Kind code of ref document: A1