CN116802581A - Automatic driving perception system testing method, system and storage medium based on aerial survey data - Google Patents

Info

Publication number
CN116802581A
Authority
CN
China
Prior art keywords
test
vehicle
scene data
image
traffic scene
Prior art date
Legal status
Pending
Application number
CN202180087990.5A
Other languages
Chinese (zh)
Inventor
张玉新
俞瑞林
杜昕一
吴显亮
王璐瑶
Current Assignee
Jilin University
SZ DJI Technology Co Ltd
Original Assignee
Jilin University
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jilin University, SZ DJI Technology Co Ltd filed Critical Jilin University
Publication of CN116802581A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions

Abstract

An automatic driving perception system testing method, testing system (100) and storage medium based on aerial survey data. The testing system (100) comprises an unmanned aerial vehicle (10), a testing device (20) and a test vehicle (30). The test vehicle (30) comprises an automatic driving system, the automatic driving system comprises an automatic driving perception system, and the automatic driving perception system is used for perceiving the surrounding environment to generate a perception result. The unmanned aerial vehicle (10) can fly along with the test vehicle (30) and collect traffic scene data during travel of the test vehicle (30). The testing device (20) is communicatively connected with the unmanned aerial vehicle (10) and the test vehicle (30), and is used for acquiring the traffic scene data and determining the accuracy of the perception result according to the traffic scene data.

Description

Automatic driving perception system testing method, system and storage medium based on aerial survey data
Technical Field
The application relates to the technical field of automatic driving, in particular to an automatic driving perception system testing method, an automatic driving perception system testing system and a storage medium based on aerial survey data.
Background
Automatic driving means that a vehicle does not need to be operated by a driver; instead, environmental information is automatically collected through sensors on the vehicle, and the vehicle drives automatically according to that information. In order to avoid traffic accidents caused by failure of an automatic driving system, the automatic driving sensing system in the automatic driving system needs to be performance-tested in a natural state, and can only be put into use after its performance meets the requirements. In the existing test method, a high-precision sensor is installed on the roof of a test vehicle; the driver of the test vehicle and drivers of ordinary vehicles can notice that the test vehicle is special, which affects the naturalness of the measured data. As a result, the existing test method cannot complete the performance test of the automatic driving sensing system in a natural state, and failure of the automatic driving system may still cause traffic accidents.
Disclosure of Invention
Therefore, the embodiment of the application provides a test method, a test system and a storage medium of an automatic driving perception system based on aerial survey data, and particularly provides a test system, a test device, an unmanned aerial vehicle, a test method and a storage medium for testing the automatic driving perception system so as to improve the safety of the automatic driving system.
In a first aspect, an embodiment of the present application provides a test system for testing an autopilot awareness system in an autopilot system, the test system comprising:
a test vehicle comprising an autopilot system including an autopilot sensing system for ambient environmental sensing to generate a sensing result;
the unmanned aerial vehicle can fly along with the test vehicle and collect traffic scene data in the running process of the test vehicle;
the testing device is used for being in communication connection with the unmanned aerial vehicle and the test vehicle, and is used for acquiring the traffic scene data and determining the accuracy of the perception result according to the traffic scene data.
In addition, another test system is provided in an embodiment of the present application, where the test system is also used to test an autopilot sensing system in an autopilot system, and the test system includes:
a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, and the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to obtain a sensing result;
an unmanned aerial vehicle in communication connection with the test vehicle, wherein the unmanned aerial vehicle can fly along with the test vehicle, collect traffic scene data during the running of the test vehicle, and determine the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
In addition, another test system is provided in an embodiment of the present application, where the test system is used to test an autopilot sensing system in an autopilot system, and the test system includes:
a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, and the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to obtain a sensing result;
the unmanned aerial vehicle can fly along with the test vehicle and collect traffic scene data in the running process of the test vehicle,
the test vehicle is in communication connection with the unmanned aerial vehicle, and is used for acquiring the traffic scene data and determining the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
In a second aspect, an embodiment of the present application further provides a testing method for testing an autopilot sensing system in an autopilot system, where the testing method includes:
acquiring traffic scene data of a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to generate a sensing result, and the traffic scene data is acquired by the unmanned aerial vehicle following the test vehicle;
and determining the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
In a third aspect, an embodiment of the present application further provides an unmanned aerial vehicle, the unmanned aerial vehicle including:
a body;
the cradle head is arranged on the machine body;
the shooting device is arranged on the cradle head and is used for shooting images;
the unmanned aerial vehicle further comprises a processor and a memory, wherein the memory is used for storing a computer program, and the processor is used for executing the computer program and realizing any one of the testing methods provided by the embodiment of the application when the computer program is executed.
In a fourth aspect, an embodiment of the present application further provides a testing device, where the testing device includes a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and implement any one of the test methods provided in the embodiments of the present application when the computer program is executed.
In a fifth aspect, an embodiment of the present application further provides a vehicle, including:
a vehicle platform;
the automatic driving system is connected with the vehicle platform and comprises an automatic driving sensing system, and the automatic driving sensing system is used for sensing the surrounding environment of the vehicle to obtain a sensing result;
the automatic driving system comprises a processor and a memory, wherein the memory is used for storing a computer program, and the processor is used for executing the computer program and realizing any testing method provided by the embodiment of the application when the computer program is executed.
In a sixth aspect, embodiments of the present application further provide a computer readable storage medium storing a computer program, which when executed by a processor causes the processor to implement a test method according to any one of the embodiments of the present application.
According to the test system, the test device, the unmanned aerial vehicle, the test method and the storage medium disclosed by the embodiment of the application, the performance test of the automatic driving perception system of the automatic driving system in a natural state can be realized by utilizing the aerial survey data of the unmanned aerial vehicle, so that the safety of the automatic driving system in practical application is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a test vehicle according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a test system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an unmanned aerial vehicle according to an embodiment of the present application;
fig. 4 is a schematic block diagram of a unmanned aerial vehicle provided by an embodiment of the present application;
FIG. 5 is a schematic illustration of another test vehicle according to an embodiment of the present application;
FIG. 6 is a schematic diagram of another test system provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a traffic scenario provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of determining a perception result provided by an embodiment of the present application;
FIG. 9 is a schematic flow chart of an optimization target detection algorithm provided by an embodiment of the present application;
FIG. 10 is a schematic flow chart diagram of a test method for an autopilot awareness system provided by an embodiment of the present application;
FIG. 11 is a schematic flow chart diagram of another test method for an autopilot awareness system provided by an embodiment of the present application;
FIG. 12 is a schematic block diagram of a test apparatus provided by an embodiment of the present application;
fig. 13 is a schematic block diagram of a vehicle provided by an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
With the development of science and technology and the application of artificial intelligence, automatic driving technology has developed rapidly and is widely applied. Based on the driving automation level of a vehicle, the existing SAE J3016 standard classifies driving automation into six levels, L0-L5: No Automation (L0), Driver Assistance (L1), Partial Automation (L2), Conditional Automation (L3), High Automation (L4) and Full Automation (L5). As the level of driving automation increases, the degree of human participation in the driving task decreases.
It is anticipated that more vehicles using automatic driving systems will drive on the road in the future, so that automatically driven vehicles and manually driven vehicles will share the road.
In order to avoid traffic accidents caused by failure of the automatic driving system, performance tests are required to be performed on all systems or functional modules in the automatic driving system in a natural state, such as tests on an automatic driving sensing system in the automatic driving system, and the automatic driving system can be put into use after performance meets requirements.
An autopilot sensing system in an autopilot system includes sensors disposed on an autopilot vehicle for sensing the environment surrounding the vehicle, including, for example, lidar and vision sensors, etc.
At present, the test method for an automatic driving sensing system mainly involves installing a high-precision sensor on a test vehicle. Specifically, as shown in fig. 1, a high-precision sensor (such as a 128-line laser radar) is installed on the roof of the test vehicle to provide ground truth for the automatic driving sensing system, so as to verify the sensing result of the automatic driving sensing system. The high-precision sensor has higher accuracy than the sensors actually used in the automatic driving sensing system.
Because a large number of sensors are arranged on the test vehicle, the driver of the test vehicle and drivers of ordinary vehicles can notice that the test vehicle is special, which affects the naturalness of the collected test data; for example, the collected data may record ordinary drivers deliberately avoiding the test vehicle. Consequently, the test of the automatic driving perception system cannot be completed in a natural state, and the test of the automatic driving perception system is not accurate enough.
Although the routes driven by the test vehicle are sufficiently random, the test vehicle cannot be tested for a long time on a specific target road section, such as a traffic-accident-prone road section, for example a long downhill road section, a suburb junction road section, a sharp-turn road section, an "S"-shaped road section or a roundabout road section. The test vehicle only passes through these road sections briefly and therefore cannot provide sufficient supporting data.
In addition, the high-precision sensor installed on the test vehicle has a limited measurement range and low collection efficiency: it can only collect data on the vehicles immediately around the test vehicle, so the test focuses only on the safety performance of the automatic driving system and does not consider the improvement in traffic efficiency brought by the automatic driving system.
Therefore, the embodiment of the application provides a test method, a test system and a storage medium of an automatic driving perception system based on aerial survey data, and more particularly provides a test system, a test device, an unmanned aerial vehicle, a test method and a storage medium for testing the automatic driving perception system so as to finish performance test of the automatic driving perception system in a natural state, further improve the safety of the automatic driving system and reduce traffic accidents.
Referring to fig. 2, fig. 2 is a schematic diagram of a test system according to an embodiment of the application. As shown in fig. 2, the test system 100 includes a drone 10, a test device 20, and a test vehicle 30, the test device 20 being communicatively coupled to the drone 10 and the test vehicle 30.
The test vehicle 30 includes an autopilot system that includes an autopilot sensing system for sensing the surrounding environment of the test vehicle 30 to generate a sensing result. It should be noted that, the test vehicle 30 in the test system 100 provided in the embodiment of the present application may not be provided with a high-precision sensor.
The drone 10 is capable of following the test vehicle 30 and collecting traffic scenario data during travel of the test vehicle 30. The testing device 20 is used for acquiring traffic scene data acquired by the unmanned aerial vehicle 10, and determining the accuracy of the sensing result according to the traffic scene data. The unmanned aerial vehicle 10 can follow the test vehicle 30 to fly so that a certain target road section can be focused, and the flexibility of data acquisition is improved.
As shown in fig. 3 and 4, the unmanned aerial vehicle 10 includes a body 11, a pan-tilt 12, a camera 13, a power system 14, a control system 15, and the like.
The body 11 may include a fuselage and a foot rest (also referred to as landing gear). The fuselage may include a center frame and one or more arms coupled to the center frame, the one or more arms extending radially from the center frame. The foot rest is connected with the fuselage for supporting the unmanned aerial vehicle 10 when landing.
The cradle head 12 is mounted on the body 11 and is used for mounting the photographing device 13. The pan-tilt 12 may include three motors, that is, the pan-tilt 12 is a three-axis pan-tilt, and under the control of the control system 15 of the unmanned aerial vehicle 10, the shooting angle of the shooting device 13 may be adjusted, where the shooting angle may be understood as an angle of a direction of a lens of the shooting device 13 toward a target to be shot relative to a horizontal direction or a vertical direction.
In some embodiments, the pan-tilt head 12 may further include a controller for controlling the movement of the pan-tilt head 12 by controlling the motor of the pan-tilt head, so as to adjust the shooting angle of the shooting device 13. It should be appreciated that the pan-tilt 12 may be independent of the drone 10 or may be part of the drone 10. It should also be appreciated that the motor may be a direct current motor or an alternating current motor; alternatively, the motor may be a brushless motor or a brushed motor.
The photographing device 13 may be, for example, a device for capturing an image, such as a camera or a video camera, and the photographing device 13 may communicate with the control system 15 and perform photographing under the control of the control system 15. In the embodiment of the present application, the camera 13 is mounted on the body 11 of the unmanned aerial vehicle 10 through the cradle head 12. It is understood that the camera 13 may be directly fixed to the body 11 of the unmanned aerial vehicle 10, so that the cradle head 12 may be omitted.
In some embodiments, the photographing device 13 may be controlled to photograph the test vehicle traveling on the target road section at a depression angle, so as to obtain video data of the test vehicle, which may be used as traffic scene data of the test vehicle. Shooting at a depression angle means that the optical axis of the lens of the photographing device 13 is perpendicular, or substantially perpendicular, to the target road section being photographed; "substantially perpendicular" means, for example, 88 degrees or 92 degrees, and other angle values are possible, which is not limited herein.
In some embodiments, the photographing device 13 may include a monocular camera or a binocular camera for different shooting functions. For example, the monocular camera is used to capture images of the test vehicle traveling on the target road section, while the binocular camera can obtain a depth image of the test vehicle and target objects on the target road section, the depth image containing distance information of the test vehicle and the target objects, the target objects being, for example, other ordinary vehicles or pedestrians; the depth image may also serve as one kind of traffic scene data.
The power system 14 may include one or more electronic speed governors (simply referred to as electric governors), one or more propellers, and one or more motors corresponding to the one or more propellers, wherein the motors are connected between the electronic speed governors and the propellers, the motors and propellers being disposed on a horn of the unmanned aerial vehicle 10. The electronic speed regulator is configured to receive a driving signal generated by the control system 15, and provide a driving current to the motor according to the driving signal, so as to control a rotation speed of the motor and further drive the propeller to rotate, thereby providing power for flight of the unmanned aerial vehicle 10, and the power enables the unmanned aerial vehicle 10 to realize movement in one or more degrees of freedom. In certain embodiments, the drone 10 may rotate about one or more axes of rotation.
For example, the rotation shaft may include a Roll shaft (Roll), a Yaw shaft (Yaw), and a pitch shaft (pitch). It should be appreciated that the motor may be a direct current motor or an alternating current motor. The motor may be a brushless motor or a brushed motor.
The control system 15 may include a controller and a sensing system. The controller is configured to control the flight of the unmanned aerial vehicle 10, for example, the flight of the unmanned aerial vehicle 10 may be controlled according to gesture information measured by the sensing system. It should be appreciated that the controller may control the drone 10 in accordance with preprogrammed instructions or may control the drone 10 in response to one or more control instructions from a control terminal.
The sensing system is used for measuring attitude information of the unmanned aerial vehicle 10, namely position information and state information of the unmanned aerial vehicle 10 in space, such as three-dimensional position, three-dimensional angle, three-dimensional speed, three-dimensional acceleration, three-dimensional angular speed and the like.
The sensing system may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (Inertial Measurement Unit, IMU), a vision sensor, a global navigation satellite system, and a barometer. For example, the global navigation satellite system may be a global positioning system (Global Positioning System, GPS).
The position information of the test vehicle or of a target object in the image can be calculated by combining the position information and state information of the unmanned aerial vehicle 10 in space with the image captured by the unmanned aerial vehicle 10. For example, from the flying height of the unmanned aerial vehicle, the field of view of the captured image, the position information of the unmanned aerial vehicle and the pixel position of the target object in the image, the position information of the target object can be calculated through coordinate transformation and triangular relations.
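As an illustration of this geometric relationship, the following Python sketch maps a pixel in a nadir (straight-down) image to a ground offset and adds it to the drone's horizontal position. It assumes a simple pinhole camera pointing vertically downward with a known horizontal field of view; the function and variable names are illustrative and not taken from the patent.

```python
import math

def pixel_to_ground_offset(pixel_xy, image_size, fov_deg, altitude_m):
    """Rough nadir-view mapping from a pixel to a ground offset (metres).

    Assumes the camera looks straight down (depression angle ~90 degrees)
    and a simple pinhole model; illustrative only.
    """
    w, h = image_size
    px, py = pixel_xy
    # Half-width of the ground footprint covered by the horizontal field of view.
    half_ground_w = altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    metres_per_pixel = (2.0 * half_ground_w) / w  # assume square pixels
    # Offset of the pixel from the image centre, converted to metres on the ground.
    dx = (px - w / 2.0) * metres_per_pixel
    dy = (py - h / 2.0) * metres_per_pixel
    return dx, dy

def target_position(drone_pos_enu, pixel_xy, image_size, fov_deg, altitude_m):
    """Add the ground offset to the drone's horizontal position (ENU, metres)."""
    dx, dy = pixel_to_ground_offset(pixel_xy, image_size, fov_deg, altitude_m)
    east, north, _up = drone_pos_enu
    return east + dx, north - dy  # image y grows downward; assume it maps to -north

# Example: a target seen at pixel (2200, 900) in a 4000x3000 image from 80 m altitude.
print(target_position((500.0, 1200.0, 80.0), (2200, 900), (4000, 3000), 84.0, 80.0))
```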
The controller may include one or more processors and memory. The processor may be, for example, a Micro-controller Unit (MCU), a central processing Unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), etc. The Memory may be a Flash chip, a Read-Only Memory (ROM) disk, an optical disk, a U-disk, a removable hard disk, or the like.
In some embodiments, the unmanned aerial vehicle 10 may further include a radar device mounted on the unmanned aerial vehicle 10, in particular, on the body 11 of the unmanned aerial vehicle 10, for measuring the surrounding environment of the unmanned aerial vehicle 10, such as an obstacle, etc., during the flight of the unmanned aerial vehicle 10, to ensure the safety of the flight.
The radar device is mounted on a foot rest of the unmanned aerial vehicle 10 and is communicatively connected with the control system 15; the radar device transmits the acquired observation data to the control system 15, which processes it. The unmanned aerial vehicle 10 may include two or more foot stands, and the radar device is mounted on one of them. The radar device may also be mounted at another position of the unmanned aerial vehicle 10, which is not specifically limited.
In the embodiment of the application, the radar device can be specifically a laser radar, and can also collect point cloud data of a test vehicle running on a target road, and the point cloud data is used as traffic scene data.
It can be appreciated that the unmanned aerial vehicle is used for collecting traffic scene data of the running process of the test vehicle, and the data form of the traffic scene data can include video data, point cloud data, depth images, position information of the unmanned aerial vehicle, attitude information of the unmanned aerial vehicle and the like.
The unmanned aerial vehicle 10 may include a rotor unmanned aerial vehicle, such as a four-rotor unmanned aerial vehicle, a six-rotor unmanned aerial vehicle, or an eight-rotor unmanned aerial vehicle, or may be a fixed-wing unmanned aerial vehicle, or may be a combination of a rotor wing type and a fixed-wing unmanned aerial vehicle, which is not limited herein.
The test device 20 may be a server or a terminal device, wherein the terminal device may be, for example, a desktop computer, a notebook, a tablet, a smart phone, or the like.
The test vehicle 30 includes an autopilot system that includes an autopilot sensing system that includes at least one of: a vision sensor, radar or inertial sensor, wherein the vision sensor comprises a monocular camera or a binocular camera.
As shown in fig. 5, the autopilot sensing system includes laser radars 301 and vision sensors 302. There may be a plurality of laser radars 301 and vision sensors 302, disposed at different positions of the test vehicle 30 and used for sensing the surrounding environment of the test vehicle to obtain sensing results, where the sensing results include, for example, the distances from the test vehicle 30 to other surrounding vehicles or to roadside facilities.
It will be appreciated that the autopilot sensing system may of course also comprise other types of radar, such as millimeter wave radar or the like, and the vision sensor may comprise a monocular or binocular camera.
In some embodiments, the accuracy of the lidar 301 disposed at different locations on the test vehicle 30 may be different, such as the accuracy of the lidar 301 disposed on the front and rear sides of the test vehicle 30 may be greater than the accuracy of the lidar 301 disposed on the left and right sides of the test vehicle 30.
Referring to fig. 6, fig. 6 is a schematic diagram of another test system for testing an autopilot sensing system according to the present application, wherein the test system 100 includes a drone 10 and a test vehicle 30, and the test vehicle 30 is communicatively connected with the drone 10.
Wherein the test vehicle 30 in fig. 6 also includes an autopilot system including an autopilot sensing system for sensing the surroundings of the test vehicle 30 to generate a sensing result. The unmanned aerial vehicle 10 can fly along with the test vehicle 30 and collect traffic scene data in the running process of the test vehicle 30, wherein the unmanned aerial vehicle 10 is further used for obtaining a sensing result of an automatic driving sensing system of the test vehicle 30 and determining accuracy of the sensing result according to the traffic scene data.
It will be appreciated that in the test system in fig. 6, the unmanned aerial vehicle 10 may also send the collected traffic scene data to the test vehicle 30, or send the detection result of the objects around the test vehicle 30 determined according to the traffic scene data to the test vehicle 30, so that the test vehicle 30 determines the accuracy of the sensing result of the autopilot sensing system according to the traffic scene data or the detection result.
It should be noted that, in the embodiment of the present application, the test of the autopilot sensing system is completed according to the traffic scene data collected by the unmanned aerial vehicle 10, and may be completed by the unmanned aerial vehicle 10, the testing device 20 or the testing vehicle 30. For example, the unmanned aerial vehicle 10 is in communication connection with the test vehicle 30, obtains a sensing result of an automatic driving sensing system of the test vehicle 30, and determines the accuracy of the sensing result according to traffic scene data; for example, the testing device 20 is in communication connection with the unmanned aerial vehicle 10 and the test vehicle, and is configured to obtain traffic scene data and a sensing result of the automatic driving sensing system, and determine accuracy of the sensing result according to the traffic scene data; for another example, the test vehicle 30 is communicatively connected to the unmanned aerial vehicle 10, and is configured to obtain traffic scene data, and determine the accuracy of the perceived result according to the traffic scene data.
It should be further noted that the test of the autopilot sensing system may be performed by the unmanned aerial vehicle 10, the test device 20 or the test vehicle 30, and the processing of the traffic scene data may be performed by the unmanned aerial vehicle 10, the test device 20 or the test vehicle 30, for example, to identify the motion state of the test vehicle and the target information of the targets and the targets around the test vehicle in the traffic scene data, where the target information includes the relative position and motion information of the targets with respect to the test vehicle.
Hereinafter, the case in which the test device 20 determines the accuracy of the sensing result of the automatic driving sensing system of the test vehicle based on the traffic scene data is described as an example.
In a specific test scenario, that is, when the test system 100 is used to test the autopilot sensing system, the unmanned aerial vehicle 10 can be controlled to fly along with the test vehicle 30 and hover above the test vehicle 30, and traffic scenario data during the running process of the test vehicle 30 is collected. The hovering above the test vehicle 30 means that the drone 10 is relatively stationary with respect to the test vehicle 30, or it is understood that the drone 10 and the test vehicle 30 have the same speed and direction of movement.
Specifically, the following function of the unmanned aerial vehicle 10 can be used to hover above the test vehicle 30 while maintaining a certain hover height, so that imaging of the side faces of the vehicles around the test vehicle is minimized, which facilitates subsequent data processing and improves the efficiency and accuracy of the automatic driving perception system test.
In some embodiments, the hover position and/or hover height of the drone 10 above the test vehicle is related to the traffic scenario of the test vehicle. The traffic scene may be a vehicle scene or a facility scene of a test vehicle traveling on a target road.
For example, when the drone 10 recognizes that a vehicle whose height exceeds a preset threshold (for example, a truck) is passing near the test vehicle 30, the hover height may be adjusted so that the truck does not occlude other vehicles, thereby allowing higher-quality traffic scene data to be collected.
In some embodiments, the drone 10 is able to hover over the test vehicle 30 for a particular period of time, collecting traffic scenario data during the travel of the test vehicle 30. Wherein the wind power of the specific time period is smaller than the wind power of other time periods, and/or the light intensity of the specific time period is larger than the light intensity of other time periods. By way of example, the image shake caused by the movement of the unmanned aerial vehicle can be reduced to the greatest extent in a specific time period such as 8 am to 5 pm in sunny days and windless weather, so that the test accuracy of the automatic driving perception system is improved.
In some embodiments, to improve the quality of the collected traffic scene data and further improve the accuracy of the autopilot sensing system test, the unmanned aerial vehicle 10 can adjust its flight attitude and/or the shooting angle of the shooting device 13 while collecting traffic scene data during the running of the test vehicle 30.
For example, the unmanned aerial vehicle 10 can adjust its own flight attitude and/or the shooting angle of the shooting device 13 mounted on it according to the road information of the target road section on which the test vehicle 30 is traveling, and collect traffic scene data during the driving of the test vehicle 30.
The target road section may be any road section where the test vehicle may travel, such as any section of an expressway, an urban road or an urban-rural road. The target road section may also be an accident-prone road section, such as a long downhill road section, a suburb junction road section, a sharp-turn road section, an "S"-shaped road section or a roundabout road section. The target road section may also be a special road section such as a tunnel, a cross-sea bridge or an overpass. Therefore, the test system can measure traffic scene data of any road section, is not limited by terrain, and has a low measurement cost.
The road information of the target link may be the shape of the link, other facilities on the link that affect the travel of the test vehicle, and the like.
In particular, the road information data includes road facility information, which may include, for example, traffic signs, traffic markings, traffic lights, and/or road auxiliary facilities, and/or the like, and/or road type information, such as an urban road or highway, which may include a trunk road, an expressway, a sub-trunk road, and/or a branch road, and the like.
For example, the unmanned aerial vehicle 10 can adjust its flight attitude and/or the shooting angle of the shooting device according to the operating state of the test vehicle 30, such as its speed and steering, while collecting traffic scene data during the driving of the test vehicle 30.
In an embodiment of the present application, the contents of the traffic scene data may include road information data, vehicle information data, environment information data, traffic participant information data, and the like.
The road information data includes road facility information and/or road type information; the road facility information may include traffic signs, traffic markings, traffic lights and/or road auxiliary facilities, and the road type information may include an urban road or a highway, where the urban road may include a trunk road, an expressway, a secondary trunk road and/or a branch road. The vehicle information data includes at least one of: vehicle type information, vehicle position information, vehicle travel speed information, vehicle travel direction information and vehicle size information; the vehicle type is, for example, an M1 passenger car, an N1 vehicle, a trailer, a two-wheel vehicle or a three-wheel vehicle. The environmental information data includes weather information, such as daytime, evening, sunny, rainy, snowy or foggy, and/or road surrounding information, such as buildings, flowers, plants and trees around the road. The traffic participant information data includes pedestrian information, such as the traveling speed, direction and position of pedestrians including children, adults or the elderly, and/or non-motor-vehicle information, such as the traveling speed, direction and position of bicycles, electric two-wheeled vehicles and the like.
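The patent does not define a concrete data format for these categories; the following Python sketch only illustrates one possible container for a single frame of traffic scene data, with all field names chosen for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VehicleInfo:
    vehicle_type: str          # e.g. "M1 passenger car", "trailer", "two-wheel vehicle"
    position_m: tuple          # (x, y) position in a road-aligned frame, metres
    speed_mps: float           # travel speed
    heading_deg: float         # travel direction
    size_m: Optional[tuple] = None  # (length, width)

@dataclass
class TrafficSceneFrame:
    timestamp_s: float
    road_type: str                                              # e.g. "urban trunk road", "expressway"
    road_facilities: List[str] = field(default_factory=list)    # signs, markings, lights, ...
    weather: str = "sunny"
    ego_vehicle: Optional[VehicleInfo] = None                   # the test vehicle
    other_vehicles: List[VehicleInfo] = field(default_factory=list)
    pedestrians: List[VehicleInfo] = field(default_factory=list)  # same fields reused for participants
```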
In addition, the high-precision sensor shown in fig. 1 is mounted on the roof of the test vehicle and its mounting height is limited, so its field of view is small and its blind area is large. In the test system provided by the application, the height, attitude and the like of the unmanned aerial vehicle can be adaptively adjusted based on the required perception range; the resulting maneuverability can eliminate blind areas, so that all targets around the test vehicle can be perceived, including blind-area vehicles around the test vehicle and blind-area vehicles that are partially occluded from the test vehicle, a blind-area vehicle being specifically a vehicle occluded by the surrounding vehicles. Using blind-area vehicle data to test the automatic driving sensing system can also improve the traffic throughput achieved by the automatic driving system.
Specifically, as shown in fig. 7, the Ego vehicle is the test vehicle, vehicles 1 to 8 are vehicles surrounding the test vehicle, and vehicles 9 to 24 are blind-area vehicles.
Although unmanned aerial vehicles have flight stabilization control, several factors still prevent the collected traffic scene data from being used directly for performance verification of an automatic driving system. For example, during data collection the unmanned aerial vehicle may be affected by wind-induced shake, direct sunlight and the like, causing problems such as unclear targets, weak contrast and overexposure in the pixel matrix of the collected video data, which increases the difficulty of detection. Therefore, the collected traffic scene data needs to be preprocessed, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing and setting of a region of interest; the appropriate processing steps can be selected according to the actual processing effect.
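As a rough illustration, the following OpenCV-based sketch chains several of the preprocessing steps mentioned above (color space conversion, region-of-interest masking, binarization, edge detection, morphology). The specific thresholds and the order of the steps are assumptions and would need tuning against the actual aerial footage.

```python
import cv2
import numpy as np

def preprocess_frame(frame_bgr, roi_polygon=None):
    """One possible preprocessing chain for an aerial frame (illustrative only)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)           # colour space conversion
    if roi_polygon is not None:                                   # region of interest
        mask = np.zeros_like(gray)
        cv2.fillPoly(mask, [np.asarray(roi_polygon, dtype=np.int32)], 255)
        gray = cv2.bitwise_and(gray, mask)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarisation
    edges = cv2.Canny(gray, 50, 150)                              # edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)    # remove small noise
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return cleaned, edges
```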
The traffic scene data includes the traveling conditions of the test vehicle 30, such as road information of the road being traveled, vehicle type information, vehicle position information, vehicle travel speed information and vehicle travel direction information, as well as the same information for surrounding vehicles; it also includes road surrounding environment information, such as buildings or vegetation around the road, and traffic participant information data, such as pedestrian information (traveling speed, direction and position of children, adults or the elderly) and/or non-motor-vehicle information (traveling speed, direction and position of bicycles, electric bicycles and the like). Since the perception result of the autopilot perception system of the test vehicle 30 also contains this kind of information, the accuracy of the perception result can be determined using the traffic scene data collected by the drone 10.
In some embodiments, to obtain the sensing result of the autopilot sensing system, the source data required by the autopilot sensing system for sensing is first obtained, for example image data output by a vehicle-mounted camera, point cloud data output by a laser radar, radar data, and positioning data output by a positioning module. The source data is then input to the sensing algorithm module adopted by the autopilot sensing system (also called a functional module of the autopilot sensing system), and the sensing result is obtained by calculation in the sensing algorithm module, as shown in fig. 8. The sensing result specifically refers to the position information of the objects around the test vehicle relative to the test vehicle.
It should be noted that, in the embodiment of the present application, the sensing result includes at least information of a relative position of the object around the test vehicle 30 with respect to the test vehicle 30, such as a relative distance. In some embodiments, the perceived result further includes pose information of the test vehicle itself and/or target information of the target, the target information including a type of the target and/or a speed of movement of the target. Such as travel speed and travel direction.
In some embodiments, determining the accuracy of the sensing result according to the traffic scene data specifically includes: determining, according to the traffic scene data, a detection result for the target objects around the test vehicle, the detection result including at least the position information of the target objects relative to the test vehicle; obtaining the corresponding sensing result of the automatic driving sensing system of the test vehicle; and, using the detection result as ground truth, comparing the sensing result with the detection result to obtain a difference result (i.e., the test result), which is used to determine whether the accuracy of the sensing result meets the design requirements.
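A minimal sketch of this truth-versus-perception comparison is given below; it matches each drone-derived target to the nearest perceived target and counts how many fall within a position tolerance. The nearest-neighbour matching and the 0.5 m tolerance are assumptions, since the patent does not specify the association method or pass criteria.

```python
import math

def position_error(truth, perceived):
    """Euclidean error between truth and perceived relative positions (metres)."""
    return math.hypot(truth[0] - perceived[0], truth[1] - perceived[1])

def evaluate(truth_objects, perceived_objects, max_error_m=0.5):
    """Match each drone-derived truth object to the nearest perceived object and
    report how many are within the tolerance (illustrative criteria)."""
    passed = 0
    for t in truth_objects:
        errors = [position_error(t, p) for p in perceived_objects]
        if errors and min(errors) <= max_error_m:
            passed += 1
    return passed, len(truth_objects)

# Example: drone-derived truth vs perception output, as (x, y) relative to the test vehicle.
truth = [(12.0, 3.5), (-8.2, 0.1)]
perceived = [(11.6, 3.4), (-9.5, 0.3)]
print(evaluate(truth, perceived))   # -> (1, 2): one object within 0.5 m, one outside
```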
In some embodiments, since the sensing algorithm modules of the autopilot sensing system include a target recognition module, a travelable region module, a multi-target tracking module and a self-pose detection module, the sensing result of the autopilot sensing system includes the recognition results of these functional modules. The detection results corresponding to these functional modules can be determined from the traffic scene data collected by the unmanned aerial vehicle, and the detection results can be compared with the corresponding sensing results to complete the test of the automatic driving perception system. For the specific correspondence, reference may be made to Table 1; the scene elements identified from the traffic scene data captured by the unmanned aerial vehicle serve as ground truth for the sensing algorithm modules in the automatic driving perception system.
Table 1 shows the correspondence between the detection result of the unmanned aerial vehicle and the perception algorithm module in the perception system
It should be noted that the sensing algorithm modules shown in Table 1 and the detection results based on the target objects detected by the unmanned aerial vehicle do not constitute a limitation on the sensing algorithm modules of the automatic driving sensing system of the embodiments of the present application; in practical applications, more sensing algorithm modules may be included.
In some embodiments, the objects around the test vehicle may be classified into dynamic traffic elements and static traffic elements according to their motion states; that is, the target objects include dynamic traffic elements and static traffic elements. For a dynamic traffic element, in addition to its position relative to the test vehicle, its motion information, such as movement speed and movement direction, also needs to be identified.
Wherein the dynamic traffic element comprises at least one of: motor vehicles, non-motor vehicles, pedestrians or animals, the static traffic elements comprising at least one of: lane lines, obstacles or road edges on the road.
In some embodiments, the testing device 20 is specifically configured to perform object recognition on the images of the traffic scene data according to the image features of the objects, so as to obtain the objects around the test vehicle, where the image features include one or more of color information, size information and texture information of the objects.
For example, the information of the lane lines, the obstacles, the road edges and other targets can be identified through a target identification algorithm according to the image characteristics of the lane lines, the obstacles and the road edges.
In some embodiments, in particular for a lane line, a line type feature of the lane line on the driving road of the test vehicle may be obtained, the line type feature includes line type information and color information, and the lane line in the traffic scene data is identified according to the line type feature.
Specifically, since the line type, color and the like of lane lines are clearly specified, lane lines have image features that differ obviously from the surrounding road surface, and these differences are reflected in edge features such as gradient and gray level. Therefore, lane lines can be detected and positioned according to the edge features, realizing the recognition and positioning of the lane lines.
In some embodiments, lane lines in the traffic scene data may also be identified from edge features based on edge detection, wherein the edge features include gradient features and/or color features. Specifically, the gradients of the images in the traffic scene data are calculated, and the edges of the lane lines are determined according to the changes of the gradients to obtain the lane lines.
Compared with the road surface, lane lines have more distinguishable edge features such as gradient and gray level, so lane line extraction can be realized through pre-designed features, mainly the edge features of the lane lines relative to the road surface environment; the commonly used edge features are gradient features and color features. For example, the gradient of an image in the traffic scene data is calculated, and if an edge shows an obvious gradient change compared with the road surface, a lane line can be determined; alternatively, the first-order or higher-order derivatives of the image in different directions in the gradient map are calculated, the peaks of these derivatives are searched to locate the edges, and the edge direction is then determined according to the direction of the gradient, so as to determine the lane line.
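The following sketch shows one way such gradient-based lane line extraction can be done with standard tools (Canny edge detection followed by a probabilistic Hough transform); the thresholds are placeholder values, not parameters taken from the patent.

```python
import cv2
import numpy as np

def detect_lane_line_segments(frame_bgr):
    """Edge-based lane line candidates: gradient magnitude via Canny, then a
    probabilistic Hough transform to fit line segments (thresholds are placeholders)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress pavement texture
    edges = cv2.Canny(blurred, 60, 180)                # lane edges show strong gradients
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=30, maxLineGap=10)
    return [] if segments is None else segments.reshape(-1, 4)  # rows of (x1, y1, x2, y2)
```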
In some embodiments, a pre-trained image recognition model for recognizing objects around the test vehicle may also be pre-stored in the test device 20, so that the test device 20 may also input an image of the traffic scene data into the pre-trained image recognition model, resulting in objects included in the traffic scene data.
Different objects correspond to different image recognition models, for example, the lane line corresponding image recognition model is used for recognizing lane lines, and other objects are similar.
The image recognition model for recognizing lane lines can be obtained by training a convolutional neural network (Convolutional Neural Networks, CNN). The CNN is an important component of deep learning and an important tool in the field of image recognition, and is widely used in image recognition and target detection. A lane line database is established, a CNN model is trained with the labeled data, and the learned parameters are adjusted automatically during training, so that lane lines can be recognized quickly and accurately.
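The following is a deliberately small CNN sketch in PyTorch that classifies fixed-size image patches as lane line or background. A model trained on a dedicated lane line database would normally be larger (for example a segmentation network), so this should be read purely as a structural illustration, not as the patented model.

```python
import torch
import torch.nn as nn

class LanePatchNet(nn.Module):
    """Tiny CNN that classifies a 32x32 image patch as lane-line / not lane-line."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = LanePatchNet()
logits = model(torch.randn(4, 3, 32, 32))   # a batch of four patches
print(logits.shape)                          # torch.Size([4, 2])
```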
In some embodiments, if the unmanned aerial vehicle includes a binocular camera, the binocular camera is used to capture a left-eye image and a right-eye image of the test vehicle while it is traveling. The testing device can obtain a depth image of the test vehicle while it travels from the left-eye image and the right-eye image, and then segment the left-eye image or the right-eye image according to differences in the depth information in the depth image so as to determine obstacles.
Specifically, for obstacle recognition, obstacle detection may be performed based on binocular stereo vision. The basic principle of binocular stereo vision is to image the target scene from two different angles with the left and right cameras to obtain an image pair (the left-eye image and the right-eye image), match the corresponding image points in the left-eye image and the right-eye image with a suitable algorithm, calculate the parallax between the image pair to obtain the depth information, and segment either image of the pair using the differences in depth information to recognize obstacles.
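A compact sketch of this disparity-to-depth computation, assuming a rectified image pair and known calibration (focal length in pixels and baseline in metres), is shown below; it uses OpenCV's block-matching stereo as the point-matching algorithm, which is only one possible choice and not necessarily the one used in the patent.

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    """Depth from a rectified left/right pair: match points to get disparity,
    then depth = focal_length * baseline / disparity."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth   # metres per pixel; obstacles can be segmented at depth discontinuities
```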
In some embodiments, the object may be identified by using an image processing manner, for example, morphological processing, specifically, for example, a region corresponding to the object in an image of the traffic scene data may be connected, so as to obtain a connected region, and the object is determined according to the connected region.
In some embodiments, an area corresponding to a target object in an image of traffic scene data may be further connected, a connected area is obtained, and size information and a centroid position of the connected area are determined. The size information may be used to determine the type of object and the centroid position may be used to calculate the speed of the object.
The region corresponding to the target object in the image of the traffic scene data is a region in the foreground image of the image, and the foreground image is subjected to morphological processing. The foreground image of the image is obtained by performing foreground extraction on the image.
Specifically, a foreground image of the whole image in the traffic scene data is extracted by subtracting consecutive frames, the region corresponding to the target object is determined in the foreground image, and that region is connected to obtain a connected region.
In order to identify the target object more accurately, a region of interest (Region Of Interest, ROI) may also be extracted from the foreground image; the region of interest is the part of the image that object recognition focuses on and can be extracted with various operators and functions (such as those in OpenCV). After the region of interest is determined, the image in the region of interest can be binarized, and interference removed and defects repaired by morphology, to obtain a complete foreground image; the mutually connected regions in the processed foreground image are then merged by a region-labeling method to obtain connected regions, and the length, width and centroid position of each connected region are calculated for the subsequent calculation of the speed of the target object.
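The region-labeling step can be illustrated as follows: given the binarized, morphologically cleaned foreground, connected components are extracted and each surviving region's width, length and centroid are reported. The minimum-area threshold is an assumed noise filter, not a value from the patent.

```python
import cv2
import numpy as np

def labelled_regions(foreground_binary, min_area_px=200):
    """Label connected foreground blobs and return their size and centroid."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(foreground_binary)
    regions = []
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area_px:
            continue                           # drop small noise regions
        regions.append({
            "width_px": int(stats[i, cv2.CC_STAT_WIDTH]),
            "length_px": int(stats[i, cv2.CC_STAT_HEIGHT]),
            "centroid_px": (float(centroids[i][0]), float(centroids[i][1])),
        })
    return regions
```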
In some embodiments, the type of the target object may be determined according to the size of the connected region, where the target object includes a vehicle or an interfering object, and the type of the target object includes at least a first type, a second type and a third type, the vehicles corresponding to the first, second and third types differing in size.
Specifically, for example, the first type is a small vehicle, the second type is a medium vehicle and the third type is a large vehicle; small vehicles include motorcycles, bicycles and the like, medium vehicles include cars, business vans, small trucks and the like, and large vehicles include trucks, buses, coaches, large trucks and the like.
In some embodiments, the type of the target object may also be determined based on the width of the connected region and the width of the road in the image of the traffic scene data.
For example, if the width of the connected region is greater than one quarter of the road width, the aspect ratio of the connected region needs to be examined to determine whether the region is interference; for example, if the width of the connected region is greater than one quarter of the road width and its length-to-width ratio is between 1.5 and 3, the object corresponding to the connected region is determined to be a vehicle.
Further, the type of the vehicle may be determined; for example, if the width of the connected region is greater than half the road width and its length is greater than 3.5 times its width, the vehicle is determined to be a large vehicle, while other cases are treated as interference and eliminated. Using the width of the connected region together with the road width, most residual shadows produced by false detections of the unmanned aerial vehicle or by the swaying of branches and leaves of roadside trees can be removed, thereby improving the accuracy of the automatic driving perception system test.
The road width referred to above is an average road width obtained by the horizontal projection process.
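Putting the example thresholds from the preceding paragraphs into code gives a classification rule like the following; the specific ratios (one quarter and one half of the road width, length-to-width ratios of 1.5-3 and 3.5) are the example values quoted above and would be calibrated in practice.

```python
def classify_region(width_px, length_px, road_width_px):
    """Classify a connected region using the example rules from the text:
    wider than a quarter of the road with a length/width ratio of ~1.5-3 -> vehicle;
    wider than half the road and longer than 3.5x its width -> large vehicle;
    everything else -> interference (e.g. shadows, swaying roadside trees)."""
    if width_px <= road_width_px / 4:
        return "interference"
    ratio = length_px / max(width_px, 1)
    if width_px > road_width_px / 2 and ratio > 3.5:
        return "large vehicle"
    if 1.5 <= ratio <= 3.0:
        return "vehicle"
    return "interference"

print(classify_region(width_px=40, length_px=90, road_width_px=140))   # vehicle
print(classify_region(width_px=80, length_px=300, road_width_px=140))  # large vehicle
```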
By the above methods, the target objects in the traffic scene, namely the dynamic traffic elements and the static traffic elements, can be identified, and the positions of the target objects relative to the test vehicle can be calculated from the image information. For dynamic traffic elements, however, the motion information, such as the movement speed and movement direction, still needs to be determined; other motion information, such as deceleration or acceleration, may be determined in a similar manner.
The movement direction may be determined according to a movement track of the dynamic traffic element in the multi-frame image, and of course, other manners may be adopted, which are not limited herein, for example, determining an orientation of the dynamic traffic element relative to the lane line to determine the movement direction.
In some embodiments, the pixel positions of a reference point in at least two frames of the traffic scene data can be determined, and the relative distance information of that reference point with respect to another reference point, i.e. the distance between the two reference points in the real scene, can be obtained; the at least two frames are converted from two-dimensional coordinates to three-dimensional coordinates, the actual displacement of the target object is determined from the pixel positions and the relative distance information, the time corresponding to the displacement is determined from the frame rate of the at least two frames, and the movement speed of the target object is determined from the displacement and this time.
The reference point may be a fixed point in a traffic scene, such as a light pole or a lane line, and the like, the distance between two selected reference points in the real scene is known, the pixel position of the reference point in at least two frames of images in the traffic scene data is determined, the change position of the target object (such as a vehicle) in at least two frames of images can be determined according to the pixel position, because the vehicle moves relative to the reference point when the images are shot, the position of the target object relative to the reference point in different frames of images is changed, the relative distance information of the reference point input by a user relative to another reference point is acquired, and the at least two frames of images are converted from two-dimensional coordinates to three-dimensional coordinates, the actual displacement corresponding to the change position of the target object in the images can be determined, the actual displacement is also the displacement in the real scene, and finally the change time corresponding to the actual displacement is determined according to the frame rate corresponding to the at least two frames of images, so that the movement speed can be determined according to the actual displacement and the change time, such as dividing the actual displacement by the change time to obtain the movement speed.
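A rough sketch of this reference-point based speed estimate is given below (Python with NumPy); the argument names, the single linear metres-per-pixel scale, and the use of the reference point to cancel drone drift are assumptions — a full implementation would use a calibrated image-to-ground transform for the two-dimensional to three-dimensional conversion:

```python
import numpy as np

def speed_from_reference_points(ref_a_frame1, ref_a_frame2, ref_b_frame1,
                                ref_distance_m, target_frame1, target_frame2,
                                frame_gap, fps):
    """Estimate the movement speed of a target object from two frames.

    ref_a_frame1, ref_a_frame2: pixel position of reference point A in the two frames.
    ref_b_frame1:               pixel position of reference point B in the first frame.
    ref_distance_m:             known real-world distance between A and B, in metres.
    target_frame1/2:            pixel positions of the target object in the two frames.
    frame_gap, fps:             frames between the two images and the video frame rate.
    """
    # metres represented by one pixel, derived from the two reference points
    metres_per_pixel = ref_distance_m / np.linalg.norm(
        np.asarray(ref_b_frame1, float) - np.asarray(ref_a_frame1, float))
    # measure the target relative to reference point A to cancel drone drift
    offset1 = np.asarray(target_frame1, float) - np.asarray(ref_a_frame1, float)
    offset2 = np.asarray(target_frame2, float) - np.asarray(ref_a_frame2, float)
    actual_displacement_m = np.linalg.norm(offset2 - offset1) * metres_per_pixel
    change_time_s = frame_gap / fps          # change time from the frame rate
    return actual_displacement_m / change_time_s
```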
In some embodiments, the actual running distance of the target object in two adjacent images of the traffic scene data can be determined according to the road width of the running road of the test vehicle, the running time of the target object is determined according to the time difference of the two adjacent images, and the movement speed of the target object is determined according to the actual running distance and the running time. The movement speed of the target object can be quickly and accurately determined by utilizing the road width.
Determining the actual driving distance of the target object in two images of the traffic scene data according to the road width of the road on which the test vehicle travels may specifically include: determining the centroid moving distance of the target object according to the centroid positions of the target object in the two adjacent images; determining the distance corresponding to each pixel in the image according to the number of pixels occupied by the road in the image and the actual width of the road; and determining the actual driving distance of the target object according to the centroid moving distance and the distance corresponding to each pixel. The actual driving distance of the target object is obtained by multiplying the number of pixels spanned by the centroid moving distance by the distance corresponding to each pixel.
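As an illustrative sketch only (Python with NumPy; the argument names are assumptions), the road-width based speed estimate described above reduces to:

```python
import numpy as np

def speed_from_road_width(centroid_frame1, centroid_frame2, road_pixels,
                          road_width_m, frame_gap, fps):
    """Movement speed from the centroid shift between two adjacent frames.

    road_pixels:  number of pixels spanned by the road in the image.
    road_width_m: actual road width in metres."""
    metres_per_pixel = road_width_m / road_pixels              # distance per pixel
    centroid_shift_px = np.linalg.norm(
        np.asarray(centroid_frame2, float) - np.asarray(centroid_frame1, float))
    actual_distance_m = centroid_shift_px * metres_per_pixel   # actual driving distance
    running_time_s = frame_gap / fps                           # time difference of the two frames
    return actual_distance_m / running_time_s
```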
In some embodiments, to eliminate the error introduced by drone jitter into the number of pixels corresponding to the road width, correction processing may further be performed on the number of pixels corresponding to the road in the image, where the correction processing includes binarization processing and/or horizontal direction projection processing, both of which operate on the image information corresponding to the road in the image.
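A possible way to obtain a jitter-corrected pixel count for the road width, combining binarization with a horizontal projection and averaging the per-row widths, is sketched below (Python with OpenCV; the Otsu threshold, the region-of-interest argument, and the averaging step are assumptions):

```python
import cv2
import numpy as np

def road_pixel_width(gray_frame, road_roi):
    """Estimate the number of pixels corresponding to the road width.

    gray_frame: 8-bit grayscale aerial frame.
    road_roi:   (y0, y1, x0, x1) region assumed to contain only road surface."""
    y0, y1, x0, x1 = road_roi
    roi = gray_frame[y0:y1, x0:x1]
    # binarization of the road region (Otsu threshold as one possible choice)
    _, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # horizontal projection: count road pixels in every row, then average
    row_widths = (binary > 0).sum(axis=1)
    return float(np.mean(row_widths[row_widths > 0]))
```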
In some embodiments, when the sensing result is tested, a functional module in the automatic driving sensing system that does not meet the requirement, for example a functional module that does not reach the performance standard, may be determined according to the test result. Specifically, if a performance value in the sensing result is lower than the performance standard, or the difference between the performance value and the performance standard exceeds a preset range, the requirement is regarded as not met. After the functional module in the automatic driving sensing system that does not meet the requirement is determined, the functional module is optimized.
In an embodiment of the application, the functional module comprises one or more of a target recognition module, a travelable region recognition module, a multi-target tracking module and a self-pose detection module, the optimizing comprising at least one of: optimizing the perception algorithm of the functional module and selecting a sensor with higher precision.
Specifically, the functional module whose perception effect is unsatisfactory is found in the automatic driving perception system according to the test result, and that module is optimized: at the software level the perception algorithm can be optimized in a targeted manner, and at the hardware level a sensor with better cost-effectiveness and higher accuracy can be selected.
For example, if the performance of the target recognition module is found to be unsatisfactory, the target detection algorithm in the target recognition module may be optimized, for example by replacing the Gaussian mixture model with another target detection algorithm, or by improving the Gaussian mixture model itself.
Because the Gaussian mixture model method is an adaptive background modeling method, in the case of a fixed camera the Gaussian mixture model gradually reaches a stable state after being built for a period of time. Since the background image then changes slowly over time, the Gaussian mixture model also needs to be updated continuously. Compared with non-adaptive algorithms, the Gaussian mixture model requires no manual intervention during initialization, accumulates little background calculation error, and adapts well to background changes. However, due to characteristics such as a low convergence rate, the Gaussian mixture model method has some drawbacks in coping with factors such as noise and abrupt illumination changes.
In view of this, the embodiment of the application further provides a Gaussian mixture model built on the edge detection image, uses an improved neighborhood-difference-based method to remove noise, and then applies the combination of the two as the target detection algorithm in the target recognition module, so that the target detection effect can be improved.
As shown in fig. 9, an ordinary Gaussian mixture model is built from the image sequence of the original video frames in the traffic scene data; meanwhile, the edges of the video frames are calculated with an edge operator, and an edge Gaussian mixture model is built from the edge images. With the edge Gaussian mixture model, on the one hand, part of the noise can be reduced by using the edge images and the robustness against abrupt illumination changes is improved; on the other hand, the detection result of the ordinary Gaussian mixture model is denoised with the improved neighborhood difference method.
In addition, the foreground image of the ordinary Gaussian mixture model processed by the neighborhood difference method is dilated by a morphological method and then intersected with the foreground image of the edge Gaussian mixture model, so that the denoising effects of the two methods are combined and a denoised edge image is obtained. The resulting edge image is dilated again and then intersected with the foreground image of the ordinary Gaussian mixture model, so that target regions that were mistakenly removed as background during the improved neighborhood-difference denoising can be recovered.
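The pipeline of fig. 9 can be approximated as follows (Python with OpenCV). This is only a sketch: OpenCV's MOG2 subtractor stands in for the ordinary and edge Gaussian mixture models, the Canny operator stands in for the edge operator, and a median filter stands in for the improved neighborhood-difference denoising, so every parameter shown is an assumption:

```python
import cv2

# One background model on the raw frames and one on their edge images.
mog_raw = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
mog_edge = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

def detect_targets(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)               # edge image of the video frame
    fg_raw = mog_raw.apply(gray)                   # foreground of the ordinary GMM
    fg_edge = mog_edge.apply(edges)                # foreground of the edge GMM
    fg_raw_denoised = cv2.medianBlur(fg_raw, 5)    # stand-in for neighborhood-difference denoising
    fg_raw_dilated = cv2.dilate(fg_raw_denoised, kernel)      # morphological dilation
    fused = cv2.bitwise_and(fg_raw_dilated, fg_edge)          # intersect the two foregrounds
    # dilate again and intersect with the ordinary foreground to recover
    # targets that the denoising step removed as background by mistake
    recovered = cv2.bitwise_and(cv2.dilate(fused, kernel), fg_raw)
    return recovered
```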
Because the unmanned aerial vehicle cannot be observed by the driver of the test vehicle or by the drivers of ordinary vehicles, the collected data are collected in a natural state, so that the performance test of the automatic driving perception system in the automatic driving system in a natural state is realized, and the safety of the automatic driving system in practical application is improved. Moreover, because the unmanned aerial vehicle shoots downward from directly above the lane, a large vehicle cannot occlude a small vehicle in the images, and the data error rate is low, so that the test accuracy of the automatic driving perception system can be improved. Because the test system only needs one test vehicle and one unmanned aerial vehicle, tests on different road sections at different times can be completed, and the coverage of the test can be improved while the cost is reduced.
Referring to fig. 10, fig. 10 shows a schematic flow of a test method of an autopilot sensing system according to an embodiment of the present application. The test method of the automatic driving perception system can be applied to the unmanned aerial vehicle, the test device or the test vehicle provided with the test system in the embodiment so as to finish the performance test of the automatic driving perception system in a natural state.
As shown in fig. 10, the test method of the automatic driving perception system includes steps S101 and S102.
S101, acquiring traffic scene data of a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to generate a sensing result, and the traffic scene data is acquired by the unmanned aerial vehicle following the test vehicle;
S102, determining the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
In some embodiments, the drone is able to follow the test vehicle and hover over the test vehicle. In this way, the side surfaces of the vehicles around the test vehicle appear as little as possible in the collected images, which facilitates subsequent data processing and improves the testing efficiency and accuracy of the automatic driving perception system.
In some embodiments, the hover position and/or hover height of the drone above the test vehicle is related to the traffic scenario of the test vehicle. The traffic scenario may be the vehicle scenario or the road-facility scenario of the target road on which the test vehicle travels, so that higher quality traffic scene data can be collected.
In some embodiments, the drone is able to hover over the test vehicle during a specific period of time and collect traffic scene data during the running of the test vehicle. The wind during the specific period is weaker than during other periods, and/or the light intensity during the specific period is higher than during other periods, thereby improving the test accuracy of the autopilot sensing system.
In some embodiments, to improve the quality of the collected traffic scene data and thereby further improve the accuracy of the autopilot sensing system test, the unmanned aerial vehicle can adjust its flight attitude and/or the shooting angle of its onboard shooting device while collecting traffic scene data during the running of the test vehicle.
For example, the unmanned aerial vehicle can adjust its flight attitude and/or the shooting angle of its shooting device according to the road information of the target road section on which the test vehicle travels, and collect traffic scene data during the running of the test vehicle.
For another example, the unmanned aerial vehicle can adjust its flight attitude and/or the shooting angle of its shooting device according to the running condition of the test vehicle, and collect traffic scene data during the running of the test vehicle.
In an embodiment of the present application, the sensing result at least includes relative position information of an object around the test vehicle with respect to the test vehicle.
In some embodiments, the perceived result further includes pose information of the test vehicle and/or target information of the target, the target information including a type of the target and/or a speed of movement of the target.
In some embodiments, the objects surrounding the test vehicle include surrounding vehicles, which are vehicles adjacent to the test vehicle, and blind vehicles, which are vehicles that are obscured by the surrounding vehicles. Thereby eliminating the influence of the dead zone.
In an embodiment of the application, the autopilot awareness system includes at least one of: a vision sensor, radar or inertial sensor, wherein the vision sensor comprises a monocular camera or a binocular camera.
Although the unmanned aerial vehicle has a flight stabilization function, several factors may still prevent the collected traffic scene data from being used directly for performance verification of the automatic driving system. For example, during data collection the unmanned aerial vehicle may be affected by factors such as shake and sunlight irradiation, so that the collected video data suffers from unclear targets, weak contrast, over-exposed pixel regions, and other problems, which increases the detection difficulty. Therefore, in some embodiments, the collected traffic scene data further needs to be preprocessed, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and setting of a region of interest; a suitable processing manner may be selected according to the actual processing effect.
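One possible preprocessing chain covering the steps listed above is sketched below (Python with OpenCV); the order of the steps, the region-of-interest handling, and every threshold are assumptions to be tuned against the actual footage:

```python
import cv2

def preprocess(frame_bgr, roi=None):
    """Return grayscale, binarized, and edge versions of an aerial frame.

    roi: optional (y0, y1, x0, x1) region of interest."""
    if roi is not None:                                        # set region of interest
        y0, y1, x0, x1 = roi
        frame_bgr = frame_bgr[y0:y1, x0:x1]
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)         # color space conversion
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)                # suppress sensor noise
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # image binarization
    edges = cv2.Canny(blurred, 50, 150)                        # edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # morphological processing
    return gray, cleaned, edges
```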
In some embodiments, the accuracy of the sensing result is determined according to the traffic scene data as follows: a detection result of the target objects around the test vehicle is determined according to the traffic scene data, where the detection result at least includes position information of the target objects relative to the test vehicle; the sensing result corresponding to the automatic driving sensing system of the test vehicle is then obtained; and the detection result is taken as the ground truth and compared with the sensing result, and the resulting difference (i.e., the test result) is used to determine whether the accuracy of the sensing result meets the design requirement.
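A minimal sketch of this comparison is shown below (Python with NumPy); matching objects by an identifier and the 0.5 m position-error threshold are assumptions, not values from the text:

```python
import numpy as np

def evaluate_position_accuracy(detections, perceptions, max_error_m=0.5):
    """Compare aerial-survey detections (taken as ground truth) with the vehicle's
    perception result for one timestamp.

    detections, perceptions: {object_id: (x, y)} relative positions in metres."""
    errors = {}
    for obj_id, truth_xy in detections.items():
        if obj_id in perceptions:
            errors[obj_id] = float(np.linalg.norm(
                np.asarray(truth_xy, float) - np.asarray(perceptions[obj_id], float)))
    passed = bool(errors) and all(err <= max_error_m for err in errors.values())
    return errors, passed
```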
In some embodiments, the targets surrounding the test vehicle include dynamic traffic elements and static traffic elements, wherein the dynamic traffic elements include at least one of: a motor vehicle, a non-motor vehicle, a pedestrian or an animal, the static traffic element comprising at least one of: lane lines, obstacles or road edges on the road.
In some embodiments, the image of the traffic scene data may be subjected to target recognition according to the image features of the target object, so as to obtain the target object around the test vehicle; wherein the image features include one or more of color information, size information, and texture information of the object.
In some embodiments, the region corresponding to the target object in the image of the traffic scene data may be further connected to obtain a connected region; and determining size information and centroid positions of the connected regions. The area corresponding to the target object in the image of the traffic scene data is an area in a foreground image of the image, and the foreground image is subjected to morphological processing.
In some embodiments, the type of the target object may be determined according to the size of the connected region, where the target object includes a vehicle or an interfering object, the type of the target object at least includes a first type, a second type, and a third type, and the vehicle sizes corresponding to the first type, the second type, and the third type are different.
In some embodiments, the type of object may be determined based on the width of the connected region and the width of the road in the image of the traffic scene data.
In some embodiments, to determine the speed of movement of the object, the pixel locations of the reference point in the traffic scene data for at least two frames of images may be determined; acquiring relative distance information of a reference point relative to another reference point, wherein the relative distance information is the distance of the reference point relative to the other reference point in a real scene; converting at least two frames of images from two-dimensional coordinates to three-dimensional coordinates, and determining the actual displacement of a target object in the images according to the pixel positions and the relative distance information; and determining the change time corresponding to the actual displacement according to the frame rate corresponding to at least two frames of images, and determining the movement speed of the target object according to the change time and the actual displacement.
In other embodiments, in order to determine the movement speed of the target object, the actual driving distance of the target object in two adjacent images of the traffic scene data may be determined according to the road width of the driving road of the test vehicle, the driving time of the target object may be determined according to the time difference between the two adjacent images, and then the movement speed of the target object may be determined according to the actual driving distance and the driving time.
Determining the actual driving distance of the target object in the two images of the traffic scene data according to the road width of the driving road of the test vehicle, specifically determining the centroid moving distance of the target object according to the centroid position of the target object in the two adjacent images, wherein the centroid moving distance comprises a plurality of pixels; determining the distance corresponding to each pixel in the image according to the number of pixels corresponding to the road in the image and the actual width of the road; and determining the actual driving distance of the target object according to the centroid moving distance and the distance corresponding to each pixel.
In some embodiments, to eliminate the error introduced by drone jitter into the number of pixels corresponding to the road width, correction processing may further be performed on the number of pixels corresponding to the road in the image, where the correction processing includes binarization processing and/or horizontal direction projection processing, both of which operate on the image information corresponding to the road in the image.
In some embodiments, for a lane line as the target, linear characteristics of the lane line on the test vehicle driving road may be obtained, the linear characteristics including linear information and color information; and identifying the lane lines in the traffic scene data according to the linear characteristics.
In some embodiments, for a lane line that is the target, the lane line in the traffic scene data may also be identified from edge features based on edge detection, wherein the edge features include gradient features and/or color features.
Specifically, by calculating the gradient of the image in the traffic scene data, the edge of the lane line is determined according to the change of the gradient, and the lane line is obtained.
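For illustration only, the gradient-based lane-line extraction might be sketched as follows (Python with OpenCV); the Sobel operator, the gradient threshold, and the Hough-transform line fitting are assumptions rather than steps stated above:

```python
import cv2
import numpy as np

def lane_line_segments(gray_frame):
    """Candidate lane-line segments from gradient changes in an aerial frame."""
    grad_x = cv2.Sobel(gray_frame, cv2.CV_32F, 1, 0, ksize=3)
    grad_y = cv2.Sobel(gray_frame, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(grad_x, grad_y)                 # image gradient
    edges = (magnitude > 80).astype(np.uint8) * 255           # keep strong gradient changes
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                           minLineLength=40, maxLineGap=10)   # fitted segments, or None
```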
In some embodiments, the objects around the test vehicle may be identified by a pre-trained image recognition model, whereby an image of the traffic scene data may be input to the pre-trained image recognition model, resulting in the objects included in the traffic scene data.
In some embodiments, for an obstacle, if the unmanned aerial vehicle includes a binocular camera, the binocular camera captures a left-eye image and a right-eye image while the test vehicle is running, and a depth image of the scene during the running of the test vehicle can be obtained from the left-eye image and the right-eye image; the left-eye image or the right-eye image is then segmented according to differences in the depth information of the depth image to determine the obstacle.
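A rough stereo sketch of this step is given below (Python with OpenCV); the SGBM matcher settings, the focal-length and baseline arguments, and the fixed depth threshold used to separate obstacle candidates from the background are all assumptions:

```python
import cv2
import numpy as np

def obstacles_from_stereo(left_gray, right_gray, focal_px, baseline_m,
                          max_obstacle_depth_m=30.0):
    """Obstacle mask from a rectified left/right image pair.

    focal_px:   camera focal length in pixels.
    baseline_m: distance between the two cameras in metres."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # depth image in metres
    # segment the left image: pixels much closer than the background are obstacle candidates
    obstacle_mask = ((depth < max_obstacle_depth_m) & valid).astype(np.uint8) * 255
    count, _labels = cv2.connectedComponents(obstacle_mask)
    return obstacle_mask, count - 1    # mask plus number of obstacle regions found
```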
In some embodiments, when the sensing result is tested, a functional module in the automatic driving sensing system that does not meet the requirement, for example a functional module that does not reach the performance standard, may be determined according to the test result. Specifically, if a performance value in the sensing result is lower than the performance standard, or the difference relative to the performance standard exceeds a preset range, the requirement is regarded as not met. After the functional module in the automatic driving sensing system that does not meet the requirement is determined, the functional module is optimized. The functional module includes one or more of a target recognition module, a travelable region recognition module, a multi-target tracking module, and a self-pose detection module, and the optimization includes at least one of the following: optimizing the perception algorithm of the functional module and selecting a sensor with higher precision.
Referring to fig. 11, fig. 11 shows a schematic flow of another test method of an autopilot sensing system according to an embodiment of the present application. The test method of the automatic driving perception system can be applied to the unmanned aerial vehicle, the test device or the test vehicle provided with the test system in the embodiment so as to finish the performance test of the automatic driving perception system in a natural state.
As shown in fig. 11, the test method of the automatic driving perception system includes steps S201 to S207.
S201, collecting traffic scene data.
The traffic scene data are acquired by the unmanned aerial vehicle following the test vehicle, the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, and the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to generate a sensing result.
S202, preprocessing traffic scene data.
Wherein the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and setting a region of interest.
S203, dynamic traffic element identification;
S204, static traffic element identification.
Specifically, the preprocessed traffic scene data are identified to obtain the dynamic traffic elements and the static traffic elements; for the specific identification manner, refer to the foregoing embodiments.
S205, verifying whether key indexes of the automatic driving perception system are qualified or not.
For example, the key indexes of each perception algorithm module in table 1 are verified. When the key indexes of the automatic driving perception system are qualified, step S207 is executed to determine that the automatic driving perception system is qualified; when a key index of the automatic driving perception system is not qualified, step S206 is executed.
S206, optimizing the functional module.
The hardware corresponding to the functional module can be optimized, for example, a sensor with higher precision can be selected, and a software algorithm can be optimized.
S207, determining that the automatic driving perception system is qualified.
According to the testing method of the embodiment, the unmanned aerial vehicle cannot be observed by the driver of the tested vehicle and the driver of the common vehicle, so that the collected data are collected in a natural state, the performance test of the automatic driving sensing system in the automatic driving system in the natural state is realized, and the safety of the automatic driving system in practical application is improved.
The embodiment of the present application further provides an unmanned aerial vehicle, specifically as shown in fig. 3, the unmanned aerial vehicle 10 includes: the camera comprises a machine body 11, a cradle head 12 and a shooting device 13, wherein the cradle head 12 is arranged on the machine body 11, the shooting device 13 is arranged on the cradle head 12, and the shooting device 13 is used for shooting images.
The unmanned aerial vehicle 10 further comprises a processor and a memory, wherein the memory is used for storing a computer program, and the processor is used for executing the computer program and realizing the testing method of the automatic driving perception system provided by any one of the embodiments of the application when the computer program is executed.
Referring to fig. 12, fig. 12 is a schematic block diagram of a testing apparatus according to an embodiment of the application. As shown in fig. 12, the test apparatus 200 includes one or more processors 201 and a memory 202.
The processor 201 may be, for example, a Micro-controller Unit (MCU), a central processing Unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP).
The Memory 202 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
Wherein the memory 202 is used for storing a computer program; the processor 201 is configured to execute the computer program and execute any one of the test methods for the autopilot sensing system provided by the embodiments of the present application when the computer program is executed, so as to implement a performance test for the autopilot sensing system in a natural state, thereby improving the safety of the autopilot system in practical applications.
The processor 201 is for example configured to execute the computer program and to implement the following operations when executing the computer program:
acquiring traffic scene data of a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to generate a sensing result, and the traffic scene data is acquired by the unmanned aerial vehicle following the test vehicle; and determining the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
Referring to fig. 13, fig. 13 is a schematic diagram of a vehicle according to an embodiment of the application. As shown in fig. 13, the vehicle 400 includes an autopilot system 40 and a vehicle platform 41, the autopilot system 40 and the vehicle platform 41 being connected, the autopilot system including an autopilot sensing system, the vehicle platform 41 including various devices, components, etc. of the vehicle body.
The autopilot system 40 further includes one or more processors 401 and a memory 402.
The processor 401 may be, for example, a Micro-controller Unit (MCU), a central processing Unit (Central Processing Unit, CPU), a digital signal processor (Digital Signal Processor, DSP), or the like.
The Memory 402 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
Wherein the memory 402 is used for storing a computer program; the processor 401 is configured to execute the computer program and execute any one of the test methods for the autopilot sensing system provided by the embodiments of the present application when the computer program is executed, so as to implement a performance test for the autopilot sensing system in a natural state, thereby improving the safety of the autopilot system in practical applications.
The processor 401 is exemplary for executing the computer program and for carrying out the following operations when executing the computer program:
acquiring traffic scene data of a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to generate a sensing result, and the traffic scene data is acquired by the unmanned aerial vehicle following the test vehicle; and determining the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
In addition, in an embodiment of the present application, there is further provided a computer readable storage medium, where a computer program is stored, where the computer program includes program instructions, and the processor executes the program instructions to implement the steps of the method for testing an autopilot sensing system provided in any one of the foregoing embodiments.
The computer readable storage medium may be an internal storage unit of the test device, the drone, or the vehicle according to any of the foregoing embodiments, such as a memory of the test device. The computer readable storage medium may also be an external storage device of the test apparatus, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the test apparatus.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (66)

  1. A test system for testing an autopilot awareness system in an autopilot system, the test system comprising:
    a test vehicle comprising an autopilot system including an autopilot sensing system for ambient environmental sensing to generate a sensing result;
    the unmanned aerial vehicle can fly along with the test vehicle and collect traffic scene data in the running process of the test vehicle;
    the testing device is used for being in communication connection with the unmanned aerial vehicle and the test vehicle, and is used for acquiring the traffic scene data and determining the accuracy of the perception result according to the traffic scene data.
  2. The test system of claim 1, wherein the drone is capable of following and hovering over the test vehicle.
  3. The test system of claim 2, wherein a hover position and/or hover height of the drone over the test vehicle is related to a traffic scenario of the test vehicle.
  4. The test system of claim 2, wherein the drone is capable of hovering over the test vehicle for a specified period of time and collecting traffic scenario data during travel of the test vehicle.
  5. The test system of claim 4, wherein the wind force at the specific time is less than the wind force at other time periods and/or the light intensity at the specific time is greater than the light intensity at other time periods.
  6. The test system according to claim 1, wherein the unmanned aerial vehicle is capable of adjusting its flight attitude and/or the shooting angle of its onboard shooting device, and collecting traffic scene data during the running of the test vehicle.
  7. The test system according to claim 6, wherein the unmanned aerial vehicle is capable of adjusting its flight attitude and/or a shooting angle of a shooting device mounted thereon according to road information of the test vehicle traveling on a target road section, and collecting traffic scene data during the test vehicle traveling.
  8. The test system according to claim 6, wherein the unmanned aerial vehicle is capable of collecting traffic scene data during the running of the test vehicle by adjusting the flying attitude thereof and/or the shooting angle of the shooting device mounted thereon according to the running condition of the test vehicle.
  9. The test system of claim 1, wherein the perceived result includes at least relative positional information of objects surrounding the test vehicle with respect to the test vehicle.
  10. The test system according to claim 9, wherein the perceived result further comprises pose information of the test vehicle and/or target information of the target object, the target information comprising a type of the target object and/or a speed of movement of the target object.
  11. The test system of claim 1, wherein the autopilot awareness system includes at least one of: a vision sensor, radar or inertial sensor, wherein the vision sensor comprises a monocular camera or a binocular camera.
  12. The test system of claim 1, wherein the test device is configured to:
    determining a detection result of a target object around the test vehicle according to the traffic scene data, wherein the detection result at least comprises position information of the target object relative to the test vehicle;
    obtaining a sensing result corresponding to an automatic driving sensing system of the test vehicle;
    and comparing the detection result with the perception result to obtain a test result.
  13. The test system of any one of claims 1-12, wherein the objects surrounding the test vehicle include dynamic traffic elements and static traffic elements;
    wherein the dynamic traffic element comprises at least one of: a motor vehicle, a non-motor vehicle, a pedestrian or an animal, the static traffic element comprising at least one of: lane lines, obstacles or road edges on the road.
  14. The test system of claim 13, wherein the test device is configured to:
    performing target recognition on the image of the traffic scene data according to the image characteristics of the target object to obtain the target object around the test vehicle;
    wherein the image features include one or more of color information, size information, and texture information of the object.
  15. The test system of claim 13, wherein the test device is configured to:
    connecting the areas corresponding to the targets in the images of the traffic scene data to obtain a connected area; and
    determining the size information and the centroid position of the connected region.
  16. The test system of claim 15, wherein the region corresponding to the object in the image of the traffic scene data is a region located in a foreground image of the image, and the foreground image is morphologically processed.
  17. The test system of claim 15, wherein the test device is configured to:
    and determining the type of the target object according to the size of the connected region, wherein the target object comprises a vehicle or an interfering object, the type of the target object at least comprises a first type, a second type and a third type, and the sizes of vehicles corresponding to the first type, the second type and the third type are different.
  18. The test system of claim 17, wherein the test device is configured to:
    and determining the type of the target object according to the width of the connected region and the width of the road in the image of the traffic scene data.
  19. The test system of claim 13, wherein the test device is configured to:
    determining pixel positions of reference points in at least two frames of images in the traffic scene data;
    acquiring relative distance information of the reference point relative to another reference point, wherein the relative distance information is the distance of the reference point relative to the other reference point in a real scene;
    converting the at least two frames of images from two-dimensional coordinates to three-dimensional coordinates, and determining the actual displacement of a target object in the images according to the pixel positions and the relative distance information;
    and determining the change time corresponding to the actual displacement according to the frame rate corresponding to the at least two frames of images, and determining the movement speed of the target object according to the change time and the actual displacement.
  20. The test system of claim 13, wherein the test device is configured to:
    determining the actual driving distance of the target object in two adjacent images of the traffic scene data according to the road width of the driving road of the test vehicle;
    determining the running time of the target object according to the time difference of the two adjacent images;
    and determining the movement speed of the target object according to the actual driving distance and the driving time.
  21. The test system of claim 20, wherein the determining the actual distance traveled by the object in the two images of the traffic scene data based on the road width of the test vehicle's travel road comprises:
    determining a centroid moving distance of the target object according to centroid positions of the target object in two adjacent images, wherein the centroid moving distance comprises a plurality of pixels;
    determining a distance corresponding to each pixel in the image according to the number of pixels corresponding to the road in the image and the actual width of the road;
    and determining the actual driving distance of the target object according to the centroid moving distance and the distance corresponding to each pixel.
  22. The test system of claim 21, wherein the test device is configured to:
    and carrying out correction processing on the corresponding pixel number of the road in the image, wherein the correction processing comprises binarization processing and/or horizontal direction projection processing.
  23. The test system of claim 13, wherein the test device is configured to:
    acquiring linear characteristics of lane lines on the running road of the test vehicle, wherein the linear characteristics comprise linear information and color information;
    and identifying lane lines in the traffic scene data according to the linear characteristics.
  24. The test system of claim 13, wherein the test device is configured to:
    based on edge detection, lane lines in the traffic scene data are identified according to edge features, wherein the edge features include gradient features and/or color features.
  25. The test system of claim 24, wherein the test device is configured to:
    and calculating the gradient of the image in the traffic scene data, and determining the edge of the lane line according to the change of the gradient to obtain the lane line.
  26. The test system of claim 13, wherein the test device is further configured to:
    and inputting the image of the traffic scene data into a pre-trained image recognition model to obtain the target object included in the traffic scene data.
  27. The test system of claim 13, wherein the drone includes a binocular camera for capturing left and right eye images of the test vehicle while traveling;
    the testing device is used for:
    obtaining a depth image of the test vehicle when the test vehicle runs according to the left eye image and the right eye image;
    the left-eye image or right-eye image is segmented according to a difference between depth information in the depth image to determine an obstacle.
  28. The test system according to any one of claims 1-12, wherein the test device is adapted to:
    determining a functional module which does not meet the requirement in the automatic driving perception system according to a test result, and optimizing the functional module;
    wherein the functional module comprises one or more of a target recognition module, a travelable region recognition module, a multi-target tracking module and a self-gesture detection module, and the optimization comprises at least one of the following: optimizing the perception algorithm of the functional module and selecting a sensor with higher precision.
  29. The test system of any one of claims 1-12, wherein the test device is further configured to:
    preprocessing the traffic scene data;
    wherein the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and setting a region of interest.
  30. The test system of claim 13, wherein the objects surrounding the test vehicle include surrounding vehicles that are vehicles adjacent to the test vehicle and blind vehicles that are obscured by the surrounding vehicles.
  31. A test method for testing an autopilot awareness system in an autopilot system, the test method comprising:
    acquiring traffic scene data of a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to generate a sensing result, and the traffic scene data is acquired by the unmanned aerial vehicle following the test vehicle;
    And determining the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
  32. The test method of claim 31, wherein the drone is able to follow the test vehicle and hover over the test vehicle.
  33. The test method of claim 32, wherein a hover position and/or hover height of the unmanned aerial vehicle over the test vehicle is related to a traffic scenario of the test vehicle.
  34. The test method of claim 32, wherein the drone is capable of hovering over the test vehicle for a specified period of time and collecting traffic scenario data during travel of the test vehicle.
  35. The method of claim 34, wherein the wind force at the specific time is less than the wind force at other time periods and/or the light intensity at the specific time is greater than the light intensity at other time periods.
  36. The method according to claim 31, wherein the unmanned aerial vehicle is capable of adjusting its flight attitude and/or the shooting angle of its onboard shooting device, and collecting traffic scene data during the running of the test vehicle.
  37. The method according to claim 36, wherein the unmanned aerial vehicle is capable of adjusting its flight attitude and/or a shooting angle of a shooting device mounted thereon according to road information of the test vehicle traveling on a target road section, and collecting traffic scene data during the test vehicle traveling.
  38. The method according to claim 36, wherein the unmanned aerial vehicle is capable of collecting traffic scene data during the driving of the test vehicle by adjusting its flight attitude and/or the shooting angle of its onboard shooting device according to the running condition of the test vehicle.
  39. The method of claim 31, wherein the perceived result includes at least relative positional information of objects surrounding the test vehicle with respect to the test vehicle.
  40. The method of claim 39, wherein the perceived result further includes pose information of the test vehicle and/or target information of the target, the target information including a type of the target and/or a speed of movement of the target.
  41. The method of testing of claim 31, wherein the autopilot awareness system comprises at least one of: a vision sensor, radar or inertial sensor, wherein the vision sensor comprises a monocular camera or a binocular camera.
  42. The method of testing of claim 31, wherein the method of testing comprises:
    determining a detection result of a target object around the test vehicle according to the traffic scene data, wherein the detection result at least comprises position information of the target object relative to the test vehicle;
    obtaining a sensing result corresponding to an automatic driving sensing system of the test vehicle;
    and comparing the detection result with the perception result to obtain a test result.
  43. The method of any one of claims 31-42, wherein the objects surrounding the test vehicle include dynamic traffic elements and static traffic elements;
    wherein the dynamic traffic element comprises at least one of: a motor vehicle, a non-motor vehicle, a pedestrian or an animal, the static traffic element comprising at least one of: lane lines, obstacles or road edges on the road.
  44. The test method of claim 43, wherein the test method comprises:
    performing target recognition on the image of the traffic scene data according to the image characteristics of the target object to obtain the target object around the test vehicle;
    wherein the image features include one or more of color information, size information, and texture information of the object.
  45. The test method of claim 43, wherein the test method comprises:
    connecting the areas corresponding to the targets in the images of the traffic scene data to obtain a connected area; and
    determining the size information and the centroid position of the connected region.
  46. The method of claim 45, wherein the region corresponding to the object in the image of the traffic scene data is a region in a foreground image of the image, and the foreground image is morphologically processed.
  47. The test method of claim 45, wherein the test method comprises:
    and determining the type of the target object according to the size of the communication area, wherein the target object comprises a vehicle or an interfering object, the type of the target object at least comprises a first type, a second type and a third type, and the sizes of vehicles corresponding to the first type, the second type and the third type are different.
  48. The method of testing of claim 47, wherein the method of testing comprises:
    and determining the type of the target object according to the width of the connected region and the width of the road in the image of the traffic scene data.
  49. The test method of claim 43, wherein the test method comprises:
    determining pixel positions of reference points in at least two frames of images in the traffic scene data;
    acquiring relative distance information of the reference point relative to another reference point, wherein the relative distance information is the distance of the reference point relative to the other reference point in a real scene;
    converting the at least two frames of images from two-dimensional coordinates to three-dimensional coordinates, and determining the actual displacement of a target object in the images according to the pixel positions and the relative distance information;
    and determining the change time corresponding to the actual displacement according to the frame rate corresponding to the at least two frames of images, and determining the movement speed of the target object according to the change time and the actual displacement.
  50. The test method of claim 43, wherein the test method comprises:
    determining the actual driving distance of the target object in two adjacent images of the traffic scene data according to the road width of the driving road of the test vehicle;
    determining the running time of the target object according to the time difference of the two adjacent images;
    and determining the movement speed of the target object according to the actual driving distance and the driving time.
  51. The method of claim 50, wherein determining the actual distance traveled by the object in the two images of the traffic scene data based on the road width of the test vehicle's driving road comprises:
    determining a centroid moving distance of the target object according to centroid positions of the target object in two adjacent images, wherein the centroid moving distance comprises a plurality of pixels;
    determining a distance corresponding to each pixel in the image according to the number of pixels corresponding to the road in the image and the actual width of the road;
    and determining the actual driving distance of the target object according to the centroid moving distance and the distance corresponding to each pixel.
  52. The method of claim 51, wherein the method of testing comprises:
    and carrying out correction processing on the corresponding pixel number of the road in the image, wherein the correction processing comprises binarization processing and/or horizontal direction projection processing.
  53. The test method of claim 43, wherein the test method comprises:
    acquiring linear characteristics of lane lines on the running road of the test vehicle, wherein the linear characteristics comprise linear information and color information;
    and identifying lane lines in the traffic scene data according to the linear characteristics.
  54. The test method of claim 43, wherein the test method comprises:
    based on edge detection, lane lines in the traffic scene data are identified according to edge features, wherein the edge features include gradient features and/or color features.
  55. The method of claim 54, wherein the method of testing comprises:
    and calculating the gradient of the image in the traffic scene data, and determining the edge of the lane line according to the change of the gradient to obtain the lane line.
  56. The test method of claim 43, wherein the test method further comprises:
    and inputting the image of the traffic scene data into a pre-trained image recognition model to obtain the target object included in the traffic scene data.
  57. The method of claim 43, wherein the drone includes a binocular camera for capturing left and right eye images of the test vehicle while traveling;
    The test method comprises the following steps:
    obtaining a depth image of the test vehicle when the test vehicle runs according to the left eye image and the right eye image;
    the left-eye image or right-eye image is segmented according to a difference between depth information in the depth image to determine an obstacle.
  58. The method of any one of claims 31-42, wherein the method of testing comprises:
    determining a functional module which does not meet the requirement in the automatic driving perception system according to a test result, and optimizing the functional module;
    wherein the functional module comprises one or more of a target recognition module, a travelable region recognition module, a multi-target tracking module and a self-gesture detection module, and the optimization comprises at least one of the following: optimizing the perception algorithm of the functional module and selecting a sensor with higher precision.
  59. The test method of any one of claims 31-42, wherein the test method further comprises:
    preprocessing the traffic scene data;
    wherein the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing, and setting a region of interest.
  60. The test method of claim 43, wherein the objects surrounding the test vehicle include surrounding vehicles that are vehicles adjacent to the test vehicle and blind spot vehicles that are obscured by the surrounding vehicles.
  61. A test system for testing an autopilot awareness system in an autopilot system, the test system comprising:
    a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, and the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to obtain a sensing result;
    an unmanned aerial vehicle, wherein the unmanned aerial vehicle is in communication connection with the test vehicle, the unmanned aerial vehicle can fly along with the test vehicle and collect traffic scene data during the running of the test vehicle, and the accuracy of the sensing result of the automatic driving sensing system is determined according to the traffic scene data.
  62. A test system for testing an autopilot awareness system in an autopilot system, the test system comprising:
    a test vehicle, wherein the test vehicle comprises an automatic driving system, the automatic driving system comprises an automatic driving sensing system, and the automatic driving sensing system is used for sensing the surrounding environment of the test vehicle to obtain a sensing result;
    an unmanned aerial vehicle, wherein the unmanned aerial vehicle can fly along with the test vehicle and collect traffic scene data during the running of the test vehicle,
    wherein the test vehicle is in communication connection with the unmanned aerial vehicle, and is used for acquiring the traffic scene data and determining the accuracy of the sensing result of the automatic driving sensing system according to the traffic scene data.
  63. A test device, wherein the test device comprises a processor and a memory;
    the memory is used for storing a computer program;
    the processor being adapted to execute the computer program and to implement the test method of any one of claims 31-60 when the computer program is executed.
  64. An unmanned aerial vehicle, characterized in that the unmanned aerial vehicle comprises:
    a body;
    the cradle head is arranged on the machine body;
    the shooting device is arranged on the cradle head and is used for shooting images;
    wherein the drone further comprises a processor and a memory, the memory for storing a computer program, the processor for executing the computer program and implementing the test method of any one of claims 31-60 when the computer program is executed.
  65. A vehicle, characterized in that the vehicle comprises:
    a vehicle platform;
    the automatic driving system is connected with the vehicle platform and comprises an automatic driving sensing system, and the automatic driving sensing system is used for sensing the surrounding environment of the vehicle to obtain a sensing result;
    wherein the autopilot system comprises a processor and a memory, the memory for storing a computer program, the processor for executing the computer program and for implementing the test method of any one of claims 31-60 when the computer program is executed.
  66. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the test method of any one of claims 31-60.
CN202180087990.5A 2021-05-28 2021-05-28 Automatic driving perception system testing method, system and storage medium based on aerial survey data Pending CN116802581A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/096993 WO2022246851A1 (en) 2021-05-28 2021-05-28 Aerial survey data-based testing method and system for autonomous driving perception system, and storage medium

Publications (1)

Publication Number Publication Date
CN116802581A true CN116802581A (en) 2023-09-22

Family

ID=84229469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180087990.5A Pending CN116802581A (en) 2021-05-28 2021-05-28 Automatic driving perception system testing method, system and storage medium based on aerial survey data

Country Status (2)

Country Link
CN (1) CN116802581A (en)
WO (1) WO2022246851A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7765062B2 (en) * 2006-04-25 2010-07-27 Honeywell International Inc. Method and system for autonomous tracking of a mobile target by an unmanned aerial vehicle
KR101882419B1 (en) * 2017-01-13 2018-07-26 주식회사 블루젠드론 System for Detecting Vehicle in Multiplelane using UAV and Method thereof
CN108414238A (en) * 2018-03-09 2018-08-17 孙会鸿 Automatic parking function real steering vectors system and test method
CN110347182A (en) * 2019-07-23 2019-10-18 广汽蔚来新能源汽车科技有限公司 Auxiliary driving device, system, unmanned plane and vehicle
CN112558608B (en) * 2020-12-11 2023-03-17 重庆邮电大学 Vehicle-mounted machine cooperative control and path optimization method based on unmanned aerial vehicle assistance
CN112735164B (en) * 2020-12-25 2022-08-05 北京智能车联产业创新中心有限公司 Test data construction method and test method

Also Published As

Publication number Publication date
WO2022246851A1 (en) 2022-12-01

Similar Documents

Publication Publication Date Title
JP7073315B2 (en) Vehicles, vehicle positioning systems, and vehicle positioning methods
TWI703064B (en) Systems and methods for positioning vehicles under poor lighting conditions
CN110009765B (en) Scene format conversion method of automatic driving vehicle scene data system
CN108256413B (en) Passable area detection method and device, storage medium and electronic equipment
CN108647638B (en) Vehicle position detection method and device
CN110531376B (en) Obstacle detection and tracking method for port unmanned vehicle
CN111874006B (en) Route planning processing method and device
CN107235044A (en) It is a kind of to be realized based on many sensing datas to road traffic scene and the restoring method of driver driving behavior
WO2018020954A1 (en) Database construction system for machine-learning
JP6678605B2 (en) Information processing apparatus, information processing method, and information processing program
EP2372605A2 (en) Image processing system and position measurement system
CN107194989A (en) The scene of a traffic accident three-dimensional reconstruction system and method taken photo by plane based on unmanned plane aircraft
WO2022246852A1 (en) Automatic driving system testing method based on aerial survey data, testing system, and storage medium
JP2021508815A (en) Systems and methods for correcting high-definition maps based on the detection of obstructing objects
CN112740225B (en) Method and device for determining road surface elements
CN111091037A (en) Method and device for determining driving information
US11842440B2 (en) Landmark location reconstruction in autonomous machine applications
WO2018149539A1 (en) A method and apparatus for estimating a range of a moving object
CN116935281A (en) Method and equipment for monitoring abnormal behavior of motor vehicle lane on line based on radar and video
CN110727269B (en) Vehicle control method and related product
de Frías et al. Intelligent cooperative system for traffic monitoring in smart cities
Murashov et al. Method of determining vehicle speed according to video stream data
CN116802581A (en) Automatic driving perception system testing method, system and storage medium based on aerial survey data
CN111077893B (en) Navigation method based on multiple vanishing points, electronic equipment and storage medium
CN112020722B (en) Three-dimensional sensor data-based road shoulder identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination