WO2023116344A1 - Unmanned driving test method, unmanned driving test system and computer device - Google Patents

Unmanned driving test method, unmanned driving test system and computer device

Info

Publication number
WO2023116344A1
WO2023116344A1 (PCT/CN2022/134343)
Authority
WO
WIPO (PCT)
Prior art keywords
test
traffic flow
tested
vehicle
unmanned
Prior art date
Application number
PCT/CN2022/134343
Other languages
English (en)
French (fr)
Inventor
Wu Jianping (吴建平)
Li Guanzhou (李冠洲)
Original Assignee
Tsinghua University (清华大学)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University (清华大学)
Publication of WO2023116344A1 publication Critical patent/WO2023116344A1/zh

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Definitions

  • Embodiments of the present disclosure relate to an unmanned driving test method, an unmanned driving test system and computer equipment.
  • Unmanned driving technology is a comprehensive technology drawing on many cutting-edge disciplines. It can realize autonomous driving without relying entirely on a driver's control and enables intelligent travel solutions, making it a hot research field for next-generation vehicles.
  • At least some embodiments of the present disclosure provide an unmanned driving test method, including: building, in a traffic flow simulation platform, a static road environment consistent with the unmanned driving test site, and constructing, according to test requirements, a virtual image of the unmanned vehicle to be tested and a virtual traffic scene, wherein the traffic flow simulation platform includes a FLOWSIM traffic flow simulation platform; perceiving the real-time dynamic simulation results of the traffic flow simulation platform through the virtual sensor configured on the virtual image, and returning perception information in real time to the sensing end of the unmanned vehicle to be tested based on the first operating state information obtained through perception; and the unmanned vehicle to be tested making decisions according to the received perception information, driving in the test site based on the decision results, and feeding back its second operating state information to the traffic flow simulation platform in real time, whereupon the traffic flow simulation platform updates the position and state of the virtual image in the virtual traffic scene.
  • the method provided by some embodiments of the present disclosure further includes: the unmanned vehicle to be tested becoming a participant in the virtual traffic scene and interacting with surrounding vehicles and the environment in the virtual traffic scene, wherein two-way communication is used for the interaction between the traffic flow simulation platform and the unmanned vehicle to be tested.
  • the method provided by some embodiments of the present disclosure further includes: testing the decision-making and control logic of the unmanned vehicle to be tested, and/or testing the perception system of the unmanned vehicle.
  • testing the decision-making and control logic of the unmanned vehicle under test includes at least one of the following: testing the operating conditions of the vehicle within its design operating range, testing the ability to judge when the driver and passengers should take over, testing the ability to identify safety risks, and testing the response to user intervention requests.
  • testing the operating conditions of the unmanned vehicle to be tested within the design operating range includes: testing the decision-making and control strategies under different driving conditions, including but not limited to car following, changing lanes, turning left and right, U-turns, stopping at intersections, reversing, vehicles merging into the ego lane, vehicles leaving the ego lane, and interaction with pedestrians and bicycles. Two-way communication technology is used to construct the interaction between the unmanned vehicle to be tested and the simulation system, and to perform two-way mapping between the unmanned vehicle to be tested and its virtual image in the corresponding simulation environment.
  • the test of the take-over judgment ability includes: identifying at least one of the situation where the driver and passengers take over when the operating range is exceeded, the situation where the driver and passengers take over in an emergency, and the driving state of the driver and passengers;
  • the test of the ability to identify safety risks includes at least one of the following: testing the driving strategy when there is a visual blind spot ahead, judging the risk of pedestrian intrusion, and judging conflict situations between left-turning and through traffic; the response to a user intervention request is determined based on the take-over status of the driver and passengers and the current safety-risk identification.
  • the testing of the perception system of the unmanned vehicle includes: calibration of the perception system's capability, and/or evaluation of the cognitive system's accuracy in object type recognition and in motion state estimation.
  • the method provided by some embodiments of the present disclosure further includes: the testing of the driverless vehicle's perception system includes a static object perception ability test and/or a dynamic object perception ability test. The static object perception ability test compares the state information of static objects measured at the sensing end with static fixed objects that have been pre-measured, modeled, and dimensioned in the simulation platform, to obtain the static-object perception ability of the vehicle to be tested. The dynamic object perception ability test pre-determines the shape, size, and color of the object, then installs a differential-positioning global positioning system, an inertial navigation system, and a gyroscope on both the vehicle to be tested and the dynamic object to obtain the motion of the dynamic object in the coordinate frame of the vehicle to be tested, and compares it with the results obtained by the unmanned driving perception-cognition system, so as to determine the effectiveness of the vehicle's perception-cognition system.
  • the method provided by some embodiments of the present disclosure further includes: the virtual sensor's perception is compared and calibrated against the real information of the static and dynamic calibration objects in the unmanned driving test site, obtained through real vehicle sensors.
  • the FLOWSIM traffic flow simulation platform is constructed based on a plurality of real vehicle driving behavior data.
  • the method is used to realize the fusion of online traffic flow simulation and real road environment.
  • the method is executed multiple times until the function and level test of the unmanned vehicle to be tested is completed.
  • At least some embodiments of the present disclosure also provide an unmanned driving test method, including: building, in the traffic flow simulation platform, a static road environment consistent with the unmanned driving test site, and constructing, according to the test requirements, a virtual image of the unmanned vehicle to be tested and a virtual traffic scene, wherein the traffic flow simulation platform includes a FLOWSIM traffic flow simulation platform; perceiving the real-time dynamic simulation results of the traffic flow simulation platform through the virtual sensor configured on the virtual image, and returning the perception information in real time to the sensing end of the unmanned vehicle to be tested based on the acquired first operating state information; and receiving in real time the second operating state information fed back by the unmanned vehicle to be tested to the traffic flow simulation platform, and updating the position and state of the virtual image in the virtual traffic scene.
  • At least some embodiments of the present disclosure also provide an unmanned driving test method, including: receiving perception information at the sensing end of the unmanned vehicle to be tested, wherein the perception information is sent in real time based on the first operating state information obtained by the virtual sensor, configured on the virtual image in the traffic flow simulation platform, perceiving the real-time dynamic simulation results of the platform, and the traffic flow simulation platform includes the FLOWSIM traffic flow simulation platform; making decisions based on the received perception information, and controlling the unmanned vehicle to be tested to drive in the unmanned driving test site based on the decision results; and feeding back the second operating state information of the unmanned vehicle to be tested to the traffic flow simulation platform in real time, wherein the traffic flow simulation platform has a static road environment consistent with the unmanned driving test site and a virtual traffic scene built according to the test requirements.
  • At least some embodiments of the present disclosure also provide an unmanned driving test system for implementing the unmanned driving test method provided in any embodiment of the present disclosure, wherein the unmanned driving test system includes: a traffic flow simulation platform and a physical vehicle to be tested; the traffic flow simulation platform is used to provide surrounding environment data to the physical vehicle to be tested, and the physical vehicle to be tested responds to the surrounding environment data and performs the corresponding longitudinal and lateral movements.
  • when a special situation occurs, the driver takes over the physical vehicle to be tested and the simulation test is interrupted; the special situation includes at least one of the following: the physical vehicle to be tested is about to encounter a real dangerous situation or accident; the simulation test is suspended and the driver and passengers are required to take over; or the simulation test is over and the driver and passengers are required to take over.
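The interruption rule above is a disjunction of the listed special situations. A minimal sketch (the condition names and the function itself are illustrative assumptions, not the patent's interface):

```python
# Hypothetical take-over/interruption check; the three trigger conditions
# mirror the special situations listed above.

def should_interrupt(real_danger: bool, simulation_suspended: bool,
                     simulation_finished: bool) -> bool:
    """Return True when the driver and passengers must take over and the
    simulation test is interrupted."""
    return real_danger or simulation_suspended or simulation_finished

# Normal operation continues; any single special situation interrupts.
print(should_interrupt(False, False, False))  # False
print(should_interrupt(True, False, False))   # True
```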
  • At least some embodiments of the present disclosure further provide a computer device, including a memory, a processor, and a computer program stored on the memory and operable on the processor; when the processor executes the computer program, the above unmanned driving test method integrating online traffic flow simulation and the real road environment is implemented.
  • FIG. 1 is a schematic flowchart of an unmanned driving test method that integrates online traffic flow simulation and real road environment provided by an embodiment of the present disclosure.
  • FIG. 2 is a schematic flowchart of an unmanned driving test system that integrates online traffic flow simulation and real road environment provided by an embodiment of the present disclosure.
  • FIG. 3 is a project block diagram of an unmanned driving test method that integrates online traffic flow simulation and real road environment provided by an embodiment of the present disclosure.
  • Real-vehicle road testing has several drawbacks: a) High test cost: substantial expense on scene preparation, including purchasing surrounding vehicles, preparing surrounding-vehicle and pedestrian models, and organizing and scheduling multi-vehicle traffic; b) Limited traffic scenarios: due to site, equipment, and safety constraints, it is difficult for a test-site layout to fully cover the corner cases needed to test unmanned vehicles; c) Slow iteration cycle: swapping real scenes in an actual test site consumes much time and cost, slowing the test process; d) Safety problems: interactive tests between real unmanned vehicles and manned vehicles carry a potential collision hazard; e) High site perception requirements: real-vehicle tests require building, installing, and maintaining a full-coverage sensing system, and data collection and analysis are costly.
  • Embodiments of the present disclosure provide an unmanned driving test method, an unmanned driving test system, and computer equipment.
  • the unmanned driving test method and unmanned driving test system integrate online traffic flow simulation and the real road environment, realize an unmanned driving test method based on virtual reality, can quickly test and evaluate the capabilities and limitations of automated driving, and are conducive to the formation of unified testing standards and procedures.
  • the first purpose of the embodiments of the present disclosure is to propose an unmanned driving test method that integrates online traffic flow simulation and the real road environment, combining the safety, efficiency, speed, and convenience of the simulation platform with the authenticity of the measured scene and the convenience of accessing the vehicle under test; key scenarios are constructed for the unmanned driving test capability, thereby replacing or accelerating the large-scale road test stage originally required for unmanned driving.
  • the unmanned driving test method proposed by the embodiments of the present disclosure, which integrates online traffic flow simulation and the real road environment, combines the safety, efficiency, speed, and convenience of the simulation platform with the authenticity of the measured scene and the convenience of accessing the vehicle to be tested.
  • FIG. 1 is a schematic flowchart of an unmanned driving test method based on fusion of online traffic flow simulation and real road environment provided by an embodiment of the present disclosure.
  • the unmanned driving test method for the fusion of online traffic flow simulation and real road environment includes the following steps:
  • S101: In the traffic flow simulation platform, build a static road environment consistent with the unmanned driving test site, and construct a virtual image of the unmanned vehicle to be tested and a virtual traffic scene according to the test requirements;
  • S102: Perceive the real-time dynamic simulation results of the traffic flow simulation platform through the virtual sensor configured on the virtual image, and return the perception information in real time to the sensing end of the unmanned vehicle to be tested based on the first operating state information obtained through perception;
  • S103: The unmanned vehicle to be tested makes decisions based on the received perception information, drives in the unmanned driving test site based on the decision results, and feeds back its second operating state information to the traffic flow simulation platform in real time;
  • the traffic flow simulation platform then updates the position and state of the virtual image in the virtual traffic scene.
  • step S101 may specifically include: in the FLOWSIM traffic flow simulation platform built on a large amount of real vehicle driving behavior data, building a static road environment consistent with the closed unmanned driving test site, and building the virtual image of the unmanned vehicle to be tested and the virtual traffic scene.
  • Step S102 may specifically include: sensing the real-time dynamic simulation results of the traffic flow simulation platform through the virtual sensor configured by the virtual image, and returning the acquired first running state information to the sensing terminal of the unmanned vehicle under test in real time.
  • Step S103 may specifically include: controlling the unmanned vehicle to be tested to drive in a closed unmanned driving test field based on the perception information, and feeding back the second operating state information of the unmanned vehicle to be tested to the traffic flow simulation platform in real time, updating the virtual The position and state of the image in the virtual traffic scene.
  • the method may further include step S104.
  • S104: Repeat the above steps S101 to S103 until the function and level tests of the unmanned vehicle to be tested are completed.
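Steps S101 to S104 form a closed loop in which the platform and the real vehicle exchange state once per time step. The sketch below illustrates that loop; all class names and the toy gap-based decision rule are assumptions made for illustration, not the patent's actual interfaces:

```python
# Illustrative closed loop for steps S101-S104; names and the toy
# decision rule are assumptions, not the patent's interfaces.
from dataclasses import dataclass

@dataclass
class State:
    x: float      # position along the test track (m)
    speed: float  # m/s

class SimPlatform:
    """Stand-in for the traffic flow simulation platform (e.g. FLOWSIM)."""
    def __init__(self):
        # S101: the virtual image (mirror) of the vehicle under test
        self.mirror = State(0.0, 0.0)
    def virtual_sensor_scan(self):
        # S102: virtual sensors perceive the simulated traffic; here a
        # single stationary lead vehicle 30 m down the track.
        return {"gap_m": 30.0 - self.mirror.x}
    def update_mirror(self, state):
        # S103 (platform side): sync the fed-back real-vehicle state.
        self.mirror = state

class TestVehicle:
    """Stand-in for the real vehicle driving on the physical site."""
    def __init__(self):
        self.state = State(0.0, 0.0)
    def step(self, perception, dt=0.1):
        # Toy decision: accelerate while the perceived gap exceeds 10 m.
        accel = 1.0 if perception["gap_m"] > 10.0 else 0.0
        self.state.speed += accel * dt
        self.state.x += self.state.speed * dt
        return self.state

sim, car = SimPlatform(), TestVehicle()
for _ in range(100):                        # S104: repeat until done
    perception = sim.virtual_sensor_scan()  # S102
    state = car.step(perception)            # S103 (vehicle side)
    sim.update_mirror(state)                # S103 (platform side)
print(round(sim.mirror.x, 2), round(sim.mirror.speed, 2))
```

In a real deployment the `TestVehicle` side runs on the on-board controller and the loop closes over the communication link; here both sides run in one process only to show the data flow.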
  • the FLOWSIM traffic flow simulation platform is built based on multiple real vehicle driving behavior data, that is, based on a large number of real vehicle driving behavior data.
  • the traffic flow simulation platform is not limited to the FLOWSIM traffic flow simulation platform, and other types of simulation platforms can also be used, which can be determined according to actual needs, and the embodiments of the present disclosure are not limited thereto.
  • This method is used to realize the integration of online traffic flow simulation and real road environment. By executing the method several times until the function and level tests of the unmanned vehicle to be tested are completed, various tests of the unmanned vehicle can be completed.
  • the unmanned driving test method proposed by the embodiment of the present disclosure, which integrates online traffic flow simulation and the real road environment, combines the safety, efficiency, speed, and convenience of the simulation platform with the authenticity of the measured scene and the convenience of accessing the vehicle to be tested.
  • it also includes:
  • the unmanned vehicle to be tested becomes a participant in the virtual traffic scene and interacts with the surrounding vehicles and the environment in the virtual traffic scene.
  • two-way low-latency communication is used for interaction between the traffic flow simulation platform and the unmanned vehicle to be tested.
  • the embodiments of the present disclosure are not limited thereto, and any applicable two-way communication method can be used for interaction between the traffic flow simulation platform and the unmanned vehicle to be tested, which can be determined according to actual needs.
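As a concrete illustration of the two-way exchange, each simulation step can carry one downlink (platform to vehicle) perception message and one uplink (vehicle to platform) state message. The JSON schema below is an assumption for illustration only; the patent does not specify a wire format:

```python
# Hypothetical message pair for the bidirectional link; the field names
# are illustrative assumptions, not a format defined by the patent.
import json

def perception_msg(step, objects):
    """Platform -> vehicle: what the virtual sensors perceived."""
    return json.dumps({"type": "perception", "step": step, "objects": objects})

def state_msg(step, x, y, heading, speed):
    """Vehicle -> platform: measured GPS/INS/gyroscope state."""
    return json.dumps({"type": "state", "step": step,
                       "x": x, "y": y, "heading": heading, "speed": speed})

down = perception_msg(1, [{"id": 7, "x": 12.5, "y": 3.0, "v": 8.2}])
up = state_msg(1, 100.0, 4.5, 0.02, 9.7)
print(json.loads(down)["type"], json.loads(up)["speed"])  # prints: perception 9.7
```

Compact self-describing messages like these keep per-step payloads small, which matters for the low-latency requirement of the link.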
  • it also includes:
  • the decision-making and control logic test mainly includes: testing the vehicle's control effect under different working conditions such as going straight, changing lanes, and turning left and right; testing the decision-making and control logic and risk control under different degrees of perception-information completeness; and testing the judgment of whether the driver is suited to take over driving.
  • the test of driverless perception system can be carried out separately in advance, or it can be carried out simultaneously with the test of decision-making and control logic.
  • the information from the sensing end of the vehicle to be tested is transmitted to the simulation platform and compared with the static objects on the site whose position, shape, and other information are determined in advance, and with the dynamic-model information measured by additionally installed sensing equipment, in order to judge the capability of the perception system.
  • the evaluation of the simulation platform can be continuously verified and calibrated by installing sensors on real vehicles in real manned/unmanned driving scenarios, calibrating micro-traffic model parameters such as headway, driver reaction time, vehicle braking distance, and acceleration.
  • the driving style library in the simulation module can be enriched, thereby generating a virtual reality closer to the real scene.
  • testing the decision-making and control logic of the unmanned vehicle to be tested includes:
  • testing of the decision-making and control logic may include at least one of the above tests, that is, may include one or more of the above tests.
  • testing the operating conditions of the unmanned vehicle to be tested within the design operating range includes: testing decision-making and control strategies under different driving conditions.
  • the driving conditions include but are not limited to: following, changing lanes, turning left and right, turning around, parking at an intersection, reversing, merging into the own vehicle lane, driving out of the own vehicle lane, and interacting with pedestrians and bicycles. It should be noted that the specific examples of the driving conditions are not limited to those listed above, and may also be other types of driving states, which may be determined according to actual needs, and are not limited by the embodiments of the present disclosure.
  • the test uses low-latency two-way communication technology, or other applicable two-way communication technology, to construct the interaction between the unmanned vehicle to be tested and the simulation system, and to perform two-way mapping between the unmanned vehicle to be tested and its virtual image in the corresponding simulation environment.
  • the communication technology includes advanced communication technologies such as 5G.
  • the testing of the judgment ability of the driver and passenger to take over the state includes: identifying at least one of the situation of the driver and the passenger taking over the vehicle when it exceeds the operating range, the situation of the driver and the passenger taking over in an emergency situation, and the driving state of the driver and passengers.
  • the testing of the ability to identify safety risks includes: the driving strategy when there is a visual blind spot ahead, judgment of pedestrian intrusion risks, and judgment of conflict situations between left-turning and through traffic. It should be noted that the testing of the ability to identify safety risks may include at least one of the above tests, that is, one or more of them.
  • the reaction to the user intervention request is determined based on the takeover status of the driver and occupant and the current safety risk identification situation.
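A minimal decision table consistent with that rule (take-over status and current risk identification jointly determine the response) might look like the following; the state names and the policy itself are assumptions for illustration only:

```python
# Illustrative policy for responding to a user intervention request,
# conditioned on driver take-over readiness and identified safety risk.

def intervention_response(driver_ready: bool, risk: str) -> str:
    if risk == "high":
        # Do not hand over control in the middle of an identified hazard.
        return "mitigate_risk_first"
    return "hand_over_control" if driver_ready else "deny_request"

print(intervention_response(True, "low"))    # hand_over_control
print(intervention_response(True, "high"))   # mitigate_risk_first
print(intervention_response(False, "low"))   # deny_request
```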
  • the decision-making and control logic test process of the unmanned vehicle to be tested is developed based on the FLOWSIM simulation software.
  • the manned-vehicle behavior models are built from real driving behavior characteristics collected and extracted over many years.
  • through fuzzy decision-making and control logic based on fuzzy mathematics, each simulated vehicle has the characteristics of an independent driver, which maps the characteristics of real-world manned vehicles well into the simulation system.
  • the simulation models generated from real vehicle data interact with the unmanned vehicle to generate realistic test scenarios.
  • because the vehicle's perception range, perception accuracy, and the validity of environmental data vary under different conditions such as weather, light, and static background, it is necessary to test the control system's ability to recognize and judge the limits of its own perception, and whether it can adjust its own decision-making and control logic when perception is impaired or occluded.
  • testing the perception system of an unmanned vehicle includes:
  • calibration of the perception system's capability, and/or the cognitive system's accuracy in object type recognition and motion state assessment.
  • the unmanned driving perception system test mainly consists of three major elements: the capability calibration of the perception system (whether it can perceive the existence of objects), the recognition accuracy of the cognitive system for object types (whether it can accurately identify the object category), and the accuracy of motion state assessment (whether information such as the object's position, distance, speed, acceleration, and motion angle is judged accurately).
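Each of the three elements can be scored from matched ground-truth/perception pairs. The toy data and metric choices below are illustrative assumptions (existence as detection rate, classification as accuracy over detected objects, motion state as mean absolute speed error):

```python
# Illustrative scoring of the three perception-test elements on toy data.
# All object values, ids, and thresholds are assumptions for this sketch.

truth = [  # ground-truth objects on the test site
    {"id": 1, "cls": "car", "v": 8.0},
    {"id": 2, "cls": "pedestrian", "v": 1.2},
    {"id": 3, "cls": "bicycle", "v": 4.0},
]
detected = {  # reported by the perception-cognition system, keyed by id
    1: {"cls": "car", "v": 7.8},
    3: {"cls": "car", "v": 4.3},  # detected but misclassified bicycle
}

matched = [o for o in truth if o["id"] in detected]
existence = len(matched) / len(truth)  # element 1: did it perceive the object?
classify = sum(1 for o in matched
               if detected[o["id"]]["cls"] == o["cls"]) / len(matched)  # element 2
speed_mae = sum(abs(detected[o["id"]]["v"] - o["v"])
                for o in matched) / len(matched)  # element 3 (speed only)
print(round(existence, 2), round(classify, 2), round(speed_mae, 2))
```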
  • the results of the test calibration are used to improve the virtual perception module of the simulation platform, so that the data transmitted from the virtual perception to the vehicle under test is more realistic.
  • testing the perception system of the unmanned vehicle includes a static object perception ability test and/or a dynamic object perception ability test.
  • the static object perception ability test includes comparing the state information of the static objects measured by the sensing end with the static fixed objects that have been pre-measured, modeled, and dimensioned in the simulation platform, to obtain the static-object perception ability of the vehicle to be tested.
  • the dynamic object perception ability test includes pre-determining the shape, size, and color information of the object, and then installing a high-precision differential positioning global positioning system (GPS), inertial navigation system (INS) and gyroscope respectively on the vehicle to be tested and the dynamic object.
  • Abbreviations: GPS (global positioning system), INS (inertial navigation system), gyroscope.
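Once the GPS/INS ground-truth track of the dynamic object and the track reported by the perception-cognition system are both expressed in the vehicle-under-test frame, the comparison reduces to per-sample errors. A hedged sketch with toy data and illustrative field names:

```python
# Per-sample errors between the GPS/INS ground-truth track of a dynamic
# object and the track reported by perception; data and field names are
# illustrative assumptions.
import math

def perception_errors(ground_truth, perceived):
    errs = []
    for gt, p in zip(ground_truth, perceived):
        pos_err = math.hypot(gt["x"] - p["x"], gt["y"] - p["y"])
        spd_err = abs(gt["v"] - p["v"])
        errs.append((pos_err, spd_err))
    return errs

gt = [{"x": 1.0 * t, "y": 2.0, "v": 1.0} for t in range(5)]        # truth
pc = [{"x": 1.0 * t + 0.2, "y": 2.1, "v": 0.9} for t in range(5)]  # perceived
errs = perception_errors(gt, pc)
mean_pos = sum(e[0] for e in errs) / len(errs)  # mean position error (m)
mean_spd = sum(e[1] for e in errs) / len(errs)  # mean speed error (m/s)
print(round(mean_pos, 3), round(mean_spd, 3))
```

Thresholds on such error statistics would then decide whether the perception-cognition system passes the dynamic-object test.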
  • it also includes:
  • the virtual sensor's perception can be compared and calibrated against the real information of the static and dynamic calibration objects in the unmanned driving test site, obtained through real vehicle sensors.
  • since the test effect of this test method largely depends on how well the test platform simulates the real road traffic scene, in order to ensure the authenticity and reliability of the test results, it is necessary to evaluate and iteratively optimize the simulation effect of the test platform.
  • evaluating and iterating the simulation effect of the test platform includes at least one of the following: parameter calibration for manned driving, parameter calibration for unmanned driving, and parameter calibration for pedestrians and non-motorized vehicles.
  • the parameter calibration of manned driving includes: comparing and dynamically calibrating the interactive motion of a limited set of manned vehicles against the vehicle-motion interactions of the simulation platform, wherein the interactive motion conditions include starting from standstill, straight-line cruising, changing lanes for overtaking, stopping at intersections, starting at intersections, and emergency braking.
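Dynamic calibration of a micro-traffic parameter can be sketched as nudging the simulated value toward the statistic measured on instrumented real vehicles. The relaxation-style update below is an illustrative assumption, not the patent's calibration algorithm; the parameter shown is desired time headway:

```python
# Hedged sketch of dynamic parameter calibration: relax a simulated
# micro-traffic parameter (desired time headway, seconds) toward the
# mean measured on instrumented real vehicles.

def calibrate_headway(sim_headway_s, measured_headways_s, rate=0.5, iters=20):
    target = sum(measured_headways_s) / len(measured_headways_s)
    for _ in range(iters):
        sim_headway_s += rate * (target - sim_headway_s)  # move toward data
    return sim_headway_s

measured = [1.4, 1.6, 1.5, 1.7, 1.3]  # headways from real-vehicle sensors
calibrated = calibrate_headway(2.5, measured)
print(round(calibrated, 3))  # converges to the measured mean, 1.5
```

The same pattern applies to the other parameters named above (driver reaction time, braking distance, acceleration), each calibrated against its own measured distribution.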
  • the parameter calibration of unmanned driving includes: learning the driving behavior of unmanned vehicles and extracting characteristic parameters, storing the collected unmanned driving behavior data in an unmanned-vehicle driving behavior database, and calling the unmanned-vehicle simulation module when testing unmanned vehicles.
  • the parameter calibration of pedestrians and non-motor vehicles includes: extracting, collecting and modeling the motion and driving characteristics of different types of pedestrians and different types of non-motor vehicles, and integrating them into the traffic flow simulation platform system.
  • the unmanned driving test method proposed by the embodiment of the present disclosure combines online traffic flow simulation and the real road environment. First, it greatly facilitates the development of unmanned driving tests: the vehicle to be tested can be accessed remotely and given a series of scenario tests without considering the location and distance of the test site. Second, when a collision occurs in the interaction between the real unmanned vehicle and online simulated vehicles, it does not constitute a real collision.
  • this test method reduces the transportation cost of physical manned vehicles and the production cost of full-scale physical models, and avoids the construction, organization, and scheduling of complex real test scenarios;
  • the rapid accumulation of a large number of simulation-based test scenarios, together with accurate problem localization and traceability, can form a unified and standardized test procedure, which may become the unified test standard for unmanned driving in the future;
  • unmanned vehicle testing consists of the perception, decision-making, planning, and action control modules: the simulation platform provides the perception environment, the unmanned vehicle control terminal provides the decision-making and planning results, and the unmanned driving of the real vehicle provides the action-control effect test.
  • the physical separation and virtual connection of the different functional partitions are realized through high-speed information transmission.
  • Embodiments of the present disclosure also provide an unmanned driving test method, which can be used on the platform side, that is, the computer or server used to run the traffic flow simulation platform.
  • the method includes the following operations:
  • in the traffic flow simulation platform, build a static road environment consistent with the unmanned driving test site, and build a virtual image of the unmanned vehicle to be tested and a virtual traffic scene according to the test requirements.
  • the traffic flow simulation platform includes FLOWSIM traffic flow simulation platform;
  • the virtual sensor configured by the virtual image perceives the real-time dynamic simulation results of the traffic flow simulation platform, and returns the sensing information to the sensing end of the unmanned vehicle to be tested in real time based on the first operating state information obtained through sensing;
  • Embodiments of the present disclosure also provide an unmanned driving test method, which can be used for the test end, that is, for the unmanned vehicle and the on-board controller.
  • the method includes the following operations:
  • wherein the traffic flow simulation platform includes the FLOWSIM traffic flow simulation platform, and the perception information is transmitted in real time based on the first operating state information acquired through sensing;
  • the embodiments of the present disclosure further propose an unmanned driving test system that integrates online traffic flow simulation and real road environment.
  • FIG. 2 is a schematic structural diagram of an unmanned driving test system that integrates online traffic flow simulation and real road environment provided by an embodiment of the present disclosure.
  • the unmanned driving test system based on the integration of online traffic flow simulation and real road environment includes: a traffic flow simulation platform (also called a traffic simulation platform), a physical vehicle to be tested, physical static and dynamic calibration objects, etc.
  • the components included in the unmanned driving test system are not limited to those listed above, and may also include other components or modules, which may be determined according to actual needs.
  • the traffic flow simulation platform is used to provide the surrounding environment data to the physical vehicle under test; the physical vehicle under test responds according to the surrounding environment data, and performs corresponding longitudinal movement and lateral movement.
  • the simulation test is interrupted, and the special situation includes at least one of the following situations:
  • the simulation system provides accurate surrounding environment data (position, attitude, distance, etc. of surrounding vehicles) to the vehicle under test, and the vehicle under test responds according to the surrounding environment data and performs corresponding longitudinal and lateral movements.
  • real-time position information is provided through the additionally installed GPS and INS (inertial navigation system), and the attitude information of the vehicle to be tested is obtained by the gyroscope.
  • Through the communication signal, the information at the vehicle end is synchronized to the simulation platform.
  • the simulation platform updates the position and state of the unmanned vehicle in the virtual environment through calculation, and the surrounding vehicles interact accordingly and generate the position, attitude, speed, acceleration, etc. of the next time step.
  • Embodiments of the present disclosure also propose a computer device, including a memory, a processor, and a computer program stored on the memory and operable on the processor.
  • when the processor executes the computer program, the above unmanned driving test method fusing online traffic flow simulation and the real road environment is implemented.
  • the computer device may be any type of device having processing and computing functions, such as a server, a terminal device, a personal computer, etc., which is not limited in the embodiments of the present disclosure.
  • first and second are used for descriptive purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features.
  • the features defined as “first” and “second” may explicitly or implicitly include at least one of these features.
  • “plurality” means at least two, such as two, three, etc., unless otherwise specifically defined.


Abstract

An unmanned driving test method, an unmanned driving test system, and a computer device are provided. The method includes: building, in a traffic flow simulation platform, a static road environment consistent with an unmanned driving test site, and constructing, according to test requirements, a virtual mirror of the unmanned vehicle under test and a virtual traffic scene, the traffic flow simulation platform including the FLOWSIM traffic flow simulation platform (S101); sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to the perception end of the unmanned vehicle under test based on first running state information acquired through sensing (S102); the unmanned vehicle under test making decisions according to the received perception information, driving in the unmanned driving test site based on the decision results, and feeding back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, the traffic flow simulation platform updating the position and state of the virtual mirror in the virtual traffic scene (S103). The simulation platform can be combined with real test scenarios to construct key scenarios for unmanned driving test capabilities, thereby replacing or accelerating the large-scale road testing originally required for unmanned driving.

Description

Unmanned driving test method, unmanned driving test system and computer device
This application claims priority to Chinese Patent Application No. 202111592703.4, filed on December 23, 2021; the entire disclosure of that Chinese patent application is incorporated herein by reference as part of this application.
Technical Field
Embodiments of the present disclosure relate to an unmanned driving test method, an unmanned driving test system and a computer device.
Background
With the development of technology, unmanned driving has emerged. Unmanned driving technology is a comprehensive technology drawing on many cutting-edge disciplines. It enables a vehicle to drive autonomously without relying entirely on a driver's control, makes intelligent travel solutions possible, and is a hot research field for next-generation vehicles.
Summary
At least some embodiments of the present disclosure provide an unmanned driving test method, including: building, in a traffic flow simulation platform, a static road environment consistent with an unmanned driving test site, and constructing, according to test requirements, a virtual mirror of an unmanned vehicle under test and a virtual traffic scene, where the traffic flow simulation platform includes the FLOWSIM traffic flow simulation platform; sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to the perception end of the unmanned vehicle under test based on first running state information acquired through sensing; the unmanned vehicle under test making decisions according to the received perception information, driving in the unmanned driving test site based on the decision results, and feeding back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, the traffic flow simulation platform updating the position and state of the virtual mirror in the virtual traffic scene.
For example, the method provided by some embodiments of the present disclosure further includes: the unmanned vehicle under test becoming a participant in the virtual traffic scene and interacting with the surrounding vehicles and environment in the virtual traffic scene, where the traffic flow simulation platform and the unmanned vehicle under test interact through bidirectional communication.
For example, the method provided by some embodiments of the present disclosure further includes: testing the decision and control logic of the unmanned vehicle under test, and/or testing the perception system of the unmanned vehicle.
For example, in the method provided by some embodiments of the present disclosure, testing the decision and control logic of the unmanned vehicle under test includes at least one of the following: testing the operation of the unmanned vehicle under test within its designed operating range, testing the ability to judge the takeover state of the driver and passengers, testing the ability to identify safety risks, and testing the response to user intervention requests.
For example, in the method provided by some embodiments of the present disclosure, testing the operation of the unmanned vehicle under test within its designed operating range includes: testing decision and control strategies under different driving conditions; the driving conditions include, but are not limited to: car following, lane changing, left and right turns, U-turns, stopping at intersections, reversing, vehicles merging into the ego lane, vehicles leaving the ego lane, and interacting with pedestrians and bicycles; the test builds the interaction between the unmanned vehicle under test and the simulation system through bidirectional communication technology, and performs a bidirectional mapping between the unmanned vehicle under test and its virtual mirror in the corresponding simulation environment; testing the ability to judge the takeover state of the driver and passengers includes identifying at least one of: takeover by the driver and passengers when the operating range is exceeded, takeover by the driver and passengers in emergency situations, and the driving state of the driver and passengers; testing the ability to identify safety risks includes at least one of the following: testing the driving strategy when there is a visual blind spot ahead, judging the risk of pedestrian intrusion, and judging left-turn/straight-through conflict situations; the response to user intervention requests is determined based on the takeover state of the driver and passengers and the currently identified safety risk.
For example, in the method provided by some embodiments of the present disclosure, testing the perception system of the unmanned vehicle includes: calibrating the capability of the perception system, and/or the accuracy of the cognitive system in identifying object types and evaluating motion states.
For example, the method provided by some embodiments of the present disclosure further includes: the testing of the perception system of the unmanned vehicle includes a static object perception capability test and/or a dynamic object perception capability test; the static object perception capability test includes comparing the running state information of static objects measured at the sensor end with static fixed objects that have been pre-measured, modeled and dimensioned in the simulation platform, to obtain the perception capability of the vehicle under test for static objects; the dynamic object perception capability test includes pre-measuring the shape, size and color information of objects, then equipping the vehicle under test and the dynamic objects respectively with a differential global positioning system, an inertial navigation system and a gyroscope to obtain the motion and information of the dynamic objects in the coordinate system of the vehicle under test, and comparing them with the results obtained by the unmanned driving perception-cognition system, thereby determining the effectiveness of the perception-cognition system of the vehicle under test.
For example, the method provided by some embodiments of the present disclosure further includes: the virtual sensors comparing and calibrating, via real-vehicle sensors, the perception of static and dynamic calibration objects in the unmanned driving test site against their real information.
For example, in the method provided by some embodiments of the present disclosure, the FLOWSIM traffic flow simulation platform is built based on data of a plurality of real-vehicle driving behaviors.
For example, in the method provided by some embodiments of the present disclosure, the method is used to realize the fusion of online traffic flow simulation and the real road environment.
For example, in the method provided by some embodiments of the present disclosure, the method is executed repeatedly until the function and level tests of the unmanned vehicle under test are completed.
At least some embodiments of the present disclosure further provide an unmanned driving test method, including: building, in a traffic flow simulation platform, a static road environment consistent with an unmanned driving test site, and constructing, according to test requirements, a virtual mirror of an unmanned vehicle under test and a virtual traffic scene, where the traffic flow simulation platform includes the FLOWSIM traffic flow simulation platform; sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to the perception end of the unmanned vehicle under test based on first running state information acquired through sensing; receiving in real time second running state information fed back by the unmanned vehicle under test to the traffic flow simulation platform, and updating the position and state of the virtual mirror in the virtual traffic scene.
At least some embodiments of the present disclosure further provide an unmanned driving test method, including: causing the perception end of an unmanned vehicle under test to receive perception information, where the perception information is sent in real time based on first running state information acquired by sensing real-time dynamic simulation results of a traffic flow simulation platform through virtual sensors configured for the virtual mirror of the unmanned vehicle under test in the traffic flow simulation platform, the traffic flow simulation platform including the FLOWSIM traffic flow simulation platform; making decisions according to the received perception information, controlling the unmanned vehicle under test to drive in an unmanned driving test site based on the decision results, and feeding back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, where a static road environment consistent with the unmanned driving test site is built in the traffic flow simulation platform and a virtual traffic scene is constructed according to test requirements.
At least some embodiments of the present disclosure further provide an unmanned driving test system for implementing the unmanned driving test method provided by any embodiment of the present disclosure, where the unmanned driving test system includes: a traffic flow simulation platform and a physical vehicle under test; the traffic flow simulation platform is used to provide surrounding environment data to the physical vehicle under test; the physical vehicle under test responds according to the surrounding environment data and performs corresponding longitudinal and lateral movements; when a special situation occurs with the physical vehicle under test, its driving authority is taken over and the simulation test is interrupted, the special situation including at least one of the following: the physical vehicle under test is about to face a real danger; an accident occurs in the simulation system and, when the simulation test is paused, the driver and passengers are required to take over; the simulation test ends and the driver and passengers are required to take over.
At least some embodiments of the present disclosure further provide a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the unmanned driving test method fusing online traffic flow simulation and the real road environment described above is implemented.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present disclosure will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings. To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present disclosure and are not a limitation of the present disclosure.
FIG. 1 is a schematic flowchart of an unmanned driving test method fusing online traffic flow simulation and the real road environment provided by an embodiment of the present disclosure.
FIG. 2 is a schematic structural diagram of an unmanned driving test system fusing online traffic flow simulation and the real road environment provided by an embodiment of the present disclosure.
FIG. 3 is a project block diagram of an unmanned driving test method fusing online traffic flow simulation and the real road environment provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present disclosure, and are not to be construed as limiting the present disclosure.
To make the objectives, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are some, not all, of the embodiments of the present disclosure. Based on the described embodiments, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the scope of protection of the present disclosure.
Unless otherwise defined, the technical or scientific terms used herein shall have the ordinary meanings understood by a person with ordinary skill in the field to which the present disclosure belongs. Words such as "comprise" or "include" used in the present disclosure mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections, and may include electrical connections, whether direct or indirect. "Up", "down", "left", "right" and the like are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
Conventional function and level testing of unmanned vehicles has many shortcomings, reflected in the following aspects.
In real-vehicle unmanned driving tests: a) high test cost — considerable funds are needed to prepare scenarios, including purchasing surrounding vehicles, preparing surrounding-vehicle models and pedestrian models, and organizing and scheduling multi-vehicle traffic; b) limited traffic scenarios — constrained by site equipment, safety and other factors, the corner cases of the traffic scenarios for testing unmanned vehicles are difficult to cover completely in the layout of a test site; c) slow iteration — replacing real scenarios in a real-vehicle test site consumes a great deal of time and money and slows down the test process; d) safety issues — in interaction tests between real unmanned vehicles and human-driven vehicles there is a collision hazard; e) demanding site perception requirements — real-vehicle testing requires the establishment, installation and maintenance of a full-coverage perception system together with data collection and analysis, which is costly.
In purely simulated unmanned driving tests: a) realism — the reliability of simulation test results is constrained by the realism of the simulation model; b) interpretability — in traffic simulation models, modules are often built on specific models; when an accident appears in a test scenario, it may be either a computation error of the simulation platform or a genuinely occurring accident, which raises interpretability problems when the simulation is used as an unmanned driving test standard; c) algorithm disclosure — testing an unmanned vehicle in a fully virtual environment requires writing the control algorithm logic of the unmanned vehicle into the simulation platform and thus disclosing its internal algorithm, raising trade secret or intellectual property concerns; d) insufficient reflection of road and physical conditions — unmanned driving tests must consider how different road conditions limit the normal driving and braking capabilities of the vehicle under test, and how the vehicle handles unexpected events such as potholes or tire blowouts needs to be simulated in the field.
Embodiments of the present disclosure provide an unmanned driving test method, an unmanned driving test system and a computer device. The unmanned driving test method and system fuse online traffic flow simulation with the real road environment, realize a virtual-reality-based way of testing unmanned driving, enable rapid testing and evaluation of autonomous driving capabilities and limitations, and facilitate the formation of unified test standards and procedures.
A first objective of the embodiments of the present disclosure is to propose an unmanned driving test method fusing online traffic flow simulation and the real road environment, which combines the safety, efficiency, speed and convenience of a simulation platform with the realism of real test scenarios and the ease of connecting the vehicle under test, and constructs key scenarios for unmanned driving test capabilities, thereby replacing or accelerating the large-scale road testing originally required for unmanned driving.
The unmanned driving test method fusing online traffic flow simulation and the real road environment proposed by the embodiments of the present disclosure combines the safety, efficiency, speed and convenience of a simulation platform with the realism of real test scenarios and the ease of connecting the vehicle under test. Relying on a highly realistic traffic flow simulation platform, it uses virtual reality technology to build a bridge between unmanned driving perception and the simulated scene, and uses the platform's capacity for rapid iteration and precise problem localization and tracing to construct key scenarios for unmanned driving test capabilities, thereby replacing or accelerating the large-scale road testing originally required for unmanned driving. Based on the test results of numerous vehicles, a problem-scenario dataset with different feature labels is formed, which helps to rapidly test and evaluate autonomous driving capabilities and limitations and facilitates the formation of unified test standards and procedures.
The unmanned driving test method and system fusing online traffic flow simulation and the real road environment provided by the embodiments of the present disclosure are described below with reference to the accompanying drawings.
FIG. 1 is a schematic flowchart of an unmanned driving test method based on the fusion of online traffic flow simulation and the real road environment provided by an embodiment of the present disclosure.
As shown in FIG. 1, the unmanned driving test method fusing online traffic flow simulation and the real road environment includes the following steps:
S101: building, in a traffic flow simulation platform, a static road environment consistent with an unmanned driving test site (for example, a closed unmanned driving test site), and constructing, according to test requirements, a virtual mirror of the unmanned vehicle under test and a virtual traffic scene, where the traffic flow simulation platform includes the FLOWSIM traffic flow simulation platform;
S102: sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to the perception end of the unmanned vehicle under test based on first running state information acquired through sensing;
S103: the unmanned vehicle under test makes decisions according to the received perception information, drives in the unmanned driving test site based on the decision results, and feeds back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, and the traffic flow simulation platform updates the position and state of the virtual mirror in the virtual traffic scene.
For example, in some examples, step S101 may specifically include: in the FLOWSIM traffic flow simulation platform built on a large amount of real-vehicle driving behavior data, building a static road environment consistent with a closed unmanned driving test site, and at the same time constructing, according to test requirements, a virtual mirror of the unmanned vehicle under test and a virtual traffic scene. Step S102 may specifically include: sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to the perception end of the unmanned vehicle under test based on the acquired first running state information. Step S103 may specifically include: controlling the unmanned vehicle under test to drive in the closed unmanned driving test site based on the perception information, feeding back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, and updating the position and state of the virtual mirror in the virtual traffic scene.
For example, in some examples, the method may further include step S104.
S104: repeating steps S101 to S103 in a loop until the function and level tests of the unmanned vehicle under test are completed.
For example, the FLOWSIM traffic flow simulation platform is built based on data of a plurality of real-vehicle driving behaviors, that is, on a large amount of real-vehicle driving behavior data. In the embodiments of the present disclosure, the traffic flow simulation platform is not limited to the FLOWSIM traffic flow simulation platform; other types of simulation platform may also be used, as determined by actual needs, and the embodiments of the present disclosure do not limit this. The method is used to realize the fusion of online traffic flow simulation and the real road environment. By executing the method repeatedly until the function and level tests of the unmanned vehicle under test are completed, all tests of the unmanned vehicle can be accomplished.
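The S101-S103 cycle described above is, in effect, a real-time co-simulation loop: the platform is sensed, the real vehicle decides and drives, and its state is fed back to keep the virtual mirror synchronized. The following is a minimal, self-contained Python sketch of that loop; all class names, the toy car-following policy, the 0.1 s time step and the 20 m target headway are illustrative assumptions, not part of FLOWSIM or the disclosed implementation.

```python
from dataclasses import dataclass, replace

@dataclass
class VehicleState:
    x: float      # position along the test track (m)
    speed: float  # m/s

class SimPlatform:
    """Stand-in for the traffic flow simulation platform (hypothetical API)."""
    def __init__(self):
        self.mirror = VehicleState(x=0.0, speed=0.0)   # virtual mirror of the real vehicle
        self.lead = VehicleState(x=50.0, speed=10.0)   # one simulated surrounding vehicle

    def perceive(self):
        # S102: a virtual sensor reports the gap to the simulated lead vehicle
        return {"gap": self.lead.x - self.mirror.x, "lead_speed": self.lead.speed}

    def update(self, state, dt):
        # S103 (platform side): sync the mirror to the real vehicle, advance simulated traffic
        self.mirror = state
        self.lead = replace(self.lead, x=self.lead.x + self.lead.speed * dt)

class VehicleUnderTest:
    """Stand-in for the real vehicle's decision logic (hypothetical)."""
    def __init__(self):
        self.state = VehicleState(x=0.0, speed=0.0)

    def step(self, perception, dt):
        # toy car-following policy: accelerate while the gap exceeds a 20 m target headway
        accel = 0.5 if perception["gap"] > 20.0 else -0.5
        speed = max(0.0, self.state.speed + accel * dt)
        self.state = VehicleState(x=self.state.x + speed * dt, speed=speed)
        return self.state

def run_loop(platform, vehicle, steps, dt=0.1):
    """Closed S101-S103 loop: perceive -> decide/drive -> feed back."""
    for _ in range(steps):
        perception = platform.perceive()      # S102
        state = vehicle.step(perception, dt)  # S103 (vehicle side)
        platform.update(state, dt)            # S103 (platform side)
    return platform
```

In a real deployment the `perceive`/`update` exchange would run over the bidirectional communication link, with the vehicle's pose supplied by its GPS/INS rather than computed locally.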
The unmanned driving test method fusing online traffic flow simulation and the real road environment proposed by the embodiments of the present disclosure combines the safety, efficiency, speed and convenience of a simulation platform with the realism of real test scenarios and the ease of connecting the vehicle under test. Relying on a highly realistic traffic flow simulation platform, it uses virtual reality technology to build a bridge between unmanned driving perception and the simulated scene, and uses the platform's capacity for rapid iteration and precise problem localization and tracing to construct key scenarios for unmanned driving test capabilities, thereby replacing or accelerating the large-scale road testing originally required for unmanned driving. Based on the test results of numerous vehicles, a problem-scenario dataset with different feature labels is formed, which helps to rapidly test and evaluate autonomous driving capabilities and limitations and facilitates the formation of unified test standards and procedures. Its project block diagram is shown in FIG. 3.
Further, an embodiment of the present disclosure further includes:
the unmanned vehicle under test becoming a participant in the virtual traffic scene and interacting with the surrounding vehicles and environment in the virtual traffic scene. For example, the traffic flow simulation platform and the unmanned vehicle under test interact through bidirectional low-latency communication. Of course, the embodiments of the present disclosure are not limited thereto; any applicable bidirectional communication mode may be used between the traffic flow simulation platform and the unmanned vehicle under test, as determined by actual needs.
Further, an embodiment of the present disclosure further includes:
testing the decision and control logic of the unmanned vehicle under test, and/or testing the perception system of the unmanned vehicle.
The decision and control logic test mainly includes testing the control effect of the vehicle under different conditions such as going straight, changing lanes, and turning left or right; testing the decision and control logic and risk control under different levels of completeness of perception information; and testing the judgment of whether the driver and passengers are fit to take over driving. The test of the unmanned driving perception system can be carried out separately in advance or simultaneously with the test of the decision and control logic. When carried out simultaneously, the information from the perception end of the vehicle under test is transmitted to the simulation platform and compared with static site objects whose position, form and other information were measured in advance, and with information about the dynamic models measured by additionally installed perception devices, to judge the capability of the perception system. The simulation platform can be evaluated by installing sensors on real vehicles in real human-driven/unmanned driving scenarios and continuously verifying and calibrating microscopic traffic model parameters such as time headway, driver reaction time, vehicle braking distance and acceleration; at the same time, the driving style library in the simulation module can be enriched, producing a virtual reality closer to real scenarios.
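One of the microscopic parameters mentioned above, time headway, can be computed directly from logged trajectories of an instrumented lead/follower vehicle pair and then compared against the platform's simulated values during calibration. A minimal sketch (the function names and the per-timestep sample format are illustrative assumptions):

```python
def time_headways(lead_positions, follow_positions, follow_speeds):
    """Per-timestep time headway: bumper gap divided by follower speed (s).

    Inputs are parallel per-timestep samples from instrumented real vehicles;
    samples where the follower is stopped are skipped to avoid division by zero.
    """
    return [
        (lp - fp) / fs
        for lp, fp, fs in zip(lead_positions, follow_positions, follow_speeds)
        if fs > 0.0
    ]

def mean(xs):
    """Arithmetic mean of a non-empty sequence."""
    return sum(xs) / len(xs)
```

The mean (or distribution) of the measured headways would be compared with the corresponding statistic of the simulated traffic to drive the calibration loop.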
Further, in an embodiment of the present disclosure, testing the decision and control logic of the unmanned vehicle under test includes:
testing the operation of the unmanned vehicle under test within its designed operating range, testing the ability to judge the takeover state of the driver and passengers, testing the ability to identify safety risks, and testing the response to user intervention requests. It should be noted that the test of the decision and control logic may include at least one of the above tests, that is, one or more of them.
Further, in an embodiment of the present disclosure, testing the operation of the unmanned vehicle under test within its designed operating range includes: testing decision and control strategies under different driving conditions. The driving conditions include, but are not limited to: car following, lane changing, left and right turns, U-turns, stopping at intersections, reversing, vehicles merging into the ego lane, vehicles leaving the ego lane, and interacting with pedestrians and bicycles. It should be noted that the specific examples of driving conditions are not limited to those listed above and may also be other types of driving states, as determined by actual needs; the embodiments of the present disclosure do not limit this. The test builds the interaction between the unmanned vehicle under test and the simulation system through low-latency bidirectional communication technology or other applicable bidirectional communication technology, and performs a bidirectional mapping between the unmanned vehicle under test and its virtual mirror in the corresponding simulation environment; the communication technology includes state-of-the-art communication technology such as 5G communication.
Testing the ability to judge the takeover state of the driver and passengers includes identifying at least one of: takeover by the driver and passengers when the operating range is exceeded, takeover by the driver and passengers in emergency situations, and the driving state of the driver and passengers.
Testing the ability to identify safety risks includes: the driving strategy when there is a visual blind spot ahead, judging the risk of pedestrian intrusion, and judging left-turn/straight-through conflict situations. It should be noted that the test of the ability to identify safety risks may include at least one of the above tests, that is, one or more of them.
The response to user intervention requests is determined based on the takeover state of the driver and passengers and the currently identified safety risk.
The decision and control logic test process of the unmanned vehicle under test is developed on the FLOWSIM simulation software. In this traffic flow simulation, the behavior of human-driven vehicles is based on many years of collection and extraction of real human driving behavior characteristics; analogous to the fuzzy decision-making of human drivers, it adopts decision and control logic based on fuzzy mathematics, and each vehicle carries the traits of an independent driver, mapping the characteristics of real-world human-driven vehicles well into the simulation system. Simulation models generated from real vehicles interact with the unmanned vehicle to produce realistic test scenarios. At the same time, based on the perception range, accuracy and effectiveness of the vehicle's environmental perception under different conditions of weather, lighting, static background and so on obtained from the perception system test, the control system must be tested for its ability to recognize and judge the limitations of its own perception capability, and for whether it can adjust its decision and control logic when perception is impaired or occluded.
Further, in an embodiment of the present disclosure, testing the perception system of the unmanned vehicle includes:
calibrating the capability of the perception system, and/or the accuracy of the cognitive system in identifying object types and evaluating motion states.
The test of the unmanned driving perception system consists mainly of three elements: calibration of perception system capability (whether the presence of an object can be sensed), accuracy of the cognitive system in identifying object types (whether the object category can be accurately identified), and accuracy in evaluating motion states (whether the object's position, distance, speed, acceleration, motion angle and other information are judged accurately). The calibration results are used to refine the virtual perception module of the simulation platform so that the data transmitted by virtual perception to the vehicle under test is more realistic.
Further, in an embodiment of the present disclosure, testing the perception system of the unmanned vehicle includes a static object perception capability test and/or a dynamic object perception capability test.
The static object perception capability test includes comparing the running state information of static objects measured at the sensor end with static fixed objects that have been pre-measured, modeled and dimensioned in the simulation platform, to obtain the perception capability of the vehicle under test for static objects. The dynamic object perception capability test includes pre-measuring the shape, size and color information of objects, then equipping the vehicle under test and the dynamic objects respectively with a high-precision differential global positioning system (GPS), an inertial navigation system (INS) and a gyroscope to obtain the motion and information of the dynamic objects in the coordinate system of the vehicle under test, and comparing them with the results obtained by the unmanned driving perception-cognition system, thereby determining the effectiveness of the perception-cognition system of the vehicle under test.
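The dynamic-object comparison above reduces to matching, object by object, the perception system's output against the GPS/INS ground truth and checking type, position and speed agreement. A minimal sketch, assuming a simple dict-based record format and illustrative tolerances (0.5 m, 0.3 m/s) that are not specified in the source:

```python
import math

def position_error(detected, truth):
    """Euclidean error between a detected object position and ground truth (m)."""
    return math.hypot(detected[0] - truth[0], detected[1] - truth[1])

def evaluate_perception(detections, ground_truth, pos_tol=0.5, speed_tol=0.3):
    """Compare perception output against instrumented ground truth, object by object.

    detections / ground_truth: dicts keyed by object id, each value of the form
    {"pos": (x, y), "speed": v, "type": str} in the vehicle's coordinate system.
    Tolerances are illustrative, not values from the disclosure.
    """
    report = {}
    for obj_id, truth in ground_truth.items():
        det = detections.get(obj_id)
        if det is None:
            report[obj_id] = {"detected": False}  # perception miss
            continue
        report[obj_id] = {
            "detected": True,
            "type_correct": det["type"] == truth["type"],
            "pos_ok": position_error(det["pos"], truth["pos"]) <= pos_tol,
            "speed_ok": abs(det["speed"] - truth["speed"]) <= speed_tol,
        }
    return report
```

The three flags map directly onto the three test elements named earlier: existence (detected), type identification, and motion-state estimation.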
Further, an embodiment of the present disclosure further includes:
the virtual sensors comparing and calibrating, via real-vehicle sensors, the perception of static and dynamic calibration objects in the unmanned driving test site against their real information.
Further, since the test effect of this test method depends to a large extent on how well the test platform simulates real road traffic scenarios, in order to ensure the authenticity and reliability of the test results, the fidelity of the test platform needs to be evaluated and iteratively optimized. Evaluating and iterating the fidelity of the test platform includes at least one of the following: parameter calibration for human driving, parameter calibration for unmanned driving, and parameter calibration for pedestrians and non-motorized vehicles.
Parameter calibration for human driving includes: comparing and dynamically calibrating the interactive motion between a limited number of human-driven vehicles against the vehicle motion interaction of the simulation platform, where the interactive motion includes starting from standstill, straight-line cruising, lane changing and overtaking, stopping at intersections, starting at intersections, and emergency braking.
Parameter calibration for unmanned driving includes: learning the driving behavior of unmanned vehicles and extracting characteristic parameters, storing the collected unmanned driving behavior data in an unmanned vehicle driving behavior library, and then calling the simulation module of unmanned vehicles when testing unmanned vehicles.
Parameter calibration for pedestrians and non-motorized vehicles includes: extracting, collecting and modeling the movement and travel characteristics of different categories of pedestrians and different categories of non-motorized vehicles so that they are integrated into the traffic flow simulation platform system.
The unmanned driving test method fusing online traffic flow simulation and the real road environment proposed by the embodiments of the present disclosure has the following advantages. First, it greatly facilitates the conduct of unmanned driving tests: by combining online simulation with real-vehicle testing, the vehicle under test can be connected remotely and provided with a series of scenario tests without regard to the location of or distance to the test site. Second, in the interaction between the real unmanned vehicle and online simulated vehicles, a collision does not constitute a traffic accident, greatly improving test safety. Third, the method reduces the transportation cost of physical human-driven vehicles and the production cost of full-scale physical models, and avoids the construction, organization and scheduling of complex real test scenarios. Fourth, the rapid aggregation of large numbers of simulation-based test scenarios and precise problem localization and tracing can form a unified, standardized test standard, which may become the unified test standard for unmanned driving in the future. Fifth, unmanned vehicle testing consists of several modules such as perception, decision-making, planning and motion control: the simulation platform provides the perception environment, the control end of the unmanned vehicle provides decision and planning results, and real-vehicle unmanned driving provides the test of motion control effects; high-information-content signal transmission achieves physical separation and virtual connection of the different functional partitions.
Embodiments of the present disclosure further provide an unmanned driving test method that can be used at the platform end, that is, on the computer or server running the traffic flow simulation platform.
The method includes the following operations:
building, in a traffic flow simulation platform, a static road environment consistent with an unmanned driving test site, and constructing, according to test requirements, a virtual mirror of the unmanned vehicle under test and a virtual traffic scene, where the traffic flow simulation platform includes the FLOWSIM traffic flow simulation platform;
sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to the perception end of the unmanned vehicle under test based on first running state information acquired through sensing;
receiving in real time second running state information fed back by the unmanned vehicle under test to the traffic flow simulation platform, and updating the position and state of the virtual mirror in the virtual traffic scene.
Embodiments of the present disclosure further provide an unmanned driving test method that can be used at the test end, that is, on the unmanned vehicle and its on-board controller.
The method includes the following operations:
causing the perception end of the unmanned vehicle under test to receive perception information, where the perception information is sent in real time based on first running state information acquired by sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror of the unmanned vehicle under test in the traffic flow simulation platform, the traffic flow simulation platform including the FLOWSIM traffic flow simulation platform;
making decisions according to the received perception information, controlling the unmanned vehicle under test to drive in the unmanned driving test site based on the decision results, and feeding back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, where a static road environment consistent with the unmanned driving test site is built in the traffic flow simulation platform and a virtual traffic scene is constructed according to test requirements.
To implement the above embodiments, the embodiments of the present disclosure further propose an unmanned driving test system fusing online traffic flow simulation and the real road environment.
FIG. 2 is a schematic structural diagram of an unmanned driving test system fusing online traffic flow simulation and the real road environment provided by an embodiment of the present disclosure.
As shown in FIG. 2, the unmanned driving test system based on the fusion of online traffic flow simulation and the real road environment includes: a traffic flow simulation platform (which may also be called a traffic simulation platform), a physical vehicle under test, physical static and dynamic calibration objects, and the like. It should be noted that in the embodiments of the present disclosure, the components included in the unmanned driving test system are not limited to those listed above and may also include other parts or components, as determined by actual needs.
The traffic flow simulation platform is used to provide surrounding environment data to the physical vehicle under test; the physical vehicle under test responds according to the surrounding environment data and performs corresponding longitudinal and lateral movements.
When a special situation occurs with the physical vehicle under test, its driving authority is taken over and the simulation test is interrupted; the special situation includes at least one of the following situations:
the physical vehicle under test is about to face a real danger;
an accident occurs in the simulation system and, when the simulation test is paused, the driver and passengers are required to take over;
the simulation test ends and the driver and passengers are required to take over.
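The three interrupt conditions listed above can be checked mechanically at each step of the test, with real danger taking precedence over a simulated accident and a simulated accident over normal test completion. A minimal sketch; the flag names and the priority ordering beyond "real danger first" are illustrative assumptions:

```python
from enum import Enum, auto

class TakeoverReason(Enum):
    REAL_DANGER = auto()    # the physical vehicle is about to face a real danger
    SIM_ACCIDENT = auto()   # an accident occurred in the simulation; test paused
    TEST_FINISHED = auto()  # the simulation test has ended

def takeover_reason(real_danger: bool, sim_accident: bool, test_finished: bool):
    """Return why the driver must take over, or None to keep the test running."""
    if real_danger:          # physical safety always checked first
        return TakeoverReason.REAL_DANGER
    if sim_accident:
        return TakeoverReason.SIM_ACCIDENT
    if test_finished:
        return TakeoverReason.TEST_FINISHED
    return None
```

When the function returns a non-None reason, the test harness would hand driving authority back to the safety driver and suspend the simulation.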
Specifically, the simulation system provides accurate surrounding environment data (the position, attitude, distance, etc. of surrounding vehicles) to the vehicle under test, and the vehicle under test responds according to the surrounding environment data and performs corresponding longitudinal and lateral movements. At the same time, real-time position information is provided by the additionally installed GPS and INS (inertial navigation system), and the attitude information of the vehicle under test is obtained by a gyroscope. Through communication signals, the information at the vehicle end is synchronized to the simulation platform; the simulation platform updates the position and state of the unmanned vehicle in the virtual environment through computation, and the surrounding vehicles interact accordingly and generate the position, attitude, speed, acceleration and other information for the next time step, which is transmitted back to the unmanned vehicle through communication signals as its real-time perception data — similar to putting virtual reality glasses on the unmanned vehicle, with simulation and field testing proceeding synchronously along two parallel lines. At the same time, based on the automation level ratings for intelligent driving in Table 1, the test method monitors, when different risk situations arise, how the driving automation system evaluates and judges the risk, how it requests takeover by the driver and passengers, and how it judges whether the driver and passengers are fit to take over.
Through this system it is expected to realize the capability tests for each of the levels 0-5 in the unmanned driving classification standard, and to provide a reference for rating the vehicles under test.
Table 1
(The table of automation level ratings for intelligent driving is reproduced as images in the original publication.)
Embodiments of the present disclosure further propose a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the unmanned driving test method fusing online traffic flow simulation and the real road environment described above is implemented. The computer device may be any type of device with processing and computing functions, such as a server, a terminal device or a personal computer, which is not limited by the embodiments of the present disclosure.
In the description of this specification, descriptions referring to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that the specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present disclosure. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where there is no mutual contradiction, those skilled in the art may combine the different embodiments or examples described in this specification and the features of the different embodiments or examples.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the embodiments of the present disclosure, "a plurality of" means at least two, such as two or three, unless otherwise expressly and specifically defined.
Although embodiments of the present disclosure have been shown and described above, it can be understood that the above embodiments are exemplary and cannot be understood as limiting the present disclosure; a person of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present disclosure.

Claims (15)

  1. An unmanned driving test method, comprising:
    building, in a traffic flow simulation platform, a static road environment consistent with an unmanned driving test site, and constructing, according to test requirements, a virtual mirror of an unmanned vehicle under test and a virtual traffic scene, wherein the traffic flow simulation platform comprises the FLOWSIM traffic flow simulation platform;
    sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to a perception end of the unmanned vehicle under test based on first running state information acquired through sensing;
    the unmanned vehicle under test making decisions according to the received perception information, driving in the unmanned driving test site based on the decision results, and feeding back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, the traffic flow simulation platform updating the position and state of the virtual mirror in the virtual traffic scene.
  2. The method according to claim 1, further comprising:
    the unmanned vehicle under test becoming a participant in the virtual traffic scene and interacting with the surrounding vehicles and environment in the virtual traffic scene,
    wherein the traffic flow simulation platform and the unmanned vehicle under test interact through bidirectional communication.
  3. The method according to claim 1 or 2, further comprising:
    testing decision and control logic of the unmanned vehicle under test, and/or testing a perception system of the unmanned vehicle.
  4. The method according to claim 3, wherein testing the decision and control logic of the unmanned vehicle under test comprises at least one of the following:
    testing operation of the unmanned vehicle under test within its designed operating range, testing the ability to judge the takeover state of the driver and passengers, testing the ability to identify safety risks, and testing the response to user intervention requests.
  5. The method according to claim 4, wherein
    testing operation of the unmanned vehicle under test within its designed operating range comprises: testing decision and control strategies under different driving conditions;
    the driving conditions comprise: car following, lane changing, left and right turns, U-turns, stopping at intersections, reversing, vehicles merging into the ego lane, vehicles leaving the ego lane, and interacting with pedestrians and bicycles;
    wherein the test builds interaction between the unmanned vehicle under test and the simulation system through bidirectional communication technology, and performs a bidirectional mapping between the unmanned vehicle under test and the virtual mirror in the corresponding simulation environment;
    testing the ability to judge the takeover state of the driver and passengers comprises identifying at least one of: takeover by the driver and passengers when the operating range is exceeded, takeover by the driver and passengers in emergency situations, and the driving state of the driver and passengers;
    testing the ability to identify safety risks comprises at least one of the following: testing the driving strategy when there is a visual blind spot ahead, judging the risk of pedestrian intrusion, and judging left-turn/straight-through conflict situations;
    the response to user intervention requests is determined based on the takeover state of the driver and passengers and the currently identified safety risk.
  6. The method according to any one of claims 3-5, wherein testing the perception system of the unmanned vehicle comprises: calibrating the capability of the perception system, and/or the accuracy of the cognitive system in identifying object types and evaluating motion states.
  7. The method according to claim 6, further comprising:
    the testing of the perception system of the unmanned vehicle comprising a static object perception capability test and/or a dynamic object perception capability test;
    wherein the static object perception capability test comprises comparing running state information of static objects measured at the sensor end with static fixed objects that have been pre-measured, modeled and dimensioned in the simulation platform, to obtain the perception capability of the vehicle under test for static objects;
    the dynamic object perception capability test comprises pre-measuring the shape, size and color information of objects, then equipping the vehicle under test and the dynamic objects respectively with a differential global positioning system, an inertial navigation system and a gyroscope to obtain the motion and information of the dynamic objects in the coordinate system of the vehicle under test, and comparing them with results obtained by the unmanned driving perception-cognition system, thereby determining the effectiveness of the perception-cognition system of the vehicle under test.
  8. The method according to any one of claims 1-7, further comprising:
    the virtual sensors comparing and calibrating, via real-vehicle sensors, the perception of static and dynamic calibration objects in the unmanned driving test site against their real information.
  9. The method according to any one of claims 1-8, wherein the FLOWSIM traffic flow simulation platform is built based on data of a plurality of real-vehicle driving behaviors.
  10. The method according to any one of claims 1-9, wherein the method is used to realize fusion of online traffic flow simulation and a real road environment.
  11. The method according to any one of claims 1-10, wherein the method is executed repeatedly until function and level tests of the unmanned vehicle under test are completed.
  12. An unmanned driving test method, comprising:
    building, in a traffic flow simulation platform, a static road environment consistent with an unmanned driving test site, and constructing, according to test requirements, a virtual mirror of an unmanned vehicle under test and a virtual traffic scene, wherein the traffic flow simulation platform comprises the FLOWSIM traffic flow simulation platform;
    sensing real-time dynamic simulation results of the traffic flow simulation platform through virtual sensors configured for the virtual mirror, and returning perception information in real time to a perception end of the unmanned vehicle under test based on first running state information acquired through sensing;
    receiving in real time second running state information fed back by the unmanned vehicle under test to the traffic flow simulation platform, and updating the position and state of the virtual mirror in the virtual traffic scene.
  13. An unmanned driving test method, comprising:
    causing a perception end of an unmanned vehicle under test to receive perception information, wherein the perception information is sent in real time based on first running state information acquired by sensing real-time dynamic simulation results of a traffic flow simulation platform through virtual sensors configured for a virtual mirror of the unmanned vehicle under test in the traffic flow simulation platform, the traffic flow simulation platform comprising the FLOWSIM traffic flow simulation platform;
    making decisions according to the received perception information, controlling the unmanned vehicle under test to drive in an unmanned driving test site based on the decision results, and feeding back second running state information of the unmanned vehicle under test to the traffic flow simulation platform in real time, wherein a static road environment consistent with the unmanned driving test site is built in the traffic flow simulation platform and a virtual traffic scene is constructed according to test requirements.
  14. An unmanned driving test system for implementing the unmanned driving test method according to any one of claims 1-11, wherein the unmanned driving test system comprises: a traffic flow simulation platform and a physical vehicle under test;
    wherein the traffic flow simulation platform is used to provide surrounding environment data to the physical vehicle under test; the physical vehicle under test responds according to the surrounding environment data and performs corresponding longitudinal and lateral movements; wherein,
    when a special situation occurs with the physical vehicle under test, its driving authority is taken over and the simulation test is interrupted, the special situation comprising at least one of the following:
    the physical vehicle under test is about to face a real danger;
    an accident occurs in the simulation system and, when the simulation test is paused, the driver and passengers are required to take over;
    the simulation test ends and the driver and passengers are required to take over.
  15. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein when the processor executes the computer program, the method according to any one of claims 1-13 is implemented.
PCT/CN2022/134343 2021-12-23 2022-11-25 Unmanned driving test method, unmanned driving test system and computer device WO2023116344A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111592703.4A CN114326667B (zh) 2021-12-23 2021-12-23 Unmanned driving test method fusing online traffic flow simulation and real road environment
CN202111592703.4 2021-12-23

Publications (1)

Publication Number Publication Date
WO2023116344A1 true WO2023116344A1 (zh) 2023-06-29

Family

ID=81054905

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/134343 WO2023116344A1 (zh) 2021-12-23 2022-11-25 Unmanned driving test method, unmanned driving test system and computer device

Country Status (2)

Country Link
CN (1) CN114326667B (zh)
WO (1) WO2023116344A1 (zh)







Also Published As

Publication number Publication date
CN114326667A (zh) 2022-04-12
CN114326667B (zh) 2023-08-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22909659

Country of ref document: EP

Kind code of ref document: A1