WO2021056556A1 - Vehicle-mounted autonomous driving test system and test method - Google Patents


Info

Publication number
WO2021056556A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
virtual
vehicle
module
sensor
Prior art date
Application number
PCT/CN2019/109155
Other languages
French (fr)
Chinese (zh)
Inventor
张宇 (Zhang Yu)
石磊 (Shi Lei)
林伟 (Lin Wei)
冯威 (Feng Wei)
刘晓彤 (Liu Xiaotong)
Original Assignee
驭势科技(北京)有限公司 (UISEE Technology (Beijing) Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 驭势科技(北京)有限公司 (UISEE Technology (Beijing) Co., Ltd.)
Priority to PCT/CN2019/109155
Priority to CN201980002545.7A (CN110785718B)
Publication of WO2021056556A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00 Testing or monitoring of control systems or parts thereof
    • G05B23/02 Electric testing or monitoring
    • G05B23/0205 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0208 Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults, characterized by the configuration of the monitoring system
    • G05B23/0213 Modular or universal configuration of the monitoring system, e.g. monitoring system having modules that may be combined to build a monitoring program; monitoring system that can be applied to legacy systems; adaptable monitoring system; using different communication protocols
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 PC systems
    • G05B2219/24 PC safety
    • G05B2219/24065 Real-time diagnostics

Definitions

  • This application relates to the field of automotive electronics technology, and in particular to a vehicle-mounted automatic driving test system and method.
  • The information perception system, decision-making system, and vehicle execution system must coordinate perfectly to complete the entire autonomous driving process.
  • If the autonomous vehicle is allowed to drive directly on actual roads, this not only brings safety hazards, but also requires preparing various props and scenes, such as dummies, fake cars, and fake animals, in order to simulate different environments.
  • The present application discloses a vehicle-mounted automatic driving test system, including a virtual scene superimposing module. In a working state, the virtual scene superimposing module obtains driving data of an autonomous vehicle; obtains external environment data of the autonomous vehicle; adds virtual data to the external environment data according to the driving data of the autonomous vehicle to generate modified external environment data; and transmits the modified external environment data to the on-board control module of the autonomous vehicle.
  • the present application provides an automatic driving vehicle test method, which is applied to a test device of an automatic driving vehicle.
  • The automatic driving vehicle test method includes: acquiring driving data of the autonomous vehicle; acquiring external environment data of the autonomous vehicle; adding virtual data to the external environment data according to the driving data of the autonomous vehicle to generate modified external environment data; and transmitting the modified external environment data to the on-board control module of the autonomous vehicle.
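The four steps above can be sketched in code. This is a minimal illustrative sketch, not the patented implementation; all function names, data shapes, and the offset-based placement of virtual objects are assumptions made for the example.

```python
def add_virtual_data(driving_data, environment_data, virtual_objects):
    """Step 3: merge virtual objects into the real environment data,
    positioned relative to the vehicle's current pose (hypothetical scheme)."""
    modified = dict(environment_data)
    x, y = driving_data["position"]
    modified["objects"] = environment_data.get("objects", []) + [
        {"type": obj["type"], "position": (x + obj["dx"], y + obj["dy"])}
        for obj in virtual_objects
    ]
    return modified

def run_test_cycle(get_driving_data, get_environment_data, virtual_objects, send_to_control):
    """One cycle of the four-step test method described above."""
    driving = get_driving_data()        # Step 1: driving data (pose, speed)
    env = get_environment_data()        # Step 2: external environment data
    modified = add_virtual_data(driving, env, virtual_objects)  # Step 3: add virtual data
    send_to_control(modified)           # Step 4: transmit to the on-board control module
    return modified
```

In this sketch the control module cannot distinguish the injected pedestrian from a real one, which is the point of the test method.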
  • The invention in this application places relatively high demands on network latency and data transmission speed.
  • The technology disclosed in the present invention can be applied in a 4G network environment, but is better suited to a 5G network environment.
  • Fig. 1 is a block diagram of an exemplary vehicle with automatic driving capability in the present application
  • Figs. 2A-2D are schematic diagrams of a coupling mode of a virtual scene overlay module and a visual sensor in this application;
  • Fig. 4 is a schematic diagram of a modified external environment data scenario in this application.
  • This application discloses a vehicle-mounted automatic driving test system and method.
  • A virtual scene can be added to external environment data through a virtual scene overlay module to generate modified external environment data, which is used to test and verify the driving performance and road safety of an autonomous vehicle, providing guidance for autonomous-vehicle technology research and development, road test permits, and product access certification.
  • The technology involved in this application places relatively high demands on network latency and data transmission speed.
  • The technology disclosed in this application can be applied in a 4G network environment, but is better suited to a 5G network environment.
  • For 4G, the data transmission rate is on the order of 100 Mbps, the latency is 30-50 ms, the maximum number of connections per square kilometer is on the order of 10,000, and the supported mobility is about 350 km/h.
  • For 5G, the transmission rate is on the order of 10 Gbps, the latency is about 1 ms, the maximum number of connections per square kilometer is on the order of millions, and the supported mobility is about 500 km/h.
  • 5G thus offers a higher transmission rate, shorter latency, more connections per square kilometer, and higher speed tolerance. Another change in 5G is the change in the transmission path.
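A rough back-of-envelope check illustrates why the transmission rate matters for this system: an uncompressed camera stream alone exceeds the 4G rate cited above but fits comfortably within the 5G rate. The resolution, frame rate, and bit depth below are illustrative assumptions, not values from the application.

```python
def fits_link(required_bps, link_rate_bps):
    """True if a sensor stream's bit rate fits within a network link's rate."""
    return required_bps <= link_rate_bps

# Hypothetical uncompressed 1920x1080 camera stream at 30 fps, 24 bits per pixel.
stream_bps = 1920 * 1080 * 30 * 24   # = 1,492,992,000 bits/s, roughly 1.5 Gbps

assert not fits_link(stream_bps, 100e6)  # exceeds a ~100 Mbps 4G link
assert fits_link(stream_bps, 10e9)       # fits within a ~10 Gbps 5G link
```

In practice compressed streams are far smaller, but the margin explains the application's preference for 5G.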
  • The flowcharts used in this application show the operations implemented by the system according to some embodiments in this application. It should be clearly understood that the operations of a flowchart can be implemented out of order; for example, the operations can be performed in reverse order or simultaneously. In addition, one or more other operations can be added to a flowchart, and one or more operations can be removed from a flowchart.
  • Although the circuits and methods in this application mainly describe the vehicle-mounted automatic driving test system and method, it should be understood that this is only an exemplary embodiment.
  • the device and method of the present application can also be applied to other types of systems.
  • the system or method of the present application can be applied to driving tests of transportation systems in different environments, including land, sea, aerospace, etc., or any combination thereof.
  • the autonomous vehicles of the transportation system may include taxis, private cars, trailers, buses, trains, bullet trains, high-speed railways, subways, ships, airplanes, spacecraft, hot air balloons, unmanned vehicles, etc., or any combination thereof.
  • the system or method may find applications in, for example, logistics warehouses, military affairs (such as pilots' flight simulations or automated flight tests of drones).
  • Fig. 1 is a block diagram of an automated driving test system disclosed according to some embodiments.
  • the system includes the whole or any part of an exemplary vehicle 200 having automatic driving capabilities and automatic driving test capabilities.
  • The vehicle 200 with automatic driving capability may include a control module 160, a sensor module 150, a memory 140, an instruction module 130, a controller area network (CAN) 120, an actuator 110, a communication module 180, an automatic driving test device 190, a test module 192, a planning control module, and a virtual scene superimposing module 100.
  • the control module 160 of the autonomous vehicle and the virtual scene overlay module 100 are both connected to the network 170 through the communication module 180.
  • the virtual scene overlay module 100 may be an independent hardware module, or may be a hardware or software module subordinate to the control module 160.
  • control module 160 may include one or more central processors (for example, single-core processors or multi-core processors).
  • the control module may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction-set computer (RISC), a microprocessor, etc., or any combination thereof.
  • the memory 140 can store data and/or instructions.
  • the memory 140 may store data obtained from sensors of an autonomous vehicle.
  • the memory may store data and/or instructions that can be executed or used by the control module to perform the exemplary methods described in the present disclosure.
  • the instruction may include the virtual scene superimposing module 100 and so on.
  • the virtual scene superimposing module 100 may also be an independent hardware module.
  • the memory may include a mass memory, a removable memory, a volatile read-and-write memory (volatile read-and-write memory), a read-only memory (ROM), etc., or any combination thereof.
  • the storage 140 may be a local storage, that is, the storage 140 may be a part of the autonomous vehicle 200. In some embodiments, the storage 140 may also be a remote storage.
  • the central processing unit may connect to the remote storage through the network 170 to communicate with one or more components of the autonomous vehicle 200 (for example, the sensor module 150 and the virtual scene overlay module 100). One or more components in the autonomous vehicle 200 may access data or instructions remotely stored in a remote storage via the network 170.
  • the memory 140 may be directly connected to or communicate with one or more components in the autonomous vehicle 200 (for example, the control module 160, the sensor module 150).
  • the virtual scene overlay module 100 is responsible for receiving data from the sensor module 150, embedding part or all of a virtual scene into the data transmitted by the sensor module 150 according to the needs of the automated driving vehicle test, and then sending the modified data to the planning control module for planning and control of automatic driving.
  • the planning control module generates automatic driving planning decision information and vehicle control information based on the external environment image data, combined with maps and other information sensed and received by the autonomous vehicle 200.
  • the planning decision information and control information may include planning control information such as vehicle path, lane change information, acceleration and deceleration information, and turning information. Then the planning control module sends the control information to the instruction module 130.
  • the planning control module can also share the planning decision information with other vehicles through the communication data processing module.
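The planning step described above — turning the modified (real plus virtual) environment data into planning control information such as acceleration/deceleration — can be sketched with a trivially simple rule. This is an illustrative toy, not the patented planning algorithm; the field names and the fixed safe-gap threshold are assumptions.

```python
def plan(modified_environment, safe_gap=15.0):
    """Hypothetical planning decision: emit path and acceleration commands
    from modified environment data, as the planning control module would."""
    decisions = {"path": "keep_lane", "acceleration": 0.0}
    for obj in modified_environment.get("objects", []):
        gap = obj["distance_ahead"]          # simplified 1-D distance along the lane
        if 0 < gap < safe_gap:
            # obstacle (real or injected virtual) too close: brake
            decisions["acceleration"] = -2.0
            decisions["path"] = "brake_for_obstacle"
    return decisions
```

Because the planner only sees the modified data, a virtual pedestrian injected by the overlay module triggers the same braking decision a real one would — which is what the test module measures.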
  • the network 170 may facilitate the exchange of information and/or data.
  • one or more components in the autonomous vehicle 200 may send information and/or data to other components of the autonomous vehicle 200 via the network 170.
  • the control module 160 may obtain/acquire dynamic conditions of the vehicle and/or environmental information around the vehicle via the network 170.
  • the network 170 may be any type of wired or wireless network, or a combination thereof.
  • the network 170 may include a wired network, an optical fiber network, a telecommunication network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near field communication (NFC) network, a 3G/4G/5G network, etc., or any combination thereof.
  • the network 170 may include one or more network access points.
  • the network 170 may include wired or wireless network access points.
  • One or more components of the autonomous vehicle 200 may be connected to the network 170 to exchange data and/or information.
  • the actuator 110 may include, but is not limited to, the driving execution of the accelerator, the engine, the brake, and the steering system (including the steering of the tire and/or the operation of the turn signal).
  • the steering system can steer the autonomous vehicle 200.
  • the steering system may manipulate the autonomous vehicle 200 based on a control signal sent from the control module.
  • the control signal may include information related to turning direction, turning position, turning angle, turn signal, etc., or any combination thereof. For example, when the control signal sent by the control module is to turn the front wheels of the self-driving vehicle 45° counterclockwise, the steering system may guide the self-driving vehicle 200 to turn left based on that control signal, with a front-wheel turning angle of 45°.
  • the braking system can control the motion state of the autonomous vehicle 200.
  • the braking system may decelerate the autonomous vehicle 200.
  • the braking system may stop the autonomous vehicle 200 from moving forward in one or more road conditions (e.g., downhill).
  • the braking system can keep the autonomous vehicle 200 at a constant speed when driving downhill.
  • the braking system may include mechanical control components, hydraulic units, power units (for example, vacuum pumps), execution units, etc., or any combination thereof.
  • Mechanical control components can include pedals, hand brakes, etc.
  • the hydraulic unit may include hydraulic oil, hydraulic hoses, brake pumps, etc.
  • the execution unit may include brake calipers, brake pads, brake discs, and so on.
  • the engine system can determine the engine performance of the autonomous vehicle 200.
  • the engine system may determine the engine performance of the autonomous vehicle 200 based on the control signal from the control module.
  • the engine system may determine the engine performance of the autonomous vehicle 200 based on the control signal associated with the acceleration from the control module.
  • the engine system may include a plurality of sensors and at least one microprocessor.
  • the plurality of sensors may be configured to detect one or more physical signals and convert the one or more physical signals into electrical signals for processing.
  • the plurality of sensors may include various temperature sensors, air flow sensors, throttle position sensors, pump pressure sensors, speed sensors, oxygen sensors, load sensors, knock sensors, etc., or any combination thereof.
  • the one or more physical signals may include, but are not limited to, engine temperature, engine air intake, cooling water temperature, engine speed, etc., or any combination thereof.
  • the microprocessor may determine engine performance based on a plurality of engine control parameters.
  • the microprocessor may determine multiple engine control parameters based on multiple electrical signals, and may determine multiple engine control parameters to optimize engine performance.
  • the plurality of engine control parameters may include ignition timing, fuel delivery, idling air flow, etc., or any combination thereof.
  • the throttle system can change the driving speed of the autonomous vehicle 200.
  • the throttle system can maintain the driving speed of the autonomous vehicle 200 under one or more road conditions.
  • the throttle system can increase the driving speed of the autonomous vehicle 200 when acceleration is required. For example, if the self-driving vehicle 200 overtakes the preceding vehicle when conditions permit, the accelerator system may accelerate the self-driving vehicle 200.
  • the instruction module 130 receives the information from the control module 160, converts it into an instruction to drive the actuator, and sends it to the Controller Area Network (CAN) bus 120.
  • the control module 160 sends the driving strategy (acceleration, deceleration, turning, etc.) of the autonomous vehicle 200 to the instruction module 130, and the instruction module 130 receives the driving strategy and converts it into drive instructions for the accelerator, brake mechanism, and steering mechanism.
  • the instruction module 130 then issues the instructions to the actuator through the CAN bus 120.
  • the execution of the instructions by the actuator 110 is then detected by the vehicle component sensors and fed back to the control module 160, thereby completing the closed-loop control and driving of the autonomous vehicle 200.
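The conversion performed by the instruction module — from a high-level driving strategy to accelerator/brake/steering commands for the CAN bus — can be sketched as a simple mapping. The scaling constants and field names here are hypothetical; a real instruction module would target vehicle-specific CAN message formats.

```python
def strategy_to_actuator_commands(strategy):
    """Hypothetical instruction-module conversion: map a driving strategy
    (acceleration in m/s^2, turn angle in degrees) to normalized actuator
    commands destined for the CAN bus."""
    commands = {"throttle": 0.0, "brake": 0.0, "steering_deg": 0.0}
    accel = strategy.get("acceleration", 0.0)
    if accel > 0:
        # assume full throttle corresponds to ~3 m/s^2 of acceleration
        commands["throttle"] = min(1.0, accel / 3.0)
    elif accel < 0:
        # assume full braking corresponds to ~5 m/s^2 of deceleration
        commands["brake"] = min(1.0, -accel / 5.0)
    commands["steering_deg"] = strategy.get("turn_angle", 0.0)
    return commands
```

Each command dictionary would then be packed into CAN frames and placed on bus 120 for the corresponding ECU.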
  • the CAN bus 120 is a reliable vehicle bus standard (for example, a message-based protocol) that allows microcontrollers (for example, the control module 160) and devices (for example, the engine system, braking system, steering system, and/or throttle system) to communicate with each other in applications without a host computer.
  • the CAN bus 120 may be configured to connect the control module 160 with multiple ECUs (for example, an engine system, a braking system, a steering system, and a throttle system).
  • the sensor module 150 may include one or more sensors.
  • the plurality of sensors may include various internal and external sensors that provide data to the autonomous vehicle 200.
  • the plurality of sensors may include vehicle component sensors and environmental sensors.
  • the vehicle component sensor is connected to the actuator of the vehicle 200, and can detect the operating status and parameters of each component of the actuator.
  • the environmental sensor allows the vehicle to understand and potentially respond to its environment, so as to help the autonomous vehicle 200 perform navigation, route planning, and ensure the safety of passengers and people or property in the surrounding environment.
  • the environmental sensors can also be used to identify, track and predict the movement of objects, such as pedestrians and other vehicles.
  • the environmental sensor may include a position sensor and an external object sensor.
  • the position sensor may include a GPS receiver, an accelerometer and/or a gyroscope, and a receiver.
  • the location sensor can sense and/or determine the geographic location and orientation of the autonomous vehicle 200. For example, determine the latitude, longitude and altitude of the vehicle.
  • the external object sensor can detect objects outside the autonomous vehicle 200, such as other vehicles, obstacles in the road, traffic signals, signs, trees, and so on.
  • the external object sensor may include a vision sensor, a lidar, a sonar sensor, a climate sensor, an acceleration sensor, and/or other detection devices, and/or any combination thereof.
  • the lidar may be located on the top, front and rear of the autonomous vehicle 200, and either side of the front bumper.
  • Lidar may include, but is not limited to, single-line lidar, 4-line lidar, 16-line lidar, 32-line lidar, 64-line lidar, etc., or any combination thereof.
  • the lidar collects laser beams reflected by objects in the environment in different directions to form a lattice of the surrounding environment.
  • In addition to using lidar to determine the relative position of external objects, other types of radar can also be used for other purposes, such as traditional speed detection. Short-wave radar can be used to determine the depth of snow on the road and the location and condition of the road surface.
  • the data collected by the lidar can be further added with virtual data.
  • the virtual image superimposition module can also be connected to the lidar or other types of radar, so as to add the signal of the virtual object to the detection signal of the radar.
  • the sonar sensor can detect the distance between the autonomous vehicle 200 and surrounding obstacles.
  • the sonar may be an ultrasonic rangefinder.
  • the ultrasonic rangefinder is installed on both sides and the back of the autonomous vehicle 200 and is turned on when parking to detect obstacles around the parking space and the distance between the autonomous vehicle 200 and the obstacle.
  • the data detected by the sonar sensor may be further added with virtual data (for example, virtual obstacles).
  • the virtual image overlay module can also be connected to the sonar sensor, so as to add the data of the virtual object to the detection data of the sonar sensor.
  • the climate sensor can detect weather information outside the autonomous vehicle 200.
  • the weather information includes, but is not limited to, wind, frost, rain, snow, temperature, light, etc., or any combination thereof.
  • the climate sensor detects that the exterior of the autonomous vehicle 200 is night.
  • the data detected by the climate sensor may be further added with virtual data (for example, it is snowing, raining).
  • the virtual image overlay module can also be connected to the climate sensor, so as to add virtual weather data to the actual weather data.
  • the acceleration sensor can detect the acceleration of an object external to the autonomous vehicle 200.
  • the external object may be static or movable.
  • the data detected by the acceleration sensor can be further added with virtual data.
  • the virtual data is added by the virtual scene overlay module 100.
  • the vision sensor can capture a visual image around the autonomous vehicle 200 and extract content therefrom.
  • the vision sensor may include, but is not limited to, a monocular or binocular ordinary camera, a monocular or binocular wide-angle camera (such as a fisheye camera), a laser scanner, a linear CCD camera, an area CCD camera, a TV camera, a digital camera, etc., or any combination thereof.
  • the visual sensor can take pictures of street signs on both sides of the road, and use the control module 160 to recognize the meaning of these signs.
  • the visual sensor is used to determine the speed limit of the road.
  • the self-driving vehicle 200 can also calculate the distance of surrounding objects from the self-driving vehicle 200 through the parallax of different images taken by multiple vision sensors.
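The distance-from-parallax computation mentioned above is the standard pinhole stereo relation Z = f·B/d: depth equals focal length (in pixels) times the baseline between the two cameras, divided by the disparity of the same object in the two images. The numeric values in the test are illustrative.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation Z = f * B / d: f is the focal length
    in pixels, B the camera baseline in metres, d the horizontal parallax
    (disparity) of the same object between the two images, in pixels."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid stereo match")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700 px focal length and a 0.5 m baseline, a 35 px disparity places the object 10 m from the vehicle; smaller disparities correspond to more distant objects.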
  • the visual image captured by the visual sensor may also be used to judge the surrounding environment such as obstacles, pedestrians, vehicles, and weather. For example, the captured visual image shows that there are dark clouds ahead, and the control module 160 analyzes that there may be thunderstorms ahead.
  • the captured visual image can also be superimposed with the virtual data output by the virtual scene superimposing module 100, and the superimposed data can be sent to the control module 160.
  • the vision sensor may be a monocular or binocular ordinary camera, a monocular or binocular wide-angle camera (such as a fisheye camera), a linear CCD camera, an area CCD camera, a TV camera, a digital camera, etc. Any kind of camera, or any combination.
  • the camera may include a lens 212, an image sensor 214, an output module 216, etc., or any combination thereof.
  • the camera lens 212 captures the light of the external scenery, and projects the light onto the imaging surface of the image sensor 214 (such as an imager) to expose the photosensitive array.
  • the photosensitive array converts the exposure into electric charges.
  • the image sensor 214 converts the accumulated electric charges into a continuous analog signal for output or digitized output. After the conversion is completed, the camera will be reset to start the exposure of the next video frame.
  • the electrical signal output from the image sensor 214 is input to the output module 216, and is scanned in the output module as a frame of image output.
  • the virtual data can be represented in different forms.
  • when adding virtual data to the lidar, the virtual data can be point cloud information or a point cloud sequence; when adding information to a visual sensor, the virtual data can include pixel information, and different obstacles can be expressed through pixel information.
  • virtual data expresses the content that needs to be added in different forms, such as one or more of virtual obstacles, virtual vehicles, virtual pedestrians, virtual roads, and different virtual weather.
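The two representations just described — point clouds for the lidar and pixels for the visual sensor — suggest a container that serves each target sensor its own form of the same virtual object. This dataclass is an illustrative assumption, not a structure defined in the application.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """Hypothetical virtual-data container, holding both representations."""
    kind: str                                     # e.g. "virtual_pedestrian", "virtual_vehicle"
    points: list = field(default_factory=list)    # (x, y, z) samples for the lidar
    pixels: dict = field(default_factory=dict)    # {(row, col): (r, g, b)} for a camera

    def as_sensor_payload(self, sensor_type):
        """Return the form of the virtual data suited to the target sensor."""
        if sensor_type == "lidar":
            return self.points
        if sensor_type == "camera":
            return self.pixels
        raise ValueError(f"unsupported sensor: {sensor_type}")
```

One virtual pedestrian can thus be injected consistently into both the lidar lattice and the camera image of the same test scene.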
  • the automatic driving test device 190 may be a vehicle-mounted hardware device, or may be combined with the test module 192 to become a software module subordinate to the control module 160.
  • the automated driving test device may be connected to the sensor module 150 and/or the control module 160 of the automated driving vehicle 200 for use.
  • the automatic driving test device may include a test module 192.
  • the test module 192 may be a hardware module connected to the sensor module 150 and/or the control module 160 of the autonomous vehicle 200, to measure how the autonomous vehicle 200 responds when the virtual scene overlay module 100 inputs virtual data into the autonomous vehicle 200 to generate a partially or fully virtual driving environment.
  • In this way, the performance of the automatic driving system of the autonomous vehicle 200 (algorithm response speed, accuracy, etc.) is tested.
  • the automatic driving test device 190 may include a mounting structure.
  • the installation structure may be a mechanical structure for fixing the virtual scene superimposing module 100 on the autonomous vehicle.
  • the mounting structure in question can include screws, bases, and so on.
  • the automatic driving test device 190 will be fixed on the base, and then fixed on the automatic driving vehicle 200 with screws.
  • the illustrated automated driving test device 190 may include an automated driving vehicle 200. That is, in some embodiments, the autonomous driving vehicle 200 may be an autonomous driving vehicle 200 for testing that includes a virtual scene superimposing module 100 and a testing module 192.
  • the illustrated automatic driving test device 190 may also be located on a cloud server.
  • the cloud server is used for unified scheduling and monitoring of the autonomous vehicle.
  • the cloud server may send virtual data to the sensor module 150 or the control module 160 of the autonomous vehicle 200 through the automatic driving test device 190 to generate a partially or fully virtual driving environment, so as to obtain the test result of the autonomous vehicle 200.
  • the control module 160 in this application may also include multiple processors. Therefore, the operations and/or method steps disclosed in this application can be executed by one processor, as described in this application, or jointly executed by multiple processors. For example, if the central processing unit of the control module 160 described in this application performs step A and step B, it should be understood that step A and step B can also be performed jointly or separately by two different processors (for example, the first processor executes step A and the second processor executes step B, or the first and second processors execute steps A and B together).
  • FIGS. 2A-2D are schematic diagrams of a coupling manner of a virtual scene overlay module and the external object sensor in this application.
  • the virtual scene overlay module 100 may be an independent hardware module, or may be a hardware or software module subordinate to the control module 160.
  • the external object sensor may be any one or more of the above-mentioned external object sensors in the autonomous vehicle.
  • a visual sensor is mainly used as an example to illustrate the technical points of the present disclosure. However, those skilled in the art can easily understand that this technical point can be directly applied to other forms of external object sensors. For example, this technology can be directly applied to lidar.
  • the virtual scene superimposing module 100 is connected to the output terminal of the output module 216.
  • the virtual scene superimposing module 100 adds virtual data to the data output by the camera output module 216 to generate modified data.
  • the virtual data is data corresponding to the virtual image.
  • the virtual scene superimposing module 100 is connected to the output terminal of the output module 216, and receives a real image at that output terminal. Then, the virtual scene superimposing module 100 superimposes the pixels of the virtual image onto the corresponding position in the real image.
  • the virtual data may be a virtual traffic light image in front of the road, images of other virtual vehicles driving on the road, and/or virtual pedestrian images crossing the road.
  • the aforementioned virtual image will appear at a predetermined position in front of the autonomous vehicle 200.
  • the aforementioned virtual image will appear in the corresponding predetermined position in the real image. Therefore, the virtual scene superimposing module 100 can superimpose the pixels of the virtual image on a predetermined position of the real image.
  • the predetermined position and the size, angle of view, etc. of the virtual image will change as the autonomous vehicle 200 moves forward, and its effect is equivalent to the appearance of the corresponding position and moving objects in the real world after being photographed by the camera.
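The size change described above — the virtual image growing as the vehicle approaches, just as a real object would — follows from pinhole projection. The sketch below is a 1-D simplification under assumed parameters (a fixed world position for the object, a forward-facing camera); the real module would do full perspective projection of the object model.

```python
def project_virtual_object(object_world_x, object_height_m, vehicle_x, focal_px):
    """Hypothetical pinhole projection: apparent pixel height of a virtual
    object fixed at world position object_world_x ahead of the vehicle.
    As the vehicle advances, the distance shrinks and the object appears
    larger, matching how a real object would look to the camera."""
    distance = object_world_x - vehicle_x
    if distance <= 0:
        return None  # object is behind the camera, no longer rendered
    return focal_px * object_height_m / distance
```

With a 1000 px focal length, a 1.8 m virtual pedestrian placed 50 m ahead appears 36 px tall; by the time the vehicle has closed to 25 m, the same pedestrian must be rendered 72 px tall.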
  • the state in the real image is the image of the road ahead when the autonomous vehicle 200 is driving
  • the virtual data may be a virtual traffic light image in front of the road, images of other virtual vehicles driving on the road, and/or virtual pedestrian images crossing the road.
  • the virtual scene superimposing module 100 may superimpose the pixels of the virtual image on the real image by weighting the pixel values of the virtual image and the pixel values of the real image to obtain the modified image data; alternatively, the pixels at the positions where the virtual image is to be added can first be deleted from the output image, and the pixels of the virtual image then added.
  • the image data output in this way also includes the image data of the real scene and the image data of the artificially added virtual data, so that the virtual image is added to the real image.
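Both superimposition strategies above reduce to one masked blend: with a weight of 1 the real pixels under the virtual object are replaced outright (delete-then-add), while an intermediate weight mixes virtual and real pixel values. The nested-list grayscale representation is an assumption for the sake of a short example.

```python
def superimpose(real_img, virtual_img, mask, alpha=1.0):
    """Blend a virtual image into a real image at the masked positions.
    Images are nested lists of grayscale pixel values (illustrative format);
    mask marks where the virtual object appears. alpha=1.0 replaces the real
    pixels outright; 0 < alpha < 1 weights virtual against real values."""
    out = [row[:] for row in real_img]   # copy so the real image is untouched
    for r, row in enumerate(mask):
        for c, covered in enumerate(row):
            if covered:
                out[r][c] = alpha * virtual_img[r][c] + (1 - alpha) * real_img[r][c]
    return out
```

The output retains the real scene everywhere the mask is empty, so the modified frame still carries the genuine road context around the injected object.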
  • the virtual scene superimposing module 100 can also be connected to the output end of the lidar. By embedding the lidar dot matrix data of the virtual object into the real surrounding environment dot matrix collected by the lidar, the virtual scene superimposing module 100 can generate the modified data of the lidar.
  • the virtual scene superimposing module 100 can also be connected to a sonar and/or climate sensor, and the control module 160 can modify the vehicle's "cognition" of the driving environment by modifying the data of the sonar and/or climate sensor.
  • the virtual scene superimposing module 100 may be coupled with the camera in other ways.
  • the virtual scene superimposing module 100 may be connected between the camera lens 212 and the camera image sensor 214. Therefore, after the image information (i.e., the light of the scene) has passed through the camera lens 212 but before it reaches the camera image sensor 214, the virtual scene superimposing module 100 can add virtual data representing the virtual image to the data collected by the camera lens 212 to generate the modified data.
  • the virtual scene superimposing module 100 may be a set of instructions in the control module 160, which receives and modifies the signal from the camera lens 212 by means of software and transmits the modified signal back to the camera for processing; it may also be a projection device or an equivalent thereof.
  • such a projection device, on the one hand, projects an image of a virtual object onto the imaging surface of the image sensor 214 and, on the other hand, blocks the light from the lens 212 corresponding to that image. In this way, the scene captured by the camera is modified from the stage of incident light. The modified data is then collected by the camera image sensor 214, and the output module 216 generates the image data.
  • the image data includes image data of a real scene and image data of artificially added virtual data, so that the virtual image is added to the real image.
  • the virtual scene superimposing module 100 may be connected between the camera image sensor 214 and the camera output module 216. Therefore, after the image information (i.e., the light of the scene) reaches the imaging surface of the camera image sensor 214 through the camera lens 212 and is output as an electrical signal, but before being input to the camera output module 216, this part of the electrical signal is input to the virtual scene superimposing module 100. The virtual scene superimposing module 100 can then cut off a part of the electrical signal and add the electrical signal corresponding to the virtual object, and input the modified electrical signal into the output module 216, which scans it to generate image data.
  • the image data also includes the image data of the real scene and the image data of the artificially added virtual data, so that the virtual image is added to the real image.
  • the virtual scene superimposing module 100 may be a set of instructions in the control module 160, which receives and modifies the signal from the image sensor 214 through software, and transmits the modified signal to the output module 216 for further processing; or It is a hardware module embedded between the image sensor 214 and the output module 216 of the camera.
  • the virtual scene overlay module 100 adds virtual data to the data output by the camera output module 216 to generate modified data.
  • the virtual scene superimposing module 100 is electrically connected to the output module 216 and controls the output module to scan the signal from the camera image sensor 214.
  • the virtual scene superimposing module 100 can block the output module's normal scanning and replace the blocked pixels with pixels of the virtual image.
  • the image data output in this way also includes the image data of the real scene and the image data of the artificially added virtual data, so that the virtual image is added to the real image.
  • the virtual scene superimposing module 100 may be a set of instructions in the control module 160, which receives and modifies the signal acquired during scanning by the output module 216 through software and transmits the modified signal back to the camera for processing; or It is a hardware module embedded in the output module 216 of the camera.
  • the virtual data may be image data of a virtual object, or other virtual data, such as temperature, wind speed, and so on.
  • the virtual data may be data stored in the virtual scene overlay module 100, or data received in real time via the network 170.
  • the object corresponding to the data may be an object actually photographed, or an object synthesized by a computer.
  • the time for adding virtual data to the data detected by the real sensor can be predetermined according to the scene, or it can be random.
  • the control module 160 may process information and/or data related to vehicle driving (for example, automatic driving) to perform one or more functions described in the present disclosure.
  • the control module 160 may be configured to autonomously drive the vehicle.
  • the control module 160 may output multiple control signals.
  • the multiple control signals may be configured to be received by one or more electronic control units (ECU) to control the driving of the vehicle.
  • the control module may output a control signal based on the external environment information of the vehicle and the superimposed virtual scene.
  • the technology disclosed in the present invention can be applied in a 4G network environment.
  • the 5G network environment is more suitable.
  • the data transmission rate of 4G is on the order of 100 Mbps, the delay is 30-50 ms, the maximum number of connections per square kilometer is on the order of 10,000, and the mobility is about 350 km/h
  • the transmission rate of 5G is on the order of 10 Gbps, the delay is 1 ms, the maximum number of connections per square kilometer is on the order of millions, and the mobility is about 500 km/h.
  • 5G has a higher transmission rate, shorter delay, more connections per square kilometer, and higher speed tolerance. Therefore, although the present invention is also suitable for a 4G environment, it achieves better technical performance and reflects higher commercial value when running in a 5G environment.
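A rough per-frame budget computed from the figures above illustrates why 5G suits this system better. The compressed camera-frame size of 1 MB is an illustrative assumption, not part of the disclosure:

```python
def frame_transfer_ms(frame_bits: float, rate_bps: float, latency_ms: float) -> float:
    """Network latency plus serialization time for one camera frame, in milliseconds."""
    return latency_ms + frame_bits / rate_bps * 1000.0

FRAME_BITS = 8e6  # hypothetical 1 MB compressed camera frame

t_4g = frame_transfer_ms(FRAME_BITS, 100e6, 40.0)  # 4G: ~100 Mbps rate, ~40 ms latency
t_5g = frame_transfer_ms(FRAME_BITS, 10e9, 1.0)    # 5G: ~10 Gbps rate, ~1 ms latency
```

Under these assumptions the 4G budget is about 120 ms per frame against roughly 2 ms for 5G, two orders of magnitude apart.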
  • the virtual scene overlay module 100 can modify the external environment data of the autonomous vehicle 200.
  • the external environment data of the autonomous vehicle 200 can be acquired through the sensor module 150.
  • the external environment data may include, but is not limited to, location information, weather information, traffic sign information, pedestrian information, vehicle information, driving lane information, obstacle information, signal light information, lighting information, etc.
  • the external environmental data may be collected by environmental sensors.
  • the external environment data may include obstacles (for example, rocks, falling objects, etc.) in the driving road of the autonomous vehicle 200, collected through external object sensors.
  • the external environment data may include geographic location information, such as the longitude and latitude of the autonomous vehicle 200, collected through a location sensor.
  • the sensor module 150 of the self-driving vehicle 200 may obtain data or may obtain only null values; the external environment data may then be partially or completely provided by the virtual scene superimposing module 100.
  • for example, when the autonomous vehicle 200 performs an autonomous driving road test while in a stopped state, the external object sensor may collect no data.
  • in that case, the virtual scene superimposing module 100 will provide the relevant external environment data of a simulated road test.
  • when the self-driving vehicle 200 performs an indoor automatic driving test, the data collected by the external object sensor may be indoor environment data, and the virtual scene superimposing module 100 will provide the relevant external environment data of the simulated road test in place of the collected indoor environment data.
  • the virtual scene superimposing module 100 may provide the relevant external environment data corresponding to calibration of the actuator 110.
  • the virtual scene superimposing module 100 can provide data for initial calibration of the actuator.
  • the virtual scene superimposing module 100 can provide a virtual crossroads scene.
  • the virtual scene superimposing module 100 may be connected to one or more sensors in the sensor module 150.
  • the virtual scene superimposing module 100 may be connected to an external object sensor (for example, a vision sensor, a climate sensor, a sonar sensor, etc.), and receive the external environment data collected by the external object sensor in real time.
  • the virtual scene superimposing module 100 may be connected to a position sensor, and receive position data collected by the position sensor in real time.
  • the virtual data may be an environment difference from the real environment collected by the sensor module 150.
  • the environmental difference may include, but is not limited to, pedestrians, vehicles, obstacles, weather factors (for example, wind, frost, rain, snow, temperature, light), etc., or any combination thereof.
  • the pedestrian can walk across the road at a normal speed, or run into the road.
  • the self-driving vehicle 200 is running normally on the test road, and there are no pedestrians in the real environment.
  • the vehicle includes, but is not limited to, motor vehicles, non-motor vehicles, etc., or any combination thereof.
  • the motor vehicle may include, but is not limited to, an automobile, an electric automobile, a motorcycle, an electric motorcycle, etc., or any combination thereof.
  • the non-motor vehicle may include, but is not limited to, bicycles, tricycles, scooters, electric bicycles, etc., or any combination thereof.
  • the obstacle may be dynamic (e.g., animal).
  • the obstacle may be static (for example, the road pavement has a concave pit, and the road pavement has a raised boulder).
  • weather factors can make the sensor's signal differ from that observed when testing in normal weather. For example, rain may blur the vision sensor, biasing the collected data.
  • the environmental difference may be an environmental difference of a geographic location. For example, the road on which the autonomous vehicle 200 is traveling during the test is straight, the target scene of the test may be a winding mountain road with a constantly changing altitude, and the environmental difference is a change in altitude.
  • the virtual data may be preset according to the scene.
  • the virtual data may include one and/or more different scenes, and the one and/or more different scenes may be stored in a memory.
  • the virtual data may also be scenes that manned and/or unmanned vehicles have encountered, acquired through autonomous learning. For example, a child may squat down to play in the blind spot at the rear of a human-driven vehicle; the driver may fail to notice, reverse, and hit the child. The virtual data can then be a child squatting down and playing behind the vehicle.
  • when testing the autonomous vehicle 200 while it is reversing, this virtual data will be added to the external environment data of the autonomous vehicle 200.
  • when the virtual scene superimposing module 100 is an independent hardware structure, it can be connected to the control module 160.
  • the virtual scene superimposing module 100 is connected to the input/output end of the control module 160, and can obtain the virtual data from the control module 160.
  • the virtual scene superimposing module 100 may store the virtual data and receive a control signal from the control module 160, and the control signal may be used to select virtual data corresponding to the current test.
  • after the virtual scene superimposing module 100 adds virtual data to the external environment data, it can generate modified external environment data, which is sent to the on-board control module of the autonomous vehicle.
  • the virtual scene superimposing module 100 may include a timer.
  • the timer may be inside the virtual scene superimposing module 100 or outside the virtual scene superimposing module 100 (for example, the control module 160, the sensor module 150).
  • the timer can be used to count down and issue an instruction to add dummy data.
  • the clock cycle of the countdown may be preset or randomly generated. For example, when the countdown clock cycle is preset to 30s, the virtual data will be added to the external environment data after 30s. For another example, when the countdown clock period is randomly generated, the dummy data will be added to the external environment data after the randomly generated clock period.
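The countdown behavior described above can be sketched as follows. The class name, the `tick` interface, and the bounds of the randomly generated period are assumptions for illustration:

```python
import random
from typing import Optional

class InjectionTimer:
    """Counts down a clock period and signals when virtual data should be injected."""

    def __init__(self, period_s: Optional[float] = None,
                 lo: float = 5.0, hi: float = 60.0) -> None:
        # Use the preset period if given; otherwise generate one at random.
        self.period_s = period_s if period_s is not None else random.uniform(lo, hi)
        self.elapsed_s = 0.0

    def tick(self, dt_s: float) -> bool:
        """Advance the countdown by dt_s seconds; returns True once it expires."""
        self.elapsed_s += dt_s
        return self.elapsed_s >= self.period_s

preset = InjectionTimer(period_s=30.0)  # preset 30 s countdown
randomized = InjectionTimer()           # randomly generated period in [5, 60] s
```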
  • Fig. 3 is an exemplary flow chart of a vehicle-mounted automatic driving test method in the present application. As shown in Fig. 3, the method is applied to an on-vehicle automatic driving test device.
  • the on-vehicle automatic driving test device includes a virtual scene overlay module 100.
  • the method includes the following steps:
  • the virtual scene overlay module 100 obtains driving data of the autonomous vehicle 200.
  • the driving data of the autonomous vehicle may include, but is not limited to, vehicle speed, acceleration, weather parameters, steering, etc., or any combination thereof.
  • the driving data of the autonomous vehicle 200 acquired by the virtual scene overlay module 100 is related to a snowy day.
  • the driving data related to a snowy day may be that the autonomous vehicle 200 is traveling at a slower speed, or may be a weaker braking ability, or a combination thereof.
  • the virtual scene overlay module 100 obtains real-time driving data of the autonomous vehicle 200.
  • the real-time driving data may include driving state data and/or external environment data of the autonomous vehicle 200.
  • the external environment data may be obtained through real-time collection of external object sensors, and the driving state data may be obtained through vehicle component sensors.
  • the external object sensor is connected to the virtual scene overlay module 100.
  • the virtual scene superimposition module 100 can obtain driving state data such as the driving speed, acceleration, and direction of the autonomous vehicle 200, and/or external environment data such as temperature, light, and wind speed around the autonomous vehicle 200.
  • in step 330, based on the real-time driving data of the autonomous vehicle 200, virtual data is added to the external environment data to generate modified external environment data.
  • the driving data of the autonomous vehicle includes a vehicle speed, the virtual data changes with time, and the change is related to the vehicle speed.
  • the real-time driving data may include the driving speed. For example, the virtual data may be a zebra crossing on the road ahead of the autonomous vehicle 200 and pedestrians crossing it.
  • when the virtual scene superimposing module 100 adds this virtual data to the external environment data, it can adjust the virtual data according to the driving speed before adding it to the scene, generating the corresponding modified external environment data.
  • as the vehicle approaches, the image of the pedestrian in the camera becomes larger and larger.
  • the rate at which the pedestrian's size increases should match the speed of the autonomous vehicle, reproducing the perspective effect of the vehicle continuously approaching the pedestrian.
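Under a simple pinhole-camera model, the on-image size of the virtual pedestrian grows inversely with the remaining distance, which ties its growth rate to the vehicle speed as described above. The focal length, pedestrian height, and function names here are illustrative assumptions:

```python
def pedestrian_pixel_height(real_height_m: float, distance_m: float,
                            focal_px: float = 800.0) -> float:
    """Pinhole projection: apparent height in pixels of an object at a given distance."""
    return focal_px * real_height_m / distance_m

def distance_after(initial_m: float, speed_mps: float, t_s: float) -> float:
    """Remaining distance to the virtual pedestrian after t seconds at constant speed."""
    return max(initial_m - speed_mps * t_s, 1.0)

# A 1.7 m pedestrian first rendered 40 m ahead; the vehicle travels at 10 m/s.
h0 = pedestrian_pixel_height(1.7, distance_after(40.0, 10.0, 0.0))  # at t = 0 s
h2 = pedestrian_pixel_height(1.7, distance_after(40.0, 10.0, 2.0))  # at t = 2 s
```

Halving the distance doubles the rendered height, so the virtual scene superimposing module can scale the pedestrian patch frame by frame from the current speed alone.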
  • for example, the driving acceleration is relatively large, the real external environment is sunny, and the virtual data is a thunderstorm.
  • when the virtual scene superimposing module 100 adds this virtual data to the external environment data, it can adjust the amount of rain of the thunderstorm according to the driving acceleration before adding it to the sunny scene, generating the corresponding modified external environment data and producing, for the vehicle 200, the effect of rainfall amplified by a faster vehicle speed.
  • the modified external environment data is transmitted to the on-board control module of the autonomous vehicle 200.
  • the modified external environment data may be transmitted to the control module 160 by the virtual scene overlay module 100.
  • the modified external environment data may be sent to the sensor module 150 by the virtual scene overlay module 100, and then transmitted to the control module 160 through the sensor module 150.
  • the driving test module 192 collects the response of the autonomous vehicle 200 to the modified external environment data, and evaluates the performance of the autonomous driving system of the autonomous vehicle 200 based on the response.
  • the response refers to a change in the driving state of the autonomous vehicle 200 due to the addition of virtual data, such as a change in parameters such as vehicle speed, braking, lights, turning, and wipers.
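A minimal sketch of how the driving test module 192 might score such a response. The pass criterion, the parameter names, and the reaction-time threshold are assumptions, not part of the disclosure:

```python
def evaluate_response(speed_before_mps: float, speed_after_mps: float,
                      reaction_time_s: float, max_reaction_s: float = 1.0) -> bool:
    """Pass if the vehicle slowed down and reacted within the allowed time window."""
    return speed_after_mps < speed_before_mps and reaction_time_s <= max_reaction_s

# Virtual pedestrian injected: the vehicle brakes from 10 m/s to 4 m/s within 0.6 s.
passed = evaluate_response(10.0, 4.0, 0.6)
```

A real evaluator would likewise compare other state changes listed above (lights, turning, wipers) against scenario-specific expectations.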
  • in order to aid understanding of a feature and for the purpose of simplification, this application sometimes combines various features in a single embodiment, drawing, or description thereof; alternatively, it disperses various features across multiple embodiments. However, this does not mean that such a combination of features is necessary: those skilled in the art, when reading this application, may well extract some of these features as a separate embodiment. That is to say, the embodiments in this application can also be understood as the integration of multiple sub-embodiments, where the content of each sub-embodiment may comprise fewer than all the features of a single disclosed embodiment.
  • numbers expressing quantities or properties used to describe and claim certain embodiments of this application should be understood as modified by the terms “about”, “approximately” or “substantially” in some cases. For example, unless otherwise stated, “about”, “approximately” or “substantially” may mean a ⁇ 20% variation of the value described. Therefore, in some embodiments, the numerical parameters listed in the written description and appended claims are approximations, which may vary according to the desired properties that a particular embodiment is attempting to achieve. In some embodiments, the numerical parameters should be interpreted based on the number of significant figures reported and by applying common rounding techniques. Although some embodiments described in this application list a wide range of numerical ranges and the parameters are approximate values, the specific examples all list numerical values as accurate as possible.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

A vehicle-mounted autonomous driving test system, comprising: a virtual scene overlay module (100). In the working state, the virtual scene overlay module (100) is used for obtaining traveling data of an autonomous vehicle (200), obtaining the external environment data of the autonomous vehicle (200), adding, according to the traveling data of the autonomous vehicle (200), additional data to the external environment data to generate modified external environment data, and sending the modified external environment data to the control module (160) onboard the autonomous vehicle. Also disclosed is a corresponding autonomous vehicle test method. The autonomous driving test system and method can be applied in the 4G network environment, but the autonomous driving test system and method have higher requirements for network delay and data transmission speed and are more suitable for use in the 5G network environment.

Description

Vehicle-mounted automatic driving test system and test method

Technical field
This application relates to the field of automotive electronics technology, and in particular to a vehicle-mounted automatic driving test system and method.
Technical background
With the development of science and technology, self-driving vehicles have become an important development direction for future automobiles. Autonomous vehicles can not only guarantee people's safe and comfortable travel, but also greatly improve travel efficiency. More and more controllers related to autonomous driving systems are deployed in mass-produced models, which is of great significance for improving driving safety and reducing traffic accidents.
The information perception system, decision-making system, and vehicle execution system require perfect coordination to complete the entire autonomous driving process. When testing autonomous vehicles, the environment must be varied continuously to verify whether the autonomous vehicle perceives correctly and, on the basis of that perception, whether the autonomous driving system makes correct decisions and executes them. However, letting the autonomous vehicle drive directly on real roads not only creates safety hazards, but also requires preparing various props and scenes, such as dummies, fake cars, and fake animals, in order to simulate different environments.
Therefore, a vehicle-mounted automatic driving test system and method capable of superimposing virtual scenes is needed to solve the above technical problems.
Summary of the invention
The present application discloses a vehicle-mounted automatic driving test system, including: a virtual scene superimposing module which, in a working state, obtains driving data of an autonomous vehicle; obtains external environment data of the autonomous vehicle; adds virtual data to the external environment data according to the driving data of the autonomous vehicle to generate modified external environment data; and transmits the modified external environment data to the on-board control module of the autonomous vehicle.
In one aspect, the present application provides an autonomous vehicle test method applied to a test device of an autonomous vehicle. The method includes: acquiring driving data of the autonomous vehicle; acquiring external environment data of the autonomous vehicle; adding virtual data to the external environment data according to the driving data of the autonomous vehicle to generate modified external environment data; and transmitting the modified external environment data to the on-board control module of the autonomous vehicle.
The invention in this application places relatively high demands on network latency and data transmission speed. For example, the technology disclosed in the present invention can be applied in a 4G network environment, but is better suited to a 5G network environment.
Description of the drawings
The following drawings describe in detail the exemplary embodiments disclosed in this application, in which the same reference numerals indicate similar structures across the several views. Those of ordinary skill in the art will understand that these are non-limiting, exemplary embodiments; the drawings are for illustration and description purposes only, are not intended to limit the scope of the application, and other embodiments may equally fulfill the inventive intent of this application. Among them:
Fig. 1 is a block diagram of an exemplary vehicle with automatic driving capability in the present application;

Figs. 2A-2D are schematic diagrams of coupling modes between a virtual scene superimposing module and a vision sensor in this application;

Fig. 3 is an exemplary flow chart of an on-vehicle automatic driving test system in the present application;

Fig. 4 is a schematic diagram of a scenario applying modified external environment data in this application.
Detailed description
This application discloses a vehicle-mounted automatic driving test system and method. A virtual scene can be added to external environment data through a virtual scene superimposing module to generate modified external environment data, which is used to test and verify the safety of an autonomous vehicle driving on the road, providing guidance for the technology research and development of autonomous vehicles, road test permits, and product access certification.
Since this application relates to automatic driving, the technology involved places relatively high demands on network latency and data transmission speed. For example, the technology disclosed in this application can be applied in a 4G network environment, but is better suited to a 5G network environment. The data transmission rate of 4G is on the order of 100 Mbps, the latency is 30-50 ms, the maximum number of connections per square kilometer is on the order of 10,000, and the mobility is about 350 km/h; whereas the transmission rate of 5G is on the order of 10 Gbps, the latency is 1 ms, the maximum number of connections per square kilometer is on the order of millions, and the mobility is about 500 km/h. 5G thus offers a higher transmission rate, shorter latency, more connections per square kilometer, and higher speed tolerance. Another change brought by 5G concerns the transmission path: previously, when making calls or sending photos, the signal had to be relayed through a base station, but with 5G, devices can transmit to each other directly without passing through a base station. Therefore, although the present invention is also suitable for a 4G environment, it achieves better technical performance and reflects higher commercial value when running in a 5G environment.
In order to provide those of ordinary skill in the art with a thorough understanding of the relevant disclosure, specific details of the present application are illustrated by examples in the following detailed description. However, the content disclosed in this application should be understood as consistent with the protection scope of the claims, not limited to these specific details. For example, it is obvious to a person of ordinary skill in the art that various modifications can be made to the embodiments disclosed in this application; and without departing from the spirit and scope of the application, the general principles defined here can be applied to other embodiments and applications. As another example, even without the details disclosed below, those of ordinary skill in the art could still practice this application. On the other hand, to avoid unnecessarily obscuring the content of this application, well-known methods, processes, systems, components, and/or circuits are summarized generally without detailed description. Therefore, the content disclosed in this application is not limited to the illustrated embodiments, but is consistent with the scope of the claims.
The terms used in this application are for the purpose of describing specific example embodiments only and are not restrictive. For example, unless the context clearly indicates otherwise, a singular form used for an element in this application (e.g., "a", "an", and/or equivalents) may also include the plural. The terms "include" and/or "comprise" used in this application are open-ended. For example, "A includes/comprises B" merely indicates the presence of feature B in A, and does not exclude the possibility that other elements (such as C) exist in or are added to A.
It should be understood that terms used in this application, such as "system", "unit", "module", and/or "block", are one way of distinguishing different components, elements, parts, sections, or assemblies at different levels. However, other terms that achieve the same purpose may be used in this application instead.
The modules (or units, blocks) described in this application can be implemented as software and/or hardware modules. Unless the context clearly dictates otherwise, when a unit or module is described as being "connected", "connected to", or "coupled to" another unit or module, this may mean that the unit or module is directly connected, linked, or coupled to the other unit or module, or that it is indirectly connected, linked, or coupled to the other unit or module in some form. In this application, the term "and/or" includes any and all combinations of one or more of the associated listed items.
In this application, the terms "autonomous vehicle" and "self-driving car" may refer to a vehicle capable of perceiving its environment and of automatically perceiving, judging, and making decisions about the external environment without input and/or intervention from a person (e.g., a driver, a pilot). The terms "autonomous vehicle", "self-driving car", and "vehicle" may be used interchangeably. The term "autonomous driving" may refer to the ability to make intelligent judgments about the surrounding environment and navigate without input from a person (e.g., a driver, a pilot).
In consideration of the following description, these and other features of the present application, the operation and function of related structural elements, and the combination of parts and economics of manufacture can be significantly improved. Reference is made to the drawings, all of which form a part of this application. However, it should be clearly understood that the drawings are for illustration and description purposes only and are not intended to limit the scope of the application. It should also be understood that the drawings are not drawn to scale.
The flowcharts used in this application illustrate operations implemented by systems according to some embodiments of this application. It should be expressly understood that the operations of a flowchart may be implemented out of order. Conversely, the operations may be implemented in reverse order or simultaneously. In addition, one or more other operations may be added to a flowchart, and one or more operations may be removed from a flowchart.
In addition, although the circuits and methods in this application are described primarily with respect to a vehicle-mounted autonomous driving test system and method, it should be understood that this is only an exemplary embodiment. The apparatus and methods of the present application may also be applied to other types of systems. For example, the system or method of the present application may be applied to driving tests of transportation systems in different environments, including land, sea, aerospace, and the like, or any combination thereof. The autonomous vehicles of the transportation systems may include taxis, private cars, trailers, buses, trains, bullet trains, high-speed railways, subways, ships, airplanes, spacecraft, hot-air balloons, unmanned vehicles, and the like, or any combination thereof. In some embodiments, the system or method may find application in, for example, logistics warehouses or military affairs (such as simulated flight for pilots or automated flight tests of drones).
Fig. 1 is a block diagram of an autonomous driving test system disclosed according to some embodiments. The system includes the whole or any part of an exemplary vehicle 200 having autonomous driving capability and autonomous driving test capability. The vehicle 200 with autonomous driving capability may include a control module 160, a sensor module 150, a memory 140, an instruction module 130, a controller area network (CAN) 120, an actuator 110, a communication module 180, an autonomous driving test device 190, a test module 192, a planning control module, and a virtual scene overlay module 100. The control module 160 and the virtual scene overlay module 100 of the autonomous vehicle are both connected to the network 170 through the communication module 180. The virtual scene overlay module 100 may be an independent hardware module, or a hardware or software module subordinate to the control module 160.
In some embodiments, the control module 160 may include one or more central processors (e.g., single-core or multi-core processors). Merely by way of example, the control module may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a digital signal processor (DSP), a field-programmable gate array (FPGA), a programmable logic device (PLD), a controller, a microcontroller unit, a reduced instruction-set computer (RISC), a microprocessor, or the like, or any combination thereof.
The memory 140 (local memory, remote memory) may store data and/or instructions. In some embodiments, the memory 140 may store data obtained from the sensors of the autonomous vehicle. In some embodiments, the memory may store data and/or instructions that the control module may execute or use to perform the exemplary methods described in the present disclosure. For example, the instructions may include the virtual scene overlay module 100, among others. Of course, the virtual scene overlay module 100 may also be an independent hardware module. In some embodiments, the memory may include mass storage, removable storage, volatile read-and-write memory, read-only memory (ROM), or the like, or any combination thereof. As an example, mass storage may include magnetic disks, optical disks, solid-state drives, etc.; removable storage may include flash drives, floppy disks, optical disks, memory cards, zip disks, and magnetic tapes; volatile read-and-write memory may include random-access memory (RAM); RAM may include dynamic RAM (DRAM), double data rate synchronous dynamic RAM (DDR SDRAM), static RAM (SRAM), thyristor RAM (T-RAM), and zero-capacitor RAM (Z-RAM); and ROM may include mask ROM (MROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), compact disc ROM (CD-ROM), digital versatile disc ROM, etc. In some embodiments, the storage may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, the memory 140 may be local memory; that is, the memory 140 may be a part of the autonomous vehicle 200. In some embodiments, the memory 140 may also be remote memory. The central processor may connect to the remote memory through the network 170 to communicate with one or more components of the autonomous vehicle 200 (e.g., the sensor module 150 and the virtual scene overlay module 100). One or more components of the autonomous vehicle 200 may access data or instructions stored remotely in the remote memory via the network 170. In some embodiments, the memory 140 may be directly connected to, or communicate with, one or more components of the autonomous vehicle 200 (e.g., the control module 160, the sensor module 150).
The virtual scene overlay module 100 is responsible for receiving the data from the sensor module 150, embedding part or all of a virtual scene into that data according to the needs of the autonomous vehicle test, and then sending the resulting data to the planning control module for autonomous driving planning and control.
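The patent does not specify an implementation of the overlay module; the following is a minimal illustrative sketch of the data flow it describes (real sensor frame in, virtual objects merged, merged frame out to the planner). All class and field names (`SensorFrame`, `VirtualObject`, and so on) are hypothetical, invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    kind: str        # e.g. "pedestrian", "vehicle", "traffic_light"
    position: tuple  # (x, y) position in the vehicle frame, in meters

@dataclass
class SensorFrame:
    timestamp: float
    objects: list = field(default_factory=list)  # objects seen by real sensors

class VirtualSceneOverlay:
    """Sketch of module 100 in Fig. 1: embeds virtual objects into a real
    sensor frame before the planning control module sees it."""
    def __init__(self, virtual_objects):
        self.virtual_objects = virtual_objects

    def overlay(self, frame: SensorFrame) -> SensorFrame:
        # Real and virtual objects are merged into one frame, so the
        # downstream planner cannot tell their origins apart.
        frame.objects = frame.objects + list(self.virtual_objects)
        return frame

real = SensorFrame(timestamp=0.1, objects=[VirtualObject("vehicle", (30.0, 0.0))])
overlay = VirtualSceneOverlay([VirtualObject("pedestrian", (12.0, -1.5))])
merged = overlay.overlay(real)
print(len(merged.objects))  # 2
```

The key design point the passage implies is that the planner's input format is unchanged: the overlay only augments the frame's contents, which is what lets the same planning code run against a partly or fully virtual environment.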
The planning control module generates autonomous driving planning decision information and vehicle control information based on the external environment image data, in combination with the map and other information sensed and received by the autonomous vehicle 200. For example, the planning decision information and control information may include planning control information such as the vehicle's driving path, lane-change information, acceleration and deceleration information, and turning information. The planning control module then sends the control information to the instruction module 130. The planning control module may also share the planning decision information with other vehicles through the communication data processing module.
The network 170 may facilitate the exchange of information and/or data. In some embodiments, one or more components of the autonomous vehicle 200 (e.g., the control module 160, the sensors 150) may send information and/or data to other components of the autonomous vehicle 200 via the network 170. For example, the control module 160 may obtain/acquire the dynamic state of the vehicle and/or environmental information around the vehicle via the network 170. In some embodiments, the network 170 may be any type of wired or wireless network, or a combination thereof. Merely by way of example, the network 170 may include a wired network, an optical fiber network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, a near-field communication (NFC) network, a 3G/4G/5G network, or the like, or any combination thereof. In some embodiments, the network 170 may include one or more network access points. For example, the network 170 may include wired or wireless network access points, through which one or more components of the autonomous vehicle 200 may connect to the network 170 to exchange data and/or information.
The actuator 110 may include, but is not limited to, the drive execution of the accelerator, engine, braking, and steering systems (including the steering of the tires and/or the operation of the turn signals). The steering system can steer the autonomous vehicle 200. In some embodiments, the steering system may steer the autonomous vehicle 200 based on a control signal sent from the control module. The control signal may include information related to the turning direction, turning position, turning angle, turn signals, etc., or any combination thereof. For example, when the control signal sent by the control module is to turn the front wheels of the autonomous vehicle 45° counterclockwise, the steering system may guide the autonomous vehicle 200 to turn left based on that control signal, with a front-wheel turning angle of 45°.
The braking system can control the motion state of the autonomous vehicle 200. For example, the braking system may decelerate the autonomous vehicle 200. In some embodiments, the braking system may stop the autonomous vehicle 200 from moving forward under one or more road conditions (e.g., on a downhill slope). In some embodiments, the braking system may keep the autonomous vehicle 200 at a constant speed when driving downhill. The braking system may include mechanical control components, hydraulic units, power units (e.g., a vacuum pump), execution units, etc., or any combination thereof. The mechanical control components may include pedals, a hand brake, etc. The hydraulic units may include hydraulic oil, hydraulic hoses, brake pumps, etc. The execution units may include brake calipers, brake pads, brake discs, and so on.
The engine system can determine the engine performance of the autonomous vehicle 200. In some embodiments, the engine system may determine the engine performance of the autonomous vehicle 200 based on a control signal from the control module. For example, the engine system may determine the engine performance of the autonomous vehicle 200 based on a control signal associated with acceleration from the control module. The engine system may include a plurality of sensors and at least one microprocessor. The plurality of sensors may be configured to detect one or more physical signals and convert them into electrical signals for processing. In some embodiments, the plurality of sensors may include various temperature sensors, air flow sensors, throttle position sensors, pump pressure sensors, speed sensors, oxygen sensors, load sensors, knock sensors, etc., or any combination thereof. The one or more physical signals may include, but are not limited to, engine temperature, engine air intake, cooling water temperature, engine speed, etc., or any combination thereof. The microprocessor may determine a plurality of engine control parameters based on the plurality of electrical signals, and may determine the engine control parameters so as to optimize engine performance. The engine control parameters may include ignition timing, fuel delivery, idle airflow, etc., or any combination thereof.
The throttle system can change the driving speed of the autonomous vehicle 200. In some embodiments, the throttle system can maintain the driving speed of the autonomous vehicle 200 under one or more road conditions. In some embodiments, the throttle system can increase the driving speed of the autonomous vehicle 200 when acceleration is required. For example, when the autonomous vehicle 200 overtakes a preceding vehicle where conditions permit, the throttle system may accelerate the autonomous vehicle 200.
The instruction module 130 receives the information from the control module 160, converts it into instructions that drive the actuator, and sends them to the controller area network (CAN) bus 120. For example, the control module 160 sends the driving strategy of the autonomous vehicle 200 (acceleration, deceleration, turning, etc.) to the instruction module 130; the instruction module 130 receives the driving strategy and converts it into drive instructions for the actuator (drive instructions for the accelerator, the braking mechanism, and the steering mechanism). The instruction module 130 then issues these instructions to the actuator through the CAN bus 120. The execution of the instructions by the actuator 110 is in turn detected by the vehicle component sensors and fed back to the control module 160, thereby completing closed-loop control and driving of the autonomous vehicle 200.
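The strategy-to-instruction conversion above can be sketched as follows. This is illustrative only: the CAN arbitration IDs, payload layouts, and scaling factors are invented for the example and do not come from the patent or from any real vehicle's CAN database.

```python
import struct

# Hypothetical CAN arbitration IDs for the three actuator ECUs.
CAN_ID_THROTTLE = 0x101
CAN_ID_BRAKE    = 0x102
CAN_ID_STEERING = 0x103

def strategy_to_can_frames(strategy: dict) -> list:
    """Convert a high-level driving strategy from the control module into
    (can_id, payload) pairs for the actuator ECUs, as the instruction
    module 130 does before writing to the CAN bus 120."""
    frames = []
    if "throttle_pct" in strategy:   # 0..100 percent, one unsigned byte
        frames.append((CAN_ID_THROTTLE, struct.pack("<B", strategy["throttle_pct"])))
    if "brake_pct" in strategy:      # 0..100 percent, one unsigned byte
        frames.append((CAN_ID_BRAKE, struct.pack("<B", strategy["brake_pct"])))
    if "steer_deg" in strategy:      # signed angle at 0.1-degree resolution
        frames.append((CAN_ID_STEERING, struct.pack("<h", int(strategy["steer_deg"] * 10))))
    return frames

# "Accelerate gently while turning the front wheels 45 degrees counterclockwise."
frames = strategy_to_can_frames({"throttle_pct": 20, "steer_deg": -45.0})
print(len(frames))  # 2
```

In a real system each payload would follow the ECU's published signal definitions, and the feedback path (component sensors back to the control module 160) would close the loop described in the paragraph.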
The CAN bus 120 is a robust vehicle bus standard (e.g., a message-based protocol) that allows microcontrollers (e.g., the control module 160) and devices (e.g., the engine system, braking system, steering system, and/or throttle system) to communicate with each other in applications without a host computer. The CAN bus 120 may be configured to connect the control module 160 with multiple ECUs (e.g., the engine system, braking system, steering system, and throttle system).
The sensor module 150 may include one or more sensors. The sensors may include various internal and external sensors that provide data to the autonomous vehicle 200. For example, as shown in Fig. 1, the sensors may include vehicle component sensors and environmental sensors. The vehicle component sensors are connected to the actuator of the vehicle 200 and can detect the operating status and parameters of each component of the actuator.
The environmental sensors allow the vehicle to understand, and potentially respond to, its environment, so as to help the autonomous vehicle 200 perform navigation and route planning and to ensure the safety of passengers as well as of people or property in the surrounding environment. The environmental sensors can also be used to identify, track, and predict the movement of objects, such as pedestrians and other vehicles. The environmental sensors may include position sensors and external object sensors.
The position sensors may include a GPS receiver, an accelerometer, and/or a gyroscope. The position sensors can sense and/or determine the geographic location and orientation of the autonomous vehicle 200, for example, the latitude, longitude, and altitude of the vehicle.
The external object sensors can detect objects outside the autonomous vehicle 200, such as other vehicles, obstacles in the road, traffic signals, signs, trees, and so on. The external object sensors may include vision sensors, lidar, sonar sensors, climate sensors, acceleration sensors, and/or other detection devices, or any combination thereof.
The lidar may be located on the top, the front, and the rear of the autonomous vehicle 200, as well as on either side of the front bumper. The lidar may include, but is not limited to, single-line lidar, 4-line lidar, 16-line lidar, 32-line lidar, 64-line lidar, etc., or any combination thereof. In some embodiments, the lidar collects laser beams reflected back from objects in the environment in different directions to form a point lattice of the surrounding environment.
In addition to using lidar to determine the relative positions of external objects, other types of radar may also be used for other purposes, for example as a conventional speed detector. Shortwave radar can be used to determine the depth of snow on the road and to determine the position and condition of the road surface.
Virtual data can further be added to the data collected by the lidar. In some embodiments, on the same principle as Figs. 2A-2C, the virtual image overlay module can also be connected to the lidar or other types of radar, so as to add the signals of virtual objects to the detection signals of the radar.
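Injecting a virtual object into a lidar scan can be sketched as merging a synthetic point cloud into the real one. This is a minimal sketch under invented assumptions: the box-surface sampling below is a crude stand-in for whatever point-cloud model of an obstacle a real test system would use, and the function names are hypothetical.

```python
import numpy as np

def virtual_box_points(center, size, n=200, rng=None):
    """Sample n points on the surface of an axis-aligned box: a crude
    stand-in for a virtual obstacle as seen by a lidar."""
    rng = rng if rng is not None else np.random.default_rng(0)
    size = np.asarray(size, dtype=float)
    pts = rng.uniform(-0.5, 0.5, size=(n, 3)) * size
    # Push each point out to its nearest box face so the cloud is hollow,
    # like a real lidar return from a solid object.
    axis = np.argmax(np.abs(pts) / (size / 2), axis=1)
    for i, a in enumerate(axis):
        pts[i, a] = np.sign(pts[i, a]) * size[a] / 2
    return pts + np.asarray(center)

def inject_virtual_object(scan: np.ndarray, center, size) -> np.ndarray:
    """Append a virtual obstacle's points to a real (N, 3) lidar scan."""
    return np.vstack([scan, virtual_box_points(center, size)])

real_scan = np.zeros((1000, 3))  # placeholder for a real scan of N points
# A car-sized virtual obstacle 15 m ahead of the test vehicle.
merged = inject_virtual_object(real_scan, center=(15.0, 0.0, 0.5), size=(4.0, 1.8, 1.5))
print(merged.shape)  # (1200, 3)
```

A production system would also have to respect occlusion and the scanner's angular sampling pattern so the injected points are indistinguishable from real returns; that is beyond this sketch.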
The sonar sensors can detect the distance between the autonomous vehicle 200 and surrounding obstacles. For example, the sonar may be an ultrasonic rangefinder. The ultrasonic rangefinders are installed on both sides and at the rear of the autonomous vehicle 200 and are turned on during parking to detect obstacles around the parking space and the distance between the autonomous vehicle 200 and those obstacles. Virtual data (e.g., virtual obstacles) can further be added to the data detected by the sonar sensors. In some embodiments, on the same principle as Figs. 2A-2C, the virtual image overlay module can also be connected to the sonar sensors, so as to add the data of virtual objects to the detection data of the sonar sensors.
The climate sensors can detect weather information outside the autonomous vehicle 200. The weather information includes, but is not limited to, wind, frost, rain, snow, temperature, light, etc., or any combination thereof. For example, the climate sensors may detect that it is night outside the autonomous vehicle 200. Virtual data (e.g., snowfall, rainfall) can further be added to the data detected by the climate sensors. In some embodiments, on the same principle as Figs. 2A-2C, the virtual image overlay module can also be connected to the climate sensors, so as to add virtual weather data to the actually measured weather data.
The acceleration sensor can detect the acceleration of objects outside the autonomous vehicle 200. The external objects may be stationary or moving. Virtual data can further be added to the data detected by the acceleration sensor. In some embodiments, the virtual data is added by the virtual scene overlay module 100.
The vision sensors can capture visual images around the autonomous vehicle 200 and extract content from them. The vision sensors may include, but are not limited to, monocular or binocular ordinary cameras, monocular or binocular wide-angle cameras (such as fisheye cameras), laser scanners, linear-array CCD cameras, area-array CCD cameras, TV cameras, digital cameras, etc., or any combination thereof. For example, a vision sensor can photograph the road signs on both sides of the road, and the control module 160 can recognize the meaning of these signs, for instance to determine the speed limit of the road. The autonomous vehicle 200 can also calculate the distance of surrounding objects from the autonomous vehicle 200 through the parallax between different images taken by multiple vision sensors. In some embodiments, the visual images captured by the vision sensors can also be used to judge the surrounding environment, such as obstacles, pedestrians, vehicles, and weather. For example, if a captured visual image shows dark clouds ahead, the control module 160 may conclude that there may be thunderstorms ahead. In some embodiments, the captured visual images can also be superimposed with the virtual data output by the virtual scene overlay module 100, and the superimposed data can be sent to the control module 160.
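The parallax-based distance estimate mentioned above follows the standard pinhole stereo model, Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. The numbers below are invented for illustration; the patent does not give camera parameters.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Pinhole stereo model: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature seen 25 px apart between two cameras mounted 0.5 m apart,
# with a 1000 px focal length, lies about 20 m away.
print(depth_from_disparity(25.0, 1000.0, 0.5))  # 20.0
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why stereo range accuracy degrades quickly for distant objects.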
In some embodiments, the vision sensor may be any one of a monocular or binocular ordinary camera, a monocular or binocular wide-angle camera (such as a fisheye camera), a linear-array CCD camera, an area-array CCD camera, a TV camera, a digital camera, etc., or any combination thereof. As shown in Fig. 2A, the camera may include a lens 212, an image sensor 214, an output module 216, etc., or any combination thereof. In operation, the camera lens 212 gathers light from the external scene and projects it onto the imaging surface of the image sensor 214 (e.g., an imager), exposing the photosensitive array. The photosensitive array converts the exposure into electric charge; at the end of the timed exposure, the image sensor 214 converts the accumulated charge into a continuous analog signal for output, or digitizes it before output. After the conversion is completed, the camera resets to begin the exposure of the next video frame. The electrical signal output from the image sensor 214 is input to the output module 216, where it is scanned and output as a frame of image.
In some embodiments, the representation of the virtual data may differ for different sensors. For example, when adding virtual data to the lidar, the virtual data may be point cloud information or a point cloud sequence; when adding information to a vision sensor, the virtual data may include pixel information, through which different obstacles can be expressed. For different sensors, the virtual data expresses the content to be added in different forms, for example one or more of virtual obstacles, virtual vehicles, virtual pedestrians, virtual roads, and different kinds of virtual weather.
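The per-sensor representation choice can be sketched as a small dispatch: the same virtual object is rendered as a point cloud for a lidar and as pixel data for a camera. The renderer stubs and data layouts here are hypothetical placeholders, not anything specified by the patent.

```python
def render_virtual_object(sensor_type: str, obj: dict):
    """Return virtual data in the representation the target sensor expects."""
    if sensor_type == "lidar":
        # Point-cloud stub: a short vertical column of (x, y, z) samples
        # at the object's position.
        x, y = obj["position"]
        return [(x, y, z / 10.0) for z in range(10)]
    if sensor_type == "camera":
        # Pixel stub: a bounding box in image coordinates plus a fill color.
        return {"bbox": obj["bbox_px"], "rgb": (255, 0, 0)}
    raise ValueError(f"no virtual renderer for sensor type {sensor_type!r}")

obj = {"position": (12.0, -1.5), "bbox_px": (400, 300, 460, 420)}
print(len(render_virtual_object("lidar", obj)))     # 10
print(render_virtual_object("camera", obj)["rgb"])  # (255, 0, 0)
```

The point of the dispatch is that one scenario description (obstacle type, pose) can drive every sensor channel consistently, which is what a multi-sensor virtual test scene requires.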
The autonomous driving test device 190 may be a vehicle-mounted hardware device, or it may be merged with the test module 192 into a software module subordinate to the control module 160. For example, when the autonomous vehicle 200 needs to undergo an autonomous driving test, the autonomous driving test device may be connected to the sensor module 150 and/or the control module 160 of the autonomous vehicle 200 for use. In some embodiments, the autonomous driving test device may include the test module 192. The test module 192 may be a hardware module connected to the sensor module 150 and/or the control module 160 of the autonomous vehicle 200 to measure the response of the autonomous vehicle 200 when the virtual scene overlay module 100 inputs virtual data to the autonomous vehicle 200 to generate a partly or fully virtual driving environment, thereby testing the performance of the autonomous driving system of the autonomous vehicle 200 (the response speed and accuracy of the algorithms, and so on).
In some embodiments, the autonomous driving test device 190 may include a mounting structure. The mounting structure may be a mechanical structure used to fix the virtual scene overlay module 100 on the autonomous vehicle. For example, the mounting structure may include screws, a base, and so on. The autonomous driving test device 190 is fixed on the base and then fastened to the autonomous vehicle 200 with screws.
In some embodiments, the autonomous driving test device 190 may include the autonomous vehicle 200. That is to say, in some embodiments, the autonomous vehicle 200 may be an autonomous vehicle 200 for testing that includes the virtual scene overlay module 100 and the test module 192.
In some embodiments, the autonomous driving test device 190 may also be located on a cloud server, where the cloud server is used for unified scheduling and monitoring of the autonomous vehicles. In some embodiments, during a vehicle test, the cloud server may send virtual data to the sensor module 150 or the control module 160 of the autonomous vehicle 200 through the autonomous driving test device 190 to generate a partly or fully virtual driving environment, so as to obtain the test results of the autonomous vehicle 200.
For illustration only, a single processor is described in the control module 160 in this application. However, it should be noted that the control module 160 in this application may also include multiple processors; therefore, the operations and/or method steps disclosed in this application may be executed by one processor as described herein, or jointly executed by multiple processors. For example, if the central processor of the control module 160 described in this application performs step A and step B, it should be understood that step A and step B may also be performed jointly or separately by two different processors (for example, a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B).
Figs. 2A-2D are schematic diagrams of the way a virtual scene overlay module of this application is coupled with the external object sensors. It should be pointed out that the virtual scene overlay module 100 may be an independent hardware module, or a hardware or software module subordinate to the control module 160. The external object sensor may be any one or more of the external object sensors of the autonomous vehicle described above. For convenience of description, a vision sensor is mainly taken as an example below to illustrate the technical points of the present disclosure. However, those skilled in the art can easily understand that these technical points can be applied directly to other forms of external object sensors, for example, directly to lidar.
As shown in FIG. 2A, the virtual scene overlay module 100 is connected to the output terminal of the output module 216. The virtual scene overlay module 100 adds virtual data to the data output by the camera output module 216 to generate modified data. Specifically, the virtual data is data corresponding to a virtual image. The virtual scene overlay module 100, connected to the output terminal of the output module 216, receives the real image from that terminal and then superimposes the pixels of the virtual image onto the corresponding positions in the real image. For example, if the real image is the image of the road ahead while the autonomous vehicle 200 is driving, the virtual data may be an image of a virtual traffic light ahead on the road, images of other virtual vehicles driving on the road, and/or images of virtual pedestrians crossing the road. According to a preset, such a virtual image appears at a predetermined position in front of the autonomous vehicle 200 and, correspondingly, at the corresponding predetermined position in the real image. The virtual scene overlay module 100 therefore superimposes the pixels of the virtual image at that predetermined position in the real image. The predetermined position, as well as the size and viewing angle of the virtual image, change as the autonomous vehicle 200 moves forward, producing an effect equivalent to a real-world object at the corresponding position and with the corresponding motion appearing in the real image after being captured by the camera.
The virtual scene overlay module 100 may superimpose the pixels of the virtual image onto the real image by weighting the pixel values of the virtual image and the corresponding pixel values of the real image and summing them to obtain the modified image data; alternatively, the pixels of the output image at the positions where the virtual image is planned may first be deleted, and the pixels of the virtual image added in their place. Either way, the output image data includes the image data of the real scene together with the image data of the artificially added virtual data, so that the virtual image is added to the real image.
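The two overlay strategies above can be sketched as follows. This is a minimal illustration on tiny grayscale "images" represented as lists of pixel rows; the function names, patch values, and the 0.6 weighting factor are assumptions for the example, not values from the present application.

```python
# Hypothetical sketch of the two overlay strategies: weighted blending vs.
# delete-and-replace. Operates on small grayscale images (lists of rows).

def blend_overlay(real, virtual, top, left, alpha=0.6):
    """Weighted overlay: out = alpha*virtual + (1-alpha)*real in the target region."""
    out = [row[:] for row in real]  # copy the real image
    for i, vrow in enumerate(virtual):
        for j, v in enumerate(vrow):
            r = out[top + i][left + j]
            out[top + i][left + j] = round(alpha * v + (1 - alpha) * r)
    return out

def replace_overlay(real, virtual, top, left):
    """Replacement overlay: discard the real pixels at the planned region,
    then write the virtual pixels in their place."""
    out = [row[:] for row in real]
    for i, vrow in enumerate(virtual):
        for j, v in enumerate(vrow):
            out[top + i][left + j] = v  # real pixel deleted, virtual pixel added
    return out

road = [[100] * 4 for _ in range(4)]  # uniform "real" road image
light = [[255, 0], [0, 255]]          # 2x2 "virtual traffic light" patch

blended = blend_overlay(road, light, top=1, left=1)
replaced = replace_overlay(road, light, top=1, left=1)
print(blended[1][1], replaced[1][1])  # 193 255
```

In both cases the pixels outside the planned region are untouched, so the output still contains the real scene around the injected virtual object.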
Based on the same principle, the virtual scene overlay module 100 can also be connected to the output terminal of the lidar. By embedding the lidar point data of a virtual object into the real point cloud of the surrounding environment collected by the lidar, the virtual scene overlay module 100 can generate modified lidar data.
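A simplified sketch of this embedding, assuming points are (x, y, z) tuples in the lidar frame. The function name and the axis-aligned bounding box test are illustrative assumptions; a full implementation would also handle ray occlusion of real returns behind the virtual object.

```python
# Embed a virtual object's lidar returns into the real point cloud: drop real
# returns inside the object's bounding box (a solid object there would prevent
# them) and add the object's own returns.

def inject_virtual_points(real_points, virtual_points, bbox):
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bbox
    kept = [(x, y, z) for (x, y, z) in real_points
            if not (xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax)]
    return kept + list(virtual_points)

real = [(5.0, 0.0, 0.0), (10.0, 0.2, 0.0), (20.0, 0.0, 0.0)]  # road-surface returns
virtual_car = [(10.0, 0.0, 0.5), (10.0, 0.5, 0.5)]            # returns off a virtual car
modified = inject_virtual_points(
    real, virtual_car, bbox=((9.0, 11.0), (-1.0, 1.0), (-1.0, 1.0)))
print(len(modified))  # 4: two real points kept, one dropped, two virtual returns added
```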
Similarly, the virtual scene overlay module 100 can be connected to the sonar and/or climate sensor and, by modifying the data of the sonar and/or climate sensor, modify the control module 160's "perception" of the driving environment.
In some embodiments, the virtual scene overlay module 100 may be coupled to the camera in other ways. For example, as shown in FIG. 2B, the virtual scene overlay module 100 may be connected between the camera lens 212 and the camera image sensor 214. In this way, after the image information (that is, the light of the scene) has passed through the camera lens 212 but before it reaches the camera image sensor 214, the virtual scene overlay module 100 can add virtual data representing the virtual image to the data collected by the camera lens 212, generating modified data.
For example, the virtual scene overlay module 100 may be a set of instructions in the control module 160 that receives and modifies, in software, the signal from the camera 212 and sends the modified signal back to the camera for further processing; it may also be a projection device or an equivalent of one. Such a projection device projects the image of a virtual object onto the imaging surface of the image sensor 214 while blocking the corresponding part of the light coming through the lens 212. In this way, the scene captured by the camera is already modified at the incident-light stage. The modified data is then collected by the camera image sensor 214, and the output module 216 generates image data. This image data includes the image data of the real scene and the image data of the artificially added virtual data, so that the virtual image is added to the real image.
As another example, as shown in FIG. 2C, the virtual scene overlay module 100 may be connected between the camera image sensor 214 and the camera output module 216. After the image information (the light of the scene) has passed through the camera lens 212, reached the imaging surface of the camera image sensor 214, and been output as electrical signals, but before those signals enter the camera output module 216, they are fed into the virtual scene overlay module 100. The virtual scene overlay module 100 can then cut off part of the electrical signals, add electrical signals corresponding to the virtual object, and feed the modified signals into the output module 216, which scans them to generate image data. This image data includes the image data of the real scene and the image data of the artificially added virtual data, so that the virtual image is added to the real image. The virtual scene overlay module 100 may be a set of instructions in the control module 160 that receives and modifies, in software, the signal from the image sensor 214 and passes the modified signal to the output module 216 for further processing; it may also be a hardware module embedded between the image sensor 214 and the output module 216 of the camera.
As yet another example, as shown in FIG. 2D, the virtual scene overlay module 100 adds virtual data to the data output by the camera output module 216, generating modified data. Specifically, the virtual scene overlay module 100 is electrically connected to the output module 216 and controls its scanning of the signal from the camera image sensor 214. At the pixel positions of the output image where the virtual image is planned, the virtual scene overlay module 100 can block the normal scanning of the output module and substitute the pixels of the virtual image. The image data output in this way includes the image data of the real scene and the image data of the artificially added virtual data, so that the virtual image is added to the real image. The virtual scene overlay module 100 may be a set of instructions in the control module 160 that receives and modifies, in software, the signal acquired by the output module 216 during scanning and sends the modified signal back to the camera for further processing; it may also be a hardware module embedded in the output module 216 of the camera.
The virtual data may be image data of a virtual object, or other virtual data such as temperature, wind speed, and so on. The virtual data may be stored inside the virtual scene overlay module 100 or received in real time via the network 170. The object corresponding to the data may be an object that was actually photographed or an object synthesized by computer. The time at which virtual data is added to the data detected by the real sensors may be predetermined according to the scenario, or may be random.
After receiving the information sensed by the multiple sensors, the control module 160 may process information and/or data related to vehicle driving (for example, autonomous driving) to perform one or more of the functions described in the present disclosure. In some embodiments, the control module 160 may be configured to drive the vehicle autonomously. For example, the control module 160 may output multiple control signals, which may be configured to be received by one or more electronic control units (ECUs) to control the driving of the vehicle. In some embodiments, the control module may output control signals based on the vehicle's external environment information and the superimposed virtual scene.
The technology disclosed in the present invention can be applied in a 4G network environment. However, because the invention of the present application places high demands on network latency and data transmission speed, a 5G network environment is more suitable. 4G offers a data rate on the order of 100 Mbps, a latency of 30-50 ms, a maximum of about 10,000 connections per square kilometer, and mobility of about 350 km/h, whereas 5G offers a data rate on the order of 10 Gbps, a latency of about 1 ms, a maximum number of connections per square kilometer on the order of millions, and mobility of about 500 km/h. 5G thus provides a higher transmission rate, shorter latency, more connections per square kilometer, and higher speed tolerance. Therefore, although the present invention is also applicable to a 4G environment, running it in a 5G environment yields better technical performance and reflects higher commercial value.
As described above, the virtual scene overlay module 100 can modify the external environment data of the autonomous vehicle 200. The external environment data of the autonomous vehicle 200 can be acquired through the sensor module 150 and may include, but is not limited to, location information, weather information, traffic sign information, pedestrian information, vehicle information, driving lane information, obstacle information, signal light information, and lighting information.
In some embodiments, the external environment data may be collected by environment sensors. For example, obstacles (such as rocks or fallen objects) on the road on which the autonomous vehicle 200 is driving may be captured by the external object sensors. As another example, geographic position information such as the longitude and latitude of the autonomous vehicle 200 may be captured by the position sensor.
In some embodiments, the sensor module 150 of the autonomous vehicle 200 may acquire null data, or acquire no data at all, with the external environment data provided partially or entirely by the virtual scene overlay module 100. For example, during an autonomous driving road test, when the autonomous vehicle 200 is stationary, the external object sensors may acquire no data, and the virtual scene overlay module 100 provides the external environment data of the simulated road test. As another example, during an indoor autonomous driving test, the data collected by the external object sensors may be indoor environment data, and the virtual scene overlay module 100 provides the external environment data of the simulated road test in place of the collected indoor environment data. In some embodiments, the virtual scene overlay module 100 may provide the external environment needed to calibrate the actuators 110. For example, when the autonomous vehicle 200 performs an autonomous driving road test, the virtual scene overlay module 100 may support the initial calibration of the actuators: to calibrate the right-turn signal of the steering system, the virtual scene overlay module 100 provides an intersection that requires a right turn.
In some embodiments, the virtual scene overlay module 100 may be connected to one or more sensors in the sensor module 150. For example, the virtual scene overlay module 100 may be connected to an external object sensor (for example, a vision sensor, climate sensor, or sonar sensor) and receive the external environment data collected by that sensor in real time. As another example, the virtual scene overlay module 100 may be connected to a position sensor and receive the position data collected by that sensor in real time.
In some embodiments, the virtual scene overlay module 100 may be connected to the input terminals of one or more sensors in the sensor module 150. In some embodiments, the virtual scene overlay module 100 may send virtual data to one or more sensors in the sensor module 150. The virtual data may be a stationary object appearing on the driving path of the autonomous vehicle 200, or a moving object such as a pedestrian or another vehicle in motion. For example, if in the external environment collected in real time there is no traffic light on the road on which the autonomous vehicle 200 is driving, the virtual scene overlay module 100 may add a virtual traffic light to the image of that road, for instance a traffic light 100 meters ahead of the autonomous vehicle 200. As another example, a pedestrian crossing the road or a vehicle changing lanes may be virtually added on the path of the autonomous vehicle. Such a virtually added object is the virtual data.
The virtual data may be an environment difference relative to the real environment collected by the sensor module 150. The environment difference may include, but is not limited to, pedestrians, vehicles, obstacles, weather factors (for example, wind, frost, rain, snow, temperature, light), and the like, or any combination thereof. A pedestrian may walk across the road at a normal speed or run into it. For example, while the autonomous vehicle 200 is driving normally on the test road with no pedestrians in the real environment, in order to test how the autonomous vehicle 200 behaves when a pedestrian suddenly enters the road, virtual data corresponding to a pedestrian suddenly rushing into the road is added to the external environment data. The vehicles include, but are not limited to, motor vehicles and non-motor vehicles, or any combination thereof. The motor vehicles may include, but are not limited to, automobiles, electric automobiles, motorcycles, and electric motorcycles, or any combination thereof. The non-motor vehicles may include, but are not limited to, bicycles, tricycles, mobility scooters, and electric bicycles, or any combination thereof. In some embodiments, the obstacle may be dynamic (for example, an animal). In some embodiments, the obstacle may be static (for example, a pothole in the road surface or a large rock protruding from it). In some embodiments, the weather factors can make the sensor signals differ from those in a test under normal weather. For example, rain may blur the vision sensor, biasing the collected data. In some embodiments, the environment difference may be a difference in geographic environment. For example, the road on which the autonomous vehicle 200 drives during the test may be straight while the target scene of the test is a winding mountain road of constantly changing altitude; the environment difference is then the change in altitude.
In some embodiments, the virtual data may be preset according to scenarios. For example, the virtual data may include one or more different scenarios, which may be stored in a memory. In some embodiments, the virtual data may also come from scenarios encountered by manned and/or unmanned vehicles and acquired through autonomous learning. For example, a child may squat down to play in the blind zone behind a human-driven vehicle; the driver may fail to notice, reverse, and hit the child. The corresponding virtual data is a child squatting and playing behind the vehicle, and when the reversing of the autonomous vehicle 200 is tested, this virtual data is added to the external environment data of the autonomous vehicle 200.
In some embodiments, when the virtual scene overlay module 100 is an independent hardware structure, it may be connected to the control module 160. For example, the virtual scene overlay module 100 may be connected to the input/output terminal of the control module 160 and obtain the virtual data from the control module 160. As another example, the virtual scene overlay module 100 may store the virtual data itself and receive from the control module 160 a control signal used to select the virtual data corresponding to the current test. In some embodiments, after adding virtual data to the external environment data, the virtual scene overlay module 100 may generate modified external environment data, which is sent to the on-board control module of the autonomous vehicle.
The virtual scene overlay module 100 may include a timer. The timer may be inside the virtual scene overlay module 100 or outside it (for example, in the control module 160 or the sensor module 150). In some embodiments, the timer may count down and then issue the instruction to add the virtual data. The countdown period may be preset or randomly generated. For example, with a preset countdown of 30 s, the virtual data is added to the external environment data after 30 s; with a randomly generated countdown, the virtual data is added after the randomly generated period elapses.
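The countdown trigger can be sketched as follows. The class and method names are illustrative assumptions, as is the 5-60 s range for a randomly generated countdown; only the preset-versus-random behavior comes from the description above.

```python
# Countdown timer that fires the "add virtual data" instruction once,
# after either a preset or a randomly generated delay.
import random

class InjectionTimer:
    def __init__(self, preset_s=None, rng=None):
        rng = rng or random.Random()
        # preset_s=None means the countdown period is randomly generated
        self.remaining = preset_s if preset_s is not None else rng.uniform(5.0, 60.0)
        self.fired = False

    def tick(self, dt_s):
        """Advance the countdown by dt_s; return True exactly once, on expiry."""
        if self.fired:
            return False
        self.remaining -= dt_s
        if self.remaining <= 0:
            self.fired = True
            return True
        return False

timer = InjectionTimer(preset_s=30)           # preset 30 s countdown
events = [timer.tick(10) for _ in range(5)]   # poll every 10 s
print(events)  # [False, False, True, False, False]
```

The single `True` is the point at which the overlay module would begin injecting the virtual data into the external environment data.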
FIG. 3 is an exemplary flowchart of a vehicle-mounted autonomous driving test method in the present application. As shown in FIG. 3, the method is applied with a vehicle-mounted autonomous driving test apparatus that includes the virtual scene overlay module 100, and includes the following steps:
In step 310, the virtual scene overlay module 100 obtains driving data of the autonomous vehicle 200. In some embodiments, the driving data of the autonomous vehicle may include, but is not limited to, vehicle speed, acceleration, weather parameters, steering, and the like, or any combination thereof. In some embodiments, when it is snowing, the driving data of the autonomous vehicle 200 obtained by the virtual scene overlay module 100 is related to snowy conditions. For example, the snow-related driving data may be a slower driving speed of the autonomous vehicle 200, a weaker braking capability, or a combination thereof.
In step 320, the virtual scene overlay module 100 obtains real-time driving data of the autonomous vehicle 200. The real-time driving data may include driving state data and/or external environment data of the autonomous vehicle 200. In some embodiments, the external environment data may be collected in real time by the external object sensors, and the driving state data may be obtained from the vehicle component sensors. In some embodiments, the external object sensors are connected to the virtual scene overlay module 100. For example, the virtual scene overlay module 100 may obtain driving state data such as the driving speed, acceleration, and direction of the autonomous vehicle 200, and/or external environment data such as the air temperature, light, and wind speed around the autonomous vehicle 200.
In step 330, based on the real-time driving data of the autonomous vehicle 200, virtual data is added to the external environment data to generate modified external environment data. The driving data of the autonomous vehicle includes the vehicle speed, and the virtual data changes over time in a way that is related to the vehicle speed. For example, in some embodiments, as shown in FIG. 4, when the autonomous vehicle 200 is tested on an urban road, the real-time driving data may include the driving speed, and the virtual data may be a zebra crossing on the road ahead of the autonomous vehicle 200 and pedestrians crossing it. When adding the virtual data to the external environment data, the virtual scene overlay module 100 may adjust the virtual data according to the driving speed before inserting it into the scene, generating the corresponding modified external environment data. For instance, as the autonomous vehicle slowly approaches a pedestrian at the zebra crossing, the pedestrian grows correspondingly larger in the images from the camera; the rate at which the pedestrian grows should match the speed of the autonomous vehicle, producing the perspective effect of the vehicle continuously approaching the pedestrian. In some embodiments, during the test of the autonomous vehicle 200, the driving acceleration is large, the real external environment is a sunny day, and the virtual data is a thundershower. When adding the virtual data to the external environment data, the virtual scene overlay module 100 may adjust the intensity of the thundershower according to the driving acceleration before adding it to the sunny scene, generating the corresponding modified external environment data and producing the effect of rainfall amplified by the higher speed of the vehicle 200.
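The perspective effect described for step 330 can be sketched with a pinhole-camera model, under which the on-image size of an object scales as focal_length / distance. All names, the 1000 px focal length, and the 1.7 m pedestrian height are assumptions for illustration, not parameters from the present application.

```python
# Per-frame pixel height of a virtual pedestrian as the ego vehicle closes in;
# the growth rate is tied to the real vehicle speed, as step 330 requires.

def apparent_height_px(real_height_m, distance_m, focal_px=1000.0):
    return focal_px * real_height_m / distance_m

def simulate_approach(initial_distance_m, speed_mps, dt_s, steps):
    """Pixel heights of a 1.7 m virtual pedestrian over successive frames."""
    sizes = []
    d = initial_distance_m
    for _ in range(steps):
        sizes.append(apparent_height_px(1.7, d))
        d -= speed_mps * dt_s  # vehicle moves toward the pedestrian each frame
    return sizes

slow = simulate_approach(50.0, speed_mps=5.0, dt_s=0.1, steps=3)
fast = simulate_approach(50.0, speed_mps=15.0, dt_s=0.1, steps=3)
# The faster the vehicle, the faster the virtual pedestrian grows frame to frame.
print(fast[-1] > slow[-1])  # True
```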
In step 340, the modified external environment data is transmitted to the on-board control module of the autonomous vehicle 200. In some embodiments, the modified external environment data may be transmitted to the control module 160 by the virtual scene overlay module 100. In some embodiments, the modified external environment data may be sent by the virtual scene overlay module 100 to the sensor module 150 and then transmitted by the sensor module 150 to the control module 160.
In step 350, the driving test module 192 collects the responses of the autonomous vehicle 200 to the modified external environment data and, based on those responses, evaluates the performance of the autonomous driving system of the autonomous vehicle 200. A response is a change in the driving state of the autonomous vehicle 200 caused by the addition of the virtual data, such as a change in vehicle speed, braking, lights, turning, or wipers.
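A minimal sketch of step 350, assuming the driving test module compares the vehicle's state before and after the injection and scores the response against an expected behavior. The state dictionaries and the expected-behavior table are made-up examples, not part of the disclosed system.

```python
# Score one injected virtual event by diffing driving state and checking it
# against the expected reaction.

def evaluate_response(state_before, state_after, expected):
    """Return (passed, observed_changes) for one injected virtual event."""
    changes = {k: (state_before.get(k), v)
               for k, v in state_after.items() if state_before.get(k) != v}
    passed = all(state_after.get(k) == v for k, v in expected.items())
    return passed, changes

before = {"speed_kmh": 40, "brake": False, "wipers": "off"}
after  = {"speed_kmh": 12, "brake": True,  "wipers": "off"}
# Expected reaction to a virtual pedestrian on a zebra crossing: brake engaged.
passed, changes = evaluate_response(before, after, expected={"brake": True})
print(passed)  # True
```

In practice many such pass/fail results, collected over a whole scenario library, would feed the overall evaluation of the autonomous driving system.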
In summary, after reading this detailed disclosure, those skilled in the art will understand that the foregoing detailed disclosure is presented by way of example only and is not restrictive. Although not explicitly stated here, those skilled in the art will understand that the present application is intended to cover various reasonable changes, improvements, and modifications to the embodiments. These changes, improvements, and modifications are intended to be within the spirit and scope of the exemplary embodiments of the present application.
In addition, certain terms have been used to describe the embodiments of the present application. For example, "one embodiment," "an embodiment," and/or "some embodiments" mean that a specific feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. Therefore, it is emphasized, and should be understood, that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various parts of this specification do not necessarily all refer to the same embodiment. Furthermore, specific features, structures, or characteristics may be combined as appropriate in one or more embodiments of the present application.
It should be understood that, in the foregoing description of the embodiments of the present application, in order to aid understanding of a feature and for the purpose of simplifying the application, various features are sometimes combined in a single embodiment, drawing, or description thereof, or alternatively dispersed across multiple embodiments of the application. This does not mean, however, that a combination of these features is required; a person skilled in the art, when reading the present application, may well extract some of those features and understand them as separate embodiments. That is, the embodiments in the present application may also be understood as an integration of multiple sub-embodiments, and each sub-embodiment remains valid even when it contains fewer than all the features of a single previously disclosed embodiment.
In some embodiments, numbers expressing quantities or properties used to describe and claim certain embodiments of the application should be understood as being modified in some instances by the terms "about," "approximately," or "substantially." For example, unless stated otherwise, "about," "approximately," or "substantially" may indicate a ±20% variation of the value it describes. Accordingly, in some embodiments, the numerical parameters set forth in the written description and the appended claims are approximations that may vary depending on the desired properties a particular embodiment seeks to obtain. In some embodiments, numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Although some embodiments of the application set forth broad numerical ranges and parameters that are approximations, the numerical values given in the specific examples are reported as precisely as possible.
Each patent, patent application, publication of a patent application, and other material cited herein, such as articles, books, specifications, publications, and documents, is hereby incorporated by reference in its entirety for all purposes, except for any prosecution file history associated therewith, any portion thereof that is inconsistent with or in conflict with this document, or any portion thereof that may have a limiting effect on the broadest scope of the claims now or later associated with this document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or use of a term associated with any incorporated material and the description, definition, and/or use of that term associated with this document, the description, definition, and/or use of the term in this document shall prevail.
Finally, it should be understood that the embodiments disclosed herein are illustrative of the principles of the embodiments of this application. Other modified embodiments also fall within the scope of this application. Accordingly, the embodiments disclosed in this application are provided merely by way of example and not of limitation. Those skilled in the art may adopt alternative configurations in accordance with the embodiments herein to implement the invention of this application. Therefore, the embodiments of this application are not limited to those precisely described herein.

Claims (18)

  1. A vehicle-mounted autonomous driving test system, comprising: a virtual scene overlay module, wherein, in a working state, the virtual scene overlay module:
    acquires driving data of an autonomous vehicle;
    acquires external environment data of the autonomous vehicle;
    adds virtual data to the external environment data according to the driving data of the autonomous vehicle, to generate modified external environment data; and
    transmits the modified external environment data to an on-board control module of the autonomous vehicle.
  2. The vehicle-mounted autonomous driving test system of claim 1, further comprising:
    a test module, wherein, in a working state, the test module collects the response of the autonomous vehicle to the modified external environment data, the response including the change in the driving state of the autonomous vehicle caused by the addition of the virtual data.
  3. The vehicle-mounted autonomous driving test system of claim 1, wherein the driving data of the autonomous vehicle includes a vehicle speed, the virtual data varies with time, and the variation is related to the vehicle speed.
  4. The vehicle-mounted autonomous driving test system of claim 1, wherein the virtual scene overlay module obtains the virtual data over a 5G network; and
    the adding of virtual data to the external environment data to generate the modified external environment data occurs at random times.
  5. The vehicle-mounted autonomous driving test system of claim 1, further comprising:
    an external object sensor of the autonomous vehicle, connected to the virtual scene overlay module and collecting the external environment data in real time.
  6. The vehicle-mounted autonomous driving test system of claim 5, wherein the external object sensor of the autonomous vehicle includes a vision sensor, a lidar, a sonar sensor, a climate sensor, an acceleration sensor, and an ultrasonic sensor.
  7. The vehicle-mounted autonomous driving test system of claim 1, wherein the virtual data includes at least one of a virtual vehicle, a virtual obstacle, a virtual pedestrian, a virtual lane, and virtual weather; and wherein the representation of the virtual data differs according to the type of sensor.
  8. The vehicle-mounted autonomous driving test system of claim 1, wherein:
    the external object sensor of the autonomous vehicle includes a vision sensor, the vision sensor comprising:
    a lens that captures ambient light;
    an image sensor including a photosensitive array, the photosensitive array receiving the ambient light from the lens and converting the ambient light into an electrical signal; and
    an output module that receives the electrical signal and converts the electrical signal into external data;
    the virtual scene overlay module is connected to the image sensor; and
    the adding of virtual data to the external environment data includes the virtual scene overlay module adding the virtual data to the electrical signal.
  9. The vehicle-mounted autonomous driving test system of claim 1, wherein:
    the external object sensor of the autonomous vehicle includes a vision sensor, the vision sensor comprising:
    a lens that captures ambient light;
    an image sensor including a photosensitive array, the photosensitive array receiving the ambient light from the lens and converting the ambient light into an electrical signal; and
    an output module that receives the electrical signal and converts the electrical signal into external data;
    the virtual scene overlay module is connected to the output module; and
    the adding of virtual data to the external environment data includes adding the virtual data to the external data when the electrical signal is converted into an image.
  10. The vehicle-mounted autonomous driving test system of claim 1, wherein:
    the external object sensor of the autonomous vehicle includes a vision sensor;
    the virtual scene overlay module is connected to an output terminal of the vision sensor; and
    the adding of virtual data to the external environment data includes adding the virtual data to the data output by the vision sensor.
  11. An autonomous vehicle test method, applied to a test system of an autonomous vehicle, the method comprising:
    acquiring driving data of the autonomous vehicle;
    acquiring external environment data of the autonomous vehicle;
    adding virtual data to the external environment data according to the driving data of the autonomous vehicle, to generate modified external environment data; and
    transmitting the modified external environment data to an on-board control module of the autonomous vehicle.
  12. The autonomous vehicle test method of claim 11, further comprising:
    collecting the response of the autonomous vehicle to the modified external environment data, the response including the change in the driving state of the autonomous vehicle caused by the addition of the virtual data.
  13. The autonomous vehicle test method of claim 11, wherein the driving data of the autonomous vehicle includes a vehicle speed, the virtual data varies with time, and the variation is related to the vehicle speed.
  14. The autonomous vehicle test method of claim 11, wherein the virtual data is obtained over a 5G network; and
    the adding of virtual data to the external environment data to generate the modified external environment data occurs at random times.
  15. The autonomous vehicle test method of claim 11, wherein the virtual data includes at least one of a virtual vehicle, a virtual obstacle, a virtual pedestrian, a virtual lane, and virtual weather; and wherein the representation of the virtual data differs according to the type of sensor.
  16. The autonomous vehicle test method of claim 11, wherein:
    the autonomous vehicle includes an external object sensor, the external object sensor including a vision sensor, the vision sensor comprising:
    a lens that captures ambient light;
    an image sensor including a photosensitive array, the photosensitive array receiving the ambient light from the lens and converting the ambient light into an electrical signal; and
    an output module that receives the electrical signal and converts the electrical signal into external data;
    the virtual scene overlay module is connected to the image sensor; and
    the adding of virtual data to the external environment data includes adding the virtual data to the electrical signal.
  17. The autonomous vehicle test method of claim 11, wherein:
    the autonomous vehicle includes an external object sensor, the external object sensor including a vision sensor, the vision sensor comprising:
    a lens that captures ambient light;
    an image sensor including a photosensitive array, the photosensitive array receiving the ambient light from the lens and converting the ambient light into an electrical signal; and
    an output module that receives the electrical signal and converts the electrical signal into external data;
    the virtual scene overlay module is connected to the output module; and
    the adding of virtual data to the external environment data includes adding the virtual data to the external data when the electrical signal is converted into an image.
  18. The autonomous vehicle test method of claim 11, wherein:
    the external object sensor of the autonomous vehicle includes a vision sensor;
    the virtual scene overlay module is connected to an output terminal of the vision sensor; and
    the adding of virtual data to the external environment data includes adding the virtual data to the data output by the vision sensor.
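The overlay method recited in claims 11 to 13 — acquire the vehicle's driving data, acquire the external environment data, inject virtual data whose evolution depends on the vehicle speed, and pass the modified result on toward the control module — can be sketched as follows. This is an illustrative sketch only: the data classes, field names, and the specific closing-obstacle behavior are assumptions for demonstration and are not part of the claimed system.

```python
from dataclasses import dataclass

# Hypothetical data types; the claims do not specify data formats.
@dataclass
class DrivingData:
    speed_mps: float  # ego-vehicle speed (claims 3/13: virtual data varies with speed)

@dataclass
class VirtualObstacle:
    distance_m: float  # distance of the virtual obstacle ahead of the ego vehicle

def overlay_virtual_data(driving: DrivingData, environment: list, t: float) -> list:
    """Inject one virtual obstacle into the external environment data.

    The obstacle's position is a function of elapsed time t and the ego
    vehicle's speed, so the injected data 'varies with time, related to
    the vehicle speed' as in claims 3 and 13.
    """
    # Assumed behavior: an obstacle initially 50 m ahead that the ego
    # vehicle closes in on at its own speed; distance never goes negative.
    obstacle = VirtualObstacle(distance_m=max(0.0, 50.0 - driving.speed_mps * t))
    # Modified external environment data = real sensor data + virtual data.
    return environment + [obstacle]

# Usage: real environment data (here empty) plus one injected virtual obstacle
# after 2 s at 10 m/s; the result would then go to the on-board control module.
modified = overlay_virtual_data(DrivingData(speed_mps=10.0), [], t=2.0)
print(modified[0].distance_m)  # → 30.0
```

Keeping the injection as a pure function of driving data, environment data, and time mirrors the claimed separation between the overlay module and the on-board control module, which only ever sees the merged output.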
PCT/CN2019/109155 2019-09-29 2019-09-29 Vehicle-mounted autonomous driving test system and test method WO2021056556A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2019/109155 WO2021056556A1 (en) 2019-09-29 2019-09-29 Vehicle-mounted autonomous driving test system and test method
CN201980002545.7A CN110785718B (en) 2019-09-29 2019-09-29 Vehicle-mounted automatic driving test system and test method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109155 WO2021056556A1 (en) 2019-09-29 2019-09-29 Vehicle-mounted autonomous driving test system and test method

Publications (1)

Publication Number Publication Date
WO2021056556A1 true WO2021056556A1 (en) 2021-04-01

Family

ID=69394850

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/109155 WO2021056556A1 (en) 2019-09-29 2019-09-29 Vehicle-mounted autonomous driving test system and test method

Country Status (2)

Country Link
CN (1) CN110785718B (en)
WO (1) WO2021056556A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113325734A (en) * 2021-06-10 2021-08-31 中国第一汽车股份有限公司 Simulation test system, method, device, equipment and storage medium for automatic windscreen wiper
CN113640010A (en) * 2021-08-02 2021-11-12 上海和夏新能源科技有限公司 Vehicle attitude simulation method and system based on real-time synchronous data acquisition
CN113838293A (en) * 2021-10-11 2021-12-24 特路(北京)科技有限公司 Rain and fog environment test field and test method suitable for intelligent automobile
CN114297827A (en) * 2021-12-06 2022-04-08 江苏航天大为科技股份有限公司 Software combined automatic driving system simulation method
CN114625637A (en) * 2022-02-23 2022-06-14 浙江吉利控股集团有限公司 Testing method and evaluation method based on dynamic virtual scene

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021056556A1 (en) * 2019-09-29 2021-04-01 驭势科技(北京)有限公司 Vehicle-mounted autonomous driving test system and test method
WO2021159357A1 (en) * 2020-02-12 2021-08-19 深圳元戎启行科技有限公司 Traveling scenario information processing method and apparatus, electronic device, and readable storage medium
CN111458157B (en) * 2020-04-14 2021-10-08 上汽依维柯红岩商用车有限公司 Test system and method for obtaining braking performance parameters of braking system
CN112557058B (en) * 2020-12-10 2023-12-05 清华大学苏州汽车研究院(吴江) Automatic driving test system
CN112835382A (en) * 2020-12-31 2021-05-25 南京安麦森电子科技有限公司 5G base station test system based on unmanned aerial vehicle
CN113301531A (en) * 2021-05-25 2021-08-24 上海商汤临港智能科技有限公司 Network access system, method and device for vehicle automatic driving test
CN113567778B (en) * 2021-06-30 2023-12-29 南京富士通南大软件技术有限公司 Scene-based real-vehicle automatic testing method for vehicle-mounted information entertainment system
CN113325261B (en) * 2021-07-15 2023-03-14 北京智能车联产业创新中心有限公司 Temperature and humidity adaptability test method and system for industrial control hardware of automatic driving vehicle
CN113589930B (en) * 2021-07-30 2024-02-23 广州市旗鱼软件科技有限公司 Mixed reality simulated driving environment generation method and system
CN113781471B (en) * 2021-09-28 2023-10-27 中国科学技术大学先进技术研究院 Automatic driving test field system and method
CN114179823A (en) * 2021-11-18 2022-03-15 鄂尔多斯市普渡科技有限公司 Speed control method of unmanned vehicle
CN114414257A (en) * 2021-12-22 2022-04-29 奇瑞汽车股份有限公司 Test system, method, device and storage medium for automobile
CN114755035B (en) * 2022-06-15 2022-09-09 中汽信息科技(天津)有限公司 Intelligent driving multidimensional test method based on vehicle-mounted terminal

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07294842A (en) * 1994-04-26 1995-11-10 Toyota Motor Corp Information display device for automobile
JP2015177404A (en) * 2014-03-17 2015-10-05 セイコーエプソン株式会社 Head-mounted display device and control method therefor
US20160334623A1 (en) * 2014-01-28 2016-11-17 JVC Kenwood Corporation Display device, display method, and non-transitory computer readable medium storing display program
CN108109210A (en) * 2017-12-15 2018-06-01 广州德科投资咨询有限公司 A kind of scene generating method and intelligent glasses for automatic driving vehicle
CN108241352A (en) * 2016-12-25 2018-07-03 青岛祥智电子技术有限公司 A kind of long-range control method of unmanned motor vehicle
CN207624060U (en) * 2017-08-08 2018-07-17 中国汽车工程研究院股份有限公司 A kind of automated driving system scene floor data acquisition system
CN108762226A (en) * 2018-05-14 2018-11-06 济南浪潮高新科技投资发展有限公司 A kind of automatic driving vehicle test method, apparatus and system
CN109032103A (en) * 2017-06-09 2018-12-18 百度在线网络技术(北京)有限公司 Test method, device, equipment and the storage medium of automatic driving vehicle
CN109270923A (en) * 2018-11-05 2019-01-25 安徽江淮汽车集团股份有限公司 The real vehicle of LDW controller is in ring test method and system
CN109557904A (en) * 2018-12-06 2019-04-02 百度在线网络技术(北京)有限公司 A kind of test method, device, equipment and medium
CN109752968A (en) * 2017-11-07 2019-05-14 瑞萨电子株式会社 Simulator and computer readable storage medium
CN109765060A (en) * 2018-12-29 2019-05-17 同济大学 A kind of automatic driving vehicle traffic coordinating virtual test system and method
CN110083163A (en) * 2019-05-20 2019-08-02 三亚学院 A kind of 5G C-V2X bus or train route cloud cooperation perceptive method and system for autonomous driving vehicle
CN110209146A (en) * 2019-05-23 2019-09-06 杭州飞步科技有限公司 Test method, device, equipment and the readable storage medium storing program for executing of automatic driving vehicle
CN110264586A (en) * 2019-05-28 2019-09-20 浙江零跑科技有限公司 L3 grades of automated driving system driving path data acquisitions, analysis and method for uploading
CN110785718A (en) * 2019-09-29 2020-02-11 驭势科技(北京)有限公司 Vehicle-mounted automatic driving test system and test method

Also Published As

Publication number Publication date
CN110785718B (en) 2021-11-02
CN110785718A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
WO2021056556A1 (en) Vehicle-mounted autonomous driving test system and test method
JP7341864B2 (en) System and method for registering 3D data with 2D image data
WO2020133208A1 (en) Control method for self-driving vehicle, and self-driving system
WO2021103511A1 (en) Operational design domain (odd) determination method and apparatus and related device
WO2021057344A1 (en) Data presentation method and terminal device
WO2021036592A1 (en) Adaptive adjustment method and device for rear-view mirror
JP7486564B2 (en) Enhanced navigation guidance by landmarks under difficult driving conditions
WO2022246852A1 (en) Automatic driving system testing method based on aerial survey data, testing system, and storage medium
US20210061298A1 (en) System and method for transitioning a vehicle from an autonomous mode in response to a handover event
US11222215B1 (en) Identifying a specific object in a two-dimensional image of objects
CN113135183B (en) Control system for vehicle, control method for control system for vehicle, and computer-readable recording medium
CN109835344A (en) Controller of vehicle, control method for vehicle and storage medium
KR20210025523A (en) Information processing device and information processing method, computer program, and mobile device
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
US20230046691A1 (en) External environment sensor data prioritization for autonomous vehicle
CN115100377A (en) Map construction method and device, vehicle, readable storage medium and chip
CN115035494A (en) Image processing method, image processing device, vehicle, storage medium and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN115202234B (en) Simulation test method and device, storage medium and vehicle
CN115042814A (en) Traffic light state identification method and device, vehicle and storage medium
CN115116161A (en) Vehicle data acquisition method and device, storage medium and vehicle
CN109572712B (en) Method for controlling a vehicle operating system
CN114822216B (en) Method and device for generating parking space map, vehicle, storage medium and chip
CN115115707B (en) Vehicle falling water detection method, vehicle, computer readable storage medium and chip
US20230251384A1 (en) Augmentation of sensor data under various weather conditions to train machine-learning systems

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19946363

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19946363

Country of ref document: EP

Kind code of ref document: A1