WO2021057344A1 - Data presentation method and terminal device - Google Patents

Data presentation method and terminal device (一种数据呈现的方法及终端设备)

Info

Publication number
WO2021057344A1
WO2021057344A1 (PCT/CN2020/110135 · CN2020110135W)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
data
presentation
automatic driving
cloud data
Application number
PCT/CN2020/110135
Other languages
English (en)
French (fr)
Inventor
胡月
孙喆
曾侃
董明杰
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP20870130.0A (EP4029750A4)
Publication of WO2021057344A1
Priority to US17/704,709 (US20220215639A1)

Classifications

    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/0001 Details of the control system; B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation; B60W2050/0052 Filtering, filters; B60W2050/146 Display means
    • B60W2520/10 Longitudinal speed; B60W2555/20 Ambient conditions, e.g. wind or rain; B60W2556/45 External transmission of data to or from the vehicle
    • B60K35/00 Instruments specially adapted for vehicles; arrangement of instruments in or on vehicles
    • B60K35/21 Output arrangements using visual output, e.g. blinking lights or matrix displays; B60K35/23 Head-up displays [HUD]; B60K35/28 Output arrangements characterised by the type or purpose of the output information; B60K35/85 Arrangements for transferring vehicle- or driver-related data
    • B60K2360/161 Explanation of functions; B60K2360/166 Navigation; B60K2360/167 Vehicle dynamics information; B60K2360/175 Autonomous driving; B60K2360/177 Augmented reality; B60K2360/178 Warnings; B60K2360/583 Data transfer between instruments; B60K2360/589 Wireless data transfers; B60K2360/592 Data transfer involving external databases
    • G06T19/00 Manipulating 3D models or images for computer graphics; G06T19/006 Mixed reality; G06T2210/56 Particle system, point based geometry or rendering

Definitions

  • This application relates to the field of automatic driving technology, and in particular to a data presentation method and terminal equipment.
  • A self-driving car is equipped with a perception system that perceives the surrounding environment and displays the sensed information on the on-board terminal.
  • Perception of the surrounding environment for autonomous driving mainly relies on lidar, triangulation ranging, structured light, and stereo vision.
  • Lidar uses non-contact, active measurement to directly obtain three-dimensional surface data of the scanned ground and objects; it is minimally affected by the external environment and offers high resolution, strong anti-interference capability, and high ranging accuracy.
  • If the vehicle-mounted terminal processes and presents all the information collected by the lidar, the computational overhead is undoubtedly large.
  • The embodiment of the present application provides a data presentation method, which can adaptively present point cloud data related to an automatic driving device according to driving information and/or user demand information.
  • The first aspect of the present application provides a data presentation method, which may include: obtaining driving information of an automatic driving device and/or a user's demand information for data presentation; determining, according to the driving information and/or the demand information, first point cloud data related to the automatic driving device and a presentation mode of the first point cloud data, the first point cloud data being data expressed in the form of multiple points; and presenting the first point cloud data in the presentation mode.
  • Acquiring the driving information of the automatic driving device and/or the user's demand information for data presentation may equivalently be described as acquiring at least one of the driving information of the automatic driving device and the user's demand information for data presentation. Likewise, determining the first point cloud data related to the automatic driving device and its presentation mode according to the driving information and/or the demand information may be described as determining them according to at least one of the driving information and the demand information.
  • The phrase "driving information of the automatic driving device and/or the user's demand information for data presentation" covers three cases: (A) the driving information of the automatic driving device alone; (B) the user's demand information for data presentation alone; (C) both the driving information and the demand information.
  • If only the driving information is acquired in the acquiring step, the first point cloud data is determined from the driving information; if only the user's demand information is acquired, it is determined from the demand information; if both are acquired, it is determined from both. The presentation mode of the first point cloud data is determined in the same way.
  • Thus, the point cloud data related to the automatic driving device can be adaptively presented according to the driving information and/or the user demand information; not all data needs to be presented, which reduces the complexity of data processing, as the sketch below illustrates.
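  • As a minimal sketch (not the patent's actual implementation), the three cases A/B/C can be handled by a single dispatch function; the `DrivingInfo`/`DemandInfo` containers and their field names below are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DrivingInfo:                 # hypothetical container for "driving information"
    speed_kmh: float
    scene: str                     # e.g. "city_street", "highway"

@dataclass
class DemandInfo:                  # hypothetical container for "user demand information"
    perspective: str = "driver"    # "driver" or "god"

def determine_presentation(driving: Optional[DrivingInfo],
                           demand: Optional[DemandInfo]) -> dict:
    """Covers case A (driving only), case B (demand only), and case C (both)."""
    mode = {"density": "medium", "perspective": "driver"}
    if driving is not None:        # cases A and C
        mode["density"] = "high" if driving.scene == "city_street" else "low"
    if demand is not None:         # cases B and C
        mode["perspective"] = demand.perspective
    return mode
```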
  • In a possible implementation, the method may further include: receiving first data of the automatic driving device and first environmental data around the automatic driving device from a data collection device.
  • Determining the first point cloud data related to the automatic driving device according to the driving information of the automatic driving device and/or the user's demand information for data presentation may include: filtering the first data and the first environmental data according to the driving information and/or the demand information, to obtain second data of the automatic driving device and second environmental data to be presented; and converting the second data and the second environmental data into the first point cloud data.
  • The data acquisition device may be one of a lidar, a triangulation ranging sensor, a structured light sensor, and a stereo vision sensor, or a combination of two or more of them.
  • The first data refers to data about the automatic driving device itself. Taking a car as an example of the automatic driving device, the first data may be the car's body data or the in-car scene data.
  • The first environmental data may be data about the road surface around the automatic driving device, other vehicles, pedestrians, or buildings.
  • Filtering the collected data first reduces the amount of data converted into point cloud data and hence the amount of computation, as the sketch below illustrates.
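  • The following sketch illustrates the filter-before-convert ordering; the (N, 4) sample layout and the 50 m range cut-off are assumptions for illustration, not the patent's data format:

```python
import numpy as np

def filter_raw_samples(samples: np.ndarray, max_range_m: float = 50.0) -> np.ndarray:
    """Keep only raw sensor samples within max_range_m of the vehicle origin."""
    dist = np.linalg.norm(samples[:, :3], axis=1)   # samples: (N, 4) [x, y, z, intensity]
    return samples[dist <= max_range_m]

def to_point_cloud(samples: np.ndarray) -> np.ndarray:
    """Convert the filtered samples into renderable xyz points."""
    return samples[:, :3]

# Filtering first means the conversion step touches far fewer rows.
raw = np.random.uniform(-100, 100, size=(100_000, 4))
cloud = to_point_cloud(filter_raw_samples(raw))
```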
  • In another possible implementation, the method may further include: receiving the first data and the first environmental data from the data collection device, and converting them into second point cloud data.
  • Determining the first point cloud data related to the automatic driving device according to the driving information and/or the demand information may then include: filtering the second point cloud data according to the driving information of the automatic driving device and/or the user's demand information for data presentation, to obtain the first point cloud data to be presented.
  • This implementation differs from the previous one only in the timing of the point cloud conversion (conversion before filtering rather than after); the rest can be understood by reference to the previous implementation.
  • The driving information includes at least one of the following: driving position, driving speed, the lane where the automatic driving device is located, weather information, or ambient light information.
  • The user's demand information for data presentation includes at least one of the following: a presentation perspective, a combined object at the time of presentation, or an operation instruction.
  • The presentation mode information includes the density of the point cloud, alone or in combination with at least one of the following: the size of the points, the objects to be presented in the point cloud, the color of the point cloud, or superimposed warning information.
  • Determining the presentation mode of the first point cloud data may include: determining the presentation mode according to the scene where the automatic driving device is located, the scene including a highway, a city street, a suburb, a mountain road, or a desert.
  • Presenting the first point cloud data in the presentation mode may then include: presenting the first point cloud data at different densities according to the scene, where the point cloud density corresponding to a city street is greater than that corresponding to a highway, a suburb, a mountain road, or a desert.
  • Point cloud data presented at different densities can be called low-density or high-density point clouds according to the density; the naming is of course not limited to these two, and finer-grained designations are possible.
  • A low-density point cloud is defined relative to a high-density one: it contains fewer points with larger spacing between them, and may also be called a sparse point cloud.
  • A high-density point cloud contains many, relatively closely spaced points, and may also be called a dense point cloud. Using point clouds of different densities in different scenarios minimizes the complexity of data processing while still meeting safety requirements, as the sketch below illustrates.
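  • A minimal sketch of scene-dependent density, assuming random subsampling and illustrative keep ratios (the patent only requires that a city street get a denser cloud than a highway, suburb, mountain road, or desert):

```python
import numpy as np

SCENE_KEEP_RATIO = {           # illustrative ratios, not prescribed by the patent
    "city_street": 1.0,
    "suburb": 0.5,
    "highway": 0.3,
    "mountain_road": 0.3,
    "desert": 0.1,
}

def downsample_for_scene(cloud: np.ndarray, scene: str) -> np.ndarray:
    """Randomly subsample an (N, 3) point cloud to the scene's target density."""
    ratio = SCENE_KEEP_RATIO.get(scene, 0.5)
    keep = max(1, int(len(cloud) * ratio))
    idx = np.random.choice(len(cloud), size=keep, replace=False)
    return cloud[idx]
```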
  • Determining the presentation mode of the first point cloud data may include: determining the presentation mode according to the driving speed of the automatic driving device and one or more speed thresholds.
  • Presenting the first point cloud data in the presentation mode may then include: presenting the point cloud at the density corresponding to the speed interval in which the driving speed falls.
  • There may be one speed threshold or multiple. Taking multiple as an example, speed thresholds for several gears can be set, with different gears corresponding to different point cloud densities: for example, 30 km/h as the first gear, 50 km/h as the second gear, and 90 km/h as the third gear. Below 30 km/h, a higher-density point cloud is used; from 30 km/h to 50 km/h, a medium-density point cloud; from 50 km/h to 90 km/h, a low-density point cloud; and above 90 km/h, an even lower-density point cloud. In other words, the point cloud density decreases gear by gear as the speed increases.
  • This implementation uses different point cloud presentations at different driving speeds, which ensures safety and reduces the amount of computation while improving the user's in-car experience; a minimal sketch of the gear mapping follows.
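  • A sketch using the example thresholds (30/50/90 km/h) from the text; the density labels are illustrative:

```python
SPEED_GEARS = [          # (upper threshold in km/h, density label)
    (30.0, "higher"),    # below 30 km/h
    (50.0, "medium"),    # 30-50 km/h
    (90.0, "low"),       # 50-90 km/h
]

def density_for_speed(speed_kmh: float) -> str:
    for threshold, density in SPEED_GEARS:
        if speed_kmh < threshold:
            return density
    return "lower"        # above the last threshold (90 km/h)

assert density_for_speed(25) == "higher"
assert density_for_speed(70) == "low"
assert density_for_speed(120) == "lower"
```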
  • Determining the presentation mode of the first point cloud data may include: determining the presentation mode according to the presentation perspective, where the presentation perspective includes a god's-eye perspective or a driver perspective.
  • Presenting the first point cloud data in the presentation mode may then include: when the presentation perspective is the god's-eye perspective, presenting the first point cloud data from the god's-eye perspective; when the presentation perspective is the driver's perspective, presenting the first point cloud data from the driver's perspective.
  • In this implementation, the point cloud presentation follows the viewing angle selected by the user: the god's-eye perspective displays the overall information more comprehensively, while the driver's perspective better matches the driver's intuitive experience. Presenting different viewing angles in different ways adapts to the display habits of different users; a minimal camera-preset sketch follows.
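  • A sketch of the two viewing angles as camera presets; the eye/look-at offsets are illustrative assumptions, not values from the patent:

```python
import numpy as np

CAMERA_PRESETS = {
    # top-down "god" view: camera high above the vehicle, looking straight down
    "god":    {"eye": np.array([0.0, 0.0, 60.0]), "look_at": np.array([0.0, 0.0, 0.0])},
    # driver view: camera at roughly head height in the cabin, looking ahead
    "driver": {"eye": np.array([0.4, 1.2, 1.4]),  "look_at": np.array([0.4, 20.0, 1.2])},
}

def camera_for_perspective(perspective: str) -> dict:
    """Return the eye/look-at pair used to render the point cloud."""
    return CAMERA_PRESETS.get(perspective, CAMERA_PRESETS["driver"])
```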
  • Determining the presentation mode of the first point cloud data may include: determining the presentation mode according to the combined object at the time of presentation, where the combined object includes a high-precision map, a head-up display (HUD), or augmented reality (AR).
  • Presenting the first point cloud data in the presentation mode may include: when the combined object is a high-precision map, presenting the first point cloud data superimposed on the high-precision map; when the combined object is a HUD, projecting the first point cloud data onto a first area of the windshield of the automatic driving device, the area of the first area being smaller than the area of the windshield; and when the combined object is AR, projecting the first point cloud data onto the entire windshield of the automatic driving device for presentation.
  • In this implementation, the point cloud data can be presented in combination with a high-precision map, a HUD, or an AR scene, which increases the flexibility of point cloud presentation; a minimal dispatch sketch follows.
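  • A sketch of the dispatch over the combined object; the rendering back-ends are stubbed out, and all function names are illustrative stand-ins:

```python
def overlay_on_map(cloud) -> None:
    print(f"superimposing {len(cloud)} points on the high-precision map")

def project_to_windshield(cloud, region: str) -> None:
    print(f"projecting {len(cloud)} points onto windshield region: {region}")

def present(cloud, combined_object: str) -> None:
    if combined_object == "hd_map":
        overlay_on_map(cloud)                              # combine with the map
    elif combined_object == "hud":
        project_to_windshield(cloud, region="first_area")  # sub-area of the windshield
    elif combined_object == "ar":
        project_to_windshield(cloud, region="full")        # entire windshield
```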
  • Determining the presentation mode of the first point cloud data may include: determining the presentation mode according to the user's operation instruction.
  • Presenting the first point cloud data in the presentation mode may then include: presenting the first point cloud data superimposed with the warning information corresponding to the operation instruction.
  • The warning can be conveyed by color, such as a red highlight, or by sound.
  • Alarming in different ways improves the assistance that point cloud presentation provides to automatic driving and further improves driving safety; a minimal sketch of a color-based warning overlay follows.
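  • A sketch of a color-based warning overlay; the (N, 3) xyz layout, boolean hazard mask, and grey/red palette are assumptions for illustration:

```python
import numpy as np

def overlay_warning(cloud: np.ndarray, hazard_mask: np.ndarray) -> np.ndarray:
    """Attach an RGB color to each point, painting hazardous points red."""
    colors = np.tile([[0.7, 0.7, 0.7]], (len(cloud), 1))   # default grey
    colors[hazard_mask] = [1.0, 0.0, 0.0]                  # red warning
    return np.hstack([cloud, colors])                      # (N, 6) xyzrgb
```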
  • A second aspect of the present application provides a terminal device, which has the function of implementing the first aspect or any one of its possible implementations.
  • This function can be realized by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the above functions, such as a receiving unit, a processing unit, and a sending unit.
  • The third aspect of the present application provides a terminal device. The terminal device includes at least one processor, a memory, a communication port, a display, and computer-executable instructions that are stored in the memory and run on the processor. When the computer-executable instructions are executed by the processor, the processor performs the method described in the first aspect or any one of its possible implementations.
  • The terminal device may be a vehicle-mounted device; the vehicle-mounted device may be preset on the vehicle body, and may be integrated with the vehicle, pluggable, or connected to the vehicle body in other ways.
  • The fourth aspect of the present application provides a computer-readable storage medium storing one or more computer-executable instructions; when the instructions are executed by a processor, the processor performs the method described in the first aspect or any one of its possible implementations.
  • The fifth aspect of the present application provides a computer program product (or computer program) storing one or more computer-executable instructions; when the instructions are executed by a processor, the processor performs the method described in the first aspect or any one of its possible implementations.
  • A sixth aspect of the present application provides a chip system, which includes a processor and is used to support a terminal device in implementing the functions involved in the first aspect or any one of its possible implementations.
  • The chip system may also include a memory, which is used to store the program instructions and data necessary for the terminal device.
  • The chip system may be composed of chips, or may include chips and other discrete devices.
  • For the technical effects of the second to sixth aspects or any of their possible implementations, refer to the technical effects of the first aspect or its possible implementations; details are not repeated here.
  • The embodiments of the present application can adaptively present the point cloud data related to the automatic driving device according to the driving information and/or the user demand information, without presenting all the data, which reduces the complexity of data processing.
  • FIG. 1 is a functional block diagram of an automatic driving device with automatic driving function provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an automatic driving system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of an embodiment of an automatic driving system provided by an embodiment of the present application.
  • FIG. 4 is an interface diagram of point cloud data presentation provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an embodiment of a data presentation method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of data conversion of lidar provided by an embodiment of the present application.
  • FIG. 7 is another interface diagram of point cloud data presentation provided by an embodiment of the present application.
  • FIG. 8 is another interface diagram of point cloud data presentation provided by an embodiment of the present application.
  • FIG. 9A to FIG. 9C are further interface diagrams of point cloud data presentation provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an interface of the driver's perspective provided by an embodiment of the present application.
  • FIG. 11 is another interface diagram of point cloud data presentation provided by an embodiment of the present application.
  • FIG. 12 is another interface diagram of point cloud data presentation provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an embodiment of a terminal device provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another embodiment of a terminal device provided by an embodiment of the present application.
  • The embodiment of the present application provides a data presentation method, which can adaptively present point cloud data related to a vehicle according to driving information and/or user demand information, thereby reducing the amount of computation. Detailed descriptions are given below.
  • The Internet of Vehicles refers to a dynamic mobile communication system in which vehicles interact with other vehicles, roads, people, and sensing equipment, enabling communication between vehicles and the public network. Through the interconnection of cars with other cars, people, and roads, it enables information sharing; it collects information on vehicles, roads, and the environment; processes, computes, shares, and securely publishes the multi-source information on an information network platform; provides effective guidance and supervision of vehicles according to different functional requirements; and provides professional multimedia and mobile Internet application services.
  • The automatic driving device is introduced below in conjunction with FIG. 1 and FIG. 2.
  • FIG. 1 is a functional block diagram of an automatic driving device 100 with automatic driving function provided by an embodiment of the present application.
  • the automatic driving device 100 is configured in a fully or partially automatic driving mode.
  • The automatic driving device 100 can control itself while in the automatic driving mode: it can determine the current state of the device and its surrounding environment through human operation, determine the possible behavior of at least one other automatic driving device in the surrounding environment, determine the confidence level corresponding to the possibility that the other automatic driving device performs the possible behavior, and control the automatic driving device 100 based on the determined information.
  • the automatic driving device 100 can be set to operate without interacting with a human.
  • the autonomous driving device 100 may include various subsystems, such as a traveling system 102, a sensor system 104, a control system 106, one or more peripheral devices 108, and a power supply 110, a computer system 112, and a user interface 116.
  • the automatic driving device 100 may include more or fewer sub-systems, and each sub-system may include multiple elements.
  • each sub-system and element of the automatic driving device 100 may be interconnected by wire or wirelessly.
  • the traveling system 102 may include components that provide power movement for the autonomous driving device 100.
  • the travel system 102 may include an engine 118, an energy source 119, a transmission 120, and wheels/tires 121.
  • The engine 118 may be an internal combustion engine, an electric motor, an air compression engine, or a combination of other types of engines, such as a hybrid engine composed of a gasoline engine and an electric motor, or a hybrid engine composed of an internal combustion engine and an air compression engine.
  • the engine 118 converts the energy source 119 into mechanical energy.
  • Examples of energy sources 119 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electricity.
  • the energy source 119 may also provide energy for other systems of the automatic driving device 100.
  • the transmission device 120 can transmit mechanical power from the engine 118 to the wheels 121.
  • the transmission device 120 may include a gearbox, a differential, and a drive shaft.
  • the transmission device 120 may also include other devices, such as a clutch.
  • the drive shaft may include one or more shafts that can be coupled to one or more wheels 121.
  • the sensor system 104 may include several sensors that sense information about the environment around the automatic driving device 100.
  • The sensor system 104 may include a positioning system 122 (which may be a global positioning system (GPS), a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 124, a radar 126, a laser rangefinder 128, and a camera 130.
  • The sensor system 104 may also include sensors that monitor the internal systems of the automatic driving device 100 (for example, an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors can be used to detect objects and their corresponding characteristics (position, shape, direction, speed, etc.). Such detection and recognition are key functions for the safe operation of the autonomous automatic driving device 100.
  • the positioning system 122 may be used to estimate the geographic location of the automatic driving device 100.
  • the IMU 124 is used to sense changes in the position and orientation of the automatic driving device 100 based on inertial acceleration.
  • the IMU 124 may be a combination of an accelerometer and a gyroscope.
  • the radar 126 may use radio signals to sense objects in the surrounding environment of the automatic driving device 100. In some embodiments, in addition to sensing the object, the radar 126 may also be used to sense the speed and/or direction of the object.
  • the laser rangefinder 128 can use laser light to sense objects in the environment where the automatic driving device 100 is located.
  • the laser rangefinder 128 may include one or more laser sources, laser scanners, and one or more detectors, as well as other system components.
  • the camera 130 may be used to capture multiple images of the surrounding environment of the automatic driving device 100.
  • the camera 130 may be a still camera or a video camera.
  • the control system 106 controls the operation of the automatic driving device 100 and its components.
  • the control system 106 may include various components, including a steering system 132, a throttle 134, a braking unit 136, a sensor fusion algorithm 138, a computer vision system 140, a route control system 142, and an obstacle avoidance system 144.
  • the steering system 132 is operable to adjust the forward direction of the automatic driving device 100.
  • In one embodiment, it may be a steering wheel system.
  • the throttle 134 is used to control the operating speed of the engine 118 and thereby control the speed of the automatic driving device 100.
  • the braking unit 136 is used to control the automatic driving device 100 to decelerate.
  • the braking unit 136 may use friction to slow down the wheels 121.
  • the braking unit 136 may convert the kinetic energy of the wheels 121 into electric current.
  • the braking unit 136 may also take other forms to slow down the rotation speed of the wheels 121 to control the speed of the automatic driving device 100.
  • the computer vision system 140 may be operable to process and analyze the images captured by the camera 130 in order to recognize objects and/or features in the surrounding environment of the autonomous driving device 100.
  • the objects and/or features may include traffic signals, road boundaries, and obstacles.
  • The computer vision system 140 may use object recognition algorithms, structure from motion (SFM) algorithms, video tracking, and other computer vision technologies.
  • the computer vision system 140 may be used to map the environment, track objects, estimate the speed of objects, and so on.
  • the route control system 142 is used to determine the driving route of the automatic driving device 100.
  • The route control system 142 may combine data from the sensor fusion algorithm 138, the positioning system 122, and one or more predetermined maps to determine the driving route for the automatic driving device 100.
  • the obstacle avoidance system 144 is used to identify, evaluate and avoid or otherwise cross over potential obstacles in the environment of the automatic driving device 100.
  • The control system 106 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
  • the automatic driving device 100 interacts with external sensors, other automatic driving devices, other computer systems, or users through the peripheral device 108.
  • the peripheral device 108 may include a wireless communication system 146, an onboard computer 148, a microphone 150, and/or a speaker 152.
  • the peripheral device 108 provides a means for the user of the autonomous driving apparatus 100 to interact with the user interface 116.
  • the onboard computer 148 may provide information to the user of the automatic driving device 100.
  • the user interface 116 can also operate the onboard computer 148 to receive user input.
  • the on-board computer 148 can be operated through a touch screen.
  • the peripheral device 108 may provide a means for the autonomous driving device 100 to communicate with other devices located in the vehicle.
  • the microphone 150 may receive audio (eg, voice commands or other audio input) from the user of the autonomous driving device 100.
  • the speaker 152 may output audio to the user of the automatic driving device 100.
  • the wireless communication system 146 may wirelessly communicate with one or more devices directly or via a communication network.
  • The wireless communication system 146 may use 3G cellular communication, such as code division multiple access (CDMA), EVDO, or global system for mobile communications (GSM)/general packet radio service (GPRS); 4G cellular communication, such as long term evolution (LTE); or 5G cellular communication.
  • the wireless communication system 146 may use WiFi to communicate with a wireless local area network (WLAN).
  • The wireless communication system 146 may communicate directly with a device using an infrared link, Bluetooth, or ZigBee, or using other wireless protocols such as various vehicle communication systems.
  • For example, the wireless communication system 146 may include one or more dedicated short-range communications (DSRC) devices, which may provide public and/or private data communication between vehicles and/or roadside stations.
  • the power supply 110 may provide power to various components of the automatic driving device 100.
  • the power source 110 may be a rechargeable lithium ion or lead-acid battery.
  • One or more battery packs of such batteries may be configured as a power source to provide power to various components of the automatic driving device 100.
  • the power source 110 and the energy source 119 may be implemented together, such as in some all-electric vehicles.
  • the computer system 112 may include at least one processor 113 that executes instructions 115 stored in a non-transitory computer readable medium such as the memory 114.
  • the computer system 112 may also be multiple computing devices that control individual components or subsystems of the automatic driving apparatus 100 in a distributed manner.
  • the processor 113 may be any conventional processor, such as a commercially available central processing unit (CPU). Alternatively, the processor may be a dedicated device such as an application specific integrated circuit (ASIC) or other hardware-based processor.
  • Although FIG. 1 functionally illustrates the processor, the memory, and other elements of the computer 110 in the same block, those of ordinary skill in the art should understand that the processor, computer, or memory may actually include multiple processors, computers, or memories that may or may not be stored in the same physical housing.
  • For example, the memory may be a hard disk drive or other storage medium located in a housing other than that of the computer 110. Therefore, a reference to a processor or computer is understood to include a reference to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described here, some components, such as steering components and deceleration components, may each have their own processor that performs only calculations related to that component's functions.
  • the processor may be located far away from the automatic driving device and wirelessly communicate with the automatic driving device.
  • some of the processes described herein are executed on a processor arranged in the autopilot device and others are executed by a remote processor, including taking the necessary steps to perform a single maneuver.
  • the memory 114 may include instructions 115 (for example, program logic), and the instructions 115 may be executed by the processor 113 to perform various functions of the automatic driving device 100, including those described above.
  • the memory 114 may also contain additional instructions, including those for sending data to, receiving data from, interacting with, and/or controlling one or more of the traveling system 102, the sensor system 104, the control system 106, and the peripheral device 108. instruction.
  • the memory 114 may also store data, such as road maps, route information, the position, direction, and speed of the automatic driving device, and other such automatic driving device data, as well as other information. Such information may be used by the autonomous driving device 100 and the computer system 112 during the operation of the autonomous driving device 100 in autonomous, semi-autonomous, and/or manual modes.
  • the user interface 116 is used to provide information to or receive information from the user of the automatic driving device 100.
  • the user interface 116 may include one or more input/output devices in the set of peripheral devices 108, such as a wireless communication system 146, an in-vehicle computer 148, a microphone 150, and a speaker 152.
  • the computer system 112 may control the functions of the automatic driving device 100 based on inputs received from various subsystems (for example, the traveling system 102, the sensor system 104, and the control system 106) and from the user interface 116. For example, the computer system 112 may utilize input from the control system 106 in order to control the steering unit 132 to avoid obstacles detected by the sensor system 104 and the obstacle avoidance system 144. In some embodiments, the computer system 112 is operable to provide control of many aspects of the autonomous driving device 100 and its subsystems.
  • one or more of the aforementioned components may be installed or associated with the automatic driving device 100 separately.
  • the memory 114 may exist partially or completely separately from the automatic driving device 100.
  • the aforementioned components may be communicatively coupled together in a wired and/or wireless manner.
  • FIG. 1 should not be construed as a limitation to the embodiments of the present application.
  • a self-driving car traveling on a road can recognize objects in its surrounding environment to determine the adjustment to the current speed.
  • the object may be other automatic driving devices, traffic control equipment, or other types of objects.
  • Each recognized object can be considered independently, and its characteristics, such as its current speed, acceleration, and distance from the automatic driving device, can be used to determine the speed to which the self-driving car is to be adjusted.
  • The automatic driving device 100, or a computing device associated with it, may predict the behavior of a recognized object based on the characteristics of the object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.).
  • Since the behaviors of the recognized objects depend on one another, all recognized objects can also be considered together to predict the behavior of a single recognized object.
  • the automatic driving device 100 can adjust its speed based on the predicted behavior of the recognized object.
  • an autonomous vehicle can determine what stable state the autonomous driving device will need to adjust to (for example, accelerate, decelerate, or stop) based on the predicted behavior of the object.
  • other factors may also be considered to determine the speed of the automatic driving device 100, such as the lateral position of the automatic driving device 100 on the traveling road, the curvature of the road, the proximity of static and dynamic objects, and so on.
  • The computing device can also provide instructions to modify the steering angle of the automatic driving device 100, so that the self-driving car follows a given trajectory and/or maintains a safe lateral and longitudinal distance from objects near it (for example, a car in an adjacent lane on the road).
  • The above-mentioned automatic driving device 100 may be a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a lawn mower, a recreational vehicle, an amusement park vehicle, construction equipment, a tram, a golf cart, a train, a trolley, etc.
  • the embodiments of the present application do not make special limitations.
  • FIG. 1 introduces a functional block diagram of the automatic driving device 100, and the automatic driving system in the automatic driving device 100 is introduced below.
  • Fig. 2 is a schematic structural diagram of an automatic driving system provided by an embodiment of the application.
  • FIG. 1 and FIG. 2 describe the automatic driving device 100 from different angles.
  • the computer system 101 in FIG. 2 is the computer system 112 in FIG. 1.
  • the computer system 101 includes a processor 103, and the processor 103 is coupled with a system bus 105.
  • the processor 103 may be one or more processors, where each processor may include one or more processor cores.
  • The computer system 101 also includes a display adapter (video adapter) 107, which can drive the display 109; the display 109 is coupled to the system bus 105.
  • the system bus 105 is coupled with an input/output (I/O) bus 113 through a bus bridge 111.
  • the I/O interface 115 is coupled to the I/O bus.
  • The I/O interface 115 communicates with various I/O devices, such as an input device 117 (e.g., a keyboard, a mouse, or a touch screen), a media tray 121 (e.g., a CD-ROM or a multimedia interface), a transceiver 123 (which can send and/or receive radio communication signals), a camera 155 (which can capture static scenes and dynamic digital video images), and an external USB interface 125.
  • the interface connected to the I/O interface 115 may be a USB interface.
  • the processor 103 may be any conventional processor, including a reduced instruction set computing (“RISC”) processor, a complex instruction set computing (“CISC”) processor, or a combination of the foregoing.
  • The processor may be a dedicated device such as an application-specific integrated circuit (ASIC).
  • the processor 103 may be a neural-network processing unit (NPU) or a combination of a neural-network processing unit and the foregoing traditional processors.
  • the processor 103 is mounted with a neural network processor.
  • the computer system 101 can communicate with the software deployment server 149 through the network interface 129.
  • the network interface 129 is a hardware network interface, such as a network card.
  • the network 127 may be an external network, such as the Internet, or an internal network, such as an Ethernet or a virtual private network (VPN).
  • the network 127 may also be a wireless network, such as a WiFi network, a cellular network, and so on.
  • The hard disk drive interface is coupled to the system bus 105 and is connected to the hard disk drive.
  • the system memory 135 is coupled to the system bus 105.
  • the data running in the system memory 135 may include the operating system 137 and application programs 143 of the computer system 101.
  • the operating system includes a shell (Shell) 139 and a kernel (kernel) 141.
  • the shell 139 is an interface between the user and the kernel of the operating system.
  • the shell 139 is the outermost layer of the operating system.
  • The shell 139 manages the interaction between the user and the operating system: it waits for the user's input, interprets the user's input to the operating system, and processes the operating system's various output results.
  • The kernel 141 is composed of the parts of the operating system that manage memory, files, peripherals, and system resources. Interacting directly with the hardware, the kernel typically runs processes, provides inter-process communication, and handles CPU time-slice management, interrupts, memory management, I/O management, and so on.
  • The application program 143 includes programs related to automatic driving, such as programs that manage the interaction between the automatic driving device and obstacles on the road, programs that control the driving route or speed of the automatic driving device, and programs that control the interaction between the automatic driving device 100 and other automatic driving devices on the road.
  • The application program 143 also exists on the system of a software deployment server (deploying server) 149. In one embodiment, when the application program 143 needs to be executed, the computer system 101 may download it from the software deployment server 149.
  • the sensor 153 is associated with the computer system 101.
  • the sensor 153 is used to detect the environment around the computer system 101.
  • the sensor 153 can detect animals, cars, obstacles, and crosswalks.
  • The sensor can also detect the environment around the above-mentioned animals, cars, obstacles, and crosswalks, for example: other animals appearing around an animal, the weather conditions, the brightness of the surrounding environment, etc.
  • the sensor may be a camera, an infrared sensor, a chemical detector, a microphone, etc.
  • the sensor 153 senses information at preset intervals when activated and provides the sensed information to the computer system 101 in real time or near real time.
  • The computer system 101 is used to determine the driving state of the automatic driving device 100 according to the sensor data collected by the sensor 153, determine the driving operation to be performed by the automatic driving device 100 according to the driving state and the current driving task, and send a control command corresponding to the driving operation to the control system 106 (FIG. 1).
  • The driving state of the automatic driving device 100 may include the driving conditions of the device itself, such as its heading, speed, position, and acceleration, as well as the state of its surrounding environment, such as the locations of obstacles, the locations and speeds of other vehicles, the locations of crosswalks, and traffic light signals.
  • the computer system 101 may include a task abstraction network and a shared policy network implemented by the processor 103.
  • The processor 103 determines the current automatic driving task; inputs at least one set of historical paths of the automatic driving task into the task abstraction network for feature extraction, obtaining a task feature vector that characterizes the task; determines, according to the sensor data collected by the sensor 153, a state vector representing the current driving state of the automatic driving device; and inputs the task feature vector and the state vector into the shared policy network for processing, obtaining the driving operation to be performed by the automatic driving device at the current moment, as sketched below.
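  • A minimal PyTorch sketch of the two-network pipeline described above; the layer sizes, pooling, and dimensions are illustrative assumptions, not the patent's architecture:

```python
import torch
import torch.nn as nn

class TaskAbstractionNet(nn.Module):
    """Encodes historical paths of a driving task into a task feature vector."""
    def __init__(self, path_dim: int, feat_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(path_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))
    def forward(self, paths: torch.Tensor) -> torch.Tensor:
        return self.encoder(paths).mean(dim=0)   # paths: (num_paths, path_dim)

class SharedPolicyNet(nn.Module):
    """Maps task feature vector + current state vector to a driving action."""
    def __init__(self, feat_dim: int = 32, state_dim: int = 16, action_dim: int = 4):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(feat_dim + state_dim, 64), nn.ReLU(),
                                    nn.Linear(64, action_dim))
    def forward(self, task_feat: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        return self.policy(torch.cat([task_feat, state]))

task_feat = TaskAbstractionNet(path_dim=8)(torch.randn(5, 8))  # 5 historical paths
action = SharedPolicyNet()(task_feat, torch.randn(16))         # current state vector
```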
  • the computer system 101 may be located far away from the automatic driving device, and may perform wireless communication with the automatic driving device.
  • the transceiver 123 can send automatic driving tasks, sensor data collected by the sensor 153, and other data to the computer system 101; and can also receive control instructions sent by the computer system 101.
  • the automatic driving device can execute the control instructions from the computer system 101 received by the transceiver, and perform corresponding driving operations.
  • some of the processes described herein are executed on a processor provided in an autonomous vehicle, and others are executed by a remote processor, including taking actions required to perform a single manipulation.
  • The automatic driving device involves a large amount of data processing during the automatic driving process, and this processing usually requires a cloud server to complete.
  • During the automatic driving process, it is also necessary to drive according to the instructions of the roadside traffic system. It can be seen that the automatic driving device needs to rely on an automatic driving system such as the one shown in FIG. 3 to realize automatic driving.
  • the cloud server communicates with the autonomous vehicle through the network, and the autonomous vehicle communicates with the roadside traffic system.
  • the data collection device on the self-driving vehicle collects surrounding environment information (for example, red-light information in the roadside traffic information, information about other nearby vehicles, and so on) and the vehicle's own information and reports them to the cloud server; after processing the reported data, the cloud server sends driving instructions to the autonomous vehicle.
  • the automatic driving device can also present the surrounding environment of the automatic driving device to the user through a human-computer interaction interface.
  • the embodiment of the present application provides a data presentation method, which can adaptively present point cloud data related to an automatic driving device according to driving information or/and user demand information.
  • An example of the point cloud data presented by the autonomous vehicle during the autonomous driving process can be understood with reference to FIG. 4.
  • the device for presenting point cloud data may be a terminal device on an automatic driving device, such as an onboard terminal on a car or an onboard terminal on an airplane.
  • an embodiment of the data presentation method provided by the embodiment of the present application may include:
  • the driving information of the automatic driving device may include at least one of the following: driving position, driving speed, lane where the automatic driving device is located, weather information or ambient light information.
  • the user's demand information for data presentation may include at least one of the following: a presentation perspective, a combined object during presentation, or an operation instruction.
  • the first point cloud data is data expressed in the form of multiple points.
  • the information of the presentation mode includes the density of the point cloud, or a combination of the density of the point cloud and at least one of the following information: the size of the point cloud, the object to be presented in the point cloud, the color of the point cloud, or superimposed warning information.
  • during automatic driving, the automatic driving device adaptively presents the corresponding point cloud data according to the driving information and/or the user's demand information for data presentation, and does not present all of the detected data, which reduces the complexity of data processing and data presentation.
  • the method further includes: receiving first data of the automatic driving device and first environmental data around the automatic driving device from a data collection device.
  • the data collection device may be one of a lidar, a triangulation ranging sensor, a structured light sensor, or a stereo vision sensor, or a combination of two or more of them.
  • lidar calculates the distance from the sensor to the target surface by emitting laser pulses and measuring the laser transmission time, and its range of action can reach several kilometers.
  • Typical lidars are divided into scanning and area array lidars.
  • the triangulation ranging sensor emits a beam of laser light onto the surface of an object through a laser transmitter and uses a camera to record the position of the light spot; by solving the triangle formed by the laser transmitter, the light spot, and the camera, the three-dimensional coordinates of the light spot on the object surface can be obtained.
  • the structured light sensor projects a spot with a fixed pattern onto the surface of the object, and calculates the position information of the surface points by measuring the deformation of the spot pattern.
  • the stereo vision sensor uses two or more cameras to obtain two-dimensional images of the same object and calculates its spatial coordinates by finding corresponding points between the two images; a sketch of this triangulation follows.
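The following is a minimal sketch of that triangulation, assuming an idealized rectified camera pair with a pinhole model; the focal length, baseline, principal point, and the helper name stereo_point are illustrative assumptions, not taken from this application.

```python
def stereo_point(u, v, disparity_px, f_px=700.0, baseline_m=0.12, cx=640.0, cy=360.0):
    """Triangulate one 3-D point from a matched pixel pair of a rectified stereo rig.

    (u, v) is the pixel in the left image; disparity_px = u_left - u_right.
    f_px, baseline_m, cx, cy are assumed calibration values.
    """
    if disparity_px <= 0:
        raise ValueError("a valid match needs a positive disparity")
    z = f_px * baseline_m / disparity_px  # depth from similar triangles
    x = (u - cx) * z / f_px               # back-project through the pinhole model
    y = (v - cy) * z / f_px
    return (x, y, z)

# Example: a match with 14 px of disparity lies about 6 m away.
print(stereo_point(800.0, 400.0, 14.0))
```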
  • the first data refers to the data of the automatic driving device itself. Taking the automatic driving device as a car as an example, the first data may be the body data of the car or the scene data in the car.
  • the first environmental data may be road surface data around the automatic driving device, other vehicle data, pedestrian data, or building data.
  • in the embodiments of this application, the driving information of the automatic driving device and/or the user's demand information for data presentation are used to filter the data.
  • the filtering may be performed before the collected data is converted into point cloud data, or after the collected data is converted into point cloud data; both options are introduced separately below.
  • the foregoing step 202 may include:
  • the first data and the first environmental data are filtered according to the driving information of the automatic driving device and/or the user's demand information for data presentation, so as to obtain second data of the automatic driving device to be presented and second environmental data around the automatic driving device.
  • the second data and the second environmental data are converted into first point cloud data.
  • the collected data is filtered first, which can reduce the amount of data converted into point cloud data, thereby reducing the amount of data calculation.
  • 2. the collected data is first converted into point cloud data, and filtering is done afterwards; a sketch contrasting the two orders follows the steps below.
  • the foregoing step 202 may include:
  • the first data and the first environmental data are converted into second point cloud data;
  • the second point cloud data is filtered according to the driving information of the automatic driving device and the user's demand information for data presentation, so as to obtain the first point cloud data to be presented.
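The two filtering orders can be sketched as follows; this is a minimal illustration in which a simple range screen stands in for filtering by the driving information and/or the user's demand information, and all names are hypothetical.

```python
def convert(sample):
    # Placeholder for the distance/angle-to-coordinates conversion
    # described below with FIG. 6.
    return sample

def within_range(distance_m, max_range_m=50.0):
    # Stand-in screen for filtering by driving info / user demand info.
    return distance_m <= max_range_m

def build_cloud(samples, pre_filter=True):
    """samples: list of (distance_m, horizontal_deg, vertical_deg) tuples."""
    if pre_filter:
        # Option 1: screen the raw data first, so less data is converted.
        return [convert(s) for s in samples if within_range(s[0])]
    # Option 2: convert all collected data, then screen the point cloud.
    return [p for p in map(convert, samples) if within_range(p[0])]
```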
  • regardless of whether filtering is performed before or after conversion, the conversion process for point cloud data is the same.
  • the process of converting the collected data into point cloud data is introduced below.
  • taking lidar as an example, the laser coordinate system is shown in FIG. 6: the X axis passes through the center point of the lidar and points straight ahead; the Y axis passes through the center point horizontally along the lidar plane, perpendicular to the X axis; and the Z axis passes through the center point, perpendicular to the plane formed by the X and Y axes.
  • after lidar processing, the distance S from each laser point to the corresponding laser, the horizontal angle δ, the vertical angle β, and the reflection intensity are obtained directly.
  • from the angles δ and β and the distance S, the coordinates (x, y, z) of each laser point can be calculated, completing the conversion from one piece of collected data to one point of point cloud data; a sketch of this conversion follows.
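A minimal sketch of this conversion follows, assuming the horizontal angle δ is measured from the X axis toward the Y axis and the vertical angle β is the elevation above the X-Y plane; the exact angle conventions of a given lidar may differ.

```python
import math

def laser_point_to_xyz(distance_s, delta_deg, beta_deg):
    """Convert one lidar return (S, horizontal angle, vertical angle)
    into (x, y, z) in the laser coordinate system of FIG. 6."""
    delta, beta = math.radians(delta_deg), math.radians(beta_deg)
    horizontal = distance_s * math.cos(beta)  # projection onto the X-Y plane
    return (horizontal * math.cos(delta),     # x: straight ahead
            horizontal * math.sin(delta),     # y: along the lidar plane
            distance_s * math.sin(beta))      # z: height

# Example: a return 25 m away, 30 degrees to the left, 5 degrees up.
print(laser_point_to_xyz(25.0, 30.0, 5.0))
```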
  • when determining the presentation mode of the first point cloud data, the presentation mode may be determined from the driving information, or from the user's demand information for data presentation; the options are introduced separately below.
  • when the driving information includes the driving position, the determining of the presentation mode of the first point cloud data may include: determining, according to the driving position, the scene where the automatic driving device is located, the scene including a highway, a city street, a suburb, a mountain road, or a desert; and determining the presentation mode of the first point cloud data according to that scene;
  • correspondingly, presenting the first point cloud data in the presentation mode may include: presenting the first point cloud data at different densities according to the scene where the automatic driving device is located, where the density of the point cloud corresponding to the city street is greater than the density of the point cloud corresponding to the highway, the suburb, the mountain road, or the desert.
  • the scene where the autonomous vehicle is located can be determined by combining the driving position with positioning technology, so that the presentation can be adapted to different scenes.
  • point cloud data presented at different densities can be called a low-density point cloud or a high-density point cloud according to its density; of course, the naming is not limited to these two types, and more names are possible.
  • in a highway scene, the route is relatively fixed, there are generally no pedestrians crossing a highway, and the relevant information is mainly vehicle and lane information; houses and trees on both sides of the highway do not affect driving, so the scenery on both sides does not need to be displayed precisely.
  • to reduce the complexity of collecting surrounding information for a vehicle traveling at high speed, information about other vehicles can also be presented using a low-density point cloud.
  • the presentation form of the low-density point cloud can be understood in conjunction with FIG. 7.
  • in city streets, especially congested urban driving scenes, pedestrians weave through traffic, so moving pedestrians relevant to the vehicle need to be presented; in such scenes, finer information is better for driving safety, so a high-density point cloud can be used for rendering.
  • the presentation form of the high-density point cloud can be understood in conjunction with Figure 8.
  • a low-density point cloud is defined relative to a high-density point cloud.
  • in a low-density point cloud, the number of points is small and the spacing between points is large; it can also be called a sparse point cloud.
  • a high-density point cloud has a large number of closely spaced points and can also be called a dense point cloud; a sketch of thinning a dense cloud into a sparse one follows.
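As an illustration of the relationship between dense and sparse clouds, the following sketch thins a dense cloud by stride sampling; the stride values are hypothetical, and a production system might instead use voxel-grid or range-dependent sampling.

```python
# Hypothetical per-scene strides: keep every n-th point of the dense cloud.
SCENE_STRIDE = {"city_street": 1, "highway": 4, "suburb": 4, "mountain_road": 4, "desert": 4}

def thin_cloud(dense_cloud, scene):
    """Return a sparse (low-density) cloud for scenes that do not need detail."""
    return dense_cloud[::SCENE_STRIDE.get(scene, 1)]

dense = [(float(i), 0.0, 0.0) for i in range(100)]
print(len(thin_cloud(dense, "city_street")))  # 100 points: dense cloud kept
print(len(thin_cloud(dense, "highway")))      # 25 points: sparse cloud
```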
  • when the driving information includes the driving speed, the determining of the presentation mode of the first point cloud data may include: determining the presentation mode of the first point cloud data according to the driving speed of the automatic driving device;
  • correspondingly, presenting the first point cloud data in the presentation mode may include: presenting the first point cloud data at a first density when the driving speed is below a speed threshold, and at a second density when the driving speed is above the threshold, where the first density is greater than the second density.
  • during the vehicle start-up phase, the vehicle has not yet begun to move and safety prompts are unnecessary, so the point cloud information may not be presented, as shown in FIG. 9A.
  • as the vehicle accelerates after starting, point cloud data begins to be presented.
  • when the driving speed is low, the situation around the vehicle changes slowly, automatic driving involves a small amount of computation, and precisely displaying the surroundings does not affect safety; when the speed is high, the vehicle's situation changes quickly, automatic driving involves a large amount of computation, and precisely displaying the surroundings may affect safety.
  • therefore, below a speed threshold, for example 30 km/h, a high-density point cloud is used to present surrounding objects and dynamics, as shown in FIG. 9B; when the driving speed exceeds 30 km/h, a low-density point cloud can be used, as shown in FIG. 9C.
  • of course, there is not necessarily only one speed threshold; speed thresholds for multiple tiers can be set.
  • the speed thresholds of different tiers correspond to different point cloud densities; for example, 30 km/h is the first tier, 50 km/h is the second tier, and 90 km/h is the third tier.
  • when the speed is below 30 km/h, a higher-density point cloud is used to present the point cloud data.
  • when the speed is between 30 km/h and 50 km/h, the point cloud data is presented as a medium-density point cloud.
  • when the speed is between 50 km/h and 90 km/h, the point cloud data is presented as a low-density point cloud.
  • when the speed is above 90 km/h, the point cloud data is presented as an even lower-density point cloud; the point cloud density corresponding to these successive tiers decreases step by step.
  • this embodiment adopts different point cloud presentation modes for different driving speeds, which ensures safety and reduces the amount of computation while improving the user's in-car experience; a sketch of the tier lookup follows.
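A minimal sketch of the tier lookup described above; the thresholds mirror the 30/50/90 km/h example, while the density labels are illustrative.

```python
SPEED_TIERS_KMH = [(30.0, "high"), (50.0, "medium"), (90.0, "low")]

def density_for_speed(speed_kmh):
    """Map the current driving speed to a point cloud density level."""
    for threshold, density in SPEED_TIERS_KMH:
        if speed_kmh < threshold:
            return density
    return "lower"  # above the highest tier

print(density_for_speed(25.0))   # high
print(density_for_speed(70.0))   # low
print(density_for_speed(100.0))  # lower
```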
  • when the user's demand information for data presentation includes the presentation perspective, the determining of the presentation mode of the first point cloud data may include: determining the presentation mode of the first point cloud data according to the presentation perspective, where the presentation perspective includes a god's-eye perspective or a driver's perspective;
  • correspondingly, presenting the first point cloud data in the presentation mode may include: presenting the first point cloud data from the god's-eye perspective when the presentation perspective is the god's-eye perspective, and presenting the first point cloud data from the driver's perspective when the presentation perspective is the driver's perspective.
  • the point cloud presentation mode differs according to the perspective selected by the user.
  • FIGS. 9A to 9C are all presented from the god's-eye perspective, which displays the overall information more comprehensively; in practice, some users are more accustomed to the driver's perspective, and FIG. 10 shows point cloud data presented from the driver's perspective. Different perspectives are presented differently and can suit the display habits of different users.
  • when the user's demand information for data presentation includes the combined object during presentation, the determining of the presentation mode of the first point cloud data may include: determining the presentation mode according to the combined object during presentation, where the combined object includes a high-precision map, a head-up display (HUD), or augmented reality (AR);
  • correspondingly, presenting the first point cloud data in the presentation mode may include: combining the first point cloud data into the high-precision map for presentation when the combined object is a high-precision map; projecting the first point cloud data onto a first area of the windshield of the automatic driving device when the combined object is a HUD, where the area of the first area is smaller than that of the windshield; and projecting the first point cloud data onto the entire windshield of the automatic driving device when the combined object is AR.
  • the point cloud data can be presented in combination with a high-precision map, HUD or AR scene, which increases the flexibility of point cloud presentation.
  • when the user's demand information for data presentation includes the operation instruction, the determining of the presentation mode of the first point cloud data may include: determining the presentation mode according to the operation instruction, where the operation instruction includes turning, lane change, or reversing, and the presentation mode includes superimposed warning information;
  • correspondingly, the first point cloud data superimposed with the warning information corresponding to the operation instruction is presented.
  • the warning information can be conveyed by a color, such as a red warning, or by an audible alarm.
  • during driving, the vehicle may need to change lanes; relevant warning information can be presented based on the current vehicle's speed and the speeds of the vehicles ahead of and behind it in the target lane. For example, when the vehicle behind in the target lane is approaching quickly and conflicts with the lane change, warning point cloud information is displayed on the corresponding side of the current vehicle, as shown in FIG. 11.
  • the warning point cloud information can be presented as a colored point cloud, such as red, or as a warning point cloud with a flashing reminder function; a toy sketch of such an overlay follows.
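A toy sketch of superimposing such a warning follows; the side test, colors, and the +Y-is-left convention are assumptions for illustration only.

```python
def overlay_lane_change_warning(cloud, warn_side="left"):
    """Tag each (x, y, z) point with a color; points on the warning side are red.

    Assumes the laser frame of FIG. 6 with +y pointing to the vehicle's left.
    Returns (x, y, z, rgb_hex) tuples for the renderer.
    """
    red, white = 0xFF0000, 0xFFFFFF
    colored = []
    for x, y, z in cloud:
        on_warn_side = y > 0 if warn_side == "left" else y < 0
        colored.append((x, y, z, red if on_warn_side else white))
    return colored

print(overlay_lane_change_warning([(5.0, 1.2, 0.0), (5.0, -2.0, 0.0)]))
```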
  • the warning prompt in the parking scene works as follows: while the autonomous vehicle is reversing, objects sensed within the range affected by the reversing are presented, and, combined with the vehicle's current reversing speed and reversing angle, dangerous objects are highlighted through the point cloud as a reminder.
  • as shown in FIG. 12, in the parking scene, information about available parking spaces is displayed.
  • the method of presenting warning information in the point cloud improves the point cloud's auxiliary effect on automatic driving and further improves driving safety.
  • the embodiments of this application have introduced the determination and presentation of point cloud data in terms of driving position, driving speed, presentation perspective, combined objects during presentation, and operation instructions.
  • it should be noted that the driving position, driving speed, presentation perspective, combined objects during presentation, and operation instructions can be combined to determine the point cloud data and the presentation mode.
  • moreover, other driving information of the automatic driving device and/or user demand information for data presentation can also determine the point cloud data and the presentation mode, either alone or in combination with the above factors; the combinations are not enumerated one by one in the embodiments of this application.
  • the embodiments of this application adapt the point cloud presentation mechanism to users' different needs and driving conditions, so that users can make sense of the presented external environment in different scenarios.
  • the point cloud can play a key auxiliary role in scenarios where the user's visual ability is weak, and will therefore play an important role in intelligent driving.
  • the method for presenting point cloud data can be applied not only to the field of automatic driving but also to scenarios that require target recognition, obstacle detection, and similar presentation, such as safe city, security, and assistance for the elderly and the disabled.
  • an embodiment of the terminal device 30 provided in the embodiment of the present application may include:
  • the processing unit 301 is configured to obtain driving information of the automatic driving device and/or user demand information for data presentation, and to determine, according to the driving information and/or the user demand information, first point cloud data related to the automatic driving device and a presentation mode of the first point cloud data, where the first point cloud data is data expressed in the form of multiple points.
  • the presentation unit 302 is configured to present the first point cloud data in the presentation manner.
  • during automatic driving, the automatic driving device adaptively presents the corresponding point cloud data according to the driving information and/or the user's demand information for data presentation, and does not present all of the detected data, which reduces the complexity of data processing and data presentation.
  • the receiving unit 303 is configured to receive first data of the automatic driving device and first environmental data around the automatic driving device from a data collection device;
  • the processing unit 301 is used to:
  • the first data and the first environmental data are filtered according to the driving information of the automatic driving device and/or the user's demand information for data presentation, so as to obtain second data of the automatic driving device to be presented and second environmental data around the automatic driving device.
  • the second data and the second environmental data are converted into first point cloud data.
  • the terminal device 30 further includes:
  • the processing unit 301 is used to:
  • convert the first data and the first environmental data into second point cloud data, and filter the second point cloud data according to the driving information of the automatic driving device and the user's demand information for data presentation, so as to obtain the first point cloud data to be presented.
  • the driving information includes at least one of the following: driving position, driving speed, lane in which the automatic driving device is located, weather information or ambient light information;
  • the user's demand information for data presentation includes at least one of the following: a presentation perspective, a combined object during presentation, or an operation instruction;
  • the information of the presentation mode includes the density of the point cloud, or a combination of the density of the point cloud and at least one of the following information: the size of the point cloud, the object to be presented in the point cloud, the color of the point cloud, or superimposed warning information.
  • when the driving information includes the driving position, the processing unit 301 is configured to: determine, according to the driving position, the scene where the automatic driving device is located, the scene including a highway, a city street, a suburb, a mountain road, or a desert; and determine the presentation mode of the first point cloud data according to that scene;
  • the presentation unit 302 is configured to present the first point cloud data at different densities according to the scene where the automatic driving device is located, where the density of the point cloud corresponding to the city street is greater than the density of the point cloud corresponding to the highway, the suburb, the mountain road, or the desert.
  • when the driving information includes the driving speed, the processing unit 301 is configured to determine the presentation mode of the first point cloud data according to the driving speed of the automatic driving device;
  • the presentation unit 302 is configured to present the first point cloud data at a first density when the driving speed is below a speed threshold, and at a second density when the driving speed is above the speed threshold, where the first density is greater than the second density.
  • the processing unit 301 is configured to determine the presentation mode of the first point cloud data according to the presentation perspective, where the presentation perspective includes a god's-eye perspective or a driver's perspective;
  • the presentation unit 302 is configured to: present the first point cloud data from the god's-eye perspective when the presentation perspective is the god's-eye perspective; and present the first point cloud data from the driver's perspective when the presentation perspective is the driver's perspective.
  • the processing unit 301 is configured to determine the presentation mode of the first point cloud data according to the combined object during presentation, where the combined object includes a high-precision map, a head-up display (HUD), or augmented reality (AR);
  • the presentation unit 302 is configured to: combine the first point cloud data into the high-precision map for presentation when the combined object during presentation is a high-precision map; project the first point cloud data onto a first area of the windshield of the automatic driving device when the combined object is a HUD, where the area of the first area is smaller than that of the windshield; and project the first point cloud data onto the entire windshield of the automatic driving device when the combined object is AR.
  • the processing unit 301 is configured to determine the presentation mode of the first point cloud data according to the operation instruction, where the operation instruction includes turning, lane change, or reversing, and the presentation mode includes superimposed warning information;
  • the presentation unit 302 is configured to present the first point cloud data superimposed with the warning information corresponding to the operation instruction.
  • FIG. 14 is a schematic structural diagram of another device according to an embodiment of this application; the device is a terminal device 40.
  • the terminal device 40 may include a processor 401 (for example, a CPU), a memory 402, a display 404, and a projection device 403.
  • the display 404 and the projection device 403 are coupled to the processor 401, and the processor 401 controls the sending action of the display 404 and the receiving action of the projection device 403.
  • the memory 402 may include a high-speed RAM memory and may also include a non-volatile memory (NVM), such as at least one disk memory.
  • the memory 402 may store various instructions for completing various processing functions and implementing the method steps of the embodiments of this application.
  • the terminal device involved in the embodiment of the present application may further include one or more of a power supply 405 and a communication port 406.
  • the components described in FIG. 14 may be connected through a communication bus or in other ways, which is not limited in the embodiments of this application.
  • the communication bus is used to realize the communication connection between the components.
  • the aforementioned communication port 406 is used to implement connection and communication between the terminal device and other peripherals.
  • the processor 401 in the terminal device may perform the actions performed by the processing unit 301 in FIG. 13, the communication port 406 may perform the actions performed by the receiving unit 303 in FIG. 13, and the display 404 and the projection device 403 may perform the actions performed by the presentation unit 302 in FIG. 13; the implementation principles and technical effects are similar and are not repeated here.
  • this application further provides a chip system, including a processor configured to support the aforementioned terminal device in implementing its functions, for example, receiving or processing the data involved in the foregoing method embodiments.
  • the chip system further includes a memory, and the memory is used to store necessary program instructions and data of the terminal device.
  • the chip system can be composed of chips, or include chips and other discrete devices.
  • another embodiment provides a computer-readable storage medium storing computer-executable instructions; when at least one processor of a device executes the computer-executable instructions, the device performs the methods described in the embodiments of FIG. 5 to FIG. 12.
  • another embodiment provides a computer program product including computer-executable instructions stored in a computer-readable storage medium; at least one processor of a device can read the computer-executable instructions from the computer-readable storage medium, and executing them causes the device to perform the methods described in the embodiments of FIG. 5 to FIG. 12.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division into units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiments.
  • the functional units in the various embodiments of the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or a part of the technical solutions, can be embodied in the form of a software product; the computer software product is stored in a storage medium.
  • it includes several instructions to make a computer device (which may be a personal computer, a server, or a network device) execute all or some of the steps of the methods described in the embodiments of this application.
  • the aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Traffic Control Systems (AREA)

Abstract

This application discloses a data presentation method applied to the field of automatic driving technologies. The method includes: obtaining driving information of an automatic driving device and/or user demand information for data presentation; determining, according to the driving information of the automatic driving device and/or the user demand information for data presentation, first point cloud data related to the automatic driving device and a presentation mode of the first point cloud data, where the first point cloud data is data expressed in the form of multiple points; and presenting the first point cloud data in the presentation mode. According to the embodiments of this application, point cloud data related to the automatic driving device can be presented adaptively based on the driving information or/and the user demand information, without presenting all of the data, which reduces the complexity of data processing.

Description

一种数据呈现的方法及终端设备
本申请要求于2019年9月25日提交中国专利局、申请号为201910916213.1、发明名称为“一种数据呈现的方法及终端设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及自动驾驶技术领域,具体涉及一种数据呈现的方法及终端设备。
背景技术
自动驾驶汽车设置有感知系统,会感知所处的环境,也会在车载终端上显示感知系统所感知到的信息。
目前能自动驾驶对于周围环境的感知主要有激光雷达、三角测距、结构光、立体视觉等方式。以激光雷达为例,激光雷达采用非接触、主动测量的方式直接获取被扫描地面和物体的三维表面数据,受外界环境影响很小,具有分辨率高、抗干扰能力强以及测距精度高等优点,能够用来精准地建立场景的三维点云图,并进一步进行场景中的目标识别,以检测障碍物,为无人驾驶汽车的感知系统提供了极为丰富的环境信息。
车载终端会对激光雷达所采集到的所有信息进行处理并在车载终端上呈现,这无疑存在着较大的计算开销。
发明内容
本申请实施例提供一种数据呈现的方法,可以根据驾驶信息或/和用户需求信息自适应呈现与自动驾驶装置相关的点云数据。
本申请第一方面提供一种数据呈现的方法,可以包括:获取自动驾驶装置的行驶信息和/或用户对数据呈现的需求信息;根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,以及确定所述第一点云数据的呈现方式,所述第一点云数据为通过多个点的形式表示的数据;以所述呈现方式呈现所述第一点云数据。
该第一方面中,获取自动驾驶装置的行驶信息和/或用户对数据呈现的需求信息可以描述为获取自动驾驶装置的行驶信息和用户对数据呈现的需求信息中的至少一种。对应地,根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,以及确定所述第一点云数据的呈现方式,可以描述为根据所述自动驾驶装置的行驶信息和所述用户对数据呈现的需求信息中的至少一种确定与所述自动驾驶装置相关的第一点云数据,以及确定所述第一点云数据的呈现方式。自动驾驶装置的行驶信息和/或用户对数据呈现的需求信息包含三种情况,分别为:A:自动驾驶装置的行驶信息;B:用户对数据呈现的需求信息;C:自动驾驶装置的行驶信息和用户对数据呈现的需求信息。若获取步骤中获取了自动驾驶装置的行驶信息,则确定第一点云数据时就根据自动驾驶装置的行驶信息确定第一点云数据;若获取步骤中获取了用户对数据呈现的需求信息,则确定第一点云数据时就根据用户对数据呈现的需求信息确定第一点云数据;若获取步骤中获取了自动驾驶装置的行驶信息和用户对数据呈现的需求信息,则确定第一点云数据时就根据自动驾驶装置的行驶信息和用户对数据呈现的需求信息确定第一点云数 据。同理,确定第一点云数据的呈现方式也是如此。该第一方面中,可以根据驾驶信息或/和用户需求信息自适应呈现与自动驾驶装置相关的点云数据,不需要呈现全部的数据,降低了数据处理的复杂度。
在第一方面一种可能的实现方式中,该方法还可以包括:
从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
上述步骤:根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,可以包括:
根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息对所述第一数据和所述第一环境数据做筛选,以得到待呈现的所述自动驾驶装置的第二数据以及所述自动驾驶装置周围的第二环境数据;
将所述第二数据和所述第二环境数据转换为第一点云数据。
该种可能的实现方式中,数据采集装置可以是激光雷达、三角测距传感器、结构光传感器、立体视觉传感器中的一种或者是两种或多种的结合。第一数据指的是自动驾驶装置自身的数据,以自动驾驶装置是汽车为例,该第一数据可以是汽车的车体数据或车内的场景数据等。第一环境数据可以是自动驾驶装置周围的路面数据、其他车辆数据、行人数据或者建筑物数据等。该种可能的实现方式中,对采集到的数据先做筛选,可以减少转换为点云数据的数据量,从而降低了数据的计算量。
在第一方面一种可能的实现方式中,该方法还可以包括:
从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
上述步骤:根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,可以包括:
将所述第一数据和所述第一环境数据转换为第二点云数据;
根据所述自动驾驶装置的行驶信息和所述用户对数据呈现的需求信息对所述第二点云数据做筛选,以得到待呈现的第一点云数据。
该种可能的实现方式相对于上一种可能的实现方式只是云数据转换的时机不同,其他内容都可以参阅上一种可能的实现方式进行理解。
在第一方面一种可能的实现方式中,所述行驶信息包括以下至少一项:行驶位置、行驶速度、所述自动驾驶装置所处的车道、天气信息或环境光信息;
所述用户对数据呈现的需求信息包括以下至少一项:呈现视角、呈现时的结合对象或操作指令;
所述呈现方式的信息中包括点云的密度,或者点云的密度与以下信息中至少一项的组合:点云的大小、点云中需要呈现的对象、点云的颜色或叠加预警信息。
在第一方面一种可能的实现方式中,当所述行驶信息包括所述行驶位置时,上述步骤:确定所述第一点云数据的呈现方式,可以包括:
根据所述行驶位置确定所述自动驾驶装置所处的场景,所述场景包括高速公路、城市 街道、郊区、山路或沙漠;
根据所述自动驾驶装置所处的场景确定所述第一点云数据的呈现方式;
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
按照所述自动驾驶装置所处的场景以不同的密度呈现所述第一点云数据,所述城市街道对应的点云的密度大于所述高速公路、所述郊区、所述山路或所述沙漠对应的点云的密度。
该种可能的实现方式中,以不同的密度呈现的点云数据可以根据密度的情况称为低密度点云或高密度点云,当然不限于这两种,针对点云还可以有更多的称呼。低密度点云与高密度点云是相对的,低密度点云中点云的点数量较少,点与点之前的间距也较大,也可以称为稀疏点云。高密度点云中点云的点数量较多,比较密集,也可以称为密集点云。采用不同密度的点云呈现不同场景的数据在满足安全性需要的前提下,可以尽量减少数据处理的复杂度。
在第一方面一种可能的实现方式中,当所述行驶信息包括所述行驶速度时,上述步骤:确定所述第一点云数据的呈现方式,可以包括:
根据所述自动驾驶装置的行驶速度确定所述第一点云数据的呈现方式;
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
当所述行驶速度低于速度阈值时,采用第一密度呈现所述第一点云数据,当所述行驶速度高于所述速度阈值时,采用第二密度呈现所述第一点云数据,所述第一密度大于所述第二密度。
该种可能的实现方式中,速度阈值可以是一个也可以是多个,以多个为例,可以设置多个档位的速度阈值,不同档位的速度阈值对应的点云密度不同,例如:30km/h是第一档位,50km/h是第二档位,90km/h是第三档位,当速度是在30km/h以下时采用较高密度的点云形式呈现点云数据。当速度在30km/h至50km/h时,采用中等密度的点云形式呈现点云数据。当速度在50km/h至90km/h时,采用低密度的点云形式呈现点云数据。当速度是在90km/h以上时采用更低密度的点云形式呈现点云数据。上述不同档位对应的点云数据的密度越来越小。该种可能的实现方式根据行驶速度不同而采用不同的点云呈现方式,在保证安全性减少计算量的同时,还提升了用户的用车体验。
在第一方面一种可能的实现方式中,当所述用户对数据呈现的需求信息包括所述呈现视角时,上述步骤:确定所述第一点云数据的呈现方式,可以包括:
根据所述呈现视角确定所述第一点云数据的呈现方式,所述呈现视角包括上帝视角或驾驶员视角;
对应地,所述以所述呈现方式呈现所述第一点云数据,包括:
当所述呈现视角为上帝视角时,从所述上帝视角呈现所述第一点云数据;
当所述呈现视角为驾驶员视角时,从所述驾驶员视角呈现所述第一点云数据。
该种可能的实现方式中,针对用户选择的视角不同,来描述点云呈现方式。上帝视角可以更全面的显示整体的信息。驾驶员视角可以更符合驾驶员的直观感受。不同的视角的呈现方式不同,可以适配不同用户的不同显示习惯。
在第一方面一种可能的实现方式中,当所述用户对数据呈现的需求信息包括所述呈现时的结合对象时,上述步骤:确定所述第一点云数据的呈现方式,可以包括:
根据所述呈现时的结合对象确定所述第一点云数据的呈现方式,所述呈现时的结合对象包括高精度地图、平视显示器(head up display,HUD)或增强现实(augmented reality,AR);
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
当所述呈现时的结合对象为高精度地图时,将所述第一点云数据结合到所述高精度地图中呈现;
当所述呈现时的结合对象为HUD时,将所述第一点云数据投影到所述自动驾驶装置的风挡玻璃上的第一区域呈现,所述第一区域的面积小于所述风挡玻璃的面积;
当所述呈现时的结合对象为AR时,将所述第一点云数据投影到所述自动驾驶装置的整块风挡玻璃上呈现。
该种可能的实现方式中,在点云数据呈现时可以结合高精度地图、HUD或者AR场景进行呈现,增加了点云呈现的灵活性。
在第一方面一种可能的实现方式中,当所述用户对数据呈现的需求信息包括所述操作指令时,所述确定所述第一点云数据的呈现方式,包括:
根据所述操作指令确定所述第一点云数据的呈现方式,所述操作指令包括转向、变道或倒车,所述呈现方式包括叠加预警信息;
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
呈现叠加有所述操作指令对应的预警信息的所述第一点云数据。
该种可能的实现方式中,预警信息可以通过颜色,如红色预警,也可以通过声音报警。
该种可能的实现方式中,在转向、变道或倒车等不同操作指令下,可以通过不同的方式进行报警提示,提高点云呈现对自动驾驶的辅助作用,进一步提升驾驶的安全性。
本申请第二方面提供一种终端设备,该终端设备具有实现上述第一方面或第一方面任意一种可能实现方式的方法的功能。该功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。该硬件或软件包括一个或多个与上述功能相对应的模块,例如:接收单元、处理单元和发送单元。
本申请第三方面提供一种终端设备,该终端设备包括至少一个处理器、存储器、通信端口、显示器以及存储在存储器中并可在处理器上运行的计算机执行指令,当所述计算机执行指令被所述处理器执行时,所述处理器执行如上述第一方面或第一方面任意一种可能的实现方式所述的方法。该终端设备可以是车载设备,该车载设备可以预置在车体上,可以与车是一体的,或可插拔的,或通过其它方式与车体连接。
本申请第四方面提供一种存储一个或多个计算机执行指令的计算机可读存储介质,当所述计算机执行指令被处理器执行时,所述处理器执行如上述第一方面或第一方面任意一种可能的实现方式所述的方法。
本申请第五方面提供一种存储一个或多个计算机执行指令的计算机程序产品(或称计算机程序),当所述计算机执行指令被所述处理器执行时,所述处理器执行上述第一方面或 第一方面任意一种可能实现方式的方法。
本申请第六方面提供了一种芯片系统,该芯片系统包括处理器,用于支持终端设备实现上述第一方面或第一方面任意一种可能的实现方式中所涉及的功能。在一种可能的设计中,芯片系统还可以包括存储器,存储器,用于保存终端设备必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包含芯片和其他分立器件。
其中,第二至第六方面或者其中任一种可能实现方式所带来的技术效果可参见第一方面或第一方面不同可能实现方式所带来的技术效果,此处不再赘述。
本申请实施例可以根据驾驶信息或/和用户需求信息自适应呈现与自动驾驶装置相关的点云数据,不需要呈现全部的数据,降低了数据处理的复杂度。
附图说明
图1是本申请实施例提供的具有自动驾驶功能的自动驾驶装置的功能框图;
图2是本申请实施例提供的一种自动驾驶系统的结构示意图;
图3是本申请实施例提供的自动驾驶系统的一实施例示意图;
图4是本申请实施例提供的点云数据呈现的一界面图;
图5是本申请实施例提供的数据呈现的方法的一实施例示意图;
图6是本申请实施例提供的激光雷达的数据转换的一示意图;
图7是本申请实施例提供的点云数据呈现的另一界面图;
图8是本申请实施例提供的点云数据呈现的另一界面图;
图9A至9C分别是本申请实施例提供的点云数据呈现的另一界面图;
图10是本申请实施例提供的驾驶员视角的一界面示意图;
图11是本申请实施例提供的点云数据呈现的另一界面图;
图12是本申请实施例提供的点云数据呈现的另一界面图;
图13是本申请实施例提供的终端设备的一实施例示意图;
图14是本申请实施例提供的终端设备的另一实施例示意图。
具体实施方式
下面结合附图,对本申请的实施例进行描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的实施例能够以除了在这里图示或描述的内容以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
本申请实施例提供一种数据呈现的方法,可以根据驾驶信息或/和用户需求信息自适应呈现与车辆相关的点云数据,降低了计算量。以下分别进行详细说明。
随着互联网以及定位技术的发展,自动驾驶技术也得到了快速的发展。自动驾驶装置可以称为智能装置,例如:自动驾驶车辆也称智能汽车。智能汽车可以基于车联网(internet of vehicle,IOV)实现自动驾驶。车联网是指车与车、车与路、车与人、车与传感设备等交互,从而实现车辆与公众网络通信的动态移动通信系统。它可以通过车与车、车与人、车与路互联互通实现信息共享,收集车辆、道路和环境的信息,并在信息网络平台上对多源采集的信息进行加工、计算、共享和安全发布,根据不同的功能需求对车辆进行有效的引导与监管,以及提供专业的多媒体与移动互联网应用服务。
下面结合图1和图2对自动驾驶装置进行介绍。
图1是本申请实施例提供的具有自动驾驶功能的自动驾驶装置100的功能框图。在一个实施例中,将自动驾驶装置100配置为完全或部分地自动驾驶模式。例如,自动驾驶装置100可以在处于自动驾驶模式中的同时控制自身,并且可通过人为操作来确定自动驾驶装置及其周边环境的当前状态,确定周边环境中的至少一个其他自动驾驶装置的可能行为,并确定该其他自动驾驶装置执行可能行为的可能性相对应的置信水平,基于所确定的信息来控制自动驾驶装置100。在自动驾驶装置100处于自动驾驶模式中时,可以将自动驾驶装置100置为在没有和人交互的情况下操作。
自动驾驶装置100可包括各种子系统,例如行进系统102、传感器系统104、控制系统106、一个或多个外围设备108以及电源110、计算机系统112和用户接口116。可选地,自动驾驶装置100可包括更多或更少的子系统,并且每个子系统可包括多个元件。另外,自动驾驶装置100的每个子系统和元件可以通过有线或者无线互连。
行进系统102可包括为自动驾驶装置100提供动力运动的组件。在一个实施例中,行进系统102可包括引擎118、能量源119、传动装置120和车轮/轮胎121。引擎118可以是内燃引擎、电动机、空气压缩引擎或其他类型的引擎组合,例如气油发动机和电动机组成的混动引擎,内燃引擎和空气压缩引擎组成的混动引擎。引擎118将能量源119转换成机械能量。
能量源119的示例包括汽油、柴油、其他基于石油的燃料、丙烷、其他基于压缩气体的燃料、乙醇、太阳能电池板、电池和其他电力来源。能量源119也可以为自动驾驶装置100的其他系统提供能量。
传动装置120可以将来自引擎118的机械动力传送到车轮121。传动装置120可包括变速箱、差速器和驱动轴。在一个实施例中,传动装置120还可以包括其他器件,比如离合器。其中,驱动轴可包括可耦合到一个或多个车轮121的一个或多个轴。
传感器系统104可包括感测关于自动驾驶装置100周边的环境的信息的若干个传感器。例如,传感器系统104可包括定位系统122(定位系统可以是全球定位系统(global positioning system,GPS)系统,也可以是北斗系统或者其他定位系统)、惯性测量单元(inertial measurement unit,IMU)124、雷达126、激光测距仪128以及相机130。传感器系统104还可包括被监视自动驾驶装置100的内部系统的传感器(例如,车内空气质量监测器、燃油量表、机油温度表等)。来自这些传感器中的一个或多个的传感器数据可用于检测对象及其相应特性(位置、形状、方向、速度等)。这种检测和识别是自主自动驾驶装置100的安全操作的关键功能。
定位系统122可用于估计自动驾驶装置100的地理位置。IMU 124用于基于惯性加速度来感测自动驾驶装置100的位置和朝向变化。在一个实施例中,IMU 124可以是加速度计和陀螺仪的组合。
雷达126可利用无线电信号来感测自动驾驶装置100的周边环境内的物体。在一些实施例中,除了感测物体以外,雷达126还可用于感测物体的速度和/或前进方向。
激光测距仪128可利用激光来感测自动驾驶装置100所位于的环境中的物体。在一些实施例中,激光测距仪128可包括一个或多个激光源、激光扫描器以及一个或多个检测器,以及其他系统组件。
相机130可用于捕捉自动驾驶装置100的周边环境的多个图像。相机130可以是静态相机或视频相机。
控制系统106为控制自动驾驶装置100及其组件的操作。控制系统106可包括各种元件,其中包括转向系统132、油门134、制动单元136、传感器融合算法138、计算机视觉系统140、路线控制系统142以及障碍物规避系统144。
转向系统132可操作来调整自动驾驶装置100的前进方向。例如在一个实施例中可以为方向盘系统。
油门134用于控制引擎118的操作速度并进而控制自动驾驶装置100的速度。
制动单元136用于控制自动驾驶装置100减速。制动单元136可使用摩擦力来减慢车轮121。在其他实施例中,制动单元136可将车轮121的动能转换为电流。制动单元136也可采取其他形式来减慢车轮121转速从而控制自动驾驶装置100的速度。
计算机视觉系统140可以操作来处理和分析由相机130捕捉的图像以便识别自动驾驶装置100周边环境中的物体和/或特征。所述物体和/或特征可包括交通信号、道路边界和障碍物。计算机视觉系统140可使用物体识别算法、运动中恢复结构(structure from motion,SFM)算法、视频跟踪和其他计算机视觉技术。在一些实施例中,计算机视觉系统140可以用于为环境绘制地图、跟踪物体、估计物体的速度等等。
路线控制系统142用于确定自动驾驶装置100的行驶路线。在一些实施例中,路线控制系统142可结合来自传感器138、定位系统122和一个或多个预定地图的数据以为自动驾驶装置100确定行驶路线。
障碍物规避系统144用于识别、评估和避免或者以其他方式越过自动驾驶装置100的环境中的潜在障碍物。
当然,在一个实例中,控制系统106可以增加或替换地包括除了所示出和描述的那些以外的组件。或者也可以减少一部分上述示出的组件。
自动驾驶装置100通过外围设备108与外部传感器、其他自动驾驶装置、其他计算机系统或用户之间进行交互。外围设备108可包括无线通信系统146、车载电脑148、麦克风150和/或扬声器152。
在一些实施例中,外围设备108提供自动驾驶装置100的用户与用户接口116交互的手段。例如,车载电脑148可向自动驾驶装置100的用户提供信息。用户接口116还可操作车载电脑148来接收用户的输入。车载电脑148可以通过触摸屏进行操作。在其他情况中,外围 设备108可提供用于自动驾驶装置100与位于车内的其它设备通信的手段。例如,麦克风150可从自动驾驶装置100的用户接收音频(例如,语音命令或其他音频输入)。类似地,扬声器152可向自动驾驶装置100的用户输出音频。
无线通信系统146可以直接地或者经由通信网络来与一个或多个设备无线通信。例如,无线通信系统146可使用3G蜂窝通信,例如码分多址(code division multiple access,CDMA)、EVD0、全球移动通信系统(global system for mobile communications,GSM)/是通用分组无线服务技术(general packet radio service,GPRS),或者4G蜂窝通信,例如长期演进(long term evolution,LTE),或者5G蜂窝通信。无线通信系统146可利用WiFi与无线局域网(wireless local area network,WLAN)通信。在一些实施例中,无线通信系统146可利用红外链路、蓝牙或ZigBee与设备直接通信。其他无线协议,例如各种自动驾驶装置通信系统,例如,无线通信系统146可包括一个或多个专用短程通信(dedicated short range communications,DSRC)设备,这些设备可包括自动驾驶装置和/或路边台站之间的公共和/或私有数据通信。
电源110可向自动驾驶装置100的各种组件提供电力。在一个实施例中,电源110可以为可再充电锂离子或铅酸电池。这种电池的一个或多个电池组可被配置为电源为自动驾驶装置100的各种组件提供电力。在一些实施例中,电源110和能量源119可一起实现,例如一些全电动车中那样。
自动驾驶装置100的部分或所有功能受计算机系统112控制。计算机系统112可包括至少一个处理器113,处理器113执行存储在例如存储器114这样的非暂态计算机可读介质中的指令115。计算机系统112还可以是采用分布式方式控制自动驾驶装置100的个体组件或子系统的多个计算设备。
处理器113可以是任何常规的处理器,诸如商业可获得的中央处理器(central processing unit,CPU)。替选地,该处理器可以是诸如专用集成电路(application specific integrated circuits,ASIC)或其它基于硬件的处理器的专用设备。尽管图1功能性地图示了处理器、存储器、和在相同块中的计算机110的其它元件,但是本领域的普通技术人员应该理解该处理器、计算机、或存储器实际上可以包括可以或者可以不存储在相同的物理外壳内的多个处理器、计算机、或存储器。例如,存储器可以是硬盘驱动器或位于不同于计算机110的外壳内的其它存储介质。因此,对处理器或计算机的引用将被理解为包括对可以或者可以不并行操作的处理器或计算机或存储器的集合的引用。不同于使用单一的处理器来执行此处所描述的步骤,诸如转向组件和减速组件的一些组件每个都可以具有其自己的处理器,所述处理器只执行与特定于组件的功能相关的计算。
在此处所描述的各个方面中,处理器可以位于远离该自动驾驶装置并且与该自动驾驶装置进行无线通信。在其它方面中,此处所描述的过程中的一些在布置于自动驾驶装置内的处理器上执行而其它则由远程处理器执行,包括采取执行单一操纵的必要步骤。
在一些实施例中,存储器114可包含指令115(例如,程序逻辑),指令115可被处理器113执行来执行自动驾驶装置100的各种功能,包括以上描述的那些功能。存储器114也可包含额外的指令,包括向行进系统102、传感器系统104、控制系统106和外围设备108中的 一个或多个发送数据、从其接收数据、与其交互和/或对其进行控制的指令。
除了指令115以外,存储器114还可存储数据,例如道路地图、路线信息,自动驾驶装置的位置、方向、速度以及其它这样的自动驾驶装置数据,以及其他信息。这种信息可在自动驾驶装置100在自主、半自主和/或手动模式中操作期间被自动驾驶装置100和计算机系统112使用。
用户接口116,用于向自动驾驶装置100的用户提供信息或从其接收信息。可选地,用户接口116可包括在外围设备108的集合内的一个或多个输入/输出设备,例如无线通信系统146、车车载电脑148、麦克风150和扬声器152。
计算机系统112可基于从各种子系统(例如,行进系统102、传感器系统104和控制系统106)以及从用户接口116接收的输入来控制自动驾驶装置100的功能。例如,计算机系统112可利用来自控制系统106的输入以便控制转向单元132来避免由传感器系统104和障碍物规避系统144检测到的障碍物。在一些实施例中,计算机系统112可操作来对自动驾驶装置100及其子系统的许多方面提供控制。
可选地,上述这些组件中的一个或多个可与自动驾驶装置100分开安装或关联。例如,存储器114可以部分或完全地与自动驾驶装置100分开存在。上述组件可以按有线和/或无线方式来通信地耦合在一起。
可选地,上述组件只是一个示例,实际应用中,上述各个模块中的组件有可能根据实际需要增添或者删除,图1不应理解为对本申请实施例的限制。
在道路行进的自动驾驶汽车,如上面的自动驾驶装置100,可以识别其周围环境内的物体以确定对当前速度的调整。所述物体可以是其它自动驾驶装置、交通控制设备、或者其它类型的物体。在一些示例中,可以独立地考虑每个识别的物体,并且基于物体的各自的特性,诸如它的当前速度、加速度、与自动驾驶装置的间距等,可以用来确定自动驾驶汽车所要调整的速度。
可选地,自动驾驶汽车自动驾驶装置100或者与自动驾驶装置100相关联的计算设备(如图1的计算机系统112、计算机视觉系统140、存储器114)可以基于所识别的物体的特性和周围环境的状态(例如,交通、雨、道路上的冰、等等)来预测所述识别的物体的行为。可选地,每一个所识别的物体都依赖于彼此的行为,因此还可以将所识别的所有物体全部一起考虑来预测单个识别的物体的行为。自动驾驶装置100能够基于预测的所述识别的物体的行为来调整它的速度。换句话说,自动驾驶汽车能够基于所预测的物体的行为来确定自动驾驶装置将需要调整到(例如,加速、减速、或者停止)什么稳定状态。在这个过程中,也可以考虑其它因素来确定自动驾驶装置100的速度,诸如,自动驾驶装置100在行驶的道路中的横向位置、道路的曲率、静态和动态物体的接近度等等。
除了提供调整自动驾驶汽车的速度的指令之外,计算设备还可以提供修改自动驾驶装置100的转向角的指令,以使得自动驾驶汽车遵循给定的轨迹和/或维持与自动驾驶汽车附近的物体(例如,道路上的相邻车道中的轿车)的安全横向和纵向距离。
上述自动驾驶装置100可以为轿车、卡车、摩托车、公共汽车、船、飞机、直升飞机、割草机、娱乐车、游乐场自动驾驶装置、施工设备、电车、高尔夫球车、火车和手推车等, 本申请实施例不做特别的限定。
图1介绍了自动驾驶装置100的功能框图,下面介绍自动驾驶装置100中的自动驾驶系统。图2为本申请实施例提供的一种自动驾驶系统的结构示意图。图1和图2是从不同的角度来描述自动驾驶装置100,例如图2中的计算机系统101为图1中的计算机系统112。
如图2所示,计算机系统101包括处理器103,处理器103和系统总线105耦合。处理器103可以是一个或者多个处理器,其中,每个处理器都可以包括一个或多个处理器核。显示适配器(video adapter)107,显示适配器107可以驱动显示器109,显示器109和系统总线105耦合。系统总线105通过总线桥111和输入输出(I/O)总线113耦合。I/O接口115和I/O总线耦合。I/O接口115和多种I/O设备进行通信,比如输入设备117(如:键盘,鼠标,触摸屏等),多媒体盘(media tray)121,例如CD-ROM,多媒体接口等。收发器123(可以发送和/或接受无线电通信信号),摄像头155(可以捕捉景田和动态数字视频图像)和外部USB接口125。可选的,和I/O接口115相连接的接口可以是USB接口。
其中,处理器103可以是任何传统处理器,包括精简指令集计算(“RISC”)处理器、复杂指令集计算(“CISC”)处理器或上述的组合。可选的,处理器可以是诸如专用集成电路(“ASIC”)的专用装置。可选的,处理器103可以是神经网络处理器(neural-network processing unit,NPU)或者是神经网络处理器和上述传统处理器的组合。可选的,处理器103挂载有一个神经网络处理器。
计算机系统101可以通过网络接口129和软件部署服务器149通信。网络接口129是硬件网络接口,比如,网卡。网络127可以是外部网络,比如因特网,也可以是内部网络,比如以太网或者虚拟私人网络(VPN)。可选的,网络127还可以是无线网络,比如WiFi网络,蜂窝网络等。
硬盘驱动接口和系统总线105耦合。硬件驱动接口和硬盘驱动器相连接。系统内存135和系统总线105耦合。运行在系统内存135的数据可以包括计算机系统101的操作系统137和应用程序143。
操作系统包括壳(Shell)139和内核(kernel)141。壳139是介于使用者和操作系统之内核(kernel)间的一个接口。壳139是操作系统最外面的一层。壳139管理使用者与操作系统之间的交互:等待使用者的输入,向操作系统解释使用者的输入,并且处理各种各样的操作系统的输出结果。
内核141由操作系统中用于管理存储器、文件、外设和系统资源的那些部分组成。直接与硬件交互,操作系统内核通常运行进程,并提供进程间的通信,提供CPU时间片管理、中断、内存管理、IO管理等等。
应用程序141包括自动驾驶相关程序,比如,管理自动驾驶装置和路上障碍物交互的程序,控制自动驾驶装置的行车路线或者速度的程序,控制自动驾驶装置100和路上其他自动驾驶装置交互的程序。应用程序141也存在于软件部署服务器(deploying server)149的系统上。在一个实施例中,在需要执行应用程序141时,计算机系统101可以从软件部署服务器149下载应用程序141。
传感器153和计算机系统101关联。传感器153用于探测计算机系统101周围的环境。举 例来说,传感器153可以探测动物,汽车,障碍物和人行横道等,进一步传感器还可以探测上述动物,汽车,障碍物和人行横道等物体周围的环境,比如:动物周围的环境,例如,动物周围出现的其他动物,天气条件,周围环境的光亮度等。可选的,如果计算机系统101位于自动驾驶装置上,传感器可以是摄像头,红外线感应器,化学检测器,麦克风等。传感器153在激活时按照预设间隔感测信息并实时或接近实时地将所感测的信息提供给计算机系统101。
计算机系统101,用于根据传感器153采集的传感器数据,确定自动驾驶装置100的行驶状态,以及根据该行驶状态和当前的驾驶任务确定自动驾驶转置100所需执行的驾驶操作,并向控制系统106(图1)发送该驾驶操作对应的控制指令。自动驾驶装置100的行驶状态可以包括自动驾驶装置100自身的行驶状况,例如车头方向、速度、位置、加速度等,也包括自动驾驶装置100周边环境的状态,例如障碍物的位置、其他车辆的位置和速度、人行横道的位置、交通灯的信号等。计算机系统101可以包括由处理器103实现的任务抽象网络和共享策略网络。具体的,处理器103确定当前的自动驾驶任务;处理器103将该自动驾驶任务的至少一组历史路径输入到任务抽象网络做特征提取,得到表征该自动驾驶任务的特征的任务特征向量;处理器103根据传感器153采集的传感器数据,确定表征自动驾驶装置的当前行驶状态的状态向量;处理器103将该任务特征向量和该状态向量输入到共享策略网络做处理,得到该自动驾驶装置当前所需执行的驾驶操作;处理器103通过控制系统执行该驾驶操作;处理器103重复之前确定和执行驾驶操作的步骤,直到完成该自动驾驶任务。
可选的,在本文所述的各种实施例中,计算机系统101可位于远离自动驾驶装置的地方,并且可与自动驾驶装置进行无线通信。收发器123可将自动驾驶任务、传感器153采集的传感器数据和其他数据发送给计算机系统101;还可以接收计算机系统101发送的控制指令。自动驾驶装置可执行收发器接收的来自计算机系统101的控制指令,并执行相应的驾驶操作。在其它方面,本文所述的一些过程在设置在自动驾驶车辆内的处理器上执行,其它由远程处理器执行,包括采取执行单个操纵所需的动作。
自动驾驶装置在自动驾驶过程中会涉及到大量数据处理的过程,数据处理的过程通常需要云服务器来完成,自动驾驶过程中也需要参考路端交通系统中设备的指示来行驶。可见,自动驾驶装置需要依赖例如图3所示的自动驾驶系统才能实现自动驾驶。如3所示的自动驾驶系统中,云服务器与自动驾驶车辆通过网络通信,自动驾驶车辆与路端交通系统通信。在自动驾驶过程中,自动驾驶车辆上的数据采集装置采集周围环境信息(例如:路端交通信息中的红灯信息、周围其他车辆的信息等)以及车辆自身的信息上报给云服务器,云服务器对上报的数据进行处理后,向自动驾驶车辆发出行驶指示信息。
为了保障自动驾驶的安全性,自动驾驶装置上还可以通过人机交互界面向用户呈现该总动驾驶装置周围环境的情况。本申请实施例提供了一种数据呈现的方法,可以根据驾驶信息或/和用户需求信息自适应呈现与自动驾驶装置相关的点云数据。自动驾驶车辆在自动驾驶过程中呈现的点云数据的示例可以参阅图4进行立即。用于呈现点云数据的设备可以是自动驾驶装置上的终端设备,例如汽车上的车载终端、飞机上的机载终端。
如图5,本申请实施例提供的数据呈现的方法的一实施例可以包括:
201、获取自动驾驶装置的行驶信息和/或用户对数据呈现的需求信息。
自动驾驶装置的行驶信息可以包括以下至少一项:行驶位置、行驶速度、所述自动驾驶装置所处的车道、天气信息或环境光信息。
用户对数据呈现的需求信息可以包括以下至少一项:呈现视角、呈现时的结合对象或操作指令。
202、根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,以及确定所述第一点云数据的呈现方式。
所述第一点云数据为通过多个点的形式表示的数据。
所述呈现方式的信息中包括点云的密度,或者点云的密度与以下信息中至少一项的组合:点云的大小、点云中需要呈现的对象、点云的颜色或叠加预警信息。
203、以所述呈现方式呈现所述第一点云数据。
本申请实施例中,自动驾驶装置在自动驾驶时会根据行驶信息和/或用户对数据呈现的需求信息自适应呈现相应的点云数据,不会将检测到的所有数据都呈现出来,降低了数据处理和数据呈现的复杂度。
上述步骤201之前或之后,该方法还包括:从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据。
数据采集装置可以是激光雷达、三角测距传感器、结构光传感器、立体视觉传感器中的一种或者是两种或多种的结合。
其中,激光雷达是通过发射激光脉冲并测量激光传输时间计算传感器到目标表面的距离,其作用距离可达数公里。典型的激光雷达又分为扫描式与面阵式激光雷达两类。
三角测距传感器是通过激光发射器将一束激光发射到物体表面上,并采用相机记录光斑的位置,通过解算激光发射器、光斑以及相机三者构成的三角形,可以获得物体表面上光斑的三维坐标。
结构光传感器是将一个具有固定模式的光斑投影到物体表面,通过测量光斑模式的形变从而解算出表面点的位置信息。
立体视觉传感器是采用两个或更多相机获取同一个物体的二维图像,通过寻找两幅图像间的对应点解算出其空间坐标。
第一数据指的是自动驾驶装置自身的数据,以自动驾驶装置是汽车为例,该第一数据可以是汽车的车体数据或车内的场景数据等。第一环境数据可以是自动驾驶装置周围的路面数据、其他车辆数据、行人数据或者建筑物数据等。
本申请实施例中会结合自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息做筛选,该筛选可以是在采集到的数据还未转换为云数据之前就做筛选,也可以是在采集到的数据都转换为云数据之后再做筛选,下面分别进行介绍。
1、采集到的数据还未转换为云数据之前就做筛选。
上述步骤202可以包括:
根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息对所述第一数据和所述第一环境数据做筛选,以得到待呈现的所述自动驾驶装置的第二数据以及所述 自动驾驶装置周围的第二环境数据;
将所述第二数据和所述第二环境数据转换为第一点云数据。
该种实施例中,对采集到的数据先做筛选,可以减少转换为点云数据的数据量,从而降低了数据的计算量。
2、采集到的数据都转换为云数据之后再做筛选。
上述步骤202可以包括:
将所述第一数据和所述第一环境数据转换为第二点云数据;
根据所述自动驾驶装置的行驶信息和所述用户对数据呈现的需求信息对所述第二点云数据做筛选,以得到待呈现的第一点云数据。
无论是上述先筛选还是后筛选的方式,点云数据的转换过程都是相同的,下面对将采集到的数据转换为点云数据的过程进行介绍。
以激光雷达为例,激光坐标系如图6所示,X轴过激光雷达的中心点指向正前方,Y轴过中心点水平沿着激光雷达平面与轴垂直,Z轴过中心点与X,Y轴所形成的平面垂直。经过激光雷达处理,直接获得每个激光点到对应激光器的距离S、水平方向角度δ、竖直方向角度β以及反射强度等信息。根据角度δ、β和距离S,可以计算出每个激光点的坐标(x,y,z),从而完成从一个采集到的数据到一个点云数据的转换。
本申请实施例中,在确定第一点云数据的呈现方式时可以通过行驶信息来确定,也可以通过用户对数据呈现的需求信息来确定,下面分别进行介绍。
1、当所述行驶信息包括所述行驶位置时,所述确定所述第一点云数据的呈现方式,可以包括:
根据所述行驶位置确定所述自动驾驶装置所处的场景,所述场景包括高速公路、城市街道、郊区、山路或沙漠;
根据所述自动驾驶装置所处的场景确定所述第一点云数据的呈现方式;
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
按照所述自动驾驶装置所处的场景以不同的密度呈现所述第一点云数据,所述城市街道对应的点云的密度大于所述高速公路、所述郊区、所述山路或所述沙漠对应的点云的密度。
该实施例中,以自动驾驶装置是自动驾驶车辆为例,可以结合行驶位置和定位技术确定出自动驾驶车辆所处的场景,这样就可以结合不同的场景做不同的呈现。以不同的密度呈现的点云数据可以根据密度的情况称为低密度点云或高密度点云,当然不限于这两种,针对点云还可以有更多的称呼。
例如:高速公路场景,因为路线相对固定且一般高速公路没有穿梭的行人,主要是车辆和车道信息,高速公路两边的房屋树木对驾驶不会产生影响,所以不需要精确显示高速公路两边的景物,为了减少高速行进的车辆对于周边信息采集的复杂度,其他的车辆信息也可以采用低密度点云进行呈现。低密度点云的呈现形式可以结合图7进行理解。
在城市街道,尤其是市内拥堵行车场景,由于人流穿梭,需要将本车辆相关的一些运动的行人呈现出来,并且在该场景下,信息越细腻越有利于驾驶安全,因此可采用高密度 点云呈现。高密度点云的呈现形式可以结合图8进行理解。
低密度点云与高密度点云是相对的,低密度点云中点云的点数量较少,点与点之前的间距也较大,也可以称为稀疏点云。高密度点云中点云的点数量较多,比较密集,也可以称为密集点云。
其他场景中,例如:郊区、山路或沙漠场景都可以参阅高速公路的场景,采用低密度点云的方式呈现各场景的点云数据。由此可见,该实施例中,采用不同密度的点云呈现不同场景的点云数据在满足安全性需要的前提下,可以尽量减少数据处理的复杂度。
2、当所述行驶信息包括所述行驶速度时,所述确定所述第一点云数据的呈现方式,可以包括:
根据所述自动驾驶装置的行驶速度确定所述第一点云数据的呈现方式;
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
当所述行驶速度低于速度阈值时,采用第一密度呈现所述第一点云数据,当所述行驶速度高于所述速度阈值时,采用第二密度呈现所述第一点云数据,所述第一密度小于所述第二密度。
该实施例中,在车辆启动阶段,因为车辆还没开始行驶,没有安全提示的必要,因此点云信息可以不做呈现,如图9A所示。随着车辆启动后的加速,点云数据开始呈现,在行驶速度较小时车辆变动的不快,车辆自动行驶涉及到的计算量小,对周围的情况精确显示也不会影响安全,但速度较大时,因为车辆变动较快,车辆自动行驶涉及到的计算量较大,如果再对周围的情况精确显示可能会影响到安全。所以在速度阈值以下,例如速度阈值为30km/h,则如图9B所示,在行驶速度在30km/h以下时采用高密度点云对周边事物以及动态情况进行呈现。当行驶速度越来越快,则如图9C所示,超过30km/h时,则可以采用低密度点云对周边事物以及动态情况进行呈现。
当然,此处也不限于只有一个速度阈值,可以设置多个档位的速度阈值,不同档位的速度阈值对应的点云密度不同,例如:30km/h是第一档位,50km/h是第二档位,90km/h是第三档位,当速度是在30km/h以下时采用较高密度的点云形式呈现点云数据。当速度在30km/h至50km/h时,采用中等密度的点云形式呈现点云数据。当速度在50km/h至90km/h时,采用低密度的点云形式呈现点云数据。当速度是在90km/h以上时采用更低密度的点云形式呈现点云数据。上述不同档位对应的点云数据的密度越来越小。
本实施例根据行驶速度不同而采用不同的点云呈现方式,在保证安全性减少计算量的同时,还提升了用户的用车体验。
3、当所述用户对数据呈现的需求信息包括所述呈现视角时,所述确定所述第一点云数据的呈现方式,可以包括:
根据所述呈现视角确定所述第一点云数据的呈现方式,所述呈现视角包括上帝视角或驾驶员视角;
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
当所述呈现视角为上帝视角时,从所述上帝视角呈现所述第一点云数据;
当所述呈现视角为驾驶员视角时,从所述驾驶员视角呈现所述第一点云数据。
该实施例中,针对用户选择的视角不同,来描述点云呈现方式。图9A至图9C都是从上帝视角给出的呈现方式。上帝视角可以更全面的显示整体的信息。在实际生活中,有些用户更适应驾驶员视角的呈现。如图10所示,从驾驶员视角所呈现的点云数据。
本申请实施例中,不同的视角的呈现方式不同,可以适配不同用户的不同显示习惯。
4、当所述用户对数据呈现的需求信息包括所述呈现时的结合对象时,所述确定所述第一点云数据的呈现方式,可以包括:
根据所述呈现时的结合对象确定所述第一点云数据的呈现方式,所述呈现时的结合对象包括高精度地图、平视显示器(head up display,HUD)或增强现实(augmented reality,AR);
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
当所述呈现时的结合对象为高精度地图时,将所述第一点云数据结合到所述高精度地图中呈现;
当所述呈现时的结合对象为HUD时,将所述第一点云数据投影到所述自动驾驶装置的风挡玻璃上的第一区域呈现,所述第一区域的面积小于所述风挡玻璃的面积;
当所述呈现时的结合对象为AR时,将所述第一点云数据投影到所述自动驾驶装置的整块风挡玻璃上呈现。
该实施例中,在点云数据呈现时可以结合高精度地图、HUD或者AR场景进行呈现,增加了点云呈现的灵活性。
5、当所述用户对数据呈现的需求信息包括所述操作指令时,所述确定所述第一点云数据的呈现方式,可以包括:
根据所述操作指令确定所述第一点云数据的呈现方式,所述操作指令包括转向、变道或倒车,所述呈现方式包括叠加预警信息;
对应地,所述以所述呈现方式呈现所述第一点云数据,可以包括:
呈现叠加有所述操作指令对应的预警信息的所述第一点云数据。
该实施例中,预警信息可以通过颜色,如红色预警,也可以通过声音报警。
车辆行驶过程中,会遇到需要变道的情况,变道时可以根据当前车辆的行驶速度,以及目标变更车道上前车后车的行驶速度,对要变道的车辆呈现相关预警信息。例如图11所示,当目标变更车道上后车的行驶速度较快,与变道车辆的变道速度有冲突时,则在当前车辆的对应的一侧呈现出预警的点云信息,该预警的点云信息,可以是采用带有颜色的点云呈现,例如红色,也可以是带有闪动提醒功能的预警提示点云呈现。
停车泊车场景下的预警提示,是当自动驾驶车进行倒车过程中,对于在车辆倒车影响范围内的感知到的事物进行呈现的同时,结合当前自动驾驶车的倒车速度,倒车角度,通过点云呈现出有危险的事物,起到提醒的作用,如图12所示,在停车泊车的场景下,显示可泊车车位的信息。
本申请实施例提供的点云呈现预警信息的方法,提高点云呈现对自动驾驶的辅助作用,进一步提升驾驶的安全性。
本申请实施例分别结合行驶位置、行驶速度、呈现视角、呈现时的结合对象以及操作 指令等方面对点云数据的确定以及呈现做了介绍,需要说明的是,行驶位置、行驶速度、呈现视角、呈现时的结合对象以及操作指令是可以结合来确定点云数据以及呈现方式的,而且,其他的自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息也是可以独自或者结合上述行驶位置、行驶速度、呈现视角、呈现时的结合对象以及操作指令来确定点云数据以及呈现方式的,本申请实施例中不做一一列举。
本申请实施例根据用户的不同需求和行驶状况来适应点云的呈现机制,让用户在不同的场景下都能够适应外界环境的呈现内容。点云可在用户视觉能力弱的场景下,起到关键的辅助作用。因此对智能驾驶会起到重要作用。
本申请实施例提供的点云数据呈现的方法,不仅可以应用于自动驾驶领域,还可应用于如平安城市、安防、助老助残等需要进行目标识别、障碍物检测等需要呈现的场景中。
以上描述了数据呈现的方法,下面结合附图介绍本申请实施例提供的终端设备。
如图13所示,本申请实施例提供的终端设备30的一实施例可以包括:
处理单元301,用于获取自动驾驶装置的行驶信息和/或用户对数据呈现的需求信息,根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,以及确定所述第一点云数据的呈现方式,所述第一点云数据为通过多个点的形式表示的数据;
呈现单元302,用于以所述呈现方式呈现所述第一点云数据。
本申请实施例中,自动驾驶装置在自动驾驶时会根据行驶信息和/或用户对数据呈现的需求信息自适应呈现相应的点云数据,不会将检测到的所有数据都呈现出来,降低了数据处理和数据呈现的复杂度。
一种可选的实施例中,
接收单元303,用于从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
所述处理单元301用于:
根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息对所述第一数据和所述第一环境数据做筛选,以得到待呈现的所述自动驾驶装置的第二数据以及所述自动驾驶装置周围的第二环境数据;
将所述第二数据和所述第二环境数据转换为第一点云数据。
一种可选的实施例中,所述终端设备30还包括:
所述处理单元301用于:
将所述第一数据和所述第一环境数据转换为第二点云数据;
根据所述自动驾驶装置的行驶信息和所述用户对数据呈现的需求信息对所述第二点云数据做筛选,以得到待呈现的第一点云数据。
一种可选的实施例中,所述行驶信息包括以下至少一项:行驶位置、行驶速度、所述自动驾驶装置所处的车道、天气信息或环境光信息;
所述用户对数据呈现的需求信息包括以下至少一项:呈现视角、呈现时的结合对象或操作指令;
所述呈现方式的信息中包括点云的密度,或者点云的密度与以下信息中至少一项的组合:点云的大小、点云中需要呈现的对象、点云的颜色或叠加预警信息。
一种可选的实施例中,当所述行驶信息包括所述行驶位置时,
所述处理单元301用于:
根据所述行驶位置确定所述自动驾驶装置所处的场景,所述场景包括高速公路、城市街道、郊区、山路或沙漠;
根据所述自动驾驶装置所处的场景确定所述第一点云数据的呈现方式;
所述呈现单元302,用于按照所述自动驾驶装置所处的场景以不同的密度呈现所述第一点云数据,所述城市街道对应的点云的密度大于所述高速公路、所述郊区、所述山路或所述沙漠对应的点云的密度。
一种可选的实施例中,当所述行驶信息包括所述行驶速度时,
所述处理单元301,用于根据所述自动驾驶装置的行驶速度确定所述第一点云数据的呈现方式;
所述呈现单元302,用于当所述行驶速度低于速度阈值时,采用第一密度呈现所述第一点云数据,当所述行驶速度高于所述速度阈值时,采用第二密度呈现所述第一点云数据,所述第一密度大于所述第二密度。
一种可选的实施例中,当所述用户对数据呈现的需求信息包括所述呈现视角时,
所述处理单元301,用于根据所述呈现视角确定所述第一点云数据的呈现方式,所述呈现视角包括上帝视角或驾驶员视角;
所述呈现单元302用于:
当所述呈现视角为上帝视角时,从所述上帝视角呈现所述第一点云数据;
当所述呈现视角为驾驶员视角时,从所述驾驶员视角呈现所述第一点云数据。
一种可选的实施例中,当所述用户对数据呈现的需求信息包括所述呈现时的结合对象时,
所述处理单元301,用于根据所述呈现时的结合对象确定所述第一点云数据的呈现方式,所述呈现时的结合对象包括高精度地图、平视显示器HUD或增强现实AR;
所述呈现单元302用于:
当所述呈现时的结合对象为高精度地图时,将所述第一点云数据结合到所述高精度地图中呈现;
当所述呈现时的结合对象为HUD时,将所述第一点云数据投影到所述自动驾驶装置的风挡玻璃上的第一区域呈现,所述第一区域的面积小于所述风挡玻璃的面积;
当所述呈现时的结合对象为AR时,将所述第一点云数据投影到所述自动驾驶装置的整块风挡玻璃上呈现。
一种可选的实施例中,一种可选的实施例中,
所述处理单元301,用于根据所述操作指令确定所述第一点云数据的呈现方式,所述操作指令包括转向、变道或倒车,所述呈现方式包括叠加预警信息;
所述呈现单元302,用于呈现叠加有所述操作指令对应的预警信息的所述第一点云数据。
需要说明的是,上述终端设备30的各模块之间的信息交互、执行过程等内容,由于与本申请方法实施例基于同一构思,其带来的技术效果与本发明方法实施例相同,具体内容可参见本申请前述所示的方法实施例中的叙述,此处不再赘述。
如图14所示,为本申请实施例的又一种设备的结构示意图,该设备为终端设备40,该终端设备40可以包括:处理器401(例如CPU)、存储器402、显示器404和投影设备403;显示器404和投影设备403耦合至处理器401,处理器401控制显示器404的发送动作和投影设备403的接收动作。存储器402可能包含高速RAM存储器,也可能还包括非易失性存储器NVM,例如至少一个磁盘存储器,存储器402中可以存储各种指令,以用于完成各种处理功能以及实现本申请实施例的方法步骤。可选的,本申请实施例涉及的终端设备还可以包括:电源405、以及通信端口406中的一个或多个,图14中所描述的各器件可以是通过通信总线连接,也可以是通过其他连接方式连接,对此,本申请实施例中不做限定。通信总线用于实现元件之间的通信连接。上述通信端口406用于实现终端设备与其他外设之间进行连接通信。
在一些实施例中,终端设备中的处理器401可以执行图13中处理单元301执行的动作,终端设备中的通信端口406可以执行图13中接收单元303执行的动作,终端设备中的显示器404和投影设备403可以执行图13中呈现单元302执行的动作,其实现原理和技术效果类似,在此不再赘述。
本申请还提供了一种芯片系统,该芯片系统包括处理器,用于支持上述终端设备实现其所涉及的功能,例如,例如接收或处理上述方法实施例中所涉及的数据。在一种可能的设计中,所述芯片系统还包括存储器,所述存储器,用于保存终端设备必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包含芯片和其他分立器件。
在本申请的另一实施例中,还提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机执行指令,当设备的至少一个处理器执行该计算机执行指令时,设备执行上述图5至图12部分实施例所描述的方法。
在本申请的另一实施例中,还提供一种计算机程序产品,该计算机程序产品包括计算机执行指令,该计算机执行指令存储在计算机可读存储介质中;设备的至少一个处理器可以从计算机可读存储介质读取该计算机执行指令,至少一个处理器执行该计算机执行指令使得设备执行上述图5至图12部分实施例所描述的方法。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请实施例所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法, 可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请实施例各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请实施例揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (20)

  1. 一种数据呈现的方法,其特征在于,包括:
    获取自动驾驶装置的行驶信息和/或用户对数据呈现的需求信息;
    根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,以及确定所述第一点云数据的呈现方式,所述第一点云数据为通过多个点的形式表示的数据;
    以所述呈现方式呈现所述第一点云数据。
  2. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
    所述根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,包括:
    根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息对所述第一数据和所述第一环境数据做筛选,以得到待呈现的所述自动驾驶装置的第二数据以及所述自动驾驶装置周围的第二环境数据;
    将所述第二数据和所述第二环境数据转换为第一点云数据。
  3. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
    所述根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,包括:
    将所述第一数据和所述第一环境数据转换为第二点云数据;
    根据所述自动驾驶装置的行驶信息和所述用户对数据呈现的需求信息对所述第二点云数据做筛选,以得到待呈现的第一点云数据。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述行驶信息包括以下至少一项:行驶位置、行驶速度、所述自动驾驶装置所处的车道、天气信息或环境光信息;
    所述用户对数据呈现的需求信息包括以下至少一项:呈现视角、呈现时的结合对象或操作指令;
    所述呈现方式的信息中包括点云的密度,或者点云的密度与以下信息中至少一项的组合:点云的大小、点云中需要呈现的对象、点云的颜色或叠加预警信息。
  5. 根据权利要求4所述的方法,其特征在于,当所述行驶信息包括所述行驶位置时,所述确定所述第一点云数据的呈现方式,包括:
    根据所述行驶位置确定所述自动驾驶装置所处的场景,所述场景包括高速公路、城市街道、郊区、山路或沙漠;
    根据所述自动驾驶装置所处的场景确定所述第一点云数据的呈现方式;
    对应地,所述以所述呈现方式呈现所述第一点云数据,包括:
    按照所述自动驾驶装置所处的场景以不同的密度呈现所述第一点云数据,所述城市街 道对应的点云的密度大于所述高速公路、所述郊区、所述山路或所述沙漠对应的点云的密度。
  6. 根据权利要求4所述的方法,其特征在于,当所述行驶信息包括所述行驶速度时,所述确定所述第一点云数据的呈现方式,包括:
    根据所述自动驾驶装置的行驶速度确定所述第一点云数据的呈现方式;
    对应地,所述以所述呈现方式呈现所述第一点云数据,包括:
    当所述行驶速度低于速度阈值时,采用第一密度呈现所述第一点云数据,当所述行驶速度高于所述速度阈值时,采用第二密度呈现所述第一点云数据,所述第一密度大于所述第二密度。
  7. 根据权利要求4所述的方法,其特征在于,当所述用户对数据呈现的需求信息包括所述呈现视角时,所述确定所述第一点云数据的呈现方式,包括:
    根据所述呈现视角确定所述第一点云数据的呈现方式,所述呈现视角包括上帝视角或驾驶员视角;
    对应地,所述以所述呈现方式呈现所述第一点云数据,包括:
    当所述呈现视角为上帝视角时,从所述上帝视角呈现所述第一点云数据;
    当所述呈现视角为驾驶员视角时,从所述驾驶员视角呈现所述第一点云数据。
  8. 根据权利要求4所述的方法,其特征在于,当所述用户对数据呈现的需求信息包括所述呈现时的结合对象时,所述确定所述第一点云数据的呈现方式,包括:
    根据所述呈现时的结合对象确定所述第一点云数据的呈现方式,所述呈现时的结合对象包括高精度地图、平视显示器HUD或增强现实AR;
    对应地,所述以所述呈现方式呈现所述第一点云数据,包括:
    当所述呈现时的结合对象为高精度地图时,将所述第一点云数据结合到所述高精度地图中呈现;
    当所述呈现时的结合对象为HUD时,将所述第一点云数据投影到所述自动驾驶装置的风挡玻璃上的第一区域呈现,所述第一区域的面积小于所述风挡玻璃的面积;
    当所述呈现时的结合对象为AR时,将所述第一点云数据投影到所述自动驾驶装置的整块风挡玻璃上呈现。
  9. 根据权利要求4所述的方法,其特征在于,当所述用户对数据呈现的需求信息包括所述操作指令时,所述确定所述第一点云数据的呈现方式,包括:
    根据所述操作指令确定所述第一点云数据的呈现方式,所述操作指令包括转向、变道或倒车,所述呈现方式包括叠加预警信息;
    对应地,所述以所述呈现方式呈现所述第一点云数据,包括:
    呈现叠加有所述操作指令对应的预警信息的所述第一点云数据。
  10. 一种终端设备,包括通信端口、处理器、存储器和显示器,所述处理器与所述存储器耦合,其特征在于,所述存储器,用于存储程序;
    所述处理器用于:
    获取自动驾驶装置的行驶信息和/或用户对数据呈现的需求信息;
    根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息确定与所述自动驾驶装置相关的第一点云数据,以及确定所述第一点云数据的呈现方式,所述第一点云数据为通过多个点的形式表示的数据;
    所述显示器,用于以所述呈现方式呈现所述第一点云数据。
  11. 根据权利要求10所述的终端设备,其特征在于,
    所述通信端口,用于从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
    所述处理器用于:
    根据所述自动驾驶装置的行驶信息和/或所述用户对数据呈现的需求信息对所述第一数据和所述第一环境数据做筛选,以得到待呈现的所述自动驾驶装置的第二数据以及所述自动驾驶装置周围的第二环境数据;
    将所述第二数据和所述第二环境数据转换为第一点云数据。
  12. 根据权利要求10所述的终端设备,其特征在于,
    所述通信端口,用于从数据采集装置接收所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
    所述处理器用于:
    将所述第一数据和所述第一环境数据转换为第二点云数据;
    根据所述自动驾驶装置的行驶信息和所述用户对数据呈现的需求信息对所述第二点云数据做筛选,以得到待呈现的第一点云数据。
  13. 根据权利要求10-12任一项所述的终端设备,其特征在于,所述行驶信息包括以下至少一项:行驶位置、行驶速度、所述自动驾驶装置所处的车道、天气信息或环境光信息;
    所述用户对数据呈现的需求信息包括以下至少一项:呈现视角、呈现时的结合对象或操作指令;
    所述呈现方式的信息中包括点云的密度,或者点云的密度与以下信息中至少一项的组合:点云的大小、点云中需要呈现的对象、点云的颜色或叠加预警信息。
  14. 根据权利要求13所述的终端设备,其特征在于,当所述行驶信息包括所述行驶位置时,
    所述处理器用于:
    根据所述行驶位置确定所述自动驾驶装置所处的场景,所述场景包括高速公路、城市街道、郊区、山路或沙漠;
    根据所述自动驾驶装置所处的场景确定所述第一点云数据的呈现方式;
    所述显示器,用于按照所述自动驾驶装置所处的场景以不同的密度呈现所述第一点云数据,所述城市街道对应的点云的密度大于所述高速公路、所述郊区、所述山路或所述沙漠对应的点云的密度。
  15. 根据权利要求13所述的终端设备,其特征在于,当所述行驶信息包括所述行驶速度时,
    所述处理器,用于根据所述自动驾驶装置的行驶速度确定所述第一点云数据的呈现方 式;
    所述显示器,用于当所述行驶速度低于速度阈值时,采用第一密度呈现所述第一点云数据,当所述行驶速度高于所述速度阈值时,采用第二密度呈现所述第一点云数据,所述第一密度大于所述第二密度。
  16. 根据权利要求13所述的终端设备,其特征在于,当所述用户对数据呈现的需求信息包括所述呈现视角时,
    所述处理器,用于根据所述呈现视角确定所述第一点云数据的呈现方式,所述呈现视角包括上帝视角或驾驶员视角;
    所述显示器用于:
    当所述呈现视角为上帝视角时,从所述上帝视角呈现所述第一点云数据;
    当所述呈现视角为驾驶员视角时,从所述驾驶员视角呈现所述第一点云数据。
  17. 根据权利要求13所述的终端设备,其特征在于,当所述用户对数据呈现的需求信息包括所述呈现时的结合对象时,所述终端设备还包括投影设备,
    所述处理器,用于根据所述呈现时的结合对象确定所述第一点云数据的呈现方式,所述呈现时的结合对象包括高精度地图、平视显示器HUD或增强现实AR;
    所述显示器用于:当所述呈现时的结合对象为高精度地图时,将所述第一点云数据结合到所述高精度地图中呈现;
    所述投影设备用于:
    当所述呈现时的结合对象为HUD时,将所述第一点云数据投影到所述自动驾驶装置的风挡玻璃上的第一区域呈现,所述第一区域的面积小于所述风挡玻璃的面积;
    当所述呈现时的结合对象为AR时,将所述第一点云数据投影到所述自动驾驶装置的整块风挡玻璃上呈现。
  18. 根据权利要求13所述的终端设备,其特征在于,当所述用户对数据呈现的需求信息包括所述操作指令时,
    所述处理器,用于根据所述操作指令确定所述第一点云数据的呈现方式,所述操作指令包括转向、变道或倒车,所述呈现方式包括叠加预警信息;
    所述显示器,用于呈现叠加有所述操作指令对应的预警信息的所述第一点云数据。
  19. 一种自动驾驶装置,其特征在于,包括:数据采集装置和终端设备,所述数据采集装置和所述终端设备通信;
    所述数据采集装置用于采集所述自动驾驶装置的第一数据以及所述自动驾驶装置周围的第一环境数据;
    所述终端设备为上述权利要求10-18任一所述的终端设备。
  20. 一种计算机可读存储介质,包括程序,当其在计算机上运行时,使得计算机执行如权利要求1至9中任一项所述的方法。
PCT/CN2020/110135 2019-09-25 2020-08-20 一种数据呈现的方法及终端设备 WO2021057344A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20870130.0A EP4029750A4 (en) 2019-09-25 2020-08-20 DATA PRESENTATION METHOD AND TERMINAL DEVICE
US17/704,709 US20220215639A1 (en) 2019-09-25 2022-03-25 Data Presentation Method and Terminal Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910916213.1A CN110789533B (zh) 2019-09-25 2019-09-25 一种数据呈现的方法及终端设备
CN201910916213.1 2019-09-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/704,709 Continuation US20220215639A1 (en) 2019-09-25 2022-03-25 Data Presentation Method and Terminal Device

Publications (1)

Publication Number Publication Date
WO2021057344A1 true WO2021057344A1 (zh) 2021-04-01

Family

ID=69439768

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/110135 WO2021057344A1 (zh) 2019-09-25 2020-08-20 一种数据呈现的方法及终端设备

Country Status (4)

Country Link
US (1) US20220215639A1 (zh)
EP (1) EP4029750A4 (zh)
CN (1) CN110789533B (zh)
WO (1) WO2021057344A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973736A (zh) * 2022-05-30 2022-08-30 东风汽车集团股份有限公司 基于虚拟仿真的远程驾驶监控系统
CN116588125A (zh) * 2023-07-17 2023-08-15 四川中普盈通科技有限公司 一种车载边缘侧数据处理系统

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144211B (zh) * 2019-08-28 2023-09-12 华为技术有限公司 点云显示方法和装置
CN110789533B (zh) * 2019-09-25 2021-08-13 华为技术有限公司 一种数据呈现的方法及终端设备
EP4119399A4 (en) * 2020-03-31 2023-05-10 Huawei Technologies Co., Ltd. METHOD AND APPARATUS FOR COLLECTING DRIVING DATA
CN111595357B (zh) * 2020-05-14 2022-05-20 广州文远知行科技有限公司 可视化界面的显示方法、装置、电子设备和存储介质
US11763569B2 (en) * 2021-03-24 2023-09-19 Denso International America, Inc. System for controlling a camera supporting human review of sensor information
CN113378776A (zh) * 2021-06-29 2021-09-10 广州小鹏汽车科技有限公司 显示方法、显示装置、电子设备、车辆和介质

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9057623B2 (en) * 2010-05-24 2015-06-16 Mitsubishi Electric Corporation Navigation device
US8930139B2 (en) * 2012-06-21 2015-01-06 Telecommunication Systems, Inc. Dynamically varied map labeling
US10602121B2 (en) * 2013-09-17 2020-03-24 Société Des Arts Technologiques Method, system and apparatus for capture-based immersive telepresence in virtual environment
KR101928689B1 (ko) * 2015-05-20 2018-12-12 Mitsubishi Electric Corporation Point cloud image generation device and display system
US11307042B2 (en) * 2015-09-24 2022-04-19 Allstate Insurance Company Three-dimensional risk maps
EP4160150A1 (en) * 2016-08-26 2023-04-05 Panasonic Intellectual Property Corporation of America Three-dimensional information processing method and three-dimensional information processing apparatus
EP3389026A1 (en) * 2017-04-12 2018-10-17 Volvo Car Corporation Apparatus and method for road vehicle driver assistance
CN107194957B (zh) * 2017-04-17 2019-11-22 武汉光庭科技有限公司 Method for fusing LiDAR point cloud data and vehicle information in intelligent driving
EP3648079B1 (en) * 2017-06-28 2024-01-03 Pioneer Corporation Control apparatus, control method, and program
US10331134B2 (en) * 2017-07-14 2019-06-25 Uber Technologies, Inc. Supervised movement of autonomous vehicle
JP2019100995A (ja) * 2017-12-08 2019-06-24 Topcon Corporation Survey image display control device, survey image display control method, and survey image display control program
CN109064506B (zh) * 2018-07-04 2020-03-13 Baidu Online Network Technology (Beijing) Co., Ltd. High-precision map generation method and apparatus, and storage medium
JP7163732B2 (ja) * 2018-11-13 2022-11-01 Toyota Motor Corporation Driving assistance device, driving assistance system, driving assistance method, and program
CN110163065B (zh) * 2018-12-04 2022-03-25 Tencent Technology (Shenzhen) Co., Ltd. Point cloud data processing method, point cloud data loading method, apparatus, and device
CN109725330A (zh) * 2019-02-20 2019-05-07 苏州风图智能科技有限公司 Vehicle body positioning method and apparatus

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140067187A1 (en) * 2012-09-05 2014-03-06 Google Inc. Construction Zone Detection Using a Plurality of Information Sources
CN107918753A (zh) * 2016-10-10 2018-04-17 Tencent Technology (Shenzhen) Co., Ltd. Point cloud data processing method and apparatus
CN108169730A (zh) * 2016-12-07 2018-06-15 岭纬公司 Region-based LiDAR variable-density scanning system and method
CN109839922A (zh) * 2017-11-28 2019-06-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for controlling a driverless vehicle
CN109635052A (zh) * 2018-10-31 2019-04-16 Baidu Online Network Technology (Beijing) Co., Ltd. Point cloud data processing method and apparatus, and storage medium
CN111144211A (zh) * 2019-08-28 2020-05-12 Huawei Technologies Co., Ltd. Point cloud display method and apparatus
CN110789533A (zh) * 2019-09-25 2020-02-14 Huawei Technologies Co., Ltd. Data presentation method and terminal device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4029750A4

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973736A (zh) * 2022-05-30 2022-08-30 东风汽车集团股份有限公司 Remote driving monitoring system based on virtual simulation
CN116588125A (zh) * 2023-07-17 2023-08-15 四川中普盈通科技有限公司 Vehicle-mounted edge-side data processing system
CN116588125B (zh) * 2023-07-17 2023-09-19 四川中普盈通科技有限公司 Vehicle-mounted edge-side data processing system

Also Published As

Publication number Publication date
CN110789533A (zh) 2020-02-14
US20220215639A1 (en) 2022-07-07
EP4029750A4 (en) 2022-11-30
CN110789533B (zh) 2021-08-13
EP4029750A1 (en) 2022-07-20

Similar Documents

Publication Publication Date Title
WO2021057344A1 (zh) Data presentation method and terminal device
CN110775063B (zh) Information display method and apparatus for vehicle-mounted device, and vehicle
JP7255782B2 (ja) Obstacle avoidance method, obstacle avoidance apparatus, automatic driving apparatus, computer-readable storage medium, and program
WO2021135371A1 (zh) Automatic driving method, related device, and computer-readable storage medium
JP6055562B2 (ja) Image processing method for an autonomous vehicle using environment information, vehicle, and non-transitory computer-readable medium
WO2021103511A1 (zh) Operational design domain (ODD) determination method and apparatus, and related device
WO2021000800A1 (zh) Road drivable area inference method and apparatus
WO2021212379A1 (zh) Lane line detection method and apparatus
CN113968216B (zh) Vehicle collision detection method and apparatus, and computer-readable storage medium
CN112543877B (zh) Positioning method and positioning apparatus
CN112512887B (zh) Driving decision selection method and apparatus
WO2020031812A1 (ja) Information processing device, information processing method, information processing program, and mobile body
WO2022051951A1 (zh) Lane line detection method, related device, and computer-readable storage medium
EP4307251A1 (en) Mapping method, vehicle, computer readable storage medium, and chip
CN112810603B (zh) Positioning method and related product
CN114842075A (zh) Data annotation method and apparatus, storage medium, and vehicle
WO2021110166A1 (zh) Road structure detection method and apparatus
WO2022061702A1 (zh) Driving reminder method, apparatus, and system
CN113963535B (zh) Driving decision determination method and apparatus, electronic device, and storage medium
CN115100630A (zh) Obstacle detection method and apparatus, vehicle, medium, and chip
CN114842455A (zh) Obstacle detection method, apparatus, device, medium, chip, and vehicle
WO2021159397A1 (zh) Vehicle drivable area detection method and detection apparatus
CN115082886B (zh) Target detection method and apparatus, storage medium, chip, and vehicle
CN115115707B (zh) Vehicle falling-into-water detection method, vehicle, computer-readable storage medium, and chip
WO2022041820A1 (zh) Lane-change trajectory planning method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20870130
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
ENP Entry into the national phase
    Ref document number: 2020870130
    Country of ref document: EP
    Effective date: 20220411