WO2020168787A1 - Method and device for determining vehicle body pose, and mapping method - Google Patents

Method and device for determining vehicle body pose, and mapping method

Info

Publication number
WO2020168787A1
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle body
time
pose information
relative
information
Prior art date
Application number
PCT/CN2019/123711
Other languages
English (en)
French (fr)
Inventor
张臣
Original Assignee
苏州风图智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州风图智能科技有限公司 filed Critical 苏州风图智能科技有限公司
Publication of WO2020168787A1 publication Critical patent/WO2020168787A1/zh

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 - Navigation by using measurements of speed or acceleration
    • G01C21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/18 - Stabilised platforms, e.g. by gyroscope

Definitions

  • The present disclosure relates to the field of unmanned driving technology, and in particular to a method and device for determining the pose of a vehicle body, and a mapping method.
  • Unmanned driving technology is a major transformation of transportation and is of great significance to both traffic safety and traffic convenience. At present, unmanned driving technology is developing continuously, so it will not be long before unmanned vehicles replace traditional manually driven vehicles.
  • The production of high-precision maps is an important part of unmanned driving technology.
  • A high-precision map is a map defined with high precision and fine detail; its accuracy often needs to reach the decimeter level or even the centimeter level. The production of high-precision maps therefore cannot rely on GPS positioning technology as traditional electronic maps do: GPS positioning can only achieve meter-level accuracy, and producing high-precision maps requires more refined positioning techniques.
  • In the related art, vehicle body pose information is often determined based on fusion positioning with an odometer and an inertial measurement unit (IMU).
  • The present disclosure provides a method and device for determining the pose of a vehicle body, and a mapping method.
  • A method for determining the pose of a vehicle body includes: obtaining three-dimensional laser point cloud data and vehicle body sensor data of the vehicle body at time t; using the three-dimensional laser point cloud data to determine first relative pose information of the vehicle body relative to time (t-1); and fusing the first relative pose information with the vehicle body sensor data to determine pose information of the vehicle body at time t.
  • Using the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1) includes: obtaining the three-dimensional laser point cloud data of the vehicle body at time (t-1); extracting the point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively; and determining, based on the point cloud feature information at the two times, the first relative pose information of the vehicle body at time t relative to time (t-1).
  • In one embodiment, fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t includes: obtaining visual sensor data of the vehicle body at time t and time (t-1); using the visual sensor data to determine second relative pose information of the vehicle body relative to time (t-1); and fusing the first relative pose information and the second relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • Using the visual sensor data to determine the second relative pose information of the vehicle body relative to time (t-1) includes: extracting the visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1), respectively; and determining, based on the visual feature information at the two times, the second relative pose information of the vehicle body at time t relative to time (t-1).
  • In another embodiment, fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t includes: obtaining the pose information of the vehicle body at time (t-1); predicting the pose of the vehicle body at time t from the pose information at time (t-1); and correcting the predicted pose information using the first relative pose information and the vehicle body sensor data, the corrected predicted pose information being taken as the pose information of the vehicle body at time t.
  • In a further embodiment, fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t includes: obtaining the pose information of the vehicle body at time (t-1); fusing the first relative pose information with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t; and performing graph optimization on the pose information at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
  • The vehicle body sensor data includes at least one of the following: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, and gyroscope data.
  • A mapping method includes: determining pose information of the vehicle body at multiple times using the above method for determining the pose of a vehicle body; and drawing a point cloud map based on the three-dimensional laser point cloud data and the pose information of the vehicle body at the multiple times.
  • A device for determining the pose of a vehicle body includes: a lidar for obtaining the three-dimensional laser point cloud data of the vehicle body at time t; a vehicle body sensor for obtaining the vehicle body sensor data of the vehicle body at time t; and a processor configured to use the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • The lidar may also be used to obtain the three-dimensional laser point cloud data of the vehicle body at time (t-1); the processor is then correspondingly configured to extract the point cloud feature information corresponding to the two times and determine the first relative pose information from it.
  • The device may further include a vision sensor for obtaining the visual sensor data of the vehicle body at times t and (t-1); the processor is then further configured to use the visual sensor data to determine the second relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information, the second relative pose information, and the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • The processor may further be configured to obtain the pose information of the vehicle body at time (t-1), predict the pose of the vehicle body at time t from it, and correct the prediction using the first relative pose information and the vehicle body sensor data; or to fuse the first relative pose information with the vehicle body sensor data to generate preliminary pose information at time t and perform graph optimization on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
  • The vehicle body sensor includes at least one of the following: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, and a gyroscope.
  • A device for determining the pose of a vehicle body includes a processor and a memory for storing processor-executable instructions, wherein the processor is configured to execute the method for determining the pose of the vehicle body.
  • A non-transitory computer-readable storage medium is also provided; when the instructions in the storage medium are executed by a processor, the processor is enabled to execute the method for determining the pose of a vehicle body.
  • The method and device for determining the pose of the vehicle body and the mapping method can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data for positioning, so as to determine the vehicle body pose information. Since the 3D laser point cloud data contains rich environmental information around the vehicle body, while the vehicle body sensor data contains feature information of the vehicle body itself, fusing the environmental information around the vehicle body with the vehicle body feature information can greatly reduce the accumulated error and yield more accurate vehicle body pose information. Once more accurate vehicle body pose information is obtained, a more accurate and reliable high-precision map for an unmanned driving environment can be drawn based on that pose information.
  • Fig. 1 is a flowchart showing a method for determining the pose of a vehicle body according to an exemplary embodiment.
  • Fig. 2 is a flowchart showing a method for determining the pose of a vehicle body according to an exemplary embodiment.
  • Fig. 3 is a flowchart showing a method for determining the pose of a vehicle body according to an exemplary embodiment.
  • Fig. 4 is a block diagram showing a device for determining the pose of a vehicle body according to an exemplary embodiment.
  • Fig. 5 is a block diagram showing a device according to an exemplary embodiment.
  • Fig. 6 is a block diagram showing a device according to an exemplary embodiment.
  • In the related art, vehicle body pose information is often determined by fusing odometer and IMU positioning.
  • However, both the odometer data and the IMU data are sensor data based on the characteristics of the vehicle body itself; if the vehicle body characteristics produce a small error, the odometer data and the IMU data may carry the same error. Therefore, as time goes on, fusion positioning based on the odometer and the IMU may leave a large accumulated error in the determined vehicle body pose information.
  • The method for determining the vehicle body pose provided in the present disclosure can instead fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data to determine the vehicle body pose information. Since the 3D laser point cloud data contains rich environmental information around the vehicle body, while the vehicle body sensor data contains feature information of the vehicle body itself, fusing the two can greatly reduce the accumulated error and yield more accurate vehicle body pose information.
  • Fig. 1 is a flowchart of an embodiment of the method for determining a vehicle body pose provided by the present disclosure.
  • Although the present disclosure provides method operation steps as shown in the following embodiments or drawings, more or fewer operation steps may be included in the method on a conventional basis or without creative effort. For steps that have no necessary causal relationship logically, the execution order of these steps is not limited to the order given in the embodiments of the present disclosure.
  • An embodiment of the method for determining the pose of a vehicle body provided by the present disclosure is shown in Fig. 1 and may include:
  • Step 101: obtain the three-dimensional laser point cloud data and the vehicle body sensor data of the vehicle body at time t;
  • Step 103: use the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1);
  • Step 105: fuse the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • In the process of constructing a point cloud map, the point cloud data collected at time t needs to be associated with the pose information of the vehicle body; the point cloud map can then be generated by fusing the point cloud data and the vehicle body pose information corresponding to multiple discrete time points. Accurately determining the vehicle body pose information corresponding to time t therefore plays an important role in constructing the point cloud map.
  • On this basis, the three-dimensional laser point cloud data and the vehicle body sensor data of the vehicle body at time t can be obtained.
  • The three-dimensional laser point cloud data may include three-dimensional point cloud data of the environment around the vehicle body scanned by a lidar.
  • The lidar may include a multi-line radar, a unidirectional radar, and the like; the present disclosure is not limited in this respect.
  • The vehicle body sensor data may include sensing data, based on the characteristics of the vehicle body itself, acquired by sensors installed on the vehicle body.
  • The vehicle body characteristics may include, for example, the inclination angle of the vehicle body, wheel rotation speed, acceleration, three-axis attitude angle, heading, and so on.
  • The vehicle body sensor data may accordingly include at least one of the following: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, and gyroscope data.
  • The IMU data can describe the angular velocity and acceleration of the vehicle body in three-dimensional space; the odometer data can describe the rotation speed of the wheels; the electronic compass data can describe the heading of the vehicle body; the inclination sensor data can describe the inclination angle of the vehicle body relative to the horizontal plane; and the gyroscope data can describe the angular velocity of the vehicle body in three-dimensional space.
  • More generally, the vehicle body sensor data may include data acquired by any sensor capable of sensing the characteristics of the vehicle body; the disclosure is not limited in this respect.
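  • As an illustration only (not part of the disclosure), the following Python sketch shows one hypothetical way such heterogeneous body-sensor readings could be grouped into a single timestamped record before fusion; the container name and field names are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BodySensorSample:
    """Hypothetical container for one timestamped set of body-sensor readings."""
    timestamp: float              # time t, in seconds
    angular_velocity: np.ndarray  # rad/s, shape (3,), from the IMU / gyroscope
    acceleration: np.ndarray      # m/s^2, shape (3,), from the IMU
    wheel_speed: float            # wheel rotation speed from the odometer
    heading: float                # heading from the electronic compass, in rad
    inclination: float            # tilt relative to the horizontal plane, in rad

# Example reading at time t
sample = BodySensorSample(
    timestamp=12.34,
    angular_velocity=np.zeros(3),
    acceleration=np.array([0.0, 0.0, 9.81]),
    wheel_speed=5.2,
    heading=1.57,
    inclination=0.01,
)
```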
  • After the three-dimensional laser point cloud data at time t is obtained, the first relative pose information of the vehicle body relative to time (t-1) may be determined based on the three-dimensional laser point cloud data.
  • The process of determining the first relative pose information, as shown in Fig. 2, may include:
  • Step 201: obtain the 3D laser point cloud data of the vehicle body at time (t-1);
  • Step 203: extract the point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively;
  • Step 205: based on the point cloud feature information of the vehicle body at time t and at time (t-1), determine the first relative pose information of the vehicle body at time t relative to time (t-1).
  • The three-dimensional laser point cloud data of the vehicle body at time (t-1) can be obtained, and the point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1) can be extracted respectively.
  • The point cloud feature information may include the feature information of boundary points, boundary lines, and boundary surfaces in the three-dimensional laser point cloud data.
  • For example, the point cloud feature information may include boundary feature information of road boundaries, traffic lights, signs, landmark outlines, obstacle outlines, and the like.
  • Since the three-dimensional laser point cloud data contains distance information within the scanning plane, the first relative pose information can be calculated based on that distance information.
  • The first relative pose information may include the spatial translation and attitude change of the vehicle body at time t relative to time (t-1).
  • The spatial translation may be expressed as (Δx, Δy, Δz), and the attitude change may be expressed as (Δφ, Δθ, Δψ).
  • The registration between the three-dimensional laser point cloud data at time t and at time (t-1) can be implemented based on the LOAM algorithm, the RANSAC algorithm, or the like, and the first relative pose information between the two times can be calculated from it.
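  • For readers who want a concrete picture of the registration step, the sketch below estimates the relative rigid transform from matched feature points of the two scans using the closed-form Kabsch/SVD solution, and converts the rotation into the (Δφ, Δθ, Δψ) angles mentioned above. It is only a minimal stand-in: LOAM and RANSAC as named in the disclosure additionally handle feature selection and outlier rejection, and the assumption here is that point correspondences are already known.

```python
import numpy as np

def relative_pose_from_correspondences(pts_prev, pts_curr):
    """Estimate (R, t) such that pts_curr ≈ R @ pts_prev + t.

    pts_prev, pts_curr: (N, 3) arrays of the same physical feature points
    observed in the scans at time (t-1) and time t, respectively.
    """
    mu_p, mu_c = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    P, C = pts_prev - mu_p, pts_curr - mu_c         # centred point sets
    H = P.T @ C                                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T         # proper rotation, det = +1
    t = mu_c - R @ mu_p                             # spatial translation (Δx, Δy, Δz)
    return R, t

def rotation_to_rpy(R):
    """Recover (Δφ, Δθ, Δψ) as roll, pitch, yaw from a rotation matrix (ZYX convention)."""
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return roll, pitch, yaw
```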
  • The first relative pose information of the vehicle body relative to time (t-1) can then be fused with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • As shown in Fig. 3, the fusion may specifically include:
  • Step 301: obtain the pose information of the vehicle body at time (t-1);
  • Step 303: predict the pose of the vehicle body at time t using the pose information of the vehicle body at time (t-1), obtaining predicted pose information;
  • Step 305: correct the predicted pose information using the first relative pose information and the vehicle body sensor data, and take the corrected predicted pose information as the pose information of the vehicle body at time t.
  • The data obtained by multiple sensors can thus be fused to calculate more accurate pose information of the vehicle body at time t.
  • The predicted pose information of the vehicle body at time t may be obtained by prediction from the pose information of the vehicle body at time (t-1).
  • The prediction can be made from the state information of the vehicle body itself, but the vehicle body may be affected by various external conditions while travelling between time (t-1) and time t.
  • The predicted pose information is therefore corrected using the first relative pose information and the vehicle body sensor data, and the corrected predicted pose information is taken as the pose information of the vehicle body at time t.
  • The embodiments of the present disclosure can perform this calculation with the extended Kalman filter algorithm, and any variant based on the extended Kalman filter algorithm also falls within the protection scope of the embodiments of the present disclosure.
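  • The disclosure does not fix a particular filter formulation, so the following is only a simplified planar sketch of the predict/correct cycle it describes: the state is (x, y, yaw), the prediction step uses body-sensor data (speed and yaw rate), and the correction step treats the pose implied by composing the (t-1) pose with the lidar-derived first relative pose as a direct measurement. The 2-D simplification and the noise matrices are assumptions made for illustration.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, v, omega, dt, Q):
    """Predict the pose at time t from the pose at (t-1) using speed v and yaw rate omega."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       wrap_angle(th + omega * dt)])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],   # Jacobian of the motion model
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x_pred, P_pred, z, R_meas):
    """Correct the predicted pose with a full-pose measurement z (identity measurement model)."""
    H = np.eye(3)
    y = z - x_pred
    y[2] = wrap_angle(y[2])
    S = H @ P_pred @ H.T + R_meas
    K = P_pred @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x_pred + K @ y
    x_new[2] = wrap_angle(x_new[2])
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```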
  • Features from visual sensor data can also be added in the data fusion process: visual sensor data contains rich shape and texture features of the environment around the vehicle body, so it complements the three-dimensional laser point cloud data and gives the fused data more features to work with, allowing more accurate positioning.
  • The visual sensor data may include data obtained by a vision sensor, and the vision sensor may include a monocular camera device, a binocular camera device, a depth camera device, and so on.
  • In the process of fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t, the visual sensor data of the vehicle body at time t can be obtained, and the visual sensor data can be used to determine the second relative pose information of the vehicle body relative to time (t-1). The first relative pose information and the second relative pose information may then be fused with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • In the process of determining the second relative pose information, the visual sensor data of the vehicle body at time (t-1) can be acquired; the visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1) can then be extracted respectively; finally, based on that visual feature information, the second relative pose information of the vehicle body at time t relative to time (t-1) can be determined.
  • The visual feature information may include feature information of boundary points, boundary lines, and boundary surfaces in the visual sensor data.
  • The registration between the visual sensor data at time t and at time (t-1) can be implemented based on the SURF algorithm, the HOG algorithm, the RANSAC algorithm, or the like, and the second relative pose information between the two times can be calculated from it.
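  • As one hedged illustration of how a second relative pose could be derived from camera images: the sketch below uses OpenCV with ORB features (substituted for the SURF features named above, since SURF is patented and absent from default OpenCV builds) and a RANSAC-estimated essential matrix. With a monocular camera the recovered translation is only known up to scale, a detail the fusion step would have to resolve; the intrinsics matrix K is assumed to be known.

```python
import cv2
import numpy as np

def visual_relative_pose(img_prev, img_curr, K):
    """Estimate rotation R and unit-norm translation t between frames at (t-1) and t.

    img_prev, img_curr: grayscale images; K: 3x3 camera intrinsics matrix.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC-based essential-matrix estimation rejects outlier matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # t has unit norm: monocular translation is known only up to scale
```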
  • In another embodiment of the fusion, the first relative pose information is first fused with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t.
  • Graph optimization is then performed on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
  • The graph optimization of the pose information at time (t-1) and the preliminary pose information at time t can be implemented based on the GraphSLAM framework.
  • In the GraphSLAM framework, dimensionality reduction and optimization of the information matrix can reduce or even eliminate the accumulated error in the preliminary pose information.
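  • A full GraphSLAM back end is beyond the scope of this summary, but the toy sketch below shows the underlying idea on one-dimensional poses: relative-pose constraints are accumulated into an information matrix H and vector b, and solving the normal equations refines the preliminary poses. The 1-D state, unit information weights, and the simple prior that anchors the first pose are all simplifying assumptions.

```python
import numpy as np

def optimize_pose_chain(poses, rel_measurements, prior, iters=5):
    """Toy 1-D pose-graph optimisation (Gauss-Newton on the information matrix).

    poses: initial 1-D pose estimates [x_0, ..., x_{n-1}] (e.g. preliminary poses).
    rel_measurements: list of (i, j, z_ij) constraints with z_ij ≈ x_j - x_i.
    prior: fixed value anchoring x_0 so the system is not singular.
    """
    x = np.asarray(poses, dtype=float).copy()
    n = len(x)
    for _ in range(iters):
        H = np.zeros((n, n))          # information matrix
        b = np.zeros(n)               # information (gradient) vector
        H[0, 0] += 1.0                # prior on the first pose
        b[0] += x[0] - prior
        for i, j, z in rel_measurements:
            e = (x[j] - x[i]) - z     # residual of one relative-pose edge
            # Jacobian of e w.r.t. (x_i, x_j) is (-1, +1)
            H[i, i] += 1.0; H[j, j] += 1.0
            H[i, j] -= 1.0; H[j, i] -= 1.0
            b[i] -= e;      b[j] += e
        x += np.linalg.solve(H, -b)   # Gauss-Newton step
    return x

# Example: three poses, two odometry-like edges and one loop-closure-style constraint
refined = optimize_pose_chain([0.0, 1.2, 2.5],
                              [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.0)],
                              prior=0.0)
```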
  • In summary, the method for determining the pose of the vehicle body can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data to determine the vehicle body pose information. Since the 3D laser point cloud data contains rich environmental information around the vehicle body, while the vehicle body sensor data contains feature information of the vehicle body itself, fusing the two can greatly reduce the accumulated error and yield more accurate vehicle body pose information. With more accurate vehicle body pose information, a more accurate and reliable high-precision map for an unmanned driving environment can be drawn based on that pose information.
  • Another aspect of the present disclosure also provides a mapping method, which uses the method for determining the pose of the vehicle body described in any of the above embodiments to determine the pose information of the vehicle body at multiple times, and draws a point cloud map based on the three-dimensional laser point cloud data and the pose information of the vehicle body at those times.
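  • The mapping step itself reduces to transforming each scan into a common world frame with the pose determined for its timestamp and accumulating the result. The sketch below assumes each pose is available as a rotation matrix and translation vector; a real map builder would typically also downsample and deduplicate points, which is omitted here.

```python
import numpy as np

def build_point_cloud_map(scans, poses):
    """Accumulate per-timestamp laser scans into a single map in the world frame.

    scans: list of (N_i, 3) arrays of body-frame points captured at each time.
    poses: list of (R_i, t_i) world poses of the vehicle body at the same times,
           so a body-frame point p maps to R_i @ p + t_i in the world frame.
    """
    world_points = [pts @ R.T + t for pts, (R, t) in zip(scans, poses)]
    return np.vstack(world_points)
```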
  • Fig. 4 is a block diagram of a device 400 for determining the pose of a vehicle body according to an exemplary embodiment. Referring to Fig. 4, the device includes a lidar 401, a vehicle body sensor 403, and a processor 405, where:
  • the lidar 401 is used to obtain the three-dimensional laser point cloud data of the vehicle body at time t;
  • the vehicle body sensor 403 is used to obtain the vehicle body sensor data of the vehicle body at time t;
  • the processor 405 is configured to use the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • The lidar may also be used to obtain the three-dimensional laser point cloud data of the vehicle body at time (t-1); the processor is then further configured to extract the point cloud feature information corresponding to the three-dimensional laser point cloud data at time t and at time (t-1), and to determine, from that feature information, the first relative pose information of the vehicle body at time t relative to time (t-1).
  • The device may further include a vision sensor for obtaining the visual sensor data of the vehicle body at times t and (t-1); the processor is then further configured to use the visual sensor data to determine the second relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information, the second relative pose information, and the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  • The processor may further be configured to obtain the pose information of the vehicle body at time (t-1), predict the pose of the vehicle body at time t from it, and correct the prediction using the first relative pose information and the vehicle body sensor data; or to fuse the first relative pose information with the vehicle body sensor data to generate preliminary pose information at time t and perform graph optimization on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
  • The vehicle body sensor includes at least one of the following: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, and a gyroscope.
  • Fig. 5 is a block diagram showing a device 700 for resource allocation indication according to an exemplary embodiment.
  • the apparatus 700 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
  • the device 700 may include one or more of the following components: a processing component 702, a memory 704, a power supply component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, And the communication component 716.
  • the processing component 702 generally controls the overall operations of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 702 may include one or more processors 720 to execute instructions to complete all or part of the steps of the foregoing method.
  • the processing component 702 may include one or more modules to facilitate the interaction between the processing component 702 and other components.
  • the processing component 702 may include a multimedia module to facilitate the interaction between the multimedia component 708 and the processing component 702.
  • the memory 704 is configured to store various types of data to support the operation of the device 700. Examples of such data include instructions for any application or method operating on the device 700, contact data, phone book data, messages, pictures, videos, etc.
  • The memory 704 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
  • the power supply component 706 provides power to various components of the device 700.
  • the power supply component 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 700.
  • the multimedia component 708 includes a screen that provides an output interface between the device 700 and the user.
  • The screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch-sensitive display to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
  • the multimedia component 708 includes a front camera and/or a rear camera. When the device 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • the audio component 710 is configured to output and/or input audio signals.
  • the audio component 710 includes a microphone (MIC). When the device 700 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals.
  • the received audio signals can be further stored in the memory 704 or sent via the communication component 716.
  • the audio component 710 further includes a speaker for outputting audio signals.
  • the I/O interface 712 provides an interface between the processing component 702 and a peripheral interface module.
  • the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
  • the sensor component 714 includes one or more sensors for providing the device 700 with various aspects of status assessment.
  • the sensor component 714 can detect the on/off status of the device 700 and the relative positioning of components, such as the display and keypad of the device 700.
  • the sensor component 714 can also detect a position change of the device 700 or a component of the device 700, the presence or absence of contact between the user and the device 700, the orientation or acceleration/deceleration of the device 700, and a temperature change of the device 700.
  • the sensor component 714 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
  • the sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
  • the communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices.
  • the device 700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the apparatus 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • A non-transitory computer-readable storage medium including instructions is also provided, such as the memory 704 including instructions, which may be executed by the processor 720 of the device 700 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • Fig. 6 is a block diagram showing a device 800 for information processing according to an exemplary embodiment.
  • the device 800 may be provided as a server.
  • the apparatus 800 includes a processing component 822, which further includes one or more processors, and a memory resource represented by a memory 832, for storing instructions that can be executed by the processing component 822, such as application programs.
  • the application program stored in the memory 832 may include one or more modules each corresponding to a set of instructions.
  • the processing component 822 is configured to execute instructions to execute the method described in any of the foregoing embodiments.
  • the device 800 may also include a power component 826 configured to perform power management of the device 800, a wired or wireless network interface 850 configured to connect the device 800 to a network, and an input output (I/O) interface 858.
  • the device 800 can operate based on an operating system stored in the memory 832, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
  • A non-transitory computer-readable storage medium including instructions is also provided, such as the memory 832 including instructions, which may be executed by the processing component 822 of the device 800 to complete the foregoing method.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

A method and device for determining vehicle body pose, and a mapping method. The method for determining vehicle body pose includes: obtaining three-dimensional laser point cloud data and vehicle body sensor data of the vehicle body at time t (S101); using the three-dimensional laser point cloud data to determine first relative pose information of the vehicle body relative to time (t-1) (S103); and fusing the first relative pose information with the vehicle body sensor data to determine pose information of the vehicle body at time t (S105). With the provided technical solution, the environmental information around the vehicle body and the feature information of the vehicle body itself can be fused, which can greatly reduce the accumulated error and yield more accurate vehicle body pose information.

Description

Method and device for determining vehicle body pose, and mapping method
Technical Field
The present disclosure relates to the field of unmanned driving technology, and in particular to a method and device for determining vehicle body pose, and a mapping method.
Background
Unmanned driving technology is a major transformation of transportation and is of great significance to both traffic safety and traffic convenience. At present, unmanned driving technology is developing continuously, so it will not be long before unmanned vehicles replace traditional manually driven vehicles. The production of high-precision maps is an important part of unmanned driving technology. A high-precision map is a map defined with high precision and fine detail, and its accuracy often needs to reach the decimeter level or even the centimeter level. Therefore, when producing high-precision maps, it is not possible to rely on GPS positioning technology as traditional electronic maps do; GPS positioning can only reach meter-level accuracy, and producing high-precision maps requires more refined positioning techniques.
In the related art, when producing high-precision maps, vehicle body pose information is usually determined by fusion positioning with an odometer and an inertial measurement unit (IMU). Starting from given initial vehicle body pose information, this positioning technique measures the distance and direction relative to the initial pose to determine the current vehicle body pose. The positioning in the related art therefore depends heavily on the previous positioning step, so the positioning error of the previous step also accumulates into the current step, and the error keeps accumulating throughout the positioning process.
Therefore, there is an urgent need in the related art for a way to accurately determine the vehicle body pose when producing high-precision maps.
Summary
To overcome the problems existing in the related art, the present disclosure provides a method and device for determining vehicle body pose, and a mapping method.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for determining vehicle body pose, including:
obtaining three-dimensional laser point cloud data and vehicle body sensor data of a vehicle body at time t;
using the three-dimensional laser point cloud data to determine first relative pose information of the vehicle body relative to time (t-1);
fusing the first relative pose information with the vehicle body sensor data to determine pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, using the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1) includes:
obtaining three-dimensional laser point cloud data of the vehicle body at time (t-1);
extracting point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively;
determining, based on the point cloud feature information of the vehicle body at time t and at time (t-1), the first relative pose information of the vehicle body at time t relative to time (t-1).
Optionally, in an embodiment of the present disclosure, fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t includes:
obtaining visual sensor data of the vehicle body at time t and time (t-1);
using the visual sensor data to determine second relative pose information of the vehicle body relative to time (t-1);
fusing the first relative pose information and the second relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, using the visual sensor data to determine the second relative pose information of the vehicle body relative to time (t-1) includes:
extracting visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1), respectively;
determining, based on the visual feature information of the vehicle body at time t and at time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1).
Optionally, in an embodiment of the present disclosure, fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t includes:
obtaining pose information of the vehicle body at time (t-1);
predicting predicted pose information of the vehicle body at time t from the pose information of the vehicle body at time (t-1);
correcting the predicted pose information using the first relative pose information and the vehicle body sensor data, and taking the corrected predicted pose information as the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t includes:
obtaining pose information of the vehicle body at time (t-1);
fusing the first relative pose information with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t;
performing graph optimization on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, the vehicle body sensor data includes at least one of the following: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, and gyroscope data.
According to a second aspect of the embodiments of the present disclosure, there is provided a mapping method, including:
determining pose information of a vehicle body at multiple times using the method for determining vehicle body pose described in any of the above embodiments;
drawing and generating a point cloud map based on the three-dimensional laser point cloud data and the pose information of the vehicle body at the multiple times.
According to a third aspect of the embodiments of the present disclosure, there is provided a device for determining vehicle body pose, including:
a lidar configured to obtain three-dimensional laser point cloud data of a vehicle body at time t;
a vehicle body sensor configured to obtain vehicle body sensor data of the vehicle body at time t;
a processor configured to use the three-dimensional laser point cloud data to determine first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensor data to determine pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure,
the lidar is further configured to obtain three-dimensional laser point cloud data of the vehicle body at time (t-1);
correspondingly, the processor is further configured to:
extract point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively;
determine, based on the point cloud feature information of the vehicle body at time t and at time (t-1), the first relative pose information of the vehicle body at time t relative to time (t-1).
Optionally, in an embodiment of the present disclosure, the device further includes:
a visual sensor configured to obtain visual sensor data of the vehicle body at time t and time (t-1);
correspondingly, the processor is further configured to:
use the visual sensor data to determine second relative pose information of the vehicle body relative to time (t-1);
fuse the first relative pose information and the second relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, the processor is further configured to:
extract visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1), respectively;
determine, based on the visual feature information of the vehicle body at time t and at time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1).
Optionally, in an embodiment of the present disclosure, the processor is further configured to:
obtain pose information of the vehicle body at time (t-1);
predict predicted pose information of the vehicle body at time t from the pose information of the vehicle body at time (t-1);
correct the predicted pose information using the first relative pose information and the vehicle body sensor data, and take the corrected predicted pose information as the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, the processor is further configured to:
obtain pose information of the vehicle body at time (t-1);
fuse the first relative pose information with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t;
perform graph optimization on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, the vehicle body sensor includes at least one of the following: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, and a gyroscope.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a device for determining vehicle body pose, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the method for determining vehicle body pose.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor, the processor is enabled to execute the method for determining vehicle body pose.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: the method and device for determining vehicle body pose and the mapping method provided by the embodiments of the present disclosure can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data for positioning, so as to determine the vehicle body pose information. Since the three-dimensional laser point cloud data contains rich environmental information around the vehicle body, while the vehicle body sensor data contains feature information of the vehicle body itself, fusing the environmental information around the vehicle body with the vehicle body feature information can greatly reduce the accumulated error and yield more accurate vehicle body pose information. After more accurate vehicle body pose information is obtained, a more accurate and reliable high-precision map applied to an unmanned driving environment can be drawn based on the vehicle body pose information.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification, show embodiments consistent with the present disclosure, and are used together with the specification to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a method for determining vehicle body pose according to an exemplary embodiment.
Fig. 2 is a flowchart of a method for determining vehicle body pose according to an exemplary embodiment.
Fig. 3 is a flowchart of a method for determining vehicle body pose according to an exemplary embodiment.
Fig. 4 is a block diagram of a device for determining vehicle body pose according to an exemplary embodiment.
Fig. 5 is a block diagram of a device according to an exemplary embodiment.
Fig. 6 is a block diagram of a device according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, and examples thereof are shown in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numbers in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
To help those skilled in the art understand the technical solutions provided by the embodiments of the present application, the technical environment in which the technical solutions are implemented is described first.
In the related art, when producing high-precision maps, vehicle body pose information is usually determined by fusing odometer and IMU positioning. However, both the odometer data and the IMU data are sensor data based on the characteristics of the vehicle body itself; if the vehicle body characteristics produce a small error, the odometer data and the IMU data may produce consistent errors. Therefore, as time goes on, fusion positioning based on the odometer and the IMU may cause the determined vehicle body pose information to carry a large accumulated error.
In view of the above technical needs, the method for determining vehicle body pose provided by the present disclosure can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data for positioning, so as to determine the vehicle body pose information. Since the three-dimensional laser point cloud data contains rich environmental information around the vehicle body, while the vehicle body sensor data contains feature information of the vehicle body itself, fusing the environmental information around the vehicle body with the vehicle body feature information can greatly reduce the accumulated error and yield more accurate vehicle body pose information.
The method for determining vehicle body pose described in the present disclosure is described in detail below with reference to the accompanying drawings. Fig. 1 is a flowchart of an embodiment of the method for determining vehicle body pose provided by the present disclosure. Although the present disclosure provides method operation steps as shown in the following embodiments or drawings, more or fewer operation steps may be included in the method on a conventional basis or without creative effort. For steps that have no necessary causal relationship logically, the execution order of these steps is not limited to the execution order provided by the embodiments of the present disclosure.
Specifically, an embodiment of the method for determining vehicle body pose provided by the present disclosure is shown in Fig. 1 and may include:
In step 101, three-dimensional laser point cloud data and vehicle body sensor data of the vehicle body at time t are obtained;
In step 103, the three-dimensional laser point cloud data is used to determine first relative pose information of the vehicle body relative to time (t-1);
In step 105, the first relative pose information is fused with the vehicle body sensor data to determine pose information of the vehicle body at time t.
In the embodiments of the present disclosure, in the process of constructing a point cloud map, the point cloud data collected at time t needs to be associated with the pose information of the vehicle body, and the point cloud map can be generated by fusing the point cloud data and the vehicle body pose information corresponding to multiple discrete time points; therefore, accurately determining the vehicle body pose information corresponding to time t plays an important role in constructing the point cloud map. On this basis, the three-dimensional laser point cloud data and the vehicle body sensor data of the vehicle body at time t can be obtained. The three-dimensional laser point cloud data may include three-dimensional point cloud data of the environment around the vehicle body scanned by a lidar. The lidar may include a multi-line radar, a unidirectional radar, and the like, which is not limited in the present disclosure. The vehicle body sensor data may include sensing data, based on the characteristics of the vehicle body itself, obtained by sensors mounted on the vehicle body. The vehicle body characteristics may include, for example, the inclination angle of the vehicle body, the wheel rotation speed, the acceleration, the three-axis attitude angle, the heading, and so on. On this basis, the vehicle body sensor data may include at least one of the following: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, and gyroscope data. The IMU data can describe the angular velocity and acceleration of the vehicle body in three-dimensional space, the odometer data can describe the rotation speed of the wheels, the electronic compass data can describe the heading of the vehicle body, the inclination sensor data can describe the inclination angle of the vehicle body relative to the horizontal plane, and the gyroscope data can describe the angular velocity of the vehicle body in three-dimensional space. Of course, the vehicle body sensor data may include data obtained by any sensor capable of sensing the characteristics of the vehicle body, which is not limited in the present disclosure.
In the embodiments of the present disclosure, after the three-dimensional laser point cloud data of the vehicle body at time t is obtained, the first relative pose information of the vehicle body relative to time (t-1) can be determined based on the three-dimensional laser point cloud data. As shown in Fig. 2, determining the first relative pose information may include:
In step 201, three-dimensional laser point cloud data of the vehicle body at time (t-1) is obtained;
In step 203, point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1) is extracted, respectively;
In step 205, the first relative pose information of the vehicle body at time t relative to time (t-1) is determined based on the point cloud feature information of the vehicle body at time t and at time (t-1).
In the embodiments of the present disclosure, the three-dimensional laser point cloud data of the vehicle body at time (t-1) can be obtained, and the point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1) can be extracted respectively. In one embodiment, the point cloud feature information may include feature information of boundary points, boundary lines, and boundary surfaces in the three-dimensional laser point cloud data. In one example, the point cloud feature information may include feature information of various boundaries such as road boundaries, traffic lights, signs, outlines of landmark buildings, and outlines of obstacles. After the point cloud feature information corresponding to time t and time (t-1) is obtained, the first relative pose information of the vehicle body at time t relative to time (t-1) can be determined based on the point cloud feature information. Since the three-dimensional laser point cloud data contains distance information within the scanning plane, the first relative pose information can be calculated based on the distance information. The first relative pose information may include the spatial translation and attitude change of the vehicle body at time t relative to time (t-1); in one example, the spatial translation may be expressed as (Δx, Δy, Δz) and the attitude change may be expressed as (Δφ, Δθ, Δψ). In one embodiment of the present disclosure, the registration between the three-dimensional laser point cloud data at time t and at time (t-1) can be implemented based on the LOAM algorithm, the RANSAC algorithm, or the like, and the first relative pose information between the two times can be calculated.
After the first relative pose information of the vehicle body relative to time (t-1) is obtained, the first relative pose information can be fused with the vehicle body sensor data to determine the pose information of the vehicle body at time t. In one embodiment, as shown in Fig. 3, the fusion may specifically include:
In step 301, the pose information of the vehicle body at time (t-1) is obtained;
In step 303, predicted pose information of the vehicle body at time t is obtained by prediction from the pose information of the vehicle body at time (t-1);
In step 305, the predicted pose information is corrected using the first relative pose information and the vehicle body sensor data, and the corrected predicted pose information is taken as the pose information of the vehicle body at time t.
In the embodiments of the present disclosure, the data obtained by multiple sensors can be fused to calculate more accurate pose information of the vehicle body at time t. In one embodiment, the predicted pose information of the vehicle body at time t can be obtained by prediction based on the pose information of the vehicle body at time (t-1). Of course, the predicted pose information obtained by prediction can be determined based on the state information of the vehicle body itself, but the vehicle body may be affected by various external conditions while travelling between time (t-1) and time t. On this basis, the predicted pose information can be corrected using the first relative pose information and the vehicle body sensor data, and the corrected predicted pose information can be taken as the pose information of the vehicle body at time t. It should be noted that the embodiments of the present disclosure can perform this calculation using the extended Kalman filter algorithm, and any variant algorithm based on the extended Kalman filter algorithm falls within the protection scope of the embodiments of the present disclosure.
In the embodiments of the present disclosure, features of visual sensor data can also be added in the data fusion process. The visual sensor data can contain rich shape features and texture features of the environment around the vehicle body; therefore, the visual sensor data and the three-dimensional laser point cloud data can complement each other, so that the fused data contains more feature data and more accurate positioning can be achieved. In the embodiments of the present disclosure, the visual sensor data may include data obtained by a visual sensor, and the visual sensor may include a monocular camera device, a binocular camera device, a depth camera device, and so on. In the embodiments of the present disclosure, in the process of fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t, the visual sensor data of the vehicle body at time t can be obtained, and the visual sensor data can be used to determine second relative pose information of the vehicle body relative to time (t-1). Then, the first relative pose information and the second relative pose information can be fused with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
In the embodiments of the present disclosure, in the process of determining the second relative pose information, the visual sensor data of the vehicle body at time (t-1) can be obtained. Then, the visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1) can be extracted respectively. Finally, the second relative pose information of the vehicle body at time t relative to time (t-1) can be determined based on the visual feature information of the vehicle body at time t and at time (t-1). Likewise, the visual feature information may include feature information of boundary points, boundary lines, and boundary surfaces in the visual sensor data. In some examples, the registration between the visual sensor data at time t and at time (t-1) can be implemented based on the SURF algorithm, the HOG algorithm, the RANSAC algorithm, or the like, and the second relative pose information between the two times can be calculated.
In the embodiments of the present disclosure, in the process of fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t, the first relative pose information can be fused with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t. Then, graph optimization can be performed on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t. In one embodiment, the graph optimization of the pose information at time (t-1) and the preliminary pose information at time t can be implemented based on the GraphSLAM framework; in the GraphSLAM framework, the accumulated error in the preliminary pose information can be reduced or even eliminated by dimensionality reduction and optimization of the information matrix.
The method for determining vehicle body pose provided by the embodiments of the present disclosure can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data for positioning, so as to determine the vehicle body pose information. Since the three-dimensional laser point cloud data contains rich environmental information around the vehicle body, while the vehicle body sensor data contains feature information of the vehicle body itself, fusing the environmental information around the vehicle body with the vehicle body feature information can greatly reduce the accumulated error and yield more accurate vehicle body pose information. After more accurate vehicle body pose information is obtained, a more accurate and reliable high-precision map applied to an unmanned driving environment can be drawn based on the vehicle body pose information.
Another aspect of the present disclosure further provides a mapping method, which can use the method for determining vehicle body pose described in any of the above embodiments to determine the pose information of the vehicle body at multiple times, and draw and generate a point cloud map based on the three-dimensional laser point cloud data and the pose information of the vehicle body at the multiple times.
Another aspect of the present disclosure further provides a device for determining vehicle body pose. Fig. 4 is a block diagram of a device 400 for determining vehicle body pose according to an exemplary embodiment. Referring to Fig. 4, the device includes a lidar 401, a vehicle body sensor 403, and a processor 405, where:
the lidar 401 is configured to obtain three-dimensional laser point cloud data of the vehicle body at time t;
the vehicle body sensor 403 is configured to obtain vehicle body sensor data of the vehicle body at time t;
the processor 405 is configured to use the three-dimensional laser point cloud data to determine first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensor data to determine pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure,
the lidar is further configured to obtain three-dimensional laser point cloud data of the vehicle body at time (t-1);
correspondingly, the processor is further configured to:
extract point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively;
determine, based on the point cloud feature information of the vehicle body at time t and at time (t-1), the first relative pose information of the vehicle body at time t relative to time (t-1).
Optionally, in an embodiment of the present disclosure, the device further includes:
a visual sensor configured to obtain visual sensor data of the vehicle body at time t and time (t-1);
correspondingly, the processor is further configured to:
use the visual sensor data to determine second relative pose information of the vehicle body relative to time (t-1);
fuse the first relative pose information and the second relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, the processor is further configured to:
extract visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1), respectively;
determine, based on the visual feature information of the vehicle body at time t and at time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1).
Optionally, in an embodiment of the present disclosure, the processor is further configured to:
obtain pose information of the vehicle body at time (t-1);
predict predicted pose information of the vehicle body at time t from the pose information of the vehicle body at time (t-1);
correct the predicted pose information using the first relative pose information and the vehicle body sensor data, and take the corrected predicted pose information as the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, the processor is further configured to:
obtain pose information of the vehicle body at time (t-1);
fuse the first relative pose information with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t;
perform graph optimization on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
Optionally, in an embodiment of the present disclosure, the vehicle body sensor includes at least one of the following: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, and a gyroscope.
Fig. 5 is a block diagram of a device 700 for resource allocation indication according to an exemplary embodiment. For example, the device 700 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 5, the device 700 may include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
The processing component 702 generally controls the overall operation of the device 700, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 702 may include one or more processors 720 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 702 may include one or more modules to facilitate interaction between the processing component 702 and other components. For example, the processing component 702 may include a multimedia module to facilitate interaction between the multimedia component 708 and the processing component 702.
The memory 704 is configured to store various types of data to support operation of the device 700. Examples of such data include instructions for any application or method operating on the device 700, contact data, phone book data, messages, pictures, videos, and so on. The memory 704 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 706 provides power to the various components of the device 700. The power component 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 700.
The multimedia component 708 includes a screen that provides an output interface between the device 700 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch-sensitive display to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 708 includes a front camera and/or a rear camera. When the device 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 710 is configured to output and/or input audio signals. For example, the audio component 710 includes a microphone (MIC); when the device 700 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 704 or sent via the communication component 716. In some embodiments, the audio component 710 further includes a speaker for outputting audio signals.
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 714 includes one or more sensors for providing status assessments of various aspects of the device 700. For example, the sensor component 714 can detect the on/off state of the device 700 and the relative positioning of components, for example, the display and the keypad of the device 700; the sensor component 714 can also detect a position change of the device 700 or a component of the device 700, the presence or absence of contact between the user and the device 700, the orientation or acceleration/deceleration of the device 700, and a temperature change of the device 700. The sensor component 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the device 700 and other devices. The device 700 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 704 including instructions, which can be executed by the processor 720 of the device 700 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Fig. 6 is a block diagram of a device 800 for information processing according to an exemplary embodiment. For example, the device 800 may be provided as a server. Referring to Fig. 6, the device 800 includes a processing component 822, which further includes one or more processors, and memory resources represented by a memory 832 for storing instructions executable by the processing component 822, such as application programs. The application programs stored in the memory 832 may include one or more modules, each of which corresponds to a set of instructions. In addition, the processing component 822 is configured to execute instructions to perform the method described in any of the above embodiments.
The device 800 may also include a power component 826 configured to perform power management of the device 800, a wired or wireless network interface 850 configured to connect the device 800 to a network, and an input/output (I/O) interface 858. The device 800 may operate based on an operating system stored in the memory 832, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, or the like.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 832 including instructions, which can be executed by the processing component 822 of the device 800 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or conventional technical means in this technical field that are not disclosed in the present disclosure. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (17)

  1. A method for determining vehicle body pose, characterized by comprising:
    obtaining three-dimensional laser point cloud data and vehicle body sensor data of a vehicle body at time t;
    using the three-dimensional laser point cloud data to determine first relative pose information of the vehicle body relative to time (t-1);
    fusing the first relative pose information with the vehicle body sensor data to determine pose information of the vehicle body at time t.
  2. The method for determining vehicle body pose according to claim 1, wherein using the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1) comprises:
    obtaining three-dimensional laser point cloud data of the vehicle body at time (t-1);
    extracting point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively;
    determining, based on the point cloud feature information of the vehicle body at time t and at time (t-1), the first relative pose information of the vehicle body at time t relative to time (t-1).
  3. The method for determining vehicle body pose according to claim 1, wherein fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t comprises:
    obtaining visual sensor data of the vehicle body at time t and time (t-1);
    using the visual sensor data to determine second relative pose information of the vehicle body relative to time (t-1);
    fusing the first relative pose information and the second relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  4. The method for determining vehicle body pose according to claim 3, wherein using the visual sensor data to determine the second relative pose information of the vehicle body relative to time (t-1) comprises:
    extracting visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1), respectively;
    determining, based on the visual feature information of the vehicle body at time t and at time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1).
  5. The method for determining vehicle body pose according to claim 1, wherein fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t comprises:
    obtaining pose information of the vehicle body at time (t-1);
    predicting predicted pose information of the vehicle body at time t from the pose information of the vehicle body at time (t-1);
    correcting the predicted pose information using the first relative pose information and the vehicle body sensor data, and taking the corrected predicted pose information as the pose information of the vehicle body at time t.
  6. The method for determining vehicle body pose according to claim 1, wherein fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t comprises:
    obtaining pose information of the vehicle body at time (t-1);
    fusing the first relative pose information with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t;
    performing graph optimization on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
  7. The method for determining vehicle body pose according to any one of claims 1-6, wherein the vehicle body sensor data comprises at least one of the following: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, and gyroscope data.
  8. A mapping method, characterized by comprising:
    determining pose information of a vehicle body at multiple times using the method according to any one of claims 1-7;
    drawing and generating a point cloud map based on the three-dimensional laser point cloud data and the pose information of the vehicle body at the multiple times.
  9. A device for determining vehicle body pose, characterized by comprising:
    a lidar configured to obtain three-dimensional laser point cloud data of a vehicle body at time t;
    a vehicle body sensor configured to obtain vehicle body sensor data of the vehicle body at time t;
    a processor configured to use the three-dimensional laser point cloud data to determine first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensor data to determine pose information of the vehicle body at time t.
  10. The device for determining vehicle body pose according to claim 9, wherein:
    the lidar is further configured to obtain three-dimensional laser point cloud data of the vehicle body at time (t-1);
    correspondingly, the processor is further configured to:
    extract point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively;
    determine, based on the point cloud feature information of the vehicle body at time t and at time (t-1), the first relative pose information of the vehicle body at time t relative to time (t-1).
  11. The device for determining vehicle body pose according to claim 9, further comprising:
    a visual sensor configured to obtain visual sensor data of the vehicle body at time t and time (t-1);
    correspondingly, the processor is further configured to:
    use the visual sensor data to determine second relative pose information of the vehicle body relative to time (t-1);
    fuse the first relative pose information and the second relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at time t.
  12. The device for determining vehicle body pose according to claim 11, wherein the processor is further configured to:
    extract visual feature information corresponding to the visual sensor data of the vehicle body at time t and at time (t-1), respectively;
    determine, based on the visual feature information of the vehicle body at time t and at time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1).
  13. The device for determining vehicle body pose according to claim 9, wherein the processor is further configured to:
    obtain pose information of the vehicle body at time (t-1);
    predict predicted pose information of the vehicle body at time t from the pose information of the vehicle body at time (t-1);
    correct the predicted pose information using the first relative pose information and the vehicle body sensor data, and take the corrected predicted pose information as the pose information of the vehicle body at time t.
  14. The device for determining vehicle body pose according to claim 9, wherein the processor is further configured to:
    obtain pose information of the vehicle body at time (t-1);
    fuse the first relative pose information with the vehicle body sensor data to generate preliminary pose information of the vehicle body at time t;
    perform graph optimization on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
  15. The device for determining vehicle body pose according to any one of claims 9-14, wherein the vehicle body sensor comprises at least one of the following: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, and a gyroscope.
  16. A device for determining vehicle body pose, characterized by comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to execute the method according to any one of claims 1-7 or claim 8.
  17. A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor, the processor is enabled to execute the method according to any one of claims 1-7 or claim 8.
PCT/CN2019/123711 2019-02-20 2019-12-06 确定车体位姿的方法及装置、制图方法 WO2020168787A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910126956.9 2019-02-20
CN201910126956.9A CN109870157B (zh) 2019-02-20 2019-02-20 确定车体位姿的方法及装置、制图方法

Publications (1)

Publication Number Publication Date
WO2020168787A1 true WO2020168787A1 (zh) 2020-08-27

Family

ID=66918971

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123711 WO2020168787A1 (zh) 2019-02-20 2019-12-06 确定车体位姿的方法及装置、制图方法

Country Status (2)

Country Link
CN (1) CN109870157B (zh)
WO (1) WO2020168787A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112781594A (zh) * 2021-01-11 2021-05-11 桂林电子科技大学 基于imu耦合的激光雷达迭代最近点改进算法
CN112902951A (zh) * 2021-01-21 2021-06-04 深圳市镭神智能系统有限公司 一种行驶设备的定位方法、装置、设备及存储介质
CN112948411A (zh) * 2021-04-15 2021-06-11 深圳市慧鲤科技有限公司 位姿数据的处理方法及接口、装置、系统、设备和介质
WO2023097873A1 (zh) * 2021-11-30 2023-06-08 上海仙途智能科技有限公司 用于车辆定位检查的方法、装置、存储介质及设备

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109870157B (zh) * 2019-02-20 2021-11-02 苏州风图智能科技有限公司 确定车体位姿的方法及装置、制图方法
CN111443359B (zh) * 2020-03-26 2022-06-07 达闼机器人股份有限公司 定位方法、装置及设备
CN116106927A (zh) * 2020-03-27 2023-05-12 深圳市镭神智能系统有限公司 一种基于激光雷达的二维栅格地图构建方法、介质和系统
CN113494911B (zh) * 2020-04-02 2024-06-07 宝马股份公司 对车辆进行定位的方法和系统
CN112781586B (zh) * 2020-12-29 2022-11-04 上海商汤临港智能科技有限公司 一种位姿数据的确定方法、装置、电子设备及车辆
CN113075687A (zh) * 2021-03-19 2021-07-06 长沙理工大学 一种基于多传感器融合的电缆沟智能巡检机器人定位方法
CN113218389B (zh) * 2021-05-24 2024-05-17 北京航迹科技有限公司 一种车辆定位方法、装置、存储介质及计算机程序产品
CN114526745B (zh) * 2022-02-18 2024-04-12 太原市威格传世汽车科技有限责任公司 一种紧耦合激光雷达和惯性里程计的建图方法及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160063717A1 (en) * 2014-08-26 2016-03-03 Kabushiki Kaisha Topcon Point cloud position data processing device, point cloud position data processing system, point cloud position data processing method, and program therefor
CN105607071A (zh) * 2015-12-24 2016-05-25 百度在线网络技术(北京)有限公司 一种室内定位方法及装置
CN108036793A (zh) * 2017-12-11 2018-05-15 北京奇虎科技有限公司 基于点云的定位方法、装置及电子设备
CN108225345A (zh) * 2016-12-22 2018-06-29 乐视汽车(北京)有限公司 可移动设备的位姿确定方法、环境建模方法及装置
CN109214248A (zh) * 2017-07-04 2019-01-15 百度在线网络技术(北京)有限公司 用于识别无人驾驶车辆的激光点云数据的方法和装置
CN109870157A (zh) * 2019-02-20 2019-06-11 苏州风图智能科技有限公司 确定车体位姿的方法及装置、制图方法

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104374376B (zh) * 2014-11-05 2016-06-15 北京大学 一种车载三维测量系统装置及其应用
CN106406338B (zh) * 2016-04-14 2023-08-18 中山大学 一种基于激光测距仪的全向移动机器人的自主导航装置及其方法
CN106123890A (zh) * 2016-06-14 2016-11-16 中国科学院合肥物质科学研究院 一种多传感器数据融合的机器人定位方法
CN106969763B (zh) * 2017-04-07 2021-01-01 百度在线网络技术(北京)有限公司 用于确定无人驾驶车辆的偏航角的方法和装置
CN108732603B (zh) * 2017-04-17 2020-07-10 百度在线网络技术(北京)有限公司 用于定位车辆的方法和装置
CN108732584B (zh) * 2017-04-17 2020-06-30 百度在线网络技术(北京)有限公司 用于更新地图的方法和装置
CN107340522B (zh) * 2017-07-10 2020-04-17 浙江国自机器人技术有限公司 一种激光雷达定位的方法、装置及系统
CN108253958B (zh) * 2018-01-18 2020-08-11 亿嘉和科技股份有限公司 一种稀疏环境下的机器人实时定位方法
CN108759815B (zh) * 2018-04-28 2022-11-15 温州大学激光与光电智能制造研究院 一种用于全局视觉定位方法中的信息融合组合导航方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160063717A1 (en) * 2014-08-26 2016-03-03 Kabushiki Kaisha Topcon Point cloud position data processing device, point cloud position data processing system, point cloud position data processing method, and program therefor
CN105607071A (zh) * 2015-12-24 2016-05-25 百度在线网络技术(北京)有限公司 一种室内定位方法及装置
CN108225345A (zh) * 2016-12-22 2018-06-29 乐视汽车(北京)有限公司 可移动设备的位姿确定方法、环境建模方法及装置
CN109214248A (zh) * 2017-07-04 2019-01-15 百度在线网络技术(北京)有限公司 用于识别无人驾驶车辆的激光点云数据的方法和装置
CN108036793A (zh) * 2017-12-11 2018-05-15 北京奇虎科技有限公司 基于点云的定位方法、装置及电子设备
CN109870157A (zh) * 2019-02-20 2019-06-11 苏州风图智能科技有限公司 确定车体位姿的方法及装置、制图方法

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112781594A (zh) * 2021-01-11 2021-05-11 桂林电子科技大学 基于imu耦合的激光雷达迭代最近点改进算法
CN112781594B (zh) * 2021-01-11 2022-08-19 桂林电子科技大学 基于imu耦合的激光雷达迭代最近点改进算法
CN112902951A (zh) * 2021-01-21 2021-06-04 深圳市镭神智能系统有限公司 一种行驶设备的定位方法、装置、设备及存储介质
CN112948411A (zh) * 2021-04-15 2021-06-11 深圳市慧鲤科技有限公司 位姿数据的处理方法及接口、装置、系统、设备和介质
WO2023097873A1 (zh) * 2021-11-30 2023-06-08 上海仙途智能科技有限公司 用于车辆定位检查的方法、装置、存储介质及设备

Also Published As

Publication number Publication date
CN109870157B (zh) 2021-11-02
CN109870157A (zh) 2019-06-11

Similar Documents

Publication Publication Date Title
WO2020168787A1 (zh) 确定车体位姿的方法及装置、制图方法
WO2021128777A1 (en) Method, apparatus, device, and storage medium for detecting travelable region
US20200357138A1 (en) Vehicle-Mounted Camera Self-Calibration Method and Apparatus, and Storage Medium
CN110967011B (zh) 一种定位方法、装置、设备及存储介质
US10043314B2 (en) Display control method and information processing apparatus
US8972174B2 (en) Method for providing navigation information, machine-readable storage medium, mobile terminal, and server
CN109725329B (zh) 一种无人车定位方法及装置
JP2018535402A (ja) 異なる分解能を有するセンサーの出力を融合するシステム及び方法
CN110986930B (zh) 设备定位方法、装置、电子设备及存储介质
US20160203629A1 (en) Information display apparatus, and method for displaying information
EP3825960A1 (en) Method and device for obtaining localization information
US20200265725A1 (en) Method and Apparatus for Planning Navigation Region of Unmanned Aerial Vehicle, and Remote Control
KR102569214B1 (ko) 이동단말기 및 그 제어방법
CN110865405A (zh) 融合定位方法及装置、移动设备控制方法及电子设备
WO2023077754A1 (zh) 目标跟踪方法、装置及存储介质
CN110633336B (zh) 激光数据搜索范围的确定方法、装置及存储介质
CN114608591B (zh) 车辆定位方法、装置、存储介质、电子设备、车辆及芯片
JP2015049039A (ja) ナビゲーション装置、及びナビゲーションプログラム
CN116359942A (zh) 点云数据的采集方法、设备、存储介质及程序产品
WO2019233299A1 (zh) 地图构建方法、装置及计算机可读存储介质
CN114623836A (zh) 车辆位姿确定方法、装置及车辆
WO2024087456A1 (zh) 确定朝向信息以及自动驾驶车辆
CN111369566B (zh) 确定路面消隐点位置的方法、装置、设备及存储介质
CN116540252B (zh) 基于激光雷达的速度确定方法、装置、设备及存储介质
CN113532468B (zh) 一种导航方法和相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19916185

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19916185

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19916185

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/03/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19916185

Country of ref document: EP

Kind code of ref document: A1