WO2020168787A1 - Method and device for determining vehicle body pose, and mapping method - Google Patents
Method and device for determining vehicle body pose, and mapping method
- Publication number: WO2020168787A1 (international application PCT/CN2019/123711)
- Authority: WIPO (PCT)
- Prior art keywords: vehicle body, time, pose information, relative, information
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/18—Stabilised platforms, e.g. by gyroscope
Definitions
- the present disclosure relates to the field of unmanned driving technology, and in particular to a method and device for determining the pose of a vehicle body, and a mapping method.
- Unmanned driving technology is an important transformation in transportation, and it is of great significance to traffic safety and convenience. The technology continues to develop, so it will not be long before unmanned vehicles replace traditionally driven ones.
- the production of high-precision maps is an important part of unmanned driving technology.
- a high-precision map refers to a highly accurate, finely detailed map whose accuracy often needs to reach the decimeter or even centimeter level. Therefore, the production of high-precision maps cannot rely on GPS positioning technology as traditional electronic maps do: GPS positioning can only achieve meter-level accuracy, and high-precision maps require more sophisticated positioning technology.
- vehicle body pose information is often determined based on the fusion positioning method of odometer and inertial measurement unit (IMU).
- the present disclosure provides a method and device for determining the pose of a vehicle body, and a mapping method.
- a method for determining the pose of a vehicle body including:
- the first relative pose information is fused with the sensor data of the vehicle body to determine the pose information of the vehicle body at the time t.
- the using the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1) includes:
- the fusing of the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at the time t includes:
- the first relative pose information and the second relative pose information are fused with the sensor data of the vehicle body to determine the pose information of the vehicle body at the time t.
- the using the visual sensor data to determine the second relative pose information of the vehicle body relative to time (t-1) includes:
- the fusing of the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at the time t includes:
- the vehicle body sensor data includes at least one of the following: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, and gyroscope data.
- a mapping method, including:
- a device for determining the pose of a vehicle body including:
- a lidar, used to obtain the three-dimensional laser point cloud data of the vehicle body at time t;
- a vehicle body sensor, used to obtain the vehicle body sensor data of the vehicle body at time t;
- the processor is configured to use the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at the time t.
- the lidar is also used to obtain the three-dimensional laser point cloud data of the vehicle body at time (t-1);
- the processor is also used for:
- the device further includes:
- the vision sensor is used to obtain the vision sensor data of the vehicle body at time t and (t-1);
- the processor is also used for:
- the first relative pose information and the second relative pose information are fused with the sensor data of the vehicle body to determine the pose information of the vehicle body at the time t.
- the processor is further configured to:
- the processor is further configured to:
- the processor is further configured to:
- Graph optimization processing is performed on the pose information of the vehicle body at the time (t-1) and the preliminary pose information at the time t to generate the pose information of the vehicle body at the time t.
- the vehicle body sensor includes at least one of the following: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, and a gyroscope.
- a device for determining the pose of a vehicle body including:
- a memory for storing processor executable instructions
- the processor is configured to execute the method for determining the pose of the vehicle.
- a non-transitory computer-readable storage medium which when the instructions in the storage medium are executed by a processor, enables the processor to execute the method for determining the pose of a vehicle.
- the method and device for determining the pose of the vehicle body can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data for positioning, to determine the pose information of the vehicle body. Since the three-dimensional laser point cloud data contains rich environmental information around the vehicle body, and the vehicle body sensor data contains the vehicle body's own feature information, fusing the environmental information with the vehicle body feature information can greatly reduce the cumulative error and yield more accurate vehicle body pose information. With more accurate vehicle body pose information, a more accurate and reliable high-precision map for an unmanned driving environment can then be produced based on that pose information.
- Fig. 1 is a flowchart showing a method for determining the pose of a vehicle body according to an exemplary embodiment.
- Fig. 2 is a flow chart showing a method for determining the pose of a vehicle body according to an exemplary embodiment.
- Fig. 3 is a flow chart showing a method for determining the pose of a vehicle body according to an exemplary embodiment.
- Fig. 4 is a block diagram showing a device for determining the posture of a vehicle body according to an exemplary embodiment.
- Fig. 5 is a block diagram showing a device according to an exemplary embodiment.
- Fig. 6 is a block diagram showing a device according to an exemplary embodiment.
- vehicle body pose information is often determined based on the fusion positioning of odometer and IMU.
- both the odometer data and the IMU data are sensor data based on the characteristics of the vehicle body itself. If the vehicle body characteristics introduce a small error, the odometer data and IMU data may share that same error. Therefore, as time progresses, the fusion positioning method based on the odometer and IMU may accumulate a large error in the determined vehicle body pose information.
- the method for determining the vehicle body pose provided in the present disclosure can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data to determine the vehicle body pose information. Since the three-dimensional laser point cloud data contains rich environmental information around the vehicle body, and the vehicle body sensor data contains the vehicle body's own feature information, fusing the environmental information with the vehicle body feature information can greatly reduce the cumulative error and yield more accurate vehicle body pose information.
- Fig. 1 is a method flowchart of an embodiment of a method for determining a vehicle body pose provided by the present disclosure.
- the present disclosure provides method operation steps as shown in the following embodiments or drawings; based on conventional practice and without creative labor, the method may include more or fewer operation steps. For steps with no necessary logical causality, the execution order is not limited to the order provided by the embodiments of the present disclosure.
- an embodiment of the method for determining the pose of a vehicle body provided by the present disclosure is shown in FIG. 1, and may include:
- step 101: obtain the three-dimensional laser point cloud data of the vehicle body at time t and the vehicle body sensor data;
- step 103: using the three-dimensional laser point cloud data, determine the first relative pose information of the vehicle body relative to time (t-1);
- step 105: fuse the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at the time t.
- in the process of constructing a point cloud map, the point cloud data collected at time t must be associated with the pose information of the vehicle body at that time; the point cloud map can then be generated by fusing the point cloud data and vehicle body pose information from multiple discrete time points. Therefore, accurately determining the vehicle body pose information corresponding to time t plays an important role in constructing the point cloud map.
- the three-dimensional laser point cloud data of the vehicle body at time t and the vehicle body sensor data can be obtained.
- the three-dimensional laser point cloud data may include three-dimensional point cloud data of the surrounding environment of the vehicle body scanned by a laser radar.
- the lidar may include multi-line radar, single-line radar, etc., and the present disclosure is not limited herein.
- the vehicle body sensor data may include sensory data based on the characteristics of the vehicle body acquired by a sensor installed on the vehicle body.
- the characteristics of the vehicle body may include, for example, the inclination angle of the vehicle body, wheel rotation speed, acceleration, three-axis attitude angle, heading, and so on.
- the vehicle body sensor data may include at least one of the following: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, and gyroscope data.
- IMU data can be used to describe the angular velocity and acceleration of the car body in three-dimensional space
- the odometer data can be used to describe the rotation speed of the wheel
- the electronic compass data can be used to describe the heading of the car body
- the inclination sensor data can be used to describe the inclination angle of the vehicle body relative to the horizontal plane
- the gyroscope data can be used to describe the angular velocity of the vehicle body in three-dimensional space.
- the vehicle body sensor data may include data acquired by any sensor capable of sensing the characteristics of the vehicle body, and the disclosure is not limited herein.
- the first relative pose information of the vehicle body relative to time (t-1) may be determined based on the three-dimensional laser point cloud data. The process of determining the first relative pose information, as shown in FIG. 2, may include:
- step 201: obtain the three-dimensional laser point cloud data of the vehicle body at time (t-1);
- step 203: respectively extract the point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and time (t-1);
- step 205: based on the point cloud feature information of the vehicle body at time t and time (t-1), determine the first relative pose information of the vehicle body at time t relative to time (t-1).
- the three-dimensional laser point cloud data of the vehicle body at time (t-1) can be obtained, and the point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and time (t-1) can be extracted respectively.
- the point cloud feature information may include the feature information of boundary points, boundary lines, and boundary surfaces in the three-dimensional laser point cloud data.
- the point cloud feature information may include various boundary feature information such as road boundaries, traffic lights, signs, landmarks, and obstacles.
- the first relative pose information can be calculated based on the distance information.
- the first relative pose information may include the spatial translation and attitude change of the vehicle body at time t relative to time (t-1).
- the spatial translation may be expressed as (Δx, Δy, Δz)
- the attitude change may be expressed as (Δα, Δβ, Δγ).
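As a concrete illustration of this representation, the translation (Δx, Δy, Δz) and attitude change (Δα, Δβ, Δγ) can be packed into a single 4x4 homogeneous transform, which makes composing relative poses across successive time steps a matrix product. The patent does not fix an Euler-angle convention, so the Z-Y-X order below is an assumption, and the function name is purely illustrative:

```python
import numpy as np

def relative_pose_to_matrix(dx, dy, dz, da, db, dg):
    """Build a 4x4 homogeneous transform from a translation (dx, dy, dz)
    and Euler angles (da, db, dg), assuming a Z-Y-X rotation order
    (this convention is an assumption; the patent does not specify one)."""
    ca, sa = np.cos(da), np.sin(da)
    cb, sb = np.cos(db), np.sin(db)
    cg, sg = np.cos(dg), np.sin(dg)
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [dx, dy, dz]
    return T

# Relative poses at successive times compose by matrix multiplication:
T_01 = relative_pose_to_matrix(1.0, 0.0, 0.0, 0.0, 0.0, np.pi / 2)
T_12 = relative_pose_to_matrix(1.0, 0.0, 0.0, 0.0, 0.0, 0.0)
T_02 = T_01 @ T_12  # motion over two steps: one metre forward, turn, one metre forward
```

Because the first step ends with a 90-degree yaw, the second one-metre advance lands the body at (1, 1, 0) in the starting frame.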
- the registration between the three-dimensional laser point cloud data at time t and time (t-1) can be realized based on the LOAM algorithm, the RANSAC algorithm, etc., and the first relative pose information between the two time points can then be calculated.
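The LOAM and RANSAC pipelines named above are too involved for a short excerpt, but the core registration step they share, recovering the rotation R and translation t that best align two matched point sets, has a closed-form SVD solution (often called the Kabsch method). The sketch below is a minimal, hypothetical stand-in for that step, assuming correspondences between the scans at time (t-1) and time t are already known:

```python
import numpy as np

def rigid_align(P, Q):
    """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2 over matched 3D points.
    P, Q: (N, 3) arrays of corresponding points from the scans at (t-1) and t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: rotate/translate a toy "scan" and recover the motion.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 0.1])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
```

With noiseless correspondences the recovery is exact; a real pipeline would first find correspondences (and reject outliers, e.g. with RANSAC) before solving this subproblem.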
- the first relative pose information of the vehicle body relative to time (t-1) can be fused with the vehicle body sensor data to determine the pose information of the vehicle body at the time t.
- the specific method of fusion may include:
- step 301: obtain the pose information of the vehicle body at time (t-1);
- step 303: predict the pose information of the vehicle body at time t using the pose information of the vehicle body at time (t-1), to obtain predicted pose information;
- step 305: correct the predicted pose information using the first relative pose information and the vehicle body sensor data, and use the corrected predicted pose information as the pose information of the vehicle body at time t.
- data obtained by multiple sensors can be fused to calculate the more accurate pose information of the vehicle body at time t.
- the predicted pose information of the vehicle body at the time t may be predicted based on the pose information of the vehicle body at the time (t-1).
- the predicted pose information can be determined based on the state information of the vehicle body itself, but the vehicle body may be affected by various external conditions while traveling between time (t-1) and time t.
- the predicted pose information can be corrected using the first relative pose information and the vehicle body sensor data, and the corrected predicted pose information can be used as the pose information of the vehicle body at time t.
- the embodiments of the present disclosure can perform this calculation using the extended Kalman filter algorithm, and any variant algorithm based on the extended Kalman filter also falls within the protection scope of the embodiments of the present disclosure.
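The patent does not disclose a concrete state model, so the following is only a toy sketch of the predict-then-correct structure described in steps 301-305, using a linear 1D constant-velocity model (a full extended Kalman filter would additionally linearize a nonlinear motion or measurement model at each step). All names and noise values are illustrative:

```python
import numpy as np

# Toy 1D pose state [position, velocity]; the Kalman predict/correct cycle
# mirrors steps 303 (predict from the previous pose) and 305 (correct with
# the relative-pose measurement). The models and noise values are made up.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model: x_t = F @ x_{t-1}
H = np.array([[1.0, 0.0]])              # we observe position only
Q = np.eye(2) * 1e-4                    # process noise covariance
Rn = np.array([[1e-2]])                 # measurement noise covariance

def predict(x, P):
    """Step 303: propagate the state and its covariance one step forward."""
    return F @ x, F @ P @ F.T + Q

def correct(x, P, z):
    """Step 305: correct the prediction with a position measurement z."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x = np.array([0.0, 1.0])
P = np.eye(2)
for z in [0.11, 0.19, 0.32]:             # e.g. lidar-derived position fixes
    x, P = predict(x, P)
    x, P = correct(x, P, np.array([z]))
```

Each correction shrinks the state covariance, which is how fusing the lidar-derived relative pose with the body sensors suppresses the dead-reckoning drift described earlier.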
- the vision sensor data may include data obtained by using a vision sensor, and the vision sensor may include a monocular camera device, a binocular camera device, a depth camera device, and so on.
- in the process of fusing the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at the time t, the visual sensor data of the vehicle body at time t can be obtained, and the visual sensor data can be used to determine the second relative pose information of the vehicle body relative to time (t-1). Then, the first relative pose information and the second relative pose information may be fused with the vehicle body sensor data to determine the pose information of the vehicle body at the time t.
- in the process of determining the second relative pose information, the visual sensor data of the vehicle body at time (t-1) can be acquired. Then, the visual feature information corresponding to the visual sensor data of the vehicle body at time t and time (t-1) can be extracted respectively. Finally, based on the visual feature information of the vehicle body at time t and time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1) can be determined.
- the visual feature information may include feature information of boundary points, boundary lines, and boundary surfaces in the visual sensor data.
- the registration between the visual sensor data at time t and time (t-1) can be realized based on the SURF algorithm, the HOG algorithm, the RANSAC algorithm, etc., and the second relative pose information between the two times can then be calculated.
- the first relative pose information can be fused with the vehicle body sensor data to generate preliminary pose information of the vehicle body at the time t.
- graph optimization processing can be performed on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t, to generate the pose information of the vehicle body at the time t.
- the graph optimization processing of the pose information at time (t-1) and the preliminary pose information at time t can be implemented based on the GraphSLAM framework.
- dimensionality reduction and optimization can be performed on the information matrix, which can reduce or even eliminate the accumulated error in the preliminary pose information.
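The structure of the graph optimization can be illustrated with a deliberately tiny example: in one dimension each pose is a scalar, odometry and loop-closure edges become linear constraints, and the optimization reduces to linear least squares. This is only a sketch of the idea behind GraphSLAM, not the framework itself (which handles full 6-DoF poses and nonlinear constraints):

```python
import numpy as np

# Toy 1D pose graph: four poses, odometry edges between neighbours, and one
# loop-closure edge. Each edge (i, j, d) asserts x_j - x_i = d; stacking the
# edges gives an overdetermined linear system solved in the least-squares sense.
edges = [(0, 1, 1.0), (1, 2, 1.1), (2, 3, 0.9), (0, 3, 3.0)]
n = 4
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for k, (i, j, d) in enumerate(edges):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, d
A[-1, 0], b[-1] = 1.0, 0.0               # anchor the first pose at the origin
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

When the odometry chain disagrees with a loop closure, the same solve spreads the discrepancy over all poses instead of letting it accumulate at the end, which is the error-reduction effect the text attributes to the graph optimization.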
- the method for determining the pose of the vehicle body can fuse the three-dimensional laser point cloud data of the vehicle body with the vehicle body sensor data to determine the vehicle body pose information. Since the three-dimensional laser point cloud data contains rich environmental information around the vehicle body, and the vehicle body sensor data contains the vehicle body's own feature information, fusing the environmental information with the vehicle body feature information can greatly reduce the cumulative error and yield more accurate vehicle body pose information. With more accurate vehicle body pose information, a more accurate and reliable high-precision map for an unmanned driving environment can then be produced based on that pose information.
- Another aspect of the present disclosure also provides a mapping method, which can use the method for determining the pose of the vehicle body described in any of the above embodiments to determine the pose information of the vehicle body at multiple moments, and generate a point cloud map based on the three-dimensional laser point cloud data and pose information of the vehicle body at those multiple moments.
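Given the per-moment poses and scans, the map-drawing step amounts to transforming each scan from the vehicle frame into the world frame with its pose and concatenating the results. A minimal sketch, with hypothetical names and 4x4 world-from-vehicle pose matrices assumed:

```python
import numpy as np

def build_map(scans, poses):
    """Fuse per-scan point clouds into one world-frame cloud.
    scans: list of (N_i, 3) arrays in the vehicle frame at each time.
    poses: list of 4x4 world-from-vehicle transforms at the same times."""
    world_points = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        world_points.append((homo @ T.T)[:, :3])          # apply pose, drop w
    return np.vstack(world_points)

# Two toy scans taken one metre apart along x:
T0 = np.eye(4)
T1 = np.eye(4)
T1[0, 3] = 1.0
scan = np.array([[0.0, 2.0, 0.0]])        # one point seen from both poses
cloud = build_map([scan, scan], [T0, T1])
```

This is why the accuracy of the per-moment poses directly bounds the map's accuracy: any pose error is applied verbatim to every point in that scan.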
- FIG. 4 is a block diagram of a device 400 for determining the pose of a vehicle body according to an exemplary embodiment. Referring to FIG. 4, the device includes a lidar 401, a vehicle body sensor 403, and a processor 405, where:
- the lidar 401 is used to obtain the three-dimensional laser point cloud data of the vehicle body at time t;
- the vehicle body sensor 403 is used to obtain the vehicle body sensor data of the vehicle body at time t;
- the processor 405 is configured to use the three-dimensional laser point cloud data to determine the first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensor data to determine the pose information of the vehicle body at the time t.
- the lidar is also used to obtain the three-dimensional laser point cloud data of the vehicle body at time (t-1);
- the processor is also used for:
- the device further includes:
- the vision sensor is used to obtain the vision sensor data of the vehicle body at time t and (t-1);
- the processor is also used for:
- the first relative pose information and the second relative pose information are fused with the sensor data of the vehicle body to determine the pose information of the vehicle body at the time t.
- the processor is further configured to:
- the processor is further configured to:
- the processor is further configured to:
- Graph optimization processing is performed on the pose information of the vehicle body at the time (t-1) and the preliminary pose information at the time t to generate the pose information of the vehicle body at the time t.
- the vehicle body sensor includes at least one of the following: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, and a gyroscope.
- Fig. 5 is a block diagram showing a device 700 according to an exemplary embodiment.
- the apparatus 700 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
- the device 700 may include one or more of the following components: a processing component 702, a memory 704, a power supply component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, And the communication component 716.
- the processing component 702 generally controls the overall operations of the device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 702 may include one or more processors 720 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 702 may include one or more modules to facilitate the interaction between the processing component 702 and other components.
- the processing component 702 may include a multimedia module to facilitate the interaction between the multimedia component 708 and the processing component 702.
- the memory 704 is configured to store various types of data to support the operation of the device 700. Examples of such data include instructions for any application or method operating on the device 700, contact data, phone book data, messages, pictures, videos, etc.
- the memory 704 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 706 provides power to various components of the device 700.
- the power supply component 706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 700.
- the multimedia component 708 includes a screen that provides an output interface between the device 700 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch-sensitive display to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 708 includes a front camera and/or a rear camera. When the device 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 710 is configured to output and/or input audio signals.
- the audio component 710 includes a microphone (MIC).
- when the device 700 is in an operation mode, such as a call mode, a recording mode, or a voice recognition mode, the microphone is configured to receive external audio signals.
- the received audio signal can be further stored in the memory 704 or sent via the communication component 716.
- the audio component 710 further includes a speaker for outputting audio signals.
- the I/O interface 712 provides an interface between the processing component 702 and a peripheral interface module.
- the above-mentioned peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 714 includes one or more sensors for providing the device 700 with various aspects of status assessment.
- the sensor component 714 can detect the on/off status of the device 700 and the relative positioning of components, such as the display and keypad of the device 700.
- the sensor component 714 can also detect the position change of the device 700 or a component of the device 700, the presence or absence of contact between the user and the device 700, the orientation or acceleration/deceleration of the device 700, and the temperature change of the device 700.
- the sensor component 714 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 714 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 716 is configured to facilitate wired or wireless communication between the apparatus 700 and other devices.
- the device 700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
- the communication component 716 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
- the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the apparatus 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to implement the above methods.
- in an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 704 including instructions, which may be executed by the processor 720 of the device 700 to complete the foregoing method.
- the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
- Fig. 6 is a block diagram showing a device 800 for information processing according to an exemplary embodiment.
- the device 800 may be provided as a server.
- the apparatus 800 includes a processing component 822, which further includes one or more processors, and a memory resource represented by a memory 832, for storing instructions that can be executed by the processing component 822, such as application programs.
- the application program stored in the memory 832 may include one or more modules each corresponding to a set of instructions.
- the processing component 822 is configured to execute instructions to execute the method described in any of the foregoing embodiments.
- the device 800 may also include a power component 826 configured to perform power management of the device 800, a wired or wireless network interface 850 configured to connect the device 800 to a network, and an input/output (I/O) interface 858.
- the device 800 can operate based on an operating system stored in the memory 832, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- in an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 832 including instructions, which may be executed by the processing component 822 of the device 800 to complete the foregoing method.
- the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
- Navigation (AREA)
Abstract
Description
Claims (17)
- A method for determining a pose of a vehicle body, comprising: acquiring three-dimensional laser point cloud data and vehicle body sensing data of a vehicle body at time t; determining, by using the three-dimensional laser point cloud data, first relative pose information of the vehicle body relative to time (t-1); and fusing the first relative pose information with the vehicle body sensing data to determine pose information of the vehicle body at time t.
- The method for determining a pose of a vehicle body according to claim 1, wherein determining, by using the three-dimensional laser point cloud data, the first relative pose information of the vehicle body relative to time (t-1) comprises: acquiring three-dimensional laser point cloud data of the vehicle body at time (t-1); extracting point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively; and determining, based on the point cloud feature information of the vehicle body at time t and at time (t-1), the first relative pose information of the vehicle body at time t relative to time (t-1).
- The method for determining a pose of a vehicle body according to claim 1, wherein fusing the first relative pose information with the vehicle body sensing data to determine the pose information of the vehicle body at time t comprises: acquiring visual sensing data of the vehicle body at time t and at time (t-1); determining, by using the visual sensing data, second relative pose information of the vehicle body relative to time (t-1); and fusing the first relative pose information, the second relative pose information and the vehicle body sensing data to determine the pose information of the vehicle body at time t.
- The method for determining a pose of a vehicle body according to claim 3, wherein determining, by using the visual sensing data, the second relative pose information of the vehicle body relative to time (t-1) comprises: extracting visual feature information corresponding to the visual sensing data of the vehicle body at time t and at time (t-1), respectively; and determining, based on the visual feature information of the vehicle body at time t and at time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1).
- The method for determining a pose of a vehicle body according to claim 1, wherein fusing the first relative pose information with the vehicle body sensing data to determine the pose information of the vehicle body at time t comprises: acquiring pose information of the vehicle body at time (t-1); obtaining predicted pose information of the vehicle body at time t by prediction from the pose information of the vehicle body at time (t-1); and correcting the predicted pose information by using the first relative pose information and the vehicle body sensing data, and taking the corrected predicted pose information as the pose information of the vehicle body at time t.
- The method for determining a pose of a vehicle body according to claim 1, wherein fusing the first relative pose information with the vehicle body sensing data to determine the pose information of the vehicle body at time t comprises: acquiring pose information of the vehicle body at time (t-1); fusing the first relative pose information with the vehicle body sensing data to generate preliminary pose information of the vehicle body at time t; and performing graph optimization processing on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
- The method for determining a pose of a vehicle body according to any one of claims 1-6, wherein the vehicle body sensing data comprises at least one of: inertial measurement unit (IMU) data, odometer data, electronic compass data, inclination sensor data, or gyroscope data.
- A mapping method, comprising: determining pose information of a vehicle body at a plurality of times by using the method according to any one of claims 1-7; and drawing a point cloud map based on the three-dimensional laser point cloud data and the pose information of the vehicle body at the plurality of times.
- An apparatus for determining a pose of a vehicle body, comprising: a lidar configured to acquire three-dimensional laser point cloud data of a vehicle body at time t; a vehicle body sensor configured to acquire vehicle body sensing data of the vehicle body at time t; and a processor configured to determine, by using the three-dimensional laser point cloud data, first relative pose information of the vehicle body relative to time (t-1), and to fuse the first relative pose information with the vehicle body sensing data to determine pose information of the vehicle body at time t.
- The apparatus for determining a pose of a vehicle body according to claim 9, wherein the lidar is further configured to acquire three-dimensional laser point cloud data of the vehicle body at time (t-1); and correspondingly, the processor is further configured to: extract point cloud feature information corresponding to the three-dimensional laser point cloud data of the vehicle body at time t and at time (t-1), respectively; and determine, based on the point cloud feature information of the vehicle body at time t and at time (t-1), the first relative pose information of the vehicle body at time t relative to time (t-1).
- The apparatus for determining a pose of a vehicle body according to claim 9, further comprising: a visual sensor configured to acquire visual sensing data of the vehicle body at time t and at time (t-1); and correspondingly, the processor is further configured to: determine, by using the visual sensing data, second relative pose information of the vehicle body relative to time (t-1); and fuse the first relative pose information, the second relative pose information and the vehicle body sensing data to determine the pose information of the vehicle body at time t.
- The apparatus for determining a pose of a vehicle body according to claim 11, wherein the processor is further configured to: extract visual feature information corresponding to the visual sensing data of the vehicle body at time t and at time (t-1), respectively; and determine, based on the visual feature information of the vehicle body at time t and at time (t-1), the second relative pose information of the vehicle body at time t relative to time (t-1).
- The apparatus for determining a pose of a vehicle body according to claim 9, wherein the processor is further configured to: acquire pose information of the vehicle body at time (t-1); obtain predicted pose information of the vehicle body at time t by prediction from the pose information of the vehicle body at time (t-1); and correct the predicted pose information by using the first relative pose information and the vehicle body sensing data, and take the corrected predicted pose information as the pose information of the vehicle body at time t.
- The apparatus for determining a pose of a vehicle body according to claim 9, wherein the processor is further configured to: acquire pose information of the vehicle body at time (t-1); fuse the first relative pose information with the vehicle body sensing data to generate preliminary pose information of the vehicle body at time t; and perform graph optimization processing on the pose information of the vehicle body at time (t-1) and the preliminary pose information at time t to generate the pose information of the vehicle body at time t.
- The apparatus for determining a pose of a vehicle body according to any one of claims 9-14, wherein the vehicle body sensor comprises at least one of: an inertial measurement unit (IMU), an odometer, an electronic compass, an inclination sensor, or a gyroscope.
- An apparatus for determining a pose of a vehicle body, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the method according to any one of claims 1-7 or claim 8.
- A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor, the processor is enabled to perform the method according to any one of claims 1-7 or claim 8.
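The predict-then-correct fusion described in claims 5 and 13 can be illustrated with a minimal sketch. This is not the claimed implementation: the 2D pose layout (x, y, yaw), the function names, and the fixed confidence weight below are illustrative assumptions only.

```python
import math

def predict_pose(prev_pose, odom):
    """Predict the pose at time t from the pose at (t-1) plus a body-sensor
    odometry increment (the prediction step of claim 5).
    prev_pose is (x, y, yaw); odom is (dx, dy, dyaw) in the body frame."""
    x, y, yaw = prev_pose
    dx, dy, dyaw = odom
    # Rotate the body-frame displacement into the world frame and accumulate.
    return (x + dx * math.cos(yaw) - dy * math.sin(yaw),
            y + dx * math.sin(yaw) + dy * math.cos(yaw),
            yaw + dyaw)

def correct_pose(predicted, lidar_pose, weight=0.7):
    """Correct the predicted pose with the pose implied by the first relative
    pose information from lidar scan matching (the correction step of claim 5).
    `weight` is an assumed, illustrative confidence in the lidar estimate;
    angles are averaged directly, which is adequate only for small yaw values."""
    return tuple(weight * l + (1.0 - weight) * p
                 for p, l in zip(predicted, lidar_pose))

# Usage: pose at (t-1), a body-frame odometry increment, and a
# lidar-matched pose estimate at time t (all values hypothetical).
prev = (0.0, 0.0, 0.0)
pred = predict_pose(prev, (1.0, 0.0, 0.0))      # predicted pose at time t
pose_t = correct_pose(pred, (1.1, 0.02, 0.01))  # corrected pose at time t
```

In the patent, the correction fuses multiple sources (first relative pose information plus vehicle body sensing data, and optionally second relative pose information from a visual sensor); a fixed scalar weight simply stands in for that fusion here.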
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910126956.9 | 2019-02-20 | ||
CN201910126956.9A CN109870157B (zh) | 2019-02-20 | 2019-02-20 | Method and device for determining vehicle body pose, and mapping method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020168787A1 (zh) | 2020-08-27 |
Family
ID=66918971
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/123711 WO2020168787A1 (zh) | Method and device for determining vehicle body pose, and mapping method | 2019-02-20 | 2019-12-06 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109870157B (zh) |
WO (1) | WO2020168787A1 (zh) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112781594A (zh) * | 2021-01-11 | 2021-05-11 | 桂林电子科技大学 | Improved lidar iterative closest point algorithm based on IMU coupling |
CN112902951A (zh) * | 2021-01-21 | 2021-06-04 | 深圳市镭神智能系统有限公司 | Positioning method, apparatus and device for a traveling device, and storage medium |
CN112948411A (zh) * | 2021-04-15 | 2021-06-11 | 深圳市慧鲤科技有限公司 | Pose data processing method and interface, apparatus, system, device and medium |
WO2023097873A1 (zh) * | 2021-11-30 | 2023-06-08 | 上海仙途智能科技有限公司 | Method and apparatus for vehicle positioning check, storage medium and device |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109870157B (zh) * | 2019-02-20 | 2021-11-02 | 苏州风图智能科技有限公司 | Method and device for determining vehicle body pose, and mapping method |
CN111443359B (zh) * | 2020-03-26 | 2022-06-07 | 达闼机器人股份有限公司 | Positioning method, apparatus and device |
CN116106927A (zh) * | 2020-03-27 | 2023-05-12 | 深圳市镭神智能系统有限公司 | Lidar-based two-dimensional grid map construction method, medium and system |
CN113494911B (zh) * | 2020-04-02 | 2024-06-07 | 宝马股份公司 | Method and system for positioning a vehicle |
CN112781586B (zh) * | 2020-12-29 | 2022-11-04 | 上海商汤临港智能科技有限公司 | Pose data determination method and apparatus, electronic device, and vehicle |
CN113075687A (zh) * | 2021-03-19 | 2021-07-06 | 长沙理工大学 | Positioning method for a cable trench intelligent inspection robot based on multi-sensor fusion |
CN113218389B (zh) * | 2021-05-24 | 2024-05-17 | 北京航迹科技有限公司 | Vehicle positioning method, apparatus, storage medium and computer program product |
CN114526745B (zh) * | 2022-02-18 | 2024-04-12 | 太原市威格传世汽车科技有限责任公司 | Mapping method and system with tightly coupled lidar and inertial odometry |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160063717A1 (en) * | 2014-08-26 | 2016-03-03 | Kabushiki Kaisha Topcon | Point cloud position data processing device, point cloud position data processing system, point cloud position data processing method, and program therefor |
CN105607071A (zh) * | 2015-12-24 | 2016-05-25 | 百度在线网络技术(北京)有限公司 | Indoor positioning method and apparatus |
CN108036793A (zh) * | 2017-12-11 | 2018-05-15 | 北京奇虎科技有限公司 | Point cloud-based positioning method and apparatus, and electronic device |
CN108225345A (zh) * | 2016-12-22 | 2018-06-29 | 乐视汽车(北京)有限公司 | Pose determination method for a movable device, and environment modeling method and apparatus |
CN109214248A (zh) * | 2017-07-04 | 2019-01-15 | 百度在线网络技术(北京)有限公司 | Method and apparatus for recognizing laser point cloud data of an unmanned vehicle |
CN109870157A (zh) * | 2019-02-20 | 2019-06-11 | 苏州风图智能科技有限公司 | Method and device for determining vehicle body pose, and mapping method |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104374376B (zh) * | 2014-11-05 | 2016-06-15 | 北京大学 | Vehicle-mounted three-dimensional measurement system device and application thereof |
CN106406338B (zh) * | 2016-04-14 | 2023-08-18 | 中山大学 | Autonomous navigation apparatus and method for an omnidirectional mobile robot based on a laser rangefinder |
CN106123890A (zh) * | 2016-06-14 | 2016-11-16 | 中国科学院合肥物质科学研究院 | Robot positioning method based on multi-sensor data fusion |
CN106969763B (zh) * | 2017-04-07 | 2021-01-01 | 百度在线网络技术(北京)有限公司 | Method and apparatus for determining the yaw angle of an unmanned vehicle |
CN108732603B (zh) * | 2017-04-17 | 2020-07-10 | 百度在线网络技术(北京)有限公司 | Method and apparatus for positioning a vehicle |
CN108732584B (zh) * | 2017-04-17 | 2020-06-30 | 百度在线网络技术(北京)有限公司 | Method and apparatus for updating a map |
CN107340522B (zh) * | 2017-07-10 | 2020-04-17 | 浙江国自机器人技术有限公司 | Lidar positioning method, apparatus and system |
CN108253958B (zh) * | 2018-01-18 | 2020-08-11 | 亿嘉和科技股份有限公司 | Real-time robot positioning method in sparse environments |
CN108759815B (zh) * | 2018-04-28 | 2022-11-15 | 温州大学激光与光电智能制造研究院 | Information fusion integrated navigation method for use in a global visual positioning method |
- 2019-02-20 CN CN201910126956.9A patent/CN109870157B/zh active Active
- 2019-12-06 WO PCT/CN2019/123711 patent/WO2020168787A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109870157B (zh) | 2021-11-02 |
CN109870157A (zh) | 2019-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020168787A1 (zh) | Method and device for determining vehicle body pose, and mapping method | |
WO2021128777A1 (en) | Method, apparatus, device, and storage medium for detecting travelable region | |
US20200357138A1 (en) | Vehicle-Mounted Camera Self-Calibration Method and Apparatus, and Storage Medium | |
CN110967011B (zh) | Positioning method, apparatus and device, and storage medium | |
US10043314B2 (en) | Display control method and information processing apparatus | |
US8972174B2 (en) | Method for providing navigation information, machine-readable storage medium, mobile terminal, and server | |
CN109725329B (zh) | 一种无人车定位方法及装置 | |
JP2018535402A (ja) | System and method for fusing outputs of sensors having different resolutions | |
CN110986930B (zh) | Device positioning method and apparatus, electronic device, and storage medium | |
US20160203629A1 (en) | Information display apparatus, and method for displaying information | |
EP3825960A1 (en) | Method and device for obtaining localization information | |
US20200265725A1 (en) | Method and Apparatus for Planning Navigation Region of Unmanned Aerial Vehicle, and Remote Control | |
KR102569214B1 (ko) | Mobile terminal and control method thereof | |
CN110865405A (zh) | Fusion positioning method and apparatus, mobile device control method, and electronic device | |
WO2023077754A1 (zh) | Target tracking method and apparatus, and storage medium | |
CN110633336B (zh) | Method and apparatus for determining a laser data search range, and storage medium | |
CN114608591B (zh) | Vehicle positioning method and apparatus, storage medium, electronic device, vehicle, and chip | |
JP2015049039A (ja) | Navigation device and navigation program | |
CN116359942A (zh) | Point cloud data acquisition method, device, storage medium, and program product | |
WO2019233299A1 (zh) | Map construction method and apparatus, and computer-readable storage medium | |
CN114623836A (zh) | Vehicle pose determination method and apparatus, and vehicle | |
WO2024087456A1 (zh) | Determining orientation information, and autonomous driving vehicle | |
CN111369566B (zh) | Method, apparatus and device for determining a road surface vanishing point position, and storage medium | |
CN116540252B (zh) | Lidar-based speed determination method, apparatus, device and storage medium | |
CN113532468B (zh) | Navigation method and related device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19916185 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19916185 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17/03/2022) |
|