WO2021056438A1 - Point cloud data processing method and device, lidar, and movable platform - Google Patents

Point cloud data processing method and device, lidar, and movable platform Download PDF

Info

Publication number
WO2021056438A1
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
frame
point cloud
cloud data
historical
Prior art date
Application number
PCT/CN2019/108627
Other languages
English (en)
French (fr)
Inventor
蒋卓键
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201980033234.7A (granted as CN112154356B)
Priority to PCT/CN2019/108627
Publication of WO2021056438A1

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C 21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/50 Systems of measurement based on relative movement of target
    • G01S 17/58 Velocity or trajectory determination systems; Sense-of-movement determination systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/42 Determining position
    • G01S 19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S 19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/52 Determining velocity

Definitions

  • The present disclosure relates to the technical field of point cloud data processing, and in particular to a point cloud data processing method and device, a lidar, and a movable platform.
  • In the field of autonomous driving, the most commonly used sensor is lidar, which is generally used for dynamic obstacle detection and for establishing environment maps.
  • The main problem with current lidar is that the generated point cloud data is not dense enough, which is especially obvious for objects far away from the lidar. This problem degrades dynamic obstacle detection and also makes the environment map established by lidar insufficiently accurate.
  • The present disclosure provides a point cloud data processing method, including: acquiring a current frame and determining an obstacle in the current frame; acquiring a historical frame before the current frame and determining the position and speed of the obstacle in the historical frame; and accumulating the point cloud data of the obstacle in the historical frame to the current frame according to that position and speed.
  • The present disclosure also provides a point cloud data processing device, including:
  • a memory, used to store executable instructions;
  • a processor, configured to execute the executable instructions stored in the memory to perform the operations of the above method.
  • The present disclosure also provides a lidar, including:
  • a transmitter, used to emit laser beams;
  • a receiver, used to receive the laser beams reflected back;
  • the above point cloud data processing device, which processes the laser beams received by the receiver to generate point cloud data.
  • The present disclosure also provides a movable platform, including:
  • a body;
  • a power system, provided in the body and used to provide power for the movable platform;
  • the above lidar, provided in the body and used to perceive environmental information of the movable platform.
  • The present disclosure also provides a computer-readable storage medium that stores executable instructions which, when executed by one or more processors, cause the one or more processors to execute the aforementioned point cloud data processing method.
  • Through the point cloud data processing method of the embodiments, the point cloud data of the historical frames is accumulated to the current frame to achieve point cloud data enhancement: the point cloud becomes dense, which overcomes the sparseness defect of point cloud data and facilitates generating a high-accuracy environment map.
  • Moreover, obstacle movement is taken into account during accumulation: the historical point cloud data is accumulated to the current frame according to the obstacle's speed, and this velocity compensation suppresses or even eliminates point cloud tailing.
  • FIG. 1 is a flowchart of a point cloud data processing method according to an embodiment of the disclosure.
  • Figure 2 is a schematic diagram of the environment map.
  • Figure 3 shows the point cloud data of obstacles in frame t.
  • Figure 4 shows the point cloud data of the framed obstacles in frame t.
  • Figure 5 shows the point cloud data of the framed obstacle in frame t-1.
  • Figure 6 shows the point cloud data of the framed obstacle in frame t-2.
  • Figure 7 shows the relationship between the position of the obstacle in frame t-1 and its predicted position in frame t.
  • Figure 8 shows the relationship between the position of the obstacle in frame t-2 and its predicted position in frame t.
  • Fig. 9 shows the point cloud data of the obstacles in frame t after processing by the point cloud data processing method of the embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a point cloud data processing device according to an embodiment of the disclosure.
  • FIG. 11 is a schematic diagram of a lidar according to an embodiment of the disclosure.
  • FIG. 12 is a schematic diagram of a movable platform according to an embodiment of the disclosure.
  • An embodiment of the present disclosure provides a point cloud data processing method. As shown in FIG. 1, the point cloud data processing method includes the following steps:
  • Step S101 Obtain a current frame, and determine an obstacle in the current frame
  • Step S102 Obtain a historical frame before the current frame, and determine the position and speed of the obstacle in the historical frame;
  • Step S103 accumulate the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.
  • the point cloud data processing method of this embodiment is executed by a point cloud data processing device.
  • the point cloud data processing device is used as a component of the sensor, and the sensor is generally installed on a movable platform.
  • the movable platforms in this article include: vehicles, unmanned aerial vehicles, manned aerial vehicles, ships and other movable carriers.
  • the unmanned aerial vehicle herein may be a rotorcraft, such as a multi-rotor aircraft propelled through the air by multiple propellers.
  • the vehicle herein can be various motor vehicles and non-motor vehicles. Motor vehicles can be unmanned vehicles or manned vehicles.
  • the mobile platform in this document can carry one or more sensors for collecting environmental data.
  • the data acquired by the one or more sensors can be combined to generate an environmental map representing the surrounding environment.
  • the environment map in this article can be a two-dimensional map or a three-dimensional map.
  • the environment can be urban, suburban or rural or any other environment.
  • the environment map may include information about the location of objects in the environment.
  • the objects in the environment are, for example, one or more obstacles. Obstacles can include any objects or entities that can hinder the movement of the movable platform.
  • Some obstacles may be located on the ground, such as the buildings in Figure 2, motor vehicles (for example, the cars and trucks on the road in Figure 2), humans, animals, plants (for example, the trees in Figure 2), and other man-made or natural structures.
  • Some obstacles may be completely in the air, including aircraft (for example, airplanes, helicopters, hot air balloons, other UAVs) or birds.
  • the mobile platform can use the generated environment map to perform various operations, some of which can be semi-automated or fully automated.
  • the environment map can be used to automatically determine a flight path for the unmanned aerial vehicle to navigate from its current location to the target location.
  • the environment map can be used to automatically determine a driving path for the vehicle to travel from its current location to the target location.
  • the environment map can be used to determine the spatial arrangement of one or more obstacles, and thereby enable the movable platform to perform obstacle avoidance maneuvers.
  • the sensors used to collect environmental data herein can improve the accuracy and precision of environment map construction, even in diverse environments and operating conditions, thereby enhancing the robustness and flexibility of functions such as navigation and obstacle avoidance.
  • the point cloud data processing device as a component of the sensor, can be used alone or in combination with other sensors on a movable platform to generate an environment map.
  • the sensor can be a lidar
  • the point cloud data processing device is a data processing component of the lidar.
  • Other sensors of the movable platform can be GPS sensors, inertial sensors, vision sensors, ultrasonic sensors, and so on. The fusion of lidar and other sensors can be used to compensate for limitations or errors associated with a single sensor type, thereby improving the accuracy and reliability of environmental maps.
  • the lidar can continuously detect the surrounding environment during the movement of the movable platform.
  • the lidar emits a laser beam to the surrounding environment, the laser beam is reflected by objects in the environment, and the reflected signal is received by the lidar to obtain a data frame.
  • the lidar images the surrounding environment at each time to obtain the data frame at each time.
  • the data frame at each moment is composed of point cloud data.
  • Point cloud data refers to a data collection that reflects the surface shape of objects in the environment.
  • step S101 the lidar emits a laser beam to the surrounding environment at the current moment, the laser beam is reflected by objects in the environment, and the reflected signal is received by the lidar to obtain a data frame at the current moment, hereinafter referred to as the current frame.
  • the obstacles in the current frame are identified. Some obstacles in the environment are static and some obstacles are moving. Moving obstacles are called dynamic obstacles. Since the purpose of this embodiment is to accumulate point cloud data of dynamic obstacles, the obstacle recognition in this step refers to the recognition of dynamic obstacles. In the subsequent steps of this embodiment, dynamic obstacles are also processed. Unless otherwise specified, the obstacles below refer to dynamic obstacles.
  • the method may include the following steps: effective point cloud screening, point cloud clustering, and obstacle framing. These steps are respectively introduced below.
  • the point cloud data of the current frame is filtered first.
  • a density-based spatial clustering algorithm (DBSCAN, Density-Based Spatial Clustering of Application with Noise) can be used to perform point cloud clustering on the current frame.
  • the DBSCAN algorithm has a fast calculation speed, can effectively process noise points, and find spatial clusters of arbitrary shapes, and can separate different obstacles that are easily mistaken for the same obstacle, with high clustering accuracy.
  • the point cloud data belonging to cars and trucks can be separated.
  • Obstacles can be initially identified through point cloud clustering. Obstacles are generally represented in the form of a three-dimensional cube in the data frame of the point cloud data processing device, and this three-dimensional cube is called a "frame". By adding a frame to the obstacle, the outline of the obstacle can be circled with a frame for subsequent obstacle tracking.
  • the characteristics of obstacles can be extracted.
  • these features may include: tracking point location, obstacle movement direction, length and width, and so on.
  • the frame of the obstacle is extracted according to the characteristics of the obstacle.
  • Various methods can be used to extract the obstacle frame.
  • the minimum convex hull method combined with the fuzzy line segment method is used to extract the obstacle frame.
  • the frames of cars and trucks can be extracted.
  • the obstacles in the current frame can be identified through the above steps. Depending on the actual environment, there may be one or more obstacles in the current frame.
  • step S102 obtains the historical frame before the current frame, and determines the position and speed of the obstacle in the historical frame.
  • the historical frame refers to a data frame obtained at various historical moments before the current frame.
  • the lidar emits a laser beam to the surrounding environment at each historical moment, the laser beam is reflected by objects in the environment, and the reflected signal is received by the lidar to obtain the data frame at the historical moment.
  • the current frame time is t
  • the historical frame time refers to the time t-n, ..., t-2, t-1.
  • the history frame includes data frames at time t-n, ..., t-2, and t-1.
  • Since the lidar continuously images the surrounding environment, the same obstacle will appear in the data frames of multiple moments; that is, some or all of the obstacles determined in the current frame in step S101 also appear in the historical frames.
  • the obstacle is first identified in the historical frame, that is, the obstacle in the current frame is found in the historical frame.
  • the obstacle tracking method may be used to identify the obstacle in the current frame in the historical frame.
  • the feature points of the current frame and the feature points of the historical frame are acquired respectively, and an optical flow algorithm determines, from these feature points, whether an obstacle of the current frame is an obstacle of the historical frame.
  • For example, for the current frame at time t (hereinafter, frame t), its previous frame, i.e., the data frame at time t-1 (hereinafter, frame t-1), is acquired. The feature points of frame t and of frame t-1 are then obtained, the optical flow algorithm processes the two sets of feature points, and the obstacles of frame t are identified in frame t-1, as shown in FIG. 5.
  • Next, the data frame at time t-2 (hereinafter, frame t-2) is acquired, its feature points are obtained, and the optical flow algorithm processes the feature points of frame t-1 and frame t-2, so that the obstacles of frame t-1 are identified in frame t-2, as shown in FIG. 6.
  • By analogy, performing these steps on every pair of adjacent frames from the data at time t-3 to the data frame at time t-n associates the obstacle back to frame t-n, achieving obstacle tracking.
  • Other methods may also be used to implement obstacle tracking, including: the multi-target hypothesis tracking method, the nearest neighbor method, the joint probability data association method, and so on.
  • If an obstacle in the previous frame and an obstacle in the following frame are judged to be the same obstacle, the same identifier can be attached to the obstacle in both frames.
  • an artificial neural network algorithm can be used to extract the feature information of the obstacle.
  • Those skilled in the art will understand that the above applies to all obstacles determined in the current frame. That is, when one obstacle is determined in the current frame in step S101 (for example, when only the car or only the truck is present in Figure 4), the above steps associate that obstacle with the historical frames, so that it is identified in each historical frame.
  • When multiple obstacles are determined in the current frame in step S101 (for example, the car and the truck appear at the same time in Figure 4), the above operations are performed for each obstacle, so that every obstacle is associated with the historical frames and identified in each of them.
  • the position of the obstacle in the historical frame can be extracted from the point cloud data of the obstacle in the historical frame.
  • the point cloud data of obstacles includes location information and attribute information.
  • the attribute information generally refers to the intensity information of the obstacle echo signal.
  • the position information refers to the position coordinates of the obstacle, and the position coordinates may be the three-axis coordinates X/Y/Z in the three-dimensional coordinate system with the laser radar as the origin. Therefore, by extracting the position coordinates of the point cloud data of the obstacle, the position of the obstacle in the historical frame can be obtained.
  • For an obstacle in frame t-1, the three-axis coordinates in the point cloud data of the obstacle in frame t-1 are used as its position in frame t-1; for an obstacle in frame t-2, the three-axis coordinates in the point cloud data of the obstacle in frame t-2 are used as its position in frame t-2; and so on, so that for an obstacle in frame t-n, the three-axis coordinates in the point cloud data of the obstacle in frame t-n are used as its position in frame t-n.
  • To accumulate an obstacle's point cloud data, the speed of the obstacle in the historical frames must be known. There are many ways to determine this speed, including at least: determining the speed of the obstacle in the historical frame according to the measurement value of a preset sensor. As an example, this embodiment uses a Kalman filter to estimate the speed of the obstacle in the historical frames: an iterative operation is performed according to a state equation and a measurement equation to determine the speed, where the measurement equation includes the speed measurement value of the preset sensor.
  • The state equation of the Kalman filter is v_w(t) = A * v_w(t-1) + w(t), and the measurement equation is v_z(t) = z(t) + y(t), where v_w(t) is the velocity predicted by the state equation in frame t, A is the coefficient of the state equation, v_w(t-1) is the velocity predicted by the state equation in frame t-1, w(t) is the state noise of frame t, v_z(t) is the velocity predicted by the measurement equation in frame t, z(t) is the velocity measurement of the point cloud data processing device in frame t, and y(t) is the prediction noise of frame t.
  • The speed of the obstacle is then computed as v(t) = A * v_w(t-1) + w(t) * (v_z(t) - z(t-1)), where v(t) is the speed of the obstacle in frame t and z(t-1) is the velocity measurement of the point cloud data processing device in frame t-1.
  • An initial value can be set for the speed prediction value of the state equation, and the initial value can be an empirical value, such as 80km/h.
  • the sampling of the data frame of the point cloud data processing device and the sampling of the preset sensor can be performed synchronously or asynchronously.
  • the speed measurement value z(t) of the point cloud data processing device in the t frame is obtained at the same sampling time t in the t frame.
  • the speed measurement value z(t) of the point cloud data processing device in the t frame may be obtained before or after the sampling time t of the t frame.
  • the speeds of the cars and trucks in each historical frame shown in FIG. 3 can be estimated, so as to obtain the speeds of the cars and trucks in each historical frame.
  • the lidar is installed on a movable platform, so the speed measurement value of the point cloud data processing device is actually the speed measurement value of the movable platform.
  • the speed measurement is usually provided by at least one preset sensor of a lidar or a movable platform.
  • These preset sensors include at least: an inertial measurement unit, a wheel speedometer, and a satellite positioning unit.
  • the measured value of one of the inertial measurement unit, the wheel speedometer, and the satellite positioning unit may be used to obtain the speed measurement value. It is also possible to perform data fusion on the measurement values of two or more sensors including the inertial measurement unit, the wheel speedometer, and the satellite positioning unit to obtain the speed measurement value.
  • the speed measurement value obtained by data fusion has higher accuracy, which is beneficial to further improve the accuracy of point cloud data accumulation and further improve the quality of point cloud.
  • After the position and speed of the obstacle in the historical frames are obtained, step S103 can accumulate the point cloud data of the obstacle in the historical frames to the current frame, thereby making the point cloud data of the current frame denser and improving its quality.
  • the moving distance of the obstacle from the historical frame to the current frame is determined.
  • the moving distance can be determined by the following steps:
  • the moving distance is obtained according to the speed of the historical frame and the time difference.
  • The length of time between adjacent frames depends on the frame rate of the lidar: the higher the frame rate, the shorter this time, and the lower the frame rate, the longer it is. For example, if the lidar frame rate is 20 fps, i.e., 20 frames per second, the length of time between two adjacent frames is 0.05 seconds.
  • The time difference between frame t-1 and frame t is then 0.05 seconds, and the time difference between frame t-2 and frame t is 0.1 seconds. Similarly, the time difference between frame t-n and frame t is 0.05*n seconds.
  • the moving distance is obtained according to the speed of the historical frame and the time difference. Specifically, for each obstacle, multiply the speed of its historical frame by the time difference between each historical frame and the current frame to obtain the moving distance of the obstacle from each historical frame to the current frame.
  • the predicted position of the obstacle in the historical frame in the current frame is determined.
  • For each obstacle, its position in each historical frame is moved by the moving distance from that historical frame to the current frame, which yields the predicted position, in the current frame, of the obstacle of each historical frame.
  • update the point cloud data of the obstacle in the historical frame refers to replacing the position coordinates of the point cloud data of the obstacle in the historical frame with the position coordinates of the predicted position.
  • the point cloud data of the obstacles in the historical frame can be accumulated in the current frame.
  • For each obstacle, the point cloud data in each historical frame is updated; that is, the three-dimensional coordinates of the point cloud data in the historical frame are replaced with the three-dimensional coordinates of that frame's predicted position in the current frame, and the updated point cloud data is also used as point cloud data of the current frame.
  • The number n of historical frames can be determined according to actual requirements. Generally speaking, the larger n is, the more historical frames are accumulated and the denser the current frame becomes. But accumulation can also introduce noise: some of the accumulated point cloud data may be noise points lying beyond the detection range of the lidar, which is not good for improving point cloud quality. Therefore, in this embodiment, a noise removal operation may be performed after step S103.
  • the preset position range may be a range of one, two or three dimensions in a three-dimensional coordinate system where the point cloud data processing device is the origin. For example, a position range of the X axis in a three-dimensional coordinate system, or a position range formed by the X axis and the Y axis, or a position range formed by the X axis, the Y axis, and the Z axis.
  • A predicted position outside the preset position range can be taken as a noise position; that is, point cloud data that is accumulated to the current frame but lies outside the preset position range is treated as noise, and the point cloud data corresponding to the noise is removed from the current frame.
  • Through the point cloud data processing method of this embodiment, the point cloud data of the historical frames is accumulated to the current frame to achieve point cloud data enhancement: the point cloud becomes dense, which overcomes the sparseness defect of point cloud data and facilitates generating a high-accuracy environment map.
  • Moreover, obstacle movement is taken into account during accumulation: the historical point cloud data is accumulated to the current frame according to the obstacle's speed. With this velocity compensation, the accumulated current frame suppresses or even eliminates the point cloud tailing problem, further improving point cloud quality and, in turn, the accuracy of the environment map.
  • Another embodiment of the present disclosure provides a point cloud data processing device as a component of a sensor of a movable platform.
  • the sensor can be installed on a movable platform, and the sensor can be a lidar.
  • the point cloud data processing device can be used alone or in combination with other sensors of a movable platform to generate an environment map.
  • the point cloud data processing device includes a memory and a processor.
  • the processor and the memory can be connected via a bus.
  • the memory may store instructions for the processor to execute and/or data to be processed or processed.
  • the number of memories can also be one or more. Instructions for execution by the processor and/or data to be processed or processed may be stored in one memory, or may be distributed and stored in various memories.
  • the memory may be volatile memory or non-volatile memory.
  • the memory may include: random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), cache, registers, etc.
  • the memory can also include: one-time programmable read-only memory (OTPROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), mask ROM, Flash ROM, flash memory, hard drive, solid state drive, etc.
  • the number of processors can be one or more.
  • the processor may be a central processing unit (CPU), a field-programmable gate array (FPGA), a digital signal processor (DSP), or other data processing chips.
  • When there is one processor, the instructions stored in the memory for execution are executed by that processor.
  • When there are multiple processors, the instructions stored in the memory may be executed by one of the multiple processors or distributed among at least some of the multiple processors.
  • Memory used to store executable instructions
  • the processor is configured to execute the executable instructions stored in the memory to perform the following operations:
  • the operation of determining the position of the obstacle in the historical frame includes:
  • the operation of identifying the obstacle in the history frame includes:
  • determining, according to the feature points of the current frame and the feature points of the historical frame, whether the obstacle of the current frame is the obstacle of the historical frame through an optical flow algorithm.
  • the speed of the obstacle in the historical frame is determined according to the measurement value of the preset sensor.
  • the Kalman filter is used to determine the speed of the obstacle in the historical frame.
  • the Kalman filter includes: state equation and measurement equation;
  • the determining the speed of the obstacle in the historical frame by using the Kalman filter includes:
  • An iterative operation is performed according to the state equation and the measurement equation to determine the speed of the obstacle in the historical frame; the measurement equation includes the measurement value of the preset sensor.
  • the preset sensor includes at least one of the following: an inertial measurement unit, a wheel speedometer, and a satellite positioning unit.
  • the operation of accumulating point cloud data of the obstacle in the historical frame to the current frame includes:
  • the point cloud data of the obstacle in the historical frame is updated according to the predicted position, and the updated point cloud data is supplemented to the current frame.
  • the operation of determining the movement distance of the obstacle from the historical frame to the current frame includes:
  • the moving distance is obtained according to the speed of the obstacle in the historical frame and the time difference.
  • the operation of updating the point cloud data of the obstacle in the historical frame includes:
  • the position coordinates of the point cloud data of the obstacle in the historical frame are replaced with the position coordinates of the predicted position.
  • the processor also performs the following operations:
  • the preset position range includes at least:
  • the range of at least one dimension in the coordinate system of the point cloud processing device that executes the point cloud data processing method.
  • Another embodiment of the present disclosure also provides a lidar, as shown in FIG. 11, including: a transmitter, a receiver, and the point cloud data processing device of the foregoing embodiment.
  • the transmitter is used to emit a laser beam that irradiates objects in the environment and is reflected by them.
  • the receiver is used to receive the laser beam reflected back.
  • the point cloud data processing device processes the laser beam received by the receiver to generate point cloud data.
  • Yet another embodiment of the present disclosure also provides a movable platform, as shown in FIG. 12, including: a body, a power system, and the lidar of the foregoing embodiment.
  • the movable platform can be at least: a vehicle or an aircraft.
  • the aircraft may be a drone, for example.
  • the body is used to provide support for the power system and lidar.
  • a control component and a communication component can be provided in the body.
  • the body can control the actions of the power system and the lidar through the control component, and communicate with the remote control station, control terminal or control center through the communication component.
  • the power system is arranged on the body, and the power system is used to provide power to the movable platform so that the movable platform can exercise or sail.
  • the lidar is set on the body to perceive the environmental information of the movable platform.
  • Yet another embodiment of the present disclosure provides a computer-readable storage medium.
  • the computer-readable storage medium stores executable instructions.
  • When executed by one or more processors, the executable instructions cause the one or more processors to execute the point cloud data processing method described in the embodiments of the present disclosure.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • The size of the sequence numbers of the foregoing processes does not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable systems.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer program may be configured to have computer program code including computer program modules. It should be noted that the division and number of modules are not fixed; those skilled in the art can use appropriate program modules or program module combinations according to the actual situation, and when these program module combinations are executed by a computer (or processor), the flow of the point cloud data processing method described in the present disclosure, and variants thereof, can be executed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

A point cloud data processing method and device, a lidar, and a movable platform. The point cloud data processing method includes: acquiring a current frame and determining an obstacle in the current frame; acquiring a historical frame before the current frame and determining the position and speed of the obstacle in the historical frame; and accumulating the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.

Description

Point cloud data processing method and device, lidar, and movable platform
Technical Field
The present disclosure relates to the technical field of point cloud data processing, and in particular to a point cloud data processing method and device, a lidar, and a movable platform.
Background
In the field of autonomous driving, the most widely used sensor at present is lidar, which is generally used for dynamic obstacle detection and for building environment maps. The main problem with current lidar is that the generated point cloud data is not dense enough, which is especially obvious for objects far from the lidar. This problem degrades dynamic obstacle detection and also makes the environment map built by the lidar insufficiently accurate.
Summary of the Disclosure
The present disclosure provides a point cloud data processing method, including:
acquiring a current frame, and determining an obstacle in the current frame;
acquiring a historical frame before the current frame, and determining the position and speed of the obstacle in the historical frame;
accumulating the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.
The present disclosure also provides a point cloud data processing device, including:
a memory, used to store executable instructions;
a processor, used to execute the executable instructions stored in the memory to perform the following operations:
acquiring a current frame, and determining an obstacle in the current frame;
acquiring a historical frame before the current frame, and determining the position and speed of the obstacle in the historical frame;
accumulating the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.
The present disclosure also provides a lidar, including:
a transmitter, used to emit a laser beam;
a receiver, used to receive the laser beam reflected back;
the above point cloud data processing device, which processes the laser beam received by the receiver to generate point cloud data.
The present disclosure also provides a movable platform, including:
a body;
a power system, provided in the body, the power system being used to provide power for the movable platform;
the above lidar, provided in the body and used to perceive environmental information of the movable platform.
The present disclosure also provides a computer-readable storage medium storing executable instructions which, when executed by one or more processors, cause the one or more processors to execute the above point cloud data processing method.
It can be seen from the above technical solutions that the present disclosure has at least the following beneficial effects:
Through the point cloud data processing method of the embodiments, the point cloud data of the historical frames is accumulated to the current frame to achieve point cloud data enhancement, so that the point cloud becomes dense. This overcomes the sparseness defect of point cloud data and facilitates the generation of a high-accuracy environment map. Moreover, obstacle movement is taken into account during accumulation: the point cloud data of the historical frames is accumulated to the current frame according to the obstacle's speed. This velocity compensation suppresses or even eliminates the point cloud tailing problem, further improving point cloud quality and, in turn, the accuracy of the environment map.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present disclosure and constitute a part of the specification. Together with the following detailed description, they serve to explain the present disclosure but do not limit it. In the drawings:
Fig. 1 is a flowchart of a point cloud data processing method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of an environment map.
Fig. 3 shows the point cloud data of the obstacles in frame t.
Fig. 4 shows the point cloud data of the framed obstacles in frame t.
Fig. 5 shows the point cloud data of the framed obstacles in frame t-1.
Fig. 6 shows the point cloud data of the framed obstacles in frame t-2.
Fig. 7 shows the relationship between the positions of the obstacles in frame t-1 and their predicted positions in frame t.
Fig. 8 shows the relationship between the positions of the obstacles in frame t-2 and their predicted positions in frame t.
Fig. 9 shows the point cloud data of the obstacles in frame t after processing by the point cloud data processing method according to an embodiment of the present disclosure.
Fig. 10 is a schematic diagram of a point cloud data processing device according to an embodiment of the present disclosure.
Fig. 11 is a schematic diagram of a lidar according to an embodiment of the present disclosure.
Fig. 12 is a schematic diagram of a movable platform according to an embodiment of the present disclosure.
Detailed Description
The technical solutions of the present disclosure will be described clearly and completely below with reference to the embodiments and the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
An embodiment of the present disclosure provides a point cloud data processing method. As shown in Fig. 1, the method includes the following steps:
Step S101: acquire a current frame, and determine an obstacle in the current frame;
Step S102: acquire a historical frame before the current frame, and determine the position and speed of the obstacle in the historical frame;
Step S103: accumulate the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.
The point cloud data processing method of this embodiment is executed by a point cloud data processing device. The device serves as a component of a sensor, and the sensor is generally installed on a movable platform. Movable platforms herein include vehicles, unmanned aerial vehicles, manned aerial vehicles, ships, and other movable carriers. The unmanned aerial vehicle herein may be a rotorcraft, such as a multi-rotor aircraft propelled through the air by multiple propellers. The vehicle herein may be any of various motor vehicles and non-motor vehicles; a motor vehicle may be unmanned or manned.
The movable platform herein may carry one or more sensors for collecting environmental data. The data acquired by the one or more sensors can be combined to generate an environment map representing the surrounding environment. The environment map herein may be a two-dimensional or three-dimensional map, and the environment may be urban, suburban, rural, or any other environment. As shown in Fig. 2, the environment map may include information about the positions of objects in the environment, such as one or more obstacles. An obstacle may be any object or entity that can hinder the movement of the movable platform. Some obstacles may be located on the ground, such as the buildings in Fig. 2, motor vehicles (for example, the cars and trucks on the road in Fig. 2), humans, animals, plants (for example, the trees in Fig. 2), and other man-made or natural structures. Some obstacles may be entirely in the air, including aircraft (for example, airplanes, helicopters, hot air balloons, other UAVs) or birds.
The movable platform can use the generated environment map to perform various operations, some of which may be semi-automated or fully automated. For example, the environment map can be used to automatically determine a flight path for an unmanned aerial vehicle to navigate from its current position to a target position, or a driving path for a vehicle to travel from its current position to a target position. As another example, the environment map can be used to determine the spatial arrangement of one or more obstacles, thereby enabling the movable platform to perform obstacle avoidance maneuvers. Advantageously, the sensors used herein to collect environmental data can improve the accuracy and precision of environment map construction, even in diverse environments and operating conditions, thereby enhancing the robustness and flexibility of functions such as navigation and obstacle avoidance.
In this embodiment, the point cloud data processing device, as a component of a sensor, may be used alone or in combination with other sensors of the movable platform to generate the environment map. The sensor may be a lidar, and the point cloud data processing device is the data processing component of the lidar. The other sensors of the movable platform may be GPS sensors, inertial sensors, vision sensors, ultrasonic sensors, and so on. Fusing the lidar with other sensors can compensate for the limitations or errors associated with a single sensor type, thereby improving the accuracy and reliability of the environment map.
To build the environment map, the lidar can continuously probe the surrounding environment while the movable platform moves. During probing, the lidar emits a laser beam to the surrounding environment, the beam is reflected by objects in the environment, and the reflected signal is received by the lidar to obtain a data frame. The lidar images the surrounding environment at each moment, obtaining a data frame for each moment; each data frame consists of point cloud data. Point cloud data is a collection of data reflecting the surface shapes of objects in the environment.
In step S101, the lidar emits a laser beam to the surrounding environment at the current moment, the beam is reflected by objects in the environment, and the reflected signal is received by the lidar to obtain the data frame of the current moment, hereinafter referred to as the current frame. The obstacles in the current frame are identified by processing its point cloud data. Some obstacles in the environment are static and some are moving; moving obstacles are called dynamic obstacles. Since the purpose of this embodiment is to accumulate the point cloud data of dynamic obstacles, obstacle identification in this step means identification of dynamic obstacles, and the subsequent steps of this embodiment likewise process dynamic obstacles. Unless otherwise specified, "obstacle" below refers to a dynamic obstacle.
The obstacles in the current frame can be identified by various methods. In one example, the method may include the following steps: effective point screening, point cloud clustering, and obstacle framing. These steps are introduced below.
Effective point screening:
Some objects in the environment are irrelevant to obstacle identification, for example static obstacles such as road surfaces, trees, walls, and buildings, and these irrelevant objects would interfere with obstacle identification. Therefore the point cloud data of the current frame is screened first.
A spatial region of interest is selected, and the point cloud data outside the region of interest is excluded. This step removes the point cloud data of irrelevant objects: as shown in Fig. 3, the trees and buildings are removed, and only the point cloud data related to obstacles is retained.
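As a hedged illustration of this screening step, the following is a minimal numpy sketch that crops a point cloud to an axis-aligned region of interest; the function name, the (N, 3) array layout, and the box-shaped region are assumptions for illustration, since the text does not fix a specific data structure or region shape.

    import numpy as np

    def screen_effective_points(points, roi_min, roi_max):
        """Keep only the points inside an axis-aligned region of interest.

        points:  (N, 3) array of X/Y/Z coordinates in the lidar frame.
        roi_min: (3,) lower corner of the region of interest.
        roi_max: (3,) upper corner of the region of interest.
        """
        mask = np.all((points >= roi_min) & (points <= roi_max), axis=1)
        return points[mask]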
Point cloud clustering:
After the obstacle-related point cloud data has been screened out, it is not yet known which obstacle each point belongs to. Point cloud clustering separates the point cloud data belonging to the same obstacle. In one example, a density-based spatial clustering algorithm (DBSCAN, Density-Based Spatial Clustering of Applications with Noise) can be used to cluster the current frame. DBSCAN is fast, handles noise points effectively, discovers spatial clusters of arbitrary shape, and can separate different obstacles that are easily mistaken for the same obstacle, giving high clustering accuracy. As shown in Fig. 3, point cloud clustering separates the point cloud data belonging to the car from that belonging to the truck.
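As a hedged sketch of this clustering step, the following uses scikit-learn's DBSCAN implementation on the screened points; the eps and min_samples values are illustrative assumptions, not parameters given by the text. DBSCAN is a natural fit here precisely because it needs no preset cluster count and labels sparse stray returns as noise.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def cluster_obstacles(points, eps=0.7, min_samples=5):
        """Group screened points into per-obstacle clusters with DBSCAN.

        Returns a dict mapping cluster label -> (M, 3) point array;
        label -1 (DBSCAN's noise points) is discarded.
        """
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
        return {lbl: points[labels == lbl] for lbl in set(labels) if lbl != -1}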
Obstacle framing:
Point cloud clustering gives a preliminary identification of the obstacles. In the data frames of the point cloud data processing device, an obstacle is generally represented as a three-dimensional cuboid, called a "frame". Framing an obstacle encloses its outline in a box for subsequent obstacle tracking.
The features of the obstacle can be extracted first. In one example, these features may include the tracking point position, the obstacle's movement direction, its length and width, and so on. The obstacle's frame is then extracted according to these features. Various methods can be used to extract the frame; in one example, a minimum convex hull method combined with a fuzzy line segment method is used. As shown in Fig. 4, the frames of the car and the truck can be extracted.
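The text extracts frames with a minimum convex hull combined with fuzzy line segments; as a simpler hedged stand-in, the sketch below fits an oriented box from the principal direction of a cluster's ground-plane footprint. The PCA heuristic and the returned tuple layout are assumptions for illustration, not the patented method.

    import numpy as np

    def fit_frame(cluster):
        """Fit a simple oriented box ("frame") to one obstacle cluster.

        cluster: (M, 3) array of points belonging to a single obstacle.
        Returns (center, heading, length, width, height).
        """
        center = cluster.mean(axis=0)
        xy = cluster[:, :2] - center[:2]
        # Heading from the dominant eigenvector of the XY covariance.
        _, vecs = np.linalg.eigh(np.cov(xy.T))
        heading = np.arctan2(vecs[1, -1], vecs[0, -1])
        c, s = np.cos(-heading), np.sin(-heading)
        aligned = xy @ np.array([[c, -s], [s, c]]).T
        length, width = aligned.max(axis=0) - aligned.min(axis=0)
        height = np.ptp(cluster[:, 2])  # extent along Z
        return center, heading, length, width, height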
Through the above steps, the obstacles in the current frame are identified; depending on the actual environment, there may be one or more obstacles in the current frame.
After the obstacles in the current frame are determined, step S102 acquires the historical frames before the current frame and determines the position and speed of each obstacle in those frames.
A historical frame is a data frame obtained at a historical moment before the current frame; how such frames are obtained is as described for the current frame in step S101. That is, at each historical moment the lidar emits a laser beam to the surrounding environment, the beam is reflected by objects in the environment, and the reflected signal is received by the lidar to yield the data frame of that moment. For example, if the current frame is at time t, the historical moments are t-n, ..., t-2, t-1, and for the current frame at time t the historical frames include the data frames at times t-n, ..., t-2, t-1.
Since the lidar continuously images the surrounding environment, the same obstacle appears in the data frames of multiple moments; that is, some or all of the obstacles determined in the current frame in step S101 also appear in the historical frames. In this step, the obstacle is first identified in the historical frames, i.e., the obstacle of the current frame is found in the historical frames. This embodiment can use obstacle tracking to identify, in the historical frames, the obstacles of the current frame.
Specifically, with obstacle tracking, the feature points of the current frame and of the historical frame are acquired respectively, and an optical flow algorithm determines, from these feature points, whether an obstacle of the current frame is an obstacle of the historical frame.
For example, for the current frame at time t (hereinafter, frame t), its previous frame, i.e., the data frame at time t-1 (hereinafter, frame t-1), is acquired. The feature points of frame t and frame t-1 are then obtained, the optical flow algorithm processes the two sets of feature points, and the obstacles of frame t are identified in frame t-1, as shown in Fig. 5. Next, the data frame at time t-2 (hereinafter, frame t-2) is acquired, its feature points are obtained, and the optical flow algorithm processes the feature points of frames t-1 and t-2 to identify the obstacles of frame t-1 in frame t-2, as shown in Fig. 6. By analogy, performing these steps on each pair of adjacent frames from the data at time t-3 to the data frame at time t-n associates the obstacle back to frame t-n, achieving obstacle tracking.
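The text tracks with feature points and optical flow; as a simpler hedged stand-in for the frame-to-frame association it performs, the sketch below links obstacles across adjacent frames by nearest cluster centroid. The greedy matching and the max_dist threshold are illustrative assumptions.

    import numpy as np

    def associate_obstacles(curr_clusters, prev_clusters, max_dist=2.0):
        """Greedy nearest-centroid association between adjacent frames.

        curr_clusters / prev_clusters: dicts of label -> (M, 3) points.
        Returns {current label: previous label} for matched obstacles,
        so the same obstacle can carry one identifier across frames.
        """
        matches, used = {}, set()
        for cl, cpts in curr_clusters.items():
            c = cpts.mean(axis=0)
            best, best_d = None, max_dist
            for pl, ppts in prev_clusters.items():
                if pl in used:
                    continue
                d = np.linalg.norm(c - ppts.mean(axis=0))
                if d < best_d:
                    best, best_d = pl, d
            if best is not None:
                matches[cl] = best
                used.add(best)
        return matches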
Other methods can also be used to implement obstacle tracking in this embodiment, including the multi-target hypothesis tracking method, the nearest neighbor method, the joint probability data association method, and so on.
If an obstacle in one frame and an obstacle in the next frame are judged to be the same obstacle, the same identifier can be attached to it. There are various ways to extract the feature information of the obstacles in a data frame; in one example, an artificial neural network algorithm can be used to extract the feature information of the obstacles.
Those skilled in the art will understand that the above applies to all obstacles determined in the current frame. That is, when one obstacle is determined in the current frame in step S101 (for example, when only the car or only the truck is present in Fig. 4), the above steps associate that obstacle with the historical frames, so that it is identified in each historical frame. When multiple obstacles are determined in the current frame (for example, the car and the truck appear at the same time in Fig. 4), the above operations are performed for each obstacle, so that every obstacle is associated with the historical frames and identified in each of them.
After the obstacles of the current frame are identified in the historical frames, the position of each obstacle in a historical frame can be extracted from the point cloud data of the obstacle in that frame.
As shown in Table 1, the point cloud data of an obstacle includes position information and attribute information. The attribute information generally refers to the intensity of the obstacle's echo signal. The position information refers to the position coordinates of the obstacle, which may be the three-axis coordinates X/Y/Z in a three-dimensional coordinate system with the lidar as the origin. Therefore, extracting the position coordinates from the obstacle's point cloud data yields the obstacle's position in the historical frame.
Table 1. Format of the point cloud data (position information X/Y/Z and attribute information)
For example, for an obstacle in frame t-1, the three-axis coordinates in its point cloud data in frame t-1 are taken as its position in frame t-1; for an obstacle in frame t-2, the three-axis coordinates in its point cloud data in frame t-2 are taken as its position in frame t-2; and so on, so that for an obstacle in frame t-n, the three-axis coordinates in its point cloud data in frame t-n are taken as its position in frame t-n. Through these steps, the positions of the car and the truck in frame t-1 shown in Fig. 5 and in frame t-2 shown in Fig. 6 are obtained.
To accumulate an obstacle's point cloud data, its speed in the historical frames must be known. This speed can be determined in several ways, including at least: determining the speed of the obstacle in the historical frames according to the measurement values of a preset sensor. As an example, this embodiment uses a Kalman filter to estimate the speed of the obstacle in the historical frames, performing an iterative computation with a state equation and a measurement equation; the measurement equation includes the speed measurement value of the preset sensor.
The state equation of the Kalman filter is
v_w(t) = A * v_w(t-1) + w(t)
The measurement equation of the Kalman filter is
v_z(t) = z(t) + y(t)
where v_w(t) is the velocity predicted by the state equation in frame t, A is the coefficient of the state equation, v_w(t-1) is the velocity predicted by the state equation in frame t-1, and w(t) is the state noise of frame t; v_z(t) is the velocity predicted by the measurement equation in frame t, z(t) is the velocity measurement of the point cloud data processing device in frame t, and y(t) is the prediction noise of frame t.
The speed of the obstacle is computed as
v(t) = A * v_w(t-1) + w(t) * (v_z(t) - z(t-1))
where v(t) is the speed of the obstacle in frame t and z(t-1) is the velocity measurement of the point cloud data processing device in frame t-1.
An initial value can be set for the velocity prediction of the state equation; it may be an empirical value, for example 80 km/h. Iterating the state equation and the measurement equation of the Kalman filter yields the speed of the obstacle in frames t-n, ..., t-2, t-1.
The sampling of the data frames of the point cloud data processing device and the sampling of the preset sensor may be synchronous or asynchronous. When they are synchronous, the speed measurement z(t) in frame t is obtained at the same sampling moment t as frame t; when they are asynchronous, z(t) may be obtained before or after the sampling moment t of frame t.
Through the above steps, the speeds of the car and the truck in each historical frame shown in Fig. 3 can be estimated, yielding their speeds in each historical frame.
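The text leaves the coefficient A and the noise terms w(t) and y(t) unspecified, so the following is a hedged, conventional scalar Kalman filter that plays the same role: it fuses a constant-velocity prediction with the preset sensor's speed measurements, with the Kalman gain weighting the innovation the way w(t) weights it in the formula above. All parameter values are illustrative assumptions.

    def estimate_speed(z_measurements, v0=22.2, a=1.0, q=0.5, r=1.0):
        """Scalar Kalman filter for an obstacle's speed over historical frames.

        z_measurements: per-frame speed measurements z(t), in m/s.
        v0: initial speed guess (22.2 m/s is roughly the 80 km/h in the text).
        a:  state-equation coefficient A; q, r: process/measurement variances.
        Returns the filtered speed v(t) for every frame.
        """
        v, p, speeds = v0, 1.0, []
        for z in z_measurements:
            v_pred = a * v               # state equation: v_w(t) = A*v_w(t-1) + w(t)
            p_pred = a * a * p + q
            k = p_pred / (p_pred + r)    # gain weighting the innovation
            v = v_pred + k * (z - v_pred)
            p = (1.0 - k) * p_pred
            speeds.append(v)
        return speeds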
As mentioned above, the lidar is installed on a movable platform, so the speed measurement of the point cloud data processing device is in fact the speed measurement of the movable platform. This measurement is usually provided by at least one preset sensor of the lidar or of the movable platform, and these preset sensors include at least: an inertial measurement unit, a wheel speedometer, and a satellite positioning unit.
In this embodiment, the speed measurement may be obtained from one of the inertial measurement unit, the wheel speedometer, and the satellite positioning unit, or by fusing the measurements of two or more such sensors. A speed measurement obtained by data fusion has higher accuracy, which helps further improve the accuracy of point cloud accumulation and thus the point cloud quality.
With the position and speed of the obstacle in the historical frames available, step S103 accumulates the point cloud data of the obstacle in the historical frames to the current frame, making the current frame's point cloud denser and improving its quality.
First, the moving distance of the obstacle from the historical frame to the current frame is determined according to its speed in the historical frame. The moving distance can be determined by the following steps:
determining the time difference between the historical frame and the current frame;
obtaining the moving distance from the speed in the historical frame and the time difference.
The time difference between the historical frame and the current frame is computed first. For the current frame t, frame t-1 differs from frame t by one moment, frame t-2 by two moments, and likewise frame t-n by n moments. The time between adjacent frames depends on the frame rate of the lidar: the higher the frame rate, the shorter this time, and the lower the frame rate, the longer it is. For example, if the lidar frame rate is 20 fps, i.e., 20 frames per second, the time between two adjacent frames is 0.05 seconds; the time difference between frame t-1 and frame t is then 0.05 seconds, between frame t-2 and frame t it is 0.1 seconds, and in general between frame t-n and frame t it is 0.05*n seconds.
The moving distance is then obtained from the speed in the historical frame and the time difference. Specifically, for each obstacle, multiplying its speed in each historical frame by the time difference between that frame and the current frame gives the obstacle's moving distance from that historical frame to the current frame.
For example, see Fig. 7; to show the moving distances and positions more clearly, the point cloud data of the car and the truck are omitted in Fig. 7, and the two are represented by their frames. For the car in the current frame t, multiplying the car's speed in frame t-1 by the length of one moment gives the car's moving distance D1 from frame t-1 to frame t; for the truck, the same computation gives its moving distance D1' from frame t-1 to frame t. As shown in Fig. 8, multiplying the car's speed in frame t-2 by the length of two moments gives the car's moving distance D2 from frame t-2 to frame t, and similarly the truck's moving distance D2' from frame t-2 to frame t. By analogy, the moving distances Dn and Dn' of the car and the truck from frame t-n to frame t are obtained.
Next, the predicted position of the obstacle of the historical frame in the current frame is determined from the obstacle's position in the historical frame and the moving distance. For each obstacle, moving its position in each historical frame by the moving distance from that frame to the current frame yields the predicted position, in the current frame, of the obstacle of that historical frame.
For example, as shown in Fig. 7, moving the car's three-dimensional coordinates of frame t-1 by the distance D1 gives the predicted position P1, and its three-dimensional coordinates, of the car of frame t-1 in frame t; moving the truck's coordinates of frame t-1 by D1' gives the truck's predicted position P1' and its coordinates. Moving the car's coordinates of frame t-2 by D2 gives the predicted position P2 and its coordinates, and moving the truck's coordinates of frame t-2 by D2' gives P2' and its coordinates, as shown in Fig. 8. By analogy, the predicted positions in frame t of the car and the truck of frame t-n, and their coordinates, are obtained.
Then the point cloud data of the obstacle in the historical frame is updated according to the predicted position, and the updated point cloud data is supplemented to the current frame. Updating the point cloud data of the obstacle in a historical frame means replacing the position coordinates of that point cloud data with the position coordinates of the predicted position.
Through these steps, the point cloud data of the obstacles in the historical frames is accumulated to the current frame. For each obstacle, its point cloud data in each historical frame is updated, i.e., the three-dimensional coordinates of the point cloud data in the historical frame are replaced with the three-dimensional coordinates of that frame's predicted position in the current frame, and the updated point cloud data is also used as point cloud data of the current frame.
For example, the three-dimensional coordinates of the car's point cloud data in frame t-1 are replaced with the coordinates of the predicted position, in frame t, of the car of frame t-1, and the coordinates of the car's point cloud data in frame t-2 are replaced with the coordinates of the predicted position of the car of frame t-2, so that the car's point cloud data of frames t-1 and t-2 is accumulated to frame t. The truck's point cloud data of frames t-1 and t-2 is accumulated to frame t in the same way. By analogy, all the point cloud data of the car and the truck from frames t-1 through t-n can be accumulated to frame t. As shown in Fig. 9, after accumulation the point cloud data of the car and the truck in the current frame is clearly denser than the pre-accumulation point cloud data shown in Fig. 3, which raises the density of the point cloud data, improves point cloud quality, and helps improve the accuracy of the environment map.
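As a hedged end-to-end sketch of step S103, the following shifts an obstacle's historical points by speed multiplied by time difference and concatenates them with the current frame. The list-of-pairs history layout, the per-frame 3D velocity vectors, and the 20 fps frame interval are illustrative assumptions.

    import numpy as np

    def accumulate_obstacle(curr_points, history, frame_dt=0.05):
        """Accumulate an obstacle's historical point clouds into frame t.

        curr_points: (N, 3) points of the obstacle in the current frame.
        history: [(points, velocity), ...] ordered t-1, t-2, ..., t-n,
                 where points is (M, 3) and velocity is a (3,) vector in m/s.
        """
        clouds = [curr_points]
        for k, (pts, vel) in enumerate(history, start=1):
            dt = k * frame_dt              # time difference to the current frame
            clouds.append(pts + vel * dt)  # coordinates -> predicted position
        return np.vstack(clouds)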
Note that the number n of historical frames can be chosen according to actual requirements. Generally, the larger n is, the more historical frames are accumulated and the denser the current frame becomes. But accumulation can also introduce noise: part of the accumulated point cloud data may be noise points lying beyond the detection range of the lidar, which harms point cloud quality. Therefore, in this embodiment, a denoising operation may be performed after step S103.
First a preset position range is obtained. This range may span one, two, or three dimensions of the three-dimensional coordinate system whose origin is the point cloud data processing device: for example, a position range along the X axis, or a range formed jointly by the X and Y axes, or by the X, Y, and Z axes.
A predicted position outside the preset position range can be taken as a noise position; that is, point cloud data that has been accumulated to the current frame but lies outside the preset position range is treated as noise, and the corresponding point cloud data is removed from the current frame. This denoising eliminates the noise points, further improving point cloud quality and the accuracy of the environment map.
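A hedged sketch of this denoising step follows; it bounds only the X axis, matching the case where the preset range covers a single dimension of the device's coordinate system. The 150-meter limit is an illustrative assumption, not a value from the text.

    import numpy as np

    def remove_range_noise(points, x_range=(0.0, 150.0)):
        """Drop accumulated points whose X coordinate leaves the preset range.

        Points outside the range are treated as noise positions and the
        corresponding point cloud data is removed from the current frame.
        """
        lo, hi = x_range
        keep = (points[:, 0] >= lo) & (points[:, 0] <= hi)
        return points[keep]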
Through the point cloud data processing method of this embodiment, the point cloud data of the historical frames is accumulated to the current frame to achieve point cloud data enhancement, so that the point cloud becomes dense. This overcomes the sparseness defect of point cloud data and facilitates the generation of a high-accuracy environment map. Moreover, obstacle movement is taken into account during accumulation: the historical point cloud data is accumulated to the current frame according to the obstacle's speed. With this velocity compensation, the accumulated current frame suppresses or even eliminates the point cloud tailing problem, further improving point cloud quality and the accuracy of the environment map.
Another embodiment of the present disclosure provides a point cloud data processing device that serves as a component of a sensor of a movable platform. The sensor can be installed on the movable platform and may be a lidar. The point cloud data processing device may be used alone or in combination with other sensors of the movable platform to generate an environment map.
As shown in Fig. 10, the point cloud data processing device includes a memory and a processor, which may be connected by a bus.
The memory may store instructions for the processor to execute and/or data to be processed or already processed. There may be one or more memories; the instructions and/or data may be stored in one memory or distributed across several. The memory may be volatile or non-volatile. As volatile memory, it may include random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), caches, registers, and so on. As non-volatile memory, it may include one-time programmable read-only memory (OTPROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), mask ROM, flash ROM, flash memory, hard disk drives, solid state drives, and so on.
There may be one or more processors. A processor may be a central processing unit (CPU), a field-programmable gate array (FPGA), a digital signal processor (DSP), or another data processing chip. When there is one processor, the instructions stored in the memory are executed by that processor; when there are multiple processors, the instructions may be executed by one of them or distributed across at least some of them.
In the point cloud data processing device of this embodiment:
the memory is used to store executable instructions;
the processor is used to execute the executable instructions stored in the memory to perform the following operations:
acquiring a current frame, and determining an obstacle in the current frame;
acquiring a historical frame before the current frame, and determining the position and speed of the obstacle in the historical frame;
accumulating the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.
The operation of determining the position of the obstacle in the historical frame includes:
identifying the obstacle in the historical frame;
extracting the position of the obstacle in the historical frame from the point cloud data of the obstacle in the historical frame.
The operation of identifying the obstacle in the historical frame includes:
acquiring the feature points of the current frame and the feature points of the historical frame respectively;
determining, according to the feature points of the current frame and the feature points of the historical frame, whether the obstacle of the current frame is the obstacle of the historical frame through an optical flow algorithm.
The speed of the obstacle in the historical frame is determined according to the measurement value of a preset sensor; specifically, a Kalman filter may be used together with that measurement value to determine the speed.
The Kalman filter includes a state equation and a measurement equation, and determining the speed of the obstacle in the historical frame with the Kalman filter includes:
performing an iterative computation according to the state equation and the measurement equation to determine the speed of the obstacle in the historical frame, the measurement equation including the measurement value of the preset sensor.
There may be multiple preset sensors, in which case the measurement value is obtained by fusing the measurement results of the multiple preset sensors.
The preset sensor includes at least one of the following: an inertial measurement unit, a wheel speedometer, and a satellite positioning unit.
The operation of accumulating the point cloud data of the obstacle in the historical frame to the current frame includes:
determining the moving distance of the obstacle from the historical frame to the current frame according to the speed of the obstacle in the historical frame;
determining the predicted position of the obstacle of the historical frame in the current frame according to the position of the obstacle in the historical frame and the moving distance;
updating the point cloud data of the obstacle in the historical frame according to the predicted position, and supplementing the updated point cloud data to the current frame.
The operation of determining the moving distance of the obstacle from the historical frame to the current frame includes:
determining the time difference between the historical frame and the current frame;
obtaining the moving distance according to the speed of the obstacle in the historical frame and the time difference.
The operation of updating the point cloud data of the obstacle in the historical frame includes:
replacing the position coordinates of the point cloud data of the obstacle in the historical frame with the position coordinates of the predicted position.
The processor also performs the following operations:
taking a predicted position outside a preset position range as a noise position;
removing the point cloud data corresponding to the noise position from the current frame.
The preset position range includes at least: a range of at least one dimension in the coordinate system of the point cloud processing device that executes the point cloud data processing method.
Yet another embodiment of the present disclosure provides a lidar, as shown in Fig. 11, including: a transmitter, a receiver, and the point cloud data processing device of the foregoing embodiment.
The transmitter is used to emit a laser beam; the beam irradiates objects in the environment and is reflected by them.
The receiver is used to receive the laser beam reflected back.
The point cloud data processing device processes the laser beam received by the receiver to generate point cloud data.
Yet another embodiment of the present disclosure provides a movable platform, as shown in Fig. 12, including: a body, a power system, and the lidar of the foregoing embodiment. The movable platform may be at least a vehicle or an aircraft; the aircraft may be, for example, a drone.
The body provides support for the power system and the lidar. A control component and a communication component may be provided in the body: through the control component the body controls the actions of the power system and the lidar, and through the communication component it communicates with a remote control station, control terminal, or control center.
The power system is provided on the body and is used to provide power for the movable platform so that it can drive or sail.
The lidar is provided on the body and is used to perceive the environmental information of the movable platform.
Yet another embodiment of the present disclosure provides a computer-readable storage medium. The computer-readable storage medium stores executable instructions which, when executed by one or more processors, cause the one or more processors to execute the point cloud data processing method described in the embodiments of the present disclosure.
The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), and so on.
It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable system. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
The above are only specific embodiments of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed herein, and all such changes or substitutions shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
In addition, a computer program may be configured with computer program code including computer program modules. The division and number of the modules are not fixed; those skilled in the art may use appropriate program modules or combinations thereof according to the actual situation, and when such a combination of program modules is executed by a computer (or processor), the flow of the point cloud data processing method described in the present disclosure, and variants thereof, can be executed.
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the device described above, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or equivalently replace some or all of the technical features therein; where no conflict arises, the features in the embodiments of the present disclosure may be combined arbitrarily. Such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (30)

  1. A point cloud data processing method, characterized by comprising:
    acquiring a current frame, and determining an obstacle in the current frame;
    acquiring a historical frame before the current frame, and determining the position and speed of the obstacle in the historical frame;
    accumulating the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.
  2. The point cloud data processing method according to claim 1, wherein determining the position of the obstacle in the historical frame comprises:
    identifying the obstacle in the historical frame;
    extracting the position of the obstacle in the historical frame from the point cloud data of the obstacle in the historical frame.
  3. The point cloud data processing method according to claim 2, wherein identifying the obstacle in the historical frame comprises:
    acquiring feature points of the current frame and feature points of the historical frame respectively;
    determining, according to the feature points of the current frame and the feature points of the historical frame, whether the obstacle of the current frame is the obstacle of the historical frame through an optical flow algorithm.
  4. The point cloud data processing method according to claim 1, wherein the speed of the obstacle in the historical frame is determined according to a measurement value of a preset sensor.
  5. The point cloud data processing method according to claim 4, wherein the speed of the obstacle in the historical frame is determined using a Kalman filter according to the measurement value of the preset sensor.
  6. The point cloud data processing method according to claim 5, wherein the Kalman filter comprises a state equation and a measurement equation;
    determining the speed of the obstacle in the historical frame using the Kalman filter comprises:
    performing an iterative operation according to the state equation and the measurement equation to determine the speed of the obstacle in the historical frame, the measurement equation including the measurement value of the preset sensor.
  7. The point cloud data processing method according to claim 6, wherein there are multiple preset sensors, and the measurement value is obtained by fusing the measurement results of the multiple preset sensors.
  8. The point cloud data processing method according to claim 5, wherein the preset sensor comprises at least one of the following: an inertial measurement unit, a wheel speedometer, and a satellite positioning unit.
  9. The point cloud data processing method according to claim 1, wherein accumulating the point cloud data of the obstacle in the historical frame to the current frame comprises:
    determining a moving distance of the obstacle from the historical frame to the current frame according to the speed of the obstacle in the historical frame;
    determining a predicted position of the obstacle of the historical frame in the current frame according to the position of the obstacle in the historical frame and the moving distance;
    updating the point cloud data of the obstacle in the historical frame according to the predicted position, and supplementing the updated point cloud data to the current frame.
  10. The point cloud data processing method according to claim 9, wherein determining the moving distance of the obstacle from the historical frame to the current frame comprises:
    determining a time difference between the historical frame and the current frame;
    obtaining the moving distance according to the speed of the obstacle in the historical frame and the time difference.
  11. The point cloud data processing method according to claim 9, wherein updating the point cloud data of the obstacle in the historical frame comprises:
    replacing the position coordinates of the point cloud data of the obstacle in the historical frame with the position coordinates of the predicted position.
  12. The point cloud data processing method according to claim 9, further comprising:
    taking a predicted position outside a preset position range as a noise position;
    removing the point cloud data corresponding to the noise position from the current frame.
  13. The point cloud data processing method according to claim 12, wherein the preset position range comprises at least:
    a range of at least one dimension in the coordinate system of the point cloud processing device that executes the point cloud data processing method.
  14. A point cloud data processing device, characterized by comprising:
    a memory, used to store executable instructions;
    a processor, used to execute the executable instructions stored in the memory to perform the following operations:
    acquiring a current frame, and determining an obstacle in the current frame;
    acquiring a historical frame before the current frame, and determining the position and speed of the obstacle in the historical frame;
    accumulating the point cloud data of the obstacle in the historical frame to the current frame according to the position and speed of the obstacle in the historical frame.
  15. The point cloud data processing device according to claim 14, wherein the operation of determining the position of the obstacle in the historical frame comprises:
    identifying the obstacle in the historical frame;
    extracting the position of the obstacle in the historical frame from the point cloud data of the obstacle in the historical frame.
  16. The point cloud data processing device according to claim 15, wherein the operation of identifying the obstacle in the historical frame comprises:
    acquiring feature points of the current frame and feature points of the historical frame respectively;
    determining, according to the feature points of the current frame and the feature points of the historical frame, whether the obstacle of the current frame is the obstacle of the historical frame through an optical flow algorithm.
  17. The point cloud data processing device according to claim 14, wherein the speed of the obstacle in the historical frame is determined according to a measurement value of a preset sensor.
  18. The point cloud data processing device according to claim 17, wherein the speed of the obstacle in the historical frame is determined using a Kalman filter according to the measurement value of the preset sensor.
  19. The point cloud data processing device according to claim 18, wherein the Kalman filter comprises a state equation and a measurement equation;
    determining the speed of the obstacle in the historical frame using the Kalman filter comprises:
    performing an iterative operation according to the state equation and the measurement equation to determine the speed of the obstacle in the historical frame, the measurement equation including the measurement value of the preset sensor.
  20. The point cloud data processing device according to claim 19, wherein there are multiple preset sensors, and the measurement value is obtained by fusing the measurement results of the multiple preset sensors.
  21. The point cloud data processing device according to claim 18, wherein the preset sensor comprises at least one of the following: an inertial measurement unit, a wheel speedometer, and a satellite positioning unit.
  22. The point cloud data processing device according to claim 14, wherein the operation of accumulating the point cloud data of the obstacle in the historical frame to the current frame comprises:
    determining a moving distance of the obstacle from the historical frame to the current frame according to the speed of the obstacle in the historical frame;
    determining a predicted position of the obstacle of the historical frame in the current frame according to the position of the obstacle in the historical frame and the moving distance;
    updating the point cloud data of the obstacle in the historical frame according to the predicted position, and supplementing the updated point cloud data to the current frame.
  23. The point cloud data processing device according to claim 22, wherein the operation of determining the moving distance of the obstacle from the historical frame to the current frame comprises:
    determining a time difference between the historical frame and the current frame;
    obtaining the moving distance according to the speed of the obstacle in the historical frame and the time difference.
  24. The point cloud data processing device according to claim 22, wherein the operation of updating the point cloud data of the obstacle in the historical frame comprises:
    replacing the position coordinates of the point cloud data of the obstacle in the historical frame with the position coordinates of the predicted position.
  25. The point cloud data processing device according to claim 22, wherein the processor further performs the following operations:
    taking a predicted position outside a preset position range as a noise position;
    removing the point cloud data corresponding to the noise position from the current frame.
  26. The point cloud data processing device according to claim 25, wherein the preset position range comprises at least:
    a range of at least one dimension in the coordinate system of the point cloud processing device that executes the point cloud data processing method.
  27. A lidar, characterized by comprising:
    a transmitter, used to emit a laser beam;
    a receiver, used to receive the laser beam reflected back;
    the point cloud data processing device according to any one of claims 14 to 26, which processes the laser beam received by the receiver to generate point cloud data.
  28. A movable platform, characterized by comprising:
    a body;
    a power system, provided in the body, the power system being used to provide power for the movable platform;
    the lidar according to claim 27, provided in the body and used to perceive environmental information of the movable platform.
  29. The movable platform according to claim 28, wherein the movable platform comprises at least: a vehicle or an aircraft.
  30. A computer-readable storage medium, characterized in that it stores executable instructions which, when executed by one or more processors, cause the one or more processors to execute the point cloud data processing method according to any one of claims 1 to 13.
PCT/CN2019/108627 2019-09-27 2019-09-27 Point cloud data processing method and device, lidar, and movable platform WO2021056438A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980033234.7A CN112154356B (zh) 2019-09-27 2019-09-27 点云数据处理方法及其装置、激光雷达、可移动平台
PCT/CN2019/108627 WO2021056438A1 (zh) 2019-09-27 2019-09-27 点云数据处理方法及其装置、激光雷达、可移动平台

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/108627 WO2021056438A1 (zh) 2019-09-27 2019-09-27 点云数据处理方法及其装置、激光雷达、可移动平台

Publications (1)

Publication Number Publication Date
WO2021056438A1 true WO2021056438A1 (zh) 2021-04-01

Family

ID=73891925

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/108627 WO2021056438A1 (zh) 2019-09-27 2019-09-27 点云数据处理方法及其装置、激光雷达、可移动平台

Country Status (2)

Country Link
CN (1) CN112154356B (zh)
WO (1) WO2021056438A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113253293A (zh) * 2021-06-03 2021-08-13 中国人民解放军国防科技大学 一种激光点云畸变的消除方法和计算机可读存储介质

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734811B (zh) * 2021-01-21 2021-08-24 清华大学 一种障碍物跟踪方法、障碍物跟踪装置和芯片
CN112836628B (zh) * 2021-02-01 2022-12-27 意诺科技有限公司 粘连活动点处理方法和装置
CN113075668B (zh) * 2021-03-25 2024-03-08 广州小鹏自动驾驶科技有限公司 一种动态障碍物对象识别方法和装置
CN113534089B (zh) * 2021-07-09 2024-05-14 厦门精益远达智能科技有限公司 一种基于雷达的目标检测方法、装置以及设备
CN113673383B (zh) * 2021-08-05 2024-04-19 苏州智加科技有限公司 一种面向复杂道路场景的时空域障碍物检测方法及系统
CN113926174A (zh) * 2021-11-16 2022-01-14 南京禾蕴信息科技有限公司 一种绕桩运动轨迹分析与计时装置及其分析方法
CN114675274A (zh) * 2022-03-10 2022-06-28 北京三快在线科技有限公司 障碍物检测方法、装置、存储介质及电子设备
CN117148837A (zh) * 2023-08-31 2023-12-01 上海木蚁机器人科技有限公司 动态障碍物的确定方法、装置、设备和介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230379A (zh) * 2017-12-29 2018-06-29 百度在线网络技术(北京)有限公司 用于融合点云数据的方法和装置
US20180261095A1 (en) * 2017-03-08 2018-09-13 GM Global Technology Operations LLC Method and apparatus of networked scene rendering and augmentation in vehicular environments in autonomous driving systems
CN108647646A (zh) * 2018-05-11 2018-10-12 北京理工大学 基于低线束雷达的低矮障碍物的优化检测方法及装置
CN108985171A (zh) * 2018-06-15 2018-12-11 上海仙途智能科技有限公司 运动状态估计方法和运动状态估计装置
CN109509210A (zh) * 2017-09-15 2019-03-22 百度在线网络技术(北京)有限公司 障碍物跟踪方法和装置
CN109848988A (zh) * 2019-01-24 2019-06-07 深圳市普森斯科技有限公司 一种基于历史多帧点云信息融合的扫描匹配方法及系统
CN109974693A (zh) * 2019-01-31 2019-07-05 中国科学院深圳先进技术研究院 无人机定位方法、装置、计算机设备及存储介质

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11330284B2 (en) * 2015-03-27 2022-05-10 Qualcomm Incorporated Deriving motion information for sub-blocks in video coding
US10375413B2 (en) * 2015-09-28 2019-08-06 Qualcomm Incorporated Bi-directional optical flow for video coding
CN108732603B (zh) * 2017-04-17 2020-07-10 百度在线网络技术(北京)有限公司 用于定位车辆的方法和装置


Also Published As

Publication number Publication date
CN112154356B (zh) 2024-03-15
CN112154356A (zh) 2020-12-29

Similar Documents

Publication Publication Date Title
WO2021056438A1 (zh) 点云数据处理方法及其装置、激光雷达、可移动平台
CN108152831B (zh) 一种激光雷达障碍物识别方法及系统
KR102628778B1 (ko) 위치결정을 위한 방법, 컴퓨팅 기기, 컴퓨터 판독가능 저장 매체 및 컴퓨터 프로그램
CN110178048B (zh) 交通工具环境地图生成和更新的方法和系统
KR20210111180A (ko) 위치 추적 방법, 장치, 컴퓨팅 기기 및 컴퓨터 판독 가능한 저장 매체
US11651553B2 (en) Methods and systems for constructing map data using poisson surface reconstruction
CN110658531A (zh) 一种用于港口自动驾驶车辆的动态目标追踪方法
JP2021508814A (ja) LiDARを用いた車両測位システム
WO2019007263A1 (zh) 车载传感器的外部参数标定的方法和设备
EP2937757A2 (en) Methods and systems for object detection using multiple sensors
CN110988912A (zh) 自动驾驶车辆的道路目标与距离检测方法、系统、装置
EP4179500A1 (en) Method and system for generating bird's eye view bounding box associated with object
CN111458700A (zh) 车辆映射和定位的方法和系统
CN110826386A (zh) 基于lidar的对象检测和分类
WO2020186444A1 (zh) 物体检测方法、电子设备与计算机存储介质
WO2021196532A1 (en) Method for generation of an augmented point cloud with point features from aggregated temporal 3d coordinate data, and related device
RU2767949C2 (ru) Способ (варианты) и система для калибровки нескольких лидарных датчиков
RU2744012C1 (ru) Способы и системы для автоматизированного определения присутствия объектов
US20210197809A1 (en) Method of and system for predicting future event in self driving car (sdc)
WO2024012211A1 (zh) 自动驾驶环境感知方法、介质及车辆
CN112051575A (zh) 一种毫米波雷达与激光雷达的调整方法及相关装置
Jiménez et al. Improving the lane reference detection for autonomous road vehicle control
WO2022078342A1 (zh) 动态占据栅格估计方法及装置
EP3798665A1 (en) Method and system for aligning radar detection data
CN115965847A (zh) 交叉视角下多模态特征融合的三维目标检测方法和系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19946379

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19946379

Country of ref document: EP

Kind code of ref document: A1