WO2021232227A1 - Method for constructing point cloud frames, target detection method, distance measuring device, movable platform and storage medium - Google Patents

Method for constructing point cloud frames, target detection method, distance measuring device, movable platform and storage medium

Info

Publication number
WO2021232227A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
frame
points
measuring device
point
Prior art date
Application number
PCT/CN2020/091005
Other languages
English (en)
French (fr)
Inventor
李延召
郝智翔
陈涵
Original Assignee
深圳市大疆创新科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN202080006494.8A (published as CN114026461A)
Priority to PCT/CN2020/091005
Publication of WO2021232227A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 — Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 — Lidar systems specially adapted for specific applications
    • G01S17/89 — Lidar systems specially adapted for specific applications for mapping or imaging

Definitions

  • the present invention generally relates to the technical field of distance measuring devices, and more specifically to a method for constructing a point cloud frame, a target detection method, a distance measuring device, a movable platform and a storage medium.
  • a lidar ranging device can usually output point cloud frames one by one at a certain frequency (such as 10 Hz) to depict a three-dimensional scene.
  • based on such frames, intelligent algorithms can be developed to perceive targets in the scene. The frames are divided strictly by time, so the number of points in each frame is basically the same and no point is repeated across frames.
  • This is the simplest framing method. However, it causes problems for detection and recognition algorithms: target points that are too sparse reduce the accuracy of the algorithms.
  • the present invention proposes a method for constructing a point cloud frame, a target detection method, a distance measuring device, a movable platform, and a storage medium.
  • one aspect of the present invention provides a method for constructing a point cloud frame.
  • the method includes: acquiring a plurality of point cloud points sequentially collected by a distance measuring device; and, according to the spatial position information of the plurality of point cloud points, forming the plurality of point cloud points into multiple point cloud frames and outputting them in sequence, wherein different integration durations are used for the point cloud points at different spatial positions within a point cloud frame.
  • the step of forming the plurality of point cloud points into multiple point cloud frames and outputting them in sequence according to the spatial position information of the plurality of point cloud points includes:
  • determining the point cloud points output in each point cloud frame according to the collection time of each point cloud point, its integration duration, and the end time of each point cloud frame.
  • the point cloud points of at least two adjacent point cloud frames have overlapping parts.
  • the next point cloud frame includes those point cloud points from the previous point cloud frame whose time difference between their acquisition time and the end time of the next frame is less than or equal to the integration duration.
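  • As a rough illustration of this framing rule (a minimal sketch, not code from the patent; the names CloudPoint and belongs_to_frame are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class CloudPoint:
    x: float
    y: float
    z: float
    t: float            # acquisition time, in seconds
    integration: float  # integration duration assigned from the point's spatial position

def belongs_to_frame(p: CloudPoint, frame_end: float) -> bool:
    """A point is output in the frame ending at frame_end if it was acquired
    no later than the frame end and no more than its integration duration
    before it; points with long durations can appear in several frames."""
    return 0.0 <= frame_end - p.t <= p.integration
```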
  • the target detection method includes: scanning a target scene with a distance measuring device; sequentially outputting multiple point cloud frames according to the aforementioned method of constructing point cloud frames; and acquiring the position information of a detection target in the target scene based on at least one output point cloud frame.
  • acquiring the position information of the detection target in the target scene based on the at least one output point cloud frame includes:
  • determining the position information of each target at the current moment based on the cropped point cloud clusters.
  • Another aspect of the present invention provides a distance measuring device, the distance measuring device includes:
  • a memory, used to store executable program instructions; and
  • a processor, configured to execute the program instructions stored in the memory, so that the processor performs the aforementioned method for constructing a point cloud frame or the aforementioned target detection method.
  • another aspect of the present invention provides a movable platform, which includes:
  • a movable platform body, with at least one of the aforementioned distance measuring devices arranged on it.
  • Another aspect of the present invention provides a computer storage medium on which a computer program is stored, and when the computer program is executed by a processor, the foregoing method of constructing a point cloud frame or the foregoing method of target detection is implemented.
  • with the above arrangement, the integration time and the integration space can be adjusted adaptively, so that the spatial distribution of the point cloud points in each point cloud frame is more uniform and reasonable and better characterizes the object information in the scanned scene.
  • the point cloud frame constructed by this method can serve as a good way to display the point cloud, and can also serve as the basis for subsequent algorithms, making them more accurate and reducing processing errors caused by point cloud sparseness.
  • FIG. 1 shows a schematic structural diagram of a distance measuring device in an embodiment of the present invention
  • Figure 2 shows a schematic diagram of a distance measuring device in an embodiment of the present invention
  • Fig. 3 shows a schematic diagram of a scanning pattern of a distance measuring device in an embodiment of the present invention
  • Fig. 4 shows a schematic diagram of a scanning pattern of a distance measuring device in another embodiment of the present invention
  • FIG. 5 shows a schematic flowchart of a method for constructing a point cloud frame in an embodiment of the present invention
  • Fig. 6 shows a schematic diagram of the conventional integration duration as a function of distance;
  • FIG. 7 shows a schematic diagram of integration duration as a function of distance in an embodiment of the present invention;
  • FIG. 8 shows a schematic diagram of integration duration as a function of distance in another embodiment of the present invention;
  • FIG. 9 shows a schematic diagram of integration duration as a function of distance in still another embodiment of the present invention.
  • FIG. 10 shows a schematic flowchart of a target detection method in an embodiment of the present invention.
  • Fig. 11 shows a schematic block diagram of a distance measuring device in an embodiment of the present invention.
  • the distance measuring device includes a lidar.
  • the distance measuring device is described here only as an example; other suitable measuring devices can also be applied to this application.
  • the distance measuring device can be used to implement the method of constructing a point cloud frame herein.
  • the distance measuring device may be electronic equipment such as lidar and laser distance measuring equipment.
  • the distance measuring device is used to sense external environmental information, for example, distance information, orientation information, reflection intensity information, speed information, etc. of environmental targets.
  • the distance measuring device can detect the distance from the detected object to the distance measuring device by measuring the time of light propagation between them, that is, the time of flight (TOF).
  • the ranging device can also detect the distance from the detected object to the ranging device through other technologies, such as a ranging method based on phase-shift measurement or a ranging method based on frequency-shift measurement, which is not limited here.
  • the distance measuring device 100 includes a transmitting module, a scanning module, and a detection module.
  • the transmitting module is used to transmit a light pulse sequence to detect a target scene;
  • the scanning module is used to sequentially change the propagation path of the light pulse sequence emitted by the transmitting module.
  • the detection module is used to receive the light pulse sequence reflected by the object, and to determine the distance and/or orientation of the object relative to the distance measuring device according to the reflected light pulse sequence, so as to generate the point cloud points.
  • the transmitting module includes a transmitting circuit 110; the detecting module includes a receiving circuit 120, a sampling circuit 130 and an arithmetic circuit 140.
  • the transmitting circuit 110 may emit a light pulse sequence (for example, a laser pulse sequence).
  • the receiving circuit 120 can receive the light pulse sequence reflected by the detected object (that is, obtain the pulse waveform of the echo signal), perform photoelectric conversion on it to obtain an electrical signal, and, after processing, output the electrical signal to the sampling circuit 130.
  • the sampling circuit 130 may sample the electrical signal to obtain the sampling result.
  • the arithmetic circuit 140 may determine the distance between the distance measuring device 100 and the detected object, that is, the depth, based on the sampling result of the sampling circuit 130.
  • the distance measuring device 100 may further include a control circuit 150 that can control other circuits, for example, can control the working time of each circuit and/or set parameters for each circuit.
  • although the distance measuring device shown in FIG. 1 includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit for emitting one beam for detection, the embodiment of the present application is not limited to this:
  • the number of any one of the transmitting circuit, the receiving circuit, the sampling circuit, and the arithmetic circuit can also be at least two, used to emit at least two light beams in the same direction or in different directions; the at least two light beams can be emitted simultaneously or at different times.
  • the light-emitting chips in the at least two transmitting circuits are packaged in the same module.
  • each emitting circuit includes a laser emitting chip, and the dies in the laser emitting chips in the at least two emitting circuits are packaged together and housed in the same packaging space.
  • the distance measuring device 100 may also include a scanning module for changing the propagation direction of at least one light pulse sequence (for example, a laser pulse sequence) emitted by the transmitting circuit, so as to scan the field of view.
  • the scanning area of the scanning module in the field of view of the distance measuring device increases with the accumulation of time.
  • the module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, and the arithmetic circuit 140, or the module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, the arithmetic circuit 140, and the control circuit 150, may be referred to as the ranging module.
  • the ranging module can be independent of other modules, for example, the scanning module.
  • a coaxial optical path can be used in the distance measuring device, that is, the light beam emitted by the distance measuring device and the reflected light beam share at least part of the optical path in the distance measuring device.
  • the distance measuring device may also adopt an off-axis optical path, that is, the light beam emitted by the distance measuring device and the reflected light beam are transmitted along different optical paths in the distance measuring device.
  • Fig. 2 shows a schematic diagram of an embodiment in which the distance measuring device of the present invention adopts a coaxial optical path.
  • the ranging device 200 includes a ranging module 210, which includes a transmitter 203 (which may include the above-mentioned transmitting circuit), a collimating element 204, a detector 205 (which may include the above-mentioned receiving circuit, sampling circuit, and arithmetic circuit), and an optical path changing element 206.
  • the ranging module 210 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal.
  • the transmitter 203 can be used to emit a light pulse sequence.
  • the transmitter 203 may emit a sequence of laser pulses.
  • the laser beam emitted by the transmitter 203 is a narrow-bandwidth beam with a wavelength outside the visible light range.
  • the collimating element 204 is arranged on the exit light path of the emitter, and is used to collimate the light beam emitted from the emitter 203, and collimate the light beam emitted from the emitter 203 into parallel light and output to the scanning module.
  • the collimating element is also used to converge at least part of the return light reflected by the detected object.
  • the collimating element 204 may be a collimating lens or other elements capable of collimating light beams.
  • the transmitting light path and the receiving light path in the distance measuring device are combined before the collimating element 204 through the optical path changing element 206, so that the two paths can share the same collimating element, making the optical path more compact.
  • the emitter 203 and the detector 205 use their respective collimating elements, and the optical path changing element 206 is arranged on the optical path behind the collimating element.
  • the optical path changing element can use a small-area mirror to combine the transmitting light path and the receiving light path.
  • the optical path changing element may also use a reflector with a through hole, where the through hole transmits the emitted light of the emitter 203 and the reflector reflects the return light to the detector 205; this can reduce the blocking of the return light by the mount of a small mirror that occurs when a small mirror is used.
  • the optical path changing element deviates from the optical axis of the collimating element 204.
  • the optical path changing element may also be located on the optical axis of the collimating element 204.
  • the distance measuring device 200 further includes a scanning module 202.
  • the scanning module 202 is placed on the exit light path of the distance measuring module 210.
  • the scanning module 202 is used to change the transmission direction of the collimated beam 219 emitted by the collimating element 204 and project it to the external environment, and project the return light to the collimating element 204 .
  • the returned light is collected on the detector 205 via the collimating element 204.
  • the scanning module 202 may include at least one optical element for changing the propagation path of the light beam, where the optical element may change the propagation path by reflecting, refracting, or diffracting the light beam, etc.
  • the optical element includes at least one light refraction element having a non-parallel exit surface and an entrance surface.
  • the scanning module 202 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an optical phased array (Optical Phased Array), or any combination of the foregoing optical elements.
  • in some embodiments, at least part of the optical element moves; for example, it is driven by a driving module, and the moving optical element can reflect, refract, or diffract the light beam in different directions at different times.
  • the multiple optical elements of the scanning module 202 can rotate or vibrate around a common axis 209, and each rotating or vibrating optical element is used to continuously change the propagation direction of the incident light beam.
  • the multiple optical elements of the scanning module 202 may rotate at different rotation speeds or vibrate at different speeds.
  • at least part of the optical elements of the scanning module 202 may rotate at substantially the same rotation speed.
  • the multiple optical elements of the scanning module may also rotate around different axes.
  • the multiple optical elements of the scanning module may also rotate in the same direction or in different directions; or vibrate in the same direction, or vibrate in different directions, which is not limited herein.
  • the scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214.
  • the driver 216 is used to drive the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated beam 219.
  • the first optical element 214 projects the collimated beam 219 to different directions.
  • the angle between the direction of the collimated beam 219 changed by the first optical element and the rotation axis 209 changes with the rotation of the first optical element 214.
  • the first optical element 214 includes a pair of opposed non-parallel surfaces through which the collimated light beam 219 passes.
  • the first optical element 214 includes a prism whose thickness varies along at least one radial direction.
  • the first optical element 214 includes a wedge prism that refracts the collimated beam 219.
  • the scanning module 202 further includes a second optical element 215, the second optical element 215 rotates around the rotation axis 209, and the rotation speed of the second optical element 215 is different from the rotation speed of the first optical element 214.
  • the second optical element 215 is used to change the direction of the light beam projected by the first optical element 214.
  • the second optical element 215 is connected to another driver 217, and the driver 217 drives the second optical element 215 to rotate.
  • the first optical element 214 and the second optical element 215 can be driven by the same or different drivers, so that their rotation speeds and/or directions differ, thereby projecting the collimated light beam 219 in different directions in outside space.
  • the controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215, respectively.
  • the rotational speeds of the first optical element 214 and the second optical element 215 can be determined according to the expected scanning area and pattern in actual applications.
  • the drivers 216 and 217 may include motors or other drivers.
  • the second optical element 215 includes a pair of opposite non-parallel surfaces through which the light beam passes. In one embodiment, the second optical element 215 includes a prism whose thickness varies in at least one radial direction. In one embodiment, the second optical element 215 includes a wedge prism.
  • the scanning module 202 further includes a third optical element (not shown) and a driver for driving the third optical element to move.
  • the third optical element includes a pair of opposite non-parallel surfaces, and the light beam passes through the pair of surfaces.
  • the third optical element includes a prism whose thickness varies in at least one radial direction.
  • the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotation speeds and/or rotation directions.
  • the scanning module includes two or three light refraction elements arranged in sequence on the exit light path of the light pulse sequence.
  • at least two of the light refraction elements in the scanning module rotate during the scanning process to change the direction of the light pulse sequence.
  • the scanning path of the scanning module is different at least partly at different moments.
  • the rotation of the optical elements in the scanning module 202 can project light in different directions, such as the directions of the projected light 211 and light 213, thereby scanning the space around the distance measuring device 200.
  • when the light 211 projected by the scanning module 202 hits the detection object 201, part of the light is reflected by the detection object 201 back to the distance measuring device 200 in the direction opposite to the projected light 211.
  • the return light 212 reflected by the detection object 201 is incident on the collimating element 204 after passing through the scanning module 202.
  • the detector 205 and the transmitter 203 are placed on the same side of the collimating element 204, and the detector 205 is used to convert at least part of the return light passing through the collimating element 204 into electrical signals.
  • an anti-reflection coating is plated on each optical element.
  • the thickness of the antireflection coating is equal to or close to the wavelength of the light beam emitted by the emitter 203, which can increase the intensity of the transmitted light beam.
  • a filter layer is plated on the surface of an element located on the beam propagation path in the distance measuring device, or a filter is provided on the beam propagation path, for transmitting at least the wavelength band of the beam emitted by the transmitter and reflecting other bands, so as to reduce the noise that ambient light causes at the receiver.
  • the transmitter 203 may include a laser diode through which nanosecond laser pulses are emitted.
  • the laser pulse receiving time can be determined, for example, by detecting the rising edge time and/or the falling edge time of the electrical signal pulse.
  • the distance measuring device 200 can calculate the TOF using the pulse receiving time information and the pulse sending time information, so as to determine the distance between the detection object 201 and the distance measuring device 200.
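  • for reference, the time-of-flight relation involved here is the standard one (general physics, not specific wording from this patent): the measured distance is half the round-trip travel time multiplied by the propagation speed of light,

    $d = \frac{c \, (t_{\mathrm{receive}} - t_{\mathrm{send}})}{2}$

    where $c$ is the speed of light in the propagation medium.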
  • the distance and orientation detected by the distance measuring device 200 can be used for remote sensing, obstacle avoidance, surveying and mapping, modeling, navigation, and the like.
  • the scanning pattern shown in FIG. 3 may be obtained according to the specific scanning method based on the aforementioned distance measuring device, or the scanning pattern shown in FIG. 4 may be obtained based on other distance measuring devices.
  • the scanning pattern in this text may refer to the pattern traced by the emitted light within a period of time.
  • lidar point clouds portray distant scenes poorly, and the accuracy of perception algorithms such as target detection is usually insufficient at long range.
  • the unbalanced density of the lidar point cloud greatly limits the perception distance, which in turn greatly limits the application of lidar in open scenes such as autonomous driving.
  • the ranging device of a lidar can usually output point cloud frames one by one at a certain frequency (such as 10 Hz) to describe the three-dimensional scene.
  • based on these frames, intelligent algorithms can be developed to perceive targets in the target scene.
  • the frames are divided strictly by time, so the number of points in the point cloud of each frame is basically the same and there are no repeated points.
  • this is the simplest framing method, but it causes the target points to be too sparse for detection, recognition, and other algorithms, which reduces the accuracy of the algorithms.
  • an embodiment of the present invention provides a method for constructing a point cloud frame.
  • the method includes: acquiring a plurality of point cloud points sequentially collected by a distance measuring device; and, according to the spatial position information of the plurality of point cloud points, forming the multiple point cloud points into multiple point cloud frames and outputting them in sequence, wherein different integration durations are used for the point cloud points at different spatial positions in a point cloud frame.
  • the integration time and integration space can be adjusted adaptively, so that the spatial distribution of point cloud points in each point cloud frame is more uniform and reasonable, and the object information in the scanned scene can be better described.
  • the point cloud frames constructed by this method can be used as a good way to display point clouds, and can also serve as the basis for subsequent algorithms, making those algorithms more accurate and reducing processing errors caused by point cloud sparseness.
  • FIG. 5 shows a schematic flowchart of the method for constructing a point cloud frame in an embodiment of the present invention.
  • the method for constructing a point cloud frame in the embodiment of the present invention includes the following steps S501 to S502.
  • in step S501, multiple point cloud points sequentially collected by the ranging device are acquired.
  • the distance measuring device actively emits laser pulses toward the detected object, captures the laser echo signal, and calculates the distance of the detected object according to the time difference between laser emission and reception; combined with the known emission direction of the laser, the spatial position of the detected object is obtained.
  • the point cloud points sequentially collected by the distance measuring device may be acquired one at a time or several at a time, which is not specifically limited here.
  • in step S502, according to the spatial position information of the multiple point cloud points, the multiple point cloud points are formed into multiple point cloud frames and output in sequence, wherein the point cloud points at different spatial positions in the point cloud frames use different integration durations.
  • the integration time and integration space can be adjusted adaptively, so that the spatial distribution of point cloud points in each point cloud frame is more uniform and reasonable, and the object information in the scanned scene can be better described.
  • the point cloud frames constructed by this method can be used as a good way to display point clouds, and can also serve as the basis for subsequent algorithms, making those algorithms more accurate and reducing processing errors caused by point cloud sparseness.
  • the step of forming the plurality of point cloud points into multiple point cloud frames and outputting them in sequence according to the spatial position information of the plurality of point cloud points includes: determining, according to the spatial position information, the integration duration used for the point cloud points at different spatial positions.
  • the integration duration can be adjusted adaptively according to the characteristics of the point cloud points at different spatial positions, so that the spatial distribution of the point cloud points in each frame is more uniform and reasonable and the object information in the scanned scene is better described.
  • the point cloud points output in each point cloud frame are then determined according to the collection time of each point cloud point, its integration duration, and the end time of each point cloud frame; for example, the point cloud points output by each point cloud frame include those whose acquisition time is before the end time of the frame and whose time difference between the end time and the acquisition time is less than or equal to the integration duration.
  • such a setting allows point cloud points with longer integration durations to appear repeatedly in at least two point cloud frames, so that the spatial distribution of the point cloud points in each point cloud frame is more uniform and reasonable.
  • the point cloud points of at least two adjacent point cloud frames have overlapping parts; for example, the next point cloud frame includes at least part of the point cloud points of the previous point cloud frame.
  • specifically, the next point cloud frame includes those point cloud points in the previous point cloud frame whose time difference between their acquisition time and the end time of the next frame is less than or equal to their integration duration.
  • the next point cloud frame may also include at least part of the point cloud points of the previous two point cloud frames, or at least part of the point cloud points of the frames output before it.
  • this part of the point cloud points usually corresponds to a longer integration duration, namely points whose integration duration is greater than the difference between the end time of the point cloud frame and their acquisition time, and whose acquisition time is before that end time; such a setting allows point cloud points with longer integration durations to appear repeatedly in multiple frames.
  • determining the point cloud points output in each point cloud frame according to the acquisition time of each point cloud point, the integration duration, and the end time of each point cloud frame specifically includes the following steps.
  • first, in step A1, the time difference between the collection time of each point cloud point collected within a predetermined time period and the end time of the current point cloud frame is determined, where the point cloud data collected within the predetermined time period can be stored in a specific storage space such as a cache.
  • the size of the storage space can be set reasonably according to the collection rate: a larger storage space for high collection rates and a smaller one for low collection rates; the end time of the current point cloud frame can be set reasonably according to the latest point cloud data collected within the predetermined time period.
  • then, in step A2, the point cloud points whose time difference is less than or equal to the integration duration are added to the current point cloud frame, while point cloud points whose time difference is greater than the integration duration are not placed in the current point cloud frame.
  • each point cloud point in the storage space is traversed and filtered following steps A1 and A2 to decide which points should be added to the current point cloud frame; the storage space is then cleared, the newly generated point cloud points of the current point cloud frame are added back to the storage space, and the current point cloud frame is output for display or subsequent processing; a sketch of this loop is given below.
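  • A minimal sketch of the A1/A2 pass (assuming the CloudPoint type from the earlier sketch; buffer handling is simplified and the names are invented for illustration):

```python
def build_frame(buffer: list, frame_end: float) -> list:
    """Step A1: compute each cached point's time difference to the frame end;
    step A2: keep points whose difference is within their integration duration."""
    frame = [p for p in buffer if 0.0 <= frame_end - p.t <= p.integration]
    buffer.clear()        # clear the storage space ...
    buffer.extend(frame)  # ... and put the current frame's points back for reuse
    return frame          # output for display or subsequent processing
```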
  • in other embodiments, determining the point cloud points output in each point cloud frame according to the acquisition time of each point cloud point, the integration duration, and the end time of each point cloud frame includes the following steps S1 to S3:
  • in step S1, the collection time of the current point cloud point is compared with the preset end times of at least two storage spaces (for example, caches) to obtain a comparison result, where each storage space is used to store the point cloud data of a point cloud frame to be formed.
  • the at least two preset storage spaces are set based on the point cloud point collection rate of the distance measuring device.
  • the number of preset storage spaces can be set reasonably according to actual needs: it can be two storage spaces or more than two. If N storage spaces are set, each storage space corresponds to a point cloud frame to be formed, where N is greater than or equal to 2, and the end times of the N storage spaces are respectively Ti, where i denotes the i-th of the N point cloud frames.
  • the time interval between the end times of adjacent point cloud frames can be fixed, such as 0.1 s or 0.2 s, or the intervals can be at least partially different; the specific time interval can be set reasonably according to actual needs.
  • in step S2, the storage space in which the current point cloud point should be stored is determined according to the comparison result and the integration duration of the current point cloud point.
  • each time the ranging device receives a new point cloud point, it records the point cloud point's collection time T and its spatial position information, and determines the integration duration to be used for the point cloud point according to its spatial position information (described in detail below).
  • for example, suppose the end time of the first storage space is 0.1 s, the end time of the second storage space is 0.2 s, the integration duration of the newly collected current point cloud point is 0.1 s, and its acquisition time is 0.05 s.
  • then the current point cloud point should be placed in the first storage space; since the difference between the end time of the second storage space and the acquisition time is greater than the integration duration, when traversing to the second storage space the point cloud point is not processed, that is, it is not placed in the second storage space.
  • as another example, if the end time of the first storage space is 0.1 s, the end time of the second storage space is 0.2 s, the integration duration of the newly collected current point cloud point is 0.2 s, and its acquisition time is 0.05 s, then for both storage spaces the difference between the end time and the acquisition time is non-negative and less than the integration duration, so the point is placed in both storage spaces.
  • in step S3, when the acquisition time is greater than the end time of a storage space, the current point cloud frame in that storage space is output.
  • that is, the point cloud data stored in the i-th storage space is output as the current point cloud frame.
  • if the acquisition time is equal to the end time of the storage space, the current point cloud point is stored in the storage space, and the point cloud data stored in the storage space is output as the current point cloud frame at the same time.
  • the storage space of the point cloud frame that has been output can then be cleared and its end time reset, where the reset end time is the latest end time among the at least two storage spaces plus a preset time interval; the preset time interval can be set reasonably according to actual needs, for example a constant such as 0.1 s, 0.2 s, or 0.3 s. The sketch below walks through this rolling-buffer scheme.
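  • A sketch of steps S1–S3 with rolling storage spaces (class and field names are invented; the number of buffers and the interval are example values, and the equal-time output case is simplified):

```python
class FrameBuffers:
    def __init__(self, n: int = 2, interval: float = 0.1):
        self.interval = interval
        # one storage space per point cloud frame to be formed
        self.buffers = [{"end": (i + 1) * interval, "points": []} for i in range(n)]

    def add_point(self, p: CloudPoint) -> list:
        frames_out = []
        for buf in self.buffers:
            # S3: a point acquired after a buffer's end time closes that frame
            if p.t > buf["end"]:
                frames_out.append(buf["points"])
                buf["points"] = []
                # reset to the latest end time among all buffers plus the interval
                buf["end"] = max(b["end"] for b in self.buffers) + self.interval
            # S1 + S2: store the point wherever it falls inside the frame's window
            if 0.0 <= buf["end"] - p.t <= p.integration:
                buf["points"].append(p)
        return frames_out  # frames completed by the arrival of this point
```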
  • the spatial position information of the multiple point cloud points may include at least one of the following: distance information relative to the distance measuring device, position information within the field of view of the distance measuring device, and height information in the scanned target scene; it may also include other usable spatial information.
  • the point cloud frame may use different integration duration settings for its point cloud points; the setting of the integration duration is described below according to the specific type of spatial position information.
  • the scanning pattern may be unevenly distributed in the field of view; for example, it may be dense in the middle and sparse on both sides, as shown in Figures 3 and 4. Such a scanning method is likely to cause the distribution of the finally obtained point cloud to also be dense in the middle and sparse on both sides. Therefore, in order to make the point cloud distribution of the final output point cloud frame more balanced, the integration duration can be set reasonably according to the position of the point cloud point within the field of view of the ranging device.
  • the field of view of the ranging device may include a first field of view area and a second field of view area, where, for example, the scan density in the first field of view area is different from the scan density in the second field of view area.
  • the integration duration used for the point cloud points in the first field of view area is different from the integration duration used for the point cloud points in the second field of view area.
  • the integration duration corresponding to the point cloud points in each field of view area can be set reasonably according to the scanning density in the different field of view areas.
  • the angle information of each point cloud point can be used to determine which field of view area it belongs to.
  • in areas of lower scan density, a relatively long integration duration is used; in areas of higher scan density, a relatively short integration duration is used.
  • for example, if the scanning density in the first field of view area is lower than that in the second field of view area, the integration duration used for the point cloud points in the first field of view area in the point cloud frame is greater than that used for the point cloud points in the second field of view area.
  • for example, the first field of view area is located in the edge region of the field of view of the distance measuring device, and the second field of view area may be located in the middle region of the field of view.
  • in this way, the point cloud points in the field of view areas with lower scan density can be made denser, so that the spatial distribution of the point cloud points in each point cloud frame is more uniform and reasonable and better characterizes the object information in the scanned scene; the point cloud frames constructed by this method can be used as a good way to display the point cloud and as the basis for subsequent algorithms, making those algorithms more accurate and reducing processing errors caused by sparse point clouds.
  • different distance measuring devices may scan in different ways, so the distribution of scanning density may also differ; the field of view of the distance measuring device can therefore be divided into more than two field of view areas, where each field of view area can have a different scanning density.
  • the integration duration used for the point cloud points in a field of view area with lower scanning density is longer than that used in an area with higher scanning density, which makes the point cloud points in the sparsely scanned field of view areas of the output point cloud frame denser. A sketch of such an angle-based rule is given below.
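  • One way such a rule might look (a sketch; the angular threshold and duration values are invented, not from the patent, which only requires longer accumulation where the scan is sparser):

```python
def fov_integration(azimuth_deg: float,
                    edge_half_angle: float = 25.0,
                    edge_duration: float = 0.3,
                    center_duration: float = 0.1) -> float:
    """Longer integration near the FOV edges, where scan patterns like
    those of Figs. 3-4 are sparser; shorter in the dense central area."""
    return edge_duration if abs(azimuth_deg) >= edge_half_angle else center_duration
```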
  • in some embodiments, the target scene scanned by the distance measuring device includes a first spatial region and a second spatial region, and the integration duration used in the point cloud frame for the point cloud points in the first spatial region is different from that used for the point cloud points in the second spatial region.
  • for example, if the point cloud density in the first spatial region is less than the point cloud density in the second spatial region, the integration duration used in the point cloud frame for the point cloud points in the first spatial region is longer than that used for the point cloud points in the second spatial region.
  • in this way, the point cloud density in each spatial region of the output point cloud frame is kept as roughly equal as possible, so that the spatial distribution of the point cloud points in each point cloud frame is more uniform and reasonable.
  • the point cloud density in different spatial regions of the target scene can be acquired by any suitable method.
  • for example, at least one point cloud frame of the target scene can be acquired first, where the target scene includes at least two spatial regions.
  • the regions can be set reasonably according to actual needs; for example, the target scene space can be divided into multiple spatial regions by multiple rectangular grids, and the order of division (for example, left to right, or bottom to top) can be set reasonably according to actual needs.
  • the number of point cloud points in each spatial region is then counted to determine the point cloud density in each spatial region.
  • the first spatial region may also be the space above a road and the second spatial region the space on both sides of the road; for example, when a distance measuring device is applied to an autonomous driving scene (for example, at least one distance measuring device is set at the front of the vehicle), targets at different positions relative to the road usually receive different degrees of attention.
  • the point cloud points in the space above the road and those in the space on both sides of the road can therefore use different integration durations; for example, if the degree of attention to targets on the road is higher than that to targets on both sides of the road, the integration duration used in the point cloud frame for point cloud points in the space above the road can be longer than that used for point cloud points in the space on both sides of the road.
  • the integration durations can be set reasonably according to the needs of the actual scene and are not specifically limited here.
  • note that scan density and point cloud density are two different concepts: the scan density of an area can refer to the number of light pulses emitted into the area within a period of time, while the point cloud density of a spatial region can refer to the number of point cloud points in that region in one frame of point cloud.
  • since the target scenes scanned by the distance measuring device differ, so do the requirements for point clouds at different heights; for example, a user who needs to see lane lines clearly will be more interested in the ground or objects close to the ground and less interested in taller objects (such as tree canopies), so the integration duration for a point cloud point can be determined according to its height.
  • for example, the height information includes a first height interval and a second height interval, and the integration duration used in the point cloud frame for point cloud points in the first height interval is different from that used for point cloud points in the second height interval.
  • for example, if the height values in the first height interval are smaller than those in the second height interval, the integration duration for point cloud points in the first height interval is greater than that for point cloud points in the second height interval.
  • for example, the first height interval may be a height interval lower than 4 m and the second height interval a height interval higher than 4 m; alternatively, the boundary may be 3 m, 2 m, or 1 m. These height intervals can be set reasonably according to needs and are not specifically limited here; the height information may also include more than two height intervals, which are likewise not specifically limited here.
  • the height information may be the height of the point cloud point relative to the ground, and the ground coordinate information may be obtained by any suitable method; for example, the ground coordinates are obtained by segmenting the ground in at least one point cloud frame obtained by the distance measuring device scanning the target scene.
  • in this way, a longer integration duration can be used for point cloud points in areas of the target scene that interest the user, and a shorter integration duration for point cloud points in areas of less interest, thereby increasing the point cloud density in the areas of interest so that objects there are presented more clearly; this better describes the object information in the scanned scene and helps, for example, to make more accurate judgments about road conditions in an autonomous driving scene, ensuring driving safety. A sketch of a height-based rule is given below.
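  • A possible height-based rule (a sketch with invented threshold and duration values; the patent only requires that the lower height interval receive the longer integration duration):

```python
def height_integration(z_above_ground: float,
                       threshold: float = 2.0,
                       low_duration: float = 0.3,
                       high_duration: float = 0.1) -> float:
    """Longer integration for points near the ground (lane lines, obstacles),
    shorter for tall structures such as tree canopies."""
    return low_duration if z_above_ground < threshold else high_duration
```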
  • in some embodiments, the integration duration can be set according to the distance between the point cloud point and the distance measuring device.
  • for example, the distance information includes a first distance interval and a second distance interval.
  • the integration duration used in the point cloud frame for the point cloud points in the first distance interval is different from that used for the point cloud points in the second distance interval; in particular, when the point cloud density in the first distance interval differs from that in the second distance interval, the point cloud points in the two intervals use different integration durations.
  • for example, if the point cloud density in the first distance interval is higher than that in the second distance interval, the integration duration for the point cloud points in the first distance interval is less than that for the point cloud points in the second distance interval.
  • equivalently, if the distance values in the first distance interval are less than those in the second distance interval, the point cloud frame uses an integration duration for the point cloud points within the first distance interval that is less than the integration duration used for the point cloud points within the second distance interval.
  • the division of the distance intervals can be set reasonably according to the actual point cloud distribution characteristics, and may include a first distance interval, a second distance interval, or more than two distance intervals; a distance interval may even contain only a single distance value, and the specific division can be set reasonably according to actual needs.
  • in the construction of a conventional point cloud frame, substantially the same integration duration is usually adopted for point cloud points at all distances, as shown in FIG. 6; in contrast, the method here uses different integration durations for different distance intervals.
  • Figures 7 to 9 show three typical curves of integration duration versus distance; all of them use a relatively small integration duration at close range and a relatively large integration duration at long range.
  • in this way, a distance measuring device such as a lidar samples near and far ranges differently, so that the formed point cloud frame is more balanced across distances; a sketch of such a step function follows.
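  • A step function in the spirit of Figs. 7-9 (a sketch; the breakpoints and durations are invented example values, not taken from the patent):

```python
def distance_integration(r: float) -> float:
    """Short integration durations near the sensor, longer ones far away,
    so distant, sparsely sampled regions accumulate points over more time."""
    bands = [(20.0, 0.05), (50.0, 0.1), (100.0, 0.2)]  # (max range in m, duration in s)
    for max_range, duration in bands:
        if r <= max_range:
            return duration
    return 0.4  # beyond the last band: longest accumulation
```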
  • the integration durations used for the point cloud points at different spatial positions in the point cloud frame may be set by the distance measuring device based on integration durations specified by the user according to the spatial position information of the point cloud points, or may be set by the distance measuring device based on different application scenarios.
  • based on the above, the integration time and the integration space can be adjusted adaptively, so that the spatial distribution of the point cloud points in each point cloud frame is more uniform and reasonable and better describes the object information in the scanned scene.
  • the point cloud frame constructed by this method can be used as a good way to display the point cloud, and can also serve as the basis for subsequent algorithms, making those algorithms more accurate and reducing the processing errors caused by point cloud sparseness.
  • next, a target detection method according to an embodiment of the present invention will be described with reference to FIG. 10.
  • the method is based on the aforementioned method of constructing point cloud frames, so the foregoing description of that method can be combined with this embodiment; to avoid repetition, the method of constructing point cloud frames is not described again.
  • the target detection method of the embodiment of the present invention includes the following steps: first, in step S1001, the target scene is scanned by the ranging device; then, in step S1002, the multiple point cloud points sequentially collected by the ranging device are acquired; then, in step S1003, the multiple point cloud points are formed into multiple point cloud frames and output in sequence according to the spatial position information of the multiple point cloud points, where the point cloud points at different spatial positions in the frames use different integration durations; finally, in step S1004, the position information of the detection target in the target scene is acquired based on at least one output point cloud frame, where the position information may include distance information and/or angle information.
  • if the target detection method is a learning-based method, a detection model can first be trained on conventional point cloud frames or on point cloud frames constructed by the method above, and target detection is then performed on the point cloud frames constructed above; if it is a non-learning method, target detection is performed directly on the point cloud frames constructed above.
  • in some embodiments, acquiring the position information of the detection target in the target scene based on at least one output point cloud frame includes: obtaining the current point cloud frame output at the current moment; segmenting the point cloud clusters of each target in the current point cloud frame; and removing, from the point cloud cluster of each target, the point cloud points whose collection time precedes the current moment by more than a predetermined duration threshold, to obtain the cropped point cloud cluster of each target.
  • for example, the output time of the current point cloud frame is taken as the current moment.
  • the predetermined duration threshold can be set reasonably according to the actual situation and is not specifically limited here; the position information of each target at the current moment is then determined based on the cropped point cloud clusters, which locates the targets more accurately. A sketch of the cropping step follows.
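  • A minimal sketch of the cropping step (names are invented; clusters maps a target id to its list of CloudPoint objects, and max_age stands in for the predetermined duration threshold):

```python
def crop_clusters(clusters: dict, now: float, max_age: float = 0.2) -> dict:
    """Drop points collected more than max_age seconds before the current
    moment from each segmented target cluster before localizing the target."""
    return {tid: [p for p in pts if now - p.t <= max_age]
            for tid, pts in clusters.items()}
```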
  • since the point cloud frames constructed as above better characterize the object information in the scanned scene, especially when the integration durations are adjusted reasonably for different distance intervals so that distant point cloud points become denser, the target detection method of this embodiment can detect at longer ranges than with conventional point cloud frames, improving target detection capability and in particular expanding the target detection range of devices such as lidar.
  • next, a distance measuring device according to an embodiment of the present invention will be described with reference to FIG. 11, where the features of the aforementioned distance measuring device can be combined into this embodiment.
  • the ranging device 1100 includes one or more processors 1102 and one or more memories 1101, where the one or more processors 1102 can work together or individually.
  • the distance measuring device may further include at least one of an input device (not shown), an output device (not shown), and an image sensor (not shown), and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
  • the memory 1101 is used to store program instructions executable by the processor, for example the program instructions corresponding to the steps of the method for constructing a point cloud frame and/or the target detection method according to the embodiments of the present invention; it may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
  • the input device may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, and a touch screen.
  • the output device may output various information (for example, images or sounds) to the outside (for example, a user), and may include one or more of a display, a speaker, etc., for outputting the constructed point cloud frame as an image or video.
  • the communication interface (not shown) is used for communication between the ranging device and other devices, including wired or wireless communication.
  • the ranging device can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof.
  • the communication interface receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication interface further includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • the processor 1102 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, and can control other components in the ranging device to perform the desired functions.
  • the processor can execute the instructions stored in the memory to perform the method for constructing a point cloud frame and/or the target detection method of the embodiments of the present invention described herein; for these methods, refer to the descriptions in the foregoing embodiments, which are not repeated here.
  • the processor can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware finite state machines (FSM), digital signal processors (DSP), or combinations thereof.
  • the processor includes a field programmable gate array (FPGA), wherein the arithmetic circuit of the distance measuring device may be a part of the field programmable gate array (FPGA).
  • the distance measuring device includes one or more processors that work together or separately, and the memory is used to store program instructions; the processor is used to execute the program instructions stored in the memory, and when the program instructions are executed, the processor implements the corresponding steps of the method for constructing a point cloud frame and/or the target detection method according to an embodiment of the present invention. To avoid repetition, refer to the relevant descriptions of the foregoing embodiments for the details of these methods.
  • the distance measuring device of the embodiment of the present invention can be applied to a mobile platform, and the distance measuring device can be installed on the platform body of the mobile platform.
  • a mobile platform with a distance measuring device can measure the external environment, for example, measure the distance between the mobile platform and obstacles for obstacle avoidance and other purposes, and perform two-dimensional or three-dimensional surveying and mapping of the external environment.
  • the mobile platform includes at least one of an unmanned aerial vehicle, a car, a remote control car, a robot, a boat, and a camera.
  • when the ranging device is applied to an unmanned aerial vehicle, the platform body is the fuselage of the unmanned aerial vehicle.
  • when the distance measuring device is applied to a car, the platform body is the body of the car; the car can be a self-driving car or a semi-self-driving car, and there is no restriction here.
  • when the distance measuring device is applied to a remote control car, the platform body is the body of the remote control car.
  • when the distance measuring device is applied to a robot, the platform body is the robot itself.
  • when the distance measuring device is applied to a camera, the platform body is the camera itself.
  • both the distance measuring device and the mobile platform have the same advantages as the aforementioned method.
  • the embodiment of the present invention also provides a computer storage medium on which a computer program is stored.
  • one or more computer program instructions may be stored on the computer-readable storage medium, and the processor may run the program instructions stored in the memory to implement the functions (implemented by the processor) of the embodiments of the present invention described herein and/or other desired functions, for example, to perform the corresponding steps of the method for constructing a point cloud frame and/or the target detection method according to the embodiments of the present invention; the computer-readable storage medium may also store various application programs and various data, such as various data used and/or generated by the application programs.
  • the computer storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the above storage media.
  • the computer-readable storage medium may be any combination of one or more computer-readable storage media.
  • a computer-readable storage medium contains computer-readable program codes for converting point cloud data into two-dimensional images, and/or computer-readable program codes for three-dimensional reconstruction of point cloud data, and the like.
  • each part of this application can be implemented by hardware, software, firmware, or a combination thereof.
  • multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • if implemented in hardware, any one or a combination of the following techniques known in the art may be used: discrete logic circuits with logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gate circuits, programmable gate arrays (Programmable Gate Array; hereinafter referred to as PGA), field programmable gate arrays (Field Programmable Gate Array; referred to as FPGA), and so on.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination of them.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present invention.
  • the present invention can also be implemented as a device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal can be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.

Abstract

A method for constructing a point cloud frame, a target detection method, a distance measuring device, a movable platform, and a storage medium. The method for constructing a point cloud frame includes: acquiring a plurality of point cloud points sequentially collected by a distance measuring device (S501); forming the plurality of point cloud points into multiple point cloud frames according to the spatial position information of the plurality of point cloud points and outputting the frames in sequence (S502), wherein different integration durations are used for point cloud points at different spatial positions within a point cloud frame. With this method, the integration duration and integration space can be adjusted adaptively, so that the spatial distribution of the point cloud points in each frame is more uniform and reasonable and better depicts the object information in the scanned scene.

Description

Method for constructing a point cloud frame, target detection method, distance measuring device, movable platform, and storage medium
Specification
Technical Field
The present invention generally relates to the technical field of distance measuring devices, and more specifically to a method for constructing a point cloud frame, a target detection method, a distance measuring device, a movable platform, and a storage medium.
Background
A distance measuring device such as a lidar can usually output point clouds frame by frame at a certain frequency (e.g., 10 Hz) to depict a three-dimensional scene, and intelligent algorithms can be developed on this basis to perceive targets in the scene. Each frame is divided strictly by time, so the number of points in each frame is essentially the same and no point appears twice. This is the simplest and most naive framing method; however, it leaves target points too sparse for detection and recognition algorithms, which reduces their accuracy.
Therefore, in view of the above problems, the present invention proposes a method for constructing a point cloud frame, a target detection method, a distance measuring device, a movable platform, and a storage medium.
Summary of the Invention
The present invention is proposed to solve at least one of the above problems. Specifically, one aspect of the present invention provides a method for constructing a point cloud frame, the method including: acquiring a plurality of point cloud points sequentially collected by a distance measuring device; and forming the plurality of point cloud points into multiple point cloud frames according to the spatial position information of the plurality of point cloud points and outputting the frames in sequence, wherein different integration durations are used for point cloud points at different spatial positions within a point cloud frame.
In one example, forming the plurality of point cloud points into multiple point cloud frames according to their spatial position information and outputting the frames in sequence includes:
determining, according to the spatial position information, the integration duration used in the point cloud frame for point cloud points at different spatial positions;
determining the point cloud points output in each point cloud frame according to the collection time of each point cloud point, the integration duration, and the end time of each point cloud frame.
In one example, the point cloud points of at least two adjacent point cloud frames have an overlapping portion.
In one example, a later point cloud frame includes those point cloud points of the previous frame whose time difference between their collection time and the end time of the later frame is less than or equal to the integration duration.
Another aspect of the present invention provides a target detection method, including: scanning a target scene with a distance measuring device; outputting multiple point cloud frames in sequence according to the aforementioned method for constructing a point cloud frame; and acquiring position information of a detection target in the target scene based on at least one output point cloud frame.
In one example, acquiring the position information of the detection target in the target scene based on at least one output point cloud frame includes:
acquiring the current point cloud frame output at the current moment;
segmenting the point cloud cluster of each target in the current point cloud frame;
removing, from the point cloud cluster of each target, the point cloud points that are more than a predetermined duration threshold away from the current moment, to obtain a cropped point cloud cluster for each target;
determining the position information of each target at the current moment based on the cropped point cloud clusters.
Another aspect of the present invention provides a distance measuring device, including:
a memory for storing executable program instructions;
a processor for executing the program instructions stored in the memory, so that the processor performs the aforementioned method for constructing a point cloud frame, or so that the processor performs the aforementioned target detection method.
Yet another aspect of the present invention provides a movable platform, including:
a movable platform body;
at least one of the aforementioned distance measuring devices, arranged on the movable platform body.
Another aspect of the present invention provides a computer storage medium on which a computer program is stored; when executed by a processor, the computer program implements the aforementioned method for constructing a point cloud frame or the aforementioned target detection method.
According to the method for constructing a point cloud frame, the target detection method, the distance measuring device, the movable platform, and the storage medium of the embodiments of the present invention, the integration duration and integration space can be adjusted adaptively, so that the spatial distribution of the point cloud points in each frame is more uniform and reasonable and better depicts the object information in the scanned scene. Point cloud frames constructed in this way serve both as a good way to display point clouds and as a basis for subsequent algorithms, making those algorithms more accurate and reducing processing errors caused by sparse point clouds.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 shows a schematic architecture diagram of a distance measuring device in an embodiment of the present invention;
FIG. 2 shows a schematic diagram of a distance measuring device in an embodiment of the present invention;
FIG. 3 shows a schematic diagram of a scanning pattern of a distance measuring device in an embodiment of the present invention;
FIG. 4 shows a schematic diagram of a scanning pattern of a distance measuring device in another embodiment of the present invention;
FIG. 5 shows a schematic flowchart of a method for constructing a point cloud frame in an embodiment of the present invention;
FIG. 6 shows a schematic diagram of a conventional function of integration duration versus distance;
FIG. 7 shows a schematic diagram of a function of integration duration versus distance in an embodiment of the present invention;
FIG. 8 shows a schematic diagram of a function of integration duration versus distance in another embodiment of the present invention;
FIG. 9 shows a schematic diagram of a function of integration duration versus distance in yet another embodiment of the present invention;
FIG. 10 shows a schematic flowchart of a target detection method in an embodiment of the present invention;
FIG. 11 shows a schematic block diagram of a distance measuring device in an embodiment of the present invention.
Detailed Description
In order to make the objectives, technical solutions, and advantages of the present invention clearer, example embodiments according to the present invention are described in detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the example embodiments described here. All other embodiments obtained by those skilled in the art based on the embodiments described herein without creative effort shall fall within the protection scope of the present invention.
In the following description, numerous specific details are given in order to provide a more thorough understanding of the present invention. However, it will be obvious to those skilled in the art that the present invention can be implemented without one or more of these details. In other instances, some technical features well known in the art are not described in order to avoid obscuring the present invention.
It should be understood that the present invention can be implemented in different forms and should not be construed as being limited to the embodiments presented here. On the contrary, these embodiments are provided so that the disclosure will be thorough and complete and will fully convey the scope of the present invention to those skilled in the art.
The terms used here are only for describing specific embodiments and are not intended to limit the present invention. As used here, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "composed of" and/or "including", when used in this specification, identify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups. As used here, the term "and/or" includes any and all combinations of the associated listed items.
For a thorough understanding of the present invention, detailed structures are set forth in the following description in order to explain the proposed technical solutions. Optional embodiments of the present invention are described in detail below; however, beyond these detailed descriptions, the present invention may also have other implementations.
The method for constructing a point cloud frame of the present application is described in detail below with reference to the drawings. The features of the following embodiments and implementations may be combined with each other where no conflict arises.
First, the structure of a distance measuring device in an embodiment of the present invention is described in detail by way of example with reference to FIG. 1 and FIG. 2. The distance measuring device includes a lidar; this device is only an example, and other suitable distance measuring devices may also be applied to the present application. The distance measuring device may be used to perform the method for constructing a point cloud frame described herein.
The solutions provided by the embodiments of the present invention can be applied to a distance measuring device, which may be an electronic device such as a lidar or laser ranging equipment. In one embodiment, the distance measuring device is used to sense external environment information, for example, distance information, azimuth information, reflection intensity information, and velocity information of environmental targets. In one implementation, the distance measuring device may detect the distance from a probed object to the device by measuring the time of light propagation between the device and the object, that is, the time of flight (TOF). Alternatively, the distance measuring device may detect that distance using other techniques, such as ranging based on phase shift measurement or ranging based on frequency shift measurement; no limitation is imposed here.
For ease of understanding, the ranging workflow is described below by example with reference to the distance measuring device 100 shown in FIG. 1.
As an example, the distance measuring device 100 includes a transmitting module, a scanning module, and a detecting module. The transmitting module is used to emit a sequence of light pulses to probe the target scene; the scanning module is used to sequentially change the propagation path of the emitted light pulse sequence to different outgoing directions, forming a scanning field of view; the detecting module is used to receive the light pulse sequence reflected back by an object and to determine, from the reflected pulse sequence, the distance and/or azimuth of the object relative to the distance measuring device, so as to generate the point cloud points.
Specifically, as shown in FIG. 1, the transmitting module includes a transmitting circuit 110; the detecting module includes a receiving circuit 120, a sampling circuit 130, and an arithmetic circuit 140.
The transmitting circuit 110 can emit a sequence of light pulses (for example, a sequence of laser pulses). The receiving circuit 120 can receive the light pulse sequence reflected by the probed object, thereby obtaining the pulse waveform of the echo signal, photoelectrically convert that light pulse sequence to obtain an electrical signal, process the electrical signal, and output it to the sampling circuit 130. The sampling circuit 130 can sample the electrical signal to obtain a sampling result. The arithmetic circuit 140 can determine, based on the sampling result of the sampling circuit 130, the distance between the distance measuring device 100 and the probed object, that is, the depth.
Optionally, the distance measuring device 100 may further include a control circuit 150, which can control the other circuits, for example, control the working time of each circuit and/or set parameters for each circuit.
It should be understood that although the distance measuring device shown in FIG. 1 includes one transmitting circuit, one receiving circuit, one sampling circuit, and one arithmetic circuit for emitting one light beam for detection, the embodiments of the present application are not limited thereto. The number of any of these circuits may also be at least two, for emitting at least two light beams in the same direction or in different directions, and the at least two beams may be emitted simultaneously or at different times. In one example, the light-emitting chips of the at least two transmitting circuits are packaged in the same module. For example, each transmitting circuit includes a laser emitting chip, and the dies of the laser emitting chips in the at least two transmitting circuits are packaged together and housed in the same packaging space.
In some implementations, in addition to the circuits shown in FIG. 1, the distance measuring device 100 may further include a scanning module for changing the propagation direction of at least one light pulse sequence (e.g., a laser pulse sequence) emitted by the transmitting circuit so as to scan the field of view. Exemplarily, the area scanned by the scanning module within the field of view of the distance measuring device grows as time accumulates.
The module comprising the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, and the arithmetic circuit 140, or such a module further comprising the control circuit 150, may be referred to as a ranging module, which may be independent of other modules such as the scanning module.
A coaxial optical path may be used in the distance measuring device, that is, the beam emitted by the device and the reflected beam share at least part of the optical path inside the device. For example, after at least one laser pulse sequence emitted by the transmitting circuit has its propagation direction changed by the scanning module and exits, the laser pulse sequence reflected by the probed object passes through the scanning module and then enters the receiving circuit. Alternatively, the device may use an off-axis optical path, that is, the emitted beam and the reflected beam travel along different optical paths inside the device. FIG. 2 shows a schematic diagram of an embodiment in which the distance measuring device of the present invention uses a coaxial optical path.
The distance measuring device 200 includes a ranging module 210, which includes a transmitter 203 (which may include the above transmitting circuit), a collimating element 204, a detector 205 (which may include the above receiving circuit, sampling circuit, and arithmetic circuit), and an optical path changing element 206. The ranging module 210 is used to emit a light beam, receive the returned light, and convert the returned light into an electrical signal. The transmitter 203 can be used to emit a sequence of light pulses; in one embodiment, the transmitter 203 can emit a sequence of laser pulses. Optionally, the laser beam emitted by the transmitter 203 is a narrow-bandwidth beam with a wavelength outside the visible range. The collimating element 204 is arranged on the outgoing optical path of the transmitter and is used to collimate the beam emitted from the transmitter 203 into parallel light that exits toward the scanning module; it is also used to converge at least part of the return light reflected by the probed object. The collimating element 204 may be a collimating lens or another element capable of collimating a light beam.
In the embodiment shown in FIG. 2, the transmitting optical path and the receiving optical path inside the device are merged before the collimating element 204 by the optical path changing element 206, so that both paths can share the same collimating element and the optical path is more compact. In some other implementations, the transmitter 203 and the detector 205 may each use their own collimating element, with the optical path changing element 206 arranged on the optical path after the collimating element.
In the embodiment shown in FIG. 2, since the beam aperture of the beam emitted by the transmitter 203 is small while the beam aperture of the return light received by the device is large, the optical path changing element can use a small-area mirror to merge the transmitting and receiving paths. In some other implementations, the optical path changing element may also use a mirror with a through hole, the hole transmitting the outgoing light of the transmitter 203 and the mirror reflecting the return light to the detector 205; this reduces the blocking of the return light by the mount of a small mirror in the small-mirror case.
In the embodiment shown in FIG. 2, the optical path changing element is offset from the optical axis of the collimating element 204. In some other implementations, the optical path changing element may also be located on the optical axis of the collimating element 204.
The distance measuring device 200 further includes a scanning module 202, placed on the outgoing optical path of the ranging module 210. The scanning module 202 is used to change the transmission direction of the collimated beam 219 emitted by the collimating element 204 and project it to the external environment, and to project the return light to the collimating element 204, which converges the return light onto the detector 205.
In one embodiment, the scanning module 202 may include at least one optical element for changing the propagation path of the beam, where the optical element may change the beam's propagation path by reflecting, refracting, or diffracting the beam, among other means; for example, the optical element includes at least one light-refracting element having non-parallel exit and entrance surfaces. For example, the scanning module 202 includes a lens, a mirror, a prism, a galvanometer, a grating, a liquid crystal, an optical phased array, or any combination of the above optical elements. In one example, at least part of the optical elements is moving, for example driven by a driving module, and the moving optical element can reflect, refract, or diffract the beam to different directions at different times. In some embodiments, multiple optical elements of the scanning module 202 can rotate or vibrate around a common axis 209, each rotating or vibrating optical element continuously changing the propagation direction of the incident beam. In one embodiment, the multiple optical elements of the scanning module 202 may rotate at different rotation speeds or vibrate at different speeds. In another embodiment, at least part of the optical elements of the scanning module 202 may rotate at substantially the same rotation speed. In some embodiments, the multiple optical elements of the scanning module may also rotate around different axes. In some embodiments, the multiple optical elements of the scanning module may also rotate in the same direction or in different directions, or vibrate in the same direction or in different directions; no limitation is imposed here.
In one embodiment, the scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214; the driver 216 is used to drive the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated beam 219. The first optical element 214 projects the collimated beam 219 to different directions. In one embodiment, the angle between the direction of the collimated beam 219 after being changed by the first optical element and the rotation axis 209 varies with the rotation of the first optical element 214. In one embodiment, the first optical element 214 includes a pair of opposed non-parallel surfaces through which the collimated beam 219 passes. In one embodiment, the first optical element 214 includes a prism whose thickness varies along at least one radial direction. In one embodiment, the first optical element 214 includes a wedge prism that refracts the collimated beam 219.
In one embodiment, the scanning module 202 further includes a second optical element 215, which rotates around the rotation axis 209 at a rotation speed different from that of the first optical element 214; the second optical element 215 is used to change the direction of the beam projected by the first optical element 214. In one embodiment, the second optical element 215 is connected to another driver 217, which drives the second optical element 215 to rotate. The first optical element 214 and the second optical element 215 can be driven by the same or different drivers, so that their rotation speeds and/or directions differ, projecting the collimated beam 219 to different directions in the external space so that a larger spatial range can be scanned. In one embodiment, a controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215, respectively. The rotation speeds of the first optical element 214 and the second optical element 215 can be determined according to the area and pattern expected to be scanned in the actual application. The drivers 216 and 217 may include motors or other drivers.
In one embodiment, the second optical element 215 includes a pair of opposed non-parallel surfaces through which the beam passes. In one embodiment, the second optical element 215 includes a prism whose thickness varies along at least one radial direction. In one embodiment, the second optical element 215 includes a wedge prism.
In one embodiment, the scanning module 202 further includes a third optical element (not shown) and a driver for driving the third optical element to move. Optionally, the third optical element includes a pair of opposed non-parallel surfaces through which the beam passes. In one embodiment, the third optical element includes a prism whose thickness varies along at least one radial direction. In one embodiment, the third optical element includes a wedge prism. At least two of the first, second, and third optical elements rotate at different rotation speeds and/or in different directions.
In one embodiment, the scanning module includes two or three of the light-refracting elements arranged in sequence on the outgoing optical path of the light pulse sequence. Optionally, at least two of the light-refracting elements in the scanning module rotate during scanning to change the direction of the light pulse sequence.
The scanning paths of the scanning module differ at at least some different times. The rotation of the optical elements in the scanning module 202 can project light to different directions, for example the directions 211 and 213 of the projected light, thereby scanning the space around the distance measuring device 200. When the light 211 projected by the scanning module 202 hits the probed object 201, part of the light is reflected by the probed object 201 back to the distance measuring device 200 in the direction opposite to the projected light 211. The return light 212 reflected by the probed object 201 passes through the scanning module 202 and then enters the collimating element 204.
The detector 205 and the transmitter 203 are placed on the same side of the collimating element 204, and the detector 205 is used to convert at least part of the return light passing through the collimating element 204 into an electrical signal.
In one embodiment, each optical element is coated with an anti-reflection coating. Optionally, the thickness of the anti-reflection coating is equal or close to the wavelength of the beam emitted by the transmitter 203, which can increase the intensity of the transmitted beam.
In one embodiment, a filter layer is coated on the surface of an element located on the beam propagation path in the distance measuring device, or a filter is arranged on the beam propagation path, to transmit at least the waveband of the beam emitted by the transmitter and reflect other wavebands, so as to reduce the noise caused by ambient light to the receiver.
In some embodiments, the transmitter 203 may include a laser diode that emits nanosecond-level laser pulses. Further, the laser pulse reception time can be determined, for example, from the rising edge time and/or falling edge time of the detected electrical signal pulse. In this way, the distance measuring device 200 can calculate the TOF using the pulse reception time information and the pulse emission time information, thereby determining the distance from the probed object 201 to the distance measuring device 200. The distance and azimuth detected by the distance measuring device 200 can be used for remote sensing, obstacle avoidance, surveying and mapping, modeling, navigation, and the like.
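As a non-limiting illustration of the TOF relation just described, the distance computation can be sketched as follows (the function name and inputs are assumptions made for illustration and are not part of any device firmware):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(emit_time_s: float, receive_time_s: float) -> float:
    """One-way distance from a round-trip time of flight: the pulse
    travels to the object and back, so distance = c * tof / 2."""
    tof = receive_time_s - emit_time_s
    return SPEED_OF_LIGHT * tof / 2.0

# e.g., a pulse received 200 ns after emission corresponds to roughly 30 m
print(tof_distance(0.0, 200e-9))  # ~29.98
```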
Based on the foregoing distance measuring device scanning in a specific manner, the scanning pattern shown in FIG. 3 can be obtained, or based on other distance measuring devices the scanning pattern shown in FIG. 4 may be obtained. Herein, a scanning pattern may refer to the pattern formed by the accumulated scanning trajectory of the beam within the scanning field of view over a period of time. Under the scanning of the scanning module, after the beam forms one complete scanning pattern within one scanning period, it begins to form the next complete, identical or different, scanning pattern in the next scanning period. It can be seen from the scanning pattern that the scanning density differs markedly between regions; for example, the scanning density in the middle region is greater than that in the edge region. Moreover, the sampling of a distance measuring device such as a lidar is usually sparse at far range and dense at near range, so the lidar point cloud depicts distant scenes poorly and the accuracy of perception algorithms such as target detection is usually insufficient at far range. The uneven density of the lidar point cloud greatly limits its perception distance, which in turn greatly limits the application of lidar in open scenes such as autonomous driving.
Further, after scanning a target scene, a distance measuring device such as a lidar can usually output point clouds frame by frame at a certain frequency (e.g., 10 Hz) to depict the three-dimensional scene, and intelligent algorithms can be developed on this basis to perceive targets in the target scene. Each frame is divided strictly by time, so the number of points in each frame is essentially the same and no point is repeated. This is the simplest and most naive framing method; however, it leaves target points too sparse for detection and recognition algorithms, which reduces their accuracy.
In view of the above problems, an embodiment of the present invention provides a method for constructing a point cloud frame, including: acquiring a plurality of point cloud points sequentially collected by a distance measuring device; and forming the plurality of point cloud points into multiple point cloud frames according to the spatial position information of the plurality of point cloud points and outputting the frames in sequence, wherein different integration durations are used for point cloud points at different spatial positions within a point cloud frame. This method adaptively adjusts the integration duration and integration space, making the spatial distribution of the points in each frame more uniform and reasonable and better depicting the object information in the scanned scene. Point cloud frames constructed in this way serve both as a good way to display point clouds and as a basis for subsequent algorithms, making those algorithms more accurate and reducing processing errors caused by sparse point clouds.
The method for constructing a point cloud frame of the present invention is described below with reference to FIG. 5, which shows a schematic flowchart of the method for constructing a point cloud frame in an embodiment of the present invention.
As an example, the method for constructing a point cloud frame of the embodiment of the present invention includes the following steps S501 to S502.
First, in step S501, a plurality of point cloud points sequentially collected by a distance measuring device are acquired.
Exemplarily, the distance measuring device actively emits laser pulses toward the probed object, captures the laser echo signal, and calculates the distance of the measured object from the time difference between laser emission and reception; based on the known emission direction of the laser, the angle information of the measured object is obtained. Through high-frequency emission and reception, spatial position information such as the distance and angle of a massive number of detection points can be acquired; this is called a point cloud, and the detection points may be called point cloud points.
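Exemplarily, a measured range together with the known emission angles can be turned into a Cartesian point cloud point as sketched below (the data layout and the axis convention are assumptions for illustration; the axes actually used by a given device may differ):

```python
import math
from typing import NamedTuple

class PointCloudPoint(NamedTuple):
    x: float  # metres
    y: float  # metres
    z: float  # metres
    t: float  # collection time in seconds

def spherical_to_point(r: float, azimuth: float, elevation: float,
                       t: float) -> PointCloudPoint:
    """Convert a measured range r and the known emission angles
    (radians) into a Cartesian point cloud point."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return PointCloudPoint(x, y, z, t)
```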
It is worth mentioning that the point cloud points sequentially collected by the distance measuring device may be a single point cloud point at a time or multiple point cloud points; no specific limitation is imposed here.
Next, continuing with FIG. 5, in step S502, the plurality of point cloud points are formed into multiple point cloud frames according to their spatial position information and output in sequence, wherein different integration durations are used for point cloud points at different spatial positions within a point cloud frame. As noted above, this adaptive adjustment of the integration duration and integration space makes the spatial distribution of the points in each frame more uniform and reasonable, better depicts the object information in the scanned scene, and yields frames that serve both as a good way to display point clouds and as a more accurate basis for subsequent algorithms.
In one example, forming the plurality of point cloud points into multiple point cloud frames according to their spatial position information and outputting the frames in sequence includes: determining, according to the spatial position information, the integration duration used in the point cloud frame for points at different spatial positions, so that the integration duration and integration space can be adapted to the characteristics of points at different spatial positions and the spatial distribution of points in each frame becomes more uniform and reasonable, better depicting the object information in the scanned scene; and determining the points output in each frame according to each point's collection time, the integration duration, and each frame's end time. For example, the points output in each frame include those whose collection time is before the frame's end time and whose time difference between the end time and the collection time is less than or equal to the integration duration. With this arrangement, a point with a long integration duration may appear repeatedly in at least two frames, making the spatial distribution of points in each frame more uniform and reasonable.
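The membership rule just described can be written as a single predicate, sketched below as a non-limiting illustration (the function and parameter names are assumptions for illustration):

```python
def in_frame(collect_time: float, frame_end_time: float,
             integration_duration: float) -> bool:
    """A point belongs to a frame if it was collected no later than the
    frame's end time and within that point's integration duration of it."""
    dt = frame_end_time - collect_time
    return 0.0 <= dt <= integration_duration
```

For example, with a frame ending at 0.2 s, a point collected at 0.05 s with a 0.2 s integration duration is kept (dt = 0.15 s), while the same point with a 0.1 s integration duration is not.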
Optionally, the point cloud points of at least two adjacent frames have an overlapping portion. For example, a later frame includes at least part of the points of the previous frame, in particular those points whose time difference between their collection time and the later frame's end time is less than or equal to the integration duration. Alternatively, a later frame may include at least part of the points of the previous two frames, or at least part of the points of any frame output before it; such points usually correspond to a long integration duration, namely one that is greater than the difference between the frame's end time and the point's collection time, the collection time being before the end time. With this arrangement, points with long integration durations may appear repeatedly in at least two frames, making the spatial distribution of points in each frame more uniform and reasonable.
In a specific example, determining the points output in each frame according to each point's collection time, the integration duration, and each frame's end time specifically includes the following. First, in step A1, the time difference between the collection time of each point collected within a predetermined period and the end time of the current frame is determined, where the point cloud data collected within the predetermined period can be stored in a specific storage space, such as a buffer; the size of this storage space can be set reasonably according to the collection rate, larger for a high collection rate and smaller for a low one, and the end time of the current frame can be set reasonably according to the most recently collected point cloud data within the predetermined period. In step A2, points whose time difference is less than or equal to the integration duration are added to the current frame, while points whose time difference is greater than the integration duration are not placed in the current frame. When the storage space is not empty, each point in it is traversed and filtered into the current frame according to steps A1 and A2; the storage space is then cleared, the points of the newly generated current frame are added to it, and the current frame is output for display or subsequent processing.
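Steps A1/A2 can be sketched as follows (a minimal sketch only; `buffer`, `frame_end_time`, and `integration_duration_of` are assumed names, with `integration_duration_of(point)` returning the duration chosen from the point's spatial position):

```python
def build_current_frame(buffer, frame_end_time, integration_duration_of):
    """Filter the buffered points into the current frame (steps A1/A2):
    keep each point whose collection time precedes the frame end time by
    no more than its integration duration; aged-out points are dropped.

    `buffer` is a list of (point, collect_time) pairs collected within
    the predetermined period.
    """
    frame = []
    for point, t in buffer:
        dt = frame_end_time - t                              # step A1
        if 0.0 <= dt <= integration_duration_of(point):      # step A2
            frame.append((point, t))
    buffer.clear()        # empty the storage space...
    buffer.extend(frame)  # ...and refill it with the new frame's points
    return frame
```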
In another specific example, determining the points output in each frame according to each point's collection time, the integration duration, and each frame's end time includes the following steps S1 to S3:
First, in step S1, the collection time of the current point is compared with the end times of at least two preset storage spaces (e.g., buffers) to obtain comparison results, where each storage space is used to store the point cloud data of a frame to be formed.
The at least two preset storage spaces are set based on the point collection rate of the distance measuring device. The number of preset storage spaces can be set reasonably according to actual needs; there may be two storage spaces or more than two. If N storage spaces are set, each corresponding to a frame to be formed, where N is greater than or equal to 2, the end times of the N storage spaces are Ti, where i denotes the i-th of the N point cloud frames. The time interval between the end times of adjacent frames may be a fixed interval, e.g., 0.1 s or 0.2 s, or at least some adjacent frames may have different intervals between their end times, set reasonably according to actual needs.
Then, in step S2, the storage space into which the current point should be stored is determined according to the comparison results and the integration duration of the current point.
Each time the distance measuring device receives a new point, it calculates the point's collection time T and its spatial position information, and determines from that spatial position information the integration duration the frame should use for the point (described in detail below).
The end time Ti of each frame is traversed, the comparison result being the difference between the end time Ti and the current point's collection time T. When the comparison result is non-negative and less than or equal to the integration duration used by the frame for the current point, the point is stored into the corresponding storage space; a point with a long integration duration may thus appear repeatedly in multiple frames, making the point distribution in each frame more uniform. When the comparison result is non-negative but greater than the integration duration used for the current point, the point is not processed for that storage space.
For example, suppose two storage spaces are set, the first with an end time of 0.1 s and the second with an end time of 0.2 s, and the newly collected current point has an integration duration of 0.1 s and a collection time of 0.05 s. The current point should then be placed in the first storage space; since the difference between the second storage space's end time and the collection time is greater than the integration duration, the point is not processed when the second storage space is traversed, that is, it is not placed in the second storage space.
As another example, with the same two storage spaces, suppose the newly collected current point has an integration duration of 0.2 s and a collection time of 0.05 s. The difference between the first storage space's end time and the collection time is non-negative and less than the integration duration, and so is the difference between the second storage space's end time and the collection time. The current point should therefore be stored into both the first and the second storage space, that is, it will be output in both frames.
Finally, in step S3, when the collection time is later than a storage space's end time, the current frame in that storage space is output.
When the collection time is later than the storage space's end time, that is, when the difference between the end time Ti and the current point's collection time T is negative, the point cloud data stored in the i-th storage space is output as the current frame. Alternatively, when the collection time equals the storage space's end time, the current point is stored into that storage space and, at the same time, the point cloud data of that storage space is output as the current frame.
After the current frame is output, the storage space from which it was output can be cleared and its end time reset, where the reset end time is the latest end time among the at least two storage spaces plus a preset time interval, which can be set reasonably according to actual needs, for example, a constant such as 0.1 s, 0.2 s, or 0.3 s.
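A minimal sketch of steps S1 to S3 with rolling buffers follows; the class and method names are assumptions, and the 0.1 s reset interval mirrors the example above but is otherwise arbitrary:

```python
class FrameBuffers:
    """Assign each incoming point to the pending frames it can integrate
    into, and emit a frame when its end time has passed (steps S1-S3)."""

    def __init__(self, end_times, frame_interval=0.1):
        self.buffers = [[] for _ in end_times]  # one storage space per pending frame
        self.end_times = list(end_times)        # end time Ti of each storage space
        self.frame_interval = frame_interval    # preset interval used on reset

    def add_point(self, point, collect_time, integration_duration):
        frames_out = []
        for i in range(len(self.buffers)):
            # S3: output and reset any frame whose end time has passed
            # (a frame that received no points in the meantime comes out empty)
            while self.end_times[i] < collect_time:
                frames_out.append(self.buffers[i])
                self.buffers[i] = []
                self.end_times[i] = max(self.end_times) + self.frame_interval
            # S1/S2: store the point if the frame ends within its integration duration
            dt = self.end_times[i] - collect_time
            if 0 <= dt <= integration_duration:
                self.buffers[i].append(point)
        return frames_out

# Usage mirroring the worked example: two storage spaces ending at 0.1 s and 0.2 s.
fb = FrameBuffers(end_times=[0.1, 0.2])
fb.add_point("p1", collect_time=0.05, integration_duration=0.1)  # enters frame 1 only
fb.add_point("p2", collect_time=0.05, integration_duration=0.2)  # enters both frames
```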
The spatial position information of the plurality of point cloud points may include at least one of the following: distance information relative to the distance measuring device, position information within the field of view of the distance measuring device, height information, and spatial position information in the target scene scanned by the distance measuring device; it may also include other useful spatial information. For different types of spatial position information, the integration duration used in the frame for the points may be set differently; the setting of the integration duration is described below by type of spatial position information.
In one embodiment, when a distance measuring device such as a lidar scans the target scene in a specific scanning mode (in particular a non-repetitive scanning mode), its scanning pattern may be unevenly distributed over the field of view, for example dense in the middle and sparse at the edges as shown in FIG. 3 and FIG. 4, which is likely to make the distribution of the final point cloud equally dense in the middle and sparse at the edges. Therefore, to balance the point distribution of the output frames, the integration duration can be set reasonably according to the point's position within the device's field of view. Exemplarily, the field of view of the distance measuring device may include a first field-of-view region and a second field-of-view region, for example with different scanning densities, where the integration duration used in the frame for points in the first region differs from that used for points in the second region.
Specifically, the integration duration corresponding to points in each field-of-view region can be set reasonably according to the region's scanning density. For example, the region in which a point lies can be determined from its angle information, and the integration duration for the point then determined from that region's scanning density; in particular, a relatively long integration duration is used for points in regions with low scanning density and a relatively short one for points in regions with high scanning density. For example, if the scanning density in the first field-of-view region is lower than that in the second field-of-view region, the integration duration used in the frame for points in the first region is greater than that used for points in the second region; optionally, the first region lies at the edge of the device's field of view and the second region in the middle. With this arrangement, points in regions of low scanning density become denser, making the spatial distribution of points in each frame more uniform and reasonable and better depicting the object information in the scanned scene; frames constructed in this way serve both as a good way to display point clouds and as a basis for subsequent algorithms, improving their accuracy and reducing processing errors caused by sparsity.
It is worth mentioning that different distance measuring devices may scan differently and thus have different scanning density distributions. The field of view can therefore be divided, according to the device's specific scanning mode, into more than two field-of-view regions, each possibly with a different scanning density, and the integration duration used for points in regions with lower scanning density is greater than that used for points in regions with higher scanning density, so that the points in the low-scanning-density regions of the output frame become denser.
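A simple way to realize the region-dependent rule above is a duration lookup on the point's angle, sketched below (all thresholds and durations are illustrative assumptions, not values from the embodiments):

```python
def integration_duration_by_fov(azimuth_deg: float,
                                half_fov_deg: float = 35.0,
                                edge_fraction: float = 0.6,
                                short_s: float = 0.1,
                                long_s: float = 0.3) -> float:
    """Longer integration for the sparsely scanned edge of the field of
    view, shorter for the densely scanned middle region."""
    edge_threshold = half_fov_deg * edge_fraction
    return long_s if abs(azimuth_deg) > edge_threshold else short_s
```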
In another embodiment, the target scene scanned by the distance measuring device includes a first spatial region and a second spatial region, where the integration duration used in the frame for points in the first spatial region differs from that used for points in the second spatial region. Optionally, the point cloud density in the first spatial region is lower than that in the second spatial region, and the integration duration used in the frame for points in the first region is greater than that used for points in the second region. By using a longer integration duration for points in spatial regions of lower point cloud density, the point cloud densities across the spatial regions of the output frame are made as nearly equal as possible, so that the spatial distribution of points in each frame is more uniform and reasonable.
The point cloud density in different spatial regions of the target scene can be obtained by any suitable method. For example, at least one point cloud frame of the target scene can first be acquired, the target scene including at least two spatial regions that can be set reasonably according to actual needs; for example, the target scene space can be divided into multiple spatial regions using rectangular cells, in an order set reasonably as needed, such as left to right or bottom to top. Then the point cloud density of each spatial region, that is, the number of point cloud points in each spatial region, is obtained, thereby determining the point cloud density of each region.
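The grid-based density estimate described above can be sketched in a few lines (the 2 m cell size and the point attribute names are illustrative assumptions):

```python
from collections import Counter

def region_densities(points, cell_size: float = 2.0) -> Counter:
    """Count points per rectangular grid cell of the scene, one simple
    way to estimate per-region point cloud density from one frame.
    Each point is assumed to have .x and .y attributes in metres."""
    counts = Counter()
    for p in points:
        cell = (int(p.x // cell_size), int(p.y // cell_size))
        counts[cell] += 1
    return counts
```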
Optionally, the first spatial region may also be the space over a road and the second spatial region the space alongside the road. For example, when the distance measuring device is applied to an autonomous driving scene (e.g., at least one distance measuring device mounted at the front of a vehicle), targets at different positions of the road usually receive different levels of attention, so points in the space over the road and points in the space alongside it can use different integration durations. For example, since targets on the road receive more attention than targets in the space alongside it, the frame may use a longer integration duration for points in the space over the road than for points in the space alongside it. This can be set reasonably according to the needs of the actual scene; no specific limitation is imposed here.
It is worth mentioning that scanning density and point cloud density are two different concepts herein: the scanning density of a region may refer to the number of light pulses emitted into the region over a period of time, whereas the point cloud density of a spatial region may refer to the number of point cloud points within that spatial region in one point cloud frame. Through high-frequency emission and reception, spatial position information such as the distance and angle of a massive number of detection points can be acquired; this is called a point cloud, and these detection points may be called point cloud points.
In yet another embodiment, since the target scenes scanned by the distance measuring device differ, the requirements for point clouds at different heights also differ. For example, when the device is applied to an autonomous driving scene, the user wants to see lane lines more clearly and is therefore more interested in the ground and objects near it, and less interested in taller objects such as tree crowns; the integration duration used for a point can thus be determined from the point's height. Exemplarily, the height information includes a first height interval and a second height interval, and the integration duration used in the frame for points in the first interval differs from that for points in the second interval; optionally, the height values in the first interval are lower than those in the second interval, and the integration duration used in the frame for points in the first interval is greater than that for points in the second interval. For example, the first height interval may be the heights below 4 m and the second the heights above 4 m; or below and above 3 m, 2 m, or 1 m, respectively. These height intervals can be set reasonably as needed and are not specifically limited here. It is worth mentioning that the height information may also include more than two height intervals; no specific limitation is imposed here.
Herein, the height information may include the height of the point relative to the ground, and the coordinates of the ground can be obtained by any suitable method; for example, the ground coordinates are obtained by segmenting the ground out of at least one point cloud frame obtained by the distance measuring device scanning the target scene.
By adaptively adjusting, as above, the integration duration used for points in different height intervals according to the points' heights, a longer integration duration can be used for points in the regions of the target scene the user cares more about and a shorter one for points in less interesting regions, increasing the point cloud density in the regions of interest so that the objects in those regions are presented more clearly. This better depicts the object information in the scanned scene and helps make more accurate judgments about, for example, road conditions in an autonomous driving scene, thereby ensuring driving safety.
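A height-based integration duration of the kind just described can be sketched with a single boundary (the 2 m boundary and both durations are illustrative assumptions only; any of the intervals named above could be substituted):

```python
def integration_duration_by_height(height_m: float,
                                   boundary_m: float = 2.0,
                                   low_s: float = 0.3,
                                   high_s: float = 0.1) -> float:
    """Longer integration below the boundary height (ground, lane lines,
    nearby obstacles), shorter above it (e.g. tree crowns)."""
    return low_s if height_m < boundary_m else high_s
```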
In still another embodiment, since the same object yields more point cloud points at near range than at far range, when an object to be recognized is far from the distance measuring device, few points in a frame describe that object, which affects detection accuracy; the integration duration can therefore be set according to the distance information between the point and the device. For example, the distance information includes a first distance interval and a second distance interval, where the integration duration used in the frame for points in the first interval differs from that for points in the second interval; in particular, the point cloud density in the first interval differs from that in the second interval, and the integration durations used for the two intervals differ accordingly. In a specific example, the point cloud density in the first distance interval is higher than that in the second; the integration duration used in the frame for points in the first interval is shorter than that for points in the second interval. By using a longer integration duration for points in the second distance interval, the point cloud density in that interval is increased, so that the resulting frame carries richer and more complete information.
In one example, the distance values in the first distance interval are smaller than those in the second distance interval, and the integration duration used for points in the first interval is shorter than that used for points in the second interval. Using different integration durations for points at different distances from the device gives the resulting frame richer and more complete information, presenting distant objects more clearly in particular. This solves the problem that the far field of current point cloud frames is too sparse, leading to poor depiction and inaccurate detection of distant objects, and enables lidar (especially the new non-repetitive-scanning lidar) to meet, as far as possible, the far-range perception requirements of applications such as autonomous driving.
It is worth mentioning that the division of distance intervals can be set reasonably according to the actual point distribution; there may be a first and a second distance interval, or more than two distance intervals, and an interval may span a range of distances or contain only a single distance, set reasonably according to actual needs.
In conventional point cloud frame construction, roughly the same integration duration is used for points at all distances, for example 0.1 s throughout as shown in FIG. 6. In the embodiments of the present invention, by contrast, different integration durations are used for different distance intervals. FIGS. 7 to 9 show three typical functions of integration duration versus distance, all with a short integration duration at near range and a long one at far range. Compared with the constant integration duration of FIG. 6, these functions better match the near-dense, far-sparse sampling of a distance measuring device such as a lidar, making the resulting frames more balanced across distances.
It is worth mentioning that the functions of FIGS. 7 to 9 are only examples and do not constitute limitations; the function of integration duration versus distance is not limited to a linear variation and may also grow along a curve, and any other suitable function is also applicable to this embodiment.
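One possible increasing function in the spirit of FIGS. 7 to 9 is sketched below: constant and short up to a near range, rising linearly, then saturating at far range. All breakpoints and durations are illustrative assumptions, not the curves of the figures themselves:

```python
def integration_duration_by_distance(distance_m: float,
                                     near_m: float = 20.0,
                                     far_m: float = 100.0,
                                     short_s: float = 0.1,
                                     long_s: float = 0.5) -> float:
    """Piecewise-linear integration duration versus distance:
    short at near range, ramping up, saturating at far range."""
    if distance_m <= near_m:
        return short_s
    if distance_m >= far_m:
        return long_s
    frac = (distance_m - near_m) / (far_m - near_m)
    return short_s + frac * (long_s - short_s)
```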
Herein, the integration durations used in the frame for points at different spatial positions are set by the distance measuring device based on integration durations determined by the user from the points' spatial position information, or are set by the distance measuring device based on different application scenarios.
In summary, the method for constructing a point cloud frame according to the embodiments of the present invention can adaptively adjust the integration duration and integration space, making the spatial distribution of points in each frame more uniform and reasonable and better depicting the object information in the scanned scene; frames constructed with this method serve both as a good way to display point clouds and as a basis for subsequent algorithms, making those algorithms more accurate and reducing processing errors caused by sparse point clouds.
A target detection method according to an embodiment of the present invention is described below with reference to FIG. 10. This method is based on the aforementioned method for constructing a point cloud frame, so the foregoing method for constructing a point cloud frame can be incorporated into this embodiment; to avoid repetition, it is not described again here.
As an example, as shown in FIG. 10, the target detection method of the embodiment of the present invention includes the following steps. First, in step S1001, a target scene is scanned by a distance measuring device. Next, in step S1002, a plurality of point cloud points sequentially collected by the device are acquired. Next, in step S1003, the plurality of points are formed into multiple point cloud frames according to their spatial position information and output in sequence, wherein different integration durations are used for points at different spatial positions within a frame. Finally, in step S1004, position information of a detection target in the target scene is acquired based on at least one output frame, where the position information may include distance information and/or angle information.
If the target detection method is a learning-based method, a detection model can first be trained on conventional point cloud frames or on frames constructed by the aforementioned method, and target detection then performed on the aforementioned frames; if it is a non-learning method, target detection is performed directly on the aforementioned frames.
In one example, to address the problem that a longer integration duration may cause motion blur at far range and thus inaccurate target localization, acquiring the position information of the detection target based on at least one output frame includes: acquiring the current frame output at the current moment; segmenting the point cloud cluster of each target in the current frame; and removing, from each target's cluster, the points whose collection time is more than a predetermined duration threshold before the current moment, to obtain a cropped cluster for each target. For example, suppose the output moment of the current frame, that is the current moment, is 0.2 s and the preset duration threshold is 0.1 s; then points collected before 0.1 s are removed and points collected between 0.1 s and 0.2 s are kept, and the position information of these points best reflects the position of the corresponding target. The predetermined duration threshold can be set reasonably according to the actual situation and is not specifically limited here. The position information of each target at the current moment is then determined based on the cropped clusters, so that the target's position can be located more accurately.
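The cropping step above can be sketched as follows; the centroid is used here as one simple choice of position estimate and, like the names and the default threshold, is an illustrative assumption (points are assumed to carry x, y, z, t attributes, e.g. the PointCloudPoint shape sketched earlier):

```python
def locate_target(cluster, now_s: float, threshold_s: float = 0.1):
    """Drop points collected more than `threshold_s` before the current
    moment, then take the centroid of the remaining points as the
    target's position estimate; returns None if nothing recent remains."""
    recent = [p for p in cluster if now_s - p.t <= threshold_s]
    if not recent:
        return None
    n = len(recent)
    return (sum(p.x for p in recent) / n,
            sum(p.y for p in recent) / n,
            sum(p.z for p in recent) / n)
```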
Since the output frames are formed according to the foregoing method for constructing a point cloud frame, the spatial distribution of points in each frame is more uniform and reasonable; the target detection method of this embodiment can therefore better depict the object information in the scanned scene. In particular, when the integration duration of points is adjusted reasonably by distance interval, distant points become denser, so detection can reach farther than with conventional frames, improving target detection capability and in particular extending the target detection range of, for example, a lidar.
A distance measuring device according to an embodiment of the present invention is described below with reference to FIG. 11; the features of the foregoing distance measuring devices can be combined into this embodiment.
In some embodiments, as shown in the figure, the distance measuring device 1100 further includes one or more processors 1102 and one or more memories 1101, and the one or more processors 1102 work together or individually. Optionally, the device may further include at least one of an input device (not shown), an output device (not shown), and an image sensor (not shown), these components being interconnected through a bus system and/or other forms of connection mechanisms (not shown).
The memory 1101 is used to store program instructions executable by the processor, for example program instructions for the corresponding steps of the method for constructing a point cloud frame and/or the target detection method according to the embodiments of the present invention. The memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory; the non-volatile memory may include, for example, read-only memory (ROM), hard disks, and flash memory.
The input device may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, and a touch screen.
The output device may output various information (e.g., images or sounds) to the outside (e.g., a user) and may include one or more of a display, a speaker, etc., for outputting the constructed point cloud frames as images or video.
A communication interface (not shown) is used for communication between the distance measuring device and other devices, including wired or wireless communication. The device can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof. In an exemplary embodiment, the communication interface receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication interface further includes a near field communication (NFC) module to facilitate short-range communication; for example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The processor 1102 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, and can control other components in the distance measuring device to perform desired functions. The processor can execute the instructions stored in the memory to perform the method for constructing a point cloud frame and/or the target detection method of the embodiments of the present invention described herein; for these methods, refer to the descriptions in the foregoing embodiments, which are not repeated here. For example, the processor can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware finite state machines (FSM), digital signal processors (DSP), or combinations thereof. In this embodiment, the processor includes a field programmable gate array (FPGA), and the arithmetic circuit of the distance measuring device may be part of the FPGA.
The distance measuring device includes one or more processors working together or individually, and the memory is used to store program instructions; the processor is used to execute the program instructions stored in the memory and, when those program instructions are executed, the processor implements the corresponding steps of the method for constructing a point cloud frame and/or the target detection method according to the embodiments of the present invention. To avoid repetition, refer to the relevant descriptions of the foregoing embodiments for the details of these methods.
In one embodiment, the distance measuring device of the embodiments of the present invention can be applied to a movable platform and can be mounted on the platform body of the movable platform. A movable platform with a distance measuring device can measure the external environment, for example, measure the distance between the platform and obstacles for obstacle avoidance and other purposes, and perform two- or three-dimensional surveying and mapping of the external environment. In some embodiments, the movable platform includes at least one of an unmanned aerial vehicle, a car, a remote-control car, a robot, a boat, and a camera. When the device is applied to an unmanned aerial vehicle, the platform body is the fuselage of the unmanned aerial vehicle; when applied to a car, the platform body is the body of the car, which may be a self-driving or semi-self-driving car, without limitation here; when applied to a remote-control car, the platform body is the body of the remote-control car; when applied to a robot, the platform body is the robot; when applied to a camera, the platform body is the camera itself.
Since the distance measuring device of the embodiments of the present invention is used to perform the foregoing methods and the movable platform includes the distance measuring device, both the distance measuring device and the movable platform have the same advantages as the foregoing methods.
In addition, an embodiment of the present invention also provides a computer storage medium on which a computer program is stored. One or more computer program instructions may be stored on the computer-readable storage medium, and a processor may run the program instructions stored in the memory to implement the functions (implemented by a processor) of the embodiments of the present invention described herein and/or other desired functions, for example, to perform the corresponding steps of the method for constructing a point cloud frame and/or the target detection method according to the embodiments of the present invention. Various application programs and various data, such as various data used and/or generated by the application programs, may also be stored in the computer-readable storage medium.
For example, the computer storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium contains computer-readable program code for converting point cloud data into two-dimensional images and/or computer-readable program code for three-dimensional reconstruction of point cloud data, and the like.
It should be understood that each part of the present application can be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art can be used: discrete logic circuits with logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gate circuits, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.
Although example embodiments have been described here with reference to the drawings, it should be understood that the above example embodiments are only exemplary and are not intended to limit the scope of the present invention thereto. Those of ordinary skill in the art can make various changes and modifications therein without departing from the scope and spirit of the present invention, and all such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
In the specification provided here, numerous specific details are described. However, it is understood that the embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in the description of the exemplary embodiments of the present invention, in order to streamline the invention and aid the understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, this method of the invention should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem can be solved with fewer than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
Furthermore, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the claims, any one of the claimed embodiments can be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present invention. The present invention may also be implemented as a device program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium or may be in the form of one or more signals; such a signal may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The present invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names.

Claims (34)

  1. A method for constructing a point cloud frame, characterized in that the method comprises:
    acquiring a plurality of point cloud points sequentially collected by a distance measuring device;
    forming the plurality of point cloud points into multiple point cloud frames according to the spatial position information of the plurality of point cloud points and outputting the frames in sequence, wherein different integration durations are used for point cloud points at different spatial positions within a point cloud frame.
  2. The method of claim 1, characterized in that forming the plurality of point cloud points into multiple point cloud frames according to their spatial position information and outputting the frames in sequence comprises:
    determining, according to the spatial position information, the integration duration used in the point cloud frame for point cloud points at different spatial positions;
    determining the point cloud points output in each point cloud frame according to the collection time of each point cloud point, the integration duration, and the end time of each point cloud frame.
  3. The method of claim 1, characterized in that the point cloud points of at least two adjacent point cloud frames have an overlapping portion.
  4. The method of claim 1, characterized in that a later point cloud frame includes those point cloud points of the previous frame whose time difference between their collection time and the end time of the later frame is less than or equal to the integration duration.
  5. The method of claim 1, characterized in that the spatial position information includes at least one of the following: distance information relative to the distance measuring device, position information within the field of view of the distance measuring device, height information, and spatial position information in the target scene scanned by the distance measuring device.
  6. The method of claim 5, characterized in that the field of view of the distance measuring device includes a first field-of-view region and a second field-of-view region, wherein the integration duration used in the point cloud frame for points in the first field-of-view region differs from that used for points in the second field-of-view region.
  7. The method of claim 6, characterized in that the scanning density in the first field-of-view region differs from that in the second field-of-view region.
  8. The method of claim 6, characterized in that the scanning density in the first field-of-view region is lower than that in the second field-of-view region;
    the integration duration used in the point cloud frame for points in the first field-of-view region is greater than that used for points in the second field-of-view region.
  9. The method of claim 8, characterized in that the second field-of-view region is located in the middle region of the field of view of the distance measuring device, and the first field-of-view region is located in the edge region of that field of view.
  10. The method of claim 5, characterized in that the target scene scanned by the distance measuring device includes a first spatial region and a second spatial region, wherein the integration duration used in the point cloud frame for points in the first spatial region differs from that used for points in the second spatial region.
  11. The method of claim 10, characterized in that the point cloud density in the first spatial region is lower than that in the second spatial region, and the integration duration used in the point cloud frame for points in the first spatial region is greater than that used for points in the second spatial region.
  12. The method of claim 10, characterized in that the first spatial region is the space over a road and the second spatial region is the space alongside the road.
  13. The method of claim 12, characterized in that, when the distance measuring device is applied to an autonomous driving scene, the integration duration used in the point cloud frame for points in the space over the road is greater than that used for points in the space alongside the road.
  14. The method of claim 5, characterized in that the height information includes a first height interval and a second height interval, and the integration duration used in the point cloud frame for points in the first height interval differs from that for points in the second height interval.
  15. The method of claim 14, characterized in that:
    the height values in the first height interval are lower than those in the second height interval;
    the integration duration used in the point cloud frame for points in the first height interval is greater than that for points in the second height interval.
  16. The method of claim 14, characterized in that the distance measuring device is applied to an autonomous driving scene.
  17. The method of claim 5, characterized in that the height information includes the height of the point cloud points relative to the ground, the coordinates of the ground being obtained by segmenting the ground out of at least one point cloud frame obtained by the distance measuring device scanning the target scene.
  18. The method of claim 5, characterized in that the distance information includes a first distance interval and a second distance interval, wherein the integration duration used in the point cloud frame for points in the first distance interval differs from that for points in the second distance interval.
  19. The method of claim 18, characterized in that the point cloud density in the first distance interval differs from that in the second distance interval.
  20. The method of claim 18, characterized in that the point cloud density in the first distance interval is higher than that in the second distance interval;
    the integration duration used in the point cloud frame for points in the first distance interval is shorter than that for points in the second distance interval.
  21. The method of claim 18, characterized in that the distance values in the first distance interval are smaller than those in the second distance interval, and the integration duration used for points in the first distance interval is shorter than that used for points in the second distance interval.
  22. The method of claim 2, characterized in that determining the point cloud points output in each point cloud frame according to the collection time of each point cloud point, the integration duration, and the end time of each point cloud frame specifically comprises:
    determining the time difference between the collection time of each point cloud point collected within a predetermined period and the end time of the current point cloud frame;
    adding the point cloud points whose time difference is less than or equal to the integration duration to the current point cloud frame.
  23. The method of claim 2, characterized in that determining the point cloud points output in each point cloud frame according to the collection time of each point cloud point, the integration duration, and the end time of each point cloud frame comprises:
    comparing the collection time of the current point cloud point with the end times of at least two preset storage spaces to obtain comparison results, wherein each storage space is used to store the point cloud data of a point cloud frame to be formed;
    determining, according to the comparison results and the integration duration of the current point cloud point, the storage space into which the current point cloud point should be stored;
    outputting the current point cloud frame in a storage space when the collection time is later than that storage space's end time.
  24. The method of claim 23, characterized in that the at least two preset storage spaces are set based on the point cloud point collection rate of the distance measuring device.
  25. The method of claim 23, characterized in that the method further comprises:
    clearing the storage space from which the current point cloud frame has been output and resetting its end time, wherein the reset end time is the latest end time among the at least two storage spaces plus a preset time interval.
  26. The method of claim 23, characterized in that determining, according to the comparison results and the integration duration of the current point cloud point, the storage space into which the current point cloud point should be stored comprises:
    storing the current point cloud point into the corresponding storage space when the comparison result is non-negative and less than or equal to the integration duration used by the point cloud frame for the current point cloud point.
  27. The method of any one of claims 1 to 26, characterized in that the integration durations used in the point cloud frame for points at different spatial positions are set by the distance measuring device based on integration durations determined by the user from the points' spatial position information, or are set by the distance measuring device based on different application scenarios.
  28. A target detection method, characterized in that the target detection method comprises:
    scanning a target scene with a distance measuring device;
    outputting multiple point cloud frames in sequence according to the method of any one of claims 1 to 27;
    acquiring position information of a detection target in the target scene based on at least one output point cloud frame.
  29. The target detection method of claim 28, characterized in that acquiring the position information of the detection target in the target scene based on at least one output point cloud frame comprises:
    acquiring the current point cloud frame output at the current moment;
    segmenting the point cloud cluster of each target in the current point cloud frame;
    removing, from the point cloud cluster of each target, the point cloud points whose collection time is more than a predetermined duration threshold before the current moment, to obtain a cropped point cloud cluster for each target;
    determining the position information of each target at the current moment based on the cropped point cloud clusters.
  30. A distance measuring device, characterized in that the distance measuring device comprises:
    a memory for storing executable program instructions;
    a processor for executing the program instructions stored in the memory, so that the processor performs the method for constructing a point cloud frame of any one of claims 1 to 27, or so that the processor performs the target detection method of claim 28 or 29.
  31. The distance measuring device of claim 30, characterized in that the distance measuring device comprises:
    a transmitting module for emitting a light pulse sequence to probe a target scene;
    a scanning module for sequentially changing the propagation path of the light pulse sequence emitted by the transmitting module to different outgoing directions, forming a scanning field of view;
    a detecting module for receiving the light pulse sequence reflected back by an object and determining, from the reflected light pulse sequence, the distance and/or azimuth of the object relative to the distance measuring device, so as to generate the point cloud points.
  32. A movable platform, characterized in that the movable platform comprises:
    a movable platform body;
    at least one distance measuring device of claim 30 or 31, arranged on the movable platform body.
  33. The movable platform of claim 32, characterized in that the movable platform includes an unmanned aerial vehicle, a robot, a car, or a boat.
  34. A computer storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the method for constructing a point cloud frame of any one of claims 1 to 27, or implements the target detection method of claim 28 or 29.
PCT/CN2020/091005 2020-05-19 2020-05-19 Method for constructing point cloud frame, target detection method, distance measuring device, movable platform and storage medium WO2021232227A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080006494.8A 2020-05-19 Method for constructing point cloud frame, target detection method, distance measuring device, movable platform and storage medium
PCT/CN2020/091005 2020-05-19 2020-05-19 Method for constructing point cloud frame, target detection method, distance measuring device, movable platform and storage medium


Publications (1)

Publication Number Publication Date
WO2021232227A1 (zh)

Family

ID=78708959


Country Status (2)

Country Link
CN (1) CN114026461A (zh)
WO (1) WO2021232227A1 (zh)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107817503A (zh) * 2016-09-14 2018-03-20 北京百度网讯科技有限公司 Motion compensation method and device applied to laser point cloud data
CN108732584A (zh) * 2017-04-17 2018-11-02 百度在线网络技术(北京)有限公司 Method and device for updating a map
CN107817501A (zh) * 2017-10-27 2018-03-20 广东电网有限责任公司机巡作业中心 Point cloud data processing method with variable scanning frequency
US20200003869A1 (en) * 2018-07-02 2020-01-02 Beijing Didi Infinity Technology And Development Co., Ltd. Vehicle navigation system using pose estimation based on point cloud
CN109934920A (zh) * 2019-05-20 2019-06-25 奥特酷智能科技(南京)有限公司 High-precision three-dimensional point cloud map construction method based on low-cost equipment
CN110849374A (zh) * 2019-12-03 2020-02-28 中南大学 Underground environment positioning method, apparatus, device and storage medium
CN110850439A (zh) * 2020-01-15 2020-02-28 奥特酷智能科技(南京)有限公司 High-precision three-dimensional point cloud map construction method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI, XINGDONG ET AL.: "Inter Frame Point Clouds Registration Algorithm for Pose Optimization of Depth Camera", Journal of Zhejiang University (Engineering Science), vol. 53, no. 9, 30 September 2018 (2018-09-30), pages 1749-1758, XP055869673 *
ZHAO, KAI ET AL.: "A Preprocessing Method of 3D Point Clouds Registration in Urban Environments", Opto-Electronic Engineering, vol. 45, no. 12, 31 December 2018 (2018-12-31), pages 75-83, XP055869667 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882339A (zh) * 2022-03-23 2022-08-09 太原理工大学 Autonomous identification method for boreholes in coal mine roadways based on a real-time dense point cloud map
CN114882339B (zh) * 2022-03-23 2024-04-16 太原理工大学 Autonomous identification method for boreholes in coal mine roadways based on a real-time dense point cloud map
CN115047471A (zh) * 2022-03-30 2022-09-13 北京一径科技有限公司 Method, apparatus, device and storage medium for determining lidar point cloud layering
CN115047471B (zh) * 2022-03-30 2023-07-04 北京一径科技有限公司 Method, apparatus, device and storage medium for determining lidar point cloud layering

Also Published As

Publication number Publication date
CN114026461A (zh) 2022-02-08


Legal Events

Code	Title	Description
121	Ep: the epo has been informed by wipo that ep was designated in this application
	Ref document number: 20937151; Country of ref document: EP; Kind code of ref document: A1
NENP	Non-entry into the national phase
	Ref country code: DE
122	Ep: pct application non-entry in european phase
	Ref document number: 20937151; Country of ref document: EP; Kind code of ref document: A1