WO2022256976A1 - Method and system for constructing dense point cloud truth value data and electronic device - Google Patents

Method and system for constructing dense point cloud truth value data and electronic device

Info

Publication number
WO2022256976A1
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
data
cloud data
distance measuring
motion
Application number
PCT/CN2021/098657
Other languages
French (fr)
Chinese (zh)
Inventor
宫正
Original Assignee
深圳市大疆创新科技有限公司
Application filed by 深圳市大疆创新科技有限公司
Priority to PCT/CN2021/098657
Publication of WO2022256976A1



Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images

Definitions

  • Embodiments of the present invention relate to the technical field of distance measurement, and more specifically to a method, system and electronic device for constructing dense point cloud truth data.
  • Three-dimensional point cloud detection systems such as lidars, laser rangefinders and other distance measuring devices can measure the time of light propagation between the distance measuring device and the measured object, that is, the time of flight (TOF), to detect the distance from the measured object to the distance measuring device.
  • However, the point cloud data collected by a ranging device is sparse and unevenly distributed. How to combine image information to construct complete, high-precision, dense point cloud data is an urgent problem to be solved.
  • Deep learning is considered an effective solution: a specific network is trained on a large amount of data to obtain a network model that can generate dense point cloud data from dense images and sparse point cloud data. Deep learning requires a large amount of training data to improve the estimation accuracy of the network. Therefore, how to automatically construct dense point cloud true value data is one of the most important problems to be solved at present.
  • The first aspect of the embodiments of the present invention provides a method for constructing dense point cloud truth data, including:
  • acquiring image data collected by a camera, point cloud data collected by at least two distance measuring devices, and pose data collected by an inertial measurement device, where the camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlap;
  • performing noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data; and
  • outputting dense point cloud true value data, where the dense point cloud true value data includes the noise-filtered target point cloud data and the spatially aligned image data.
  • The second aspect of the embodiments of the present invention provides a system for constructing dense point cloud true value data.
  • The system includes a camera, at least two distance measuring devices, an inertial measurement device and a processor.
  • The camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlap.
  • The processor is used to:
  • perform noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data; and
  • output dense point cloud true value data, which includes the noise-filtered target point cloud data and the spatially aligned image data.
  • The third aspect of the embodiments of the present invention provides an electronic device, which includes:
  • a memory for storing executable instructions, and a processor configured to execute the instructions stored in the memory, so that the processor performs the method for constructing dense point cloud truth data described above.
  • The method, system and electronic device for constructing dense point cloud true value data in the embodiments of the present invention use point cloud data collected by multiple ranging devices to construct dense point clouds, which avoids the depth inconsistency and moving-object blurring caused by inconsistent viewing angles; performing time alignment, spatial alignment and noise filtering on the point cloud data collected by the multiple ranging devices further improves the accuracy of the dense point cloud true value data.
  • Fig. 1 is a schematic frame diagram of a distance measuring device involved in an embodiment of the present invention
  • Fig. 2 is a schematic diagram of an embodiment in which the distance measuring device involved in the embodiment of the present invention adopts a coaxial optical path;
  • Fig. 3 is a schematic diagram of a scanning pattern of a ranging device according to an embodiment of the present invention.
  • Fig. 4 is a schematic flowchart of a method for constructing dense point cloud true value data according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram of a construction system of dense point cloud true value data according to an embodiment of the present invention.
  • Fig. 6A shows target point cloud data before correction of the motion distortion caused by the motion of the distance measuring device according to an embodiment of the present invention;
  • Fig. 6B shows target point cloud data after correction of the motion distortion caused by the motion of the distance measuring device according to an embodiment of the present invention;
  • Fig. 7A shows a point cloud cluster before correction of the motion distortion caused by the motion of the target object according to an embodiment of the present invention;
  • Fig. 7B shows a point cloud cluster after correction of the motion distortion caused by the motion of the target object according to an embodiment of the present invention;
  • Fig. 8 is a schematic diagram of dense point cloud true value data according to an embodiment of the present invention.
  • Fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
  • the laser ranging method provided by various embodiments of the present invention can be applied to a ranging device, and the ranging device can be an electronic device such as a laser radar or a laser ranging device.
  • the ranging device is used to sense external environment information, for example, distance information, orientation information, reflection intensity information, speed information, etc. of environmental objects.
  • the distance measuring device can detect the distance from the detection object to the distance measurement device by measuring the time of light propagation between the distance measurement device and the detection object, that is, the time-of-flight (TOF).
  • the ranging device 100 may include a transmitting circuit 110 , a receiving circuit 120 , a sampling circuit 130 and an arithmetic circuit 140 .
  • the transmitting circuit 110 may transmit a sequence of light pulses (eg, a sequence of laser pulses).
  • the receiving circuit 120 can receive the light pulse sequence reflected by the measured object, and perform photoelectric conversion on the light pulse sequence to obtain an electrical signal, and then process the electrical signal and output it to the sampling circuit 130 .
  • the sampling circuit 130 can sample the electrical signal to obtain a sampling result.
  • the arithmetic circuit 140 can determine the distance between the distance measuring device 100 and the measured object based on the sampling result of the sampling circuit 130 .
  • the distance measuring device 100 may further include a control circuit 150, which can control other circuits, for example, control the working time of each circuit and/or set parameters for each circuit.
  • The ranging device shown in FIG. 1 includes a transmitting circuit, a receiving circuit, a sampling circuit and an arithmetic circuit for emitting one light beam for detection; however, the number of any one of the transmitting circuit, the receiving circuit, the sampling circuit and the arithmetic circuit may also be at least two, so as to emit at least two light beams in the same direction or in different directions, and the at least two light beams may be emitted simultaneously or at different times.
  • the light emitting chips in the at least two emitting circuits are packaged in the same module.
  • each emitting circuit includes a laser emitting chip, and the dies of the laser emitting chips in the at least two emitting circuits are packaged together and accommodated in the same packaging space.
  • the distance measuring device 100 may further include a scanning module 160 configured to change the propagation direction of at least one laser pulse sequence emitted by the transmitting circuit to emit.
  • The module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130 and the arithmetic circuit 140, or the module further including the control circuit 150, may be called a ranging module.
  • The ranging module may be independent of other modules, for example, the scanning module 160.
  • a coaxial optical path can be used in the distance measuring device, that is, the light beam emitted by the distance measuring device and the reflected light beam share at least part of the optical path in the distance measuring device.
  • the distance measuring device may also adopt an off-axis optical path, that is, the light beam emitted by the distance measuring device and the reflected light beam are respectively transmitted along different optical paths in the distance measuring device.
  • Fig. 2 shows a schematic diagram of an embodiment in which the distance measuring device of the present invention adopts a coaxial optical path.
  • The ranging device 200 includes a ranging module 210, and the ranging module 210 includes a transmitter 203 (which may include the above-mentioned transmitting circuit), a collimating element 204, a detector 205 (which may include the above-mentioned receiving circuit, sampling circuit and arithmetic circuit) and an optical path changing element 206.
  • the distance measuring module 210 is used for emitting light beams, receiving return light, and converting the return light into electrical signals.
  • the transmitter 203 can be used to transmit the light pulse sequence.
  • the transmitter 203 may emit a sequence of laser pulses.
  • the laser beam emitted by the transmitter 203 is a narrow-bandwidth beam whose wavelength is outside the range of visible light.
  • The collimating element 204 is arranged on the outgoing light path of the emitter and is used to collimate the light beam emitted from the emitter 203 into parallel light directed to the scanning module.
  • the collimating element is also used for converging at least a part of the returned light reflected by the measured object.
  • the collimating element 204 may be a collimating lens or other elements capable of collimating light beams.
  • The transmitting optical path and the receiving optical path in the distance measuring device are combined before the collimating element 204 by the optical path changing element 206, so that the transmitting optical path and the receiving optical path can share the same collimating element, making the optical path more compact.
  • the emitter 203 and the detector 205 respectively use their own collimating elements, and the optical path changing element 206 is arranged on the optical path after the collimating element.
  • The optical path changing element can be realized with a small-area reflector that merges the transmitting optical path and the receiving optical path.
  • The optical path changing element may also use a reflector with a through hole, where the through hole transmits the outgoing light of the emitter 203 and the reflector reflects the return light to the detector 205. This reduces the blocking of the return light by the support of a small reflector.
  • the optical path changing element deviates from the optical axis of the collimating element 204 .
  • the optical path changing element may also be located on the optical axis of the collimating element 204 .
  • the ranging device 200 also includes a scanning module 202 .
  • the scanning module 202 is placed on the outgoing optical path of the distance measuring module 210.
  • the scanning module 202 is used to change the transmission direction of the collimated light beam 219 emitted by the collimating element 204 and project it to the external environment, and project the return light to the collimating element 204 .
  • the returned light is converged onto the detector 205 through the collimation element 204 .
  • the scanning module 202 may include at least one optical element for changing the propagation path of the beam, wherein the optical element may change the propagation path of the beam by reflecting, refracting, diffracting and so on.
  • the scanning module 202 includes lenses, mirrors, prisms, gratings, liquid crystals, optical phased arrays (Optical Phased Array), or any combination of the above optical elements.
  • at least part of the optical elements are movable, for example, driven by a driving module to move the at least part of the optical elements, and the moving optical elements can reflect, refract or diffract light beams to different directions at different times.
  • multiple optical elements of scanning module 202 may rotate or vibrate about a common axis 209, with each rotating or vibrating optical element serving to continuously change the direction of propagation of the incident light beam.
  • the multiple optical elements of scanning module 202 may rotate at different rotational speeds, or vibrate at different speeds.
  • at least some of the optical elements of scanning module 202 may rotate at substantially the same rotational speed.
  • the multiple optical elements of the scanning module may also rotate about different axes.
  • the multiple optical elements of the scanning module may also rotate in the same direction or in different directions; or vibrate in the same direction or in different directions, which is not limited here.
  • The scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214; the driver 216 is used to drive the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated light beam 219.
  • the first optical element 214 projects the collimated light beam 219 in different directions.
  • the angle between the direction of the collimated light beam 219 changed by the first optical element and the rotation axis 209 changes as the first optical element 214 rotates.
  • first optical element 214 includes a pair of opposing non-parallel surfaces through which collimated light beam 219 passes.
  • the first optical element 214 comprises a prism having a thickness varying along at least one radial direction.
  • the first optical element 214 includes a wedge prism that refracts the collimated light beam 219.
  • the scanning module 202 further includes a second optical element 215 , the second optical element 215 rotates around the rotation axis 209 , and the rotation speed of the second optical element 215 is different from that of the first optical element 214 .
  • the second optical element 215 is used to change the direction of the light beam projected by the first optical element 214 .
  • the second optical element 215 is connected with another driver 217, and the driver 217 drives the second optical element 215 to rotate.
  • The first optical element 214 and the second optical element 215 can be driven by the same or different drivers, so that the rotation speeds and/or directions of the first optical element 214 and the second optical element 215 differ, thereby projecting the collimated light beam 219 into different directions of the external space and scanning a larger spatial range.
  • the controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215 respectively.
  • the rotational speeds of the first optical element 214 and the second optical element 215 can be determined according to the area and pattern expected to be scanned in practical applications.
  • Drivers 216 and 217 may include motors or other drivers.
  • the second optical element 215 includes a pair of opposing non-parallel surfaces through which the light beam passes. In one embodiment, the second optical element 215 comprises a prism whose thickness varies along at least one radial direction. In one embodiment, the second optical element 215 includes a wedge prism.
  • the scanning module 202 further includes a third optical element (not shown in the figure) and a driver for driving the movement of the third optical element.
  • the third optical element comprises a pair of opposite non-parallel surfaces through which the light beam passes.
  • the third optical element comprises a prism whose thickness varies along at least one radial direction.
  • the third optical element comprises a wedge prism. At least two of the first, second and third optical elements rotate at different rotational speeds and/or in different rotation directions.
  • each optical element in the scanning module 202 can project light to different directions, such as directions 211 and 213 , so as to scan the space around the distance measuring device 200 .
  • FIG. 3 is a schematic diagram of a scanning pattern of the ranging device 200 . It can be understood that when the speed of the optical elements in the scanning module changes, the scanning pattern will also change accordingly.
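  • As an illustrative aside (an assumption for explanation, not the patent's exact optics), the dependence of the scanning pattern on the prism rotation speeds can be sketched with a simplified two-wedge (Risley prism) model, in which each rotating prism deflects the beam by a fixed small angle in the direction of its current rotation phase; the function and parameter values below are hypothetical.

```python
import math

def scan_pattern(speed1_hz, speed2_hz, deflect1_deg=4.0, deflect2_deg=4.0,
                 duration_s=1.0, samples=10_000):
    """Angular (x, y) beam deflections in degrees over time for two rotating wedges."""
    points = []
    for i in range(samples):
        t = duration_s * i / samples
        a1 = 2.0 * math.pi * speed1_hz * t   # rotation phase of the first prism
        a2 = 2.0 * math.pi * speed2_hz * t   # rotation phase of the second prism
        x = deflect1_deg * math.cos(a1) + deflect2_deg * math.cos(a2)
        y = deflect1_deg * math.sin(a1) + deflect2_deg * math.sin(a2)
        points.append((x, y))
    return points

# Two prisms turning at different speeds (here in opposite directions) trace a
# rosette-like pattern; changing either speed changes the pattern, as noted above.
pattern = scan_pattern(speed1_hz=50.0, speed2_hz=-47.0)
```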
  • When the light 211 projected by the scanning module 202 hits the detection object 201, a part of the light is reflected by the detection object 201 back to the distance measuring device 200 in a direction opposite to that of the projected light 211.
  • the return light 212 reflected by the detection object 201 enters the collimation element 204 after passing through the scanning module 202 .
  • the detector 205 is placed on the same side of the collimation element 204 as the emitter 203, and the detector 205 is used to convert at least part of the return light passing through the collimation element 204 into an electrical signal.
  • each optical element is coated with an anti-reflection film.
  • the thickness of the antireflection film is equal to or close to the wavelength of the light beam emitted by the emitter 203, which can increase the intensity of the transmitted light beam.
  • A filter layer is coated on the surface of an element located on the beam propagation path in the ranging device, or an optical filter is arranged on the beam propagation path, to transmit at least the wavelength band of the beam emitted by the transmitter and reflect other bands, thereby reducing the noise caused by ambient light at the receiver.
  • the transmitter 203 may include a laser diode, and the laser diode emits nanosecond-level laser pulses.
  • The laser pulse receiving time can be determined, for example, by detecting the rising edge time and/or falling edge time of the electrical signal pulse. In this way, the distance measuring device 200 can calculate the TOF from the pulse receiving time and the pulse sending time, so as to determine the distance from the detection object 201 to the distance measuring device 200.
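  • For illustration only, the distance computation implied by the time-of-flight principle can be sketched as follows (a minimal sketch; the variable names and the edge-detected timestamps are assumptions, not the patent's implementation):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(t_emit_ns: float, t_receive_ns: float) -> float:
    """Distance to the detected object from pulse send/receive timestamps in nanoseconds."""
    tof_s = (t_receive_ns - t_emit_ns) * 1e-9
    # The light travels to the object and back, so halve the round-trip distance.
    return SPEED_OF_LIGHT * tof_s / 2.0

# Example: a 200 ns round trip corresponds to roughly 30 m.
print(tof_distance(0.0, 200.0))
```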
  • the distance and orientation detected by the ranging device 200 can be used for remote sensing, obstacle avoidance, surveying and mapping, modeling, navigation and so on.
  • the distance measuring device in the embodiment of the present invention can be applied to a movable platform, and the distance measuring device can be installed on the movable platform body of the movable platform.
  • the movable platform with the distance measuring device can measure the external environment, for example, measure the distance between the movable platform and obstacles for purposes such as obstacle avoidance, and perform two-dimensional or three-dimensional mapping of the external environment.
  • the mobile platform includes at least one of an unmanned aerial vehicle, an automobile, a remote control vehicle, a robot, and a camera.
  • When the ranging device is applied to an unmanned aerial vehicle, the movable platform body is the fuselage of the unmanned aerial vehicle.
  • When the distance measuring device is applied to a car, the movable platform body is the body of the car.
  • The car may be an autonomous or semi-autonomous driving car, which is not limited here.
  • When the distance measuring device is applied to a remote control car, the movable platform body is the body of the remote control car.
  • When the ranging device is applied to a robot, the movable platform body is the robot.
  • FIG. 4 shows a schematic flowchart of a method 400 for constructing dense point cloud true value data according to an embodiment of the present invention. As shown in FIG. 4, the method 400 includes the following steps:
  • Step S410: acquire the image data collected by the camera, the point cloud data collected by the at least two distance measuring devices, and the pose data of the camera and the distance measuring devices collected by the inertial measurement device, where the camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform and the fields of view of the camera and the at least two distance measuring devices at least partially overlap;
  • Step S420: acquire the time information of the image data, the point cloud data and the pose data, and time-align the image data, the point cloud data and the pose data according to the time information;
  • Step S430: calibrate the camera, the distance measuring devices and the inertial measurement device to obtain the spatial transformation relationships between the image data, the point cloud data and the pose data, and spatially align the image data, the point cloud data and the pose data according to the spatial transformation relationships;
  • Step S440: extract the target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and correct the motion distortion in the target point cloud data according to the change of the pose data to obtain corrected target point cloud data;
  • Step S450: perform noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data;
  • Step S460: output the dense point cloud true value data, which includes the noise-filtered target point cloud data and the spatially aligned image data.
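  • The six steps can be pictured as the following structural sketch (a hedged outline only; the helper functions are trivial placeholders standing in for the operations described in steps S420 to S450, not the patent's implementation):

```python
def time_align(images, clouds, poses):
    return images, clouds, poses      # placeholder: match streams by timestamp (S420)

def space_align(images, clouds, poses):
    return images, clouds, poses      # placeholder: apply calibrated extrinsics (S430)

def correct_motion_distortion(cloud, poses):
    return cloud                      # placeholder: pose-based compensation (S440)

def filter_noise(cloud):
    return cloud                      # placeholder: remove crosstalk/isolated points (S450)

def build_dense_ground_truth(images, clouds, poses):
    """Skeleton of steps S410-S460 for data already read from the sensors (S410)."""
    images, clouds, poses = time_align(images, clouds, poses)
    images, clouds, poses = space_align(images, clouds, poses)
    merged = [p for cloud in clouds for p in cloud]        # accumulate a time window
    target = correct_motion_distortion(merged, poses)
    target = filter_noise(target)
    return target, images                                  # S460: ground-truth pair
```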
  • The method 400 for constructing dense point cloud true value data in the embodiment of the present invention constructs dense point cloud data from point cloud data collected by multiple distance measuring devices, which avoids the depth inconsistency and moving-object blurring caused by the inconsistent viewing angles of a single distance measuring device.
  • FIG. 5 shows a system 500 for constructing dense point cloud ground truth data according to an embodiment of the present invention.
  • the camera 510 and at least two distance measuring devices 520 are arranged on the same movable platform, and the fields of view of the camera 510 and the at least two distance measuring devices 520 at least partially overlap.
  • mobile platforms include cars, remote control cars, robots, unmanned aerial vehicles, etc.
  • While the movable platform is in motion, the camera 510 is controlled to collect image data, the at least two ranging devices 520 are controlled to collect point cloud data, and the inertial measurement device 530 is controlled to collect pose data.
  • The point cloud data collected by the ranging devices 520 is a collection of a large number of point cloud points, each of which contains at least the three-dimensional coordinates of an object; the image collected by the camera 510 is a two-dimensional image (for example, a grayscale or RGB image) that provides key features such as the color and texture of objects.
  • Each distance measuring device 520 and the camera 510 have the same orientation, and they are laid out as close to each other as possible to obtain as large an overlapping field of view as possible, so that point cloud data and image data are collected in the same field of view to the greatest extent.
  • the ranging device 520 may be implemented as the ranging device described with reference to FIG. 1 and FIG. 2
  • the camera 510 may be implemented as a monocular camera, specifically, a monocular visible light camera.
  • an inertial measurement device 530 is also mounted on the movable platform, and the inertial measurement device 530 may be a high-precision inertial navigation system capable of obtaining a high-precision position and attitude with 6 degrees of freedom.
  • the inertial measurement device 530 includes, but is not limited to, a gyroscope and an accelerometer.
  • The gyroscope is used to establish a navigation coordinate system, stabilize the measurement axes of the accelerometer in this coordinate system, and provide the heading and attitude angles; the accelerometer is used to measure the acceleration of the carrier, which can be integrated once to obtain velocity and integrated again to obtain distance.
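  • As a simplified illustration of the integration just described (an assumption: one-dimensional motion, ideal bias-free measurements, simple Euler integration):

```python
def integrate_acceleration(accels, dt, v0=0.0):
    """Integrate a 1-D acceleration series once for velocity and again for distance."""
    velocities, distances = [], []
    v, d = v0, 0.0
    for a in accels:
        v += a * dt   # first integration: acceleration -> velocity
        d += v * dt   # second integration: velocity -> distance
        velocities.append(v)
        distances.append(d)
    return velocities, distances

# Example: a constant 1 m/s^2 for 1 s sampled at 100 Hz gives about 1 m/s and 0.5 m.
v, d = integrate_acceleration([1.0] * 100, dt=0.01)
```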
  • the inertial measurement device 530 may be an inertial measurement device equipped on the movable platform itself.
  • The pose data collected by the inertial measurement device 530 can be understood as the pose data of the camera 510 and the distance measuring devices 520, and it can be used to correct the point cloud distortion caused by the movement of the distance measuring devices 520 themselves.
  • In step S420, the time information of the image data, the point cloud data and the pose data is acquired, and the image data, the point cloud data and the pose data are time-aligned according to the time information.
  • Temporal information includes timestamps of image data, point cloud data, and pose data.
  • Time alignment means finding the image data, point cloud data and pose data corresponding to the same moment, so that the distortion of the point cloud data can be corrected according to the pose data and matching image data and point cloud data can be obtained, which together form the dense point cloud ground truth data used to train network models.
  • image data, point cloud data and pose data are data after time synchronization based on the data synchronization system 540 .
  • that is, the time stamps of the data originating from different time domains are unified.
  • The data synchronization system 540 transmits the time-synchronized data to the processor 550, and the processor 550 time-aligns the image data, the point cloud data and the pose data according to their time information. Specifically, the time stamp of the point cloud data can be obtained, and the image data and pose data corresponding to that time can be found according to the time stamp, where the pose data includes the pose at each moment and the image data includes the image corresponding to each moment; a minimal sketch of this timestamp matching is given below.
  • Time synchronization may be performed, for example, based on GPS (Global Positioning System), on the Ethernet-based IEEE 1588 clock synchronization protocol, or on PPS (pulse-per-second) signals.
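  • A minimal sketch of timestamp-based alignment is given here, assuming sorted timestamp lists for each data stream; the names and the tolerance value are illustrative, not taken from the patent.

```python
from bisect import bisect_left

def nearest(sorted_stamps, t):
    """Index of the timestamp in a sorted list that is closest to t."""
    i = bisect_left(sorted_stamps, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(sorted_stamps)]
    return min(candidates, key=lambda j: abs(sorted_stamps[j] - t))

def time_align(cloud_stamps, pose_stamps, image_stamps, max_dt=0.005):
    """Return (cloud_idx, pose_idx, image_idx) triples whose timestamps match."""
    aligned = []
    for ci, t in enumerate(cloud_stamps):
        pi = nearest(pose_stamps, t)
        ii = nearest(image_stamps, t)
        # Keep the triple only when both matches fall within the tolerance.
        if abs(pose_stamps[pi] - t) <= max_dt and abs(image_stamps[ii] - t) <= max_dt:
            aligned.append((ci, pi, ii))
    return aligned
```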
  • Then, step S430 is performed to calibrate the camera, the distance measuring devices and the inertial measurement device to obtain the spatial transformation relationships between the image data, the point cloud data and the pose data, and to spatially align the image data, the point cloud data and the pose data according to the spatial transformation relationships.
  • the calibration between different sensors is to calibrate the external parameters of different sensors; the spatial alignment is to unify the data of multiple sensors into the same spatial coordinate system.
  • the calibration can be performed in real time or in advance.
  • calibrating the camera, the distance measuring device and the inertial measurement device includes: calibrating at least two distance measuring devices with each other, calibrating the distance measuring device and the camera, and calibrating the distance measuring device and the inertial measurement device .
  • After spatial alignment, the multi-sensor data are unified into the same spatial coordinate system, which facilitates subsequent distortion correction of the point cloud data and yields point cloud data with consistent depth.
  • the methods for calibrating the distance measuring device, the camera and the inertial measurement device include but not limited to manual calibration, semi-automatic calibration and fully automatic calibration.
  • For automatic calibration, feature extraction and feature matching can be performed on point cloud data and images of specific markers placed in front of the distance measuring device and the camera in advance, and the extrinsic parameters between the distance measuring device and the camera are then calibrated based on the feature matching results.
  • Taking manual calibration as an example, a user interaction interface can be provided.
  • the user interaction interface includes a parameter input area and a display area.
  • the parameter input area is used for the user to input the extrinsic parameters of each sensor, and the display area is used to display the data collected by each sensor; the user adjusts the extrinsic parameters of each sensor according to the data displayed in the display area.
  • The point cloud data output by each distance measuring device can be transformed by the extrinsic parameters of that device to its position in space, so as to realize the position conversion of the point cloud in space.
  • the user can determine whether the point cloud data after position transformation is in the same coordinate system.
  • the external parameters of the distance measuring devices are adjusted so that the point cloud data detected by the various distance measuring devices are located in the same coordinate system after conversion.
  • The distance measuring devices include a first distance measuring device and at least one second distance measuring device, and the spatial coordinate system of the first point cloud data collected by the first distance measuring device is used as the reference coordinate system. When the at least two distance measuring devices are calibrated, each second distance measuring device is calibrated with the first distance measuring device to obtain the spatial transformation relationship between the second point cloud data collected by that second distance measuring device and the first point cloud data collected by the first distance measuring device; that is, when the coordinate systems of the point cloud data are unified, the point cloud data collected by all distance measuring devices are transformed into the spatial coordinate system of the first point cloud data collected by the first distance measuring device.
  • Similarly, the camera and the first distance measuring device can be calibrated to obtain the spatial transformation relationship between the image and the first point cloud data, so that the image is transformed into the spatial coordinate system of the first point cloud data.
  • Likewise, the inertial measurement device and the first distance measuring device can be calibrated to obtain the spatial transformation relationship between the pose data and the first point cloud data, so that the pose data is transformed into the spatial coordinate system of the first point cloud data.
  • the first distance measuring device may be any one of at least two distance measuring devices.
  • In other embodiments, the coordinate system of the image data, the coordinate system of the inertial measurement data, or the coordinate system of the movable platform may also be used as the reference coordinate system, as long as the multi-sensor data can be unified into the same coordinate system.
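  • A minimal sketch of this spatial alignment is shown below, assuming the calibrated extrinsic parameters are given as 4x4 homogeneous matrices that map each device's frame into the reference frame of the first distance measuring device (NumPy is used for the matrix arithmetic; the names are illustrative):

```python
import numpy as np

def to_reference_frame(points, extrinsic):
    """Transform an (N, 3) point array by a 4x4 extrinsic matrix."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ extrinsic.T)[:, :3]

def unify_point_clouds(clouds, extrinsics):
    """Merge per-device clouds; extrinsics[0] is the identity for the first device."""
    merged = [to_reference_frame(cloud, T) for cloud, T in zip(clouds, extrinsics)]
    return np.vstack(merged)
```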
  • Next, step S440 is executed to extract the target point cloud data within the preset time range from the point cloud data transformed into the same spatial coordinate system, and to correct the motion distortion in the target point cloud data according to the change of the pose data, obtaining the corrected target point cloud data.
  • the time length of the preset time range may be longer than the acquisition time of a single frame of point cloud data, so as to obtain dense point cloud data.
  • Since the embodiment of the present invention uses multiple ranging devices to jointly collect point cloud data, dense point cloud data can be obtained without an overly long collection time, so the length of the preset time range may also be equal to or less than the acquisition time of a single frame of point cloud data, thereby reducing motion distortion.
  • the preset time range is greater than or equal to 100ms.
  • The pose data aligned in time and space is applied to the target point cloud data to correct the motion distortion caused by the movement of the ranging device itself, that is, to perform motion compensation on the target point cloud data.
  • the reason for motion distortion is that the point cloud points in the target point cloud data are not collected at the same time, but are collected continuously within the preset time range.
  • The distance measuring device moves with the movable platform. Since a point cloud point measures the distance between the object and the distance measuring device, the depth of the same object measured at different times is inconsistent, which distorts the point cloud data.
  • Figure 6A and Figure 6B respectively show the target point cloud data before and after distortion correction. It can be seen from Fig. 6A and Fig. 6B that distortion correction effectively reduces the smear phenomenon of point cloud data.
  • Since the pose data and the target point cloud data are located in the same spatial coordinate system, the movement direction and speed of the target point cloud data within the preset time range can be obtained from the change of the pose data, and the motion distortion caused by the motion of the ranging device in the target point cloud data can then be corrected according to the obtained movement direction and speed.
  • the pose data for a long time can be extracted to determine the motion direction and motion speed of point cloud data.
  • Specifically, corresponding correction coefficients (also called distortion coefficients) can be assigned to the target point cloud data at different times: the smaller the distortion, the smaller the correction coefficient and the smaller the correction required; the larger the distortion, the larger the correction coefficient and the larger the correction required.
  • The target point cloud data at different moments can then be corrected according to the correction coefficients corresponding to the target point cloud data at those moments and the moving direction and speed of the target point cloud data.
  • When determining the correction coefficients, a reference time point within the preset time range is selected, and correction coefficients are assigned to the point cloud points collected at different times according to the interval between each point's collection time and the reference time point.
  • For example, the correction coefficients of all point cloud points in the target point cloud data can be divided evenly over time and normalized to [0, 1], so that each point cloud point is assigned a corresponding correction coefficient according to its collection time t (from t0 to tn).
  • The closer the collection time is to the reference time point, the smaller the distortion and the closer the correction coefficient is to 0; conversely, the farther from the reference time point, the greater the distortion and the closer the correction coefficient is to 1.
  • After the correction coefficients are assigned, motion interpolation at the corresponding moment is applied to obtain the final distortion correction result.
  • For example, if the moving direction of the target point cloud data is Δq and the moving distance is Δd, each point cloud point is compensated by applying its correction coefficient to Δq and Δd.
  • The distortion correction for the motion distortion caused by the movement of the ranging device is to perform motion compensation on the target point cloud data within the preset time range, so that the point cloud points collected at different times are all compensated to the positions corresponding to the target time point.
  • the target time point may be the starting moment of the preset time range
  • Performing distortion correction on the motion distortion caused by the movement of the distance measuring device in the target point cloud data includes: for each point cloud point in the target point cloud data, estimating its position at the initial moment of the preset time range according to the pose data, so as to correct the target point cloud data to the positions corresponding to the initial moment of the preset time range.
  • the target point cloud data may also be corrected to a position corresponding to the end moment of the preset time range, or a position corresponding to any other moment in the preset time range.
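  • A minimal sketch of this compensation is given below under simplifying assumptions (pure translation, uniform motion over the window, and the start of the window as the reference time point); rotation handling and true pose interpolation are omitted, and the names are illustrative rather than the patent's implementation.

```python
import numpy as np

def correct_device_motion(points, stamps, t_start, t_end, displacement):
    """points: (N, 3) in the unified frame; stamps: (N,) collection times;
    displacement: (3,) total platform displacement over [t_start, t_end]."""
    coeffs = (np.asarray(stamps) - t_start) / (t_end - t_start)   # normalised to [0, 1]
    # A point collected later is expressed relative to a device pose that has
    # already moved by coeff * displacement, so that share of the motion is
    # added back to express the point in the frame of the start moment.
    return np.asarray(points, dtype=float) + coeffs[:, None] * np.asarray(displacement)[None, :]
```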
  • The point cloud cluster of a moving object contained in the point cloud data will also exhibit motion distortion. Therefore, after correcting the distortion caused by the movement of the ranging device, distortion correction can also be performed on the motion distortion caused by the movement of the target object in the target point cloud data, so as to further improve the accuracy of the dense point cloud data.
  • the point cloud clusters corresponding to different target objects in the target point cloud data are first identified, the displacement information of the point cloud clusters at different moments is obtained, and the motion estimation of the target object is performed according to the displacement information.
  • the target point cloud data may be the target point cloud data corrected for the above-mentioned motion distortion caused by the movement of the ranging device.
  • the distortion correction may be performed on the motion distortion caused by the motion of the target object first, and then the distortion correction is performed on the motion distortion caused by the motion of the distance measuring device.
  • Methods for detecting the point cloud cluster of a target object include, but are not limited to, scene flow algorithms, object detection algorithms, or moving target estimation algorithms based on clustering and segmentation.
  • Any other suitable method may also be used to identify the point cloud clusters belonging to the target object, and the embodiment of the present invention does not limit the specific method for identifying the point cloud clusters.
  • the point cloud data within a longer time range can be extracted from the point cloud data, and the point cloud cluster of the moving target can be identified from it for motion estimation.
  • The length of this longer time range is greater than the length of the preset time range, so that more accurate displacement information can be obtained.
  • Then, distortion correction is performed on the motion distortion of the point cloud clusters induced by the motion of the target object. Similar to the distortion correction of the target point cloud data described above, corresponding correction coefficients are first assigned to the point cloud clusters at different times according to the time information of the point cloud clusters, and the point cloud clusters at different moments are then corrected according to these correction coefficients and the result of the motion estimation.
  • When determining the correction coefficients, a reference time point within the preset time range must also be selected, and correction coefficients are assigned to the point cloud points collected at different times according to the interval between the collection time of each point cloud point in the cluster and the reference time point; this reference time point should be the same as the one selected when correcting the distortion of the target point cloud data.
  • The correction coefficients of all point cloud points in the point cloud cluster can be divided evenly over time, so that each point is assigned a corresponding correction coefficient: the closer to the reference time point, the closer the coefficient is to 0, and the farther from the reference time point, the closer the coefficient is to 1. After the coefficients are assigned, motion interpolation at the corresponding moment is applied to obtain the final distortion correction result.
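  • A minimal sketch of the per-object compensation follows, under the assumptions that the device motion has already been removed, the object moves at roughly constant velocity, and its displacement is estimated from the cluster centroids at two moments (the names are illustrative):

```python
import numpy as np

def correct_cluster_motion(points, stamps, t_ref, t_other, centroid_ref, centroid_other):
    """Remove a target object's own motion from its point cloud cluster."""
    velocity = (np.asarray(centroid_other) - np.asarray(centroid_ref)) / (t_other - t_ref)
    dt = np.asarray(stamps) - t_ref   # elapsed time of each point w.r.t. the reference
    # A point captured later has been carried along by the object's motion, so
    # that displacement is subtracted to bring it back to the reference moment.
    return np.asarray(points, dtype=float) - dt[:, None] * velocity[None, :]
```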
  • Fig. 7A shows the point cloud cluster of a vehicle before correction of the motion distortion caused by the movement of the vehicle;
  • Fig. 7B shows the point cloud cluster of the vehicle after the distortion correction is applied to Fig. 7A.
  • From the comparison of FIG. 7A and FIG. 7B, it can be seen that correcting the motion distortion of the point cloud cluster significantly reduces its smearing.
  • Each distance measuring device may receive the reflected light of laser pulses emitted by the other ranging devices, which forms noise points and has a non-negligible impact on the dense point cloud true value data. Therefore, in step S450, the noise points in the target point cloud data caused by crosstalk between different distance measuring devices are identified and filtered out, so as to improve the accuracy of the dense point cloud true value data.
  • The noise filtering of the point cloud data may use a neighbor-point filtering method or a depth-map filtering method. For example, if the number of adjacent points within a certain neighborhood of a point cloud point is less than a preset threshold, the point is an isolated point; because real measurement points are spatially continuous, isolated points are usually noise and can be filtered out. In one embodiment, all point cloud points use the same neighborhood range and preset threshold. In other embodiments, since point clouds collected from nearby objects are relatively dense while those collected from distant objects are relatively sparse and therefore more likely to be judged as noise, distant point cloud points can use a larger neighborhood range or a smaller threshold.
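  • A minimal sketch of such a neighbour-count filter is given below (a brute-force O(N^2) version for clarity; a spatial index such as a KD-tree would normally be used, and the radius and threshold values are illustrative):

```python
import numpy as np

def filter_isolated_points(points, radius=0.3, min_neighbors=3):
    """points: (N, 3) array; drop points with too few neighbours within `radius`."""
    points = np.asarray(points, dtype=float)
    keep = []
    for p in points:
        dists = np.linalg.norm(points - p, axis=1)
        neighbors = np.count_nonzero(dists <= radius) - 1   # exclude the point itself
        keep.append(neighbors >= min_neighbors)
    return points[np.asarray(keep)]
```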
  • The noise filtering in step S450 is not limited to the above method, and the filtered noise is not limited to that caused by crosstalk between different ranging devices; it also includes other types of noise, such as noise caused by sunlight, dust, or reflected light inside the ranging device.
  • In step S460, the dense point cloud true value data is output. The dense point cloud true value data includes the target point cloud data after distortion correction and noise filtering and the spatially aligned image data, which have matching features and together constitute the dense point cloud ground truth data, as shown in Figure 8.
  • the image data may be image data corresponding to a reference time point of distortion correction.
  • In summary, the method 400 for constructing dense point cloud true value data uses point cloud data collected by multiple distance measuring devices to construct a dense point cloud, which avoids the depth inconsistency and moving-object blurring caused by inconsistent viewing angles; time alignment, spatial alignment and noise filtering of the point cloud data collected by the multiple ranging devices further improve the accuracy of the dense point cloud true value data.
  • The construction system 500 of dense point cloud true value data includes a camera 510, at least two distance measuring devices 520, an inertial measurement device 530 and a processor 550. The camera 510, the at least two distance measuring devices 520 and the inertial measurement device 530 are arranged on the same movable platform, and the fields of view of the camera 510 and the at least two distance measuring devices 520 at least partially overlap.
  • The processor 550 is configured to: acquire the image data collected by the camera 510, the point cloud data collected by the at least two distance measuring devices 520, and the pose data of the camera 510 and the distance measuring devices 520 collected by the inertial measurement device 530; acquire the time information of the image data, the point cloud data and the pose data, and time-align them according to the time information; calibrate the camera 510, the distance measuring devices 520 and the inertial measurement device 530 to obtain the spatial transformation relationships between the image data, the point cloud data and the pose data, and spatially align them according to those relationships; extract the target point cloud data within the preset time range from the point cloud data transformed into the same spatial coordinate system, and correct the motion distortion in the target point cloud data according to the change of the pose data; perform noise filtering on the corrected target point cloud data; and output the dense point cloud true value data.
  • the orientations of the camera 510 and at least two distance measuring devices 520 are the same.
  • the layout of the camera 510 and the at least two distance measuring devices 520 is as close as possible to each other, so as to obtain as large an overlapping field of view as possible, so as to collect point cloud data and image data in the same field of view to the greatest extent.
  • The system 500 for constructing dense point cloud true value data further includes a data synchronization system 540, which is used to: receive the image data collected by the camera 510, the point cloud data collected by the at least two ranging devices 520, and the pose data collected by the inertial measurement device 530; perform time synchronization on the image data, the point cloud data and the pose data; and send the time-synchronized image data, point cloud data and pose data to the processor 550.
  • The distance measuring devices 520 include a first distance measuring device and at least one second distance measuring device, and calibrating the camera 510, the distance measuring devices 520 and the inertial measurement device includes: calibrating each second distance measuring device with the first distance measuring device to obtain the spatial transformation relationship between the second point cloud data collected by each second distance measuring device and the first point cloud data collected by the first distance measuring device; calibrating the camera 510 with the first distance measuring device to obtain the spatial transformation relationship between the image and the first point cloud data; and calibrating the inertial measurement device with the first distance measuring device to obtain the spatial transformation relationship between the pose data and the first point cloud data. The spatial alignment includes transforming the image data, the point cloud data and the pose data into the spatial coordinate system of the first point cloud data.
  • Performing distortion correction on the motion distortion in the target point cloud data according to the change of the pose data includes: obtaining the motion direction and speed of the target point cloud data within the preset time range according to the change of the pose data, and correcting the motion distortion caused by the motion of the ranging device 520 in the target point cloud data according to that motion direction and speed.
  • performing distortion correction on the motion distortion caused by the movement of the ranging device 520 in the target point cloud data includes: correcting the target point cloud data to a position corresponding to the initial moment of the preset time range.
  • Performing distortion correction on the motion distortion caused by the movement of the distance measuring device 520 in the target point cloud data includes: assigning corresponding correction coefficients to the target point cloud data at different times according to the time information of the target point cloud data, and correcting the distortion of the target point cloud data at different times according to the correction coefficients, the motion direction and the speed corresponding to the target point cloud data at those times.
  • the processor 550 is also used to: identify point cloud clusters corresponding to different target objects in the target point cloud data; obtain displacement information of the point cloud clusters at different times, and perform motion estimation on the target object according to the displacement information ; Distortion correction is performed on the motion distortion of the point cloud cluster caused by the motion of the target object according to the result of the motion estimation.
  • Performing distortion correction on the motion distortion of the point cloud cluster caused by the motion of the target object includes: assigning corresponding correction coefficients to the point cloud clusters at different times according to the time information of the point cloud clusters, and correcting the distortion of the point cloud clusters at different times according to these correction coefficients and the result of the motion estimation.
  • the noise filtering process includes: identifying noise points in the target point cloud data caused by crosstalk between different ranging devices 520, and filtering out these noise points.
  • The system 500 for constructing dense point cloud true value data in the embodiment of the present invention uses point cloud data collected by multiple distance measuring devices to construct a dense point cloud, which avoids the depth inconsistency and moving-object blurring caused by inconsistent viewing angles; time alignment, spatial alignment and noise filtering of the point cloud data collected by the ranging devices improve the accuracy of the dense point cloud true value data.
  • the electronic device 900 may be implemented as an electronic device such as a computer, a server, or a vehicle-mounted terminal.
  • the electronic device 900 may be carried on a movable platform together with a camera, a distance measuring device, and an inertial measurement device, but is not limited thereto.
  • the mobile platform with the electronic device 900 can measure the external environment, for example, measure the distance between the mobile platform and obstacles for purposes such as obstacle avoidance, and perform two-dimensional or three-dimensional mapping of the external environment.
  • the electronic device 900 includes one or more processors 920 and one or more memories 910, and the one or more processors 920 work together or individually.
  • The electronic device 900 may further include at least one of an input device (not shown), an output device (not shown) and an image sensor (not shown), and these components are interconnected through a bus system and/or other connection mechanisms (not shown).
  • the memory 910 is used for storing processor-executable program instructions, for example, for storing corresponding steps and program instructions for realizing the method for constructing dense point cloud truth data according to the embodiment of the present invention.
  • The memory 910 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache).
  • the non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • The processor 920 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another form of processing unit with data processing capabilities and/or instruction execution capabilities, and can control other components in the electronic device 900 to perform desired functions.
  • the processor 920 can execute the instructions stored in the memory, so as to execute the method for constructing the dense point cloud ground truth data in the embodiment of the present invention described herein.
  • a processor can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware finite state machines (FSMs), digital signal processors (DSPs), or combinations thereof.
  • the processor includes a Field Programmable Gate Array (FPGA), wherein the operation circuit of the ranging device may be a part of the Field Programmable Gate Array (FPGA).
  • the input device may be a device used by a user to input an instruction, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
  • the output device can output various information (such as images or sounds) to the outside (such as a user), and can include one or more of a display, a speaker, etc., and the output device can be used to output dense point cloud true value data.
  • the communication interface (not shown) is used to communicate with other devices, including wired or wireless communication.
  • the laser ranging device can access wireless networks based on communication standards, such as WiFi, 2G, 3G, 4G, 5G, or a combination thereof.
  • the communication interface receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication interface further includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology and other technologies.
  • when the program instructions stored in the memory 910 are executed by the processor 920, the processor 920 is configured to: acquire image data collected by the camera, point cloud data collected by at least two distance measuring devices, and pose data of the camera and the distance measuring devices collected by the inertial measurement device, where the camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlap; acquire time information of the image data, the point cloud data and the pose data, and time-align the image data, the point cloud data and the pose data according to the time information; calibrate the camera, the distance measuring devices and the inertial measurement device to obtain the spatial transformation relationship between the image data, the point cloud data and the pose data, and spatially align the image data, the point cloud data and the pose data according to the spatial transformation relationship; extract target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and perform distortion correction on the motion distortion in the target point cloud data according to changes in the pose data to obtain corrected target point cloud data; perform noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data; and output dense point cloud truth data, the dense point cloud truth data including the noise-filtered target point cloud data and the spatially aligned image data.
  • the electronic device 900 in the embodiment of the present invention can obtain high-precision, dense point cloud truth data.
  • an embodiment of the present invention also provides a computer storage medium on which a computer program is stored.
  • one or more computer program instructions can be stored on the computer-readable storage medium, and the processor can execute the stored program instructions to realize the functions of the embodiments of the present invention described herein and/or other desired functions, for example, to execute the corresponding steps of the method for constructing dense point cloud truth data according to the embodiments of the present invention; various application programs and various data, such as data used and/or generated by the application programs, can also be stored in the computer-readable storage medium.
  • the computer storage medium may include, for example, a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disk Read Only Memory (CD-ROM), USB memory, or any combination of the above storage media.
  • the computer-readable storage medium can be any combination of one or more computer-readable storage media. Since the computer program instructions stored in the computer storage medium are used to implement the method for constructing dense point cloud truth data according to the embodiments of the present invention, it also has the above-mentioned advantages.
  • all or part of the above may be implemented by software, hardware, firmware, or any combination thereof.
  • when implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present invention will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired means (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (such as infrared, radio, or microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (such as a floppy disk, hard disk or magnetic tape), an optical medium (such as a digital video disc (DVD)), or a semiconductor medium (such as a solid state disk (SSD)), etc.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all functions of some modules according to the embodiments of the present invention.
  • the present invention can also be implemented as an apparatus program (for example, a computer program and a computer program product) for performing a part or all of the methods described herein.
  • Such a program for realizing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals.
  • Such a signal may be downloaded from an Internet site, or provided on a carrier signal, or provided in any other form.

Abstract

A method and system for constructing dense point cloud truth value data and an electronic device. The method comprises: obtaining image data acquired by a camera, point cloud data acquired by at least two distance measurement apparatuses and pose data acquired by an inertial measurement apparatus, the described sensors being disposed on the same movable platform, and the field of view of the camera at least partially coinciding with the fields of view of the at least two distance measurement apparatuses; performing time alignment and space alignment on the image data, the point cloud data and the pose data; extracting target point cloud data within a preset time range, and performing distortion correction on a motion distortion in the target point cloud data according to a change in the pose data to obtain corrected target point cloud data; performing noise filtering processing on the corrected target point cloud data to obtain noise-filtered target point cloud data; and outputting dense point cloud truth value data, the dense point cloud truth value data comprising the noise-filtered target point cloud data and space-aligned image data. By means of the method, dense and high-accuracy point cloud truth value data can be constructed.

Description

Method, system and electronic device for constructing dense point cloud truth data
Specification
Technical Field
Embodiments of the present invention relate to the technical field of distance measurement, and more specifically, to a method, system and electronic device for constructing dense point cloud truth data.
Background Art
Distance measuring devices such as three-dimensional point cloud detection systems (for example, lidar) and laser rangefinders can detect the distance from a measured object to the distance measuring device by measuring the time of light propagation between the distance measuring device and the measured object, that is, the time of flight (TOF).
The point cloud data collected by a distance measuring device is sparse and unevenly distributed, and how to combine image information to construct complete, high-precision and dense point cloud data is an urgent problem to be solved. Deep learning is considered an effective solution: a specific network is trained with a large amount of data to obtain a network model that can generate dense point cloud data from dense images and sparse point cloud data. Deep learning requires a large amount of training data to improve the estimation accuracy of the network. Therefore, how to automatically construct dense point cloud truth data is one of the primary problems to be solved at present.
Summary of the Invention
A series of concepts in simplified form are introduced in this Summary, which will be described in further detail in the Detailed Description. This Summary is not intended to identify key features or essential technical features of the claimed technical solution, nor is it intended to determine the protection scope of the claimed technical solution.
In view of the deficiencies of the prior art, a first aspect of the embodiments of the present invention provides a method for constructing dense point cloud truth data, including:
acquiring image data collected by a camera, point cloud data collected by at least two distance measuring devices, and pose data of the camera and the distance measuring devices collected by an inertial measurement device, wherein the camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlap;
acquiring time information of the image data, the point cloud data and the pose data, and time-aligning the image data, the point cloud data and the pose data according to the time information;
calibrating the camera, the distance measuring devices and the inertial measurement device to obtain a spatial transformation relationship between the image data, the point cloud data and the pose data, and spatially aligning the image data, the point cloud data and the pose data according to the spatial transformation relationship;
extracting target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and performing distortion correction on motion distortion in the target point cloud data according to changes in the pose data to obtain corrected target point cloud data;
performing noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data;
outputting dense point cloud truth data, the dense point cloud truth data including the noise-filtered target point cloud data and the spatially aligned image data.
A second aspect of the embodiments of the present invention provides a system for constructing dense point cloud truth data. The system includes a camera, at least two distance measuring devices, an inertial measurement device and a processor. The camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlap.
The processor is configured to:
acquire the image data collected by the camera, the point cloud data collected by the at least two distance measuring devices, and the pose data of the camera and the distance measuring devices collected by the inertial measurement device;
acquire time information of the image data, the point cloud data and the pose data, and time-align the image data, the point cloud data and the pose data according to the time information;
calibrate the camera, the distance measuring devices and the inertial measurement device to obtain a spatial transformation relationship between the image data, the point cloud data and the pose data, and spatially align the image data, the point cloud data and the pose data according to the spatial transformation relationship;
extract target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and perform distortion correction on motion distortion in the target point cloud data according to changes in the pose data to obtain corrected target point cloud data;
perform noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data;
output dense point cloud truth data, the dense point cloud truth data including the noise-filtered target point cloud data and the spatially aligned image data.
A third aspect of the embodiments of the present invention provides an electronic device, including:
a memory for storing executable instructions; and
a processor for executing the instructions stored in the memory, so that the processor performs the method for constructing dense point cloud truth data described above.
The method, system and electronic device for constructing dense point cloud truth data according to the embodiments of the present invention construct a dense point cloud from point cloud data collected by multiple distance measuring devices, which avoids the depth inconsistency and moving-object blurring caused by inconsistent viewing angles; by performing time alignment, spatial alignment and noise filtering on the point cloud data collected by the multiple distance measuring devices, the accuracy of the dense point cloud truth data is improved.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
Fig. 1 is a schematic block diagram of a distance measuring device involved in an embodiment of the present invention;
Fig. 2 is a schematic diagram of an embodiment in which the distance measuring device involved in an embodiment of the present invention adopts a coaxial optical path;
Fig. 3 is a schematic diagram of a scanning pattern of a distance measuring device according to an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a method for constructing dense point cloud truth data according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a system for constructing dense point cloud truth data according to an embodiment of the present invention;
Fig. 6A shows target point cloud data before distortion correction of the motion distortion caused by the motion of the distance measuring device according to an embodiment of the present invention;
Fig. 6B shows target point cloud data after distortion correction of the motion distortion caused by the motion of the distance measuring device according to an embodiment of the present invention;
Fig. 7A shows a point cloud cluster before distortion correction of the motion distortion caused by the motion of a target object according to an embodiment of the present invention;
Fig. 7B shows a point cloud cluster after distortion correction of the motion distortion caused by the motion of a target object according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of dense point cloud truth data according to an embodiment of the present invention;
Fig. 9 is a schematic block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments of the present invention will be described in detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention, and it should be understood that the present invention is not limited by the exemplary embodiments described here. Based on the embodiments of the present invention described herein, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, numerous specific details are given in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without one or more of these details. In other examples, some technical features known in the art are not described in order to avoid confusion with the present invention.
It should be understood that the present invention can be implemented in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the terms "consist of" and/or "comprising", when used in this specification, specify the presence of the stated features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to thoroughly understand the present invention, detailed structures will be presented in the following description to explain the technical solutions proposed by the present invention. Alternative embodiments of the present invention are described in detail below; however, the present invention may also have other embodiments in addition to these detailed descriptions.
The laser ranging method provided by various embodiments of the present invention can be applied to a distance measuring device, which may be an electronic device such as a lidar or a laser rangefinder. In one embodiment, the distance measuring device is used to sense external environment information, for example, distance information, orientation information, reflection intensity information and speed information of environmental targets. The distance measuring device can detect the distance from a detection object to the distance measuring device by measuring the time of light propagation between the distance measuring device and the detection object, that is, the time of flight (TOF).
For ease of understanding, the working process of distance measurement will be described below by way of example with reference to the distance measuring device 100 shown in Fig. 1.
As shown in Fig. 1, the distance measuring device 100 may include a transmitting circuit 110, a receiving circuit 120, a sampling circuit 130 and an arithmetic circuit 140.
The transmitting circuit 110 may emit a light pulse sequence (for example, a laser pulse sequence). The receiving circuit 120 may receive the light pulse sequence reflected by the measured object, perform photoelectric conversion on the light pulse sequence to obtain an electrical signal, and output the processed electrical signal to the sampling circuit 130. The sampling circuit 130 may sample the electrical signal to obtain a sampling result. The arithmetic circuit 140 may determine the distance between the distance measuring device 100 and the measured object based on the sampling result of the sampling circuit 130.
Optionally, the distance measuring device 100 may further include a control circuit 150, which can control other circuits, for example, control the working time of each circuit and/or set parameters for each circuit.
It should be understood that although the distance measuring device shown in Fig. 1 includes one transmitting circuit, one receiving circuit, one sampling circuit and one arithmetic circuit for emitting one light beam for detection, the embodiments of the present invention are not limited thereto. The number of any of the transmitting circuit, the receiving circuit, the sampling circuit and the arithmetic circuit may also be at least two, for emitting at least two light beams along the same direction or along different directions; the at least two light beams may be emitted simultaneously or at different times. In one example, the light emitting chips in the at least two transmitting circuits are packaged in the same module. For example, each transmitting circuit includes a laser emitting chip, and the dies of the laser emitting chips in the at least two transmitting circuits are packaged together and accommodated in the same packaging space.
In some implementations, in addition to the circuits shown in Fig. 1, the distance measuring device 100 may further include a scanning module 160 for changing the propagation direction of at least one laser pulse sequence emitted by the transmitting circuit before it exits.
The module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130 and the arithmetic circuit 140, or the module including the transmitting circuit 110, the receiving circuit 120, the sampling circuit 130, the arithmetic circuit 140 and the control circuit 150, may be referred to as a ranging module, and this ranging module may be independent of other modules such as the scanning module 160.
A coaxial optical path may be used in the distance measuring device, that is, the light beam emitted by the distance measuring device and the reflected light beam share at least part of the optical path within the device. For example, after at least one laser pulse sequence emitted by the transmitting circuit changes its propagation direction and exits through the scanning module, the laser pulse sequence reflected by the measured object passes through the scanning module and then enters the receiving circuit. Alternatively, the distance measuring device may also adopt an off-axis optical path, that is, the emitted light beam and the reflected light beam are transmitted along different optical paths within the device. Fig. 2 shows a schematic diagram of an embodiment in which the distance measuring device of the present invention adopts a coaxial optical path.
The distance measuring device 200 includes a ranging module 210, and the ranging module 210 includes a transmitter 203 (which may include the above-mentioned transmitting circuit), a collimating element 204, a detector 205 (which may include the above-mentioned receiving circuit, sampling circuit and arithmetic circuit) and an optical path changing element 206. The ranging module 210 is used to emit a light beam, receive the return light, and convert the return light into an electrical signal. The transmitter 203 can be used to emit a light pulse sequence. In one embodiment, the transmitter 203 may emit a laser pulse sequence. Optionally, the laser beam emitted by the transmitter 203 is a narrow-bandwidth beam with a wavelength outside the visible light range. The collimating element 204 is arranged on the outgoing optical path of the transmitter and is used to collimate the light beam emitted from the transmitter 203 into parallel light that exits toward the scanning module. The collimating element is also used to converge at least part of the return light reflected by the measured object. The collimating element 204 may be a collimating lens or another element capable of collimating a light beam.
In the embodiment shown in Fig. 2, the transmitting optical path and the receiving optical path within the distance measuring device are combined before the collimating element 204 by the optical path changing element 206, so that the transmitting optical path and the receiving optical path can share the same collimating element and the optical path is more compact. In some other implementations, the transmitter 203 and the detector 205 may each use their own collimating element, and the optical path changing element 206 is arranged on the optical path after the collimating element.
In the embodiment shown in Fig. 2, since the beam aperture of the light beam emitted by the transmitter 203 is small and the beam aperture of the return light received by the distance measuring device is large, the optical path changing element can use a small-area reflector to combine the transmitting optical path and the receiving optical path. In some other implementations, the optical path changing element may also use a reflector with a through hole, where the through hole is used to transmit the outgoing light of the transmitter 203 and the reflector is used to reflect the return light to the detector 205. In this way, the blocking of the return light by the support of a small reflector can be reduced.
In the embodiment shown in Fig. 2, the optical path changing element is offset from the optical axis of the collimating element 204. In some other implementations, the optical path changing element may also be located on the optical axis of the collimating element 204.
The distance measuring device 200 also includes a scanning module 202. The scanning module 202 is placed on the outgoing optical path of the ranging module 210 and is used to change the transmission direction of the collimated light beam 219 emitted by the collimating element 204, project it to the external environment, and project the return light onto the collimating element 204. The return light is converged onto the detector 205 through the collimating element 204.
In one embodiment, the scanning module 202 may include at least one optical element for changing the propagation path of the light beam, where the optical element may change the propagation path by reflecting, refracting or diffracting the light beam. For example, the scanning module 202 includes a lens, a mirror, a prism, a grating, a liquid crystal, an optical phased array, or any combination of the above optical elements. In one example, at least some of the optical elements are moving, for example driven by a driving module, and the moving optical elements can reflect, refract or diffract the light beam in different directions at different times. In some embodiments, multiple optical elements of the scanning module 202 may rotate or vibrate around a common axis 209, and each rotating or vibrating optical element continuously changes the propagation direction of the incident light beam. In one embodiment, the multiple optical elements of the scanning module 202 may rotate at different rotational speeds or vibrate at different speeds. In another embodiment, at least some of the optical elements of the scanning module 202 may rotate at substantially the same rotational speed. In some embodiments, the multiple optical elements of the scanning module may also rotate around different axes, rotate in the same or different directions, or vibrate in the same or different directions, which is not limited here.
In one embodiment, the scanning module 202 includes a first optical element 214 and a driver 216 connected to the first optical element 214. The driver 216 is used to drive the first optical element 214 to rotate around the rotation axis 209, so that the first optical element 214 changes the direction of the collimated light beam 219. The first optical element 214 projects the collimated light beam 219 in different directions. In one embodiment, the angle between the rotation axis 209 and the direction of the collimated light beam 219 after being changed by the first optical element varies with the rotation of the first optical element 214. In one embodiment, the first optical element 214 includes a pair of opposite non-parallel surfaces through which the collimated light beam 219 passes. In one embodiment, the first optical element 214 includes a prism whose thickness varies along at least one radial direction. In one embodiment, the first optical element 214 includes a wedge prism that refracts the collimated light beam 219.
In one embodiment, the scanning module 202 further includes a second optical element 215, which rotates around the rotation axis 209 at a rotational speed different from that of the first optical element 214. The second optical element 215 is used to change the direction of the light beam projected by the first optical element 214. In one embodiment, the second optical element 215 is connected to another driver 217, which drives the second optical element 215 to rotate. The first optical element 214 and the second optical element 215 may be driven by the same or different drivers, so that their rotational speeds and/or rotation directions are different, thereby projecting the collimated light beam 219 to different directions in the external space and scanning a larger spatial range. In one embodiment, a controller 218 controls the drivers 216 and 217 to drive the first optical element 214 and the second optical element 215, respectively. The rotational speeds of the first optical element 214 and the second optical element 215 may be determined according to the area and pattern expected to be scanned in practical applications. The drivers 216 and 217 may include motors or other drivers.
In one embodiment, the second optical element 215 includes a pair of opposite non-parallel surfaces through which the light beam passes. In one embodiment, the second optical element 215 includes a prism whose thickness varies along at least one radial direction. In one embodiment, the second optical element 215 includes a wedge prism.
In one embodiment, the scanning module 202 further includes a third optical element (not shown) and a driver for driving the third optical element to move. Optionally, the third optical element includes a pair of opposite non-parallel surfaces through which the light beam passes. In one embodiment, the third optical element includes a prism whose thickness varies along at least one radial direction. In one embodiment, the third optical element includes a wedge prism. At least two of the first, second and third optical elements rotate at different rotational speeds and/or in different rotation directions.
The rotation of each optical element in the scanning module 202 can project light in different directions, for example directions 211 and 213, so as to scan the space around the distance measuring device 200. Fig. 3 is a schematic diagram of a scanning pattern of the distance measuring device 200. It can be understood that when the speeds of the optical elements in the scanning module change, the scanning pattern also changes accordingly.
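For illustration only (not part of the claimed solution), the following sketch approximates the kind of scan pattern produced by two wedge prisms rotating at different speeds, using a first-order model in which each prism contributes a fixed angular deflection that rotates with the prism; all deflection amplitudes and rotation rates below are assumed example values.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy first-order model of a two-prism scanner: each wedge contributes a fixed
# angular deflection rotating with its prism, and the pointing direction is
# approximated as the vector sum of the two deflections.
delta1, delta2 = 1.0, 1.0                     # deflection amplitude of each prism (arbitrary units)
w1, w2 = 2 * np.pi * 7.0, -2 * np.pi * 5.0    # rotation rates (rad/s), opposite directions

t = np.linspace(0.0, 1.0, 20000)              # one second of scanning
x = delta1 * np.cos(w1 * t) + delta2 * np.cos(w2 * t)
y = delta1 * np.sin(w1 * t) + delta2 * np.sin(w2 * t)

plt.plot(x, y, linewidth=0.3)
plt.axis("equal")
plt.title("Scan pattern of a two-prism scanner (toy model)")
plt.show()
```

Changing the rotation rates w1 and w2 changes the shape and density of the pattern, consistent with the observation above that the scanning pattern varies with the speeds of the optical elements.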
When the light 211 projected by the scanning module 202 hits the detection object 201, part of the light is reflected by the detection object 201 back toward the distance measuring device 200 in the direction opposite to the projected light 211. The return light 212 reflected by the detection object 201 passes through the scanning module 202 and then enters the collimating element 204.
The detector 205 is placed on the same side of the collimating element 204 as the transmitter 203, and is used to convert at least part of the return light passing through the collimating element 204 into an electrical signal.
In one embodiment, each optical element is coated with an anti-reflection film. Optionally, the thickness of the anti-reflection film is equal or close to the wavelength of the light beam emitted by the transmitter 203, which can increase the intensity of the transmitted light beam.
In one embodiment, a filter layer is coated on the surface of an element located on the beam propagation path in the distance measuring device, or a filter is arranged on the beam propagation path, for transmitting at least the wavelength band of the light beam emitted by the transmitter and reflecting other wavelength bands, so as to reduce the noise caused by ambient light at the receiver.
In some embodiments, the transmitter 203 may include a laser diode that emits nanosecond-level laser pulses. Further, the laser pulse receiving time may be determined, for example, by detecting the rising edge time and/or falling edge time of the electrical signal pulse. In this way, the distance measuring device 200 can calculate the TOF using the pulse receiving time information and the pulse emitting time information, so as to determine the distance from the detection object 201 to the distance measuring device 200.
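For illustration only, the time-of-flight relationship just described can be sketched as follows: the one-way distance is half the round-trip time multiplied by the speed of light. The function name and sample timings are assumptions, not part of the patent text.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(t_emit_s: float, t_receive_s: float) -> float:
    """Distance from one pulse: half the round-trip path travelled at the speed of light."""
    tof = t_receive_s - t_emit_s          # round-trip time of flight (s)
    return 0.5 * SPEED_OF_LIGHT * tof     # one-way distance (m)

# Example: a pulse returning 200 ns after emission corresponds to roughly 30 m.
print(tof_distance(0.0, 200e-9))  # ~29.98 m
```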
The distance and orientation detected by the distance measuring device 200 can be used for remote sensing, obstacle avoidance, surveying and mapping, modeling, navigation, and the like. In one embodiment, the distance measuring device of the embodiments of the present invention can be applied to a movable platform and installed on the movable platform body. The movable platform with the distance measuring device can measure the external environment, for example, measure the distance between the movable platform and an obstacle for purposes such as obstacle avoidance, and perform two-dimensional or three-dimensional mapping of the external environment. In some embodiments, the movable platform includes at least one of an unmanned aerial vehicle, an automobile, a remote control car, a robot and a camera. When the distance measuring device is applied to an unmanned aerial vehicle, the movable platform body is the fuselage of the unmanned aerial vehicle. When the distance measuring device is applied to an automobile, the movable platform body is the body of the automobile; the automobile may be an autonomous or semi-autonomous vehicle, which is not limited here. When the distance measuring device is applied to a remote control car, the movable platform body is the body of the remote control car. When the distance measuring device is applied to a robot, the movable platform body is the robot.
Since a single distance measuring device observes foreground and background from different viewing angles, the measured depths are inconsistent; if dense point cloud truth data is constructed based on a single distance measuring device with a moving viewing angle, it is impossible or difficult to resolve the depth consistency problem across viewing angles. In addition, point cloud data collected from different viewing angles has a large depth measurement error for moving targets, which blurs the surface of the moving target and causes severe distortion.
To this end, an embodiment of the present invention proposes a method for constructing dense point cloud truth data based on multiple distance measuring devices. Fig. 4 shows a schematic flowchart of a method 400 for constructing dense point cloud truth data according to an embodiment of the present invention. As shown in Fig. 4, the method 400 includes the following steps:
In step S410, image data collected by a camera, point cloud data collected by at least two distance measuring devices, and pose data of the camera and the distance measuring devices collected by an inertial measurement device are acquired, where the camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlap;
In step S420, time information of the image data, the point cloud data and the pose data is acquired, and the image data, the point cloud data and the pose data are time-aligned according to the time information;
In step S430, the camera, the distance measuring devices and the inertial measurement device are calibrated to obtain a spatial transformation relationship between the image data, the point cloud data and the pose data, and the image data, the point cloud data and the pose data are spatially aligned according to the spatial transformation relationship;
In step S440, target point cloud data within a preset time range is extracted from the point cloud data transformed into the same spatial coordinate system, and distortion correction is performed on the motion distortion in the target point cloud data according to changes in the pose data to obtain corrected target point cloud data;
In step S450, noise filtering is performed on the corrected target point cloud data to obtain noise-filtered target point cloud data;
In step S460, dense point cloud truth data is output, the dense point cloud truth data including the noise-filtered target point cloud data and the spatially aligned image data.
The method 400 for constructing dense point cloud truth data according to the embodiment of the present invention constructs dense point cloud data based on point cloud data collected by multiple distance measuring devices, which avoids the depth inconsistency and moving-target blurring caused by the inconsistent viewing angles of a single distance measuring device.
Fig. 5 shows a system 500 for constructing dense point cloud truth data according to an embodiment of the present invention. As shown in Fig. 5, a camera 510 and at least two distance measuring devices 520 are arranged on the same movable platform, and the fields of view of the camera 510 and the at least two distance measuring devices 520 at least partially overlap. The movable platform includes an automobile, a remote control car, a robot, an unmanned aerial vehicle, and the like. In order to better simulate real application scenarios, while the movable platform keeps moving, the camera 510 is controlled to collect image data, the at least two distance measuring devices 520 are controlled to collect point cloud data, and the inertial measurement device 530 is controlled to collect pose data at the same time. The point cloud data collected by the distance measuring devices 520 is a collection of a large number of point cloud points, each of which contains at least the three-dimensional coordinate information of an object; the image collected by the camera is a two-dimensional image (for example, a grayscale image or an RGB image) that can provide key features such as the color and texture of objects.
In addition, an inertial measurement device 530 is also mounted on the movable platform. The inertial measurement device 530 may be a high-precision inertial navigation system capable of obtaining a high-precision position and attitude with 6 degrees of freedom. Exemplarily, the inertial measurement device 530 includes but is not limited to a gyroscope and an accelerometer; the gyroscope is used to establish the navigation coordinate system, stabilize the measurement axes of the accelerometer in this coordinate system, and give the heading and attitude angles, while the accelerometer is used to measure the acceleration of the carrier, the velocity is obtained by integrating the acceleration, and the travelled distance is obtained by integrating the velocity. The inertial measurement device 530 may be the inertial measurement device provided on the movable platform itself. Since the inertial measurement device 530, the camera 510, the distance measuring devices 520 and other sensors are all mounted on the movable platform, the pose data collected by the inertial measurement device 530 can be understood as the pose data of the camera 510 and the distance measuring devices 520, and can be used to correct the point cloud distortion caused by the motion of the distance measuring devices 520.
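For illustration only, the integration relationship mentioned above (acceleration to velocity, velocity to distance) can be sketched numerically as follows; the sampling rate and the constant acceleration are assumed example values.

```python
import numpy as np

# Numerical integration of accelerometer samples: v(t) = integral of a(t) dt,
# s(t) = integral of v(t) dt. Values are illustrative only.
dt = 0.01                                   # 100 Hz accelerometer samples
accel = np.full(200, 0.5)                   # 2 seconds of constant 0.5 m/s^2 acceleration

velocity = np.cumsum(accel) * dt            # velocity from acceleration
distance = np.cumsum(velocity) * dt         # travelled distance from velocity

print(velocity[-1])   # ~1.0 m/s after 2 s (analytically a*t)
print(distance[-1])   # ~1.0 m after 2 s (analytically 0.5*a*t^2)
```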
In step S420, time information of the image data, the point cloud data and the pose data is acquired, and the image data, the point cloud data and the pose data are time-aligned according to the time information. The time information includes the timestamps of the image data, the point cloud data and the pose data. Time alignment means finding the image data, point cloud data and pose data of the same moment, so that the distortion of the point cloud data can subsequently be corrected according to the pose data and matching image data and point cloud data can be obtained, thereby forming dense point cloud truth data that can be used for training a network model.
Exemplarily, the image data, the point cloud data and the pose data are data that have been time-synchronized by a data synchronization system 540. After the data synchronization system 540 acquires the data collected by the camera 510, the distance measuring devices 520 and the inertial measurement device 530, it unifies the timestamps of the data from different time domains. The data synchronization system 540 then transmits the time-synchronized data to a processor 550, and the processor 550 time-aligns the image data, the point cloud data and the pose data according to their time information. Specifically, the timestamp of the point cloud data can be acquired, and the image data and pose data of the corresponding moment can be found according to the timestamp, where the pose data may include the pose data corresponding to each moment, and the image data may include the image data corresponding to the target moment.
Exemplarily, time synchronization may be performed using GPS (Global Positioning System), using the Ethernet-based IEEE 1588 clock synchronization protocol, or based on PPS (Pulse per Second) signals, and so on. Of course, the above time synchronization methods are merely examples and not limitations, and any suitable time synchronization method can be applied to the method 400 for constructing dense point cloud truth data according to the embodiment of the present invention.
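For illustration only, a minimal sketch of timestamp-based alignment is given below: for each point cloud timestamp, the nearest image sample and pose sample are selected. The 5 ms tolerance and the synthetic timestamps are assumptions rather than values from the patent.

```python
import numpy as np

def nearest_index(query_ts: np.ndarray, ref_ts: np.ndarray) -> np.ndarray:
    """Index of the closest reference timestamp for every query timestamp."""
    idx = np.searchsorted(ref_ts, query_ts)
    idx = np.clip(idx, 1, len(ref_ts) - 1)
    left, right = ref_ts[idx - 1], ref_ts[idx]
    return np.where(query_ts - left < right - query_ts, idx - 1, idx)

cloud_ts = np.array([0.100, 0.200, 0.300])      # point cloud frame timestamps (s)
image_ts = np.arange(0.0, 0.5, 1 / 30)          # ~30 Hz camera
pose_ts = np.arange(0.0, 0.5, 1 / 200)          # 200 Hz inertial measurement device

img_idx = nearest_index(cloud_ts, image_ts)
pose_idx = nearest_index(cloud_ts, pose_ts)

# Optionally reject pairs whose time offset exceeds a tolerance (here 5 ms).
ok = np.abs(image_ts[img_idx] - cloud_ts) < 0.005
print(img_idx, pose_idx, ok)
```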
Since the data output by each sensor are referenced to that sensor's own coordinate system, the data output by different sensors correspond to different coordinate systems. Therefore, after the time alignment described above, step S430 is performed: the camera, the distance measuring devices and the inertial measurement device are calibrated to obtain the spatial transformation relationships between the images, the point cloud data and the pose data, and the image data, the point cloud data and the pose data are spatially aligned according to these spatial transformation relationships. Calibrating the different sensors means calibrating their extrinsic parameters; spatial alignment means unifying the data of the multiple sensors into a single spatial coordinate system. The calibration may be performed in real time or in advance.
Specifically, calibrating the camera, the distance measuring devices and the inertial measurement device includes calibrating the at least two distance measuring devices with respect to each other, calibrating the distance measuring devices with respect to the camera, and calibrating the distance measuring devices with respect to the inertial measurement device. Through these calibration algorithms, the multi-sensor data are unified into one spatial coordinate system, which facilitates the subsequent distortion correction of the point cloud data and the acquisition of point cloud data with consistent depth.
Methods for calibrating the distance measuring devices, the camera and the inertial measurement device include, but are not limited to, manual calibration, semi-automatic calibration and fully automatic calibration. Taking automatic calibration as an example, specific markers can be placed in front of the distance measuring devices and the camera in advance, feature extraction and feature matching are performed on the point cloud data and the images, and the extrinsic parameters between the distance measuring devices and the camera are then calibrated based on the feature matching results. Taking manual calibration as an example, a user interaction interface can be provided, which includes a parameter input area and a display area; the parameter input area allows the user to enter the extrinsic parameters of the sensors, and the display area shows the data collected by each sensor, so that the user can adjust the extrinsic parameters of each sensor according to the displayed data. For example, the point cloud data output by each distance measuring device can be repositioned in space through the extrinsic parameters of the corresponding device. By observing the display area, the user can determine whether the position-transformed point cloud data lie in the same coordinate system; if not, the user can adjust the extrinsic parameters of at least one distance measuring device on the user interaction interface until the point cloud data detected by the respective distance measuring devices, after transformation, lie in the same coordinate system.
In one embodiment, the distance measuring devices include a first distance measuring device and at least one second distance measuring device, and the spatial coordinate system of the first point cloud data collected by the first distance measuring device serves as the reference coordinate system. When calibrating the at least two distance measuring devices, each second distance measuring device can be calibrated with respect to the first distance measuring device to obtain the spatial transformation relationship between the second point cloud data collected by each second distance measuring device and the first point cloud data collected by the first distance measuring device; that is, when unifying the coordinate systems of the point cloud data, the point cloud data collected by all distance measuring devices are transformed into the spatial coordinate system of the first point cloud data. When calibrating the camera and the distance measuring devices, the camera can be calibrated with respect to the first distance measuring device to obtain the spatial transformation relationship between the images and the first point cloud data, so that the images are transformed into the spatial coordinate system of the first point cloud data. When calibrating the inertial measurement device and the distance measuring devices, the inertial measurement device can be calibrated with respect to the first distance measuring device to obtain the spatial transformation relationship between the pose data and the first point cloud data, so that the pose data are transformed into the spatial coordinate system of the first point cloud data. The first distance measuring device may be any one of the at least two distance measuring devices. In other embodiments, the coordinate system of the image data, the coordinate system of the inertial measurement data, or the coordinate system of the movable platform may also serve as the reference coordinate system, as long as the multi-sensor data are unified into the same coordinate system.
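As an illustration of this unification step, the following sketch applies calibrated extrinsic transforms to merge the point clouds of several distance measuring devices into the reference (first device) coordinate system. It is a minimal sketch under stated assumptions, not the patented implementation: the 4x4 homogeneous matrices, the NumPy array layout and the function names are introduced here for illustration only.

    import numpy as np

    def transform_points(points, T):
        # Apply a 4x4 homogeneous transform T to an (N, 3) array of points.
        homog = np.hstack([points, np.ones((points.shape[0], 1))])
        return (homog @ T.T)[:, :3]

    def unify_point_clouds(clouds, extrinsics):
        # Merge point clouds from several rangefinders into the reference frame.
        # clouds     : list of (N_i, 3) arrays, one per distance measuring device
        # extrinsics : list of 4x4 transforms mapping each device frame into the
        #              first (reference) device frame; extrinsics[0] is the identity
        merged = [transform_points(c, T) for c, T in zip(clouds, extrinsics)]
        return np.vstack(merged)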
After the point cloud data collected by the multiple distance measuring devices are unified into the same spatial coordinate system, dense point cloud data can be obtained. However, a distance measuring device computes point coordinates in its own coordinate system, and in this application the distance measuring devices are mounted on a movable platform; while the platform moves, point cloud data collected at different moments therefore refer to different reference coordinate systems, which distorts the point cloud data. To solve this problem, the motion information of the distance measuring devices during acquisition must be obtained, and the coordinates of every point are transformed to a target time point by a motion compensation algorithm, so that the dense point cloud truth data are constructed from the motion-compensated point cloud data.
Therefore, after the time and spatial alignment, step S440 is performed: target point cloud data within a preset time range are extracted from the point cloud data that have been transformed into the same spatial coordinate system, and the motion distortion in the target point cloud data is corrected according to the change of the pose data, to obtain corrected target point cloud data. In one example, the preset time range may be longer than the acquisition time of a single frame of point cloud data, in order to obtain dense point cloud data. In other examples, since the embodiments of the present invention employ multiple distance measuring devices to collect point cloud data jointly, dense point cloud data can be obtained without an excessively long time, so the preset time range may also be equal to or shorter than the acquisition time of a single frame of point cloud data, thereby reducing motion distortion. Exemplarily, the preset time range is greater than or equal to 100 ms.
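The extraction of target point cloud data over a preset time range can be pictured as a simple timestamp filter over the spatially aligned points. The sketch below assumes per-point timestamps are available as a NumPy array and uses the 100 ms figure mentioned above as a default window; the names and data layout are assumptions for illustration only.

    import numpy as np

    def extract_window(points, timestamps, t_start, window=0.1):
        # Select the points acquired within [t_start, t_start + window).
        # points     : (N, 3) array of spatially aligned point cloud data
        # timestamps : (N,) array of per-point acquisition times in seconds
        # window     : window length; 0.1 s (100 ms) follows the example in the text
        mask = (timestamps >= t_start) & (timestamps < t_start + window)
        return points[mask], timestamps[mask]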
After the target point cloud data within the preset time range are extracted, the time- and space-aligned pose data are applied to the target point cloud data to correct the motion distortion caused by the motion of the distance measuring devices themselves, i.e. to motion-compensate the target point cloud data. Motion distortion arises because the points in the target point cloud data are not collected at a single moment but continuously over the preset time range; during acquisition, the distance measuring devices move with the movable platform, and since each point measures the distance between an object and the device, the depth measured for the same object at different times is inconsistent, which distorts the point cloud data. Figures 6A and 6B respectively show the target point cloud data before and after distortion correction. As can be seen from Figures 6A and 6B, distortion correction effectively reduces the smearing of the point cloud data.
Specifically, since the pose data and the target point cloud data lie in the same spatial coordinate system, the motion direction and motion speed of the target point cloud data within the preset time range can be obtained from the change of the pose data, and the motion distortion caused by the motion of the distance measuring devices can then be corrected based on this motion direction and motion speed. To obtain an accurate motion direction and motion speed, pose data over a longer time span may be extracted to determine the motion direction and motion speed of the point cloud data.
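One way to read the motion direction and motion speed off the pose data is to take the relative transform between the poses at the two ends of the window. The sketch below assumes the pose data have already been converted to rotation matrices and positions in the common coordinate system; it is illustrative only and is not the specific estimator of the embodiment.

    import numpy as np

    def relative_motion(R0, p0, R1, p1):
        # Relative motion of the rangefinder between the start and end of the window.
        # R0, p0 : rotation matrix and position at the start of the window
        # R1, p1 : rotation matrix and position at the end of the window
        # Returns the accumulated rotation and translation expressed in the start frame.
        delta_R = R0.T @ R1
        delta_d = R0.T @ (p1 - p0)
        return delta_R, delta_d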
Specifically, corresponding correction coefficients can be assigned to the target point cloud data at different moments according to the time information of the target point cloud data. A correction coefficient may also be called a distortion coefficient: the smaller the distortion, the smaller the correction coefficient and the smaller the required correction; the larger the distortion, the larger the correction coefficient and the larger the required correction. Once the correction coefficients are determined, the target point cloud data at different moments can be corrected according to the corresponding correction coefficients and the motion direction and motion speed of the target point cloud data.
When determining the correction coefficients, a reference time point within the preset time range is selected, and correction coefficients are assigned to the points collected at different moments according to the interval between each point's acquisition time and the reference time point. Under a short-term constant-velocity assumption, the correction coefficients of all points in the target point cloud data can be divided uniformly by time and normalized to [0, 1], so that each point is assigned a correction coefficient according to its time t (t0–tn). The closer a point is to the reference time point, the smaller its distortion and the closer its correction coefficient is to 0; conversely, the farther it is from the reference time point, the larger its distortion and the closer its correction coefficient is to 1. After the distortion coefficients are assigned, the motion interpolation estimate for the corresponding moment is applied to obtain the final distortion correction result.
For example, suppose the motion direction of the target point cloud data is Δq and the motion distance is Δd. For the i-th point of the target point cloud data, its original coordinate point(i) before distortion correction and its timestamp ti are obtained, and a correction coefficient s is assigned according to ti, where s = 1 − (ti − t0)/(tn − t0). The rotation interpolation of the i-th point is then slerp(s, Δq) and the translation interpolation is s·Δd, where slerp denotes spherical linear interpolation. Finally, the corrected coordinate of the i-th point is slerp(s, Δq)·point(i) + s·Δd.
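A compact implementation of this per-point rule might look as follows, with Δq represented as a unit quaternion (w, x, y, z) and Δd as a 3-vector. The helper names and the quaternion convention are assumptions, but the arithmetic follows the formula above: s = 1 − (ti − t0)/(tn − t0), then slerp(s, Δq) applied to the point, plus s·Δd.

    import numpy as np

    def slerp(s, q):
        # Spherical linear interpolation between the identity quaternion and q = (w, x, y, z).
        q = q / np.linalg.norm(q)
        theta = np.arccos(np.clip(q[0], -1.0, 1.0))  # half rotation angle
        if theta < 1e-8:
            return np.array([1.0, 0.0, 0.0, 0.0])
        return np.concatenate([[np.cos(s * theta)],
                               np.sin(s * theta) / np.sin(theta) * q[1:]])

    def quat_rotate(q, v):
        # Rotate vector v by unit quaternion q = (w, x, y, z).
        w, xyz = q[0], q[1:]
        return v + 2.0 * np.cross(xyz, np.cross(xyz, v) + w * v)

    def correct_point(point, t_i, t0, tn, delta_q, delta_d):
        # Motion compensation of one point following the interpolation rule in the text.
        s = 1.0 - (t_i - t0) / (tn - t0)      # correction coefficient in [0, 1]
        q_s = slerp(s, delta_q)               # interpolated rotation slerp(s, Δq)
        return quat_rotate(q_s, np.asarray(point)) + s * np.asarray(delta_d)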
As described above, correcting the motion distortion caused by the motion of the distance measuring devices means motion-compensating the target point cloud data within the preset time range, so that points collected at different moments are all compensated to the position corresponding to the target time point. In one embodiment, the target time point may be the start of the preset time range; correcting the motion distortion caused by the motion of the distance measuring devices then includes estimating, for every point of the target point cloud data, its position at the start of the preset time range from the pose data, thereby correcting the target point cloud data to the positions corresponding to the start of the preset time range. In other embodiments, the target point cloud data may also be corrected to the positions corresponding to the end of the preset time range, or to the positions corresponding to any other moment within the preset time range.
Further, since the target point cloud data are collected over a period of time, if moving objects are present in the field of view, the point cloud clusters of those moving objects contained in the point cloud data will also exhibit motion distortion. Therefore, after the distortion caused by the motion of the distance measuring devices has been corrected, the motion distortion caused by the motion of the target objects can also be corrected, to further improve the accuracy of the dense point cloud data. Specifically, the point cloud clusters corresponding to different target objects in the target point cloud data are first identified, the displacement information of the clusters at different moments is obtained, and the motion of the target objects is estimated from this displacement information. Then, based on the motion estimation results, the motion distortion of the point cloud clusters caused by the motion of the target objects is corrected. Here, the target point cloud data may be the target point cloud data already corrected, as described above, for the motion distortion caused by the motion of the distance measuring devices. In some embodiments, the motion distortion caused by the motion of the target objects may also be corrected first, followed by the motion distortion caused by the motion of the distance measuring devices.
Methods for detecting the point cloud clusters of target objects include, but are not limited to, scene flow algorithms, object detection algorithms and moving-object estimation algorithms based on clustering and segmentation; of course, any other suitable method may also be used to identify the point cloud clusters belonging to the target objects, and the embodiments of the present invention do not limit the specific method for identifying the clusters. After the motion speed of a target object is estimated, the motion distortion caused by its motion is corrected, which restores high-precision depth data for the target object.
Exemplarily, when obtaining the displacement information of the point cloud clusters at different moments, point cloud data over a longer time range can be extracted, from which the clusters of the moving objects are identified for motion estimation; this longer time range is longer than the preset time range, so that more accurate displacement information can be obtained.
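As a hypothetical illustration of this motion estimation step, the displacement of a cluster over the longer time range can be approximated by the displacement of its centroid between two observations, under the short-term constant-velocity assumption mentioned above. The sketch below is illustrative only; a real implementation would also need data association between the two observations of the same object.

    import numpy as np

    def estimate_cluster_motion(cluster_t0, cluster_t1, t0, t1):
        # Rough motion estimate of a moving-object cluster from two observations.
        # cluster_t0, cluster_t1 : (N, 3) and (M, 3) point clusters of the same object
        #                          observed at times t0 and t1 (after ego-motion correction)
        # Returns the centroid displacement and the corresponding velocity.
        displacement = cluster_t1.mean(axis=0) - cluster_t0.mean(axis=0)
        velocity = displacement / (t1 - t0)
        return displacement, velocity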
Then, based on the motion estimation results, the motion distortion of the point cloud clusters caused by the motion of the target objects is corrected. Similar to the distortion correction method for the target point cloud data described above, corresponding correction coefficients are first assigned to the clusters at different moments according to the time information of the clusters, and the clusters at different moments are then corrected according to their corresponding correction coefficients and the motion estimation results.
When determining the correction coefficients, a reference time point within the preset time range must likewise be selected, and correction coefficients are assigned to the points collected at different moments according to the interval between the acquisition time of each point in the cluster and the reference time point; the chosen reference time point should be the same as the one used for the distortion correction of the target point cloud data. Specifically, the correction coefficients of all points in a cluster can be divided uniformly by time, so that each point is assigned a correction coefficient according to its time: the closer to the reference time point, the closer the coefficient is to 0; the farther from the reference time point, the closer the coefficient is to 1. After the distortion coefficients are assigned, the motion interpolation estimate for the corresponding moment is applied to obtain the final distortion correction result.
Exemplarily, referring to Figures 7A and 7B and taking a vehicle as an example, Figure 7A shows the point cloud cluster of the vehicle before the motion distortion caused by the vehicle's motion is corrected, and Figure 7B shows the cluster of Figure 7A after distortion correction. The comparison of Figures 7A and 7B shows that correcting the motion distortion of the cluster noticeably reduces its smearing.
Since the embodiments of the present invention use multiple distance measuring devices to collect point cloud data jointly, and the devices face the same direction and are laid out close together, when a highly reflective object is present in the field of view, each distance measuring device may receive the reflected light of laser pulses emitted by the other devices, which produces noise points; these noise points have a non-negligible impact on the dense point cloud truth data. Therefore, in step S450, the noise points in the target point cloud data caused by crosstalk between different distance measuring devices are identified and filtered out, so as to improve the accuracy of the dense point cloud truth data.
Exemplarily, the point cloud data can be denoised using a nearest-neighbor noise filtering method or a depth-map noise filtering method. For example, if the number of neighboring points within a certain neighborhood of a point is smaller than a preset threshold, that point is an isolated point; because real measurement points are spatially continuous, isolated points are usually noise, and such points can therefore be filtered out as noise. In one embodiment, the same neighborhood range and preset threshold can be used for all points. In other embodiments, since the point clouds collected from nearby objects are relatively dense while those collected from distant objects are relatively sparse, and distant points are therefore more likely to be judged as noise, a larger neighborhood range or a smaller threshold can be used for more distant points.
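A minimal version of the nearest-neighbor filtering described here might count, for each point, the neighbors within a fixed radius and discard points whose count falls below the threshold. The radius and threshold values below are placeholders, and the use of SciPy's cKDTree is an implementation choice rather than part of the described method; a distance-dependent radius or threshold could be substituted for distant points, as noted above.

    import numpy as np
    from scipy.spatial import cKDTree

    def filter_isolated_points(points, radius=0.2, min_neighbors=5):
        # Remove isolated points that have too few neighbors within `radius`.
        # points        : (N, 3) array of the merged target point cloud
        # radius        : neighborhood radius in meters (placeholder value)
        # min_neighbors : a point is kept only if it has at least this many neighbors
        tree = cKDTree(points)
        # query_ball_point returns, for each point, the indices of its neighbors
        # (including the point itself), so subtract 1 from each count.
        counts = np.array([len(idx) - 1
                           for idx in tree.query_ball_point(points, radius)])
        return points[counts >= min_neighbors]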
Of course, the noise filtering in step S450 is not limited to the above methods, and the filtered noise is not limited to noise caused by crosstalk between different distance measuring devices; it also includes other types of noise, such as noise caused by sunlight, dust and internal reflections of the distance measuring devices.
Through the above steps, high-precision dense point cloud data are finally obtained. Then, in step S460, the dense point cloud truth data are output. The dense point cloud truth data include the target point cloud data after distortion correction and noise filtering and the spatially aligned image data; the two have matching features and together constitute the dense point cloud truth data, as shown in Figure 8. The image data may be the image data corresponding to the reference time point of the distortion correction. Once the dense point cloud truth data are obtained, they can be used as training samples for a deep learning model, to train a network model capable of generating dense point cloud data from dense images and sparse point cloud data.
In summary, the method 400 for constructing dense point cloud truth data according to the embodiments of the present invention constructs a dense point cloud from the point cloud data collected by multiple distance measuring devices, which avoids the depth inconsistency and moving-object blurring caused by inconsistent viewing angles; by performing time alignment, spatial alignment and noise filtering on the point cloud data collected by the multiple distance measuring devices, the accuracy of the dense point cloud truth data is improved.
Below, referring again to Figure 5, a system 500 for constructing dense point cloud truth data according to an embodiment of the present invention is described; the features of the foregoing method 400 for constructing dense point cloud truth data may be incorporated into this embodiment.
As shown in Figure 5, the system 500 for constructing dense point cloud truth data includes a camera 510, at least two distance measuring devices 520, an inertial measurement device 530 and a processor 550. The camera 510, the at least two distance measuring devices 520 and the inertial measurement device 530 are arranged on the same movable platform, and the fields of view of the camera 510 and the at least two distance measuring devices 520 at least partially overlap. The processor 550 is configured to: acquire the image data collected by the camera 510, the point cloud data collected by the at least two distance measuring devices 520, and the pose data of the camera 510 and the distance measuring devices 520 collected by the inertial measurement device, the fields of view of the camera 510 and the at least two distance measuring devices 520 at least partially overlapping; acquire the time information of the image data, the point cloud data and the pose data, and time-align the image data, the point cloud data and the pose data according to the time information; calibrate the camera 510, the distance measuring devices 520 and the inertial measurement device to obtain the spatial transformation relationships between the images, the point cloud data and the pose data, and spatially align the image data, the point cloud data and the pose data according to these spatial transformation relationships; extract target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and correct the motion distortion in the target point cloud data according to the change of the pose data, to obtain corrected target point cloud data; perform noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data; and output dense point cloud truth data, the dense point cloud truth data including the noise-filtered target point cloud data and the spatially aligned image data.
In some embodiments, the camera 510 and the at least two distance measuring devices 520 face the same direction. Preferably, the camera 510 and the at least two distance measuring devices 520 are laid out as close to one another as possible, so as to obtain the largest possible overlapping field of view and thereby collect point cloud data and image data within the same field of view to the greatest extent.
In some embodiments, the system 500 for constructing dense point cloud truth data further includes a data synchronization system 540, which is configured to: receive the image data collected by the camera 510, the point cloud data collected by the at least two distance measuring devices 520, and the pose data collected by the inertial measurement device 530; time-synchronize the image data, the point cloud data and the pose data; and send the time-synchronized image data, point cloud data and pose data to the processor 550.
In some embodiments, the distance measuring devices 520 include a first distance measuring device 520 and at least one second distance measuring device 520, and calibrating the camera 510, the distance measuring devices 520 and the inertial measurement device includes: calibrating each second distance measuring device 520 with respect to the first distance measuring device 520 to obtain the spatial transformation relationship between the second point cloud data collected by each second distance measuring device 520 and the first point cloud data collected by the first distance measuring device 520; calibrating the camera 510 with respect to the first distance measuring device 520 to obtain the spatial transformation relationship between the images and the first point cloud data; and calibrating the inertial measurement device with respect to the first distance measuring device 520 to obtain the spatial transformation relationship between the pose data and the first point cloud data. The spatial alignment includes transforming the image data, the point cloud data and the pose data into the spatial coordinate system of the first point cloud data.
In some embodiments, correcting the motion distortion in the target point cloud data according to the change of the pose data includes: obtaining the motion direction and motion speed of the target point cloud data within the preset time range according to the change of the pose data; and correcting the motion distortion in the target point cloud data caused by the motion of the distance measuring devices 520 according to the motion direction and motion speed.
Further, correcting the motion distortion in the target point cloud data caused by the motion of the distance measuring devices 520 includes: correcting the target point cloud data to the positions corresponding to the start of the preset time range.
Further, correcting the motion distortion in the target point cloud data caused by the motion of the distance measuring devices 520 includes: assigning corresponding correction coefficients to the target point cloud data at different moments according to the time information of the target point cloud data; and correcting the target point cloud data at different moments according to the corresponding correction coefficients, the motion direction and the motion speed.
In some embodiments, the processor 550 is further configured to: identify point cloud clusters corresponding to different target objects in the target point cloud data; obtain displacement information of the clusters at different moments and estimate the motion of the target objects according to the displacement information; and correct the motion distortion of the clusters caused by the motion of the target objects according to the motion estimation results.
In some embodiments, correcting the motion distortion of the point cloud clusters caused by the motion of the target objects according to the motion estimation results includes: assigning corresponding correction coefficients to the clusters at different moments according to the time information of the clusters; and correcting the clusters at different moments according to their corresponding correction coefficients and the motion estimation results.
In some embodiments, the noise filtering includes: identifying the noise points in the target point cloud data caused by crosstalk between different distance measuring devices 520, and filtering out these noise points.
For further details of the system 500 for constructing dense point cloud truth data according to the embodiments of the present invention, reference may be made to the description of the foregoing method 400 for constructing dense point cloud truth data, which is not repeated here.
The system 500 for constructing dense point cloud truth data according to the embodiments of the present invention constructs a dense point cloud from the point cloud data collected by multiple distance measuring devices, which avoids the depth inconsistency and moving-object blurring caused by inconsistent viewing angles; by performing time alignment, spatial alignment and noise filtering on the point cloud data collected by the multiple distance measuring devices, the accuracy of the dense point cloud truth data is improved.
Below, an electronic device 900 according to an embodiment of the present invention is described with reference to Figure 9; the features of the foregoing method 400 for constructing dense point cloud truth data may be incorporated into this embodiment. The electronic device 900 may be implemented as a computer, a server, a vehicle-mounted terminal or another electronic device. The electronic device 900 may be mounted on a movable platform together with the camera, the distance measuring devices and the inertial measurement device, but is not limited thereto. A movable platform equipped with the electronic device 900 can measure the external environment, for example measuring the distance between the platform and obstacles for purposes such as obstacle avoidance, and performing two-dimensional or three-dimensional surveying and mapping of the external environment.
The electronic device 900 includes one or more processors 920 and one or more memories 910, the one or more processors 920 working jointly or individually. Optionally, the electronic device 900 may further include at least one of an input device (not shown), an output device (not shown) and an image sensor (not shown), these components being interconnected through a bus system and/or other forms of connection mechanisms (not shown).
The memory 910 is used to store processor-executable program instructions, for example the corresponding steps and program instructions for implementing the method for constructing dense point cloud truth data according to the embodiments of the present invention. It may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like.
The processor 920 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another form of processing unit having data processing capability and/or instruction execution capability, and may control other components in the electronic device 900 to perform desired functions. The processor 920 can execute the instructions stored in the memory to perform the method for constructing dense point cloud truth data according to the embodiments of the present invention described herein. For example, the processor can include one or more embedded processors, processor cores, microprocessors, logic circuits, hardware finite state machines (FSM), digital signal processors (DSP) or combinations thereof. In this embodiment, the processor includes a field-programmable gate array (FPGA), and the arithmetic circuit of the distance measuring device may be part of the field-programmable gate array (FPGA).
The input device may be a device used by a user to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen and the like. The output device may output various information (such as images or sounds) to the outside (for example, to a user), and may include one or more of a display, a speaker and the like; the output device may be used to output the dense point cloud truth data.
The communication interface (not shown) is used to communicate with other devices, by wired or wireless communication. The laser distance measuring device can access wireless networks based on communication standards, such as WiFi, 2G, 3G, 4G, 5G or a combination thereof. In an exemplary embodiment, the communication interface receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication interface further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
Specifically, when the program instructions stored in the memory 910 are executed by the processor 920, the processor 920 is configured to: acquire the image data collected by the camera, the point cloud data collected by the at least two distance measuring devices, and the pose data of the camera and the distance measuring devices collected by the inertial measurement device, the camera, the at least two distance measuring devices and the inertial measurement device being arranged on the same movable platform and the fields of view of the camera and the at least two distance measuring devices at least partially overlapping; acquire the time information of the image data, the point cloud data and the pose data, and time-align the image data, the point cloud data and the pose data according to the time information; calibrate the camera, the distance measuring devices and the inertial measurement device to obtain the spatial transformation relationships between the images, the point cloud data and the pose data, and spatially align the image data, the point cloud data and the pose data according to these spatial transformation relationships; extract target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and correct the motion distortion in the target point cloud data according to the change of the pose data, to obtain corrected target point cloud data; perform noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data; and output dense point cloud truth data, the dense point cloud truth data including the noise-filtered target point cloud data and the spatially aligned image data.
The electronic device 900 according to the embodiments of the present invention can obtain dense, high-precision point cloud truth data.
In addition, an embodiment of the present invention also provides a computer storage medium on which a computer program is stored. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor may execute the program instructions stored in the memory to implement the functions (implemented by the processor) of the embodiments of the present invention described herein and/or other desired functions, for example to perform the corresponding steps of the method for constructing dense point cloud truth data according to the embodiments of the present invention; various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
For example, the computer storage medium may include a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, or any combination of the above storage media. The computer-readable storage medium may be any combination of one or more computer-readable storage media. Since the computer program instructions stored in the computer storage medium are used to implement the method for constructing dense point cloud truth data according to the embodiments of the present invention, the storage medium also has the advantages described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware or any other combination thereof. When implemented using software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g. infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g. a floppy disk, hard disk or magnetic tape), an optical medium (e.g. a digital video disc (DVD)), a semiconductor medium (e.g. a solid state disk (SSD)), or the like.
Although example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the above example embodiments are merely exemplary and are not intended to limit the scope of the present invention thereto. A person of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present invention. All such changes and modifications are intended to be included within the scope of the present invention as claimed in the appended claims.
A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division of the units is only a division by logical function, and other divisions are possible in actual implementation, for example multiple units or components may be combined or integrated into another device, or some features may be omitted or not performed.
Numerous specific details are set forth in the description provided herein. However, it will be understood that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the present disclosure and to aid the understanding of one or more of the various inventive aspects, in the description of the exemplary embodiments of the present invention, various features of the invention are sometimes grouped together in a single embodiment, figure or description thereof. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the corresponding claims reflect, the inventive point lies in that the corresponding technical problem can be solved with fewer than all the features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of the present invention.
A person skilled in the art will understand that, except where such features are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
In addition, a person skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the present invention and form different embodiments. For example, in the claims, any one of the claimed embodiments may be used in any combination.
The component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. A person skilled in the art should understand that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some modules according to the embodiments of the present invention. The present invention may also be implemented as a device program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present invention may be stored on a computer-readable medium or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the present invention, and that a person skilled in the art may devise alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third and the like does not indicate any order; these words may be interpreted as names.

Claims (21)

  1. A method for constructing dense point cloud truth data, characterized in that the method comprises:
    acquiring image data collected by a camera, point cloud data collected by at least two distance measuring devices, and pose data of the camera and the distance measuring devices collected by an inertial measurement device, the camera, the at least two distance measuring devices and the inertial measurement device being arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlapping;
    acquiring time information of the image data, the point cloud data and the pose data, and time-aligning the image data, the point cloud data and the pose data according to the time information;
    calibrating the camera, the distance measuring devices and the inertial measurement device to obtain spatial transformation relationships between the images, the point cloud data and the pose data, and spatially aligning the image data, the point cloud data and the pose data according to the spatial transformation relationships;
    extracting target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and performing distortion correction on the motion distortion in the target point cloud data according to the change of the pose data, to obtain corrected target point cloud data;
    performing noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data;
    outputting dense point cloud truth data, the dense point cloud truth data comprising the noise-filtered target point cloud data and the spatially aligned image data.
  2. The method according to claim 1, characterized in that the camera and the at least two distance measuring devices face the same direction.
  3. The method according to claim 1 or 2, characterized in that the image data, the point cloud data and the pose data are time-synchronized data.
  4. The method according to any one of claims 1-3, characterized in that the distance measuring devices comprise a first distance measuring device and at least one second distance measuring device, and calibrating the camera, the distance measuring devices and the inertial measurement device comprises:
    calibrating each second distance measuring device with respect to the first distance measuring device, to obtain a spatial transformation relationship between second point cloud data collected by each second distance measuring device and first point cloud data collected by the first distance measuring device;
    calibrating the camera with respect to the first distance measuring device, to obtain a spatial transformation relationship between the images and the first point cloud data;
    calibrating the inertial measurement device with respect to the first distance measuring device, to obtain a spatial transformation relationship between the pose data and the first point cloud data;
    the spatial alignment comprising: transforming the image data, the point cloud data and the pose data into a spatial coordinate system of the first point cloud data.
  5. The method according to any one of claims 1-4, characterized in that performing distortion correction on the motion distortion in the target point cloud data according to the change of the pose data comprises:
    obtaining a motion direction and a motion speed of the target point cloud data within the preset time range according to the change of the pose data;
    performing distortion correction on the motion distortion in the target point cloud data caused by the motion of the distance measuring devices according to the motion direction and the motion speed.
  6. The method according to claim 5, characterized in that performing distortion correction on the motion distortion in the target point cloud data caused by the motion of the distance measuring devices comprises:
    correcting the target point cloud data to positions corresponding to the starting moment of the preset time range.
  7. The method according to claim 5 or 6, characterized in that performing distortion correction on the motion distortion in the target point cloud data caused by the motion of the distance measuring devices comprises:
    assigning corresponding correction coefficients to the target point cloud data at different moments according to time information of the target point cloud data;
    performing distortion correction on the target point cloud data at different moments according to the correction coefficients corresponding to the target point cloud data at different moments, the motion direction and the motion speed.
8. The method according to any one of claims 1-7, further comprising:
    identifying point cloud clusters corresponding to different target objects in the target point cloud data;
    acquiring displacement information of the point cloud clusters at different moments, and performing motion estimation on the target objects according to the displacement information;
    performing, according to the result of the motion estimation, distortion correction on the motion distortion of the point cloud clusters caused by the motion of the target objects.
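Claim 8 does not name a clustering method; DBSCAN with centroid tracking is one common choice, sketched here as an assumption rather than as the claimed implementation:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # clustering choice is an assumption, not mandated by the claim

def cluster_objects(points_xyz, eps=0.5, min_samples=10):
    """Group points into per-object clusters; DBSCAN label -1 marks unclustered noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    return {lbl: points_xyz[labels == lbl] for lbl in set(labels) if lbl != -1}

def estimate_object_velocity(cluster_prev, cluster_curr, dt):
    """Coarse motion estimate from the displacement of cluster centroids between two moments."""
    return (cluster_curr.mean(axis=0) - cluster_prev.mean(axis=0)) / dt
```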
9. The method according to claim 8, wherein the performing distortion correction on the motion distortion of the point cloud clusters caused by the motion of the target objects according to the result of the motion estimation comprises:
    assigning corresponding correction coefficients to the point cloud clusters at different moments according to the time information of the point cloud clusters;
    performing distortion correction on the point cloud clusters at different moments according to the correction coefficients corresponding to the point cloud clusters at different moments and the result of the motion estimation.
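The per-object correction in claim 9 mirrors the ego-motion case: the correction coefficient again grows with the point's time offset, but the displacement now comes from the object's estimated velocity. A short sketch reusing the conventions of the previous examples (numpy arrays in, numpy arrays out):

```python
def undistort_cluster(cluster_xyz, timestamps, t0, t1, object_velocity):
    """Shift each cluster point back along the object's estimated velocity,
    weighted by how late in the window the point was sampled."""
    coeff = (timestamps - t0) / (t1 - t0)
    return cluster_xyz - coeff[:, None] * (t1 - t0) * object_velocity[None, :]
```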
10. The method according to any one of claims 1-9, wherein the noise filtering process comprises: identifying noise points in the target point cloud data caused by crosstalk between different distance measuring devices, and filtering out the noise points.
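Claim 10 leaves the filter itself open. Crosstalk returns tend to appear as isolated points, so a statistical outlier removal over nearest-neighbour distances is one plausible realization; the following sketch is under that assumption and is not the claimed filter:

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_crosstalk_noise(points_xyz, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is abnormally large."""
    tree = cKDTree(points_xyz)
    dists, _ = tree.query(points_xyz, k=k + 1)  # column 0 is the point's distance to itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points_xyz[keep]
```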
11. A system for constructing dense point cloud truth data, wherein the system comprises a camera, at least two distance measuring devices, an inertial measurement device and a processor, the camera, the at least two distance measuring devices and the inertial measurement device are arranged on the same movable platform, and the fields of view of the camera and the at least two distance measuring devices at least partially overlap;
    the processor is configured to:
    acquire image data collected by the camera, point cloud data collected by the at least two distance measuring devices, and pose data of the camera and the distance measuring devices collected by the inertial measurement device, the fields of view of the camera and the at least two distance measuring devices at least partially overlapping;
    acquire time information of the image data, the point cloud data and the pose data, and time-align the image data, the point cloud data and the pose data according to the time information;
    calibrate the camera, the distance measuring devices and the inertial measurement device to obtain spatial transformation relationships between the image, the point cloud data and the pose data, and spatially align the image data, the point cloud data and the pose data according to the spatial transformation relationships;
    extract target point cloud data within a preset time range from the point cloud data transformed into the same spatial coordinate system, and perform distortion correction on the motion distortion in the target point cloud data according to the change of the pose data, to obtain corrected target point cloud data;
    perform noise filtering on the corrected target point cloud data to obtain noise-filtered target point cloud data;
    output dense point cloud truth data, the dense point cloud truth data including the noise-filtered target point cloud data and the spatially aligned image data.
12. The system according to claim 11, wherein the camera and the at least two distance measuring devices face the same direction.
13. The system according to claim 11 or 12, further comprising a data synchronization system, the data synchronization system being configured to:
    receive the image data collected by the camera, the point cloud data collected by the at least two distance measuring devices, and the pose data collected by the inertial measurement device;
    perform time synchronization on the image data, the point cloud data and the pose data;
    send the time-synchronized image data, point cloud data and pose data to the processor.
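The synchronization in claim 13 can be approximated in software by nearest-timestamp matching (hardware triggering would be more precise). An illustrative sketch, assuming timestamps are sorted one-dimensional arrays in seconds and that max_gap is a tolerance chosen by the user:

```python
import numpy as np

def nearest_timestamp_match(ref_times, query_times, max_gap=0.005):
    """For each reference timestamp, return the index of the closest query sample,
    or -1 when no sample lies within max_gap seconds."""
    idx = np.searchsorted(query_times, ref_times)
    idx = np.clip(idx, 1, len(query_times) - 1)
    left, right = query_times[idx - 1], query_times[idx]
    best = np.where(np.abs(ref_times - left) < np.abs(ref_times - right), idx - 1, idx)
    gaps = np.abs(query_times[best] - ref_times)
    return np.where(gaps <= max_gap, best, -1)
```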
14. The system according to any one of claims 11-13, wherein the distance measuring devices comprise a first distance measuring device and at least one second distance measuring device, and the calibrating of the camera, the distance measuring devices and the inertial measurement device comprises:
    calibrating each second distance measuring device with the first distance measuring device, to obtain a spatial transformation relationship between second point cloud data collected by each second distance measuring device and first point cloud data collected by the first distance measuring device;
    calibrating the camera with the first distance measuring device, to obtain a spatial transformation relationship between the image and the first point cloud data;
    calibrating the inertial measurement device with the first distance measuring device, to obtain a spatial transformation relationship between the pose data and the first point cloud data;
    the spatial alignment comprising: transforming the image data, the point cloud data and the pose data into the spatial coordinate system of the first point cloud data.
15. The system according to any one of claims 11-14, wherein the performing distortion correction on the motion distortion in the target point cloud data according to the change of the pose data comprises:
    obtaining, according to the change of the pose data, the motion direction and motion speed of the target point cloud data within the preset time range;
    performing, according to the motion direction and motion speed, distortion correction on the motion distortion in the target point cloud data caused by the motion of the distance measuring devices.
16. The system according to claim 15, wherein the performing distortion correction on the motion distortion in the target point cloud data caused by the motion of the distance measuring devices comprises:
    correcting the target point cloud data to the position corresponding to the starting moment of the preset time range.
17. The system according to claim 15 or 16, wherein the performing distortion correction on the motion distortion in the target point cloud data caused by the motion of the distance measuring devices comprises:
    assigning, according to the time information of the target point cloud data, corresponding correction coefficients to the target point cloud data at different moments;
    performing distortion correction on the target point cloud data at different moments according to the correction coefficients corresponding to the target point cloud data at different moments, the motion direction and the motion speed.
18. The system according to any one of claims 11-17, wherein the processor is further configured to:
    identify point cloud clusters corresponding to different target objects in the target point cloud data;
    acquire displacement information of the point cloud clusters at different moments, and perform motion estimation on the target objects according to the displacement information;
    perform, according to the result of the motion estimation, distortion correction on the motion distortion of the point cloud clusters caused by the motion of the target objects.
19. The system according to claim 18, wherein the performing distortion correction on the motion distortion of the point cloud clusters caused by the motion of the target objects according to the result of the motion estimation comprises:
    assigning corresponding correction coefficients to the point cloud clusters at different moments according to the time information of the point cloud clusters;
    performing distortion correction on the point cloud clusters at different moments according to the correction coefficients corresponding to the point cloud clusters at different moments and the result of the motion estimation.
20. The system according to any one of claims 11-19, wherein the noise filtering process comprises: identifying noise points in the target point cloud data caused by crosstalk between different distance measuring devices, and filtering out the noise points.
21. An electronic device, wherein the electronic device comprises:
    a memory for storing executable instructions; and
    a processor, configured to execute the instructions stored in the memory, so that the processor performs the method for constructing dense point cloud truth data according to any one of claims 1 to 10.
PCT/CN2021/098657 2021-06-07 2021-06-07 Method and system for constructing dense point cloud truth value data and electronic device WO2022256976A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/098657 WO2022256976A1 (en) 2021-06-07 2021-06-07 Method and system for constructing dense point cloud truth value data and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/098657 WO2022256976A1 (en) 2021-06-07 2021-06-07 Method and system for constructing dense point cloud truth value data and electronic device

Publications (1)

Publication Number Publication Date
WO2022256976A1 true WO2022256976A1 (en) 2022-12-15

Family

ID=84424683

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098657 WO2022256976A1 (en) 2021-06-07 2021-06-07 Method and system for constructing dense point cloud truth value data and electronic device

Country Status (1)

Country Link
WO (1) WO2022256976A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710602A (en) * 2024-02-04 2024-03-15 航天宏图信息技术股份有限公司 Building reconstruction method, device and equipment for sparse grid three-dimensional data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108020826A (en) * 2017-10-26 2018-05-11 厦门大学 Multi-line laser radar and multichannel camera mixed calibration method
CN109099901A (en) * 2018-06-26 2018-12-28 苏州路特工智能科技有限公司 Full-automatic road roller localization method based on multisource data fusion
CN109974742A (en) * 2017-12-28 2019-07-05 沈阳新松机器人自动化股份有限公司 A kind of laser Method for Calculate Mileage and map constructing method
CN111046776A (en) * 2019-12-06 2020-04-21 杭州成汤科技有限公司 Mobile robot traveling path obstacle detection method based on depth camera
US20210049784A1 (en) * 2019-08-12 2021-02-18 Leica Geosystems Ag Localization of a surveying instrument
CN112435325A (en) * 2020-09-29 2021-03-02 北京航空航天大学 VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method

Similar Documents

Publication Publication Date Title
JP7297017B2 (en) Method and apparatus for calibrating external parameters of on-board sensors and related vehicles
Liu et al. TOF lidar development in autonomous vehicle
EP3264364B1 (en) Method and apparatus for obtaining range image with uav, and uav
EP3540464B1 (en) Ranging method based on laser radar system, device and readable storage medium
CN109211298B (en) Sensor calibration method and device
CN107092021B (en) Vehicle-mounted laser radar three-dimensional scanning method, and ground object classification method and system
CA2562620C (en) Increasing measurement rate in time of flight measurement apparatuses
WO2022127532A1 (en) Method and apparatus for calibrating external parameter of laser radar and imu, and device
WO2022126427A1 (en) Point cloud processing method, point cloud processing apparatus, mobile platform, and computer storage medium
JP7025156B2 (en) Data processing equipment, data processing method and data processing program
US20160070981A1 (en) Operating device, operating system, operating method, and program therefor
WO2020243962A1 (en) Object detection method, electronic device and mobile platform
WO2021072710A1 (en) Point cloud fusion method and system for moving object, and computer storage medium
US11941888B2 (en) Method and device for generating training data for a recognition model for recognizing objects in sensor data of a sensor, in particular, of a vehicle, method for training and method for activating
RU2767949C2 (en) Method (options) and system for calibrating several lidar sensors
CN111308415B (en) Online pose estimation method and equipment based on time delay
CN114585879A (en) Pose estimation method and device
WO2022256976A1 (en) Method and system for constructing dense point cloud truth value data and electronic device
WO2020215252A1 (en) Method for denoising point cloud of distance measurement device, distance measurement device and mobile platform
CN114296057A (en) Method, device and storage medium for calculating relative external parameter of distance measuring system
WO2021232227A1 (en) Point cloud frame construction method, target detection method, ranging apparatus, movable platform, and storage medium
WO2020237663A1 (en) Multi-channel lidar point cloud interpolation method and ranging apparatus
CN116047481A (en) Method, device, equipment and storage medium for correcting point cloud data distortion
US20230090576A1 (en) Dynamic control and configuration of autonomous navigation systems
CN105043341A (en) Over-ground height measuring method and device of drone

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21944487

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE